Last month we held the December session of the Indianapolis Workshop on Software Testing. The attendees were:
- Andrew Andrada
- Charlie Audritsh
- Mike Goempel
- Michael Kelly
- Marc Labranche
- Kenn Petty
- Vishal Pujary
- Tate Stuntz
The topic we focused on for the five-hour workshop was security testing.
The first experience report was from Tate Stuntz. Tate spoke about a company that had approached him after a "security event" and asked him to audit their system to help track down the problem. He gave an hour-long experience report; I'll do my best to summarize.
He traced the root cause to the development team's reuse of web application code from a previous project. The code had worked in production for the first application, but failed for the second. The failure was due to an order-of-magnitude increase in the number of concurrent users: the system could no longer keep sessions unique. In the first (smaller) system, there wasn't enough load to expose the problem. In the second (much larger) system, users were able to "jump" into another user's session and see their records. A big security no-no...
It turned out the session ID generator didn't generate unique numbers after users authenticated. He found some basic coding errors behind the problem:
- There were no unique constraints on the Session ID table.
- They were using the standard Java random number generator, which (as I understand it) has a known issue of using the system clock as its seed. Any calls at the exact same time generated the same "random" number for multiple users. They should have used something like SecureRandom, which is designed for security-sensitive use.
- Each thread instantiated a new instance of the random number generator class; they did not use a singleton.
- They did not have enough auditing features in the system to tell what had happened once they started receiving helpdesk calls about "events."
- They never expired a session; sessions remained active indefinitely.
- They made the fundamental mistake of relying on code they didn't know anything about, and failing to test it because it had been used previously.
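The clock-seeding collision and its fix are easy to demonstrate. Here's a minimal sketch (the class and method names are mine, not from Tate's report): two `java.util.Random` instances built from the same seed, as happens when per-thread instances are seeded from the clock in the same millisecond, hand out identical "random" IDs, while a single shared `SecureRandom` does not.

```java
import java.security.SecureRandom;
import java.util.Random;

public class SessionIdDemo {
    // Two java.util.Random instances constructed from the same seed
    // (e.g. per-thread instances created in the same millisecond)
    // produce identical "random" session IDs.
    static long weakId(long seed) {
        return new Random(seed).nextLong();
    }

    // One shared SecureRandom (a singleton, as the report suggests)
    // avoids both the clock-seed collision and per-thread instantiation.
    private static final SecureRandom RNG = new SecureRandom();

    static String secureId() {
        byte[] bytes = new byte[16];          // 128 bits of entropy
        RNG.nextBytes(bytes);
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b & 0xff));
        return sb.toString();
    }

    public static void main(String[] args) {
        long seed = System.currentTimeMillis();
        // Same seed -> same ID: two users authenticating in the same
        // millisecond would end up sharing a session.
        System.out.println(weakId(seed) == weakId(seed));
        // Shared SecureRandom: consecutive IDs do not collide.
        System.out.println(secureId().equals(secureId()));
    }
}
```

Running it prints `true` for the clock-seeded pair and `false` for the SecureRandom pair, which is exactly the failure mode Tate described.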
After listening to his presentation I thought of James Bach's SFDPO heuristic and how that might map to some of the problems identified. I figured any of the following would have identified some of the problems:
Structure: Code Review and/or database review
Function: Testing for session time out and just about any load testing
Data: Monitoring the database during session generation and testing for the unique constraint
Platform: Testability features like logging, monitoring, and alerts
Operation: Threat modeling
The next experience report came from Marc Labranche (also a long one; I did my best with the notes). Marc had several experiences and ideas to share, all under the broad theme of cracking desktop applications (not web apps, the current hot topic).
His first suggestion was to look for anything on disk that the software trusts as good or safe, and then change those files for the desired effect. For example, for authentication done at install time, you can use tools like Regmon and Filemon (available for free at http://www.sysinternals.com) to easily see all the file and registry reads/writes the program performs. He also recommended Hex Workshop (http://www.hexworkshop.com/) and PE Explorer (http://www.heaventools.com/) for tasks like this.
His next suggestion was to modify the code using jump inverting. In jump inverting you find test conditions in the code and change one byte of the jump instruction to its inverse; the unwanted behavior is now changed. This can be defeated by developers who use the critical routines elsewhere for unrelated reasons, but it works for many applications. Also, a simple checksum will often prevent this from working. More tools: SoftICE (http://www.compuware.com) and PE Explorer (http://www.heaventools.com/) again.
In a direct example from Marc:
Every conditional jump instruction has an inverse, the classic pair being JZ and JNZ (jump if zero and jump if not zero). By changing one byte in your program I can change something like
if (registration_code_ok)
{
    Run_app();
}
else
{
    BAD_USER();
}
To something like:
if (!registration_code_ok)
{
    Run_app();
}
else
{
    BAD_USER();
}
With this change, as long as my registration code is bad, the program will run. The same trick applies to checks like if (correct_cd_in_drive) and if (has_remote_license_key).
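The one-byte patch can be illustrated at the byte level. In the Intel short-jump encodings, each conditional jump and its inverse differ only in the lowest bit of the opcode (JZ is 0x74, JNZ is 0x75; JO is 0x70, JNO is 0x71; and so on), so "inverting" a jump is a one-bit XOR. A toy sketch (the byte sequence below is a made-up fragment for illustration, not real program code), which also shows why the simple checksum Marc mentioned catches the patch:

```java
public class JumpInvert {
    // Flip the lowest bit of a short conditional-jump opcode (0x70-0x7F):
    // JZ (0x74) becomes JNZ (0x75) and vice versa, inverting the test.
    static void invertJump(byte[] code, int offset) {
        int op = code[offset] & 0xFF;
        if (op < 0x70 || op > 0x7F)
            throw new IllegalArgumentException("not a short Jcc opcode");
        code[offset] ^= 0x01;
    }

    // A trivial additive checksum over the code bytes -- the kind of
    // integrity check that defeats a naive one-byte patch.
    static int checksum(byte[] code) {
        int sum = 0;
        for (byte b : code) sum = (sum + (b & 0xFF)) & 0xFFFF;
        return sum;
    }

    public static void main(String[] args) {
        // Made-up fragment: CMP EAX,0 / JZ +5 / NOP (illustrative only).
        byte[] code = { (byte) 0x83, (byte) 0xF8, 0x00, 0x74, 0x05, (byte) 0x90 };
        int before = checksum(code);
        invertJump(code, 3);  // JZ -> JNZ: the "registration ok?" branch inverts
        System.out.println(String.format("0x%02X", code[3]));
        System.out.println(checksum(code) != before);
    }
}
```

The patched opcode comes out as 0x75 (JNZ), and the checksum no longer matches, which is why crackers who use this trick also have to find and defeat any integrity check.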
He also provided a quick reference: http://www.jegerlehner.ch/intel/IntelCodeTable.pdf
He next tackled thread/code injection. A process can use the Windows API (msdn.microsoft.com) to allocate memory in another process's address space and execute it as a thread in the target process. This new thread then has access to all of the target's memory from the inside, bypassing any operating system RAM protection between processes. This can be done with a combination of the following Win32 APIs and a little ingenuity: ReadProcessMemory, WriteProcessMemory, SuspendThread, and ResumeThread.
Suspend a thread in the target application, then overwrite the next few bytes of code to allocate process-local memory and start a thread (loading a DLL is a very easy way to do this). Resume the thread so it does your bidding and then suspends itself at the original program-counter address. Copy the original code back in and resume the thread as if nothing had happened. Make sure you do not corrupt the heap or stack.
There are other ways to do this if the program somehow detects the above. You can avoid having to use suspend/resume by creating a high-priority thread in the target that runs nothing but while(true); your injection thread should then run at realtime priority. The whole system will pause except for your injection thread.
He then tied that in with the classic buffer overflow. Using the same or a similar process as local code injection, you exploit the overflow to achieve remote code injection.
He wrapped it up by talking about some of the things he thinks we'll see in the future, like exploits for PCI bus-master mode.
Every PCI card in the system is given the ability to become the bus master for an unlimited amount of time. In this mode, the PCI card has full access to system resources, including the CPU and RAM. Nothing is sacred in this mode: the card can modify any location in RAM as well as any of the CPU control registers, and any security your OS may have built in is disabled.
Following Marc, I talked about security testing with WebScarab and Ethereal, something I've been talking about a lot lately. We looked at some examples, hacked into my email, and looked at a previous exploit in a major online retailer (it's since been fixed). I also shared some comments and insights from a book I just finished reading, Stealing the Network: How to Own a Continent. It's excellent; I found it both entertaining and very informative.
Overall, I think we all had a blast. It was a lot of fun, with A LOT of good dialog. Thanks again to all who attended.