Investigation of Methods for Automated System Vulnerability Testing, 10-R8052

Principal Investigators
Mark J. Brooks
Joseph G. Loomis
Ben A. Abbott
Jeremy C. Price

Inclusive Dates:  04/01/09 – 03/31/10

Background - The prevalence of complex systems continues to drive the creation of increasingly intricate systems-of-systems. This trend adds new capabilities and efficiencies to existing systems, but also often leads to new weaknesses and vulnerabilities. This creates two primary problems for security testing that must be addressed. First, the complexity of vulnerability assessment chains that can be deployed against a system increases significantly, making it difficult for a security tester to reason about sequences of tests that provide the best coverage of threats. Second, the number of tests that must be executed increases substantially, requiring an automated approach for test creation and deployment.

Approach - The goal of this project was to create and evaluate a prototype integrated analysis environment containing descriptive models of systems, threats, and security testing applications. This testing environment provides automated analysis of the external threats to a target system, assesses the testing coverage required to effectively analyze the system's vulnerability to attack, and automates the generation and deployment of vulnerability assessment chains. Rapid change in networking technology continuously produces new vulnerabilities, threats, attack tactics, and defensive tools; an adaptable, automated testing environment is therefore required for effective security testing. SwRI's Automated System Vulnerability Testing (ASVT) tool automates the process of reasoning about threats to the target system and provides an improved method for selecting and implementing security tests appropriate for those threats.

The research approach was to create and evaluate a proof-of-concept system for reasoning about threats to an application under test and to automate the process of test generation and deployment. To accomplish this, researchers: (1) developed a framework for integrating test generators, application monitors, and analysis engines; (2) integrated several best-in-class security testing tools within that framework; (3) implemented a custom analysis engine and synthesizer for generating targeted vulnerability assessment chains; and (4) compared the results with current state-of-the-art "by-hand" approaches. The research focused on automated testing of Windows™ desktop executables.
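The report does not publish ASVT's internals, so as an illustration only, the generator/monitor/analysis integration described in step (1) might be sketched as follows. Every class, function, and field name here is hypothetical, and the in-process "target" stands in for a real application under test:

```python
import random

class RandomByteGenerator:
    """Hypothetical test generator: emits random byte-string test cases."""
    def __init__(self, seed=0, max_len=64):
        self.rng = random.Random(seed)
        self.max_len = max_len

    def next_case(self):
        n = self.rng.randint(1, self.max_len)
        return bytes(self.rng.randrange(256) for _ in range(n))

class CrashMonitor:
    """Hypothetical application monitor: records whether the target failed."""
    def observe(self, target, case):
        try:
            target(case)
            return {"case": case, "crashed": False}
        except Exception as exc:
            return {"case": case, "crashed": True, "error": repr(exc)}

class CoverageAnalysis:
    """Hypothetical analysis engine: collects crashing cases for triage."""
    def __init__(self):
        self.findings = []

    def ingest(self, result):
        if result["crashed"]:
            self.findings.append(result)

def run_chain(generator, monitor, analysis, target, iterations=1000):
    """Drive the generator -> monitor -> analysis loop that the
    framework integrates; returns the crashing cases found."""
    for _ in range(iterations):
        analysis.ingest(monitor.observe(target, generator.next_case()))
    return analysis.findings
```

The point of the sketch is the interface boundaries: because the generator, monitor, and analysis engine only meet through `run_chain`, any one of them can be swapped for an off-the-shelf tool without touching the others, which is what lets a framework integrate several independent security testing tools.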

Accomplishments - The major scientific conclusions of this research were: (1) integrating best-in-class tools with an automated reasoning process can significantly improve vulnerability assessment speed and effectiveness compared to manual assessment processes; (2) monitoring applications under test with a debugger during fuzzing can improve vulnerability discovery; and (3) automated generation and deployment of tests can reduce vulnerability discovery time.
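The second conclusion rests on the observation that a harness which watches *how* the target process dies sees faults that a pass/fail check would miss. As a minimal sketch of that monitoring idea — using POSIX exit-status conventions rather than the Win32 debug API a real harness for the project's Windows targets would attach, and with all names illustrative:

```python
import os
import signal
import subprocess
import tempfile

def fuzz_once(target_cmd, data, timeout=5):
    """Run the target once on a fuzzed input file and report how it ended.

    A production harness would attach a debugger to capture the faulting
    instruction and register state at the moment of the fault; this sketch
    only inspects the exit status, which on POSIX encodes a fatal signal
    as a negative return code.
    """
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(data)
        path = f.name
    try:
        proc = subprocess.run(target_cmd + [path],
                              capture_output=True, timeout=timeout)
        if proc.returncode < 0:  # process killed by a signal, e.g. SIGSEGV
            return {"crashed": True,
                    "signal": signal.Signals(-proc.returncode).name}
        return {"crashed": False, "exit": proc.returncode}
    except subprocess.TimeoutExpired:
        return {"crashed": False, "hung": True}
    finally:
        os.unlink(path)
```

Distinguishing a SIGSEGV from a clean nonzero exit, and recording which signal fired, is the kind of extra signal that debugger-based monitoring contributes to vulnerability discovery during fuzzing.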
 
