Investigation of Methods for Automated System Vulnerability Testing, 10-R8052


Principal Investigators
Mark J. Brooks
Joe Loomis
Ben A. Abbott

Inclusive Dates:  10/01/04 – Current

Background - The prevalence of complex systems continues to drive the creation of increasingly intricate systems-of-systems. This trend adds new capabilities and efficiencies to existing systems, but it also often introduces new weaknesses and vulnerabilities, which creates two primary problems for security testing. First, the complexity of the vulnerability assessment chains that can be deployed against a system increases significantly, making it difficult for a security tester to reason about the sequences of tests that provide the best coverage of threats. Second, the number of tests that must be executed increases substantially, requiring an automated approach to test creation and deployment.

Approach - The specific research goal is to create and evaluate a proof-of-concept system for reasoning about the threats to a system under test and automating the process of test generation and deployment. There are several steps required to accomplish this effort: (1) developing a modeling framework for reasoning about and generating models of vulnerability assessment chains, (2) integrating several best-of-class security testing tools within the modeling framework, (3) implementing a reasoning engine and synthesizer for generating the vulnerability assessment chain, and (4) comparing results with a baseline "by-hand" approach.
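As an illustration of step (1), the sketch below shows one possible shape for such a modeling framework: each testing tool is described by the facts it requires about the system under test and the facts it can establish, and a chain is well formed when every step's preconditions are satisfied by the results of earlier steps. The tool names, fact names, and class layout are hypothetical assumptions for illustration, not the project's formal language.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ToolModel:
    """Model of a security testing tool: what it needs and what it can learn."""
    name: str
    preconditions: frozenset   # facts that must already be known about the target
    postconditions: frozenset  # facts the tool can establish about the target


@dataclass
class AssessmentChain:
    """An ordered sequence of tool invocations against a system under test."""
    steps: list = field(default_factory=list)

    def is_well_formed(self, initial_facts: set) -> bool:
        """Check that every step's preconditions are met by earlier results."""
        known = set(initial_facts)
        for tool in self.steps:
            if not tool.preconditions <= known:
                return False
            known |= tool.postconditions
        return True


# Hypothetical tool models; real models would be derived from the formal language.
port_scan = ToolModel("port_scan", frozenset({"host_reachable"}),
                      frozenset({"open_ports_known"}))
service_probe = ToolModel("service_probe", frozenset({"open_ports_known"}),
                          frozenset({"service_versions_known"}))

chain = AssessmentChain([port_scan, service_probe])
print(chain.is_well_formed({"host_reachable"}))  # True
```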

The major scientific contributions of this research will be (1) a formal language for modeling and specifying threats, vulnerability assessment chains, and security testing tools; (2) a formal knowledge representation that allows reasoning about threats and test results in order to automate the generation of vulnerability assessment chains; and (3) automated generation and deployment of test scenarios for the system under test.
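To make contribution (2) more concrete, the following minimal sketch represents test results as facts and threats as rules over those facts, so a single forward-chaining pass identifies which threats the current results support. The rule and fact names are illustrative assumptions only, not the project's actual knowledge representation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ThreatRule:
    """A threat is considered applicable once all of its conditions are observed."""
    threat: str
    conditions: frozenset


def applicable_threats(observed_facts: set, rules: list) -> set:
    """Simple forward-chaining pass: which threats do current test results support?"""
    return {r.threat for r in rules if r.conditions <= observed_facts}


# Hypothetical facts and rules for illustration only.
rules = [
    ThreatRule("default_credentials", frozenset({"telnet_open", "vendor_banner_seen"})),
    ThreatRule("sql_injection", frozenset({"http_open", "dynamic_query_detected"})),
]
facts = {"telnet_open", "vendor_banner_seen", "http_open"}
print(applicable_threats(facts, rules))  # {'default_credentials'}
```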

Accomplishments - The team is conducting research to determine whether vulnerability discovery methods share common elements that can be described in a way that supports a generic reasoning approach applicable to multiple cases. The team is also examining a set of common vulnerability discovery tools, both to ensure that current methods for security analysis of systems are incorporated and to facilitate the inclusion of new tools as they become available. Once formal definitions of the vulnerability discovery process are in place, the team will choose an appropriate Artificial Intelligence (AI) search algorithm to provide a reasoning framework that automates the vulnerability discovery process.
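As a rough illustration of what such a reasoning framework could look like, the sketch below applies a breadth-first search over "knowledge states": starting from the initial facts about the system under test, it finds a shortest sequence of tool invocations whose accumulated results establish a goal fact. The tool descriptions and fact names are hypothetical, and the project may ultimately select a different AI search algorithm.

```python
from collections import deque


def plan_chain(tools, initial_facts, goal_fact):
    """Breadth-first search over knowledge states: find a shortest tool
    sequence whose accumulated results establish the goal fact."""
    start = frozenset(initial_facts)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        facts, chain = queue.popleft()
        if goal_fact in facts:
            return chain
        for name, pre, post in tools:
            if pre <= facts:
                nxt = facts | post
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, chain + [name]))
    return None  # no chain of available tools reaches the goal


# Hypothetical tool descriptions: (name, preconditions, postconditions).
tools = [
    ("port_scan", frozenset({"host_reachable"}), frozenset({"open_ports_known"})),
    ("service_probe", frozenset({"open_ports_known"}), frozenset({"service_versions_known"})),
    ("exploit_check", frozenset({"service_versions_known"}), frozenset({"vulnerability_confirmed"})),
]
print(plan_chain(tools, {"host_reachable"}, "vulnerability_confirmed"))
# ['port_scan', 'service_probe', 'exploit_check']
```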
