Hermes: A Targeted Fuzz Testing Framework

Date

2015-03-12

Authors

Shortt, Caleb James

Abstract

The use of security assurance cases (security cases) to provide evidence-based assurance of security properties in software is a young field in Software Engineering. A security case uses evidence to argue that a particular claim is true. For example, the highest-level claim may be that a given system is sufficiently secure, and it is broken down into sub-claims covering more granular, tangible items, such as evidence or other claims. Random negative testing (fuzz testing) is used as evidence to support security cases and the assurance they provide. Many current approaches apply fuzz testing to a target system for a given amount of time due to resource constraints, which may leave entire sections of code untouched [60]. The results may be used as evidence in a security case, but their quality varies with controllable variables, such as time, and uncontrollable variables, such as the random paths chosen by the fuzz testing engine. This thesis presents Hermes, a proof-of-concept fuzz testing framework that provides improved evidence for security cases by automatically targeting problem sections in software and selectively fuzz testing them in a repeatable and timely manner. In our experiments, Hermes produced results with target code coverage comparable to a full, exhaustive fuzz test run while significantly reducing the execution time associated with an exhaustive fuzz test. These results provide a targeted piece of evidence for security cases which can be audited and refined for further assurance. Hermes' design allows it to be easily attached to continuous integration frameworks, where it can be executed alongside the other frameworks in a given test suite.
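To illustrate the general idea of targeted fuzz testing described in the abstract, the following is a minimal Python sketch, not Hermes itself: only functions flagged by a (hypothetical) prior static or dynamic analysis step are fuzzed with randomly mutated inputs, and a fixed random seed keeps the run repeatable. The names flagged_targets, mutate, fuzz_target, and parse_record are illustrative assumptions, not parts of Hermes.

    # Minimal sketch of a targeted fuzzing loop (illustrative only; not Hermes itself).
    # Only code regions flagged by a prior analysis step are fuzzed, rather than the
    # whole system, and the run is repeatable via a fixed random seed.

    import random

    def mutate(seed: bytes, rng: random.Random) -> bytes:
        """Apply a few random byte-level mutations to a seed input."""
        data = bytearray(seed or b"A")
        for _ in range(rng.randint(1, 4)):
            pos = rng.randrange(len(data))
            data[pos] = rng.randrange(256)
        return bytes(data)

    def parse_record(payload: bytes) -> int:
        """Stand-in for a flagged 'problem section' of the system under test."""
        if len(payload) > 3 and payload[0] == 0xFF:
            raise ValueError("malformed header")  # simulated defect
        return len(payload)

    def fuzz_target(target, seeds, iterations=1000, seed=42):
        """Fuzz a single flagged target and collect crashing inputs."""
        rng = random.Random(seed)  # fixed seed keeps the run repeatable
        crashes = []
        for _ in range(iterations):
            candidate = mutate(rng.choice(seeds), rng)
            try:
                target(candidate)
            except Exception as exc:
                crashes.append((candidate, repr(exc)))
        return crashes

    if __name__ == "__main__":
        # Hypothetical output of the analysis step: the targets to fuzz selectively.
        flagged_targets = [parse_record]
        for target in flagged_targets:
            found = fuzz_target(target, seeds=[b"hello", b"\x00\x01\x02\x03"])
            print(f"{target.__name__}: {len(found)} crashing inputs")

In this sketch, restricting the loop to flagged_targets is what stands in for "targeting problem sections": the fuzzer spends its fixed iteration budget only on code the analysis step identified, rather than on the whole input space.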

Keywords

security, fuzz testing, genetic algorithm, static analysis, dynamic analysis, hermes, assurance
