Double Helix: High Assurance N-Variant Systems
Funding Agency: Defense Advanced Research Projects Agency (DARPA)
Award: $5,880,041
Dates: 01-MAY-2015 through 31-OCT-2018
Double Helix is a joint project of the University of Virginia (UVa), SRI International (SRI), and the University of New Mexico (UNM). Double Helix is a binary analysis and transformation system that will process binary applications to defend (ATDs) and produce variants with diverse binary structures, intended to be deployed within a multi-variant system. A unique aspect of Double Helix is that it will employ structured diversity to guarantee that variants behave differently when attacked (a minimal sketch of the multi-variant monitoring idea appears after the list below). Along with the variants it produces, Double Helix will establish relevant system security properties and prove functional equivalence of the variants in normal operation. Double Helix leverages the complementary strengths of the team, which include:
- pioneering experience in the development of the concept of N-variant systems and their implementation (UVa),
- deep expertise in proving properties of programs (SRI),
- state-of-the-art technology for the static and dynamic analysis of arbitrary binary programs and components (UVa),
- deep experience in developing novel diversity transformations and applying them to protect applications from attack (UVa and UNM),
- breakthrough technology for both dynamic and static rewriting of arbitrary binary programs (UVa),
- extensive experience in managing and participating in large, successful multi-institutional projects (UVa, SRI, and UNM).
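To make the multi-variant idea concrete, here is a minimal sketch of a monitor that runs two variants on the same input and treats any behavioral divergence as evidence of an attack. The variant binaries (variant_a, variant_b) and their stdin/stdout interface are assumptions made purely for illustration; they are not the actual Double Helix interfaces.

    # Minimal sketch of an N-variant monitor, assuming two locally built
    # variant executables ("variant_a", "variant_b") that read a request on
    # stdin and write a response on stdout.  This only illustrates the idea
    # that structured diversity forces divergence under attack.
    import subprocess

    VARIANTS = ["./variant_a", "./variant_b"]   # hypothetical variant binaries

    def serve(request: bytes) -> bytes:
        results = []
        for path in VARIANTS:
            proc = subprocess.run([path], input=request,
                                  capture_output=True, timeout=5)
            results.append((proc.returncode, proc.stdout))
        # In normal operation the variants are functionally equivalent,
        # so their observable behavior must match.
        if len(set(results)) != 1:
            raise RuntimeError("variant divergence: possible attack detected")
        return results[0][1]

Because structured diversity is designed to prevent a single attack from affecting all variants identically, a divergence check of this kind turns a would-be compromise into a detectable event rather than a silent one.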
Currently deployed diversity approaches provide some level of protection against certain classes of attacks, but these approaches suffer from a number of serious deficiencies and limitations.
First, currently deployed diversity defenses are probabilistic. Depending on the granularity of the diversity approach and the entropy provided by the diversity technique, a persistent attacker might get lucky and compromise the system. If the compromised system is within a trusted enclave, the attacker may be able to perform further attacks to compromise additional machines.
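As a back-of-the-envelope illustration of this probabilistic weakness (the entropy and probe counts below are illustrative assumptions, not measurements of any particular defense), the chance that a persistent attacker guesses a randomization secret grows quickly with the number of probes:

    # With k bits of effective entropy, a single guess succeeds with
    # probability 2**-k, and n independent probes succeed with probability
    # 1 - (1 - 2**-k)**n.
    def p_success(entropy_bits: int, attempts: int) -> float:
        p_one = 2.0 ** -entropy_bits
        return 1.0 - (1.0 - p_one) ** attempts

    # e.g., an illustrative 16 bits of effective entropy against a
    # persistent attacker making 100,000 probes:
    print(p_success(16, 100_000))   # ~0.78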
Moreover, current diversity techniques rely on keeping secrets, such as randomization keys or code and data locations. Derandomization, probing, and side-channel information-leakage attacks have repeatedly broken through diversity defenses. With enough determination, a skilled adversary can bootstrap even a small amount of leaked or inferred knowledge into a working exploit.
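The following toy calculation, using made-up addresses and offsets, shows why such secrets are fragile: leaking the address of a single known symbol is enough to recover a module's load base and, from it, the location of every other byte in that module.

    # Illustration of why one leaked pointer can undo load-address
    # randomization: offsets within a module are fixed, so one known
    # symbol address reveals every other address.  All values below are
    # invented for the example.
    LEAKED_PRINTF = 0x7f3a1c064e10          # leaked at run time (hypothetical)
    OFFSET_PRINTF = 0x064e10                # known from the attacker's own
    OFFSET_SYSTEM = 0x055410                # copy of the same library

    libc_base = LEAKED_PRINTF - OFFSET_PRINTF
    address_of_system = libc_base + OFFSET_SYSTEM
    print(hex(address_of_system))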
Another deficiency of current approaches is that attacks often result in crashes or service interruptions, effectively turning the intended attack into a denial-of-service (DoS) attack. For mission-critical systems, a DoS attack may have devastating consequences.
Most currently deployed diversity defenses provide only limited forms of data diversity. Techniques that defend against attacks that rely on the representation of data have not been fully explored or evaluated.
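For illustration only (this is not necessarily the transformation Double Helix applies), one simple form of data diversity stores the same logical data under variant-specific encodings, so an attack that depends on the concrete in-memory representation affects each variant differently:

    # Toy data-diversity sketch: each variant stores bytes XOR-encoded with
    # a variant-specific mask.  A payload written directly into the raw
    # cells decodes differently in each variant, producing divergence.
    class DiversifiedBuffer:
        def __init__(self, mask: int):
            self.mask = mask
            self.cells = []

        def store(self, value: int) -> None:
            self.cells.append(value ^ self.mask)   # encoded representation

        def load(self, index: int) -> int:
            return self.cells[index] ^ self.mask   # decode on use

    # Variant A and variant B keep the same logical contents ...
    a, b = DiversifiedBuffer(mask=0x00), DiversifiedBuffer(mask=0x5A)
    a.store(0x41); b.store(0x41)
    assert a.load(0) == b.load(0)                  # equivalent in normal operation
    # ... but a payload injected into the raw cells decodes differently.
    a.cells[0] = 0x90; b.cells[0] = 0x90
    print(a.load(0), b.load(0))                    # 144 vs 202: divergence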
Currently deployed diversity approaches are mostly fine-grained; they diversify low-level features such as code and data locations. Such fine-grained (or medium-grained) diversity is largely ineffective if the attack attempts to exploit existing functionality (e.g., a bug in a sorting routine or a protocol).
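As a hypothetical illustration, consider a logic flaw in an access-control check: because the flaw is part of the program's functionality rather than its layout, every variant reproduces it identically, and no amount of code- or data-location diversity causes the variants to diverge under the attack.

    # Hypothetical attack on existing functionality: a faulty comparison in
    # an access-control check.  Every variant reproduces the same wrong
    # answer regardless of how its code or data are laid out in memory.
    MAX_GUEST_ID = 999

    def is_admin(user_id: int) -> bool:
        # Intended: only user 0 is the administrator.
        # Bug: any id above the guest range is also accepted.
        return user_id == 0 or user_id > MAX_GUEST_ID

    print(is_admin(1000))   # True in every variant: no divergence to detect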
Many diversity systems require source code, which is often problematic because of intellectual property concerns, the use of assembly code, or legacy executables. Even when source code is available, it may be difficult to process. For example, many systems consist of several components implemented in different programming languages (including hand-written assembly). Source code written for a specific compiler may also be hard for diversity systems to handle (e.g., a program that uses Microsoft Visual Studio extensions is unlikely to compile with GNU gcc or LLVM).