
Developers: Fuzzing, Symbolic Execution with Regression Testing Offers App Resilience

A number of resilience-testing techniques can help secure your apps
Oct 8th, 2019 10:55am by Mel Llaguno

Mel Llaguno
Mel Llaguno is the Commercial Solutions Lead at ForAllSecure where he’s responsible for uncovering new markets and industry adoption of the company's award-winning technology. Previously, he worked at Synopsys running their Coverity SCAN project which provided commercial-grade SAST to some of the most important (and largest) OSS projects in the world.

The realization that software is becoming an essential component of our everyday lives was reflected yet again in this year’s Black Hat. Even more solutions are being touted to deal with the ever-growing exposure of software to malicious threats. Unfortunately, a lot of the solutions focus on dealing with the symptoms of our current predicament without addressing the fundamental truth — software is built insecurely despite our best efforts.

What is required is a change of perspective. Software is Infrastructure.

This is particularly true in safety-critical systems. Think of recent advances in the automotive industry, aeronautics, and medical devices. None of them would have been possible without the introduction of software as part of the innovation. This, however, has an unfortunate side effect: the fusion of hardware and software makes these essentially cyber-physical systems. The problem is that the processes we’ve developed to deal with the challenges of modern software development have, in general, not yet reached the level of maturity required for systems where life and death are at stake.

What’s missing from the process is the concept of Resilience. Resilience is the ability to resist catastrophic failure in the face of adverse conditions. It is an essential requirement for safety-critical cyber-physical systems, especially when these systems are expected to function for decades, not merely years.

While there are a number of technologies that help address the challenge of building resilient systems, by themselves, they only address a fraction of the problem. Let’s look at the various strengths and weaknesses of these solutions:

  • Software Composition Analysis (SCA) allows organizations to find outdated software dependencies. By using non-vulnerable versions of these components, security can be immediately improved. The challenge is that this sense of safety holds only at a point in time; there is no guarantee that having the latest components will keep your application secure against future threats.
  • Static Analysis (SA) can be applied to a program’s source code, but it works with an abstraction that does not operate against the code that actually executes. In addition, even the best tools require organizational effort to employ, as the technique suffers from a fundamental issue of False Positives (FPs): the misidentification of issues that are not in fact defects (an illustrative pattern follows this list). The application of SA is further complicated by the ever-increasing size of codebases. While the best SA tools can have FP rates under 5%, applying that rate to projects of 1M to 10M+ lines of code (LoC) yields roughly 50,000 – 500,000 reported defects. Triaging that many findings requires significant _time_ and _developer_ resources. Imagine when the SA tool being used has an even higher FP rate …
  • Dynamic Analysis tools (such as protocol fuzzers, Interactive Application Security Testing (IAST) tools and vulnerability scanners) are useful in the context of acceptance testing, but applying them requires an understanding of when in the Software Development Life Cycle (SDLC) they can be used. These tools generally work on fully developed/deployed applications, which fundamentally shifts them far to the right in the SDLC. There is a cost associated with this lag in the developer feedback cycle.
  • Software Auditing and Penetration Testing can also be used to secure software, but they come at significant cost (as they require a degree of expertise) and are limited by human scale. This option is generally only available to organizations with the resources to hire or purchase these services, which leaves the majority of companies unnecessarily exposed.
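
To make the false-positive problem concrete, here is a small, hypothetical C sketch (not drawn from any particular tool’s output) of the kind of pattern that can trip up static analysis. An analyzer that does not track the correlation between the two `flag` checks may report a possible NULL dereference on `q`, even though that path is infeasible:

```c
#include <stddef.h>

/* Hypothetical illustration: q is only dereferenced when flag is
 * non-zero, and it is always assigned on that path, so the potential
 * NULL dereference an analyzer may report here is infeasible. Proving
 * that requires correlating the two flag checks across the function. */
void process(int *p, int flag) {
    int *q = NULL;
    if (flag) {
        q = p;      /* q is set on every path where flag != 0 */
    }

    /* ... unrelated work ... */

    if (flag) {
        *q = 42;    /* only reachable when q == p, yet may look risky to a tool */
    }
}
```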

So what’s the solution?

Coverage-guided fuzzing is a technique that is gaining popularity, empowered by recent advances in cloud-scale infrastructure. Fuzzing is the process of generating pseudo-random inputs and feeding them into a program to see if it behaves in an unexpected manner. Surprisingly, this technique is very effective at discovering new defects that can have stability/security implications. Hackers have been known to use fuzzing to discover new vulnerabilities.
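
As a minimal sketch of what this looks like in practice, the C harness below follows the libFuzzer entry-point convention; `parse_record` is a hypothetical function standing in for whatever code consumes untrusted input:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical function under test: any parser of untrusted bytes. */
extern int parse_record(const uint8_t *data, size_t size);

/* The fuzzing engine calls this entry point repeatedly with mutated
 * inputs, using code-coverage feedback to decide which inputs are worth
 * mutating further. Crashes and sanitizer reports surface as defects. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;   /* non-zero return values are reserved by the engine */
}
```

Built with an instrumenting compiler (for example, `clang -fsanitize=fuzzer,address`), the resulting binary explores the parser on its own and saves any input that triggers a crash as a test case.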

Google (through the OSS-Fuzz initiative) and Microsoft (through the development of its Security Risk Detection engine) have been extremely successful applying this technology to make their applications more resilient.

The cutting edge of this technique combines fuzzing with Symbolic Execution (SE). While fuzzing can be thought of as brute-force mutational input testing, SE looks at the execution context of a program and discovers interesting paths for analysis that fuzzing by itself would have difficulty making progress against.
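
A common illustration of why the combination helps is a “magic value” branch. In the hypothetical fragment below, a purely random fuzzer has roughly a one-in-2^32 chance of guessing the 4-byte tag, so it rarely reaches the code behind the check; a symbolic executor can instead collect the branch constraint (the first four bytes must equal "FUZZ"), ask a solver for a satisfying input, and hand the fuzzer a seed that unlocks the deeper path:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical parser fragment with a path gated by a magic value. */
int parse_header(const uint8_t *data, size_t size) {
    if (size < 8) {
        return -1;
    }
    if (memcmp(data, "FUZZ", 4) == 0) {
        /* Deeper, more interesting parsing logic lives behind this
         * check: exactly the code a mutational fuzzer struggles to
         * reach without help from symbolic execution. */
        uint32_t len;
        memcpy(&len, data + 4, sizeof(len));
        return (len <= size) ? 0 : -2;
    }
    return -1;
}
```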

In addition, test cases are automatically generated as part of the analysis. These test cases are important because:

  1. They can function as regression tests for future versions of the software without additional developer effort (see the sketch after this list). Instead of waiting for defects/vulnerabilities to be reported against future versions, you can test the most current version of a dependency to ensure the integrity of the program’s behavior.
  2. A discovered defect has a direct/measurable impact on the running program and is extremely unlikely to result in a False Positive.
  3. They can be reduced to a minimum set of cases that exercise the discovered execution paths. This is much faster than running a full analysis of the program and can be easily incorporated into a DevOps pipeline. This gives developers immediate visibility into regressions/defects discovered through analysis.
  4. They can be used to provide defect reproducers so the developers can quickly identify where the code needs to be fixed. In essence, tests give the type of context an experienced auditor/pen-tester can provide.
  5. As analysis progresses, new test cases are generated.
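
The sketch below shows one way to put generated test cases to work as regression tests. It assumes a libFuzzer-style harness (the `LLVMFuzzerTestOneInput` entry point from earlier) and a directory of saved test-case files; a fuzzer-built binary can usually replay such files directly, and this standalone driver does the same for ordinary CI builds:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* The same entry point the fuzzer exercises. */
extern int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size);

/* Minimal regression driver (a sketch): replay each saved test case
 * against the current build. Any crash fails the CI job, giving
 * developers fast feedback without re-running a full fuzzing campaign.
 * Usage: ./regress <test-case files> */
int main(int argc, char **argv) {
    for (int i = 1; i < argc; i++) {
        FILE *f = fopen(argv[i], "rb");
        if (!f) { perror(argv[i]); return 1; }
        fseek(f, 0, SEEK_END);
        long n = ftell(f);
        fseek(f, 0, SEEK_SET);
        if (n < 0) { fclose(f); return 1; }
        uint8_t *buf = malloc(n > 0 ? (size_t)n : 1);
        if (buf != NULL && fread(buf, 1, (size_t)n, f) == (size_t)n) {
            LLVMFuzzerTestOneInput(buf, (size_t)n);
        }
        free(buf);
        fclose(f);
    }
    return 0;
}
```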

While no analysis can ever claim to find all possible bugs, having a collection of test cases that evolves with the program gives organizations confidence that a program that has undergone analysis will be resilient.

Feature image via Pixabay.
