
Prioritize Runtime Vulnerabilities via Dynamic Observability

Developers need more granular data about the exploitability of a vulnerability in production to ensure they’re working on the most pressing threats first.
Feb 17th, 2023 6:40am

Figuring out how to best identify potential vulnerabilities in a codebase, and quickly getting mitigations to these vulnerabilities into production, is a major challenge in enterprise software development.

Existing approaches tend to be a major headache for developers, as they aren’t accurate enough, take too much time and energy and disrupt developers’ workflows. Developers can either risk ignoring a vulnerability and keep delivery on track, or invest lots of time and energy figuring out which vulnerabilities are true positives and delay their releases.

Third-Party Libraries: A Mixed Blessing

An estimated 60% to 80% of code in enterprise applications comes from third-party code (libraries, components and software development kits), largely due to the widespread use of open source software within the enterprise.

This means that a lot of code deployed to production has not been written by that organization’s developers but by the creators and maintainers of the hundreds (or often, thousands) of third-party libraries the code relies on. This approach cuts costs, accelerates software development and benefits from the wide community support behind these libraries.

Unfortunately, this also presents a significant risk as vulnerabilities within these third-party sources, whether commercially provided or open source, can affect the entire software supply chain. For instance, the Log4Shell vulnerability found in the popular Log4j Java logging utility was deemed by the Department of Homeland Security as “one of the most serious software vulnerabilities in history.”

Traditional methods for remediating such vulnerabilities include static application security testing (SAST) and software composition analysis (SCA). Both are typically run continuously by automated tools, often from within continuous integration (CI) jobs. The problem with these tools’ output is that it contains a lot of information that is neither properly prioritized nor digestible by developers, causing a lot of “noise,” false-positive alerts and delays in release velocity.

Upon execution of such tools, either on demand or through CI scheduling, developers typically go through the following process:

  1. Review the list of findings and prioritize those most urgent. (Note that many vulnerabilities can have the same severity grade.)
  2. Fix each vulnerability present in the code to ensure that the application is safe across all attack vectors.
  3. Rinse and repeat.

We should clearly distinguish between the outputs received from SAST tools vs. SCA tools. SAST tools only scan the internal source code for security vulnerabilities, while SCA tools scan for third-party libraries, and once they determine which libraries are being used, they cross-reference these with a list of known vulnerabilities (CVEs) to identify any that could affect the code.
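To make the SCA side concrete, here is a minimal sketch of the cross-referencing step: matching an application's declared dependencies against a list of known advisories. All package names, versions and CVE identifiers below are invented for illustration; real SCA tools pull advisories from curated vulnerability databases.

```python
# Hypothetical sketch of what an SCA tool does internally: cross-reference
# the dependency list with known CVEs. All data here is illustrative.
from dataclasses import dataclass

@dataclass
class Advisory:
    cve_id: str
    package: str
    affected_below: tuple  # versions strictly below this are affected
    severity: str

KNOWN_ADVISORIES = [
    Advisory("CVE-0000-0001", "examplelib", (2, 17, 1), "critical"),
    Advisory("CVE-0000-0002", "otherlib", (1, 4, 0), "high"),
]

def scan(dependencies):
    """dependencies: {package_name: (major, minor, patch)}"""
    findings = []
    for adv in KNOWN_ADVISORIES:
        version = dependencies.get(adv.package)
        if version is not None and version < adv.affected_below:
            findings.append((adv.cve_id, adv.package, adv.severity))
    return findings

print(scan({"examplelib": (2, 14, 0), "otherlib": (1, 4, 0)}))
# → [('CVE-0000-0001', 'examplelib', 'critical')]
```

Note that `otherlib` is not flagged: it is already at the fixed version. This version-matching step is precisely where SCA stops — it knows nothing about whether the flagged code is ever executed.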

Below is an example from a dashboard of a popular SCA application and the problems it brings to the surface. Note that almost 50% of vulnerabilities are deemed high risk.

This problem is further amplified by two other issues:

  1. Transitive dependencies: Every library that a developer uses comes with a list of other libraries that it depends on, usually called transitive dependencies. This makes the total number of potentially vulnerable libraries even higher with no way to determine which of those transitive dependencies is actually needed.
  2. Docker images: Even more libraries — each with their own set of CVEs — are packed inside each Docker image with no way to know why they are there and if they are needed.
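The transitive-dependency inflation described in the first point can be sketched with a simple graph walk over an invented dependency graph: two direct dependencies pull in twice as many total libraries, each a potential CVE carrier.

```python
# Sketch of how transitive dependencies inflate the scan surface.
# The dependency graph below is invented for illustration.
deps = {
    "my-app": ["web-framework", "json-lib"],
    "web-framework": ["http-core", "logging-lib"],
    "json-lib": [],
    "http-core": ["logging-lib"],
    "logging-lib": [],
}

def transitive(root):
    """Collect every library reachable from root, direct or transitive."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        for dep in deps.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(len(deps["my-app"]), "direct vs", len(transitive("my-app")), "total")
# → 2 direct vs 4 total
```

In real enterprise applications the ratio is far worse, and nothing in the dependency graph itself tells you which of those libraries are ever loaded at runtime.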

With the above in mind, developers simply don’t have the information needed to decide whether a given vulnerability is important. They have to investigate a potential threat to even determine whether it is real, which is supposed to be the purpose of the tools.

If that’s not complicated enough, a growing standard for prioritizing vulnerability findings, VEX (Vulnerability Exploitability eXchange), shows that in many cases 90% of security-vulnerability findings are noise, or concern issues that do not affect the code running in production — hence the importance of proper prioritization and remediation.

Dynamic Observability: Giving Developers the Information They Need to Prioritize Vulnerabilities at Runtime

There is a new step in the evolution of application security that can massively reduce these problems by giving developers the information they need to prioritize vulnerabilities effectively.

Developers need much more granular information about the existence and the exploitability of the vulnerability in production to ensure they’re working on mitigating the most pressing (and real!) threats first.

Dynamic observability is the ability to understand anything that happens in a live application — on demand, in real time and regardless of where the application is deployed.

This approach means that you don’t scan the entire codebase for vulnerabilities, as with SAST, nor do you scan the software bill of materials for CVEs, like with SCA.

Instead, dynamic observability allows developers and application security experts to get answers to the questions that actually matter about a vulnerability’s impact in production, such as:

  • Is the vulnerable code actually part of the application’s execution path?
  • Which specific users or customers are exposed?
  • Which parts of the application are vulnerable?

We’ve seen noise reduction of up to 85% among users that adopted dynamic observability, because most of their CVEs were not actually exploitable (or even present) in production.

In addition to the noise reduction, how does this help to prioritize vulnerabilities?

Imagine you have three code modules, each carrying a certain vulnerability. Rather than assuming they all need to be fixed urgently, you can check which of them actually executes in production and prioritize fixing that module first.
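A minimal sketch of that re-ranking idea: combine each CVE's static severity grade with its observed invocation count in production. The CVE names, weights and counts below are invented for illustration.

```python
# Sketch: re-rank CVEs by combining static severity with observed runtime
# invocation counts. All identifiers and numbers are illustrative.
SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

findings = [
    {"cve": "CVE-A", "severity": "critical", "invocations": 0},
    {"cve": "CVE-B", "severity": "high", "invocations": 12_000},
    {"cve": "CVE-C", "severity": "critical", "invocations": 40},
]

def runtime_priority(finding):
    # A vulnerability never reached in production drops to the bottom,
    # regardless of its static severity grade.
    if finding["invocations"] == 0:
        return 0
    return SEVERITY_WEIGHT[finding["severity"]] * finding["invocations"]

ranked = sorted(findings, key=runtime_priority, reverse=True)
print([f["cve"] for f in ranked])
# → ['CVE-B', 'CVE-C', 'CVE-A']
```

Note how the "critical" CVE-A, which never executes, falls below the "high" CVE-B on a hot code path — the inversion that static severity grades alone cannot produce.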

This process can be used for any and all security vulnerabilities. But let’s take a look at a specific example to demonstrate how dynamic observability can be used to prioritize vulnerabilities in runtime and eliminate false positives.

Using Dynamic Observability to Prioritize Vulnerabilities

When adopting a dynamic observability solution as part of the overall vulnerability prioritization process, developers can follow the optimized process below, improving their productivity, remediating the real high-priority issues first and shipping safer production code.

This process consists of three steps.

1. Receive CVE alert via SCA tool or equivalent.

You receive an alert notifying you that a vulnerability has been flagged in one of the third-party libraries your application uses.

2. Determine the impact of the vulnerability on the actual deployment.

Using platforms like Lightrun, you can investigate:

  • Whether the vulnerability’s code is actually loaded in a live code path.
  • Which users, paths or customers are affected.
  • How widespread the vulnerability is (which sections of the code are affected).
  • How often the vulnerability could be exploited (by looking at the number of code-path invocations).

You can also add logs, take snapshots and set metrics at various places in the code to better understand the vulnerability’s impact.
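The "how often could it be exploited" question can be illustrated with a minimal, hand-rolled invocation counter. This is not the Lightrun API: real dynamic observability tools inject such probes into a running process without code changes or redeploys. The function and probe label below are hypothetical.

```python
# Conceptual sketch only: count how often a suspect code path is invoked.
# Real dynamic observability tools attach probes like this to a live
# process; this decorator merely illustrates the idea.
import functools
import threading

class InvocationCounter:
    def __init__(self):
        self._lock = threading.Lock()
        self.counts = {}

    def probe(self, label):
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                with self._lock:  # safe under concurrent callers
                    self.counts[label] = self.counts.get(label, 0) + 1
                return fn(*args, **kwargs)
            return inner
        return wrap

counter = InvocationCounter()

@counter.probe("bzip2_decompress")   # hypothetical suspect code path
def decompress(data):
    return data  # stand-in for the real library call

for chunk in [b"a", b"b", b"c"]:
    decompress(chunk)

print(counter.counts)
# → {'bzip2_decompress': 3}
```

A path whose counter stays at zero over a representative production window is strong evidence the associated CVE can be deprioritized.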

3. Re-prioritize based on the actual effect on the runtime application.

Armed with the relevant knowledge, you can now re-prioritize all “critical” and “high” severity vulnerabilities and mitigate them in the right order.

Prioritizing Vulnerabilities with Dynamic Observability

By using dynamic observability platforms like Lightrun, you can instantly get the information you need to prioritize security alerts from the live application without even leaving your integrated development environment (IDE).

You can directly query the code running in production and discern which specific modules of your code are vulnerable and which are not. This is the information you need to then prioritize your security alerts and retain a little sanity.

In a recent vulnerability, classified as CVE-2021-37136, we used the dynamic observability of the Lightrun platform to scan the usage of the exploited code (specifically the line Bzip2BlockDecompressor.java:230) and determine that the code is not being reached at runtime.

We placed a virtual breakpoint using the Lightrun IDE plugin and confirmed that there is no impact of this vulnerability as part of the application runtime.

This latest evolution in dynamic observability empowers developers to properly prioritize their security alerts and massively reduce the number of false positives. They can spend less time scratching their heads over confusing alerts and more time getting on with writing valuable (and secure) code.

TNS owner Insight Partners is an investor in: Pragma, Lightrun, Docker.