
Checking the Linux Kernel with Static Analysis Tools

Static analysis tools can help find security gaps in source code, such as in the Linux kernel, but such tools are notorious for generating false positive results. Here's a look at some of the tools used with Linux, including their challenges.
Jun 2nd, 2021 3:00am
Photo by Jen Theodore on Unsplash.

Earlier this year, Greg Kroah-Hartman, the Linux kernel maintainer for the stable branch, was enraged to find that University of Minnesota (UMN) security “researchers” had tried to poison the Linux kernel with deliberately corrupt patches. Later, the UMN graduate students claimed their patches were good, based on their new static analyzer. Kroah-Hartman didn’t buy it.

In response, he banned the entire university from submitting kernel patches.

[The patches] obviously were _NOT_ created by a static analysis tool that is of any intelligence, as they all are the result of totally different patterns and all of which are obviously not even fixing anything at all. So what am I supposed to think here, other than that you and your group are continuing to experiment on the kernel community developers by sending such nonsense patches?

When submitting patches created by a tool, everyone who does so submits them with wording like “found by tool XXX, we are not sure if this is correct or not, please advise.” which is NOT what you did here at all. You were not asking for help, you were claiming that these were legitimate fixes, which you KNEW to be incorrect.

The UMN eventually apologized for its actions. Since then, the school has followed up with the Linux kernel community. After due consideration, the kernel developers and the Linux Foundation’s Technical Advisory Board (TAB) have gone through all the potentially damaging patches and decided to explore the possibility of working with the UMN again if the school improves “the quality of the changes that are proposed for inclusion into the kernel.”

Specifically, the TAB demands that UMN, as many other organizations do:

[D]esignate a set of experienced internal developers to review and provide feedback on proposed kernel changes before those changes are submitted publicly. This review catches obvious mistakes and relieves the community of the need to repeatedly remind developers of elementary practices like adherence to coding standards and thorough testing of patches. It results in a higher-quality patch stream that will encounter fewer problems in the kernel community.

Until that’s done, TAB’s report stated, “patches from UMN will continue to find a chilly reception.” Why, yes, they surely will.

What’s a Static Analysis Tool?

Let’s take a step back and look at what kicked up this explosion. What is a static analysis tool anyway, and how are they used in Linux?

As the Open Web Application Security Project (OWASP) states, static analysis tools are source code analysis tools, also known as Static Application Security Testing (SAST) tools. They’re designed to analyze source code or compiled versions of code to help find security holes. They do this by checking the code against a set of rules.

For example, almost all of you have used, or at least know of, lint, the primitive C and C++ static analysis program. It looks for such “obvious” problems as uninitialized variables, indexing beyond array bounds, and the ever-popular misuse of null pointers.
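
To make this concrete, here is a small, purely hypothetical C function (not from any real codebase) containing exactly those three classes of bugs; a lint-style checker reports all of them from the source alone, without ever running the code.

/*
 * lint_demo.c -- a hypothetical snippet showing the "obvious" bugs a
 * lint-style checker reports from the source alone.
 */
#include <stddef.h>

int sum_first_three(const int *values)
{
        int total;                      /* uninitialized: used below before it
                                           is ever assigned */
        int buf[3];

        for (int i = 0; i <= 3; i++)    /* off-by-one: writes buf[3], one past
                                           the end of a 3-element array */
                buf[i] = values[i];

        if (values != NULL)             /* the null check arrives only after
                                           values was already dereferenced */
                total += buf[0];

        return total;
}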

While that’s fine for finding the real howlers in code, to root out sneaky security holes you need far more than lint. Other static analysis programs check code against coding guidelines, such as MISRA, or against programming standards. The details vary, but the name of the game is always the same: Check for errors against a known standard.

Typically, static analysis is used to hunt down defects in source code before a program is run. For example, you’d do it between coding and unit testing or dynamic code testing.

But, Linux is a different story, one going back almost 30 years. By 2020, the Linux kernel alone came to 27.8 million lines of code. That’s a lot of history, a lot of code, and many errors have snuck in over the years. As a result, Linux developers are, besides writing new code, constantly — albeit sometimes very reluctantly — looking through the kernel’s old code for mistakes.

One person who doesn’t mind sifting through ancient lines of code for errors is Shuah Khan, kernel maintainer and the Linux Foundation’s third Linux Fellow. Khan has many jobs but one of them is to lead developers into working on Linux security.

“It is easier to detect and fix problems during the development process,” Khan said in an interview with The New Stack. “Static analysis is a necessary component for developing dependable software. I prefer to incorporate static analysis in my development and patch workflow as much as possible.”

As a result, Khan said, she looks for the following qualities in static analysis tools:

  • Ease of installation and use
  • Easy to incorporate into development and patch workflow
  • Easy to upgrade and manage on development and test system
  • Easy to add to Kernel integration rings

Of the many Linux static analysis tools, Khan said, the Linux kernel-specific “checkpatch.pl does pattern matching based static analysis. It’s a good one to use for new code. It’s easy to incorporate in the development and patch acceptance workflow.”
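
As a rough sketch of how that fits into a developer’s routine, the hypothetical fragment below shows the sort of mechanical issues checkpatch.pl flags; the script ships in the kernel tree under scripts/ and is typically run over a patch or source file before it’s posted (the file and patch names here are made up).

/*
 * style_demo.c -- hypothetical kernel code with the kind of issues
 * checkpatch.pl complains about. From a kernel tree, run something like:
 *
 *     ./scripts/checkpatch.pl my-change.patch
 *     ./scripts/checkpatch.pl -f drivers/misc/style_demo.c
 */
#include <linux/kernel.h>

int demo_init(void)
{
        int ret = 0 ;                   /* stray space before the semicolon */

        printk("demo driver loaded\n"); /* checkpatch asks for a KERN_ log
                                           level, or pr_info(), instead of a
                                           bare printk() */
        return ret;
}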

Sparse, Smatch, and Coccinelle

Two other must-use Linux kernel static analysis tools are Sparse and Smatch. Sparse was written by Linus Torvalds. It’s a simple C parser that can view a program’s structure and create a symbol table showing exactly where every global symbol is defined.

Sparse, however, only looks at a program’s local code. Smatch lets you see how values change across a sequence of code. It also enables you to detect conditions that will always, or never, be true; null pointers; and locks that end up in different states depending on the code path. Needless to say, this can be very helpful for validating error paths and rarely tested code.

“Sparse and Smatch are designed with kernel in mind and have hooks to run from the kernel Makefile,” Khan said. “They are easy to incorporate into the development workflow and patch-acceptance workflow. Sparse and Smatch have external dependencies but [it is] easier to manage and maintain these tools on development and test systems.”
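
Here is a minimal sketch of the kernel-specific kind of bug Sparse catches, along with the Makefile hooks Khan mentions; the function is hypothetical and the path to the Smatch binary is left out.

/*
 * addr_space_demo.c -- hypothetical kernel code with an address-space bug.
 * The __user annotation marks a pointer into userspace memory, which kernel
 * code must not dereference directly; Sparse flags the dereference below.
 *
 * Typical invocations through the kernel's top-level Makefile:
 *     make C=1                                  # sparse on files being recompiled
 *     make C=2                                  # sparse on all source files
 *     make C=1 CHECK="<path>/smatch -p=kernel"  # run Smatch instead
 */
#include <linux/uaccess.h>

long demo_read_flag(int __user *uptr)
{
        long flag;

        flag = *uptr;   /* should use get_user() or copy_from_user() to move
                           the value across the user/kernel boundary */
        return flag;
}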

Another important tool, which is no longer Linux kernel-specific, is Coccinelle. This is a pattern-matching and text-transformation tool that can analyze complex, tree-wide patches and detect problematic programming patterns. It works by applying semantic patches via the top-level Makefile. Typically it delivers a report to the developer, but you can run it to produce proposed patches for the problems it encounters.
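
To give a flavor of what those semantic patches do, here is the before-and-after effect of one of the stock rules shipped under scripts/coccinelle/ in the kernel tree, shown as plain C rather than in Coccinelle’s own patch language; the struct and function names are made up.

/*
 * cocci_demo.c -- a hypothetical example of a pattern that one of the
 * kernel's stock semantic patches (the kmalloc-plus-memset "zalloc" rule)
 * detects and offers to rewrite. Coccinelle runs from the top-level
 * Makefile, e.g.:
 *     make coccicheck MODE=report
 */
#include <linux/slab.h>
#include <linux/string.h>

struct demo_state {
        int count;
        char name[16];
};

/* Before: allocate, then zero the memory by hand. */
static struct demo_state *demo_alloc_old(void)
{
        struct demo_state *s = kmalloc(sizeof(*s), GFP_KERNEL);

        if (s)
                memset(s, 0, sizeof(*s));
        return s;
}

/* After: the rule proposes collapsing the pair into a single kzalloc(). */
static struct demo_state *demo_alloc_new(void)
{
        return kzalloc(sizeof(struct demo_state), GFP_KERNEL);
}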

Khan thinks it’s “a powerful and complex tool. However, it suffers from a steep learning curve to use effectively and [it’s] a bit hard to incorporate into the development and patch acceptance workflow.” Therefore, she favors Sparse and Smatch.

Still, developers agree that Coccinelle is worth the time and trouble to master. For examples of how to use it, check out Julia Lawall’s “Coccinelle: 10 Years of Automated Evolution in the Linux Kernel” presentation.

Khan pointed to gcc and clang as tools outside the kernel that are easier to use as part of the kernel-development workflow. Specifically, Khan recommends gcc 10 because of the new -fanalyzer option. This is a built-in C static analysis tool. It’s not yet available for C++.

She likes it because it’s “helping find problems in the kernel code. This feature is useful because it added support for detecting [the] Common Weakness Enumeration Software Development category.” This category includes such mistakes as null-dereference and use-after-free, among others.

Khan continued, “I am excited to see gcc adding -fanalyzer support and would like to see increased static analysis coverage to detect Common Weakness Enumeration (CWEs) that pertain to software development.” This, in turn, “will make it easier for Linux and its ecosystem to support the needs of safety-critical domains that require/enforce static analysis.”
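
As a sketch of what that looks like in practice, compiling the hypothetical snippet below with the analyzer enabled produces warnings for both of the CWE classes mentioned above, without the program ever being run.

/*
 * analyzer_demo.c -- hypothetical code that trips gcc's static analyzer.
 * Build with:  gcc -fanalyzer -c analyzer_demo.c
 * The analyzer reports the possible NULL dereference and the use-after-free
 * below, tagging each with the corresponding CWE identifier.
 */
#include <stdlib.h>
#include <string.h>

char *make_label(const char *src)
{
        char *buf = malloc(32);

        strcpy(buf, src);       /* buf may be NULL if malloc() failed */
        free(buf);
        buf[0] = '\0';          /* use after free */
        return buf;
}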

The Challenge of False Positives

These developments are especially interesting to Khan because she’s also the chair of the Technical Steering Committee for the ELISA Project. ELISA is an open source initiative that aims to create a shared set of tools and processes to help companies build and certify Linux-based, safety-critical applications and systems.

But, as good as static analysis tools are, they’re not perfect. “Most static analysis tools suffer from false positives,” Khan said. That makes it “hard to sift through for the real errors. I would also like to see tools improve in this area. False positives are one of the main reasons a lot of developers find it difficult to use them effectively.”

Still, they’re worth the effort. To learn more about static analysis testing and other code security testing methods on Linux, Khan recommends the LF Live: Mentorship Series webinars on static analysis concepts and security tools. These are free webinars and valuable for anyone wanting to get their head around Linux development and security.
