Why Locking Down the Kernel Won’t Stall Linux Improvements
The Linux Foundation sponsored this post.
The Linux Kernel Hardening Project is making significant strides in reducing vulnerabilities and increasing the effort required to exploit vulnerabilities that remain. Much of what has been implemented is obviously valuable, but sometimes the benefit is more subtle. In some cases, changes with clear merit face opposition because of performance issues. In other instances, the amount of code change required can be prohibitive. Sometimes the cost of additional security development overwhelms the value expected from it.
The Linux Kernel Hardening Project is not about adding new access controls or scouring the system for backdoors. It’s about making the kernel harder to abuse and making any abuse less likely to result in actual harm. The former is important because the kernel is the ultimate protector of system resources. The latter is important because with 5,000 developers working on 25 million lines of code, there are going to be mistakes, both in how code is written and in judgment about how vulnerable a mechanism might be. Also, the raw amount of ingenuity being applied to getting the kernel to do things it oughtn’t continues to grow in lockstep with the financial possibilities of doing so.
The Linux kernel is written almost exclusively in the C programming language, and the most significant reasons the kernel needs hardening arise from characteristics of that language.
The C language was created in the 1970s alongside the Unix operating system. Like Fortran and COBOL, C was developed for a specific purpose, and like those languages it is considered cryptic and archaic by many developers. C was designed for operating system development, and it allows direct control over code flow and data management. C is often compared to a circular saw without a guard on the blade: effective when used correctly and dangerous when used carelessly.
Benefits and Risks
The Linux Kernel Hardening Project is using several approaches to improve how the C language is used. While C lacks strong data typing, the typing it does have can be used to reduce the occurrence of common errors. One example is the introduction of a dedicated data type for reference counters. Reference counters have interesting properties: they should never be found to contain a value less than one while the object is in use, and they should never be directly assigned. Adding code that detects either of these conditions makes it significantly easier to verify the correct allocation and freeing of resources. Another example is the increased use of the “const” modifier on variable and parameter declarations. This tells the compiler to flag any case where the value is changed, since the programmer believes it should not change. While “const” can be circumvented with casts, it provides an easy way to identify cases where data is not being used in the way it was intended.
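The reference-counter idea can be illustrated with a minimal userspace sketch. This is not the kernel’s actual refcount API; the type and function names here are invented for illustration, but the checks mirror the two rules above: no value below one while in use, and no direct assignment after initialization.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical guarded reference counter, illustrating the checks a
 * dedicated refcount type can enforce. Names are illustrative only. */
typedef struct { int count; } guarded_ref_t;

/* Initialization is the only permitted direct store. */
static inline void ref_init(guarded_ref_t *r) { r->count = 1; }

/* Taking a reference refuses to resurrect an already-dropped object. */
static inline bool ref_get(guarded_ref_t *r)
{
	if (r->count < 1)	/* object already freed: refuse */
		return false;
	r->count++;
	return true;
}

/* Dropping a reference detects double-frees and reports when the last
 * reference is gone, so the caller frees exactly once. */
static inline bool ref_put(guarded_ref_t *r)
{
	if (r->count < 1)	/* double-free attempt: refuse */
		return false;
	return --r->count == 0;
}
```

A counter that silently wraps or goes negative hides use-after-free and double-free bugs; a guarded type turns them into detectable events at the exact call site.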
An additional focus of the project is the function call parameter stack. On some processor architectures this contains not only system call parameters but also saved register contents and return addresses. Changing this information in ways the system doesn’t intend can disrupt expected behavior, so it is a common target for attempted exploits.
One approach to making tampering with this data more difficult is to change the way memory is allocated for it so that it is not contiguous, which makes it much harder to locate the data that needs to be changed. Another scheme being implemented clears the memory of all stack components as soon as they’re no longer required. Neither of these comes for free, unfortunately: the former introduces memory fragmentation, while the latter can have prohibitive processing and cache-access impact.
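The stack-clearing idea can be sketched in a few lines of userspace C. The kernel applies this automatically to whole stack frames; the sketch below (function name invented for illustration) shows the core technique of wiping memory through a volatile pointer so the compiler cannot optimize the stores away, which a plain memset of a dead buffer often is.

```c
#include <stddef.h>

/* Illustrative sketch of stack clearing: zero a buffer through a
 * volatile pointer. Unlike a plain memset() of memory that is about to
 * go out of scope, volatile stores may not be elided by the compiler,
 * so the stale data really is gone. */
static void secure_wipe(void *p, size_t n)
{
	volatile unsigned char *vp = p;

	while (n--)
		*vp++ = 0;
}
```

Doing this for every frame on every return is where the processing and cache cost mentioned above comes from: each wipe touches memory that would otherwise simply be abandoned.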
The final area to talk about is address hiding and randomization. The more information an attacker has about the memory layout, the easier it is to identify points that may be vulnerable. Unfortunately, this is the same information that kernel developers use to track down coding errors and other unexpected behaviors. System log messages that contain data or code addresses can be used to redirect code execution, so replacing those addresses with symbolic ones slows the exploit process. Similarly, if all code is loaded in the same order every time the system runs, breaking in is much easier than if the order of functions is determined randomly. Even the order of members within data structures can be scrambled, disrupting an attack even after the structure’s address has been determined.
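The effect of member scrambling can be shown with a small sketch. The two structures below are hypothetical, not kernel types; they hold the same members in different orders, so a sensitive field such as a function pointer lands at a different offset in each. Layout randomization does this shuffling per build, so an offset learned from one kernel image is useless against another.

```c
#include <stddef.h>

/* The same hypothetical structure in two member orders. With layout
 * randomization, the offset of the function pointer differs from build
 * to build, so an attacker who knows the structure's address still
 * cannot assume where its sensitive fields live. */
struct ops_layout_a { long flags; void (*handler)(void); };
struct ops_layout_b { void (*handler)(void); long flags; };
```

An exploit that overwrites `handler` at a hard-coded offset works against one layout and corrupts harmless data, or faults, against the other.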
None of these approaches are free of cost, and an important goal is to keep the impact small enough that the value gained exceeds the difficulty introduced.
More Good Than Bad
To answer the question: yes, hardening the kernel does make the kernel harder to develop. The project has to maintain awareness of all aspects of the system and ensure that the value added in security isn’t overwhelmed by the impact elsewhere. The typing and stack changes have already improved overall system quality by identifying code with subtle flaws. By keeping the effort open and actively addressing concerns raised during development, the Linux Kernel Hardening Project has accelerated the acceptance of hardening work and raised the overall level of security in the Linux kernel. There’s a lot more to be done. Find out more about what you can do at the Kernel Self Protection Project page.
This article is part of a series by speakers at the upcoming Open Source Summit, coming to Vancouver August 29-31. Open Source Summit connects the open source ecosystem under one roof. It covers cornerstone open source technologies; helps ecosystem leaders to navigate open source transformation with the Diversity Empowerment Summit and tracks on business and compliance; and delves into the newest technologies and latest trends touching open source, including networking, cloud-native, edge computing, AI and much more. It is an extraordinary opportunity for cross-pollination between the developers, sysadmins, DevOps professionals and IT architects driving the future of technology.
Feature image via Pixabay.