Shifting Zero Trust Left with Cloud Native Software
As companies seek to reduce the time required to deliver new features in cloud native applications, the use of off-the-shelf and third-party code, particularly open source, is altering the scope of cybersecurity for developers. Estimates suggest that as much as 80 to 90 percent of the code in cloud native applications originates from open source components.
This change in the composition of code forces a shift in the territory that today must be protected by DevOps professionals. Rather than focusing solely on the software development lifecycle, DevOps professionals must now expand their perspective on how to secure the entire software supply chain.
The software supply chain represents all contributed software components (whether as source code or pre-packaged components), as well as the delivery systems, channels and processes that eventually deploy code into a staging or production environment. The unknown development skills and motivations of third parties create a challenging security risk, which can lead to inadvertent security flaws or the deliberate injection of malware. Security and DevOps teams must now protect against components that were produced, and sometimes integrated into the application code, without supervision or proper security vetting.
Apply Zero Trust to Kubernetes and Container Environments
The natural response to the substantial scope of software supply chain risk is to trust no one and nothing, and to expand the notion of Zero Trust to include other risk vectors. While Zero Trust is an excellent place to establish a baseline of security, it must be done in a way that does not compromise the business’ agility or innovation.
Begin with a foundation of best practices:
- Ensure that the starting environment for clusters is configured for “full hygiene” in accordance with best practices recommended by platforms such as Kubernetes and Istio. Default configurations are often optimized to make the system easily accessible to development teams, but do not necessarily represent a production-ready, hardened and locked-down configuration.
- Make sure the infrastructure software has the latest patches and updates, especially given the increasing number of vulnerabilities being disclosed in container runtimes.
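As a concrete starting point, Kubernetes ships with the Pod Security Standards, which can be enforced per namespace through labels rather than relying on permissive defaults. A minimal sketch (the namespace name is illustrative):

```yaml
# Enforce the "restricted" Pod Security Standard on a namespace, so pods
# that request root, host namespaces or extra capabilities are rejected
# at admission time. The namespace name "payments" is only an example.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Starting from the most restrictive profile and relaxing it only where a workload demonstrably needs it keeps the cluster closer to a hardened baseline than the reverse approach.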
Deploy the cluster and fine-tune access controls:
- Use admission control in production to enforce policies and prevent resources that violate policy or hygiene requirements from being admitted to the cluster.
- Unless elevated privileges are explicitly approved and required, reduce the runtime privileges of your workloads and avoid running them as root; use AppArmor/seccomp profiles to control the risk surface.
- Run workloads with an immutable file system, to reduce the risk if the system is compromised.
- Apply segmentation and isolation policies based on the workload at runtime.
- Monitor configurations to avoid leaking secrets, passwords and keys.
- Ensure network policies are applied.
- Control network access to worker nodes.
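Several of the bullets above, such as non-root execution, an immutable file system, seccomp profiles and network policies, can be expressed directly in workload manifests. A minimal sketch, with illustrative names and a placeholder image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app             # illustrative name
spec:
  securityContext:
    runAsNonRoot: true           # refuse to start if the image runs as root
    seccompProfile:
      type: RuntimeDefault       # apply the runtime's default seccomp profile
  containers:
    - name: app
      image: example.com/app:1.0 # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true   # immutable file system
        capabilities:
          drop: ["ALL"]          # drop all Linux capabilities by default
---
# Default-deny network policy: selects every pod in the namespace and
# permits no ingress or egress until more specific policies are added.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
```

With the default-deny policy in place, each workload's required traffic is then allowed explicitly, which is the Zero Trust posture applied at the network layer.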
These guidelines establish a strong initial security baseline for your applications, but they are not all that can be done.
Continuous Kubernetes Hygiene — From Continuous Deployment
Full application of Zero Trust is a process that enterprises may take a long time to adopt and implement, and they may want to balance that effort against delivery velocity. As a result, Kubernetes access controls for less critical components, and sometimes for the entire cluster, are loosened. While this creates security gaps from a network and access control perspective, applying guard rails to these risks introduces an important mitigation layer. These guard rails can be plugged into the CD part of CI/CD. This extended version of Zero Trust can work in harmony with DevOps, acting as an enabler for both velocity and security.
Just as traditional image vulnerability scanning serves as a pre-flight risk analysis for workloads, we can apply similar policy- and risk-driven checks on every deployment event, continuously scanning workloads to see what is running and to assess their integrity and hygiene. Whether the trigger is a single code commit or a batch, we can catch drift before it reaches production.
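One way to plug such a check into the CD stage is a pipeline step that scans the image being deployed and fails the job on serious findings. A sketch of a GitHub Actions-style step, assuming the open source Trivy scanner is available on the runner (the image name is a placeholder):

```yaml
# Hypothetical CD pipeline step: scan on every deployment event and block
# the rollout if high or critical vulnerabilities, or embedded secrets,
# are found. Trivy's --exit-code flag makes the step fail, stopping the
# pipeline before the workload reaches the cluster.
- name: Scan before deploy
  run: |
    trivy image --exit-code 1 --severity HIGH,CRITICAL example.com/app:${GITHUB_SHA}
    trivy fs --scanners secret --exit-code 1 .
```

Running the same checks on every deployment, rather than once at build time, is what turns a one-off gate into the continuous scanning described above.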
For example, we look for embedded secrets, or secrets wired into the wrong locations, that an astute intruder, internal user or other system component could leverage to access sensitive data. Applying these guard rails on the test cluster can yield immediate results.
Continuous scanning enables DevOps to monitor the evolving security state of the application. Rather than depending on stale knowledge captured at deployment time, scanning detects new vulnerabilities that appear after deployment, allowing teams to react as the security situation changes.
Balance the Guard Rails and Delivery Velocity
Enterprises must now protect their cloud native applications from security risks introduced by the software supply chain. Combining Zero Trust and continuous scanning allows them to balance performance needs with security requirements: critical components are hardened, while less critical components are freed to perform under careful supervision. In this way, companies can implement a Zero Trust approach to security that addresses the complexities of new, accelerated development models and empowers DevOps teams to employ continuous security practices in a balanced way that doesn’t hinder agility or speed.
Feature image via Pixabay.