CloudBees sponsored this story, as part of an ongoing series on “Cloud Native DevOps.” Check back throughout the month for further editions.
Continuous integration and delivery (CI/CD) processes are a common component of modern software development. Yet because most of them involve containerized or virtualized services, often hosted in the cloud, many organizations fail to fully understand the security risks they face if these processes are compromised.
CI/CD deployments are made up of multiple parts, sometimes separate tools, that interact with each other in an automated fashion to spin up and allocate resources as needed to build, test and deploy new code. In most setups, the build servers have an implied trust relationship with code repositories and trigger the processes automatically on pull requests.
The code is built, run through tests, and the resulting image is pushed to an image repository. From there, it’s automatically picked up by deployment services and deployed to microservices running inside containers.
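That implied trust on pull requests is where the risk starts: a permissive default configuration builds any pull request, executing code supplied by unknown contributors. The sketch below contrasts that default with a policy that gates fork PRs on a trusted author. The `should_build` function and the event field names are illustrative assumptions, not the API of any specific CI tool:

```python
def should_build(event):
    """Decide whether a repository webhook event should trigger a build.

    Many CI tools default to building every pull request, including
    PRs from forks owned by unknown contributors -- the implicit trust
    relationship described above. A safer policy requires fork PRs to
    come from an already-trusted author before running their code.
    """
    if event["type"] != "pull_request":
        return False
    # Permissive default would be: return True (build everything).
    # Safer: build same-repo PRs freely, gate fork PRs on trust.
    return event.get("from_fork", False) is False or event.get("author_trusted", False)
```

With this policy, a pull request opened from an unknown fork would be held for review instead of immediately executing in the build environment.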
All this is achieved through a chain of trust relationships, dependencies, configurations and credentials, some of which can be modified or stolen by attackers if not properly protected. At the very least, compromised CI/CD servers can provide hackers with access to free computing resources that can be abused for crypto mining, building distributed denial-of-service (DDoS) botnets or proxying malicious traffic. But the security implications far exceed that, particularly for on-premise, self-hosted deployments.
“If you take over one of these build systems, even though it’s running in a container, you could take over a network because these containers share IP space,” said security researcher Tyler Welton in a talk about CI/CD hacking at the DEF CON 25 conference. “Even if they might be on their own mesh network of IPs, they still often have ports mapped to the hosts.”
In some cases, it’s even possible to exploit the trust relationship between these servers and code repositories in order to make commits back to master, compromising the code. At the very least, they can abuse the authorized SSH keys that these services use.
Another common practice observed by Welton in CI/CD deployments is storing credentials for other services in environment variables. While this is better than hard-coding credentials in configuration files, they are still exposed to theft in the event of a compromise.
“When you compromise one of these services, you haven’t compromised the entire system, but dump some environment variables and you’ll probably be able to pivot to some of the other systems,” Welton said.
Even though some modern CI/CD tools allow restricting privileges inside containers, a lot of systems are configured to run services inside containers as root. At first glance, this doesn’t seem to be a big deal, because any potential attackers would only be able to perform actions inside those particular containers, which are often short-lived.
However, root access allows attackers to scan the entire IP space in order to find other potentially exploitable services running on the host and, if the container has internet access, it allows them to download and install additional packages they need to launch further attacks.
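One inexpensive mitigation is to have build scripts refuse to run with root privileges in the first place. A minimal sketch, assuming a POSIX build environment; `running_as_root` is a hypothetical helper, not part of any CI tool:

```python
import os
import sys

def running_as_root(geteuid=getattr(os, "geteuid", None)):
    """Return True if the current process has effective UID 0 (POSIX only).

    os.geteuid does not exist on Windows runners, so the check
    degrades to False there rather than raising.
    """
    return geteuid is not None and geteuid() == 0

if __name__ == "__main__":
    if running_as_root():
        # Abort early: an unprivileged build user limits what injected
        # code can do to the host and its network.
        sys.exit("refusing to run build steps as root")
```

Dropping to an unprivileged user (for example via a `USER` directive in the container image) does not stop code injection, but it takes host-level package installation and raw network scanning off the table for the attacker.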
Welton developed and released a framework for CI/CD exploitation called CIDER (Continuous Integration and Deployment Exploiter) that supports various build chains like Travis-CI, Drone or Circle-CI. The main exploitation vector used by CIDER consists of pull requests through open GitHub repositories, but the same techniques apply if attackers gain access to private source code repositories.
There’s also an older CI/CD audit framework called Rotten Apple that was created by Mozilla ethical hacker Jonathan Claudius. This can be used to determine if the root user is being used to build projects and if attackers can deploy malicious code to steal API keys, to pivot to private networks, to authenticate using GitHub credentials, to create reverse shells, to exfiltrate data, to access other projects on the same server or to steal SSH keys. The framework also has an attack mode, which can be used for penetration testing.
Welton’s 2017 talk at DEF CON contains real-world CI hacks and a wealth of information about different configuration issues. However, the risks posed by CI/CD tools have been known in the security industry for years.
Nikhil Mittal’s presentation at Black Hat Europe two years earlier is also a great resource about insecure default configurations in CI environments. At the time, Mittal described CI tools as “an attacker’s best friend” and said that he never encountered a penetration test where unauthorized access to a CI tool didn’t result in administrative access to the whole network domain.
It’s important for organizations to understand that default CI configurations are not designed with security in mind and that running things in containers is not sufficient to stop attackers from breaking in and performing lateral movement through the rest of the infrastructure.
The security of CI/CD deployments is even more important these days in light of a recent spike in software supply chain attacks where hackers break into software development infrastructure in order to insert backdoors and malicious code into resulting applications. This allows them to compromise a large number of end users by taking advantage of trusted software distribution channels. It also makes developers a highly attractive target.
Continuous integration and delivery can have benefits for security. For one, it makes remediation and the deployment of patches much faster. Also, splitting applications into microservices helps reduce single points of failure and contain compromises, if configured properly. However, having insecure CI/CD systems in your infrastructure increases your attack surface and opens entry points for hackers.
Unfortunately, “the automated build systems, like the CI systems and the CI pipelines, are checked less for security than the code in which they’re deploying,” Welton said. “They sit in between the infrastructure components, which are being tested through network penetration tests, and the application code which is handled through application pen tests. But then you’ve got this quasi-containerized environment that’s sitting on its own IP space, in its own containers, on top of the infrastructure, but below the code, and it’s really not being tested.”
Feature image via Pixabay.