In the world of containers and microservices, Linux security based on iptables and ports just doesn’t cut it anymore, according to Thomas Graf, Chief Technology Officer of Covalent, the company behind the Cilium project.
Cilium uses the extended version of the Berkeley Packet Filter (BPF) to improve and simplify the visibility, performance and scalability of applications on Kubernetes.
“If you look at the highly dynamic world of microservices and containers, we see containers pop up and go away in seconds, so an IP address or port is becoming almost meaningless. A container that used a certain IP address 20 seconds ago may no longer exist. So if you have security logs or audit trails, there’s no meaningful information attached to an IP address,” Graf explained in an interview.
The other issue, he said, is that application developers use protocols such as REST, Redis and Memcache, cloud-native technologies that funnel everything through a single port. When services talk to each other through REST or an HTTP-based API, the security operator's only choice, from a traditional security perspective, is to open that port or close it. If it's open, all the API calls can be made; if it's closed, none of them can.
Cilium addresses this with a "new" technology called BPF. It solves the security problem around IP addresses and ports by basing identity on container labels, a concept that developers actually understand, he said.
BPF is far from new. It was created in 1992 at Lawrence Berkeley Labs as a way to better filter and sort network packets. It has since been extended to take advantage of advances in modern hardware, and it's getting a lot of attention: Google, Facebook and Netflix are using BPF for network security, load balancing, performance monitoring and troubleshooting.
Graf calls BPF “the most exciting technology shift I’ve experienced over the past 20 years.”
It’s revolutionizing a lot of things inside the Linux kernel, not only in networking and security but also profiling, visibility and more, he said.
BPF is basically the ability of an application developer to write a program, load it into the Linux kernel, and run it when certain events happen: when a network packet is received, a system call is made, or a certain kernel function is called. The small program can then enforce security policies, collect information and so on. It basically makes the Linux kernel programmable, he said.
It runs in a sandbox, so it cannot taint the kernel, he said. These small programs are just-in-time compiled, so they run as fast as if you had recompiled your kernel.
“It sounds crazy, but it’s incredibly powerful,” he said.
“We saw this coming four or five years ago, but totally underestimated the impact it would have. …It’s been proven that this is the technology that’s driving the next wave of Linux-based security networking,” he said.
Easy and Secure
Cilium's goal is to bring the power of BPF to familiar Kubernetes interfaces in an easily consumable way. It translates high-level declarative intent, such as Kubernetes services, policy, networking and load balancing, and implements it in the most efficient and secure manner, with awareness of cloud-native protocols such as gRPC, REST, Kafka and others.
It connects pods and provides load balancing in a scalable manner that is many times more efficient than kube-proxy, for example, Graf said in an episode of The New Stack Makers.
It also implements segmentation and security. Security policies in Cilium can be defined in a Kubernetes YAML file. Cilium can enforce that only certain pods talk to each other, that a pod talks only to a certain external service, or that a pod talks only on a certain port. But it also goes down to the API-call level. If two services are allowed to talk on a certain port, say port 80, a traditional security model either opens that port to all API calls or to none. Cilium lets the two services talk to each other but can enforce that they issue only certain API calls, Graf explained. Each individual connection, however, has to be expressly whitelisted.
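As a sketch of what such a policy might look like, the CiliumNetworkPolicy below uses the cilium.io/v2 schema; the labels, port and path are illustrative, not from the article. It allows pods labeled `app: frontend` to reach pods labeled `app: backend` on port 80, but only for a single whitelisted API call:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-healthz          # illustrative name
spec:
  endpointSelector:
    matchLabels:
      app: backend                 # identity comes from labels, not IPs
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend              # only pods with this label may connect
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:                      # Layer 7: whitelist one API call
        - method: "GET"
          path: "/healthz"
```

Any other request on port 80, say a `POST` or a different path, is rejected even though the port itself is open, which is the port-versus-API-call distinction Graf describes.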
It uses Envoy, a well-established proxy, to enforce Layer 7 security, feeding it requests on demand: only the connections that require Layer 7, or API-layer, security go through Envoy, so the overhead stays close to zero, he said.
A key factor, whether Cilium acts as a service mesh itself or works with one such as Istio, is that no changes are required to the application. It doesn't require running anything inside the application; Cilium runs outside, at the Linux kernel level, whereas a service mesh uses a sidecar proxy that runs inside the application pods.
At KubeCon, the company demonstrated how, with Istio, it's possible to achieve a three-fold performance improvement compared to using iptables to route traffic to the service mesh sidecar.
All Open Source
CEO Dan Wendlandt, known in the networking community for his work at VMware on software-defined networking, and CTO Graf, a core Linux kernel networking developer, created Covalent around the Cilium project about two and a half years ago. The company's software is all open source.
In The New Stack Makers, Graf explained that rather than starting a company, building a technology, then open-sourcing it, Covalent started with the open source project to build the company. The reason: It wanted user feedback from Day One. It released version 1.0 in April.
Cilium is not a full-stack solution, but it crosses multiple layers, Graf said. It comes as a Container Networking Interface (CNI) plug-in, so at that level it competes with Calico, Weave, Flannel and others. However, not all CNI plug-ins provide Layer 7 or API-call network security, Graf said. Then it goes one layer above, to the service mesh layer.
“I wouldn’t say we compete with Istio, we complement each other,” he said. “Cilium is the ideal data path, data layer, beneath Istio. We provide the best performance possible. If you want to run Istio, we can reduce the overhead and make it minimal. A service mesh runs security policy in a sidecar inside of the application pod. That means if that pod gets compromised, the sidecar is compromised as well. We can provide a safety net outside of that.”