For a few weeks in late March, COVID-19 brought business around the globe to a standstill, as everyone tried to figure out the best way to combat a pandemic that, to date, has led to nearly 2 million deaths and immeasurable suffering worldwide. No one was sure what would happen, so conferences were canceled, contracts were put on hold, and projects were delayed. Everything stopped.
By early April, however, work in the IT community came roaring back to life. While other industries, such as restaurants, were badly crippled by social distancing efforts, it rapidly became clear that we still needed IT to carry on economic and social activity. You could even argue that in those months IT saved the global economy, as Slack, Google, Zoom and others helped companies adopt remote working practices within days.
In our sector, meanwhile, development of cloud native technologies continued apace, as this community is no stranger to remote, distributed work. From The New Stack's (virtual) news desk, here are a few of the most interesting technology trends to emerge in this space over the last 12 months, and how we see them shaping cloud native computing in the years to come.
System Design: Return of the Monolith
For the past several years, TNS has been trumpeting the emerging microservices style of cloud native application building, which breaks a large application into smaller, interconnected components, allowing separate teams to work on different parts of the application without stepping on one another's toes. Microservices, however, come with their own set of challenges, one of the most notorious being the difficulty of debugging across components. Kubernetes evangelist Kelsey Hightower raised the idea, somewhat tongue-in-cheek, that “Monoliths are the future because the problem people are trying to solve with microservices doesn’t really line up with reality.” This came just about the time that the team behind one of the core cloud native applications, the Istio service mesh, revealed that it was migrating to a more monolithic architecture, consolidating several services into a single daemon.
As in all matters, the correct approach for any given project probably lies somewhere between these two extremes, and this year the ideals of microservices were weighed against other factors in enterprise software design.
Cloud Services: A Unified Control Plane
Kubernetes has ushered in a revolution in how easily distributed applications can be scaled and managed, though its interface is geared primarily toward system operators. For developers, it presents a formidable learning curve, requiring considerable translation between its operational concepts (“ingress,” “pods,” “services”) and the actual requirements of the application as the developer understands them. So, not surprisingly, we’ve seen a lot of interest this year in the idea of a universal control plane, which would set the stage for enterprises to build their own Kubernetes-based, self-service Platforms-as-a-Service for their developers. Kubernetes would sit underneath, where developers need not worry about it. The crucial early work was a standardized template called the Open Application Model (OAM), which is quickly becoming a de facto standard in the Kubernetes community. Crossplane, an open source OAM-based control plane built by Upbound, garnered a lot of early buzz in this field: IBM is now testing Crossplane to help users unify operations on IBM Cloud. Another OAM-based project gaining traction is KubeVela, an extensible “platform engine.” As the developers of the project explained:
“For developers, KubeVela is an easy-to-use tool that enables you to describe and ship applications to Kubernetes with minimal effort, yet for platform builders, KubeVela serves as a framework that empowers them to create developer-facing yet fully extensible platforms at ease.”
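To make the developer-facing abstraction concrete, here is a rough sketch of what an OAM-style application definition looks like in KubeVela. Treat it as illustrative only: the component type (here, “webservice”) and its properties are defined by the platform team's configuration, and the image name is hypothetical.

```yaml
# Illustrative OAM Application sketch for KubeVela. The "webservice"
# component type and its properties depend on how the platform team
# has configured the installation.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app
spec:
  components:
    - name: frontend
      type: webservice        # a developer-facing abstraction, not a raw Pod spec
      properties:
        image: nginx:1.21     # hypothetical container image
        port: 80
```

The point of the design is what's absent: the developer declares a component and its properties, while the platform translates that into the underlying Kubernetes Deployments, Services and other resources.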
See also: Kubernetes Moves to the Edge.
Operations: A Programmable Linux Kernel
The Linux kernel, the de facto operating system of cloud native computing, is starting to see a radical shift in how it can be used, thanks to the introduction of the Extended Berkeley Packet Filter (eBPF). Although originally targeted at superior in-kernel monitoring, this extension of the original BPF can run sandboxed programs in kernel space without changing kernel source code or loading kernel modules. In effect, eBPF acts like a microkernel, providing a potentially faster and safer way to extend the Linux kernel, giving developers a way to add their own programs to the kernel itself. The most immediate benefits are in application and system monitoring (and debugging), as well as in speeding up decision-making for network routing, allowing the kernel to do work inline that would heretofore have been handed off to a module. Already, several Kubernetes-focused companies, such as Isovalent and Tigera, are using the technology to provide a faster alternative to kube-proxy for traffic routing.
Security: Rethinking Vulnerability Management
It has become increasingly apparent over the past year that the current system for handling new security vulnerabilities may not be suited to the pace of cloud native computing. Tal Klein, of cloud security company Rezilion, argued on this site that the industry-wide system for prioritizing newly unearthed vulnerabilities, the Common Vulnerability Scoring System (CVSS), is out of whack with how attackers actually use vulnerabilities to breach systems. There is too much focus on the severity scores themselves, some have argued, and not enough on the environmental context surrounding them. Rezilion found that 67% to 75% of vulnerabilities with “high severity” CVSS scores were never loaded into memory and thus could not possibly be exploited. Meanwhile, attackers exploit lower-rated vulnerabilities instead, given that fewer companies actually patch them. The CVSS maintainers are working to update the criteria and rankings to keep pace with cloud native change, and it will be interesting to see what emerges over the next year or so.
Development: Rust Creeps up on C++
For decades, our operating systems and other vital infrastructure software have been written in C or C++, which are fast, low-level languages. These days, however, more and more system architects are reaching the conclusion that it is fundamentally difficult, if not outright impossible, to fully secure programs written in these languages, thanks in part to the unguarded way they handle memory. It takes considerable talent to handle memory allocations securely, and even then a single overlooked mistake can set the stage for an exploit. So lately, more adherents have turned to a newer language, Rust, which offers both the speed of C/C++ and the memory safety necessary for writing secure applications. During the AllThingsOpen virtual conference earlier this year, Microsoft cloud developer advocate Ryan Levick explained why Microsoft is gradually moving its infrastructure software from C/C++ to Rust, and why it is encouraging other software industry giants to consider doing the same.
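A minimal sketch (our example, not Microsoft's code) shows the kind of guarantee Rust's compiler enforces: ownership rules turn what would be a runtime memory bug in C/C++ into a compile-time error.

```rust
// Rust's ownership model catches whole classes of memory errors at
// compile time that C/C++ would only surface at runtime, if at all.
fn main() {
    let data = vec![1, 2, 3];

    // Ownership of the vector moves into `moved`; `data` is no longer
    // usable, so a use-after-move (the moral equivalent of a
    // use-after-free) is rejected by the compiler rather than
    // becoming a latent vulnerability:
    let moved = data;
    // println!("{:?}", data); // compile error: borrow of moved value `data`

    let total: i32 = moved.iter().sum();
    println!("{}", total); // prints 6
}
```

Uncommenting the marked line makes the program fail to compile, which is precisely the point: the mistake never ships.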
Amazon Web Services and the Linux Foundation are sponsors of The New Stack.