How Cloud Platforms Will Evolve Programming Languages and Runtime Models

Compared to 30 years ago, the number of programming languages we developers have at our disposal is staggering, and the continued evolution of these languages is happening at a dizzying pace. While language paradigms have changed drastically, not much has changed in terms of execution and compilation.
Nearly all languages are compiled to either processor-native machine code or an intermediate bytecode (e.g. Java bytecode, .NET IL) with the intent of running on a single node. Even languages fine-tuned for distributed development use a combination of syntactic sugar and runtime support services to simplify distributed execution.
Ultimately, even these “distributed first” languages and runtimes follow traditional compilation or interpretation paths, and the result runs as a single process on a very traditional OS. That process might be enmeshed in distributed scaffolding (e.g. Erlang’s), but it is still compiled down to native code at some point.
Based on the history of execution paradigms on traditional OSes, this makes sense. The currency of prior eras of computing execution has been instruction sets, memory, CPU cycles, and resource management – so that’s how we think of the compilation and execution of our languages.
But much has changed in 30 years. In the era of cloud computing, the currency of execution is different. It’s higher level and in some ways, much more complex. We no longer think about multi-threading, but instead, think about multi-node. Our software architectures are built to live and run on networks of hundreds of OSes and with previously untenable amounts of parallelism or concurrency.
Couple all of this with the fact that the infrastructure an application runs on can be reshaped by the application itself, and the possibilities are staggering. An application can configure and reroute networking on the fly, and a simple request can spin up an execution node such as a VM or container, which can then host a copy of the application that created it, or, even more powerfully, host an application that application generated through metaprogramming. All of this has driven execution away from the traditional OS and up the stack, relegating the traditional OS to a commodity in modern cloud-native applications.
Cloud platforms, such as Platform as a Service (PaaS) or cluster managers, acknowledge this by abstracting away as much detail as possible and, either through policy or inference, allowing an application to execute independently of the underlying infrastructure. This hides much of the complexity of distributed applications and reduces the complexity a developer experiences in writing and deploying them.
But does it stop here? Are we going to keep using known programming languages, compiled by traditional compilers to intermediate bytecode or native code, and then hosted by a cloud platform that helps the application execute properly? Probably not.
As memory became cheaper, applications started to use it more liberally. Developers writing applications in C called malloc() and free() with near reckless abandon, and the resulting memory leaks became a leading source of software bugs. Easy access to memory drove consumption up until the flaws scaled into a real problem, and that forced the invention of managed memory: runtimes such as the JVM and the .NET CLR abstracted memory management away, providing an execution cushion for the application to run on instead of running directly on the OS.
Cloud computing and cloud platforms are no different. Getting a handle to some ephemeral OS instance has become cheap; just look how easy it is to spin up a host on AWS or Azure via REST APIs. Software-defined networking (SDN) is driving down the cost of networking manipulation. As this trend continues, we’ll find that managing all of this by hand carries real complexity. And cloud platform providers will likely feel the pain first, since that is where “cloud systems programming” occurs.
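To make concrete how cheap “getting a handle to an OS instance” has become, here is a minimal sketch of infrastructure-as-an-API-call. The CloudClient below is a hypothetical in-memory stand-in for a provider’s REST API (in the spirit of EC2’s RunInstances or Azure’s VM creation endpoint); its names and parameters are illustrative, not a real SDK.

```python
import uuid


class CloudClient:
    """Hypothetical stand-in for a provider SDK; tracks instances in memory."""

    def __init__(self):
        self.instances = {}

    def run_instance(self, image, size):
        # A real client would POST to the provider's REST endpoint here;
        # this sketch just records the instance locally.
        instance_id = f"i-{uuid.uuid4().hex[:8]}"
        self.instances[instance_id] = {"image": image, "size": size, "state": "running"}
        return instance_id

    def terminate(self, instance_id):
        self.instances[instance_id]["state"] = "terminated"


client = CloudClient()
host = client.run_instance(image="ubuntu-22.04", size="small")
print(client.instances[host]["state"])  # running
client.terminate(host)
```

One call, one running host: the whole lifecycle of an ephemeral OS instance fits in a few lines, which is exactly why consumption is exploding.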
Similar to the creation of managed memory runtimes, we will likely create Managed Infrastructure Runtimes (MIR). Languages will be compiled to some sort of intermediate instruction set, and the MIR will, as part of execution, ingest those instructions to spin up containers or unikernel instances, change networking routes, and manage resources at this new level of abstraction.
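To give the idea some shape, here is a toy sketch of what an MIR execution loop could look like: it ingests intermediate “infrastructure instructions” and mutates a model of the cluster. The instruction names (SPAWN, ROUTE, FREE) are entirely hypothetical, an illustration of the concept rather than a proposed format.

```python
class ToyMIR:
    """Hypothetical Managed Infrastructure Runtime over an in-memory cluster model."""

    def __init__(self):
        self.containers = set()
        self.routes = {}

    def execute(self, program):
        for op, *args in program:
            if op == "SPAWN":      # bring up a container or unikernel instance
                self.containers.add(args[0])
            elif op == "ROUTE":    # rewire traffic from an endpoint to an instance
                self.routes[args[0]] = args[1]
            elif op == "FREE":     # reclaim an instance, a GC for infrastructure
                self.containers.discard(args[0])
            else:
                raise ValueError(f"unknown MIR instruction: {op}")


mir = ToyMIR()
mir.execute([
    ("SPAWN", "web-1"),
    ("SPAWN", "web-2"),
    ("ROUTE", "api.example.com", "web-2"),
    ("FREE", "web-1"),
])
print(sorted(mir.containers), mir.routes)  # ['web-2'] {'api.example.com': 'web-2'}
```

The point of the sketch is the shape of the loop: just as a managed memory runtime interprets allocation pressure, an MIR would interpret infrastructure pressure.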
Application logic might be migrated between containers to optimize for data locality or performance, without a developer or even the cloud platform having to be involved. Cloud-first languages will likely evolve syntactic “cloud hints” on how to shape infrastructure, or on when to enforce local-only execution versus allowing implicit cross-node migration. This would be analogous to dropping into unmanaged code via the “unsafe” keyword in C#, or to marking a method “final” in Java, which may enable method inlining.
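One way such “cloud hints” might surface in a Python-flavored cloud-first language is as decorators that attach placement metadata for the runtime to read when deciding whether code may migrate across nodes. The decorator name and its semantics here are hypothetical.

```python
def cloud_hint(*, migratable=True, locality=None):
    """Attach hypothetical placement hints to a function for an MIR-style runtime."""
    def wrap(fn):
        fn.__cloud_hint__ = {"migratable": migratable, "locality": locality}
        return fn
    return wrap


@cloud_hint(migratable=False)       # analogous to "unsafe": pin to this node
def read_local_sensor():
    return 42


@cloud_hint(locality="user-data")   # hint: prefer running near the user-data shards
def aggregate_profiles(profiles):
    return len(profiles)


print(read_local_sensor.__cloud_hint__["migratable"])  # False
```

As with “unsafe” or “final”, the hint doesn’t change what the code computes, only what the runtime is allowed to do with it.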
Will any of this happen? It certainly seems that complexity will be the necessity that drives the invention and adoption of new cloud runtime models. Although not quite the same, we’re already seeing hints of this in technologies like AWS Lambda, so the idea shouldn’t seem too far-fetched, and something like it may appear sooner than we expect.