We’re about to transition from the cloud computing era to the “sky computing” era, according to a pair of distinguished computer science professors at the University of California, Berkeley: Ion Stoica and Scott Shenker.
As the name suggests, sky computing is a layer above cloud platforms — and its goal is to enable interoperability between clouds. If you think that sounds like the current industry buzzword, multicloud, you’re on the right track. To find out more about sky computing, I interviewed Professor Stoica (who’s also, by the way, a co-founder of Anyscale and Databricks).
I was excited to speak with Stoica, because he has a track record of correctly predicting the future of cloud computing. Back in February 2009, he and a group of Berkeley academics published an influential paper about the then-nascent cloud industry. At the time, Amazon Web Services was just a few years old, Google’s only cloud product was App Engine (still in preview), and Microsoft’s Azure was yet to be formally released. The 2009 paper concluded that the “long dreamed vision of computing as a utility is finally emerging.”
Although cloud computing did indeed fundamentally change the IT industry and how applications were built and deployed over the 2010s, there is one glaring problem in the eyes of Stoica and Shenker — cloud computing did not become a public utility, like the internet or the web. In 2021, there isn’t one single underlying cloud platform with a set of open standards that anyone can use. Instead, cloud computing has evolved into a series of proprietary platforms that are largely incompatible with each other: Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and others. The new paper by Stoica and Shenker lays out a vision for “a more commoditized version of cloud computing, which we call the Sky computing.”
Implementing the Multicloud Platform
In essence, then, sky computing is about enabling multicloud application development. “To fulfil the vision of utility computing, applications should be able to run on any cloud provider (i.e., write-once, run-anywhere),” the new paper asserts.
But why would developers want to build an application for a multicloud environment? Conventional wisdom says that it’s easier to pick one cloud provider and use the suite of services that company provides (AWS has literally hundreds of them!). However, the paper rejects that notion, instead suggesting that apps with “computation-intense workloads” are better suited to a multicloud environment. I asked Stoica why that is.
“Compute is just easier,” he replied. “You don’t need to deal with egress fees, [where] it costs no money to put your data into the cloud, but it costs a lot to get data out of the cloud. In particular in machine learning, if you are doing training or hyperparameter tuning. These are extremely compute-intensive jobs, and so moving these jobs wherever you can do them faster and cheaper makes quite a bit of sense. Of course, you will also need to move the training data if you move the compute — but, in general, the cost of moving this data pales in comparison to the cost of training or tuning the models.”
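Stoica’s trade-off is easy to make concrete with back-of-envelope arithmetic: moving a training job pays off when the compute savings on the cheaper cloud exceed the one-time egress fee for the data. The sketch below uses made-up placeholder prices purely for illustration; real egress and GPU rates vary by provider and region.

```python
# Illustrative comparison: egress cost of moving training data vs. the
# compute savings from running the job on a cheaper cloud.
# All prices below are hypothetical placeholders, not real quotes.

def egress_cost(data_gb: float, price_per_gb: float) -> float:
    """One-time cost to move the training data out of the source cloud."""
    return data_gb * price_per_gb

def training_cost(gpu_hours: float, price_per_gpu_hour: float) -> float:
    """Cost of the compute-intensive training job itself."""
    return gpu_hours * price_per_gpu_hour

data_gb = 500                        # size of the training data set
move = egress_cost(data_gb, 0.09)    # assume ~$0.09/GB egress
stay = training_cost(1000, 3.00)     # 1,000 GPU-hours at $3.00/hr
go = training_cost(1000, 2.00)       # the same job at $2.00/hr elsewhere

# Moving the job is worth it when the compute savings exceed the egress fee.
print(f"egress: ${move:.2f}, compute savings: ${stay - go:.2f}")
print("move the job" if (stay - go) > move else "stay put")
```

With these illustrative numbers, a $45 egress bill is dwarfed by $1,000 in compute savings, which is exactly the asymmetry Stoica is pointing at: for compute-heavy jobs, the data transfer cost “pales in comparison.”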
The paper proposes that sky computing be made up of three layers: “a compatibility layer to mask low-level technical differences, an intercloud layer to route jobs to the right cloud, and a peering layer that allows clouds to have agreements with each other about how to exchange services.” These three layers mirror how the internet itself was designed — for example, the Internet Protocol (IP) provides inter-network compatibility.
The compatibility layer will enable an application developer to easily pick up and move their app from (for example) AWS to Google Cloud. Where multicloud comes in is with the intercloud layer, as it will allow applications to run across multiple cloud providers — depending on user needs. Here’s how Stoica explained it:
“The intercloud layer is going one level up [from the compatibility layer]. Ideally, with the intercloud layer you specify the preferences for your job — say I want to minimize costs, or minimize time, or I need to process this data locally — and the intercloud layer will decide where to run your job to satisfy these preferences.”
Regarding the data locality example, Stoica explained that there may be reasons — geopolitical or otherwise — why an application must use a specific geographic location. Consider an application that needs to process data that must not leave a country’s boundaries, and suppose the only cloud data center in that country belongs to AWS. In this case, the intercloud layer would automatically route that application to AWS’s data center. But all other applications might use different cloud platforms, depending on the intercloud rules the application developer defines. (The user wouldn’t know which cloud platform they’re on, by the way; this is all at the application deployment level.)
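The placement decision Stoica describes — apply hard constraints such as data residency first, then optimize for the job’s stated preference — can be sketched in a few lines. This is a toy illustration of the idea, not an actual intercloud implementation; the provider names, regions, prices, and runtimes are all placeholders.

```python
# Toy sketch of an intercloud layer's placement decision:
# 1) filter clouds by hard constraints (e.g. data residency),
# 2) pick the best remaining option for the job's objective.
# All catalog entries below are hypothetical.

from dataclasses import dataclass

@dataclass
class CloudOption:
    provider: str
    region: str
    country: str
    cost_per_hour: float
    est_hours: float

def place_job(options, objective="cost", required_country=None):
    """Return the cloud option that best satisfies the job's preferences."""
    # Hard constraint: the data must not leave the required country.
    if required_country is not None:
        options = [o for o in options if o.country == required_country]
    if not options:
        raise ValueError("no cloud satisfies the residency constraint")
    if objective == "cost":       # minimize total job cost
        return min(options, key=lambda o: o.cost_per_hour * o.est_hours)
    if objective == "time":       # minimize estimated runtime
        return min(options, key=lambda o: o.est_hours)
    raise ValueError(f"unknown objective: {objective}")

catalog = [
    CloudOption("aws", "eu-central-1", "DE", 3.0, 10),
    CloudOption("gcp", "us-central1", "US", 2.0, 12),
    CloudOption("azure", "westeurope", "NL", 2.5, 11),
]

# Cheapest placement overall vs. a placement pinned to Germany.
print(place_job(catalog, objective="cost").provider)       # gcp
print(place_job(catalog, required_country="DE").provider)  # aws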
Who will provide this intercloud layer? Stoica thinks it could be provided by the existing cloud platforms, or perhaps a new type of “virtual cloud” company will emerge to specialize in this routing functionality. He suggested the term “infrastructure-less clouds,” because the intercloud layer doesn’t require infrastructure (servers, databases, and the like).
How Will the Cloud Incumbents Respond?
With multicloud being a priority for sky computing, a key challenge will be the buy-in of today’s market-leading cloud platforms — AWS, Microsoft and Google in particular. I asked Stoica which of the main platforms he thinks will make the first move towards sky computing, and what their motivation would be.
“Based on economics theory, presumably clouds that are second or third [in the market] — like Google — will be most likely to do it, because this is one way for them to get more market share. If they provide a faster or cheaper infrastructure, the sky would make it easier for them to get more workload from other clouds.”
However, he also noted that application developers don’t necessarily need the permission of the big cloud platforms to attain “sky computing” functionality.
“You can do it today. I can have an application — like say a machine learning pipeline — and do some data processing, some training, and some serving to serve the models. I can do the training on Google and the serving on Amazon.”
The problem with doing multicloud today is that, in Stoica’s words, it’s “clunky” and “not automatic, it’s manual.” Plus, of course, the egress fees!
Another challenge for the big companies is that they will view this, rightly, as commoditizing their core cloud platforms. But Stoica pointed out that other parts of those organizations will benefit — for instance, for Microsoft’s Office team, “this will allow them to run Office on Amazon’s cloud or Google Cloud.”
So, sky computing potentially expands the software-as-a-service businesses of the big cloud providers. Whether Microsoft wants to do this is another matter, but if all of their cloud competitors move to a sky computing model, then they will have no choice but to follow the market. It will be intriguing to see how this plays out over the coming years.
What Will Be the Next Kubernetes?
When Stoica and his Berkeley colleagues published their 2009 paper, it was several years before Docker and Kubernetes came onto the scene as a way to manage cloud computing at scale. So I asked Stoica whether he predicts similar innovations in DevOps tooling over the next several years that will boost the adoption of sky computing.
“Going forward, I think there will be a lot of innovation, because abstracting away the clouds — given the myriad of services they provide — is not going to be easy. And even if they provide the same service — like Kubernetes — it’s not the same when hosted by Google, [compared to] hosted by Amazon, or by Microsoft. They are not identical. So basically the ability to publish and make public the service APIs, as well as the differences, I think we are going to see a lot of innovation there.”
He also thinks there will be innovation in the data layer (“because you have to move the data transparently and efficiently across clouds”) and in security (“because you need authorization and authentication, and each cloud is slightly different”).
So, similar perhaps to how the cloud computing revolution of the 2010s opened up a huge market for services on top of those cloud platforms — what we now know as the “cloud native” industry — there will be plenty of opportunities for startups to provide solutions to facilitate or build on the sky computing layers.
Similarly, there will need to be solutions on the frontend. How will developers specify the application preferences mentioned above (with the intercloud layer)? Relatedly, Stoica said that another challenge will be “how you specify what are the big components of the application that can be distributed — and where.” For instance, maybe you want to do the machine learning aspects on Google, but another key task on AWS or Azure. So again, this seems like a greenfield for startups to explore over the coming decade, as the sky computing era takes hold.
Ion Stoica and his Berkeley colleagues were prescient about the future of cloud computing in 2009, and I think Stoica and Shenker have made a compelling case for utility cloud computing in the new paper. But for this vision to happen, at least one of the big cloud providers needs to make the first move towards building the compatibility and intercloud layers. Like Stoica, I suspect this will be Google (which, after all, was the company that developed Kubernetes). But Microsoft has also proven it is willing to support open source and pivot to emerging cloud trends. The odds of the market leader, AWS, making the first move are slimmer — but then again, Amazon was the company that practically invented cloud computing.
Regardless of which big player makes the first move, I’m excited by the hundreds or even thousands of new startups that will get their chance to shine as the sky computing platform gets built out over the coming decade.
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: MADE, Docker, Bit.
Amazon Web Services (AWS) is a sponsor of The New Stack.
Feature image via Pixabay.