
Top 3 Challenges to Cloud 3.0

May 20th, 2020 12:18pm by Tenry Fu

Cloud 1.0 focused on Infrastructure-as-a-Service (IaaS) built on virtualization. Cloud 2.0 saw the introduction of cloud native services such as Big Data, AI/ML, scalable middleware services, and development based on concepts like containerization.

Where will cloud go next? I believe Cloud 3.0 will focus on compute anywhere, at massive scale — imagine a “decentralized world computer” that is fully distributed from cloud to edge, lives in multiple locations, and will never die.

Why Decentralization Matters

Tenry Fu
Tenry Fu is the co-founder and CEO of Spectro Cloud. He has more than 20 years of experience solving problems in enterprise IT. He most recently led the architecture for the Cisco CloudCenter Suite and Cisco Container Platform after his previous company, CliQr, was acquired by Cisco. CliQr's technology enabled applications to run more efficiently across public and private clouds. He holds more than 15 patents in the fields of scalable distributed systems, enterprise system management and security. He enjoys reading history books and building stereo systems in his spare time.

Today, public clouds like AWS, Azure, and GCP are mostly centralized. Although each public cloud has multiple regions, every region is still basically a data center. An application hosted in such a region serves clients from remote locations in typical client-server request/response fashion.

With more and more data generated at the edge, compute naturally moves closer to where the data is located due to "data gravity." Technologies like 5G can minimize network latency and bandwidth limitations, but they do not remove the need for processing at the edge. For example, if a retail store wants to capture a customer's photo and run image recognition to push a promotion to the customer's phone, it is better to process the image locally at the store than to send it to a centralized cloud region over the WAN. There are also use cases that need more local user interaction, e.g., AR/VR.
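The retail example can be sanity-checked with rough arithmetic; every number below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: run image recognition at the edge vs. ship the photo to
# a cloud region first. All figures are illustrative assumptions.

IMAGE_MB = 4.0        # assumed size of one captured photo
UPLINK_MBPS = 20.0    # assumed store-to-cloud uplink bandwidth
RTT_CLOUD_MS = 60.0   # assumed round-trip time to the nearest cloud region
EDGE_INFER_MS = 50.0  # assumed inference time on modest edge hardware
CLOUD_INFER_MS = 20.0 # assumed inference time on cloud accelerators

def cloud_path_ms() -> float:
    """Upload the image over the WAN, infer in the cloud, return the result."""
    upload_ms = IMAGE_MB * 8 / UPLINK_MBPS * 1000  # MB -> Mb, seconds -> ms
    return upload_ms + RTT_CLOUD_MS + CLOUD_INFER_MS

def edge_path_ms() -> float:
    """Infer right in the store; no WAN transfer at all."""
    return EDGE_INFER_MS

print(f"cloud path: {cloud_path_ms():.0f} ms, edge path: {edge_path_ms():.0f} ms")
```

Under these assumptions the WAN transfer alone dwarfs the inference time, which is the "data gravity" effect in miniature: even a faster cloud model loses to a slower local one once the data has to travel.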

Another advantage of a decentralized cloud is that participants can provide resources to each other when required. Resource silos can be broken down, so the world as a whole sees far less overprovisioned infrastructure.

We're already seeing public cloud providers expand their presence closer to the edge. Those locations and investments, across multiple clouds and data centers, provide a solid point-of-presence foundation for something truly global.

Gaps and Challenges

While "Cloud 3.0" might be a natural evolution, we're just at the start of addressing the issues that must be resolved before enterprises can adopt it:

1. Security and controls have to be a primary consideration. This is analogous to the early private vs. public cloud debates. Enterprises took more than a decade to embrace the public cloud; cloud providers had to prove they could do a better job of security, operational efficiency, and workload and network isolation. Even so, enterprises remain hybrid or multicloud: no one wants to put all their eggs in one basket, and some clouds are better for certain workloads.

A fully decentralized worldwide "public" cloud will be hard for enterprises to swallow, especially if their workloads run on untrusted computing environments without visibility. Physical isolation of networks and data will be nearly impossible when everything is distributed, so a mindset change toward logical isolation will be required.

Enterprises will prefer a dedicated "private" decentralized cloud: an overlay on existing public and private clouds that uses their own public cloud accounts and on-premises infrastructure, so that they retain control and trust.

2. Requiring rewrites of applications will slow adoption. Early decentralized computing platforms require developers to use proprietary programming languages or PaaS services to create applications that run on their platforms: for example, CDN providers with their FaaS offerings, Ethereum with Solidity (a JavaScript-like programming language), or Synadia with a decentralized NATS message bus for applications. This may be necessary to take full advantage of the platform and to hide interconnections behind the scenes. However, it inhibits adoption: rewriting apps is a big investment, and it is risky for an enterprise to bet on a winner.

This reminds me of the prediction in A Berkeley View of Cloud Computing in 2009. It described two competing approaches to the cloud: Infrastructure-as-a-Service (e.g., AWS EC2) and Platform-as-a-Service (e.g., Google App Engine). It predicted PaaS would take off as it would hide the infrastructure complexity and provide simple programming interfaces to consume services. When A Berkeley View of Serverless Computing was released in 2019, the authors admitted: “The marketplace eventually embraced Amazon’s low-level virtual machine approach to cloud computing, so Google, Microsoft and other cloud companies offered similar interfaces. We believe the main reason for the success of low-level virtual machines was that in the early days of cloud computing, users wanted to recreate the same computing environment in the cloud that they had on their local computers to simplify porting their workloads to the cloud. Practical need, sensibly enough, took priority over writing new programs solely for the cloud, especially as it was unclear how successful the cloud would be.”

Today, AWS offers both IaaS and PaaS services. Both are important, but IaaS was adopted first because users initially understood better how to use it. A successful transition to Cloud 3.0 will be built on the developer tools and platforms that are familiar today, like containers and Kubernetes, but in a multicloud, multicluster fashion. Additional decentralized services and platforms will develop over time.
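As a sketch of what "familiar tools, but in multicluster fashion" might mean, here is a toy placement loop over a fleet of clusters spanning cloud and edge. The `Cluster`/`Workload` shapes and the location names are hypothetical; a real platform would drive the Kubernetes API rather than in-memory objects:

```python
# Toy multicluster placement: pick a target cluster for each workload by
# location preference, falling back to whichever cluster has the most free
# capacity. Purely illustrative; not any vendor's actual scheduler.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cluster:
    name: str
    location: str    # e.g. "us-west-edge", "aws-us-east-1" (hypothetical)
    free_cpu: float  # cores currently unallocated

@dataclass
class Workload:
    name: str
    cpu: float
    preferred_location: str

def place(workload: Workload, clusters: list[Cluster]) -> Optional[str]:
    """Return the chosen cluster's name, or None if nothing fits."""
    # Only clusters with enough free capacity are candidates.
    candidates = [c for c in clusters if c.free_cpu >= workload.cpu]
    if not candidates:
        return None
    # Prefer clusters in the requested location (data gravity), else any fit.
    local = [c for c in candidates if c.location == workload.preferred_location]
    best = max(local or candidates, key=lambda c: c.free_cpu)
    best.free_cpu -= workload.cpu
    return best.name
```

For instance, a 2-core image-recognition workload preferring `us-west-edge` lands on the edge cluster while it has room, and spills over to a big cloud region once it does not. The point is that the unit of placement stays a familiar containerized workload; only the scheduling scope widens.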

3. Rome wasn't built in a day, and neither will the decentralized cloud be. Enterprise adoption and migration to the decentralized cloud is not going to happen overnight; it will be a process. Also, services will still exist on-premises and on public clouds, so interoperability with them will be important.

When moving to a decentralized cloud, layer 2 or layer 3 site-to-site VPN and VPC peering setups will no longer be feasible. Most service access control will have to move to layer 7, at the application service level. Service mesh will play an important role, extending across clusters and interconnecting with existing legacy services.

Making everything seamless will require the computing platform to orchestrate and place additional service mesh gateways that automate service accessibility control.
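A minimal sketch of what layer-7, identity-based access control could look like, assuming SPIFFE-style workload identities carried in mTLS certificates; the policy table, service names, and routes are made up for illustration:

```python
# Layer-7 access control sketch: allow or deny a request based on the
# calling workload's identity and the HTTP method + path, instead of
# relying on L2/L3 network isolation. Identities and routes are
# hypothetical examples, not a real mesh's configuration format.

# Policy: caller identity -> set of (method, path prefix) pairs it may invoke.
POLICY = {
    "spiffe://shop/frontend": {("GET", "/catalog"), ("POST", "/orders")},
    "spiffe://shop/reporting": {("GET", "/orders")},
}

def authorize(caller: str, method: str, path: str) -> bool:
    """Return True if `caller` may invoke `method` on a route under `path`."""
    allowed = POLICY.get(caller, set())  # unknown callers get an empty set
    return any(m == method and path.startswith(prefix) for m, prefix in allowed)
```

In a real deployment a mesh gateway or sidecar would evaluate a policy like this on every hop, which is exactly the logical isolation that replaces physical network isolation in a distributed environment.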

While the concept of a fully decentralized, ubiquitous computing platform is compelling, many things still need to be tackled to get there. New use cases, new implementations: we're going to see a lot of evolution in the years to come.

Amazon Web Services is a sponsor of The New Stack.

Feature image via Pixabay.
