Azure Container Apps: Do We Need Yet Another Managed Container Service?

Microsoft's Azure Container Apps is the latest addition to the Azure compute portfolio, joining other container-based offerings such as Azure App Service, Azure Kubernetes Service, Azure Functions, and Azure Container Instances.
Nov 17th, 2021 5:00am
Feature image by Steve Buissinne from Pixabay.

At the Ignite 2021 conference earlier this month, Microsoft announced the public preview of a new serverless container platform branded Azure Container Apps. The latest addition to the Azure compute portfolio joins other container-based offerings such as Azure App Service, Azure Kubernetes Service, Azure Functions, and Azure Container Instances.

Microsoft is positioning Azure Container Apps as a Platform as a Service (PaaS) layer for AKS. It brings the familiar PaaS workflow of deploying one or more container images and walking away with a URL or endpoint. Behind the scenes, Container Apps run on top of a hidden, abstracted Kubernetes cluster based on AKS.

Earlier this year, Amazon launched App Runner — a managed service to run containerized apps on AWS. As Corey Quinn, the Chief Cloud Economist at The Duckbill Group noted, App Runner became the 18th service to run containers on Amazon Web Services. From Elastic Beanstalk to AWS Fargate, Amazon’s cloud has an overwhelming choice of container services.

It’s no different with Google Cloud. To deploy containerized workloads, you can target App Engine Flex, Google Kubernetes Engine, GKE Autopilot, and Cloud Run.

While the Kubernetes architecture and APIs are standardized and mature, the developer experience is still missing from the stack. The fact that the cloud providers have not been able to settle on a single PaaS layer for Kubernetes is a strong indicator of this.

Last year, in a podcast with Alex Williams, the founder and editor in chief of The New Stack, I highlighted the lack of developer experience in the Kubernetes ecosystem.

What is prompting cloud providers to add new container services to the portfolio almost every quarter?

The Complexity of Kubernetes Continues to Grow

Kubernetes is a meta platform: a platform designed to build other platforms. But even seven years after its launch, the industry is struggling to get the developer experience right.

The lack of an open source, portable PaaS/application layer on top of Kubernetes is one of the key reasons why developers find it difficult to deal with Kubernetes.

During the last few years, there have been significant developments in the cloud native ecosystem. Advancements in storage, networking, service mesh, observability, and security domains have pushed the envelope. They are helping Kubernetes and the cloud native stack to become truly enterprise-ready. On the flip side, they are also contributing to the increased complexity of the stack.

While there is a guarantee that the Kubernetes APIs are the same irrespective of the managed service offerings, there is no convergence of the platform layer running on different cloud environments. You may argue that services like AWS App Runner, Google Cloud Run, and the newly-minted Azure Container Apps deliver the developer experience. While that may be true for simple workloads that deal with a handful of containers, deploying, scaling, and managing complex, multicontainer applications is hard.

The Need for a Portable, Cloud Native Platform Layer

Projects such as Knative and Dapr attempt to reduce the plumbing needed to run complex workloads on Kubernetes. But they still need an abstraction layer on top to deliver a seamless developer experience.

Knative, a platform originally built on the Istio service mesh, focuses on bringing serverless capabilities to Kubernetes. Its serving and eventing capabilities are exposed as a set of APIs that developers use to deploy and scale containers on Kubernetes. But Knative in itself is not a PaaS layer; it abstracts the underlying Kubernetes and Istio APIs through a simplified, aggregated API layer.
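To make that concrete, here is a minimal Knative Service manifest (the resource name is illustrative; the image is a public Knative sample). A single resource yields a deployed, autoscaled, URL-addressable container:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                     # illustrative name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # public sample image
          env:
            - name: TARGET
              value: "World"
```

Applying this with kubectl causes Knative to create the underlying Deployment, revision, and route on your behalf: exactly the simplified, aggregated API layer described above, but still short of a full PaaS experience.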

Distributed Application Runtime (Dapr) also takes the platform approach, providing the core building blocks to develop microservices. It exposes a pluggable model that lets you swap services such as object storage, cache, messaging, and databases in and out without changing application code. Recently, Dapr was submitted to the CNCF as an incubating project.
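Dapr's pluggable model is expressed through Component manifests. As a sketch (names and connection values below are placeholders), an application's state store can be rebound from Redis to another provider by editing only this definition, never the application code:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore          # the name the application code refers to
spec:
  type: state.redis         # swap to another state store type without code changes
  version: v1
  metadata:
    - name: redisHost
      value: localhost:6379   # placeholder connection details
    - name: redisPassword
      value: ""
```

The application keeps calling the same Dapr state API against the component named `statestore`; which backing service answers is purely a deployment-time decision.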

KEDA, the Kubernetes-based Event-Driven Autoscaler, is another open source project, and already a CNCF incubating project. It's an event-driven autoscaling engine that scales workloads in and out based on external factors such as the number of messages in a queue or custom metrics coming through Prometheus. The end goal of KEDA and Knative Serving is the same: to provide scale-to-zero infrastructure that optimizes resource utilization and reduces the cost of compute.
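A KEDA ScaledObject sketch, assuming a Deployment named `queue-consumer` and an Azure Storage queue (the trigger metadata values are placeholders); `minReplicaCount: 0` is what enables scale to zero:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler
spec:
  scaleTargetRef:
    name: queue-consumer        # the Deployment to scale
  minReplicaCount: 0            # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders       # placeholder queue name
        queueLength: "5"        # target messages per replica
```

KEDA watches the external queue itself, so the workload can sit at zero replicas until messages arrive, something the stock Horizontal Pod Autoscaler cannot do on its own.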

What’s common between Dapr and KEDA is that these projects are backed and maintained by Microsoft and Red Hat.

Google, the key contributor to Knative, has built Cloud Run, a proprietary layer on GKE based on Knative. Cloud Run is one of the simplest yet most powerful serverless container platforms available in the public cloud.

Cloud Run is the missing link between Knative and a PaaS-like developer experience. Unlike Knative, Cloud Run is not an open source project, and Google leverages that to deliver a differentiated experience in its public cloud (GKE) and hybrid cloud (Anthos) platforms.

Taking a leaf from Google's playbook, Microsoft has now combined Dapr and KEDA to deliver a new container service in the form of Azure Container Apps. It's a multitenant, isolated layer running on top of an AKS cluster preconfigured with Dapr and KEDA. As with Cloud Run, you don't have to provision a managed AKS cluster and install Dapr and KEDA on it yourself; Container Apps come with a preconfigured environment based on Dapr, KEDA, and the Envoy proxy.

So, Azure Container Apps is to Dapr+KEDA what Google Cloud Run is to Knative.

It is obvious that platform providers such as Google and Microsoft are leveraging their open source investments to offer opaque, managed services on their cloud platforms. There is nothing wrong with that, but it highlights the lack of an open source, standards-based, portable application platform for Kubernetes.

The Fragmentation of Application Platforms on Kubernetes Is Not Healthy

The widening gap between the cloud native application layers and these proprietary implementations will hurt the ecosystem in the long run. Even if container images are the lowest common denominator of the stack, the packaging and configuration of apps running on these black-box services differ significantly from one service to another.

In the context of Kubernetes, a valid deployment/pod definition is guaranteed to run in any cluster, managed or self-hosted. That is not the case with the managed application services. A microservices workload targeting AWS Fargate cannot be easily migrated to Azure Container Instances or Azure Container Apps. Moreover, stitching multiple containers together to act as a single microservices-based workload is hard to run and scale in these managed services.
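For contrast, a plain Kubernetes Deployment like the sketch below (names and the image are illustrative) applies unchanged to AKS, EKS, GKE, or a self-hosted cluster; it is the managed application services that break this portability:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.21     # any OCI image works here
          ports:
            - containerPort: 80
```

Nothing in this manifest names a cloud provider; that neutrality is exactly what the proprietary application layers give up.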

What we need is a portable, transparent, open source application layer that runs consistently, whether inside a Minikube cluster on a developer's laptop or a massive multinode cluster provisioned in the public cloud. We are waiting for a platform layer that brings the best of Knative, Dapr, and KEDA to Kubernetes without the lock-in.
