
Acorn, a Lightweight, Portable PaaS for Kubernetes

Aug 26th, 2022 8:00am

Acorn, a new application deployment framework launched by the founders of Rancher, comes extremely close to what I expect from a development environment running on top of Kubernetes.

For a long time, I have advocated a simplified approach to developing and deploying applications targeting Kubernetes. I’ve emphasized the need for a portable, transparent, open source application layer that will run consistently, whether inside a Minikube cluster deployed on a developer’s laptop or a massive multinode cluster provisioned in the public cloud.

Designed by Darren Shepherd and his team, the creators of the most popular Kubernetes distribution, K3s, Acorn follows some of the same principles that made Rancher’s products click with the cloud native community. It is an open source, simple, lightweight, and portable framework to deploy and scale microservices on Kubernetes.

Developers and operators using Acorn don’t need to know the nuts and bolts of Kubernetes. It’s a bonus if they understand the internals like volumes, secrets, config maps, and ingress. But Acorn abstracts the complexity of Kubernetes with its own JSON-like domain-specific language (DSL) to describe a modern application based on the microservices design pattern.

The promise of a PaaS like Cloud Foundry is pushing the code to the runtime and walking away with a URL. Acorn focuses precisely on this workflow: it accepts source code or a container image and publishes an endpoint. Behind the scenes, it does the heavy lifting of negotiating with the Kubernetes API to create the resources and the plumbing needed to connect them.

Though there have been efforts such as Amazon Web Services App Runner, Azure Container Apps, and Google Cloud Run to bring a PaaS-like experience to containerized workloads, they are confined to their respective public cloud environments and are not portable. Acorn is one of the few frameworks that can scale seamlessly from a Kind cluster running on a developer’s laptop to a multinode cluster in the cloud.

This article analyzes the architecture of Acorn and goes behind the scenes to understand how Acorn deployments translate to Kubernetes objects.

Let’s take a look at the architecture in detail.

Set up the Environment in Minikube

Install Minikube on a Mac and enable the Nginx ingress add-on. An ingress controller is one of the most important prerequisites for Acorn.
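On macOS, the setup can be sketched as follows, assuming Homebrew is available (the ingress add-on ships with Minikube):

```shell
# Install and start Minikube (assumes Homebrew on macOS)
brew install minikube
minikube start

# Enable the Nginx ingress controller add-on, an Acorn prerequisite
minikube addons enable ingress
```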

Install Acorn CLI with Homebrew and check its version to ensure it is installed.
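Assuming the Homebrew tap published by the Acorn project (`acorn-io/cli/acorn`; verify the tap name against the current install docs), this looks like:

```shell
# Install the Acorn CLI from the project's Homebrew tap
brew install acorn-io/cli/acorn

# Confirm the binary is on the PATH and print its version
acorn version
```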

We are now ready to install Acorn in Minikube. Run acorn install to configure the cluster.
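A sketch of this step, assuming the CLI’s `acorn install` subcommand and that kubectl’s current context points at Minikube:

```shell
# Make sure kubectl points at the Minikube cluster
kubectl config use-context minikube

# Install the Acorn runtime components into the cluster
acorn install
```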

Installing Acorn in a Kubernetes cluster creates a set of resources that handle applications’ build time and runtime requirements. Let’s start with the namespaces.

The acorn-system namespace contains the API and the controller, which are the components of the runtime environment. The same namespace may optionally run the image builder and an image registry when running in the development environment. The other namespace, acorn, is reserved for applications, which we will explore in the next section.
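We can confirm the namespaces with kubectl; the exact list may vary by Acorn version:

```shell
# List the namespaces created by the Acorn installer
kubectl get namespaces | grep acorn
```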

The installer creates just one custom resource definition (CRD) in the cluster. The CRD, AppInstance.internal.acorn.io, maps to the Acorn apps running within the cluster.
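The CRD can be inspected directly; note that the plural resource name `appinstances` is inferred here from the kind and group:

```shell
# Show the single CRD registered by the Acorn installer
kubectl get crd | grep acorn.io

# Inspect the schema of the AppInstance resource
kubectl explain appinstances
```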

The Acorn API server is associated with the Kubernetes API server through aggregation. The Acorn CLI talks to the API server, api.acorn.io. Since Acorn leverages Kubernetes API aggregation, the CLI only needs the RBAC permissions to the Acorn API group.
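The aggregated API registration can be verified from the cluster side:

```shell
# The Acorn API group appears as an APIService object
kubectl get apiservices | grep acorn
```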

The API server passes inbound requests on to the Acorn controller, which translates the application definition into the appropriate Kubernetes resources such as deployments, config maps, secrets, and volumes. The controller manages the lifecycle of an Acorn application by creating and terminating the downstream Kubernetes resources.

Deploying Acorn Applications

Let’s start by creating the simplest possible Acorn application: a single web server based on the Nginx image.

Create an Acornfile in an empty directory with the following contents:
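A minimal Acornfile for this app might look like the sketch below; the `ports: publish` syntax follows Acorn’s DSL at the time of writing, so check the Acornfile reference for your version:

```
containers: {
  web: {
    image: "nginx"
    ports: publish: "80/http"
  }
}
```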

The definition is self-explanatory. We launch a container by the name “web” based on the Nginx image from the Docker registry and make it available on port 80.

Run the Acornfile with the following command:
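From the directory containing the Acornfile:

```shell
# Build and deploy the app described by the Acornfile
# in the current directory
acorn run .
```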

Since we didn’t provide a name to the app, Acorn has assigned a random name, proud-silence.

When we invoked the run command, Acorn created an OCI manifest and pushed it to the internal registry service running within the acorn-system namespace. It is also possible to use an external registry for these OCI artifacts.

Let’s get the URL to access the app by running the following command:
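The apps subcommand lists the running apps along with their endpoints:

```shell
# Show running Acorn apps; the ENDPOINTS column carries the URL
acorn apps
```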

Let’s access the web server to test the app.

Now, let’s see what this simple app did to our Kubernetes cluster.

First, we notice a new namespace that acts as a boundary for the app.
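Listing the namespaces should reveal one derived from the generated app name (the exact naming pattern is an assumption here):

```shell
# Look for the namespace created for the proud-silence app
kubectl get namespaces | grep proud-silence
```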

Let’s examine this namespace. As expected, running the app has created a Kubernetes deployment, replica set, pod, and cluster IP service.
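These objects can be listed in one shot; the grep pattern assumes the app name shown earlier:

```shell
# Resolve the app's namespace, then list the workload objects in it
NS=$(kubectl get ns -o name | grep proud-silence | cut -d/ -f2)
kubectl get deployments,replicasets,pods,services -n "$NS"
```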

The cluster IP service is exposed to the outside world through an ingress resource, which we will explore in a moment.

When we examine the acorn namespace, we find the instance of the CRD, AppInstance.

kubectl get appinstances -n acorn

Revisiting the idea of the ingress to expose the web application, let’s see if we can find an ingress resource within the application namespace.

Every Acorn app that “publishes” a port will have an associated ingress object created within Kubernetes.
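We can check by listing ingress resources in the app’s namespace (namespace lookup as before, assuming the same app name):

```shell
# The published port surfaces as an ingress object in the app namespace
NS=$(kubectl get ns -o name | grep proud-silence | cut -d/ -f2)
kubectl get ingress -n "$NS"
```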

Since the app runs as expected, it can now be tagged and pushed to an external registry. The operations team managing workloads can deploy it to a production cluster without knowing anything about the internals of the application.
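A hedged sketch of that hand-off, assuming the CLI’s tag and push subcommands and an illustrative registry path (registry.example.com and the repository name are not from the article):

```shell
# Tag the app image with an external registry path (illustrative)
acorn tag proud-silence registry.example.com/demo/web:v1

# Push the tagged image so another cluster can pull and run it
acorn push registry.example.com/demo/web:v1
```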

What fascinates me about Acorn is its simplicity and portability. It treats Kubernetes as an ideal runtime environment for deploying, scaling, and running applications without making assumptions about the cluster. It doesn’t tamper with the cluster and deploys a minimal set of resources, just enough to run microservices. It is truly portable in the sense that we can switch the context from the development cluster to a production cluster, deploy the same application, and it runs there unchanged.

Acorn is heavily influenced by Docker and follows some of the same familiar patterns for running multicontainer applications. Like Cloud Foundry, it also supports binding to existing services, such as databases and caches deployed in other apps.

Once Acorn supports deploying straight from a Git repo that contains an Acornfile, it will become extremely easy for DevOps teams to manage microservices-based applications.

In the next part of this series, I will show a real-world example of a microservices application based on Acorn running across a development environment and a production cluster. Stay tuned.

TNS owner Insight Partners is an investor in: Docker.