
Containerless Computing: The Ultimate Service Decomposition

There's a need for "containerless" computing, in which application developers can focus on the services rather than the infrastructure.
May 29th, 2020 10:15am by Bruce Basil Mathews

KubeCon + CloudNativeCon sponsored this post, in anticipation of KubeCon + CloudNativeCon EU, Aug. 17 – 20, virtually.

Bruce Basil Mathews
Bruce is a Senior Solutions Architect at Mirantis. He has been involved with the Open Source community since 2000. He was a member of Hewlett-Packard’s Public Cloud team and was heavily involved with the initial release of HP’s Helion OpenStack. His new technology loves are container orchestration platforms and service mesh technologies such as Kubernetes and Istio.

There is no such thing as “serverless” computing, of course; the computing always has to happen somewhere. The term arises from users’ desire to get away from managing actual servers and infrastructure. Further, what we’re seeing now is a need for “containerless” computing — in which application developers can focus on the services they’re trying to provide, rather than the infrastructure that creates and orchestrates those services.

It wasn’t always this way, of course. In the days of Fortran and COBOL (back when dinosaurs roamed the earth), we were forced to think of applications monolithically, starting with the database and working our way forward to the front end. These days things are different; the younger the developer, the more likely they are to have learned object-oriented programming from the start, making it much easier for them to adjust to the microservices mindset.

And it is an adjustment.

Why Decompose Applications into Services?

It’s a necessary adjustment, because in order to create truly modern, cloud native applications, we need to have specific functions containerized and orchestrated in a convenient, fault-tolerant way. This means breaking the overall end goal into individual services by function, or by operation, or even by organizational unit.

In the old days, we wrote these functions as, well, functions; and we included them in libraries that were then imported into the application. In that respect, it’s not that much of a shift. Those libraries are essentially now container images stored in repositories such as Docker Hub, to be spun up as needed.
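
To make that shift concrete, here is a minimal sketch (in Python, with hypothetical names) of the same capability consumed both ways: first imported from a library, then reached over the network as a containerized service.

    # Old style: the function ships in a library and is imported directly.
    # from codec_lib import encode_video   # hypothetical library
    # encoded = encode_video(raw_bytes)

    # New style: the same function runs as a containerized service, and the
    # application calls it over the network instead of importing it.
    import urllib.request

    def encode_via_service(raw_bytes: bytes) -> bytes:
        # "video-encoder" is a hypothetical service name, resolved for us
        # by the orchestrator (e.g., a Kubernetes Service DNS entry).
        req = urllib.request.Request(
            "http://video-encoder/encode", data=raw_bytes, method="POST"
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read()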

But how does that lead us to “containerless” architecture?

Think of it this way. We’re used to thinking of serverless computing as separate from a technology such as Kubernetes, which orchestrates containers. The difference between the two is that (theoretically) when a function isn’t in use in a serverless environment, it doesn’t exist and it isn’t using resources or costing money; whereas in a Kubernetes environment, you can scale your resources down to a single container, but that container is still running (and costing money) even if it’s not being used.
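
To make that contrast concrete, here is a hedged sketch using the official kubernetes Python client: you can scale a Deployment down to zero replicas by hand, but plain Kubernetes will not bring it back when a request arrives, and that missing piece is exactly what serverless platforms provide. The Deployment name and namespace here are hypothetical.

    # Manually scaling a hypothetical Deployment to zero replicas.
    # Requires: pip install kubernetes
    from kubernetes import client, config

    def scale_to_zero(name: str = "video-encoder", namespace: str = "default"):
        config.load_kube_config()  # uses your local kubeconfig
        apps = client.AppsV1Api()
        # Zero replicas stops the pods (and the spend), but nothing here
        # will scale back up on incoming traffic; a serverless layer on
        # top of Kubernetes is what closes that loop.
        apps.patch_namespaced_deployment_scale(
            name, namespace, {"spec": {"replicas": 0}}
        )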

It doesn’t actually need to be that way. There are already ways to enable serverless computing on Kubernetes, such as Knative. But what if we could create an environment in which we get the advantages of Kubernetes — such as scheduling, robustness, taints and tolerations, and so on — without having to bother the user with the details of how to make that work?

Let’s say you, as a developer, are creating an application to stream and analyze video. At some point in this process, you’re going to have to encode the data. In decomposing the application, you’ve decided that this is an individual service, so you create a container that will take raw data as a request and return the encoded version. Now what?
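
To ground that, here is a minimal sketch of what might run inside that container, using only the Python standard library. zlib compression stands in for a real video codec; the shape of the service, not the codec, is the point.

    # Minimal encoding service: POST raw bytes, receive the encoded payload.
    import zlib
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EncodeHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            raw = self.rfile.read(length)
            encoded = zlib.compress(raw)  # placeholder for real encoding
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.send_header("Content-Length", str(len(encoded)))
            self.end_headers()
            self.wfile.write(encoded)

    if __name__ == "__main__":
        # The container image's entry point: listen for encode requests.
        HTTPServer(("", 8080), EncodeHandler).serve_forever()

Packaged into an image, this service becomes the unit the orchestrator schedules; the application only ever sees the endpoint.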

Imagine that you could take that container and add it to a secure registry (so that only your organization could use it), and when the application needs the encoded data, the service is simply there. You aren’t worried about making sure there’s a server available, or scaling, or networking, or anything else for that matter. In some ways, that’s the definition of serverless computing, of course. But serverless as a practice has one major drawback: it isn’t standardized, which brings us back to vendor lock-in.

Standardizing on Kubernetes

What if instead, we were standardized on Kubernetes? Now we have the ability to avoid vendor lock-in, use skills that already exist, and easily create multicluster applications. We can even create an environment where some resources are shared, and others are private, as determined by the needs of the developer. We can also use service mesh technology such as Istio to manage and segment network traffic.

In fact, let’s take it a step further. What if we could add intelligence to the provisioning of these containers? What if a neural network or other machine learning algorithm could predict when various containers are likely to be needed, and spin them up ahead of time? The algorithm could scale not just the containers, but the infrastructure itself — minimizing the costs of maintaining the infrastructure while still maximizing performance.
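
As a hedged sketch of that idea, the loop below uses a simple moving-average predictor (a stand-in for a real neural network or other learned model) to decide how many encoder replicas to warm up before the demand arrives. All names and capacity figures are illustrative.

    import math
    from collections import deque

    class MovingAveragePredictor:
        """Stand-in for a learned model: predicts the next minute's request
        rate as the mean of a sliding window of recent observations."""
        def __init__(self, window: int = 5):
            self.samples = deque(maxlen=window)

        def observe(self, requests_per_minute: float):
            self.samples.append(requests_per_minute)

        def predict(self) -> float:
            return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def desired_replicas(predicted_rpm: float,
                         per_replica_rpm: float = 100.0,
                         headroom: float = 1.2) -> int:
        # Warm enough replicas for the predicted load plus 20% headroom,
        # so the containers exist before the requests do.
        return math.ceil(predicted_rpm * headroom / per_replica_rpm)

    predictor = MovingAveragePredictor()
    for rpm in [80, 120, 200, 260, 310]:  # simulated recent traffic
        predictor.observe(rpm)
    print(desired_replicas(predictor.predict()))  # -> 3 replicas to pre-warm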

All of this can run in the context of Trusted Computing, where images are stored in a trusted registry and can only be instantiated on servers and devices that have been verified by technologies such as Trusted Platform Modules (TPMs). This enables distributed computing while mitigating security risks.
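
Here is a deliberately simplified sketch of the verification step in that flow: before an image is instantiated, its digest is checked against an allow-list of vetted images. In a real deployment the allow-list itself would be signed and the check anchored in hardware such as a TPM; the digest comparison below is only the skeleton.

    import hashlib

    # Digests of image contents the organization has vetted and published
    # to its trusted registry (illustrative value: sha256 of b"test").
    TRUSTED_DIGESTS = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def is_trusted(image_bytes: bytes) -> bool:
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest in TRUSTED_DIGESTS

    # Refuse to instantiate any container whose image fails verification.
    print(is_trusted(b"test"))          # True: digest is on the allow-list
    print(is_trusted(b"tampered img"))  # False: refuse to run it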

Where It Gets Complicated

Not to say that this is all unicorns and rainbows, of course. Another important aspect of this new way of thinking is standardization and security. We need to plan for both in the architecture now, protecting what runs inside the application as well as guarding it against the outside world, so the executing application can’t be attacked. These are areas where companies must invest a non-trivial amount of money to make the required technology advancements happen. For example, to achieve the needed flexibility today, we often wind up having to run containers as the root user, and that’s absolutely not a long-term solution. We need a shared model of authority, which means everyone would have to agree on how the methodology will work. There’s an industry-wide need to define authority and how trusted computing can work harmoniously across all flavors of infrastructure: containers, VMs, and bare metal.

One thing that will remain consistent, however, is that the new businesses forming around these technical advancements will wind up treating everything (containers, orchestration, networking, and even hardware) as code. All of the new automation methodologies will have to be self-healing, self-expanding, self-configuring, and even self-automating across applications. For example, a neural network will need to be able to expand when necessary, predicting demand and improving its model (and accuracy) as it goes.

What’s more, this entire network of technology advancements needs to be upgradeable and updatable in place, without interruption of services, which means that hardware vendors will need to participate alongside the software development vendors.

In other words, there’s a lot for us to do if we’re going to keep pace with what’s needed, but if we start now and work together, we can definitely make it happen.

Feature image via Pixabay.

TNS owner Insight Partners is an investor in: Docker, Mirantis.