CI/CD / Containers / Kubernetes / Machine Learning

AppLariat Provides On-the-Fly Container Reconfiguration

22 Jan 2018 8:13am

You want to be able to deliver your application on multiple platforms, on-premises or in a public cloud. You might want to deliver it as a managed service, where it runs in the customer’s own cloud or in the customer’s portion of a public cloud.

Your sales team wants to do a demo, but it’s difficult for them to run something that’s consistent across all regions.

Your QA team wants to use the latest versions for testing. So how do you deliver all those different builds in a consistent manner through policy, with auto-scaling to minimize cost?

Irvine, Calif.-based appLariat’s answer to that involves two parts: a delivery mechanism to containerize applications and deploy them with the Kubernetes open source container orchestration engine, and an AI-based capability to configure them on the fly to handle the load and meet required service-level agreements (SLAs) with the smallest possible footprint.

Once you have the delivery mechanism, the issue becomes how to configure the application so it can handle the load coming at it while keeping cloud costs to a minimum, said Mazda Marvasti, CEO and co-founder. First you need the delivery mechanism, then the reconfiguration mechanism.

“It’s kind of like VM sprawl. A lot of companies popped up to move VMs around to have a more optimal distribution. Containers are orders of magnitude more complex in terms of reconfiguration,” he said. With virtual machines, you had to move them from one location to another. With containers, not only can you move them, you can also resize them on the fly in terms of the resources allocated and the number of instances. That makes the problem far more complex.

“Techniques such as capacity management just don’t apply to the container world; you have to use this AI-based mechanism to determine what is the optimal configuration,” said Marvasti, who, along with co-founders Wayne Watson and Steve Henning, previously worked at VMware.

Marvasti noted that PayPal, for one, has more than 700 apps that run 150,000 containers, making manual configuration untenable.

Reconfiguration on the Fly

The company launched at DockerCon 2017 last April, after the founders saw little consistency in the way containers are developed and managed. The platform aims to create that consistency.

“We take the customer’s application and automatically containerize it, and deliver it to a Kubernetes platform. So it looks like a cloud-native application without you having to build it from the ground up as a cloud-native application, but that’s not really the purpose of it. Once it’s in that form, the customer can deliver that application to the destination of their choice,” Marvasti said.

It uses a wizard-like interface to automate containerization of your existing application for deployment on Amazon, Google and other public clouds, or on virtualization platforms like VMware vSphere.

“We have a mechanism for building containers and it’s essentially a best-practices thing,” he said. “When you tell us it’s MongoDB, we already have all the necessary information to build an appropriate Mongo container that knows how to scale, knows all the environment variables that Mongo needs to run in a Kubernetes environment. Enforcing consistency means we don’t require customers to handcraft these containers.”
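Marvasti didn’t share what those generated definitions look like, but the kind of best-practices container spec he describes can be sketched with the Kubernetes Python client. The image tag, credentials and resource figures below are illustrative assumptions, not appLariat’s actual template; the environment variables are the ones documented for the official mongo Docker image.

```python
# Hypothetical sketch of the kind of container definition such a template
# could produce for MongoDB, using the kubernetes Python client.
from kubernetes import client

mongo = client.V1Container(
    name="mongo",
    image="mongo:3.6",
    ports=[client.V1ContainerPort(container_port=27017)],
    env=[
        client.V1EnvVar(name="MONGO_INITDB_ROOT_USERNAME", value="admin"),
        # In practice the credential would be pulled from a Kubernetes Secret.
        client.V1EnvVar(
            name="MONGO_INITDB_ROOT_PASSWORD",
            value_from=client.V1EnvVarSource(
                secret_key_ref=client.V1SecretKeySelector(
                    name="mongo-creds", key="password"
                )
            ),
        ),
    ],
    # Starting resource requests/limits; the reconfiguration engine would adjust these later.
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "512Mi"},
        limits={"cpu": "1", "memory": "1Gi"},
    ),
)
```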

You make selections from appLariat’s library of common open source and licensed application components, and specify component versions and configuration data specific to your application. If the component you need isn’t there, you can use your own images or Puppet, Chef or Ansible scripts. Users don’t have to write Dockerfiles or Docker Compose files; the platform handles that.

Once it has the application definition, the platform deploys the app on a few small clusters and runs a handful of automated tests to learn the application’s behavior at small scale. It then uses that learned behavior to train a neural net. The third piece is a config extractor that uses the behavior captured by the neural net to derive configurations at larger scale.
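Marvasti didn’t describe the model itself, but the learn-then-extract idea can be illustrated conceptually: fit a model on small-scale measurements of configuration and load versus response time, then search for the cheapest configuration predicted to meet the SLA at production load. The sketch below is a hypothetical illustration, not appLariat’s implementation; the synthetic measurements, the MLPRegressor stand-in for the neural net and the brute-force candidate search are all assumptions.

```python
# Hypothetical illustration of the learn-then-extract idea; not appLariat's code.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for measurements from automated tests on small clusters:
# (replicas, cpu_millicores, requests_per_sec) -> observed p95 latency in ms.
X_train = np.array([
    [1, 250, 50], [2, 250, 100], [2, 500, 100],
    [3, 500, 200], [4, 500, 400], [4, 1000, 400],
], dtype=float)
y_latency = np.array([120, 140, 90, 110, 150, 95], dtype=float)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X_train, y_latency)

def cheapest_config(expected_rps: float, sla_ms: float):
    """Return the smallest candidate configuration predicted to meet the SLA."""
    candidates = itertools.product(range(1, 21), [250, 500, 1000, 2000])
    feasible = []
    for replicas, cpu in candidates:
        predicted = model.predict([[replicas, cpu, expected_rps]])[0]
        if predicted <= sla_ms:
            # Rank by total CPU footprint (replicas * millicores).
            feasible.append((replicas * cpu, replicas, cpu, predicted))
    return min(feasible) if feasible else None

print(cheapest_config(expected_rps=1000, sla_ms=100))
```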

You redo the training every time you make code or architecture changes that affect the scalability of your application, but it doesn’t run all the time, he said. The system also learns from production data and feeds that back into the neural net. The re-testing and reconfiguration take place in the background, often without the customer being aware of it.

Timing Is Everything

In simply moving applications to Kubernetes, appLariat competes with Red Hat’s OpenShift and startups like Rancher and Apcera, as well as as-a-service Kubernetes players like Platform9. The automated, AI-based reconfiguration is one of the ways the company differentiates itself, Marvasti said.

“There’s a lot of misconception about Kubernetes being able to do these things already. Kubernetes is great at knowing what to do, but not when to do it or how to go about doing it. You can assign resources to your container: It must have at least this much CPU, at least this much memory, there will be this many instances of this container. From a user perspective, meaning developer, DevOps guy or even IT, how do they know what to assign? There could be thousands and thousands of containers. And if you don’t assign the right values, Kubernetes might start killing your containers [because it’s out of resources],” he said.

The platform determines the container configurations at runtime using the Kubernetes API. It works alongside the Kubernetes scheduler to reconfigure production apps so they run on fewer nodes in the cluster while still handling the load and meeting the SLA.
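AppLariat hasn’t published its integration code, but the kind of runtime adjustment described here, resizing a workload’s resource requests and instance count, comes down to standard Kubernetes API calls. A minimal sketch with the kubernetes Python client, assuming a Deployment and container both named web in the default namespace:

```python
# Minimal sketch of runtime reconfiguration through the Kubernetes API,
# assuming a Deployment named "web" in the "default" namespace.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
apps = client.AppsV1Api()

patch = {
    "spec": {
        "replicas": 3,  # scale to the instance count the model recommends
        "template": {
            "spec": {
                "containers": [{
                    "name": "web",
                    "resources": {
                        "requests": {"cpu": "500m", "memory": "256Mi"},
                        "limits": {"cpu": "1", "memory": "512Mi"},
                    },
                }]
            }
        },
    }
}

# Strategic-merge patch: Kubernetes rolls the Deployment out at the new size and resources.
apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)
```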

Its customers include MapR, whose sales team uses appLariat to create demo and proof-of-concept environments and relies on its policy capabilities to govern aspects such as how long a project will live and who has access. Future work will involve adding analytics to the mix.

The platform integrates with popular automated build tools such as Jenkins and CircleCI, as well as GitHub, Bitbucket and other code repositories. Users just have to point appLariat to where the code is and to the tools they’re using, and it gets the code ready to deploy, Marvasti said.

The Cloud Native Computing Foundation, which manages Kubernetes, and Red Hat are sponsors of The New Stack.

Feature Image: “Sonoita Rodeo” by Bill Morrow, licensed under CC BY-SA 2.0.

