Iron.io: Orchestration Should Focus on the Job, Not the Container

15 Apr 2016 8:52am

In an interview with The New Stack’s Alex Williams at the Intel Cloud Day event in San Francisco last March 31, Chad Arimura, CEO of container workload management provider Iron.io, made a strong case that software developers should eventually be allowed to ignore issues of deployment and infrastructure when building applications.

He cited the rising popularity of so-called serverless architectures such as AWS Lambda and Google Cloud Functions, where every detail of software deployment is masked from the developer writing code.

Arimura was joined by Ivan Dwyer, Iron.io head of business development, who noted that, in an optimal system, developers would be solely focused on workloads and not the process of hosting them.

“If you look at the evolution of the server, the virtual machine, and now the container, the container is just another disambiguation of what a server is, in a sense,” said Dwyer. “Now we can put a lot of containers into servers. But we like to think about it as the next evolution: that is, around the job or the workload type.”


It was a curious case to be making, especially at an event where Intel had just announced an extended partnership with a commercial container maker (CoreOS) and a commercial OpenStack distributor (Mirantis). One goal of their combined efforts will be to enable Intel processors and orchestrators of CoreOS containers to communicate with one another directly. This way, conceivably, Kubernetes (and, by extension, CoreOS Tectonic) would become capable of polling the individual processors in server clusters, and could stage workloads on selected processors according to their relative state of readiness.

Iron.io would prefer a state of affairs where the virtual envelopes that encase applications, be they Docker (OCI) containers or VMware-style virtual machines, are allowed to be immaterial to the developer at one level. Dwyer suggested to Williams that an optimum platform should concentrate on the job that the workload represents.

That concentration, he argued, would give monitoring functions, which are normally concerned with factors such as CPU utilization and latency, the ability to report metrics related to the job itself: for example, with an image processor, how many images are being scanned and processed, and at what sizes. This way, service-level agreements (SLAs) between service providers and customers can be developed that pertain to job performance, as opposed to just availability and uptime.
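To make the idea concrete, the job-centric metrics Dwyer describes might look something like the following minimal sketch. This is a hypothetical illustration, not Iron.io's API: the `JobMetrics` class and its method names are invented here, and a real platform would emit these figures to a metrics backend rather than return them in a dictionary.

```python
# Hypothetical sketch: job-level metrics for an image-processing workload,
# the kind of figures an SLA could target instead of CPU or uptime alone.
import time


class JobMetrics:
    def __init__(self, job_name):
        self.job_name = job_name
        self.images_processed = 0
        self.total_bytes = 0
        self.started = time.monotonic()

    def record(self, image_size_bytes):
        """Count one processed image and its size."""
        self.images_processed += 1
        self.total_bytes += image_size_bytes

    def summary(self):
        """Report job-centric metrics rather than host-centric ones."""
        elapsed = time.monotonic() - self.started
        return {
            "job": self.job_name,
            "images_processed": self.images_processed,
            "avg_image_bytes": self.total_bytes // max(self.images_processed, 1),
            "images_per_second": self.images_processed / max(elapsed, 1e-9),
        }


metrics = JobMetrics("thumbnail-generator")
for size in (120_000, 240_000, 360_000):
    metrics.record(size)
print(metrics.summary())
```

The point of the sketch is the shape of the data: counts, sizes, and throughput of the job itself, which a provider and customer can write into an SLA regardless of whether the work runs in a container, a VM, or a serverless function.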

“With a more granular unit of compute, scalability becomes more efficient and effective,” Dwyer wrote in a February 2015 Iron.io white paper [PDF, registration required]. “As a pattern, microservices promotes Y-Axis scalability by decomposing functional elements as individual services, as opposed to traditional replication. This separation of components creates a more effective environment for building and maintaining highly scalable applications.”

Intel and Iron.io are sponsors of The New Stack.

Feature image: Anchorage cables for the San Francisco-Oakland Bay Bridge, U.S. Library of Congress.
