With the launch of a new enterprise platform, Apcera is doubling down on support for containerizing applications, both legacy and cloud-native.
The Apcera Container Management Platform extends its offering across multi-cloud capabilities, policy and governance, networking and container orchestration.
The company also announced integration between its Community Edition and Amazon EC2 Container Service, Google Compute Engine and Microsoft Azure as well as enhanced OpenStack support to make it easier to move workloads from one infrastructure to another.
“Enterprises are looking at containers not only for new applications, but also those sitting on x86 that might be 15 years old. They’re expensive to maintain, they’re inflexible. They want to move to the next generation of IT — hybrid cloud, multi-cloud, containers, etc. — but in many cases they’re stuck,” said Henry Stapp, Apcera director of product management.
People want to manage all their apps in one place, explained software architect Josh Ellithorpe. They’re looking to do resource allocation, log aggregation, health monitoring and scheduling for all their apps, whether they’re cloud-native or some old app in their infrastructure.
The new platform promises improved:
Security: Enterprises set policy that determines where workloads run and the resources allocated to them.
Network micro-segmentation: Container-level policy for security and governance, delivered through a real-time, software-defined network that manages all network communication across a multi-cloud infrastructure.
Hybrid mobility: It allows enterprises to treat their entire infrastructures as a single cluster, providing portability across on-premises, cloud or hybrid environments without breaking dependencies or governance.
The new platform offers app tokens and event streams, allowing applications to read a stream of events and take action on behalf of other jobs. The company has also added options for enterprise customers, with integration to the identity and access management tool Keycloak as well as Microsoft Active Directory-based implementations. It has enhanced LDAP/Active Directory support to ensure app tokens and event streams work for legacy apps.
The software also supports IPsec, guaranteeing fully encrypted communication on the back end of the cluster. And it offers an additional Layer 2 networking mode that can be leveraged via VXLAN; previously, all Layer 2 Open vSwitch (OVS)-based networking ran over GRE (Generic Routing Encapsulation). The new option provides better compatibility across cloud environments, Ellithorpe said.
All these features can help onboard legacy applications into a cloud environment.
“We see container orchestration systems focused on greenfield applications, but not allowing for older applications to even play in those environments,” he said.
“Some of those applications don’t suit well for Docker images, or when you put them into Docker images, they require really complex init scripts to set things up that are brittle and cause those things to not work as advertised. So we looked at that and said, ‘OK, what are the core things a legacy application really needs?’”
Obviously, it needs a filesystem that’s set up, configuration files pointing to the right services, and a persistent volume if it’s saving data to disk. It needs a certain amount of resources and certain network routes if it is to function and communicate with everything it needs to reach. So the company looked at how best to provide those things.
The networking essentially happens for free, he said: the platform whitelists the correct locations, and the app doesn’t know the difference. Within container-management products, though, persistent disks need to be prescribed; you don’t get persistent disks across all your containers by default.
So the platform prescribes that persistent filesystem through NFS- or SMB-based storage, giving the application what looks like a persistent disk. The company then looked at the configuration files driving these legacy applications, many of which are XML-based. Such files do not understand things like environment variables, he said, and are not designed for the service discovery mechanisms of cloud-native applications.
The software uses an advanced templating system so the service can do service discovery without the application even needing to know that it’s happening.
“We basically mark the configuration files as templates, then instead of putting hard-coded credentials in those configs, we put a token. When we’re starting the container, we fill out the configs, then by the time we start the app, the app just runs exactly the way it used to,” he explained.
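The token-filling step Ellithorpe describes can be pictured with a minimal sketch. Apcera has not published its templating engine, so the placeholder syntax, function names and credentials below are all hypothetical, chosen only to illustrate the idea of rendering a legacy XML config from a template before the app process starts.

```python
import re

def render(template: str, secrets: dict) -> str:
    """Replace {{TOKEN}} placeholders with resolved credentials.

    Hypothetical sketch: a real system would pull these values from
    its service-discovery layer at container start time.
    """
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: secrets[m.group(1)],
                  template)

# A legacy XML config marked up as a template (illustrative names).
template = '<datasource url="{{DB_URL}}" password="{{DB_PASS}}"/>'
secrets = {"DB_URL": "jdbc:postgresql://db.internal/app",
           "DB_PASS": "s3cret"}

# Rendered before the app starts, so the application reads an ordinary
# config file and runs exactly the way it used to.
print(render(template, secrets))
```

In this scheme the rendered file is written to the container’s filesystem before the application process launches, which is why, as Ellithorpe notes, the app never needs to know service discovery happened at all.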
A legacy application dropped into the platform doesn’t even realize it’s running in a container, Stapp said. The platform wraps service discovery, dynamic policy, security controls and network controls around the application with no modifications, provides management of it all in one place, and allows updates on the fly.
Users can then decide whether they want to break it down into microservices or do additional things to make that application more modern.
The company will continue to focus on ease of use, the ability to manage everything in one place, and keeping up with the container ecosystem, Stapp said.
That includes participation in the Open Container Initiative. The company contends Docker is moving away from its open source roots and toward commercial interests rather than building generic building blocks — the argument behind CoreOS’s rkt. Ellithorpe points to Red Hat’s recent work, and talk of a fork, as examples of a focus on more stable releases.
Third-party vendors want to “optimize for a stable enterprise core,” whereas Docker itself wants to innovate quickly and get features to the users as quickly as possible, 451 Research Director Donnie Berkholz pointed out in a recent New Stack Analysts Podcast.
Apcera, CoreOS, Docker and Red Hat are sponsors of The New Stack.