Today, 20% of enterprises run containerized applications, according to Gartner, and there’s more to come: the firm predicts that number will skyrocket to 75% within just two years. That’s understandable, as tremendous developments in container-related technology (thank you, Docker and Kubernetes!) have turned the promise of this longstanding but underutilized technology into a viable reality.
But as container enhancements improve and simplify the deployment of cloud native apps, what about the thousands and thousands of legacy, non-cloud native apps left behind? This is a very challenging problem, because many Fortune 1000 enterprises still rely on legacy software written decades ago that is deeply embedded within core applications vital to business operations.
Common sense may suggest that deploying these legacy applications in virtual machines would be the best solution. However, most customers do not want to park these apps in VMs while an increasing number of their modern apps are deployed in containers running on public and private clouds. Indeed, they would like to bring cloud-based, container-driven flexibility to their legacy apps. But refactoring these apps into a container-friendly, cloud native architecture can be challenging, leaving IT managers with their hands tied and wondering how to bring cloud-level flexibility to legacy apps.
Agility, flexibility and cost savings are what customers are looking to gain by running legacy apps in containers. However, many barriers to these cloud-based conversions exist, and they can be quite significant.
First is a scarcity of talent. Many of these applications were written decades ago, and the engineers who created them — and know them best — have long since retired or are soon exiting the workforce. Further, some (such as management applications) are so deeply embedded in an organization’s operations and so tightly integrated with other applications and processes that they form a stubborn, interdependent knot that is difficult to untangle — especially when those with the most knowledge of them are no longer available. Thus, through the accident of time, rewriting the software may be nearly impossible.
Second, and relatedly, is expense. Even if the talent is available, a conversion exercise requires a hefty financial undertaking and an intense overhaul of legacy processes. Cost is a deterrent to many IT upgrades; in fact, according to McKinsey, only 30% of digital transformation exercises are successful, likely in part because companies aren’t seeing dramatic, near-term ROI improvements. And maintenance-oriented expenses, like those associated with rearchitecting legacy apps for a cloud environment, can be key contributors to that cost crunch.
Fortunately, new advancements in container platform architecture are alleviating many of these concerns. Customers recognize that this transition is difficult to manage and look to their cloud service providers to make recommendations and manage their transition to containers. With the right container platform, companies can effectively redeploy legacy applications without refactoring. For example, they can now deploy legacy apps on-premises using a public cloud platform to “create” multiple duplicate instances of an application across hardware systems, gaining some of the flexibility and mobility of a cloud-based application. Open source solutions can help with this while avoiding the problem of vendor lock-in.
Another avenue to consider when seeking to run legacy applications in containers is to run the containers themselves on bare metal servers, which is now an option on certain platforms. Doing so without loss of security and consistency can eliminate some of the resource partitioning and additional hardware requirements that result from running containers within VMs. When you do away with the intermediate layer of the VM, efficiency and flexibility are greatly enhanced.
When it comes to deploying legacy applications in containers on Kubernetes, enterprises may turn to application-specific Operators for help. Since the legacy app cannot be refactored, for the reasons described above, the enterprise may choose to deploy and control the application using a Kubernetes Operator. While Operators are powerful, writing one suffers from the same challenges that confront the enterprise when trying to refactor the legacy application itself: the author of the Operator must be highly knowledgeable not only of Kubernetes, but also of the legacy app — and such talent is scarce and expensive. Ideally, then, an enterprise should be able to run non-cloud native apps on Kubernetes without having to write a custom Operator for each application.
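At its core, an Operator is a control loop that continually reconciles an application’s desired state with what is actually running. The sketch below illustrates that reconcile pattern in plain Python; it is a conceptual illustration only — real Kubernetes Operators are typically written in Go against the Kubernetes API, and all names and data structures here are hypothetical:

```python
# Conceptual sketch of the Operator pattern's reconcile loop.
# All names are hypothetical; a real Operator would query and mutate
# cluster state through the Kubernetes API rather than plain dicts.

def reconcile(desired: dict, observed: dict) -> list:
    """Compare desired vs. observed state and return corrective actions."""
    actions = []
    # Create anything missing, and update anything drifted from spec.
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    # Remove anything running that is no longer desired.
    for name in observed:
        if name not in desired:
            actions.append(("delete", name))
    return actions

# Example: the spec asks for 3 replicas, but only 2 are running,
# and an obsolete component is still hanging around.
desired = {"legacy-app": {"replicas": 3}}
observed = {"legacy-app": {"replicas": 2}, "old-sidecar": {"replicas": 1}}
print(reconcile(desired, observed))
# → [('update', 'legacy-app', {'replicas': 3}), ('delete', 'old-sidecar')]
```

The hard part of a real Operator is not this loop but encoding the legacy application’s operational knowledge — upgrade order, backup procedures, failure recovery — into the logic that computes and executes those corrective actions, which is precisely where the scarce expertise is needed.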
Luckily, today’s most advanced container platforms provide such functionality. You can gain the benefits of containerization and orchestration by Kubernetes for all your applications — legacy and cloud native alike. This, along with the option of deploying application containers on bare metal, in virtual machines or in the cloud, provides a platform flexible enough for any IT group. The net result is reduced complexity and cost and increased application performance, all without adversely impacting enterprise security.
That should generate enough ROI to propel your digital transformation to the next level.