This year has seen many companies start deploying containers into production. For now, the majority of the activity revolves around new applications. We expect more legacy applications to be containerized, but no one knows how long this process will take. What we do know is that many organizations are not ready for microservices. Without the infrastructure and processes in place, companies will not have the time or resources to even think about moving their older workloads to a new cloud platform.
People are simultaneously deploying containers for several different use cases. According to a Bitnami survey that focused on container users, 51 percent are using them for test/dev and 47 percent for developing new applications.
Once these new apps are in production, IT departments focus on making their container-related infrastructure enterprise-ready. Moving forward, our assumption is that, absent a specific requirement to the contrary, developers will choose cloud-native platforms for new apps and build them on microservices principles. These greenfield scenarios are common at startups, but most developers actually spend their time working on existing applications. Only 29 percent of container users say they will use containers to re-architect part of an older app. However, a majority of those doing re-architecting will also be deploying new apps on containers.
Cost and risk are bigger obstacles when re-architecting legacy applications for containers than when deploying new ones. Companies deploying new applications into production can take baby steps: if the infrastructure for operating containers at scale fails, rolling back the deployment is relatively painless. In contrast, enterprises are more risk-averse about existing applications, which tend to be mission-critical or to affect a large number of users. In terms of cost, much new app development is funded as new product development, while legacy applications are often maintained under the IT operations budget. Unless dramatic cost savings can be found, IT ops will not pay for developers to rework a three-year-old piece of custom software.
Given the additional level of risk and cost, organizations are hesitant to re-architect legacy applications unless they are further along in their microservices and DevOps maturity. According to research by Puppet Labs, smaller, quicker code deployments characterize organizations that have embraced DevOps processes.
Without a DevOps ethos and the associated tools, the likelihood of failure is high. As Forrester’s Jeffrey Hammond and John Rymer explain in How To Capture The Benefits Of Microservice Design, “continuous integration and delivery is difficult to scale,” and simply speeding up existing release management processes is “a recipe for disaster and destabilization.”
In an earlier article, we reported that few companies use continuous integration (CI) and containers at the same time. However, companies that use containers are more likely to use CI: while only 36 percent of all organizations use CI, the figure rises to 61 percent for container users and 74 percent for companies re-architecting legacy applications. CI’s counterpart, continuous deployment (CD), is critical to making a container deployment successful. Thus, it is not surprising that companies re-architecting apps for containers are more likely than container users overall to be using CI and CD to deploy automatically (37 percent vs. 24 percent).
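To make the CI/CD-plus-containers pairing concrete, here is a minimal sketch of what such a pipeline might look like. This is a hypothetical GitLab CI configuration; the registry host, image name, script names and deployment target are all illustrative assumptions, not taken from the article:

```yaml
# Hypothetical pipeline: every commit builds a container image,
# runs the test suite inside it, and deploys only from the main branch.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA

run-tests:
  stage: test
  script:
    # Tests run inside the same image that will be deployed.
    - docker run --rm registry.example.com/myapp:$CI_COMMIT_SHA ./run-tests.sh

deploy-prod:
  stage: deploy
  script:
    # Continuous deployment: roll the new image out automatically.
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$CI_COMMIT_SHA
  only:
    - main
```

The key design point is that the image built in the first stage is the exact artifact that is tested and deployed, which is what makes automated container rollouts trustworthy.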
Given this perspective, Will Kinnard may have been overly optimistic when writing about the decision to containerize legacy applications. He claimed most “can be containerized in their current form, creating several advantages with no downsides from the previous implementation.”
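For context, the kind of as-is containerization Kinnard describes can be as thin as wrapping an existing build artifact in an image. The sketch below assumes a hypothetical legacy Java web app packaged as a WAR file; the base image, paths and port are illustrative assumptions only:

```dockerfile
# Hypothetical example: containerize a legacy Java web app "as is",
# without re-architecting. WAR path and Tomcat version are assumptions.
FROM tomcat:9-jre11

# Drop the existing build artifact into the servlet container unchanged.
COPY target/legacy-app.war /usr/local/tomcat/webapps/ROOT.war

# The port the app already listens on.
EXPOSE 8080

CMD ["catalina.sh", "run"]
```

Note that a wrapper like this changes the packaging, not the architecture: the app gains portability and a uniform deployment unit, but none of the scaling or resilience properties that motivate a microservices rewrite.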
Even if your organization has a shiny new PaaS, experience managing containers in production and a level of DevOps maturity, the time and effort of refactoring software may still not be worthwhile compared to other cloud migration approaches. Often, to avoid the costs associated with re-architecting applications, it is more cost-effective to simply lift and shift a workload from an internal data center to an IaaS. For a more detailed way to evaluate this decision, analyst David Linthicum posits that the following variables should be assessed:
- Code and data portability
- Application and data performance
- Cloud native features that support better performance
- Ability to leverage microservices
- Governance and security
- Business agility
Now, think for a second about “serverless” and Functions-as-a-Service (FaaS). Currently, almost all serverless efforts revolve around new applications and, by and large, run on top of containers. Serverless also requires significant re-architecting. Without the necessary tooling, infrastructure and organizational maturity, rewriting software for the serverless world should not be in your short-term plans. However, IT leaders think one to two years out when conducting a current-state application assessment. From that perspective, perhaps you should be thinking about containers and FaaS at the same time.
Bitnami is a sponsor of The New Stack.
Feature image via Pixabay.