App Modernization: 5 Tips When Migrating to Kubernetes
Containerization may be the key to a more automated infrastructure, but migrating your apps to Kubernetes can feel like a major step.
In fact, according to Canonical, the makers of Ubuntu, only about 15% of enterprises are fully using Kubernetes to run their applications. About the same percentage are still using a purely VM-centric infrastructure, while the rest are either planning a migration to Kubernetes or are using a mix of Kubernetes, virtual machines and bare metal.
So, what’s preventing enterprises from modernizing their applications? Enterprises consistently profess dedication to DevOps principles, yet many still find it hard to let go of their monolithic app architectures.
We get it — making a drastic change like adopting microservices, containers and the cloud is difficult.
Fortunately, we’ve got five key tips for enterprises interested in app modernization to make the change a little easier.
1. Treat Your Legacy Apps Like Pets, Not Cattle
Way back in the early 2010s, Microsoft engineer Bill Baker urged his audience of database administrators (DBAs) to rethink their approach to servers: treat them as cattle, not pets.
That means not giving servers special names, not lavishing care on them when something goes wrong, and not investing the time and energy in them that one would in a pet.
Instead, Baker suggested, when something goes wrong with a server, the DBAs should take it out back, shoot it and replace it with a nearly identical one, as one would with cattle.
The analogy has grown wildly popular in infrastructure circles, and today it’s been applied to numerous different scenarios. In the context of a cloud migration, infrastructure managers need to apply this adage in reverse: Your apps and virtual machines should indeed be treated like pets, not like cattle. Each VM has a unique identity, and each needs to be migrated carefully and thoughtfully to a Kubernetes- and cloud-based infrastructure.
For put-upon infrastructure managers who have been mandated to rapidly migrate their apps to the cloud to reduce capital expenditures, this might seem like bad news. It means they can’t rely on bulk migration services, such as Google’s Anthos Migrate; these systems afford little consideration to the individual VM.
Because these migration services take a cattle-like approach, they often create all sorts of issues that come back to haunt infrastructure managers down the road.
2. Don’t Just Lift and Shift — Evolve Your Approach for a Better App
In many ways, this follows the general approach of treating your legacy apps like pets as opposed to cattle. You could simply translate your legacy apps to a cloud-based environment. But what worked on-premises isn’t necessarily going to work in the cloud. And by making the adjustments necessary to make your app cloud-ready, you’ll ultimately be refactoring for a more streamlined user experience.
For example, an app-modernization effort should involve decomposing a monolith into microservices for greater scalability, availability and flexibility. It should be done so that as much infrastructure work as possible can be automated, enabling rapid releases and continuous improvement. And it should leverage containers rather than VMs, ensuring that your app will be agile and highly available.
3. But If You’re Not Ready to Leave Your VMs Behind, There Are Options
Ultimately, it’s going to be more effective in the long run to just adopt cloud-native practices during your app-modernization effort. However, not every organization is able to make that commitment. You can bring your VMs to the cloud if you like, but it’s important to conduct a rigorous cost-benefit analysis first.
Hosting your VMs in the cloud, in their original on-premises format and managed by the same hypervisor, can be beneficial if there’s a lot of institutional knowledge at your organization related to their use, and you don’t want to give that knowledge up to learn a new system. You’ll retain some of the benefits of a cloud migration, such as greater availability and fewer maintenance requirements, and your teams will still be able to work with the same computing environment they’re used to.
The downside? This can be exorbitantly expensive. Not only do you have to pay your cloud service provider, but you also have to pay for your VMs’ native platform. And ultimately, it’s just a stop-gap solution. Eventually, you’ll want to go fully cloud native if you want to apply modern principles to your app-deployment process.
If you make a comprehensive app-migration plan early on and get buy-in from all internal stakeholders on the timeline and process, you can skip this step and transition into a microservices- and container-based infrastructure faster and with greater cost-effectiveness.
4. Don’t Expect Your Cloud Provider to Replace Your On-Prem Storage
Back when everything was on-premises, your monolithic application likely didn’t need any special treatment in order to handle storage; it simply connected to a database and stored state there. Once you’ve decomposed your app into microservices for your migration, you’ll find that they work perfectly well within a container-based environment.
But although the services themselves are inherently stateless, the data they handle is not. Containers are highly ephemeral and are regularly being spun up or down in a Kubernetes environment, which can make it very difficult to preserve state through traditional approaches.
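The standard Kubernetes answer to this problem is to decouple storage from the pod lifecycle with a PersistentVolumeClaim, so the data outlives any individual container. A minimal sketch, with illustrative names and sizes:

```yaml
# Illustrative PersistentVolumeClaim: the claim's lifecycle is independent
# of any single pod, so a replacement pod can reattach to the same data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data          # hypothetical volume for a database service
spec:
  accessModes:
    - ReadWriteOnce             # mounted read-write by one node at a time
  resources:
    requests:
      storage: 10Gi
---
# A pod mounts the claim, so the data survives when Kubernetes
# replaces the container.
apiVersion: v1
kind: Pod
metadata:
  name: orders-db
spec:
  containers:
    - name: postgres
      image: postgres:15
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: orders-db-data
```

In practice a database like this would usually run under a StatefulSet, whose volumeClaimTemplates stamp out one such claim per replica and give each pod a stable identity.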
One option is to consume database services from your cloud provider. Unfortunately, this approach can be costly and won’t offer the same features that you may have enjoyed with your on-prem solution, such as synchronous replication, disaster recovery, encryption at rest and in transit, and so on.
5. Identify a Kubernetes-Native Storage Solution Instead
Rather than rely on your cloud provider for storage, enterprises can do more at lower cost with a cloud- and Kubernetes-native storage solution, such as Ondat. These tools work with Kubernetes’ Container Storage Interface (CSI) to provide storage for stateful containerized applications.
Because you get to define your storage requirements on your end rather than accepting whatever your cloud provider offers, you can build in those features to ensure maximum availability for your app.
Ondat works by aggregating storage across the nodes in your Kubernetes cluster into a collection of host-aggregated pools, enabling volumes to be dynamically provisioned to containers from anywhere in the cluster. What’s more, Ondat allows you to rebuild the same storage features you enjoyed on-premises, plus others you may not have implemented, such as:
- Synchronous replication
- Encryption at rest and in transit
- Disaster recovery
- Deterministic performance
- Thin provisioning
- Native integration with Kubernetes
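With a CSI-backed solution, capabilities like these are typically requested declaratively through a StorageClass and consumed via an ordinary PersistentVolumeClaim. The sketch below is a generic illustration: the provisioner string and parameter keys are placeholders, not Ondat’s actual driver name or documented parameters.

```yaml
# Illustrative StorageClass backed by a CSI driver; the provisioner name
# and parameter keys are hypothetical, not a real driver's documented API.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: replicated-encrypted
provisioner: example.csi.vendor.com   # placeholder CSI driver name
parameters:
  replicas: "2"          # assumption: driver supports synchronous replicas
  encryption: "true"     # assumption: driver supports encryption at rest
allowVolumeExpansion: true
reclaimPolicy: Delete
---
# Applications then request storage the normal Kubernetes way:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: replicated-encrypted
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
```

Because the features live in the StorageClass rather than in application code, teams can change replication or encryption policy cluster-wide without touching the workloads that consume the volumes.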
We’ve helped numerous enterprises support their stateful applications during a migration to the cloud. If you’re curious about how Ondat could fit into an app-modernization effort, read the case study of our work with French home furnishings distribution firm CAFOM Group. Or, if you’d like to talk about your application’s particular requirements, get in touch with us directly.