Cloud Native Migration Traps to Avoid
NS1 sponsored this post.
A DevOps team that gets the green light to shift its CI/CD processes and operations to a cloud native environment soon realizes that the journey can be fraught with peril. Despite the cloud native "pot of gold" of increased agility, faster software release cadences and more stable deployments, things can quickly go sour.
During NS1’s INS1GHTS2020 virtual summit, Jonathan Sullivan, NS1 chief technology officer and co-founder, said during his keynote that organizations seeking to modernize their infrastructure by migrating to the cloud and taking advantage of the opportunities microservices and container environments offer “are creating a foundation for the future of their company.”
Businesses are going to be more resilient and prepared for the inevitable continued evolution of technology as “software continues to take everything,” he said.
However, migrations to cloud environments built on containers and microservices are never easy, of course. A digital transformation used to involve "migrating to the cloud" and determining "what that meant for your business," Sullivan said. "What we're finding today is that everything is a lot more complicated than that."
In this post, drawing from the talks and keynotes from NS1’s virtual summit and other sources, we look at some of the pitfalls organizations can avoid as they make the shift to cloud native environments.
Don’t Go Into that Cloud Alone
The idea, of course, is to support DevOps' key mission of securely developing and deploying software while spending less time and fewer resources on operations-related tasks, by relying on cloud providers such as Google Cloud Platform, Amazon Web Services (AWS) or Microsoft Azure. However, selecting the tools to bridge the gap when shifting from on-premises to cloud environments represents a major challenge for many organizations.
“The process is never going to get simplified down to, ‘we’re going to be able to migrate everything away on-premises and read the playbook and put it into one cloud,’” Sullivan said at the INS1GHTS2020 Fireside Chat session.
DevOps must, for example, find the right modern application delivery stack services "in order to take advantage and leverage the investments you've made," Sullivan said. "Without that, you can have this fantastic hybrid cloud strategy and if you have no way of intelligently orchestrating traffic across that, you're just not going to see the ROI."
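The traffic-orchestration idea Sullivan describes can be sketched in miniature: steer requests across hybrid cloud endpoints according to configurable weights, so traffic can be shifted gradually from a legacy data center to new cloud environments. This is a minimal illustration, not NS1's product or API; the endpoint names and weights are hypothetical.

```python
import random

# Hypothetical endpoints for a hybrid cloud deployment. The weights reflect
# how much traffic each environment should receive, e.g. ramped gradually
# during a migration. All names and numbers here are illustrative.
ENDPOINTS = {
    "on-prem.example.com": 70,  # legacy data center still takes most traffic
    "aws.example.com": 20,      # new cloud environment being ramped up
    "gcp.example.com": 10,      # secondary cloud for resilience
}

def pick_endpoint(endpoints: dict) -> str:
    """Weighted random selection: the core of simple traffic steering."""
    targets, weights = zip(*endpoints.items())
    return random.choices(targets, weights=weights, k=1)[0]

# Over many requests, traffic splits roughly 70/20/10 across environments.
counts = {name: 0 for name in ENDPOINTS}
for _ in range(10_000):
    counts[pick_endpoint(ENDPOINTS)] += 1
print(counts)
```

Adjusting the weights over time (say, 70/20/10 to 10/70/20) is one simple way to move load to the cloud without a hard cutover; real traffic-steering services add health checks, geography and latency on top of this.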
When migrating to a containerized Kubernetes environment, it is also necessary to invest in frameworks such as VMware's Tanzu, which allow DevOps teams "to put your stuff anywhere and just figure out how to make use of this complex infrastructure and complex substrates," Sullivan said.
Ultimately, the shift thus requires “intelligent orchestration, good integration along the whole DevOps stream,” strong monitoring and management of the platform and automated updates, patching and error remediation, Clive Longbottom, an analyst for Clive Longbottom and Associates, said.
Don’t Copy and Paste Legacy Systems
The data center cannot simply be duplicated in a cloud environment. While the main goals remain largely the same, the underlying operational frameworks will be different. The IT staff may have to worry less about maintaining on-premises servers and data center infrastructure, but the new cloud environment is not a mirror of the data center. In short, applications need to be "cloud-designed," Longbottom said.
“The era of the large monolithic application trying to do everything is over,” Longbottom said. “Those vendors trying to migrate their monolithic app onto a cloud aren’t doing cloud computing — they are essentially moving to a hosted application model without the benefits of elastic resources and granular levels of functional usage.”
For those organizations that want to take “home-grown applications and move them to the cloud, it is far more likely that all they are doing is moving them to a virtualized server,” Longbottom said.
“Design and develop for the cloud, making sure that updates, patches and full upgrades can be done without taking the whole service down and that such actions can be taken with small payloads and low impact on the overall platform,” Longbottom said.
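The rolling-update pattern Longbottom alludes to, replacing instances in small batches so the service never goes fully down, can be sketched as follows. The instance names, version tags and batch logic are hypothetical placeholders for what an orchestrator such as Kubernetes does for real.

```python
def rolling_update(instances, new_version, batch_size=1):
    """Replace instances in small batches, yielding the fleet after each step.

    At every intermediate step, the not-yet-updated instances keep serving
    traffic, so the overall service stays available throughout the upgrade.
    """
    fleet = list(instances)
    for start in range(0, len(fleet), batch_size):
        for i in range(start, min(start + batch_size, len(fleet))):
            name = fleet[i].split("@")[0]
            fleet[i] = f"{name}@{new_version}"  # small payload: one instance
        yield list(fleet)

# Four instances on v1; upgrade one at a time to v2.
fleet = ["api-1@v1", "api-2@v1", "api-3@v1", "api-4@v1"]
states = list(rolling_update(fleet, "v2", batch_size=1))
for state in states:
    print(state)
```

After the first step, three of the four instances are still on v1 and serving traffic; only after the final step is the whole fleet on v2. A monolithic "big bang" deployment, by contrast, has no such intermediate states.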
Getting The Provisioning Right
The cloud does not have infinite capacity. While it is easy to overlook or forget, cloud servers and computing in general contribute to CO2 emissions, and the cloud's capacity is, in fact, finite. Cloud servers are also, at the end of the day, the same as the servers running in a private data center. This means capacity management must be thought through as well when shifting to cloud native environments.
Using cloud provisioning to increase capacity has become easier "because we can sort of just go out and buy more compute, [but] it also makes things easier for us to ignore," Heidi Waterhouse, senior developer advocate at LaunchDarkly, said during her INS1GHTS2020 talk, "Breaking Strain: A Story About Capacities and Testing."
“We don’t get as many early warning signs that we’re running out of compute, and if everybody needs to buy compute at the same time, there’s only so much capacity. When I talk to my friends who do network provisioning for the backbones, it’s like, ‘We are still using computers that still have to come from wherever they’re manufactured, and we still need the cables and the fiber and the switches to run the cloud,’” Waterhouse said. “The cloud is just somebody else’s server in their server room. So, when we’re thinking about increasing capacity, we need to have a reasonable expectation that we can either build it or buy it.”
Elastic cloud provisioning can solve some, but not all, future capacity problems, Waterhouse said. “You also need to be able to respond nimbly when you’re building a robust system and it really matters that you can corner, because if you have a large system that can only go forward if there’s something in the path, you’re going to have a lot of trouble.”
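One concrete way to recover the early-warning signs Waterhouse says the cloud takes away is to watch utilization against a known ceiling and alert well before it is reached, since new capacity (cloud quota or physical hardware) takes lead time to provision. This is a toy sketch; the usage numbers, the 1,000-core ceiling and the 80% threshold are all illustrative assumptions.

```python
def capacity_alerts(usage, capacity, warn_at=0.8):
    """Return the indices (e.g. days) where usage crossed the warning line.

    Alerting at a fraction of the ceiling, rather than at the ceiling itself,
    leaves lead time to build or buy more capacity before it runs out.
    """
    threshold = capacity * warn_at
    return [i for i, used in enumerate(usage) if used >= threshold]

# Hypothetical daily core usage for a steadily growing service.
daily_cores_used = [500, 560, 610, 700, 790, 845, 902]
alerts = capacity_alerts(daily_cores_used, capacity=1000)
print(alerts)  # days at or above 80% of the 1,000-core ceiling
```

Here the check fires on days 5 and 6, while roughly 100-200 cores of headroom still remain, which is exactly the "reasonable expectation that we can either build it or buy it" window the talk describes.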
Amazon Web Services and VMware are sponsors of The New Stack.
Feature image via Pixabay.