Today, everyone is talking about the cloud. As enterprises look at AWS, GCE, Azure, and other cloud providers, they see many potential gains over running their own data centers: autoscaling, usage-based billing, quickly provisioned data services (Postgres, MySQL, etc.), and a host of other benefits. Getting there, however, involves quite a few challenges. In this article, I am going to walk through the common pains associated with migrating to the cloud and clearly articulate what enterprises need in order to address them.
Currently, there are a number of tools that organizations use to start their foray into the cloud. These include machine configuration tools (Salt, Ansible, Chef, Puppet, etc.), custom libraries written by the cloud vendors, infrastructure tools (Terraform, Fog, Vagrant, etc.), as well as software to handle health monitoring, logging, policy, and security, to name a few.
While these tools do work, most are maturing slowly (many haven't even reached version 1.0), and updates tend to cause unexpected issues. Developers are building on tools whose underlying APIs are under heavy development, which makes updates risky; worse, those updates are often required just to keep existing functionality working. The result is that developers depend on tools that break often, need constant updating, and demand expertise in whichever cloud platform they are deploying to.
Let’s take a step back and use AWS as an example. For an organization to move from on-premises hardware/virtualization to AWS, it needs a way to provision VMs. It could use Terraform, Fog, CloudFormation, or simply create its infrastructure through the AWS web console. Next, it needs to configure those VMs with a configuration management tool. This requires developers who are experts in writing these recipes and who know the exact details of the applications being deployed, including the installation requirements of all of their dependencies.
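To make the provisioning step concrete, here is a minimal Terraform sketch that creates a single EC2 instance. The AMI ID, region, and tag are placeholders chosen for illustration, not values from any real environment:

```hcl
# Minimal Terraform sketch: provision one EC2 instance on AWS.
# The AMI ID and region below are placeholders, not real values.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "app-server"
  }
}
```

Even this small example hints at the problem described above: everything after `terraform apply` (OS packages, application dependencies, credentials) still has to be layered on by a separate configuration tool.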
This process leads to servers being provisioned as special snowflakes: each machine is slightly different depending on the app being deployed. Auditing and maintaining these servers spirals out of control as you deploy more applications and services. On top of all that, access controls are not connected to corporate auth systems, which means custom credentials, keys, and secrets must be set up on AWS directly; there goes your unified user and role management!
For organizations running thousands of servers, the status quo is untenable. Even a basic operational request like, “What version of Java is being used by deployed applications?” becomes a laborious task that could take weeks, if not months. Bugs are also harder to find and fix, especially when using multiple cloud vendors. What if an app behaves fine on one cloud but not on another? Which library or configuration is at fault? How do I quickly identify the root cause of the bug? On top of that, how do I verify the security of my servers when firewall configurations and other important security features are managed on a cloud-by-cloud basis? Not only that, but staffing to support multiple cloud environments can be difficult and expensive.
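The Java-version question illustrates why a single audit surface matters. The sketch below is hypothetical: it assumes each server already reports its installed software to one central inventory (the data here is made up for illustration). Once that exists, the weeks-long audit collapses into a one-line query:

```python
# Hypothetical example: once every server reports its installed software
# to a central inventory, "which hosts run Java 8?" becomes a simple
# query instead of a weeks-long audit. The inventory data is made up.
inventory = {
    "web-01":   {"java": "8u181", "nginx": "1.14.0"},
    "web-02":   {"java": "11.0.2", "nginx": "1.14.0"},
    "batch-01": {"java": "8u181"},
}

def hosts_running(package: str, version_prefix: str) -> list:
    """Return hosts whose installed `package` version starts with `version_prefix`."""
    return sorted(
        host for host, packages in inventory.items()
        if packages.get(package, "").startswith(version_prefix)
    )

# Which hosts still run Java 8 and need a security upgrade?
print(hosts_running("java", "8"))  # ['batch-01', 'web-01']
```

The hard part, of course, is not the query but getting every snowflake server to report into that inventory consistently, which is exactly what a unified platform is meant to provide.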
Now, I am sure that some of you reading this are screaming, “This is why we have containers!” and obviously, as an engineer at Apcera, I am a huge fan of containers. However, containers are not a silver bullet. I still need to configure the host machines/VMs, I still have segregated policy per cloud provider, and I am still using tools that have been on the market for a very short time, dealing with constant dependency upgrades as container runtimes and deployment systems improve. While containers help with application-level portability, the IT and operations workflow leaves a lot to be desired.
The truth is, what enterprises need more than ever is a streamlined workflow and, more importantly, consistency in their deployments. This space is starting to grow very quickly. Organizations are looking at Apcera, Kubernetes, Docker, Mesosphere, CoreOS, and others to help them in this regard. Instead of detailing the pros and cons of these products, let’s just look at the key features a company actually needs:
- A single deployment system that can deploy and move workloads to any cloud or on-premises data center.
- Audit trails that cover every action in the system, on any cloud, or on-premises environment.
- A single authentication and authorization system to control who can deploy workloads, and what can be deployed.
- Support for Linux containers, including Docker, so teams can move toward a microservices architecture.
- A consistent way to audit software versions across infrastructure to quickly find workloads that require security fixes or upgrades.
- Simplified staging of applications: frameworks should be auto-detected, built, tested, and deployed easily.
- Easy access to native cloud APIs for services not running on the platform, so you can still harness the full feature set of each cloud.
- Service discovery for jobs, so they can easily locate required databases and data stores.
- Policy controls that apply on and across infrastructure and cloud providers, declaring how workloads may interact with one another. For instance, you should be able to guarantee that a development workload can never connect to your production database.
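To make the last point concrete: Apcera expresses such rules in its own policy language, but as one illustration of the same idea, a Kubernetes NetworkPolicy can encode “only production workloads may reach the production database.” All names and labels below are hypothetical:

```yaml
# Illustrative only: a Kubernetes NetworkPolicy stating that only pods
# labeled env=production may connect to the production Postgres pods.
# All names, namespaces, and labels are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: prod-db-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              env: production
      ports:
        - protocol: TCP
          port: 5432
```

A developer workload without the `env: production` label simply cannot open a connection to port 5432 on those pods; the guarantee lives in declarative policy rather than in per-cloud firewall rules.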
Today more than ever, enterprises need a platform that just works and solves most of the fundamental issues in their infrastructure through a single control plane. This leads to environments that require less maintenance and are optimized for speedy delivery and operational efficiency. Why cobble together a bunch of tools to get what you need when someone has done the heavy lifting for you? Focus more on the revenue-generating part of your business and the applications you are deploying!
Visit Apcera.com to learn more about how our technology removes the headache of orchestration and configuration management and delivers the key functions listed above. You can even get started today with the Community Edition.
Apcera, CoreOS and Docker are sponsors of The New Stack.
Feature image via Pixabay.
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: MADE, Docker.