While most of the emphasis in the industry’s multicloud discussion is on where applications run, the enduring changes that will ultimately prove transformational will reside above the technology.
A common description of cloud is running applications on someone else’s servers. But is a cloud service about the servers or the operations?
For enterprises that think the primary driver behind cloud is cost, it might be about the servers. The cloud providers certainly operate infrastructure at higher volumes than enterprises, and they can take advantage of economies of scale to create superior buying leverage.
It’s true that the cost per unit of compute, storage, and networking is lower for a cloud provider than for a typical enterprise, but in today’s commodity world, the cost of the physical devices is not the dominant contributor to overall expense. The real cost benefits come in leveraging cloud operations, especially with multicloud. Where the vast majority of enterprises remain largely manual, cloud providers have embraced automation in a way that IT has never seen before. Leveraging a collection of well-architected systems is far less expensive than relying on manual effort.
As the rate of change in technology increases, agility has become the new economic currency. The future favors the fast, and moving to cloud and multicloud architectures removes one of the bottlenecks to change within every enterprise.
Multicloud as an Operational Condition
If the benefits of cloud are operational, enterprises should view multicloud as more than merely running applications in multiple clouds. The premise of multicloud is that the user experience should be the same regardless of where an application workload runs. If the experience differs between private and public clouds, or between AWS and Azure, then the multicloud promise has not been delivered. How the infrastructure is administered is just as important as how the underlying infrastructure is architected.
Operating each cloud as a silo also creates management complexities, preventing enterprises from managing the entire infrastructure as one pool of resources. Multicloud demands that some management layer emerge that abstracts the underlying bits and bytes from the over-the-top policy.
If the goal is to manage the whole of the infrastructure as a single entity, the operational boundaries must extend well beyond the cloud.
A user’s experience is certainly affected by what happens in the data center (be it in a private or public cloud), but it also relies on what happens in the cloud on-ramps that exist in both the campus and branch. Therefore, the operational domain must extend not only across the data center and public cloud, but also to the campus and branch gateways. Things like policy and security must be uniformly applied, regardless of where the user or application is.
This means that multicloud management is, by definition, end-to-end — which is a massive shift in traditional approaches to IT architecture. In legacy environments, IT is siloed: the campus, the data center, the branch, the public cloud, and so on. Each domain has its own teams, tools, and processes. If multicloud requires cross-domain management, there must be a complete overhaul in approach.
For example, evaluating tools for one domain would require participation from other teams to ensure monitoring and visibility across multiple domains. Workflows that span multiple teams may need to be automated to ensure consistency of experience. Technology choices typically made in isolation now need to consider interoperability across domain boundaries.
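A cross-domain workflow of this kind can be sketched as an ordered pipeline in which each step is owned by a different team but the sequence runs end-to-end under automation. The step names and context dictionary below are hypothetical placeholders; a real system would add retries, approvals, and audit logging at each boundary.

```python
# Each step is owned by a different team (branch, data center, cloud),
# but the pipeline is automated end-to-end for a consistent result.
def provision_branch_network(ctx):
    ctx["branch"] = "configured"        # e.g. campus/branch gateway settings
    return ctx

def provision_datacenter_policy(ctx):
    ctx["datacenter"] = "policy-applied"  # e.g. uniform security policy
    return ctx

def provision_cloud_workload(ctx):
    ctx["cloud"] = "deployed"           # e.g. workload placed in a public cloud
    return ctx

def run_workflow(steps, ctx=None):
    """Run every step in order, threading shared context across team boundaries."""
    ctx = ctx or {}
    for step in steps:
        ctx = step(ctx)
    return ctx

result = run_workflow([provision_branch_network,
                       provision_datacenter_policy,
                       provision_cloud_workload])
```

The shared context is what forces the teams to agree on an interface, which is exactly the cross-domain coordination the article describes.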
The most important implication of end-to-end control is that multicloud will force operations to be multivendor.
In even moderately complex enterprises, it is exceedingly unlikely that a single vendor will span the whole of the infrastructure. Multicloud adds another layer of complexity on top of this, with profound ramifications.
For multivendor to be at all functional, open standards become critical architectural building blocks. Where decisions in a pre-multicloud world only needed to consider what was within the silo, decisions in a post-multicloud, multivendor world require thoughtful coordination across boundaries. Orchestration tools will be vital given the operational slant of multicloud transformations.
The Human Element
Embracing multivendor also means changing IT’s approach to talent. In legacy environments, proficiency is measured by familiarity with specific vendor commands and configuration. As the world becomes operationally focused, the emphasis shifts from vendor-specific prowess to a foundational understanding of the requisite architectural building blocks.
From an operations perspective, it means that personnel will be more familiar with tools (especially open source) and languages. Managers may screen for Python skills over vendor CLI expertise, for example. Where certifications once played a key role in assessing talent, the multicloud era will be defined by broader understanding and experience across various cloud environments.
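To make the Python-over-CLI point concrete, consider a compliance check that would traditionally mean logging into each vendor's CLI by hand. Once device state is collected as structured data (the device names, config shape, and approved-server list below are invented for illustration), one small function audits every vendor the same way.

```python
# Approved NTP servers for the whole estate (hypothetical values).
acceptable_ntp = {"10.1.1.1", "10.1.1.2"}

def audit_ntp(device_config: dict) -> list:
    """Return any configured NTP servers outside the approved set."""
    return sorted(set(device_config.get("ntp_servers", [])) - acceptable_ntp)

# Normalized config, as an API- or NETCONF-based collector might return it.
devices = {
    "vendor_a_switch": {"ntp_servers": ["10.1.1.1", "192.0.2.9"]},
    "vendor_b_router": {"ntp_servers": ["10.1.1.2"]},
}

violations = {name: audit_ntp(cfg)
              for name, cfg in devices.items()
              if audit_ntp(cfg)}
# vendor_a_switch is flagged for the unapproved 192.0.2.9
```

The audit logic is written once and applies across vendors; the vendor-specific knowledge is pushed down into whatever collects and normalizes the data.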
While it is tempting to hire to close the gaps, companies should first train their existing teams. Experienced people already on staff help bridge the old to the new, and that continuity is likely the only way to successfully navigate an operational transformation. And, of course, given that multicloud is an operational condition, it means a much stronger emphasis on workflows. Understanding how things are done — particularly at the boundaries between people, tools, and organizations — will make individuals more capable of operating in a more advanced environment.
There is a degree of muscle memory in how infrastructure is created and managed, and breaking legacy IT design habits and operational practices of different teams is going to be difficult. Enterprises should keep this in mind and push teams to work with one another so everyone thinks holistically and acts deliberately.
Up-ending siloed approaches to enterprise IT — from operations to vendors to teams — can help make sure every decision makes the infrastructure more multicloud-ready.
Feature image via Pixabay.