Mirantis sponsored this post.
The switch from on-premises data centers to the cloud has enabled companies to offload a lot of the complexity involved in maintaining a data center, providing access to computing, storage and network as a commodity. Yet this switch has created other challenges for companies, which may be using multiple cloud providers while still maintaining or implementing on-premises solutions to host legacy applications, or for niche use cases, such as edge or high-security requirements.
The fundamental idea of how a data center provides its services, and even its very definition, is changing rapidly, as are the expectations of the developers who are building future applications.
So, what’s next? What is the data center of the future? What capabilities should it have and what scenarios should it handle? New workload demands, such as IoT, smart devices, and data security and regulations, are raising new challenges and opportunities. The following are some necessary attributes of future data centers.
Flexible

New data centers will need to be highly flexible to accommodate many different environments. Public cloud providers will be a central part of this future, but let’s first look at on-premises environments, which will continue to matter.
For certain use cases, on-premises infrastructure might prove more economically sound than its public cloud counterpart. Gartner has found that cloud services can initially be more expensive than running an on-premises data center, with a negative overall ROI. This assumes, of course, that the savings are not outweighed by added complexity and risk.
The on-premises use cases that make the most sense include:
- Network and storage-intensive big data workloads: The network and storage costs can become astonishingly high on public cloud, making on-premises infrastructures more economical.
- High-security environments: Governments and financial and health institutions are a few examples of industries where security requirements dominate. While cloud providers may be able to accommodate many of these requirements, meeting certain government or certification compliance mandates on public cloud might be impossible.
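To make the economics of the first use case concrete, the sketch below compares monthly data-transfer costs for an egress-heavy workload. All of the figures are illustrative assumptions, not quotes from any provider: a $0.09/GB cloud egress rate and a flat-rate leased line are simply plausible stand-ins.

```python
# Illustrative cost comparison for an egress-heavy big data workload.
# Every dollar figure here is a hypothetical assumption.

def monthly_egress_cost(tb_per_month: float, usd_per_gb: float) -> float:
    """Cost of moving `tb_per_month` terabytes out of a cloud region,
    billed per gigabyte (1 TB treated as 1,000 GB)."""
    return tb_per_month * 1000 * usd_per_gb

# Assume 100 TB of egress per month at a hypothetical $0.09/GB rate.
cloud_egress = monthly_egress_cost(100, 0.09)
print(f"Cloud egress: ${cloud_egress:,.0f}/month")  # $9,000/month

# On premises, the same traffic rides a flat-rate link (assumed price).
flat_link = 2500.0  # USD/month for a leased line, hypothetical
print(f"Flat-rate link: ${flat_link:,.0f}/month")
```

At these assumed rates, per-gigabyte billing grows linearly with traffic while the leased line does not, which is why sustained, high-volume transfer is the classic case where on-premises wins.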
Flexibility should also apply to the use of public cloud providers. For example, companies may want to use multiple providers at the same time, or switch from one to another for reasons such as cost, platform stability, feature set or, as recent events have highlighted, geopolitics.
Distributed

Another attribute of advanced data centers should be their ability to be distributed. There are three main reasons to build distributed infrastructure.
- The first is to reduce outages and data loss by avoiding single points of failure. The list of things that can go wrong in a data center is fairly long: power outages, fires, hardware failures, human error and network disruption are a few of many potential incidents. By distributing infrastructure across multiple geographically distanced data centers, companies reduce these risks.
- The second is proximity. Hyperconnectivity enabled by 5G is pushing the boundaries of where computing happens. Because data is increasingly consumed and produced at the edge, future data centers will serve more and more devices there: data closets in retail stores and on factory floors, street furniture in smart cities, parking sensors, video surveillance and self-driving cars. Proximity offers advantages such as lower latency and reduced data-transport costs. Some even argue that proximity is the very reason the cloud computing era is reaching its limits.
- The last reason is that businesses are increasingly required to control data location to ensure regulatory compliance, data sovereignty and data protection. Conforming to jurisdictional and customer requirements under the General Data Protection Regulation and the 120 other data protection laws worldwide will require data centers to operate seamlessly with hosting infrastructures all over the world. Considering today’s hyperconcentration of major cloud providers in a handful of countries, this attribute will be critical.
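The fault-tolerance argument above can be made concrete with a little probability arithmetic. Assuming site failures are independent (a simplifying assumption; correlated outages do happen), the chance that every replica site is down at the same moment shrinks exponentially with the number of sites:

```python
# Probability that ALL sites are unavailable simultaneously,
# assuming independent failures -- a simplifying assumption.

def all_down_probability(per_site_downtime: float, sites: int) -> float:
    """With each site down a `per_site_downtime` fraction of the time,
    independence gives per_site_downtime ** sites for a total outage."""
    return per_site_downtime ** sites

# One site that is down 1% of the time (99% availability):
print(all_down_probability(0.01, 1))  # 0.01
# Two independent sites: both down only ~0.01% of the time,
# i.e. roughly 99.99% overall availability.
print(all_down_probability(0.01, 2))
# Three sites push availability to roughly "six nines".
print(all_down_probability(0.01, 3))
```

Real deployments see correlated failures (shared providers, shared software bugs, regional disasters), so geographic and vendor diversity is what keeps the independence assumption even approximately true.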
Vendor Agnostic

The data center of the future will have to be vendor agnostic. No matter the hardware or underlying virtual machine or container technology, operating and administration capabilities should be seamless. This flexibility enables companies to streamline their deployment and maintenance processes and prevents vendor lock-in.
And because no cloud provider is present everywhere in the world, the ideal data center should be able to run in any environment to meet the distribution requirements discussed above. For that reason, new data centers will largely be built from open source components, which offer the necessary level of interoperability.
Great User Experience
Distribution and flexibility should not come at the expense of ease of use. Data centers must allow for seamless cloud native capabilities, such as the ability to scale computing and storage resources on demand, as well as API access for integrations. While this is the norm for containers and virtual machines on servers, the same capabilities should apply across environments, even for remote devices such as IoT and edge servers.
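As a sketch of what “scale computing resources on demand” means in practice, the function below applies the proportional scaling rule documented for Kubernetes’ Horizontal Pod Autoscaler, desired = ceil(current × currentMetric / targetMetric). The CPU figures in the usage lines are hypothetical, and this is a minimal illustration, not a real autoscaler.

```python
import math

def desired_replicas(current: int, current_metric: float,
                     target_metric: float) -> int:
    """Proportional scaling rule (as documented for the Kubernetes HPA):
    desired = ceil(current * current_metric / target_metric).
    Clamped to a minimum of one replica."""
    return max(1, math.ceil(current * current_metric / target_metric))

# Hypothetical: 4 replicas running at 90% CPU against a 60% target.
print(desired_replicas(4, 0.90, 0.60))  # 6 -> scale out
# Load drops to 20% CPU: the same rule scales back in.
print(desired_replicas(4, 0.20, 0.60))  # 2
```

The same decision loop applies whether the replicas are containers in a central cluster or workloads on a remote edge server; what changes across environments is only the API used to act on the decision.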
The Challenge for Software Service Providers
The data center of the future has many similarities with today’s multicloud or hybrid cloud. Still, while two-thirds of CIOs want to use multiple vendors, only 29% of them actually do, and 95% of their cloud budget goes to just one cloud service provider. In other words, there is a strong need that has yet to be met.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Mirantis.
Featured image via Pixabay.