Containers, Microservices and the Multi-Cloud

Jun 28th, 2016 9:14am by Bill Zajac

Bill Zajac
Bill is a Sales Engineer addicted to performance. At his current company, Teridion, Bill works with companies that have come to the realization that the internet was originally architected to be reliable, but not fast. He has a background in application and network diagnostics and has worked with extremely large, ground-breaking organizations throughout his career. His focus is finding a better, more efficient way of delivering content than legacy technologies allow.

“Build, ship, and run any app, anywhere”: it’s a very powerful tagline from Docker.

This principle made me very excited about the opportunities that containers, and specifically microservices running inside containers, offer those of us in application development and deployment.

We can now imagine a world where we are no longer tied to architecture decisions made by predecessors or adjacent teams.

Have a containerized application and want to move from AWS to Azure for a month? No problem! Want to build your search process in Python while the rest of the organization writes in Java? Have at it!

Microservices inside containers open up a brave new world, but here are a few points I have learned along the way.

The New Old Way

The idea of containers is not new; a Java Virtual Machine (JVM) is a container. What is new is the understanding that changing one microservice will not impact the other services within the distribution. There are some security concerns with containers around kernel-level issues, but as long as organizations follow a set of best practices, these issues can be minimized.

The good news is that there is a multitude of container orchestration engines available to match specific requirements. Orchestration engines make deploying and managing containers much simpler than managing each container as its own process. Further, they are maturing to the point where an organization can govern its entire container infrastructure with a single set of rules and policies.
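
To make that concrete, here is a minimal sketch, using the Docker SDK for Python, of the single-host primitives an orchestration engine automates across an entire cluster: starting containers, labeling them, and acting on them as a group. The image name, label, and replica count are hypothetical.

    # Minimal sketch using the Docker SDK for Python (pip install docker).
    # An orchestration engine automates this lifecycle work (scheduling,
    # restarts, scaling) across a whole cluster; this shows only the
    # single-host primitives it builds on.
    import docker

    client = docker.from_env()  # talks to the local Docker daemon

    # Start three replicas of a (hypothetical) microservice image.
    replicas = [
        client.containers.run(
            "example/search-service:latest",  # hypothetical image name
            detach=True,
            labels={"app": "search"},
        )
        for _ in range(3)
    ]

    # Management means acting on containers as a labeled group,
    # not babysitting each one as its own process.
    for container in client.containers.list(filters={"label": "app=search"}):
        print(container.name, container.status)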

With the transition from data center to cloud to multi-cloud, the connection between two cloud providers is just as critical as the connection between content and end users. A growing number of organizations are leveraging multiple cloud providers to answer a single user’s query. What should an organization do when the connection between the two clouds becomes congested or goes down? Fail over to another cloud? Although that could technically work, it adds additional rules to the engine and increases the cost of the deployment.
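
As an illustration of the kind of extra rule failover adds, here is a minimal sketch in Python that probes a primary cloud endpoint and falls back to a secondary one. Both URLs are hypothetical placeholders, and a real setup would probe continuously rather than per request.

    # Minimal failover sketch: prefer the primary cloud endpoint, fall
    # back to the secondary if it is congested or down. Both URLs are
    # hypothetical placeholders.
    import urllib.error
    import urllib.request

    ENDPOINTS = [
        "https://app.primary-cloud.example.com/healthz",    # e.g., AWS
        "https://app.secondary-cloud.example.com/healthz",  # e.g., Azure
    ]

    def pick_endpoint(timeout=2):
        """Return the first endpoint that answers its health check."""
        for url in ENDPOINTS:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status == 200:
                        return url
            except (urllib.error.URLError, OSError):
                continue  # congested or down; try the next cloud
        raise RuntimeError("no healthy cloud endpoint available")

    print(pick_endpoint())

Every rule like this is more configuration to maintain, and the standby cloud is capacity that must be paid for even while idle, which is exactly the added cost described above.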

Instead, organizations should focus on avoiding point-to-point issues by leveraging a solution like the one Teridion offers, which keeps internet throughput as high as possible by automating routing changes when the Border Gateway Protocol (BGP) runs into issues. This is a very good fit for cloud providers that do not have a direct connection between their clouds, and organizations looking to leverage multiple cloud vendors could benefit from the same approach.

Scale at Scale

One of the first times I played with a container orchestration engine, I did not put any limits on the number of EC2 instances it could spin up on my account. I learned my lesson after getting a $300 bill from AWS for 15 minutes of just clicking through the orchestration engine’s tutorial.

This type of balloon scaling is also a concern for organizations trying to ensure their microservices scale properly. There is a balance to strike: add only the containers necessary for maximum performance, while minimizing effort and cost. Always-on containers at dormant data centers are a rising concern; these dormant data centers are typically used either for failover or for local region support.
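
On AWS, one straightforward guard against this kind of balloon scaling is to hard-cap the Auto Scaling group backing the cluster. The sketch below uses boto3; the group name and limits are hypothetical.

    # Minimal sketch of capping scale-out on AWS with boto3
    # (pip install boto3), so an orchestration experiment cannot
    # balloon into a surprise bill. Group name and limits are
    # hypothetical.
    import boto3

    autoscaling = boto3.client("autoscaling")

    # Hard-cap the cluster's Auto Scaling group: never fewer than 1
    # instance, never more than 5, regardless of what the
    # orchestration engine asks for.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="container-cluster-asg",  # hypothetical
        MinSize=1,
        MaxSize=5,
    )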

If an organization is leveraging a cloud instance for local region support, it is often because of concerns about internet performance from other locations. There are other reasons to maintain multiple geographical footprints, such as data privacy, but those requirements cannot be met any other way than by hosting the content within the designated zone. If an organization could reduce the number of physical locations required to run its application, the cost savings could be massive. Likewise, if a single data center could scale out to handle the traffic dynamically and efficiently, the savings would be substantial.

Main Framing the Deployment

Containers can live anywhere an operating system can be deployed. That does not mean your application should be deployed across multiple data centers or clouds when a single geographical footprint will do.

The idea is very similar to how a mainframe is pitched as a very powerful and secure appliance: all of those individually programmed tasks live inside a single unit. A conglomerate of microservices somewhat resembles this deployment architecture, and with the advent of new networking capabilities, containers are becoming harder to penetrate.

The only thing missing is the steel case separating the microservices from the rest of the world. There should be a way for organizations to deploy massive numbers of containers without outsiders knowing exactly where the entry point lives. Putting a firewall between the microservices and end users is a good start, but then the firewall itself becomes a potential attack point. With limited impact on performance, separating end users from microservices is possible as well: many organizations leverage a VPN-like architecture in which end users see the endpoint as an external IP, which then forwards all traffic to the true origin.
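
The sketch below illustrates that forwarding pattern at its simplest: a relay listening on the externally visible address that pipes every byte to the true origin. The addresses are hypothetical, and a production deployment would add TLS and access control on top.

    # Minimal sketch of the forwarding pattern: end users connect to an
    # externally visible relay, which pipes all traffic to the true
    # origin. Addresses are hypothetical placeholders.
    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 8080)   # the address end users see
    ORIGIN_ADDR = ("10.0.0.5", 8080)  # hypothetical true origin

    def pipe(src, dst):
        """Copy bytes one way until either side closes."""
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def handle(client):
        origin = socket.create_connection(ORIGIN_ADDR)
        threading.Thread(target=pipe, args=(client, origin), daemon=True).start()
        threading.Thread(target=pipe, args=(origin, client), daemon=True).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(LISTEN_ADDR)
        server.listen()
        while True:
            conn, _ = server.accept()
            handle(conn)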

Final Thoughts

Containers and all of the peripheral components are some of the hottest areas in development right now, and organizations are already leveraging containers in their production environments. Performance is always a key component of any IT initiative, and containers allow for flexibility in addressing most of the inherent scalability concerns associated with high-volume applications.

Containers are maturing in terms of per-process performance and the internal networking throughput between containers, but these microservices are still at the whim of the internet’s performance for end users. Strategies for any deployment must still include procedures that account for the instability and insecurity of the internet; otherwise, the move to a microservice-based organization will result in a subpar user experience.

Docker and Teridion are sponsors of The New Stack.

Feature image: Madrid, Iowa, U.S. Photo by Tony Webster.
