There’s a lot to do when deploying microservices into an enterprise software environment, and right from the start there are decisions to make. Those decisions trickle down from the full data center considerations to the operating systems, to the container management and orchestration layer, and finally reach into the application itself.
Hidden within each of these strata are nooks and crannies where singular choices can make lasting impacts on performance, application velocity, and the actual business value generated. For these reasons, it’s worth taking the time to make all of these decisions properly, and from a position where your teams are well informed of the constraints and possibilities.
Starting right at the top, the actual location of these deployments is, perhaps, the largest influencer of the other decisions in the stack. Deploying services into Amazon Web Services, Microsoft Azure, or Google Cloud each comes with its own deployment choices. Amazon, for example, offers its own tutorial on this topic, relying on its Application Load Balancer service to bear the weight of the deployment. NGINX also has advice on deployment strategies. This is a common pattern, repeated in private clouds as well. The load balancer model allows traffic to be shifted to individual groups of servers, so those groups can be updated on their own, in order. As one group is updated, it takes the place of another that has not yet been, and in this fashion a rollout is performed.
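The traffic-shifting the load balancer model relies on can be sketched with a weighted upstream, as in this hypothetical NGINX configuration (the server addresses and upstream name are assumptions for illustration); adjusting the weights and reloading moves traffic from one group to the other:

```nginx
# Hypothetical upstream with two server groups. Traffic is weighted
# toward the stable group; shifting the weights moves traffic to the
# newly updated group, performing the rollout one group at a time.
upstream app_backend {
    server 10.0.1.10:8080 weight=9;  # group A: current version
    server 10.0.2.10:8080 weight=1;  # group B: newly updated version
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}
```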
The key for this model of rollout deployment is that it is also compatible with the decomposition needed to make the transition from a full-fledged monolithic application into smaller, subdivided microservices. By provisioning these microservices into groups, they can be brought online as the monolith is removed from the equation, ensuring a smooth transition and preventing a gap in the data that flows through enterprise services. This is also the path to zero-downtime upgrades and updates.
Deconstructing a monolithic application also comes with benefits for the business as a whole. Aside from increasing agility, as individual microservices can be modified without redeploying an entire monolith, this pattern also allows businesses to break out their most essential and difficult application aspects and replace them with best-in-class products.
Many monolithic applications already include functions like payment processing, VoIP, or user content storage, and allowing a team to break these aspects out and turn them into a line item on the budget, thanks to services such as Stripe, Twilio, or Filestack, means more engineering resources can be focused on the actual differentiators for the business application.
Sameer Kamat, CEO of Filestack, said that one of the most important factors of running microservices is maintaining uptime. His company offers APIs to intake and manage user-generated content. “Uptime is a big factor for us. That’s where reliability, redundancy, and load balancing is a big factor for us. We use autoscaling because some of our biggest clients have seasonality in their business. We have to scale up and map to that seasonality. In December, there is a big spike. You cannot rely on hope; your infrastructure has to be designed in a way that handles scalability in an automated fashion.”
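The automated scaling Kamat describes usually boils down to a simple feedback rule. This is a minimal sketch of a target-tracking style decision (the arithmetic mirrors the approach used by common cloud autoscalers; the function name and parameters are hypothetical): the fleet is resized so the per-instance metric returns to its target, clamped to the group's bounds.

```python
import math

def desired_capacity(current_instances: int, current_metric: float,
                     target_metric: float, min_size: int, max_size: int) -> int:
    """Target-tracking scaling sketch: resize the fleet so the
    per-instance metric (e.g. CPU %, requests/sec) returns to target."""
    raw = current_instances * (current_metric / target_metric)
    return max(min_size, min(max_size, math.ceil(raw)))

# A December spike that doubles per-instance load doubles capacity:
print(desired_capacity(10, 80.0, 40.0, 2, 40))  # -> 20
```

The same rule scales the fleet back down when the seasonal spike passes, which is why the bounds matter: `min_size` keeps redundancy in place even at the quietest times.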
As with any API-driven application, microservices are an enabler of the agility afforded by a slimmed down application. With fewer moving parts, and dependencies tied to API interfaces instead of OS libraries, microservices can, in theory, be written in any language the developer wishes, and use any environment that’s desirable. As these factors will be restricted only to the microservices container, this pattern allows for greater flexibility within the development team.
That’s not to say that there are no restrictions on development once the transition to an API-based application is made. Once the API is rolled out, in fact, it cannot change; it can only grow. If original functionality changes, applications in the wild written against version 1.0 will stop working, resulting in SLA violations.
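The "grow, don't change" rule can be made concrete with a small sketch (the handler names and fields are hypothetical): a later version of an endpoint may add fields, but nothing a 1.0 client depends on is renamed or removed, so old integrations keep working.

```python
def get_user_v1(user_id: int) -> dict:
    # Original contract: existing clients depend on exactly these fields.
    return {"id": user_id, "name": "Ada"}

def get_user_v2(user_id: int) -> dict:
    # Additive change only: a new field appears, but nothing is renamed
    # or removed, so a v1 client reading "id" and "name" still works.
    response = get_user_v1(user_id)
    response["email"] = "ada@example.com"  # new optional field
    return response
```

A breaking change, by contrast, would be renaming `name` to `full_name` or changing its type; that is the kind of edit that turns into an SLA violation downstream.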
“With an API comes a lot of responsibility, making sure it is compatible, making sure it’s super simple. Our whole promise to developers is that we will save you time, and provide access across languages,” Kamat said. “Architecturally, we have to make sure we’re very aware of any breaking changes to the API. That includes building out microservices for various elements and making sure they can be kept up to date. It has to be a nimble architecture.”
Thus, deploying microservices requires a good deal of infrastructure to be in place just for the rollout of those new services. Load balancers, monitoring systems, orchestration and administration systems, and security products all must be ready to go before even rolling out service one.
Living with Linus’ Lilliputians
One of those early decisions to be put in place before rolling out is your choice of operating system. In days past, the choice was generally between Red Hat Enterprise Linux and Windows, but with the transition to containers over the past three years, that choice has become less clear. Linux remains the king of containers, despite Windows support, but the actual Linux distribution you choose can have wide-ranging effects on the performance and maintainability of your microservices.
“Done properly, a microservice can eventually become provably correct, and beyond further optimization.”
CentOS and Red Hat Enterprise Linux remain viable choices but are also fairly large distributions. Red Hat’s answer here is Project Atomic, a tiny Linux designed to do little more than host containers. Alpine Linux currently holds the prize of being the smallest popular distribution, but it can have some sharp edges for the inexperienced user. VMware has Photon OS, and Rancher has RancherOS. Once an OS is chosen, further customization can be had, thanks to the lack of other dependencies within each container OS. Just as microservices can be written in any old programming language, they can also be hosted in just about any environment the team can support. This also allows for the quick testing of new technologies, such as Nix: each microservice can be an island unto itself, with green fields, or brown overgrowth.
And this is the true promise of microservices deployments at scale: with a well-oiled container construction, testing, and orchestration pipeline, the internal minutiae of each application becomes confined. The team building that application will maintain that expert knowledge of its internals and will share that knowledge when needed, but in the end, the goal is to push each service to solidification. Even ossification.
Just as developers once spent months building enterprise services in assembly language to ensure the fastest possible execution on mainframes, microservice teams can leverage deployment pipelines to facilitate rapid refinement, iteration, and feedback. This allows the development team to become utterly obsessed with the minutiae, rather than constantly fretting over external variables. Done properly, a microservice can eventually become provably correct, and beyond further optimization.
Even without perfect internals, the microservices model is built to allow for solidification of the APIs themselves. As with any API, versioning is essential, and new features can be added, but old ones should rarely be subtracted. When a microservice is deployed for the first time, it immediately becomes a dependency somewhere else. This is another reason uptime remains the most important focus for deployments.
Orchestrating deployments is where enterprises can show their core IT competencies. Administrators and operators should already be chomping at the bit to try out the hottest new tools, like Kubernetes, Terraform, Rancher, and Spi.ne.
Choosing orchestration platforms, however, is more complicated than simply picking Kubernetes and installing it. While this wildly popular open source project has gained many adherents in the past year, it still remains a complex piece of infrastructure software, designed by geniuses for geniuses.
The entire cloud-based microservices architecture requires some basic relearning, as well, said Dave McJannet, CEO of HashiCorp. “One thing cloud has done is inspire people with an operating model which is different from the model of the past. It’s characterized by infrastructure on demand and zero-trust networks, which means thinking about security differently, and thinking about networking differently: from physical host networking to service networking,” said McJannet.
While the new model of deployment parallels the old application server model, the infrastructure plays a far more important role than in past systems. “There’s a parallel for sure. That’s why when we draw the pictures, we have three elements of the stack: The core infrastructure, the security layer, and then runtime on top. That picture hasn’t changed for 30 years. The thing is, now at the runtime layer, instead of an application server, perhaps you’re using a container orchestration platform, but you still have the other parts of the puzzle,” said McJannet.
Today, instead of deploying the application server, he said, teams are deploying the entire service with rolling updates, and automated scaling support. That’s a different go-to-production model than most IT shops are used to, and it requires all of the infrastructure for microservices to be in place before anything can be deployed at all.
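The rolling-update, autoscaled go-to-production model McJannet describes is what container orchestration platforms encode directly. As a hypothetical sketch, a Kubernetes Deployment (service name, labels, and image are assumptions for illustration) declares the rollout behavior rather than scripting it:

```yaml
# Hypothetical Deployment: pods are replaced a few at a time, so the
# service stays available throughout the rollout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-service
spec:
  replicas: 4
  selector:
    matchLabels:
      app: payments
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod above the desired count
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
      - name: payments
        image: registry.example.com/payments:2.0.1
```

Pushing a new image tag and reapplying the manifest triggers the rolling update; the platform, not the IT shop's runbook, enforces that the old group stays up until the new one is healthy.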
To this end, McJannet’s company offers Terraform, an environment provisioning tool designed to stand up multiple services at the same time, and to interconnect them. McJannet said he sees many customers using Terraform to provision Kubernetes, as the Kubernetes world expands to include new services like Istio, and GiantSwarm.
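A Terraform sketch of that pattern might look like the following. This is a hedged illustration, not a complete configuration: it assumes AWS EKS as the managed Kubernetes service, and the referenced IAM role and subnets are defined elsewhere in the same environment.

```hcl
# Hypothetical Terraform resource: provision a managed Kubernetes
# cluster as one interconnected piece of a larger environment.
resource "aws_eks_cluster" "microservices" {
  name     = "microservices-cluster"
  role_arn = aws_iam_role.cluster.arn  # role assumed to be defined elsewhere

  vpc_config {
    subnet_ids = aws_subnet.private[*].id  # subnets assumed to be defined elsewhere
  }
}
```

The point is the interconnection: the cluster, its network, and its access roles are stood up together from one declarative description, rather than assembled by hand in sequence.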
This is the type of meta-thinking required to undertake proper microservices deployments: As Carl Sagan said, “If you wish to make an apple pie from scratch, you must first invent the universe.”
Feature image via Pixabay.