The move to microservices typically requires a great deal of automation, and automation implies operations. In today's cloudy, agile world, the science of operations is converging on good DevOps practices. Naturally, automation requires code, and operators today have some of the most powerful tools and systems ever created for boiling down the complexity of such environments into more easily programmable, bite-sized chunks.
To get there, however, the real grind of microservices is the necessity of having intricately woven systems in place, balancing traffic, monitoring gateways, and distributing a security model across the entire application stack before bit one can even be deployed. It’s a bit like a chef requiring a properly cleaned, tooled, and prepped kitchen in order to prepare a dinner service. This even extends to the waiters and ticketing systems one needs to pass those well-poached plates of veal to the starving customers.
The skill in DevOps is not being a great chef, but a great manager: managing the waiters, the hot window, the prep chefs, and the money, all from a vantage point above the floor, with full visibility of the entire chain of processes, products, and people.
In the microservices world, this means it’s generally DevOps’ duty to set up all of the infrastructure required to build out at-scale environments. That means web application servers, registries and repositories, OS and container images, virtualized networking, firewalls, load balancers, message queues, and reverse proxies. It’s also up to the DevOps team to support new technologies demanded by the development teams: HTTP/2, gRPC, and reliable SSL/TLS.
Sid Sijbrandij, CEO of GitLab, said that, “With the explosion of microservices, you’re getting more and more projects. Companies are finding that not only do they have to automate the DevOps lifecycle, but it has to be automated out of the box. For every project, you have to set up your creation tools, packaging, and management. If you have to do that every time, that’s going to take a lot of time. The increased complexity of microservices makes it necessary to have closer collaboration between development and operations. It’s much more important to have them both on the same page.”
It’s the Data, Stupid
One area that can become tricky is data. From making databases run reliably at scale in Kubernetes, to the sudden proliferation of outside data stores developers can adopt when freed by microservices architecture, data is a big problem for cloud-based infrastructure: so much so that in the earliest days of the cloud, the blanket advice was to “expel state from your application.”
Today, we know that stateful and stateless applications can both happily coexist in the cloud, but the actual day-to-day work of managing that data isn’t always easy. Georgi Matev, head of product at Kasten, said that, “What we are seeing is that data is following the same pattern as we’ve seen on the compute side. As things break into smaller and more logically sized components, the same makes sense on the data side.”
While that sounds good, it’s a different thing entirely to move data with agility than it is to move code. That’s no excuse for not automating the data layer, says Datical Chief Technology Officer Robert Reeves. “Everyone is getting a lot faster with deploying the application; the compiled bits. But then, they’re still relying on manual database changes to support that application. That causes a velocity gap. You’ve got one part of the house leaving for home on time, while the other side of the house — the database folks — are on suicide watch,” said Reeves.
“We need to get the human out of this,” said Reeves. “We need to remove intervention. Back in the day, systems administrators thought building a server was like building an artisanal coffee table. ‘Look at my wiring! See how pretty it is in the data center?’ Today, who cares, man?”
“The first thing you need to get over is this idea that we need a human to manually review and execute our SQL scripts,” said Reeves. “Our first enterprise customer was a very large bank. When our executive sponsor started talking to the DBA and said, ‘Hey what do you think about automating the database,’ he said, ‘That’s just not the way we do stuff here.'”
“We’re big fans of microservices, but you need to put the power of updating these services and the databases that support the microservices in the hands of the product team,” said Reeves. “We have customers that use Pivotal, and they can update in seconds but have to hold off until the external services team can run a script on the database. They’re waiting 10 to 12 days. What’s the point?”
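Reeves’ argument can be made concrete. The sketch below is not Datical’s product, just a minimal illustration of the pattern he describes: versioned migration scripts applied automatically by the pipeline and recorded in a tracking table, so no DBA has to review and run SQL by hand. The migration names and `schema_version` table are illustrative assumptions, and SQLite stands in for the real database.

```python
import sqlite3

# Ordered, versioned migrations. In practice these would live in files
# under version control, right alongside the application code.
MIGRATIONS = [
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email", "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply any migrations not yet recorded in schema_version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
    for version, sql in MIGRATIONS:
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # applies both migrations
migrate(conn)  # idempotent: a second run applies nothing new
```

Because the tracking table makes the step idempotent, the pipeline can run it on every deploy, which is exactly what closes the “velocity gap” between application and database changes.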
Adding to this complexity is the introduction of Kubernetes as the new standard for container orchestration. As Kubernetes adoption has grown, some of the questions around how it fits into existing infrastructure patterns are still being answered. Still, it falls to DevOps to understand and administer Kubernetes so that developers can get their on-demand databases, pipelines, and deployments.
IBM is just one of many companies already building out infrastructure on top of Kubernetes. The Istio project, for example, allows DevOps teams to have full control of the data and traffic flows around their microservices. Daniel Berg, IBM distinguished engineer, container service and microservices, wrote to The New Stack in an email that, “In the open community, [IBM has] worked with other tech leaders such as Google and Lyft to build Istio, which equips developers with an orchestration layer on top of their containers to better secure and manage the traffic flowing through them. We’ve also worked with Google to launch Grafeas, which helps secure the supply chain of code around containers as they’re deployed and used.”
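To give a flavor of the traffic control Berg describes, an Istio VirtualService can split requests between versions of a microservice by weight, a common canary pattern. The service and subset names below are hypothetical; the subsets would be defined in a matching DestinationRule.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews            # hypothetical service name
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1         # current stable version
      weight: 90
    - destination:
        host: reviews
        subset: v2         # canary: 10% of traffic to the new version
      weight: 10
```

Because this is declarative configuration rather than application code, the DevOps team can shift traffic without a redeploy.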
The work at IBM is also focusing on making Kubernetes easier to consume, so DevOps won’t have to be monopolized by turning the knobs on such a complex system. “While there is still a learning curve with using Kubernetes to build container-based solutions, we are working both internally in IBM as well as with the community to develop tools and capabilities to make it easier to develop applications in Kubernetes without having to be a Kubernetes expert,” wrote Berg.
Behind all these services is the need for a unified set of processes. The various teams invested in a company’s microservices need open lines of communication, and those processes need to be implemented in a way that cannot be sidestepped or avoided. It’s an undertaking, and it’s about encoding human behavior into automation, deployment, and development pipelines.
Chris Stetson, chief architect and senior director of microservices engineering at Nginx, said that, “You have to change around your thinking in terms of how you do application development. One of the things that we have been doing a lot of recently has been creating a uniform development and deployment process, where you have your application developers working in a Dockerized version of the application, and doing their coding and testing essentially in that Docker environment, which very closely mimics the environment that we will be deploying to ultimately for our customers. Having that process built out so it’s easy for developers to get started with is incredibly valuable.”
Stetson said Nginx has implemented an almost ancient, but no less effective, solution to this problem. “We use Makefiles; we’ve been using Makefiles a lot to encapsulate the more complex Docker Compose commands we’ve put together to make a build target for our frontend developers to be able to do their Webpack frontend development. They’re connecting back to all the services they need, dynamically reloading the changes they’re working with, and we like using a Makefile because it’s like declarative Bash scripting, essentially,” said Stetson.
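A minimal sketch of that pattern might look like the following. The target, file, and service names are illustrative assumptions, not Nginx’s actual build files; the idea is simply that short, memorable targets hide the longer Docker Compose invocations.

```makefile
# Wrap the longer docker-compose invocations behind short targets.
COMPOSE := docker-compose -f docker-compose.yml -f docker-compose.dev.yml

.PHONY: frontend services logs clean

frontend:    ## Webpack dev build with live reload against local services
	$(COMPOSE) up --build frontend

services:    ## Start only the supporting services the frontend needs
	$(COMPOSE) up -d api db message-queue

logs:
	$(COMPOSE) logs -f

clean:
	$(COMPOSE) down --volumes
```

A new developer then needs to know only `make frontend`, not the composition of override files and flags behind it.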
Read the next article in this microservices series: Get Ready for Microservices with a Phased Approach
Feature image via Pixabay.