There is no “normal” for a microservices environment. And if a groundbreaking November 2016 white paper [PDF], produced by a team of six researchers from the University of Messina, the U.K.’s Newcastle University, and IBM Research, is to be taken at face value, such a state of equilibrium may never be attained.
Every data center, the researchers write, has unique operating requirements. Once microservices are deployed there, the configurations they require become not only unique, but baked into their respective systems.
“Moreover, these microservices have both control and dataflow dependencies,” the white paper reads. “The challenges exist in dealing with heterogeneous configurations of microservices and cloud datacenter resources driven by heterogeneous performance requirements … the mapping of microservices to datacenters demands selecting bespoke configurations from an abundance of possibilities, which is impossible to resolve manually.”
So if your job is to monitor the behavior of your data center’s microservices environment using an application performance management (APM) platform, what is it that you’re looking for? When will you know if the behavior of certain services, or the system as a whole, is out of the ordinary? And once you’ve determined what “ordinary” is, for the place and time where you work, how long do you expect that to remain the case before it changes completely?
“With an APM system, in practice, it’s a web app. So a request comes from an end user, it travels over the open Internet, it hits some servers, and then it’s fed through to a database, where a record is updated and the request returns a response, which follows the same path back to the user,” explained Alexis Richardson, the CEO of Weaveworks, in an interview with The New Stack for this edition of the Context podcast.
“In a cloud-native system, the architecture is not particularly so simple,” Richardson continued. “Maybe fifty microservices cooperate and deliver one response to one end user. That’s an extreme example. Or maybe one microservice calls another. So if you drew it as a picture on a board, the request would ping-pong around, like a 1980s Breakout console game, until eventually returning to the user, instead of having a straightforward, up/down, request/response. Tracking all of that is very different than tracking in a typical APM setting.”
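Richardson’s contrast can be made concrete with a toy sketch (all service names here are hypothetical, not from the interview): in a classic web app, a trace is essentially one parent span and a database call, while a cloud-native request fans out into a tree of spans that a tracing system must stitch back together.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """One hop of a request: the service that handled it, and the calls it made."""
    service: str
    children: list["Span"] = field(default_factory=list)

def hop_count(span: Span) -> int:
    """Total number of service hops recorded in the trace."""
    return 1 + sum(hop_count(child) for child in span.children)

# Classic web app: one straightforward, up/down request/response.
web_app = Span("frontend", [Span("database")])

# Cloud-native app: the request "ping-pongs" across cooperating microservices.
cloud_native = Span("gateway", [
    Span("auth"),
    Span("cart", [Span("inventory"), Span("pricing", [Span("currency")])]),
    Span("recommendations", [Span("profile")]),
])

print(hop_count(web_app))       # 2
print(hop_count(cloud_native))  # 8
```

Even in this small example, the microservices trace has four times the hops, and real systems, as Richardson notes, can involve fifty services per response — which is why a per-server APM agent sees only fragments of the picture.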
Weaveworks produces a commercial version of the Prometheus monitoring platform, called Cortex. Richardson told us he agrees that any APM platform vendor claiming it can simply extend its agent-based monitoring system from a client/server model to a microservices model is stretching things a bit.
“An APM vendor that grew up building support for web apps, now claiming that it can do the same level of support for any other kind of application, is frankly a pretty hopeful — or maybe hopeless — remark,” he said.
Yet how realistic is such an assessment, given that APM platforms are being extended into microservices, and customers are seeing some results? This is the issue we delve into with this edition of The New Stack: Context, for which we interviewed:
- Alexis Richardson, CEO, Weaveworks
- Loris Degioanni, CEO, Sysdig
- Jim Gochee, Chief Product Officer, New Relic
Listen to all TNS podcasts on Simplecast.
IBM is a sponsor of The New Stack.
Feature image: An emulated Atari 2600 Super Breakout game from RetroGames.cz.