Monitoring is having a moment. IT operations have moved into the spotlight thanks to accelerating advances in artificial intelligence and real-time analytics. The potential for next-gen monitoring seems limitless. But even as new models are integrated into our tools, something is still holding us back. It’s the increasing complexity of what we are trying to monitor.
Even the best monitoring platforms struggle to deliver on their full potential due to inconsistent or incomplete data. We have to choose one monitoring platform over another based on an awkward trade-off between what it can do and what data it can collect. And data collection has traditionally been the lesser factor.
Love the forecasting but wish it could monitor X? Well, maybe you could write a custom integration, or find an open source one — but you shouldn’t have to, and it’s often impractical. What happens when you have several custom integrations? Who’s keeping them current? What happens when you find a bug? How do you roll out updates? In any case, your stack will evolve, and a well-suited analytics solution can become a poor fit in the span of a single decision.
So how do we keep up? How can monitoring evolve as quickly as our needs? We need to decouple data collection from data analysis. Let's let data analytics platforms focus on deriving better insights from data. Likewise, let's open up a space for data collection specialists to improve the way we monitor our stacks.
At Blue Medora, I’ve helped bring more than a hundred distinct monitoring integrations to market, on nearly a dozen monitoring platforms. What used to be a series of hard-wired (1:1) integrations evolved into reusable (1:N) libraries, and is now converging on a common solution (M:N) for the overall IT data collection sector.
To illustrate this point, consider all the monitoring platforms that have support for PostgreSQL. Traditionally, each platform writes some code to connect to PostgreSQL, collects information, assembles the data into the expected format, and finally sends it along for consumption. Each platform typically implements its own 1:1 solution.
If you want to bring PostgreSQL monitoring to a wide variety of platforms, you need to do better. You need a standalone solution for monitoring PostgreSQL — one that can be routed to any monitoring platform. This is a 1:N solution.
And finally, what if you want to monitor more than just PostgreSQL, perhaps 100 different technologies, and yet still bring that data to various platforms? This is the M:N problem. The key here is to have a generic and independent data model — one that represents the true state of your stack, and can be transformed to serve any platform.
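To make the M:N idea concrete, here is a minimal sketch in Python. All names here are hypothetical illustrations, not Blue Medora's actual implementation: a single observation is captured once in a platform-neutral model, then transformed for two different consumers (a Prometheus-style text format and a generic JSON payload).

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """A platform-neutral representation of a single observation."""
    name: str
    value: float
    unit: str
    labels: dict = field(default_factory=dict)

def to_prometheus(m: Metric) -> str:
    # Transform the generic metric into a Prometheus-style exposition line.
    labels = ",".join(f'{k}="{v}"' for k, v in sorted(m.labels.items()))
    return f"{m.name}{{{labels}}} {m.value}"

def to_json_payload(m: Metric) -> dict:
    # Transform the same observation for a JSON-based analytics platform.
    return {"metric": m.name, "value": m.value, "unit": m.unit, "tags": m.labels}

# One collection, many destinations: the PostgreSQL collector emits a
# Metric once, and each platform adapter shapes it as needed.
m = Metric("postgresql_connections", 42.0, "connections", {"db": "orders"})
print(to_prometheus(m))   # prints: postgresql_connections{db="orders"} 42.0
print(to_json_payload(m))
```

The design point is that adding platform number N+1 means writing one new transformer, not re-implementing M collectors, which is exactly why the generic model must carry the true state of the stack rather than any one platform's schema.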
This is how we get better at data collection, but there’s a whole microcosm of related problems, from integration patterns to lifecycle management, from efficiency gains to best practices. Over the years, it’s become clear to me that data collection is its own first-class space, with its own set of challenges that require better solutions than those we get when we conflate collection with consumption.
The industry needs metric standardization across a wide variety of distinct technologies, but we also need subject-matter experts to determine what’s important in the context of each endpoint. We need monitoring integrations that:
- Automatically stay current.
- Minimize overhead on your stack, and on you.
- Make a single observation useful in multiple analytics platforms.
These are solvable challenges, but they are clearly of their own domain — their own link in the toolchain.
History has shown us that the abstraction of distinct problem spaces is an innovation multiplier. In hardware, think virtualization. In software, think containerization. From where I stand, we’re currently at a similar juncture in the monitoring market. They say history does not repeat itself, but rather rhymes. I’m excited to say that the new rhythm is loud and clear, and IT monitoring is marching in a new direction.
Blue Medora is a sponsor of The New Stack.