
The Great Gap Between Tools and Observability

4 Oct 2018 6:00am, by Moria Fredrickson

Blue Medora sponsored this post.

Moria Fredrickson leads marketing programs for IT monitoring integration leader Blue Medora, including customer and market research initiatives to capture the voice of the customer and anticipate changing requirements.

In a June 2018 survey of over 400 IT professionals within the VMware User Group community, Blue Medora took a closer look at how various metric collection strategies impact IT success. Our first question was "how close are we to observability?"

What the survey results showed was a startling potential disconnect between adding tooling and improving observability. That trend becomes even more pronounced once you look at the success or failure of different data collection strategies and their impact on overall IT outcomes. We also found that data collection has reached critical importance for the monitoring space.

Observability gets tossed around a lot in monitoring discussions, but its definition is not nearly as well known. Systems engineer and author Cindy Sridharan offers one of the clearest definitions to date in an essay on Medium. She describes how observability, a superset of monitoring, combines alerting/visualization, distributed systems tracing infrastructure and log aggregation/analytics to provide better visibility into IT systems health.

We wanted to test if increased access, data depth and context provided by a Dimensional Data stream positively impacted observability and other IT outcomes. Dimensional Data refers to a real-time metric stream provided by a monitoring integration as a service (MIaaS).

The Single-Pane-of-Glass Search May Be Shifting to a "First" Pane of Glass

It probably won’t surprise you to learn that 60 percent of respondents ran three or more monitoring platforms. To us, that number seemed a bit conservative compared to the five or six we regularly hear about from our customers. What did surprise us was the number of organizations that seem to think this tool sprawl is here to stay. Consolidation, as with many technology practices, has been a perennial theme in monitoring for years.

Figure 1: According to our survey, 248 of the 410 respondents run three or more monitoring tools.

Consolidation certainly hasn’t gone away. After all, half of respondents indicated that they were trying to consolidate — perhaps still in search of that elusive "single source of monitoring truth" — but others had no near-term plans to decrease the number of monitoring tools.

Why are teams opting for this multiplatform approach? For those we surveyed, there were a variety of reasons. The two that stood out were the different technologies monitored (cloud, infrastructure, databases, etc.) and the different monitoring use cases (performance monitoring, log analytics, etc.).

Figure 2: Respondents plan to keep multiple monitoring tools for a number of reasons.

Those reasons suggest there isn’t a gap in monitoring tooling so much as a gap in the data collection for these tools. If respondents had access to all metric types across all of the technologies in their stacks, there would be considerably less reason to continue with multiple monitoring solutions — or at the very least, they could focus on choosing the best analytics or visualization engines for their team’s needs.

More Tooling Hasn’t Necessarily Elevated Us to Observability

Observability, Sridharan argues, provides highly granular insights into systems behavior along with rich context.  Our survey respondents indicated that despite all the tooling they have in place, most of them still face gaps when it comes to the granularity and context of their data.

In fact, 75 percent of our survey respondents indicated they lacked the depth they needed — defined in this survey as component-level data resolution — within the vast majority of their monitoring integrations (a broad term used to describe any combination of API connections, plugins and scripts used to collect metrics for their monitoring tools).
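To make "depth" concrete, here is a minimal sketch of why component-level resolution matters. The scenario, metric names, and values are all hypothetical, not from the survey: a host-level average can mask exactly the per-component outlier an operator needs to see.

```python
# Hypothetical sketch: aggregate vs. component-level data resolution.
# All component names and values are illustrative.

def aggregate_only(samples):
    """Host-level view: one averaged number hides per-component outliers."""
    return sum(samples.values()) / len(samples)

def component_level(samples, threshold):
    """Component-level view: surfaces which components breach a threshold."""
    return {name: value for name, value in samples.items() if value > threshold}

# Per-disk read latency (ms) on a single host -- illustrative data.
disk_latency = {"disk0": 4.0, "disk1": 5.0, "disk2": 95.0, "disk3": 4.0}

print(aggregate_only(disk_latency))          # 27.0 -- looks only mildly elevated
print(component_level(disk_latency, 20.0))   # {'disk2': 95.0} -- the real culprit
```

An integration that only reports the 27 ms host average would never flag disk2; one with component-level depth would.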

Figure 3: Survey responses indicate the majority (305 of the 410 respondents) have a critical gap in granularity, one of two key characteristics of observability.

When it came to the second characteristic of observability, context, the survey results weren’t much brighter. More than two-thirds of respondents lacked access to critical context, defined in this survey as visibility into the relationships between the various layers of their stack. That means most of them are looking at metrics in a silo. When failures cascade down the stack, separating root cause from symptoms can be very time consuming without that context.
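To illustrate what that context buys you, here is a hypothetical sketch (the topology and alert names are invented for the example): a map of relationships between stack layers lets you walk a symptom at the application layer down to the deepest alerting layer, a likely root cause.

```python
# Hypothetical sketch of "context": relationships between stack layers,
# used to trace a symptom down to a candidate root cause.
# Topology and alert names are illustrative, not from any real system.

topology = {
    "app-latency-alert": "postgres-db",   # symptom observed at the app layer
    "postgres-db": "vm-42",               # the database runs on this VM
    "vm-42": "esxi-host-7",               # the VM runs on this host
}

alerting = {"app-latency-alert", "esxi-host-7"}  # layers currently firing

def trace_root_cause(symptom):
    """Walk down the stack; the deepest alerting layer is the likely root cause."""
    node, deepest = symptom, symptom
    while node in topology:
        node = topology[node]
        if node in alerting:
            deepest = node
    return deepest

print(trace_root_cause("app-latency-alert"))  # esxi-host-7
```

Without the relationship map, both alerts look like independent incidents; with it, the host-level alert stands out as the thing to investigate first.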

Figure 4: A majority (276 of the 410 respondents) reported they lack context in their monitoring integrations. Context is a key characteristic of observability.

Your Metric Collection Strategy Matters for Observability

As we discussed above, the results point to a systemic gap between adding monitoring tools and improving observability. In my next post, I’ll dive deeper into these same survey results and further examine the correlation between the right metric collection strategy and achieving observability.

Feature image via Pixabay.

