What Monitoring Can Learn from Major League Baseball
Blue Medora sponsored this post.
In 1984, Bill James was frustrated that Major League Baseball (MLB) refused to publish play-by-play accounts of every game.
Bill recognized the value that standardized, easy-to-access data could bring to his approach of analyzing the game of baseball by looking at specific aspects of individual player performance, a practice now known as sabermetrics.
Because MLB didn’t have all the data he needed for his model to work, James recruited a network of fans to collect and distribute this information, an effort known as Project Scoresheet. It later evolved into STATS Inc., the company that provided data and analysis to every major media outlet before Fox Sports acquired it in 2001.
It was around that time that Paul DePodesta changed the game by taking an analytical, even algorithmic, approach to selecting players for the Oakland A’s, a team struggling to stay competitive with a payroll one-third the size of teams like the New York Yankees.
The roster that A’s General Manager Billy Beane assembled using DePodesta’s analysis of James’s data won a record-setting 20 games in a row in 2002. His then highly unorthodox, yet ultimately successful, “coaching by algorithm” approach changed America’s great game forever and later served as the basis for Michael Lewis’ book “Moneyball: The Art of Winning an Unfair Game” and the Hollywood movie of the same name.
Today, after a decade of working in the IT monitoring business, I can affirm that algorithmically guided analytics is shaking up the IT world in much the same way. Like baseball in the early 2000s, monitoring is a game that algorithms have the opportunity to change completely. Advances in machine learning have the potential to elevate monitoring to observability.
But, like baseball before the advent of STATS Inc., our traditional data-collection methods are holding us back. Sadly, some of the most incredible analytics engines rely on a hodgepodge of data-collection sources: some built by technology providers, others coming from open source projects or community members.
Most large organizations run six or more monitoring tools. Maintaining dozens of individual integrations can become a significant investment, one with the potential to rival monitoring platform costs over time.
Perhaps most importantly, the results often fall short: when things go wrong, it can be a struggle to get clear insights.
Monitoring Integration as-a-Service (MIaaS) has the opportunity to change that by decoupling data collection from data analytics, in effect building an integration layer that any monitoring or analytics platform an organization uses can draw upon.
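To make the decoupling idea concrete, here is a minimal sketch of what such an integration layer could look like. All names here (`Metric`, `IntegrationLayer`) are hypothetical illustrations, not an actual MIaaS API: collectors push metrics in a common shape, and any number of analytics platforms subscribe to the same stream.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical common metric shape that any backend can consume.
@dataclass
class Metric:
    resource: str            # e.g. "redshift/cluster-1"
    name: str                # e.g. "cpu.utilization"
    value: float
    unit: str                # normalized unit, e.g. "percent"
    tags: dict = field(default_factory=dict)

class IntegrationLayer:
    """Decouples collection from analytics: collectors publish Metric
    records, and any number of analytics backends subscribe to them."""
    def __init__(self) -> None:
        self._backends: list[Callable[[Metric], None]] = []

    def register_backend(self, sink: Callable[[Metric], None]) -> None:
        self._backends.append(sink)

    def publish(self, metric: Metric) -> None:
        # Every subscribed platform sees the same normalized data.
        for sink in self._backends:
            sink(metric)

# Two "platforms" drawing on one integration layer.
layer = IntegrationLayer()
seen_by_apm: list[Metric] = []
seen_by_dashboard: list[Metric] = []
layer.register_backend(seen_by_apm.append)
layer.register_backend(seen_by_dashboard.append)
layer.publish(Metric("redshift/cluster-1", "cpu.utilization", 42.0, "percent"))
```

The point of the sketch is the fan-out: swapping an analytics platform means registering a different sink, not rebuilding the collection pipeline.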
Here are four modern monitoring use cases where MIaaS really makes sense:
Microservices Monitoring
Microservice architectures require a different approach to monitoring integrations, one that takes into account the temporary nature of many resources. MIaaS makes it easy to auto-discover new resources in dynamic tech stacks, such as those using containers or serverless technologies.
MIaaS also goes deeper than community or open source integrations like collectd or StatsD, because it has a more flexible ingestion framework that can accommodate a wider variety of API types. This makes it possible to deliver component data in highly abstracted environments. See the comparison below of a Redshift database running in a containerized environment:
Hybrid Cloud Monitoring
Pinpointing application performance problems in a cloud-native application is already tough enough. Throw in on-premises stacks and one or more cloud providers, and suddenly you have an environment with many moving parts, each affecting application performance from a different point.
MIaaS can unify visibility across legacy data center technologies and multiple clouds. Understanding the relationships, or context, between the individual components of an entire system (like a node and a cluster, or a container and a host) makes alerting more accurate and root cause analysis much quicker.
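The relationship tracking described above can be sketched as a tiny topology map. The resource names here are hypothetical examples: the idea is that an alert on a child resource (a container) can be walked up to its parents (host, cluster) so the alert carries its full context.

```python
# Hypothetical child -> parent topology, as an integration layer
# might discover it in a containerized environment.
parents = {
    "container:web-1": "host:node-a",
    "host:node-a": "cluster:prod",
}

def lineage(resource: str) -> list[str]:
    """Walk child -> parent relationships up to the root, so an
    alert can be reported with its full system context."""
    chain = [resource]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    return chain

# An alert on the container resolves to container -> host -> cluster,
# which is what makes root cause analysis quicker.
print(lineage("container:web-1"))
```

With this context, an alerting engine can suppress a flood of per-container alerts when the real root cause is the shared host underneath them.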
Multicloud Application Optimization
Research from Edwin Yeun at ESG suggests the greatest driver for multicloud utilization is individual application or workload requirements. Each of the major public cloud providers offers its own monitoring solution, but comparing apples to apples between them is difficult: metrics can be reported at different depths or even in different units.
MIaaS levels the playing field for all public cloud metrics. It also makes it easy to standardize on one cloud provider’s monitoring platform, or to use a third-party application performance monitoring (APM) platform to analyze across clouds from an application-oriented view.
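The unit mismatch mentioned above is the simplest case of leveling the playing field. As a hedged illustration (the conversion table and function names are hypothetical, not a real MIaaS interface), the same memory metric reported in bytes by one provider and in MiB by another can be normalized before analysis:

```python
# Hypothetical conversion table: (source unit, target unit) -> factor.
UNIT_FACTORS = {
    ("bytes", "gib"): 1 / 1024**3,
    ("mib", "gib"): 1 / 1024,
    ("gib", "gib"): 1.0,
}

def normalize(value: float, unit: str, target: str = "gib") -> float:
    """Convert a metric value into a common unit so metrics from
    different cloud providers can be compared directly."""
    return value * UNIT_FACTORS[(unit.lower(), target.lower())]

# The same 2 GiB of memory usage, reported two different ways:
print(normalize(2 * 1024**3, "bytes"))  # 2.0
print(normalize(2048, "MiB"))           # 2.0
```

Once every provider’s metrics land in the same units (and at the same granularity), an apples-to-apples comparison across clouds becomes a straightforward query instead of a spreadsheet exercise.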
Cross-Team Collaboration
As I mentioned, most large organizations run multiple monitoring tools. This can be for a variety of reasons, but one of them may be individual team preferences. As the traditional silos of “Dev” and “Ops” continue to collide, ensuring that everyone has access to the same MIaaS data is an easy way to increase collaboration.
You may recognize Bill James and Paul DePodesta’s story from the book or movie. If so, you may remember how James’s data and DePodesta’s analysis changed the game of baseball forever. Decoupling data collection from data analytics could revolutionize IT monitoring in the same way: MIaaS could unleash the full power of the best AI technologies by eliminating the challenges that data collection creates today.
Feature image via Pixabay.