Datadog’s Data Science Draws Monitoring from the Darkness

1 Jun 2016 8:51am

The future of monitoring is changing rapidly, as more businesses are starting to see value in looking at alerts in algorithmic and programmatic ways as they struggle to determine if a spike or dip on a graph is good or bad.

In this episode of The New Stack Makers, embedded below, The New Stack founder Alex Williams sat down with Ilan Rabinovitch, director of technical community and evangelism at infrastructure monitoring provider Datadog, at OSCON Austin. They discussed how Datadog is empowering developers working with microservice-based infrastructures, the rise of containers in development, and how collecting metrics has been welcomed back into the light after a decade in the dark.

The conversation can also be viewed on YouTube.

Data science is increasingly being applied to operations as companies try to improve their signal-to-noise ratio. Datadog itself uses outlier detection and clustering algorithms to help.
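To make the idea concrete, here is a minimal illustration of outlier detection applied to host metrics. This is not Datadog's actual implementation; it is a sketch using median absolute deviation (MAD), a common robust statistic, to flag a host whose metric deviates sharply from its peers.

```python
# Illustrative only: flag hosts whose metric value lies far from the
# group median, measured in scaled median absolute deviations (MADs).

def mad_outliers(values, threshold=3.0):
    """Return indices of values more than `threshold` scaled MADs
    from the median of the group."""
    n = len(values)
    sorted_vals = sorted(values)
    median = (sorted_vals[n // 2] if n % 2 else
              (sorted_vals[n // 2 - 1] + sorted_vals[n // 2]) / 2)
    deviations = [abs(v - median) for v in values]
    sorted_dev = sorted(deviations)
    mad = (sorted_dev[n // 2] if n % 2 else
           (sorted_dev[n // 2 - 1] + sorted_dev[n // 2]) / 2)
    if mad == 0:
        return []
    # 1.4826 scales MAD to be comparable to a standard deviation
    # for normally distributed data.
    return [i for i, d in enumerate(deviations)
            if d / (1.4826 * mad) > threshold]

# Latency (ms) reported by five web hosts; the host at index 3 misbehaves.
latencies = [102, 98, 105, 340, 101]
print(mad_outliers(latencies))  # → [3]
```

The point of such algorithms in monitoring is that no one sets a static threshold: the "normal" range is derived from the group itself, so the alert adapts as the fleet's behavior changes.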

“It turns out that tag-based primitives, building alerts and dashboards as queries with predicates using tags, provide quite a bit of power. Monitoring is a complex big data problem, and we help make that simpler. To see everyone all excited about metrics again after a decade of being in the dark is great,” said Rabinovitch.

By using Datadog’s approach to tagging, developers can get a fine-grained picture of their entire architecture without unnecessary noise. Because a tag predicate can scope a query to events across every data center in an organization or narrow it to a single data center, Datadog gives developers flexibility as well as horsepower when monitoring their stack.
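The tag-based primitive Rabinovitch describes can be sketched roughly as follows. The data model and function names here are hypothetical, not Datadog's actual API: each metric point carries a set of tags, and a dashboard or alert is simply a query with tag predicates.

```python
# Hypothetical sketch of tag-based monitoring primitives: every metric
# point carries tags, and a query is a predicate over those tags.

points = [
    {"metric": "cpu.load", "value": 0.42, "tags": {"dc:us-east", "role:web"}},
    {"metric": "cpu.load", "value": 0.91, "tags": {"dc:us-west", "role:web"}},
    {"metric": "cpu.load", "value": 0.13, "tags": {"dc:us-east", "role:db"}},
]

def query(points, metric, required_tags):
    """Select values for a metric whose tags include all required tags."""
    return [p["value"] for p in points
            if p["metric"] == metric and required_tags <= p["tags"]]

# Scope across every data center: filter only by role.
all_web = query(points, "cpu.load", {"role:web"})
# Narrow to one data center by adding a tag to the predicate.
east_web = query(points, "cpu.load", {"dc:us-east", "role:web"})
print(all_web)   # → [0.42, 0.91]
print(east_web)  # → [0.42]
```

The appeal of the primitive is that widening or narrowing scope is just adding or removing a tag from the predicate; no query has to name individual hosts.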

“Imagine if we had this conversation 10 years ago. The pets versus cattle analogy exists because people were naming their servers after Greek gods because they thought they were going to be around for years. We’re going to continue to see that drop as people adopt orchestration tools like Kubernetes, OpenShift, and DC/OS,” said Rabinovitch.

Datadog processes nearly three trillion points of data daily, using not only its own proprietary solutions but also a variety of open source tools such as Cassandra, PostgreSQL, and Apache Kafka.

“The default answer to most infrastructure plumbing is open source, it’s amazing. Ten years ago that wasn’t the case. Nowadays, it’s the default. If you look at the container space, there’s not a ton of proprietary container solutions out there. They’re all open source,” Rabinovitch explained.

Red Hat OpenShift is a sponsor of The New Stack.

Feature image: Ilan Rabinovitch, interviewed by Alex Williams at OSCON.

