Prometheus is a next-generation monitoring system from SoundCloud that has already earned considerable respect from the likes of the Docker engineering team. It’s a service designed especially well for containers, and it offers a window into how data-intensive this new age has become — and how even Internet-scale companies have had to adapt.
SoundCloud, of course, is the online exchange for openly shared audio tracks. It’s an enormously popular platform that gives artists an immediate way to be discovered by listeners, possibly by promoters, maybe even by potential agents. It enables amateur and professional musicians to share a common audience.
The company faces a new set of realities compared to the summer of 2007, when it was founded on a monolithic application written in Ruby on Rails. That was the most immediate and effective way at the time to build a service audiophiles could begin using right away. The back-end model was a single repository with a full-featured public API, enabling client apps to use whatever architectures or platforms they wished. This is what developers at the time called keeping things simple. But simple doesn’t scale. That’s especially the case — with apologies to Arthur C. Clarke — with monoliths.
To adapt, SoundCloud moved to a microservices architecture. It now runs hundreds of services and thousands of instances, many of them at the same time. That raises questions about understanding the dimensions of the data, scaling the data, having a query language and making it all manageable. With those considerations in mind, SoundCloud went looking for a monitoring service. When it couldn’t find one, it created Prometheus, which is more than a monitoring system: it’s a monitoring service and time series database rolled into one. Johannes ‘fish’ Ziemke, an infrastructure engineer at Docker, Inc. and formerly a systems and infrastructure engineer at SoundCloud, describes Prometheus as a “highly dimensional data model, in which a time series is identified by a metric name and a set of key-value pairs. A flexible query language allows querying and graphing this data. It features advanced metric types like summaries, building rates from totals over specified time spans or alerting on any expression and has no dependencies, making it a dependable system for debugging during outages.” Prometheus is written in the Go programming language.
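Ziemke’s description can be made concrete. In Prometheus’ data model, a time series is identified by a metric name plus key-value labels, and the query language can compute rates and aggregates over time windows. A minimal sketch — the metric name, label names and values here are illustrative, not drawn from SoundCloud’s actual configuration:

```
# One time series: a metric name plus key-value labels, with its current value
http_requests_total{service="playlists", method="GET", status="200"}  1027

# A query: per-second request rate over the last five minutes,
# aggregated per service across all instances
sum(rate(http_requests_total[5m])) by (service)
```

The labels are what make the model “highly dimensional”: the same query can be sliced by status code, method or instance simply by changing the grouping clause.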
Escaping the Monolith
Back when applications inhabited the memory space of single processors, systems monitoring was simple. It took place at regular intervals, or scheduled reporting cycles. The application produced the log and admins diagnosed it. Scaling up was a process of multiplication. You clustered the same applications together and distributed the client requests directed to them using a load balancer.
Today, applications inhabit the network. Jobs are distributed among multiple servers in these applications not just through blind load balancers, but by way of problem domains or logic domains — servers play discrete roles. Yet even this architecture is somewhat monolithic (maybe it would be “polylithic,” except that’s not a word).
As the number of users for a monolith grows linearly, the time consumed in serving them grows exponentially. As SoundCloud developer Phil Calçado wrote last June, “We felt we were always patching the system and not resolving the fundamental scalability problem.”
Early on, SoundCloud’s developers knew they needed to move to a microservices architecture. The microservices concept is fundamentally what it sounds like: Interrelated processes are divided into smaller logic domains. When the application needs to be scaled up, it’s more efficient to spawn new processes than to replicate entire blocks of functionality.
But here’s the problem: When you run your Internet-based business on a microservices architecture, the statuses of all its jobs are difficult for anyone to monitor in the aggregate. As a result, there’s no self-evident method for ascertaining why bottlenecks occur or how to remediate them.
Offloading the Mothership
Smartly, the SoundCloud team opted to transition processes gradually from the “Mothership” (Sir Arthur would have loved these people) over to its microservice-based counterparts. For at least the near term, the two systems needed to co-exist. They would accomplish this by applying a consumption model to messages: microservices would consume these messages to the extent they could, and those left over would be processed by Mothership. While the original monolith was constructed in Ruby on Rails, each microservice could be built using whatever language the developers assigned to it deemed most appropriate.
Monitoring the Mothership and its expanding universe had been done using the most common tools in the DevOps professional’s toolkit. StatsD absorbed the relevant statistics from each service in the system, then flushed the data for those metrics through the Graphite engine to create charts. But in 2012, SoundCloud ran into essentially the same problem with StatsD and Graphite as it had with the original Mothership: they weren’t scalable. Now that microservice instances numbered in the thousands at any one moment, there wasn’t any reliable way to funnel all the events generated by these services into a single pipeline.
SoundCloud had to think about monitoring in a new way: active sampling. Imagine, for a moment, an ant colony where every worker ant continually reported its progress building the colony. You’d think an efficient system would be a pipeline that gathers all these reports and piles them on the desk of the queen. In practice, though, having every single report on file is not always necessary for determining the status of the construction project. It might be good enough to have soldier ants patrolling the pathways, scraping up reports as they go.
The team built Prometheus to manage the soldier ants. Named either for the harbinger of fire in Greek mythology or the notoriously terrible sci-fi film, Prometheus uses a minimally organized database of key/value pairs stored as time series. The way the keys are organized makes these series easy to associate with one another, enabling multi-dimensional views — plotting one series against another and looking for correlations. The team built its own charting system, PromDash, not only to build live charts but also to assemble metadata that can be queried ad hoc using a query language of its own. And because SoundCloud designed Prometheus from the beginning as open source, the team was never alone. The monitoring system has since attained major support from Docker, as well as from master data management platform maker Boxever.
“You don’t always have the ability to change the code of your services to add instrumentation,” wrote Boxever senior developer Brian Brazil in a post to his company’s blog last January 30. “Many supply metrics in a single pre-defined format over which you have little control.”
Brazil demonstrated how his team uses Prometheus’ exporter function to take the instrumentation data already exposed by Java’s MBeans and convert it into a format Prometheus can consume. As a result, Boxever’s systems can monitor client latency in real time, without having to scrape every single bean for metadata.
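The exporter pattern Brazil describes — translating metrics a system already emits into a form Prometheus can scrape — can be sketched in a few lines of Go. The legacy “name=value” line format below is invented for illustration; a real JMX exporter queries the MBean server directly and handles far more structure.

```go
package main

import (
	"fmt"
	"strings"
)

// translateLegacy converts one line of a hypothetical legacy metrics dump
// (e.g. "jvm.heap.used=1048576", loosely modeled on JMX attribute names)
// into a Prometheus-style text line. It reports false for lines that
// don't parse, so an exporter can skip them.
func translateLegacy(line string) (string, bool) {
	parts := strings.SplitN(strings.TrimSpace(line), "=", 2)
	if len(parts) != 2 || parts[0] == "" {
		return "", false // not a metric line; skip it
	}
	// Prometheus metric names use underscores, not dots.
	name := strings.ReplaceAll(parts[0], ".", "_")
	return name + " " + parts[1], true
}

func main() {
	for _, raw := range []string{"jvm.heap.used=1048576", "jvm.threads.count=87"} {
		if m, ok := translateLegacy(raw); ok {
			fmt.Println(m)
		}
	}
}
```

An exporter like this runs as a sidecar: Prometheus scrapes the exporter, and the exporter does the translation on each scrape, so the monitored service itself never needs to change.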
The result, according to Prometheus’ developers at SoundCloud, is what they describe as “operational simplicity.” As they put it: “You can spin up a monitoring server where and when you want, even on your local workstation, without setting up a distributed storage backend or reconfiguring the world.”
Feature image via Flickr Creative Commons.