With organizations deploying ever-larger containerized services into production, understanding the health of those containers has become more critical than ever. Whether teams work in Docker, Kubernetes, Amazon ECS, or another container platform, the need remains the same: visibility not only into whether containers are healthy, but also into their performance, dependency status, error alerts, and system-wide resource usage.
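A fleet-wide health rollup of this kind can be sketched as a pure function over per-container stats. The field names and thresholds below are hypothetical, not tied to any particular platform's API:

```python
# Hypothetical sketch: roll per-container stats into a fleet health summary.
# The field names (status, cpu_pct, mem_pct) and the 90% thresholds are
# illustrative assumptions, not any specific monitoring API.

def summarize_fleet(containers, cpu_limit=90.0, mem_limit=90.0):
    """Classify each container and count healthy vs. unhealthy ones."""
    summary = {"healthy": 0, "unhealthy": 0, "problems": []}
    for c in containers:
        issues = []
        if c["status"] != "running":
            issues.append("not running")
        if c["cpu_pct"] > cpu_limit:
            issues.append("high CPU")
        if c["mem_pct"] > mem_limit:
            issues.append("high memory")
        if issues:
            summary["unhealthy"] += 1
            summary["problems"].append((c["name"], issues))
        else:
            summary["healthy"] += 1
    return summary
```

In a real deployment the input would come from an agent or the platform's stats API rather than hand-built dictionaries; the point is that health, resource usage, and error conditions roll up into one actionable view.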
As microservices continue to reshape the application development landscape, understanding how the many containers that individual services are deployed on communicate with one another is key. Collecting data from within hundreds of containers operating at scale has proven a significant pain point, though orchestrators such as Kubernetes aim to ease the burden of working with containers at scale.
When businesses pair containers with an orchestrator or scheduler such as Kubernetes or Mesos, adding a container monitoring service such as Weaveworks, Sysdig Cloud, or New Relic can surface valuable insights about their infrastructure.
Digging into Monitoring Containers
Because microservices run on collections of containers, isolating the issues that affect service performance is critical. When deploying a monitoring platform, there are a variety of SaaS offerings to choose from.
Historically, service discovery for containers has required developers to write code ensuring that containers can find and interact with one another. While some container monitoring software requires complex, time-consuming setup, Weaveworks has done away with the traditional approach of installing libraries or performing complex kernel setup.
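The hand-rolled approach described above often amounts to a small registry that containers update as they start and stop. The sketch below is illustrative only, not any vendor's actual API:

```python
# Illustrative sketch of do-it-yourself service discovery: containers
# register their endpoints under a service name, and peers look each
# other up. All names here are hypothetical.

class ServiceRegistry:
    def __init__(self):
        # service name -> set of "host:port" endpoints
        self._services = {}

    def register(self, service, endpoint):
        """Called by a container when it starts serving."""
        self._services.setdefault(service, set()).add(endpoint)

    def deregister(self, service, endpoint):
        """Called when a container stops or fails a health check."""
        self._services.get(service, set()).discard(endpoint)

    def lookup(self, service):
        """Return the known endpoints for a service (empty list if none)."""
        return sorted(self._services.get(service, set()))
```

In practice this role is filled by systems like etcd, Consul, or Kubernetes' built-in DNS, which is exactly the boilerplate that newer monitoring and orchestration tools let teams skip.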
“Running the container on the host collects all the information in a real-time picture without installing libraries or kernel modules. For customers, [setup] is often a challenge; they don’t want to do that,” noted Weaveworks chief operating officer Matthew Lodge.
To get the most out of any container monitoring platform, one must consider how the information obtained will be used. Data collected without actionable insights attached has little value.
Companies may run containers much like individual servers, intending to keep them always on while collecting data to send to a database for long-term analysis or later efforts to refine their pipelines.
Others may deploy hundreds of smaller, short-lived microservices throughout the day, spun up when customers access specific parts of their web application.
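These two usage patterns can often be told apart from container lifetimes alone. A minimal sketch, assuming per-container start and stop timestamps are available from a monitoring agent (the five-minute cutoff is an arbitrary assumption):

```python
# Hypothetical sketch: bucket containers into the two usage patterns
# above -- long-running, server-like containers vs. short-lived ones --
# based on observed lifetimes. The threshold is an arbitrary assumption.

SHORT_LIVED_SECONDS = 300  # five minutes

def classify_lifetimes(events):
    """events: list of (container_id, started_at, stopped_at) tuples in
    epoch seconds; stopped_at is None for containers still running."""
    buckets = {"long_lived": [], "short_lived": [], "running": []}
    for cid, started, stopped in events:
        if stopped is None:
            buckets["running"].append(cid)
        elif stopped - started < SHORT_LIVED_SECONDS:
            buckets["short_lived"].append(cid)
        else:
            buckets["long_lived"].append(cid)
    return buckets
```

Knowing which bucket dominates a deployment shapes the monitoring strategy: always-on containers suit long-term trend analysis, while short-lived ones demand fast collection before the container disappears.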
Lee Atchison, principal cloud architect and advocate at New Relic, notes that understanding how containers will be used in one’s organization is key to understanding how to monitor highly scalable applications running in production.
New Relic not only monitors containers on a basic level but also provides users with rollups of monitoring at the Docker image level. “This allows you to see the usage pattern of specific Docker images, independent of how many instances of that image are running or have run. This gives a unique view into the impact of performing some types of actions, such as container upgrades and container versioning. It also gives perspective into usage patterns of short-lived containers,” said Atchison.
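An image-level rollup in the spirit Atchison describes can be sketched as an aggregation over per-container samples keyed by image rather than by container; the field names below are assumptions for illustration, not New Relic's API:

```python
# Sketch of an image-level rollup: aggregate per-container samples by
# Docker image, independent of how many instances of that image are or
# were running. Sample field names are illustrative assumptions.

from collections import defaultdict

def rollup_by_image(samples):
    """samples: list of dicts with image, container_id, cpu_pct keys.
    Returns per-image instance counts and average CPU usage."""
    acc = defaultdict(lambda: {"containers": set(), "cpu_total": 0.0, "n": 0})
    for s in samples:
        a = acc[s["image"]]
        a["containers"].add(s["container_id"])
        a["cpu_total"] += s["cpu_pct"]
        a["n"] += 1
    return {
        image: {
            "instances": len(a["containers"]),
            "avg_cpu_pct": a["cpu_total"] / a["n"],
        }
        for image, a in acc.items()
    }
```

Because the grouping key is the image, the view survives container churn: samples from containers that have already exited still count toward their image's usage pattern.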
Despite the revolution brought about by containers, they remain difficult to monitor at scale.
“The lack of visibility makes it hard to connect and interpret in a meaningful way the metrics coming from the container,” said Sysdig CEO Loris Degioanni.
Monitoring containers at a deeper level can cause system slowdowns, and complications arise both in deploying at scale and in using the collected data efficiently.
Sysdig utilizes Container Vision technology to allow users to see inside containers from the outside, leading to better scaling and performance when operating in production.
In addition to its open source platform, Sysdig offers an enterprise-level container monitoring suite covering end-to-end monitoring for those working in distributed environments. “Containers lend themselves naturally to orchestration; Kubernetes, in particular, is designed to run at scale on arbitrary infrastructures,” said Degioanni.
The Right Tools for the Job
Under the hood, many container monitoring platforms differ very little. However, some vendors have written their software in ways whose benefits are easily felt by the end user.
Weave Scope is written in Go. Lodge noted that using a compiled language for container monitoring software makes a big difference, as it becomes far simpler for users to write their own microservices and integrations. Go is a nimble language, with a vast ecosystem of supporting libraries behind its already impressive track record.
Sysdig relies on Linux kernel modules written in C, powerful libraries compiled in C++, extensive database technology, SVG-based visualization, and “a bit of everything” to power its container monitoring offerings, said Degioanni. Atchison notes that New Relic is a SaaS offering, relying on multitenant software to make better use of system-level resources. This results in better server utilization and increased scalability, allowing New Relic to scale its service without customers experiencing interruptions.
Return on investment matters when collecting data. Without actionable insights, data can sit unused in an off-site data center that costs companies time and money to maintain.
Setting up and configuring a container monitoring service must therefore be simple, allowing users to obtain data that is easy to understand and act upon while also performing well at scale.
Visibility into containers across one’s infrastructure allows developers not only to refine their code but also to understand how the way they write it affects the performance of containers running microservices at scale.
Lodge highlights three areas for improvement in container monitoring: embracing open source without it becoming “a DIY Lego set,” moving beyond single hosts, and making container monitoring systems simple for all to use.
Companies continue to shift toward containers to better streamline their applications, create microservices, and re-work their operations. Degioanni notes that many companies are using containers to not only isolate individual workloads, but to orchestrate their container-based services into a rapidly scaling infrastructure through the use of Kubernetes, Docker Swarm, or Mesos.
As more companies working with containers also embrace container orchestration for working at scale, monitoring these vast networks of microservices alongside stand-alone, long-running instances presents a challenge that will persist well into the new year. Without accurate, high-level visibility into container setups large and small, businesses cannot use the data within them efficiently. To further temper and hone container monitoring technology into a stable, scalable solution for vast quantities of containers deployed in production, there must be continued discussion around the issues facing this exciting new technology.
Docker, New Relic and Weaveworks are sponsors of The New Stack.