It unveiled this service last week at KubeCon, where it also introduced a troubleshooting dashboard, Weave Scope.
“We find a lot of the technologies in cloud-native, be it Kubernetes, Prometheus or OpenTracing, are pretty new. Developers don’t necessarily want to have to learn everything about a new tool before using it. So it can be attractive [to them] to say, ‘We’ll start and run this for you. … We’re offering this as a matter of convenience to speed up your day job,’” said Alexis Richardson, Weaveworks CEO.
As more people start using these tools — especially less experienced people — they will demand that the tools be easy to use, he said.
The company touts these benefits of Weave Cloud:
- Enabling sophisticated queries that help DevOps teams troubleshoot problems in containerized applications. Weave Cloud requires no coding to collect baseline metrics from the infrastructure, and users can export custom metrics from their application with just a few lines of code.
- Providing visualization, monitoring and management of container networks built using Weave Net, while also enabling the isolation of services and containers, as well as the firewalling of traffic to reduce the attack surface.
- Automating the tedious and error-prone steps to go from a new set of container images to a properly deployed service running in Kubernetes. Platform teams define a policy that describes how the service should be run, and Weave Cloud automatically generates the right Kubernetes configuration files, checks them into source code control, and (optionally) deploys them to Kubernetes.
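The first bullet's claim that custom metrics need only a few lines of code refers to Prometheus-style instrumentation. As a rough, standard-library-only sketch of the text format a Prometheus server scrapes (the metric name, help text and port are illustrative, not Weave Cloud specifics; the official client libraries generate this for you):

```python
# Sketch of Prometheus's text exposition format using only the Python
# standard library. In practice an official client library (e.g.
# prometheus_client) produces this output in a few lines of code.
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUESTS_TOTAL = 0  # the custom application metric being exported


def render_metrics() -> str:
    """Render current metric values in the Prometheus text format."""
    return (
        "# HELP app_requests_total Total requests handled.\n"
        "# TYPE app_requests_total counter\n"
        f"app_requests_total {REQUESTS_TOTAL}\n"
    )


class MetricsHandler(BaseHTTPRequestHandler):
    """Serves /metrics so a Prometheus server can scrape this process."""

    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


# To serve scrapes: HTTPServer(("", 8000), MetricsHandler).serve_forever()
```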
“If you ask a developer, ‘What bothers you about containerized cloud-native applications,’ they will say, ‘Well, I don’t know where to start. But once I’ve got the basic Docker or Kubernetes cluster up and running, I don’t know what to do next.’ We give them the answer to that. In one product, we give them the things they need: monitoring, management, security, firewalls, continuous deployment,” he said.
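The continuous-deployment step Richardson describes, a policy in and Kubernetes configuration out, can be pictured with a small sketch. The policy fields and generator below are hypothetical (the article does not publish Weave Cloud's actual policy format), but the output is an ordinary Kubernetes Deployment manifest of the kind that would be checked into source control:

```python
# Hypothetical sketch of the "policy in, Kubernetes config out" flow.
# The policy schema and helper are invented for illustration only.
import json


def manifest_from_policy(policy: dict) -> dict:
    """Generate a Kubernetes Deployment manifest from a simple policy."""
    name = policy["service"]
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": policy.get("replicas", 1),
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [
                        {"name": name, "image": policy["image"]}
                    ]
                },
            },
        },
    }


policy = {"service": "billing", "image": "example/billing:1.4.2", "replicas": 3}
print(json.dumps(manifest_from_policy(policy), indent=2))
```

The generated file would then be committed to source control and, optionally, applied to the cluster, as the article describes.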
For the cloud native space, the Cloud Native Computing Foundation — Richardson serves as chair of its technical oversight committee — is building the emerging best-of-breed class of tools that people will use to build applications. CNCF chose Prometheus as its second hosted project after Kubernetes, then added OpenTracing and more recently added Fluentd as its logging project.
Weaveworks, Richardson says, is filling the gaps between the open source projects on which the cloud-native tools are built. Weaveworks entered the container management landscape in 2014 with Weave Net. In April, it added support for setting up micro-SDNs (software-defined networks) and for multicasting information to large numbers of containers simultaneously.
“We think it’s fundamental to [have] monitoring and visibility baked into every single thing you do in cloud-native,” said Richardson, who adds that the company is building “the monitoring-based platform,” which he considers the wave of the future.
Because Weaveworks uses Prometheus internally, it has run into some of the project’s limitations, such as limits on how much data can be stored and the difficulty of achieving high availability, he said. Prometheus doesn’t, for example, have a built-in snapshot capability. And setting up high-availability Prometheus monitoring “is a bit of an art form.”
Through conversations with the Prometheus community, the company forged ahead with Cortex, which implements a new data model.
“It takes the data queue, which is the back end of Prometheus, the time-series database, and writes it out onto Amazon S3 at the moment, in a slightly different format than traditional Prometheus storage to give you a much larger storage space,” he said. “Essentially, you can store as much data as S3 can hold, which is very attractive when you’re running a large-scale application and you don’t know how much storage you need.”
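Prometheus provides a remote-write hook for shipping samples to exactly this kind of external backend. A minimal configuration fragment might look like the following; the endpoint URL is illustrative, and the exact path a Cortex deployment exposes may differ:

```yaml
# prometheus.yml (fragment) -- the endpoint URL is illustrative
remote_write:
  - url: http://cortex.example.com/api/prom/push
```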
Currently, for high availability, you run multiple instances of Prometheus side by side. If there are gaps, you have to work out which instance captured the data that the other one missed.
“It’s an exercise in itself to figure that out,” he said, explaining that Weaveworks came up with a different model that can be part of a cloud service, which some customers, though not necessarily all, will find helpful.
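The gap problem Richardson describes can be sketched concretely. This is a naive illustration, not Weaveworks’ actual model: two replicas each miss a different scrape, and a reader merges them by preferring one replica and filling its gaps from the other. A real system must also handle the replicas disagreeing about a given timestamp.

```python
# Naive merge of {timestamp: value} series from two HA Prometheus
# replicas: prefer replica A, fill A's gaps from replica B.
def merge_replicas(a: dict, b: dict) -> dict:
    merged = dict(b)   # start with everything replica B saw ...
    merged.update(a)   # ... then let replica A win where both have data
    return dict(sorted(merged.items()))


replica_a = {10: 1.0, 20: 2.0, 40: 4.0}  # missing the t=30 scrape
replica_b = {10: 1.0, 30: 3.0, 40: 4.1}  # missing the t=20 scrape
print(merge_replicas(replica_a, replica_b))
# -> {10: 1.0, 20: 2.0, 30: 3.0, 40: 4.0}
```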
As for the rash of new monitoring companies cropping up, Richardson pointed to three differentiators:
First: It doesn’t run the orchestrator. “We’re not actually running the container orchestrator for you. That’s what everyone else wants to do,” he said. The company aims to provide add-ons and fill gaps for those who just want to run Docker or Kubernetes themselves. “We’re going to give you stuff you don’t have; we’re not going to give you new versions of the stuff you do have.”
Second: It’s cross-platform.
“We don’t care if it’s Docker or Kubernetes or both. We have that in common with companies like Rancher and Hashicorp, but most people are saying you have to bet on one or the other horse, and I think that’s creating a bit of confusion in the market. We want to say to customers, ‘You need to be thinking about the application, not the infrastructure.’ Part of making that easy for you is we’re going to work with the main open source orchestration platforms,” he said.
Third: Integrating monitoring into everything. Most monitoring tools are acquired separately, he explained, and must then be integrated into your own systems.
“We believe that by doing some of the integration ourselves into orchestration, continuous deployment, networking, security, we can add a lot more value to the customer.”
The Cloud Native Computing Foundation and Docker are sponsors of The New Stack.