Honeycomb, the San Francisco-based startup aimed at helping software engineers solve the trickiest infrastructure problems, has announced two features designed to give users more control over their data.
The company maintains that logging and monitoring tools often don’t provide enough data to find and fix the most vexing problems. Honeycomb, by contrast, was designed from the ground up to debug live production software, consume event data from any source with any data model and encourage collaborative problem-solving.
“Honeycomb takes a complex situation and lets you get down to the 30 things in common about everyone experiencing this problem,” said Aneel Lakhani, Honeycomb vice president of marketing.
Its users often cannot predict their problems because they’re emergent and a key factor is how their end users behave, he said. They need to investigate something they’ve never seen before and that won’t appear on a dashboard — especially from aggregated data.
So it offers event-based debugging and tracing, and lets users ask questions about their systems.
“An event is anything in your system you think is relevant: You can put all kinds of information. It doesn’t matter if it’s high cardinality: unique user ID, IP address, endpoint, operating system, shopping cart ID — whatever you think is relevant. Then you can do analysis on it,” Lakhani explained, calling it a business intelligence tool for software engineers.
Cardinality refers to the number of distinct values a field can take. The more possible combinations, the more places a bug might be hiding.
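To make the idea concrete, here is a minimal sketch of what a "wide" event with high-cardinality fields might look like, and how cardinality can be measured. The field names and values are hypothetical, not Honeycomb's actual schema.

```python
# Hedged sketch: a wide event carries many fields, some with very high
# cardinality (user IDs, IPs), some with low cardinality (endpoints).
# Field names are invented for illustration.
events = [
    {"user_id": "u-1042", "ip": "203.0.113.7",  "endpoint": "/cart",  "latency_ms": 1240},
    {"user_id": "u-1042", "ip": "203.0.113.7",  "endpoint": "/cart",  "latency_ms": 95},
    {"user_id": "u-9931", "ip": "198.51.100.2", "endpoint": "/login", "latency_ms": 1820},
]

def cardinality(events, field):
    """Number of distinct values a field takes across a set of events."""
    return len({e[field] for e in events})

print(cardinality(events, "user_id"))   # → 2
print(cardinality(events, "endpoint"))  # → 2
```

In a real system, a field like `user_id` can take millions of distinct values, which is what makes it expensive for traditional metrics tools but useful for isolating exactly who is affected by a problem.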
Honeycomb users can ask questions to narrow down the possibilities, Lakhani said, such as: “Show me everyone experiencing latency of more than one second on this API call. Tell me the spread across geographies and then across device types.”
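The kind of slice-and-group question Lakhani describes can be sketched in plain Python over in-memory events. This is an illustration of the query shape only, not Honeycomb's actual query API.

```python
# Hedged sketch of "show me everyone with latency > 1s on this call,
# broken down by geography and device type" -- plain Python, not
# Honeycomb's query language. Event fields are invented.
from collections import Counter

events = [
    {"latency_ms": 1500, "geo": "EU", "device": "ios"},
    {"latency_ms": 400,  "geo": "US", "device": "android"},
    {"latency_ms": 2100, "geo": "EU", "device": "android"},
]

slow = [e for e in events if e["latency_ms"] > 1000]   # latency over one second
by_geo = Counter(e["geo"] for e in slow)
by_device = Counter(e["device"] for e in slow)

print(by_geo)     # → Counter({'EU': 2})
print(by_device)  # → Counter({'ios': 1, 'android': 1})
```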
That means looking at a lot of data, and scanning more data degrades performance. Up to this point, Honeycomb has had only one speed: fast, Lakhani said. And although that trade-off applies to Honeycomb as much as to logging and monitoring systems, the company has decided to announce tiered storage, with a lower price for the lower tier.
It’s something customers have been asking for, he said, because it wasn’t cost-effective to store a year or more of data.
“The further back you go, the lower the performance, but rather than being opaque about how that works, we’ve decided to put the knobs in the hands of our customers,” he said.
Using a slider, a customer can decide on the amount of data they want to be high performance and put the rest in the lower tier.
“We’re letting people draw that line themselves,” he said, adding that with other products, you don’t necessarily know where that line is.
Users also can decide how long events should be retained, per data set — the amount you need in high performance might be different for, say, log-in transactions versus database transactions.
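A per-dataset retention policy of the kind described above might be modeled like this. The configuration format, dataset names and values are all invented for illustration; this is not Honeycomb's actual settings schema.

```python
# Hypothetical per-dataset retention settings: each dataset draws its
# own line between the high-performance tier and the cheaper tier.
# Names and values are invented, not Honeycomb's configuration format.
retention = {
    "login-transactions":    {"high_perf_days": 30, "low_tier_days": 365},
    "database-transactions": {"high_perf_days": 7,  "low_tier_days": 90},
}

def tier_for(dataset, age_days, cfg=retention):
    """Which storage tier an event of a given age would land in."""
    c = cfg[dataset]
    if age_days <= c["high_perf_days"]:
        return "high-performance"
    if age_days <= c["low_tier_days"]:
        return "lower-cost"
    return "expired"

print(tier_for("login-transactions", 10))     # → high-performance
print(tier_for("database-transactions", 10))  # → lower-cost
```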
Lakhani conceded the company’s bottom line could take a hit when customers keep only a small amount of data in the higher-priced tier, but said it will have to work that out.
Right now, the lower tier is eight to nine times less expensive, depending on the scale, Lakhani said.
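A back-of-the-envelope calculation shows how that ratio plays out. The prices here are relative units invented for illustration; only the 8–9x ratio comes from the article.

```python
# Illustrative blended-cost arithmetic using the quoted ratio: the lower
# tier is "eight to nine times less expensive" (8.5x used here as a
# midpoint). Prices are relative units, not Honeycomb's actual pricing.
HIGH_TIER_PRICE = 1.0
LOW_TIER_PRICE = HIGH_TIER_PRICE / 8.5

def blended_cost(total_units, high_fraction):
    """Cost of storage when a fraction is kept in the high-performance tier."""
    high = total_units * high_fraction * HIGH_TIER_PRICE
    low = total_units * (1 - high_fraction) * LOW_TIER_PRICE
    return high + low

# Keeping 20% of data hot costs roughly 29% of an all-high-tier bill.
print(round(blended_cost(100, 0.2), 2))
print(round(blended_cost(100, 1.0), 2))
```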
Thanks to optimization work done along the way, the company is also reducing the cost of the core high-performance product by more than 20 percent, and it is reworking contracts with existing customers to pass on the savings.
The company also announced a beta for its Secure Tenancy product, aimed at companies subject to various compliance regulations. It’s designed to handle personally identifiable information, though that’s not typically the type of data organizations send to Honeycomb, Lakhani said. Some organizations want to encrypt everything.
It allows users two choices:
- Encrypt all event data before sending it to Honeycomb, while maintaining total control over the keys.
- Keep event data completely on-premises, while still using Honeycomb’s SaaS observability platform for querying and analysis.
In the second case, the only thing that leaves the premises is a hash of the data, and you totally control it, Lakhani said.
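The idea of sending only a hash can be sketched with a keyed hash (HMAC): the key and the raw values never leave your infrastructure, and only the hashed stand-ins go out. This illustrates the general technique, not Secure Tenancy's actual protocol.

```python
# Hedged sketch of the second option: replace sensitive values with a
# keyed hash before anything leaves the premises. The key stays on-site.
# This is a general illustration, not Secure Tenancy's implementation.
import hashlib
import hmac

SECRET_KEY = b"key-held-on-premises"  # never leaves your infrastructure

def outbound_field(value: str) -> str:
    """Replace a sensitive value with a keyed hash before it is sent."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

event = {"user_id": "u-1042", "latency_ms": 1240}
sanitized = {**event, "user_id": outbound_field(event["user_id"])}
# Equal inputs hash to equal outputs, so the remote side can still group
# and filter on the field without ever seeing the original user ID.
```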