With ChatGPT, Honeycomb Users Simply Say What They’re Looking for
Right about now every company is trying to figure out how to gain a competitive edge with ChatGPT. For observability platform provider Honeycomb, the OpenAI technology promises to make querying easier for its users.
The company has embedded a natural language querying capability, called Query Assistant, into its user interface.
So when a system goes awry and the operations team jumps on the Honeycomb platform to find out what's up, they can ask questions in conversational English, rather than spending time translating their ideas into a SQL-like query that the platform can understand.
The chat assist may make the platform easy enough to interrogate that even non-specialists can use it to understand code behavior, promised Charity Majors, CTO of Honeycomb, in a phone conversation with The New Stack.
Prior to Query Assistant, Honeycomb users constructed queries using Honeycomb's own visual, SQL-like language, with the usual SQL clauses such as GROUP BY, WHERE, LIMIT, and so forth.
Query Assistant offers a single blank box where users are prompted to type their question in plain language, such as "Can you show me all the slow endpoints by status code?"
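The translation Query Assistant performs can be pictured as mapping a plain-English question onto the clauses of a structured query. Here is a deliberately crude keyword-based sketch standing in for the LLM step — the query structure and column names are hypothetical illustrations, not Honeycomb's actual API:

```python
# Illustrative only: a toy mapping from a natural-language question to a
# SQL-like query spec. The spec shape and columns are assumptions.

def question_to_query(question: str) -> dict:
    """Rough keyword matching, standing in for what the LLM infers."""
    query = {"calculations": [], "filters": [], "group_by": []}
    q = question.lower()
    if "slow" in q:
        # "Slow" suggests looking at high request latency.
        query["calculations"].append({"op": "P95", "column": "duration_ms"})
    if "endpoint" in q:
        query["group_by"].append("endpoint")
    if "status code" in q:
        query["group_by"].append("status_code")
    if "error" in q:
        query["filters"].append({"column": "status_code", "op": ">=", "value": 500})
    return query

spec = question_to_query("Can you show me all the slow endpoints by status code?")
print(spec["group_by"])  # ['endpoint', 'status_code']
```

The real system, of course, delegates this inference to the model rather than keyword rules; the point is that the output lands in the same GROUP BY/WHERE structure users previously built by hand.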
Majors prefers the ChatGPT approach over the traditional use of AI in operations, often dubbed AIOps. Through its question-and-answer format, ChatGPT educates users about how the underlying system works, whereas AIOps may just provide a potential answer, with no indication of how it got there.
“Anytime you’re going to rely on AI to make the decision, it’s almost impossible for humans to follow the trail of breadcrumbs and figure out why it did what it did, and then take better actions on that,” Majors said.
Tracking the State
Overall, it took Honeycomb only about six weeks to ship this feature.
Honeycomb chose ChatGPT as a chat assistant because it offered the best mix of features, Majors said. No AI experts were needed: Honeycomb's in-house software engineers were easily able to work with the technology.
The company had to do a fair amount of work to prepare materials — such as defining keywords, data types, and schemas — that ChatGPT could use to interpret the queries, given that ChatGPT can't, as of now, hold state. Supporting an extended conversation about a problem means collecting a bundle of context to ship over with each query.
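Because the model holds no state between calls, each request has to carry its own context. A minimal sketch of that pattern follows — the schema, column names, and prompt layout are assumptions for illustration, not Honeycomb's actual payload:

```python
# Sketch of re-sending context with every request to a stateless LLM API.
# The schema and prompt layout are illustrative assumptions.

SCHEMA = {
    "columns": {
        "duration_ms": "float",
        "endpoint": "string",
        "status_code": "int",
        "service_name": "string",
    }
}

def build_prompt(question: str, history: list) -> str:
    """Assemble the schema, prior turns, and the new question into one
    prompt, since the model remembers nothing from previous calls."""
    schema_lines = "\n".join(
        f"- {name} ({dtype})" for name, dtype in SCHEMA["columns"].items()
    )
    history_block = "\n".join(history) or "(none)"
    return (
        "You translate questions into queries over this dataset.\n"
        f"Columns:\n{schema_lines}\n"
        f"Conversation so far:\n{history_block}\n"
        f"Question: {question}\n"
    )

prompt = build_prompt("Where are the errors?", ["User asked about slow endpoints."])
print(prompt)
```

Every turn of the "conversation" is really a fresh, fully self-contained request; the growing history is what creates the illusion of memory.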
Nonetheless, Majors was surprised by how much ChatGPT could infer from the input. And it produced no hallucinations (false facts conjured by the model). "I just feel like with numbers, ChatGPT is a little bit better than with words," Majors said.
Query Assistant is not Honeycomb's first user aid. Another feature, BubbleUp, lets the user pick a subset of a dataset where something of interest appears to be taking place; the platform then computes all the dimensions that lie inside that bubble. But while BubbleUp makes the inspection process easier, it still requires the user to be somewhat conversant with the system's own domain knowledge.
Query Assistant, Majors said, “is very much for the people who aren’t quite sure where to start.”
“It jumps over that whole needing-to-know SQL, and you’re just like, ‘I just deployed something, where are the errors?’ It leaps you to something that might not be exactly what you want, but then you can tweak it,” Majors said.
Query Assistant is a free feature for Honeycomb users, currently offered as an experiment. Admins can turn it off. No customer data is shared with OpenAI or used by Honeycomb internally.