Continuing to question the traditional wisdom that software updates should be tested in their own sandbox environment, Charity Majors, CEO of observability software provider Honeycomb.io, spoke last week at ChaosConf, Gremlin's chaos engineering conference, about the benefits of testing in the production environment.
“Testing in prod has gotten a bad rap,” she told the audience, referring to the conventional wisdom that running untested code on live users is asking for trouble — and customer dissatisfaction.
She explained that the initial negative reaction to the idea rests on a false dichotomy: it assumes there are only two options for software development, which is, at heart, an exercise in repeated testing. One is to test software entirely in its own “sandboxed” environment. The other is to push the new code to the cloud for all users at once.
But there are multiple techniques, such as A/B testing and canary testing, that let you try code on a small portion of the user base and collect metrics before rolling the update out system-wide.
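A canary rollout of this kind can be sketched in a few lines. This is a minimal illustration, not Honeycomb's or Gremlin's implementation; the function names, the routing rule, and the 5% traffic fraction are all assumptions chosen for the example.

```python
import random

# Fraction of live traffic routed to the new code path (assumed value).
CANARY_FRACTION = 0.05

def handle_with_stable_version(request_id: int) -> str:
    """Hypothetical stable (current) code path."""
    return f"v1:{request_id}"

def handle_with_new_version(request_id: int) -> str:
    """Hypothetical new (canary) code path under evaluation."""
    return f"v2:{request_id}"

def handle_request(request_id: int) -> str:
    """Send a small slice of requests to the canary; the rest stay on stable.

    Metrics from the two paths can then be compared before a full rollout.
    """
    if random.random() < CANARY_FRACTION:
        return handle_with_new_version(request_id)
    return handle_with_stable_version(request_id)
```

In a real system the routing decision would usually be sticky per user or per session rather than per request, so one user doesn't bounce between versions.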
What we think of as the typical software upgrade cycle is due for an upgrade, she said, especially as our architectures move towards container-based distributed systems, which can’t be easily managed by the old tools.
“Distributed systems are incredibly hostile to being cloned or imitated, or monitored or staged,” she said. “Trying to mirror your staging environment to production is a fool’s errand. Just give up.”
Most architectures are way more complicated than the standard LAMP stack. “Finding the right level of detail to ask the right questions is challenging,” she said.
With distributed systems, what can go wrong is this infinitely long tail of things that will probably never happen, but one day they do. Photos load slowly for some people but not for others. “How are you going to find that in a staging environment? Spoiler alert: You’re not.”
This is why observability, Honeycomb's specialty, is so important: you can't predict where the failure will occur. Maintaining distributed systems involves more than attaching a standard set of monitoring agents to your application.
Majors’ views were echoed earlier in the day by Amazon Vice President, and all-around scalable microservices expert, Adrian Cockcroft. “Your failure model will not include the outlier that breaks everything,” he said.
Problems with distributed systems will not show up in a dashboard, Majors noted. The temptation is to create a new dashboard entry that would capture the problem the next time it occurs. The problem is, that exact problem will probably never occur again. You need to take a more open-ended, exploratory approach, she said.
“Monitoring itself is not enough for complex systems,” she said. “Dashboards are a relic.”
Events, Not Metrics
“The hard part is figuring out where the problem lies. The hard part is not debugging the code. The hard part is figuring out which part of the code to debug,” Majors said.
She talked about how most problems in distributed systems involve high cardinality. Cardinality is the number of unique elements in a set; a perfectly high-cardinality field is one where every element is unique, such as a user ID. The answers to almost all problems in distributed systems come from high-cardinality data: a very small subset of factors that somehow worked together to create trouble.
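The value of high-cardinality fields is easiest to see with a concrete slice of event data. The events and field names below (`user_id`, `build_id`, `latency_ms`) are invented for illustration; the point is that grouping slow events by a high-cardinality field surfaces the small subset responsible.

```python
from collections import Counter

# Hypothetical wide events, one per request, each carrying
# high-cardinality fields alongside the measurement.
events = [
    {"user_id": "u1", "build_id": "b7", "latency_ms": 40},
    {"user_id": "u2", "build_id": "b7", "latency_ms": 38},
    {"user_id": "u3", "build_id": "b9", "latency_ms": 900},
    {"user_id": "u3", "build_id": "b9", "latency_ms": 950},
]

# Filter to the bad behavior, then break it down by a
# high-cardinality dimension to find what the slow requests share.
slow = [e for e in events if e["latency_ms"] > 500]
by_build = Counter(e["build_id"] for e in slow)
print(by_build.most_common(1))  # → [('b9', 2)]
```

A pre-aggregated metric like average latency would hide this entirely; only the raw, per-event data lets you slice by any field after the fact.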
A traditional control theory definition of observability is that it is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. In this light, the trick would be to gather all the data you would need so that you could ask any question, without having to write more code to harvest more data.
Cockcroft’s presentation accorded with this point as well. For something to be “observable” means you can fully predict its behavior just from the metrics it provides. By having no state, microservices are inherently observable. Monoliths are harder to model because they have so many potential states.
The hard part of debugging any distributed system is finding the right question to ask. As anyone who runs them knows, distributed systems are never fully operational; there are always some issues threatening to take them down.
You ask questions. Based on the answers you get, you ask more questions, and follow the breadcrumbs. “You have to be able to ask questions of your raw events,” she said.
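This breadcrumb-following style of investigation can be sketched as successive filters over raw events, where each answer shapes the next question. The event records and field names here are hypothetical.

```python
# Hypothetical raw request events with endpoint, region, and status fields.
events = [
    {"endpoint": "/photos", "region": "eu", "status": 500},
    {"endpoint": "/photos", "region": "us", "status": 200},
    {"endpoint": "/login",  "region": "eu", "status": 200},
]

# Question 1: which endpoints are returning errors?
failing = {e["endpoint"] for e in events if e["status"] >= 500}

# Question 2 (prompted by the first answer): for those endpoints,
# which regions are actually affected?
regions = {
    e["region"]
    for e in events
    if e["endpoint"] in failing and e["status"] >= 500
}

print(failing, regions)  # → {'/photos'} {'eu'}
```

Each query is cheap and ad hoc; nothing about it had to be anticipated in a dashboard built beforehand.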
This is an AWESOME piece by @joab_jackson.. clear, crisp overview of the industrywide lurch towards distributed systems, and some of the ripple effects as experienced by tooling and teams.
I'd like to sharpen up a couple small points. https://t.co/DzYyB5EOpY
— Charity Majors (@mipsytipsy) October 4, 2018
Gremlin assisted with the travel costs of covering this event.