Optimizely’s Claire Vo Talks Successful A/B Testing at Scale

When building front-end software, it can be tricky to figure out just what works. As with any page layout endeavor, from the Web to the supermarket checkout line tabloids, there are plenty of nooks and crannies to explore with headlines, graphics, and colors. Any software shop earning money on the Web likely already knows about “A/B” testing: the practice of making subtle changes to your page design, gathering metrics on how well each change converts visitors, and comparing those results against the existing version of the site.
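To make the mechanic concrete, here is a minimal, hypothetical sketch in TypeScript (not Optimizely’s API): each visitor is deterministically bucketed into a control or treatment variant, and conversion rates are compared per variant. The names assignVariant, Experiment, and the hashing scheme are illustrative assumptions.

```typescript
// Hypothetical sketch of the core A/B mechanic: deterministically assign each
// visitor to a variant, then compare conversion rates per variant.

type Variant = "control" | "treatment";

interface Experiment {
  id: string;
  trafficSplit: number; // fraction of visitors sent to the treatment, e.g. 0.5
}

// Stable hash so a returning visitor always sees the same variant.
function hashToUnitInterval(input: string): number {
  let h = 0;
  for (let i = 0; i < input.length; i++) {
    h = (h * 31 + input.charCodeAt(i)) >>> 0;
  }
  return h / 0xffffffff;
}

function assignVariant(exp: Experiment, visitorId: string): Variant {
  return hashToUnitInterval(`${exp.id}:${visitorId}`) < exp.trafficSplit
    ? "treatment"
    : "control";
}

// Naive effectiveness comparison: conversion rate per variant.
function conversionRate(conversions: number, visitors: number): number {
  return visitors === 0 ? 0 : conversions / visitors;
}
```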
Now that such testing regimes are commonplace in the enterprise, every team eventually runs into the exhausting existential crisis that is test management. Gathering metrics for a single test is one thing, but what happens when the entire enterprise is pushing tests across thousands of sites all the time?
Claire Vo is a Silicon Valley success story: She sold her startup Experiment Engine to Optimizely in 2017. Her winning formula was to solve this exact problem for enterprises: managing experiments at scale across thousands of sites and measuring the results so the business can act on them.
“I think it’s interesting when you think of experimentation as risky because in my mind, experimentation is actually the opposite of that. It’s something that helps you reduce risk. The riskiest thing I can imagine a business does is just guessing,” said Vo. “Experimentation actually de-risks the process of product development because it helps you validate hypotheses before you roll out something to all of your customers that might have a negative impact.”
Vo said that the requirement for enterprise success with front-end A/B testing is the establishment of a pipeline, similar to the one that likely already exists for other software. These tests must be managed and measured, like any tests, across the lifecycle of the product being tested, and as software assets in their own right. In practice, that implies continuous integration pipelines.
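One way to treat experiments as software assets is to keep their definitions in version control and validate them in CI like any other code. The sketch below assumes a hypothetical ExperimentDefinition shape and validateExperiment check; it is not a specific vendor’s schema.

```typescript
// Minimal sketch, assuming experiments are checked into the repo as data and
// validated in a CI step like any other software asset.

interface ExperimentDefinition {
  id: string;
  hypothesis: string;        // what the change is expected to improve
  primaryMetric: string;     // e.g. "checkout_conversion"
  variants: string[];        // must include a control
  startDate: string;         // ISO date
  endDate: string;           // ISO date; forces a decision point
  owner: string;             // who reviews the result
}

function validateExperiment(def: ExperimentDefinition): string[] {
  const errors: string[] = [];
  if (!def.variants.includes("control")) {
    errors.push(`${def.id}: no control variant defined`);
  }
  if (def.variants.length < 2) {
    errors.push(`${def.id}: needs at least two variants`);
  }
  if (new Date(def.endDate) <= new Date(def.startDate)) {
    errors.push(`${def.id}: end date must be after start date`);
  }
  if (!def.primaryMetric) {
    errors.push(`${def.id}: no primary metric to evaluate against`);
  }
  return errors;
}

// A CI job could run this over every definition and fail the build on errors,
// keeping the experiment backlog as reviewable as application code.
```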
“Fundamentally there are a couple things you want to have in place before you even start experimentation. The first [is] this underlying infrastructure of data collection. Can you collect the metrics and analytics you need to evaluate the performance of your application or business based on the behavior of your users? If you don’t have the data, you can’t make decisions based off of it. So the first thing I would say was really important to do is to instrument your data,” said Vo.
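Instrumenting your data before experimenting can be as simple as recording the user events you will later judge experiments by. The sketch below assumes a hypothetical collection endpoint and event shape; any analytics pipeline would serve the same purpose.

```typescript
// Illustrative sketch of "instrument your data first": record the user events
// you will later use to evaluate experiments, before any test is running.

interface AnalyticsEvent {
  visitorId: string;
  name: string;                      // e.g. "add_to_cart", "checkout_complete"
  properties?: Record<string, string | number>;
  timestamp: string;
}

async function trackEvent(
  visitorId: string,
  name: string,
  properties?: Record<string, string | number>
): Promise<void> {
  const event: AnalyticsEvent = {
    visitorId,
    name,
    properties,
    timestamp: new Date().toISOString(),
  };
  // Hypothetical collection endpoint; swap in whatever pipeline you use.
  await fetch("/api/analytics/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Usage: call at the conversion points you care about, so baseline metrics
// exist before the first experiment launches.
// trackEvent("visitor-123", "checkout_complete", { revenue: 42 });
```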
Vo went on to detail the processes and plans teams should put in place to succeed with user testing and experimentation, and also spoke about her own path as an entrepreneur, offering advice for others looking to build a company from scratch.
In this Edition:
0:31: How do you ensure experimentation is safe and useful?
1:36: What infrastructure is needed to manage all these tests?
5:26: How do you keep order when you’ve got so many experiments running?
9:32: What kind of person should lead experiments inside an organization? Product manager? A team lead?
13:37: What advice do you have for entrepreneurs starting a software company?
15:20: What is Techtonica?
Feature image via Pixabay.