Measure Causal Impact with A/B Testing
Everything can be measured. If you want to know the click-through rate of your call-to-action button or the bounce rate of your landing page, you need look no further than our Analytics tool. But if you want not just numbers but information that can drive decisions, you need to do more than measure: you need to test. Want to know which landing page design is best for driving conversions? Which part of your ecommerce site is causing your high rate of cart abandonment? Whether the button’s color or its placement on the page is dragging down your conversion rate? Without the ability to run experiments, there’s no way to know.
Fortunately, there is a solution to these problems: A/B testing. With A/B testing, you can systematically improve just about any metric that matters: marketers can build an optimal marketing strategy to increase conversions, developers can deliver a better user experience in real time with better-calibrated machine learning models, and businesses in general can improve their bottom lines.
How does this work?
The A/B Testing Process
In order to measure causation (instead of mere correlation), you need three things: a control group (which sees the old version), an experimental group (which sees the new version), and random assignment into those groups from a sufficiently large sample. Unlike most scientific experiments, an A/B test can also compare several new versions at once, though if you are varying multiple elements together, you may want to use multivariate testing to get more actionable results.
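The random assignment step is usually implemented by hashing a user ID together with the experiment name, so each user lands in a stable, effectively random bucket. As a minimal sketch (the function name and variant labels here are illustrative, not any particular tool’s API):

```python
import hashlib

def assign_group(user_id: str, experiment: str,
                 variants=("control", "experimental")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the user ID with the experiment name yields a stable,
    effectively random assignment: the same user always sees the
    same version, and each variant receives roughly equal traffic.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Because assignment is a pure function of the ID, a returning visitor never flips between versions mid-experiment, which would otherwise contaminate the comparison.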
When defining which segments of your userbase should see the new feature, you can refine by all sorts of criteria. Release it only to internal users for QA testing on the real infrastructure. Release it by ID to the users who opted in for early access to new features. Release it to ten percent of the total userbase, or to a percentage of the userbase in New York specifically. A testing tool like Split can give you this level of granularity in your test run.
Once you have your control and experimental groups, you can run tests to see which version improves your metrics (raises conversion rate, lowers average support ticket count, reduces page load time, etc.). Don’t discount a small improvement in the winning variation over the old one; a 10 millisecond improvement in search engine result load time can pay for the entire salary of the engineer who made it. Focus on the impact of the change on your bottom line.
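Deciding whether a variant really improved a metric, rather than just fluctuating by chance, is a standard statistical comparison. For a conversion-rate metric, one common approach is a two-proportion z-test; a minimal self-contained sketch (the numbers in the test are made up for illustration):

```python
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test: did variant B's conversion rate differ from A's?

    conv_* are conversion counts, n_* are visitor counts.
    Returns (z, p_value); a small p-value (say, below 0.05) suggests the
    observed difference is unlikely to be random noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail prob
    return z, p_value
```

For example, 500 conversions out of 10,000 control visitors versus 600 out of 10,000 experimental visitors yields a clearly significant result, while 500 versus 505 does not; a good testing tool runs this kind of analysis for you automatically.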
Using this process to select the winning variation can answer causal questions and help you achieve the results that actually matter: more clicks, more conversions, and ultimately more revenue.