- Set up user tracking. You can't run experiments without measurement, so make sure you have a way to track the impact of any change you test. A common approach is a feature-toggle-based experimentation platform such as Split.
- Define a hypothesis. When you define your hypothesis, you'll also define your validation criteria – that is, how much evidence you'll need to make a decision. Knowing up front which outcomes will drive which decisions prevents a significant degree of bias. Ask, "what will tell me this new product or feature is successful?"
- Test the hypothesis. Set up the test and run it. In the software development world, most tests take longer than the moment it takes to look out your window and see if it's raining: you'll need to run the experiment for a while to gather enough data for statistical significance.
- Act on the experiment results. Once you have a statistically significant result, act on it. Roll out, or roll back, the experimental feature. Note what worked and what didn’t, and keep running experiments.
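
The loop above can be sketched in a few dozen lines of Python. This is a hypothetical, minimal illustration – not Split's actual API – using hash-based bucketing for the feature toggle, a simple conversion tracker, and a two-proportion z-test to decide whether the result is statistically significant:

```python
# Hypothetical sketch of the experiment loop: toggle, track, test, act.
# Names and numbers here are illustrative, not tied to any real platform.
import hashlib
import math
import random

def in_treatment(user_id: str, rollout_pct: float = 50.0) -> bool:
    """Deterministically bucket a user into the experimental variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout_pct

class Experiment:
    """Track impressions and conversions per variant."""
    def __init__(self):
        # variant -> [impressions, conversions]
        self.counts = {"control": [0, 0], "treatment": [0, 0]}

    def track(self, user_id: str, converted: bool):
        variant = "treatment" if in_treatment(user_id) else "control"
        self.counts[variant][0] += 1
        if converted:
            self.counts[variant][1] += 1

    def p_value(self) -> float:
        """Two-tailed two-proportion z-test on conversion rates."""
        (n1, c1), (n2, c2) = self.counts["control"], self.counts["treatment"]
        pooled = (c1 + c2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (c2 / n2 - c1 / n1) / se
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Simulate gathering data: treatment converts at ~15%, control at ~10%
# (made-up rates standing in for real user behavior).
random.seed(0)
exp = Experiment()
for i in range(2000):
    uid = f"user-{i}"
    rate = 0.15 if in_treatment(uid) else 0.10
    exp.track(uid, converted=random.random() < rate)

# Act on the result: roll out if significant, otherwise hold or roll back.
alpha = 0.05
decision = "roll out" if exp.p_value() < alpha else "keep running or roll back"
```

The hash-based bucketing matters: it keeps each user in the same variant on every visit, so the data isn't polluted by users seeing both experiences.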