
Hypothesis Driven Development

If a software engineer wakes up in the morning and hears something that sounds like rain outside her window, she thinks it might be raining. Her hypothesis, that it's raining, drives her decision to look out her window. She knows ahead of time that if she sees rain, it's actually raining, whereas if she sees a sprinkler running, it's not. When she actually looks out her window and sees rain instead of a sprinkler, she decides she should bring an umbrella to work.

When she gets to the office, if she notices people aren't clicking on her website's CTA button, she thinks the button might need to be more visible. Her hypothesis, that the button isn't visible enough, drives her development process. When she wants to verify that the button's visibility is causing the low conversion rate, she creates a new UI with a larger CTA button and tests it alongside her previous UI (probably using A/B testing). She knows ahead of time that if she sees a statistically significant increase in clicks from the users who see the bigger button, that was the problem, whereas if she doesn't see an increase, it wasn't. When she actually runs the test and sees a statistically significant increase in conversions, she decides to roll out the larger button to all users.

This is experimentation: using the scientific method to solve problems, test hypotheses, and create effective solutions. We do it all the time, often without even realizing it. In fact, many recent technology-related processes use this model: agile, DevOps, and the lean startup business model are all based on the experimentation mentality. Hypothesis driven development (HDD) is just the name we give to experimenting on the software development process. The exact steps of hypothesis driven development are:
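An A/B test like the one above only works if each user is consistently shown the same UI on every visit. Here's a minimal sketch of deterministic bucketing by hashing the user ID; the function and experiment names are illustrative, not Split's API (in practice, a feature flag platform handles this for you):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "big-cta-button") -> str:
    """Deterministically bucket a user into control or treatment.

    Hashing the user ID together with the experiment name keeps each
    user's assignment stable across sessions, and keeps assignments
    independent across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable number in 0-99
    return "treatment" if bucket < 50 else "control"  # 50/50 split
```

Because the assignment is a pure function of the user ID, a user who saw the bigger button yesterday will see it again today, which is what makes the click data trustworthy.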
  1. Set up user tracking. Running experiments is impossible without tracking, so make sure that you have a way to track the impact of any changes or tests. A common way to track experiments is with a feature toggle based experimentation platform like Split.
  2. Define a hypothesis. When you define your hypothesis, you’ll also define your validation criteria – aka, how much evidence you’ll need to make a decision. Ensuring you know up front what outcomes would cause you to make which decisions will prevent a significant degree of bias. Ask, “what will tell me this new product or feature is successful?”
  3. Test the hypothesis. Set up the test and run it. In the software development world, most tests take longer than the moment it takes to look out your window to see if it’s raining: you’ll need to run the experiment for a while in order to gather enough data for statistical significance.
  4. Act on the experiment results. Once you have a statistically significant result, act on it. Roll out, or roll back, the experimental feature. Note what worked and what didn’t, and keep running experiments.
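The "statistically significant" check in steps 3 and 4 can be done with a standard two-proportion z-test on the conversion counts from each variant. A minimal sketch, with made-up example numbers (real platforms compute this for you):

```python
from math import sqrt

def two_proportion_z(clicks_a: int, users_a: int,
                     clicks_b: int, users_b: int) -> float:
    """Z-statistic for the difference between two conversion rates."""
    p_a = clicks_a / users_a
    p_b = clicks_b / users_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (clicks_a + clicks_b) / (users_a + users_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    return (p_b - p_a) / se

# Hypothetical data: control got 200 clicks from 5,000 users,
# the bigger button got 260 clicks from 5,000 users.
z = two_proportion_z(200, 5000, 260, 5000)
significant = abs(z) > 1.96  # two-sided test at alpha = 0.05
```

With these example counts the z-statistic clears 1.96, so the engineer would roll the larger button out to everyone; if it hadn't, she'd roll the change back and form a new hypothesis.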
Turning every new feature proposition into an experiment means all your feature releases are driven by data. You’ll know what your users want, and how the form of that desire shifts over time. You’ll know what features your users use and which they don’t, which they want and which they only say they want. And because you know these things, you’ll be able to create the best product for your customers.

Get Split Certified

Split Arcade includes product explainer videos, clickable product tutorials, manipulatable code examples, and interactive challenges.

Deliver Features That Matter, Faster. And Exhale.

Split is a feature management platform that attributes insightful data to everything you release. Whether your team is looking to test in production, perform gradual rollouts, or experiment with new features, Split ensures your efforts are safe, visible, and highly impactful. What a Release. Get going with a free account, schedule a demo to learn more, or contact us for more information.