

A/B Testing


What is A/B testing, and how does it work?

A/B testing, otherwise known as split testing, is the process of testing two different versions of a web page or product feature in order to optimize conversion rate or improve another business metric. The two versions can be very similar, with only a change in button color, or very different, with a complete change in how a feature behaves.

A/B testing is based on the scientific method, and the process closely mirrors it. To start, gather relevant data on your current features and identify which ones have the most potential to improve key business metrics. Once you have that baseline data, look at how customers are using those features and hypothesize a variation that could improve them.

Since the next step is to build the new version, group candidate improvements with similar expected user experience gains and prioritize them by how easy they are to build. Then pick a test to start with and build the new version. The old version is what scientists call the “control” and what we’ll call Version A; the new version is the “experiment”: Version B.

With front-end A/B testing, teams typically assign Version A and Version B to different sets of users and measure whether either group converts at a higher rate with statistical significance. But there is a different kind of A/B testing which happens at a much deeper level.
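To make “statistical significance” concrete, here is a minimal sketch of a two-proportion z-test comparing the two groups’ conversion rates. The function name and the visitor and conversion counts are illustrative only; experimentation platforms typically run this kind of analysis for you.

```typescript
// Minimal sketch: two-proportion z-test for comparing conversion rates.
// All numbers below are illustrative, not real experiment data.

function twoProportionZTest(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number
): { zScore: number; significantAt95: boolean } {
  const rateA = conversionsA / visitorsA;
  const rateB = conversionsB / visitorsB;
  // Pooled conversion rate under the null hypothesis of "no difference".
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB)
  );
  const zScore = (rateB - rateA) / standardError;
  // |z| > 1.96 corresponds to p < 0.05 for a two-sided test.
  return { zScore, significantAt95: Math.abs(zScore) > 1.96 };
}

// Example: 480 of 10,000 visitors converted on Version A, 560 of 10,000 on Version B.
console.log(twoProportionZTest(480, 10_000, 560, 10_000));
```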

A/B Testing With Feature Flags

While typical A/B testing happens on the front end, choosing which version of a page to show to website visitors, there is a way to A/B test your product features as well: using feature flags.

Feature flags allow development teams to release a feature to only a subset of users, which satisfies the necessary step of creating two versions of a feature. All that’s left to do is to integrate the team’s analytics platform with the feature flag management system, such that the team can correlate the users’ behavior with which version of the feature they used.
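As a rough sketch of how that can look in practice, the example below uses the Split JavaScript SDK’s getTreatment and track calls to assign a user a version of a feature and record a metric event. The SDK key, user key, flag name, and event name are placeholders, not values from this article.

```typescript
import { SplitFactory } from '@splitsoftware/splitio';

// Placeholder SDK key and user key; substitute your own values.
const factory = SplitFactory({
  core: {
    authorizationKey: 'YOUR_SDK_KEY',
    key: 'user-123',
  },
});
const client = factory.client();

client.on(client.Event.SDK_READY, () => {
  // Ask the flag which version this user should see.
  const treatment = client.getTreatment('new_checkout_flow');

  if (treatment === 'on') {
    // Version B: render the new experience.
  } else {
    // Version A (control): render the existing experience.
  }

  // Record the behavior you want to move, so the analytics side can
  // correlate outcomes with the version each user saw.
  client.track('user', 'checkout_completed', 1);
});
```

Tying the recorded event back to the treatment each user received is what lets the team compare the two versions the same way a front-end A/B test would.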

Once those pieces are in place, the A/B testing process can be used to measure the expected change in user experience whenever a new feature or code change is implemented. Development teams can then look at this information and adjust the feature accordingly: if the change is significantly negative, they can find out what’s wrong and roll back the feature so it performs as it did before the test; if it’s positive, they can release the product feature to a larger percentage of their customers.

Switch It On With Split

The Split Feature Data Platform™ gives you the confidence to move fast without breaking things. Set up feature flags and safely deploy to production, controlling who sees which features and when. Connect every flag to contextual data, so you can know whether your features are making things better or worse and act without hesitation. Effortlessly conduct feature experiments like A/B tests without slowing down. Whether you’re looking to increase your release cadence, decrease your MTTR, or ignite your dev team without burning them out, Split is both a feature management platform and a partner in revolutionizing the way work gets done. Schedule a demo or explore our feature flag solution at your own pace to learn more.


Want to Dive Deeper?

We have a lot to explore that can help you understand feature flags. Learn more about benefits, use cases, and real-world applications that you can try.

Create Impact With Everything You Build

We’re excited to accompany you on your journey as you build faster, release safer, and launch impactful products.

Want to see how Split can measure impact and reduce release risk? 

Book a demo