
Glossary

Winning Variation

Winning variation refers to the version of an asset that outperforms other variations in achieving a specific goal, such as higher click-through rates, increased conversions, or greater user engagement. It is identified through statistical analysis of experimental data.

What is a Winning Variation?

A winning variation in the context of online experimentation refers to a specific version or variant of a web page, email, advertisement, or any other digital asset that outperforms other variations in achieving predefined success metrics or goals. These success metrics can vary depending on the objective of the experiment, such as higher conversion rates, increased user engagement, longer session durations, faster response times, or lower cost to serve.

Variant:

In the context of online experimentation, a variant refers to any version of a digital experience being tested against other versions. Variants typically differ in one or more elements, such as layout, design, copy, color scheme, or underlying functionality. Each variant is presented to users in a randomized fashion to assess its impact on user behavior and performance metrics, as sketched in the example below.
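To make the randomization concrete, here is a minimal Python sketch of deterministic, hash-based variant assignment. The function and experiment names are hypothetical and not part of any particular SDK; the idea is simply that hashing the user and experiment together spreads traffic roughly evenly across variants while keeping each user's assignment stable across visits.

import hashlib

def assign_variant(user_id: str, variants: list, experiment: str) -> str:
    # Hash (experiment + user) so assignment is stable per user but
    # roughly uniform across the list of variants.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: split traffic between the control and two challengers.
print(assign_variant("user-123", ["control", "variant_a", "variant_b"], "cta-copy-test"))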

Control Variation:

The control variation, often denoted as the baseline or original version, serves as the reference point against which other variants are compared. It represents the existing or default version of the digital experience being tested. The control variation helps assess the relative effectiveness of new variations by providing a benchmark for comparison.
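Because the control is the benchmark, a variant's performance is often reported as lift relative to it. A minimal illustration, with made-up numbers:

def relative_lift(control_rate: float, variant_rate: float) -> float:
    # Relative improvement of a variant over the control, e.g. 0.10 = +10%.
    return (variant_rate - control_rate) / control_rate

# Control converts at 4.0%, the variant at 4.6%: a 15% relative lift.
print(f"{relative_lift(0.040, 0.046):.1%}")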

Success Metrics:

Success metrics are quantifiable measures used to evaluate the performance and effectiveness of different variations in online experimentation. These metrics depend on the specific goals of the experiment and may include conversion rate, click-through rate, bounce rate, revenue per user, average session duration, response time, cost to serve, or any other relevant metric indicative of success.
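As a rough illustration, several of these metrics reduce to simple ratios over raw event counts. The helpers below use hypothetical names and are not tied to any specific analytics schema:

def conversion_rate(conversions: int, visitors: int) -> float:
    # Share of visitors who completed the goal action.
    return conversions / visitors if visitors else 0.0

def click_through_rate(clicks: int, impressions: int) -> float:
    # Share of impressions that resulted in a click.
    return clicks / impressions if impressions else 0.0

def revenue_per_user(total_revenue: float, users: int) -> float:
    # Average revenue attributed to each exposed user.
    return total_revenue / users if users else 0.0

print(conversion_rate(230, 5_000))      # 0.046
print(click_through_rate(410, 12_000))  # ~0.034
print(revenue_per_user(9_800.0, 5_000)) # 1.96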

Statistical Significance:

Statistical significance is a measure used to determine whether observed differences in performance between variations are likely due to genuine effects or simply random chance. In online experimentation, statistical significance helps ensure that results are reliable and meaningful by indicating the probability that the observed differences are not merely the result of variability or noise in the data.
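For example, when the success metric is a conversion rate, one common textbook check is a two-proportion z-test. The sketch below assumes SciPy is available and is a generic illustration, not a description of any particular experimentation engine's statistics:

from math import sqrt
from scipy.stats import norm  # assumes SciPy is installed

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    # Two-sided p-value for the difference between two conversion rates.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

# Control: 200 of 5,000 users converted; variant: 260 of 5,000.
p = two_proportion_p_value(200, 5_000, 260, 5_000)
print(f"p-value = {p:.4f}")  # a value below 0.05 is typically called significant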

Winner Declaration:

The process of declaring a winning variation involves analyzing experiment results to identify the variant that demonstrates superior performance based on predefined success metrics and statistical significance. Once a winning variation is determined, it is often implemented as the new standard version to optimize the digital experience’s performance and achieve desired business objectives.
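Putting the pieces together, a simplified decision rule might require both a positive lift over the control and statistical significance before declaring a winner. This is a hypothetical sketch, not an actual platform policy:

def declare_winner(variant_name: str, lift: float, p_value: float,
                   alpha: float = 0.05) -> str:
    # Declare a winner only when the lift is positive and the result is
    # statistically significant at the chosen alpha level.
    if p_value < alpha and lift > 0:
        return f"{variant_name} wins: +{lift:.1%} lift (p = {p_value:.3f})"
    return f"No winner yet: lift {lift:+.1%}, p = {p_value:.3f}"

# Using the numbers from the significance example above (4.0% vs. 5.2%):
print(declare_winner("variant_b", lift=0.30, p_value=0.004))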

Switch It On With Split

The Split Feature Data Platform™ gives you the confidence to move fast without breaking things. Set up feature flags and safely deploy to production, controlling who sees which features and when. Connect every flag to contextual data, so you can know if your features are making things better or worse and act without hesitation. Effortlessly conduct feature experiments like A/B tests without slowing down. Whether you’re looking to increase your releases, decrease your MTTR, or ignite your dev team without burning them out, Split is both a feature management platform and a partnership to revolutionize the way work gets done. Schedule a demo or explore our feature flag solution at your own pace to learn more.


Want to Dive Deeper?

We have a lot to explore that can help you understand feature flags. Learn more about benefits, use cases, and real-world applications that you can try.

Create Impact With Everything You Build

We’re excited to accompany you on your journey as you build faster, release safer, and launch impactful products.

Want to see how Split can measure impact and reduce release risk? 

Book a demo