Type II Error

A type II error is one of two statistical errors that can result from a hypothesis test.

A type II error (type 2 error) occurs when a false null hypothesis is accepted, which is why it is also known as a false negative. The test rejects the alternative hypothesis even though the alternative is actually true and the observed effect is not a chance occurrence.

In any hypothesis testing situation, the null hypothesis states that the subject of the test is not significantly different in the experimental versus the control group, and so any difference observed is the result of chance. The alternative hypothesis, by contrast, states that there is a significant difference.

As a result of this setup, there are four possible outcomes from any hypothesis test:

  1. We reject a false null hypothesis,
  2. We reject a true null hypothesis,
  3. We accept a true null hypothesis,
  4. Or we accept a false null hypothesis.

1 and 3 are correct inferences; 2 is a type I error (a false positive), and 4 is a type II error (a false negative).
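
To make the four outcomes concrete, here is a small illustrative sketch in Python (not part of the original glossary; the function name is ours) that labels each combination of "is the null hypothesis true?" and "did the test reject it?":

```python
# Illustrative only: enumerate the four possible outcomes of a hypothesis test.
def outcome(null_is_true: bool, null_is_rejected: bool) -> str:
    if null_is_rejected and null_is_true:
        return "type I error (false positive)"
    if not null_is_rejected and not null_is_true:
        return "type II error (false negative)"
    return "correct inference"

for null_is_true in (True, False):
    for null_is_rejected in (True, False):
        label = outcome(null_is_true, null_is_rejected)
        print(f"H0 true={null_is_true}, H0 rejected={null_is_rejected} -> {label}")
```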

When Are Type II Errors Acceptable?

Since it’s statistically impossible to eliminate both type I and type II errors entirely, individuals performing experiments must decide which type of error is more acceptable to them and structure their experiments to minimize the less acceptable one.

As an example of when a type II error might be more acceptable than a type I error, consider email spam filtering. The alternative hypothesis is that the email is spam, and thus the null hypothesis is that the email is not spam. Committing a type I error means marking a legitimate email as spam and preventing its normal delivery. Committing a type II error means marking a spam email as legitimate and sending it to the user’s inbox.

A significant number of type II errors points to an ineffective spam filter, but a significant number of type I errors means the spam filter is overall doing more harm than good by preventing users from seeing legitimate communications. Therefore, the goal of email spam filtering systems should be to bring down the number of type II errors while keeping the number of type I errors at near-zero.
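
As a rough sketch (with invented emails rather than a real filter), the snippet below tallies the two error types from a list of (actually spam, flagged as spam) decisions, yielding the false positive and false negative rates a filter designer would try to balance:

```python
# Illustrative only: count type I and type II errors for a hypothetical spam filter.
# H0 = "the email is not spam"; flagging a legitimate email is a type I error,
# delivering a spam email to the inbox is a type II error.
def error_rates(decisions):
    """decisions: iterable of (is_actually_spam, flagged_as_spam) pairs."""
    decisions = list(decisions)
    type_i = sum(1 for actual, flagged in decisions if flagged and not actual)
    type_ii = sum(1 for actual, flagged in decisions if actual and not flagged)
    legit = sum(1 for actual, _ in decisions if not actual)
    spam = sum(1 for actual, _ in decisions if actual)
    return type_i / legit, type_ii / spam  # false positive rate, false negative rate

fp_rate, fn_rate = error_rates([(True, True), (True, False), (False, False), (False, False)])
print(f"type I (false positive) rate: {fp_rate:.2f}, type II (false negative) rate: {fn_rate:.2f}")
```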

By contrast, in a biometric security system, such as a fingerprint scanner on a mobile phone or facial recognition software on a personal computer, the alternative hypothesis is “the person is not on the device’s list of authorized users” and thus the null hypothesis is “the person is on the list of authorized users”.

In this situation, a significant number of type II errors would mean an insecure device, whereas a significant number of type I errors would mean only the minor inconvenience of users having to demonstrate their authorization another way (such as with a password or PIN code). Therefore, the system should be designed to bring down the number of type I errors while keeping the number of type II errors at near-zero.
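
The same tradeoff can be sketched with invented match scores (not a real biometric model): raising the acceptance threshold pushes type II errors (impostors accepted) toward zero at the cost of more type I errors (authorized users asked to re-authenticate):

```python
# Illustrative only: made-up similarity scores for a hypothetical scanner.
authorized_scores = [0.91, 0.88, 0.95, 0.80, 0.93]  # genuine, authorized users
impostor_scores = [0.40, 0.62, 0.71, 0.55, 0.30]    # unauthorized users

for threshold in (0.5, 0.7, 0.9):
    type_i = sum(score < threshold for score in authorized_scores)   # false rejections
    type_ii = sum(score >= threshold for score in impostor_scores)   # false acceptances
    print(f"threshold={threshold:.1f}: type I errors={type_i}, type II errors={type_ii}")
```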

Minimize Type II Errors

Because type I and type II errors arise from the design of the test, minimizing either one requires altering that design. To reduce the number of type I errors, lower the significance level (the p-value threshold), which is equivalent to raising the confidence level. To reduce the number of type II errors, increase the statistical power of the test: increase the sample size (or, in some cases, run the experiment for a longer time), or raise the significance level.
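
The sketch below uses the standard normal approximation for a two-sided, two-sample z-test (the effect size and standard deviation are assumed values, not data from any real experiment) to show how the type II error rate β shrinks as either the per-group sample size or the significance level α grows:

```python
# Illustrative only: approximate the type II error rate (beta) of a two-sided,
# two-sample z-test of means, to show how sample size and alpha affect it.
from math import sqrt
from statistics import NormalDist

def type_ii_error_rate(delta, sigma, n_per_group, alpha=0.05):
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)      # critical value at significance level alpha
    se = sigma * sqrt(2 / n_per_group)     # standard error of the difference in means
    shift = delta / se                     # standardized true effect
    power = (1 - z.cdf(z_crit - shift)) + z.cdf(-z_crit - shift)
    return 1 - power                       # beta = 1 - power

# More samples or a larger alpha both lower beta, at different costs:
# gathering more data versus tolerating more type I errors.
for n in (50, 200, 800):
    for alpha in (0.01, 0.05, 0.10):
        beta = type_ii_error_rate(delta=0.2, sigma=1.0, n_per_group=n, alpha=alpha)
        print(f"n={n:4d}  alpha={alpha:.2f}  beta={beta:.3f}")
```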

Switch It On With Split

The Split Feature Data Platform™ gives you the confidence to move fast without breaking things. Set up feature flags and safely deploy to production, controlling who sees which features and when. Connect every flag to contextual data, so you can know if your features are making things better or worse and act without hesitation. Effortlessly conduct feature experiments like A/B tests without slowing down. Whether you’re looking to increase your releases, to decrease your MTTR, or to ignite your dev team without burning them out, Split is both a feature management platform and a partnership to revolutionize the way work gets done. Schedule a demo or explore our feature flag solution at your own pace to learn more.
