
Glossary

False Positive Rate

What is a false positive rate, and how is it calculated? How does it compare to other measures of test accuracy, like sensitivity and specificity?

What is false positive rate?

False positive rate (FPR) is a measure of accuracy for a test, be it a medical diagnostic test, a cybersecurity machine learning model, or something else. In technical terms, the false positive rate is defined as the probability of falsely rejecting the null hypothesis.

False Positive Definition

Imagine you have an anomaly detection test of some variety. Maybe it’s a medical test that checks for the presence or absence of a disease; maybe it’s a classification-based machine learning algorithm. Either way, there are two possible real-life truths: either the thing-being-tested-for is true, or it isn’t. The person is sick, or they aren’t; the image is a dog, or it isn’t. Because of this, there are also two possible test outcomes: a positive test result (the test predicts the person is sick or the image is a dog) and a negative test result (the test predicts the person is not sick or the image is not a dog).

Because there are two possible truths and two possible test results, we can create what’s called a confusion matrix with all possible outcomes.

Here are the possibilities:

  • True Positive: the truth is positive, and the test predicts a positive result. The person is sick, and the test accurately reports this.
  • True Negative: the truth is negative, and the test predicts a negative result. The person is not sick, and the test accurately reports this.
  • False Negative: the truth is positive, but the test predicts a negative. The person is sick, but the test inaccurately reports that they are not. Also called a Type II error in statistics.
  • False Positive: the truth is negative, but the test predicts a positive. The person is not sick, but the test inaccurately reports that they are. Also called a Type I error in statistics. False positives are concerning because of how misleading they can be, and they are very common in medical examinations like breast cancer screening mammography.
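The four cells above can be counted directly from a list of ground-truth labels and test results. Here is a minimal sketch in Python; the function name and the example labels are illustrative, not from the article:

```python
def confusion_counts(actual, predicted):
    """Count confusion-matrix cells.

    actual/predicted: lists of booleans where True means the condition
    is present (e.g. the person is sick) and False means it is absent.
    Returns (TP, TN, FP, FN).
    """
    tp = sum(a and p for a, p in zip(actual, predicted))          # true positives
    tn = sum((not a) and (not p) for a, p in zip(actual, predicted))  # true negatives
    fp = sum((not a) and p for a, p in zip(actual, predicted))    # false positives
    fn = sum(a and (not p) for a, p in zip(actual, predicted))    # false negatives
    return tp, tn, fp, fn

# Hypothetical results for six tested people:
actual    = [True, True, False, False, False, True]
predicted = [True, False, True, False, False, True]
print(confusion_counts(actual, predicted))  # (2, 2, 1, 1)
```

Every tested case lands in exactly one of the four cells, so the four counts always sum to the total number of cases.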

Measuring the Accuracy of a Test

By calculating ratios between these values, we can quantitatively measure the accuracy of our tests.

The false positive rate is calculated as FP / (FP + TN), where FP is the number of false positives and TN is the number of true negatives (FP + TN being the total number of actual negatives). It’s the probability that a false alarm will be raised: that a positive result will be given when the true value is negative.
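Written out in Python, with hypothetical counts (90 actual negatives, 9 of them wrongly flagged):

```python
def false_positive_rate(fp, tn):
    # Share of actual negatives that the test wrongly flags as positive.
    return fp / (fp + tn)

# 90 healthy people: 9 false alarms, 81 correctly cleared.
print(false_positive_rate(fp=9, tn=81))  # 0.1
```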

There are many other possible measures of test accuracy and error rate. Here is a short rundown of the most common ones:

The false negative rate — also called the miss rate — is the probability that a true positive will be missed by the test. It’s calculated as FN / (FN + TP), where FN is the number of false negatives and TP is the number of true positives (FN + TP being the total number of actual positives).

The true positive rate (TPR, also called sensitivity) is calculated as TP / (TP + FN). TPR is the probability that an actual positive will test positive.

The true negative rate (also called specificity) is the probability that an actual negative will test negative. It is calculated as TN / (TN + FP).
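The four rates above can be computed together. A small sketch, again with hypothetical counts; note that sensitivity and the miss rate are complements, as are specificity and the false positive rate:

```python
def rates(tp, tn, fp, fn):
    # All four rates from the confusion-matrix cell counts.
    return {
        "FPR": fp / (fp + tn),  # false positive rate (false alarms)
        "FNR": fn / (fn + tp),  # false negative rate (miss rate)
        "TPR": tp / (tp + fn),  # true positive rate (sensitivity)
        "TNR": tn / (tn + fp),  # true negative rate (specificity)
    }

# Hypothetical screening: 50 actual positives, 90 actual negatives.
r = rates(tp=40, tn=81, fp=9, fn=10)
print(r["TPR"], r["FNR"])  # 0.8 0.2  (TPR + FNR == 1)
print(r["TNR"], r["FPR"])  # 0.9 0.1  (TNR + FPR == 1)
```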

If you’re on the patient side of a medical test being analyzed like this, you may care a bit more about two additional metrics: positive predictive value and negative predictive value.

Positive predictive value is the likelihood that, if you have gotten a positive test result, you actually have the disease. It’s calculated as TP / (TP + FP). Conversely, negative predictive value is the likelihood that, if you have gotten a negative test result, you actually don’t have the disease. It’s calculated as TN / (TN + FN).
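These two patient-side metrics condition on the test result rather than on the truth. A quick sketch using the same hypothetical counts as above (40 TP, 9 FP, 81 TN, 10 FN):

```python
def ppv(tp, fp):
    # Of everyone who tested positive, what fraction truly is positive?
    return tp / (tp + fp)

def npv(tn, fn):
    # Of everyone who tested negative, what fraction truly is negative?
    return tn / (tn + fn)

print(round(ppv(40, 9), 3))   # 0.816
print(round(npv(81, 10), 3))  # 0.89
```

Unlike sensitivity and specificity, these values shift with how common the condition is in the tested population, which is why a test with a low false positive rate can still yield many false alarms when the disease is rare.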

Switch It On With Split

The Split Feature Data Platform™ gives you the confidence to move fast without breaking things. Set up feature flags and safely deploy to production, controlling who sees which features and when. Connect every flag to contextual data, so you can know if your features are making things better or worse and act without hesitation. Effortlessly conduct feature experiments like A/B tests without slowing down. Whether you’re looking to increase your releases, to decrease your MTTR, or to ignite your dev team without burning them out, Split is both a feature management platform and a partnership to revolutionize the way work gets done. Schedule a demo to learn more.

Want to Dive Deeper?

We have a lot to explore that can help you understand feature flags. Learn more about benefits, use cases, and real world applications that you can try.


Create Impact With Everything You Build

We’re excited to accompany you on your journey as you build faster, release safer, and launch impactful products.

Want to see how Split can measure impact and reduce release risk? 

Book a demo