Release Note

October 30th, 2020


  • Multiple Comparison Correction
    You can now apply a statistical correction that controls the False Discovery Rate when making multiple comparisons in the same experiment. The significance threshold can be adjusted for higher or lower confidence. With the default threshold of 5%, on average at least 95% of the metrics flagged as statistically significant reflect real detected impacts, and this holds no matter how many metrics you track.
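    The release note does not name the exact correction method, so as an illustrative assumption, here is a minimal sketch of the Benjamini-Hochberg procedure, the most common way to control the False Discovery Rate across many simultaneous metric comparisons:

    ```python
    # Sketch of the Benjamini-Hochberg FDR procedure (an assumption: the
    # product may use a different correction method).

    def benjamini_hochberg(p_values, alpha=0.05):
        """Return a list of booleans: True where the metric is significant
        after FDR correction at level alpha."""
        m = len(p_values)
        # Sort p-values ascending, remembering each one's original position.
        order = sorted(range(m), key=lambda i: p_values[i])
        # Find the largest rank k with p_(k) <= (k / m) * alpha.
        max_k = 0
        for rank, i in enumerate(order, start=1):
            if p_values[i] <= (rank / m) * alpha:
                max_k = rank
        # Reject (flag as significant) the max_k smallest p-values.
        significant = [False] * m
        for rank, i in enumerate(order, start=1):
            if rank <= max_k:
                significant[i] = True
        return significant

    # Four metrics tested in one experiment; the first three survive
    # correction, the fourth does not.
    print(benjamini_hochberg([0.01, 0.02, 0.03, 0.5]))
    # → [True, True, True, False]
    ```

    Note that a naive per-metric 5% threshold would expect 5% of *all* null metrics to show up as false positives; the correction instead bounds the expected share of false positives *among the flagged metrics*, which is why the guarantee scales to any number of metrics.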