
Glossary

Bayesian Statistics

Bayesian statistics employs Bayesian probability theory to model and update uncertainties about hypotheses. It involves combining prior beliefs with new evidence, using Bayes’ theorem, to obtain updated and more informed probability distributions.

What is Bayesian Statistics?

Bayesian statistics is a branch of statistics grounded in Bayesian probability theory. It provides a framework for updating beliefs or probabilities about a hypothesis as new evidence or data becomes available. In contrast to classical (frequentist) statistics, which treats probabilities as long-run frequencies or limiting proportions, Bayesian statistics views probabilities as measures of belief or certainty.

Bayesian Probability

Probability in the Bayesian context reflects the degree of belief or certainty about the occurrence of an event. It is updated as new information is acquired, incorporating prior knowledge and observed data through Bayes’ theorem.

Prior Probability

The initial probability assigned to a hypothesis before considering any new evidence. It represents existing knowledge or beliefs about the likelihood of an event before incorporating data.

Likelihood

The probability of observing the data at hand under a particular hypothesis. It describes the compatibility between the observed data and the hypothesis.

Posterior Probability

The updated probability of a hypothesis after taking into account both prior knowledge and new evidence. It is calculated using Bayes’ theorem, combining the prior probability and the likelihood.

Bayes’ Theorem

A fundamental formula in Bayesian statistics that calculates the posterior probability of a hypothesis given prior knowledge and observed data. It is expressed as P(H|D) = P(D|H) * P(H) / P(D), where P(H|D) is the posterior probability, P(D|H) is the likelihood, P(H) is the prior probability, and P(D) is the probability of the observed data.
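The formula can be applied directly with plain arithmetic. The sketch below uses hypothetical disease-screening numbers (1% prevalence, 95% sensitivity, 5% false-positive rate) purely for illustration; the function names and figures are not from the glossary itself.

```python
def posterior(prior, likelihood, evidence):
    """Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D)."""
    return likelihood * prior / evidence

# Hypothetical screening example (illustrative numbers only):
p_h = 0.01              # P(H): prior probability of having the disease
p_d_given_h = 0.95      # P(D|H): likelihood of a positive test if diseased
p_d_given_not_h = 0.05  # false-positive rate

# P(D) via the law of total probability
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

p_h_given_d = posterior(p_h, p_d_given_h, p_d)
print(round(p_h_given_d, 3))  # ~0.161: a positive test is far from conclusive
```

Note how the small prior P(H) = 0.01 keeps the posterior modest despite a strong likelihood, which is the essence of combining prior beliefs with evidence.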

Posterior Distribution

The probability distribution of the parameter(s) of interest after incorporating prior knowledge and observed data. It represents the updated beliefs about the parameter(s).

Prior Distribution

The probability distribution representing the initial beliefs about the parameter(s) before observing any data. It is based on existing knowledge or subjective assessments.

Conjugate Prior

In Bayesian statistics, a prior distribution that, when combined with a particular likelihood function, yields a posterior distribution in the same family of probability distributions. Conjugate priors simplify calculations and result in closed-form solutions.
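A classic closed-form case is the Beta prior with a binomial likelihood: the posterior is again a Beta distribution, obtained by simple addition. The example below is a minimal sketch with hypothetical coin-flip data; the helper name is my own.

```python
# Beta(a, b) prior + binomial data (k successes in n trials)
# gives a Beta(a + k, b + n - k) posterior -- no integration needed.
def beta_binomial_update(a, b, successes, trials):
    return a + successes, b + (trials - successes)

# Hypothetical example: uniform Beta(1, 1) prior, 7 heads in 10 flips.
a_post, b_post = beta_binomial_update(1, 1, 7, 10)
print(a_post, b_post)              # posterior is Beta(8, 4)
print(a_post / (a_post + b_post))  # posterior mean = 8/12
```

The closed-form update is exactly why conjugate priors are convenient: the posterior stays in the same family, so updating reduces to bookkeeping on the parameters.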

Markov Chain Monte Carlo (MCMC)

A computational technique widely used in Bayesian statistics to approximate the posterior distribution of parameters. MCMC methods, such as the Metropolis-Hastings algorithm and Gibbs sampling, are valuable for complex models where analytical solutions are challenging.
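A random-walk Metropolis-Hastings sampler can be written in a few lines. This is a bare-bones sketch, not a production sampler: the target here is an unnormalized standard normal chosen only so the output is easy to check, and the step size and seed are arbitrary.

```python
import math
import random

def target(x):
    """Unnormalized target density (standard normal, for illustration)."""
    return math.exp(-0.5 * x * x)

def metropolis_hastings(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)  # symmetric random-walk proposal
        # Acceptance ratio; normalizing constants cancel, which is
        # what lets MCMC work with unnormalized posteriors.
        accept_prob = min(1.0, target(proposal) / target(x))
        if rng.random() < accept_prob:
            x = proposal
        samples.append(x)
    return samples

samples = metropolis_hastings(20000)
mean = sum(samples) / len(samples)
```

In real Bayesian workflows `target` would be the unnormalized posterior (prior times likelihood), and early samples are usually discarded as burn-in before summarizing the chain.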

Bayesian Model Averaging

A technique in Bayesian statistics that considers multiple models and their associated parameter values to make predictions or inferences. It accounts for model uncertainty by weighing the contribution of each model based on its posterior probability.
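Mechanically, the averaged prediction is just each model's prediction weighted by its posterior model probability. The sketch below uses made-up posterior probabilities and predictions purely to show the arithmetic.

```python
# Bayesian model averaging sketch: hypothetical models, each with a
# posterior model probability (summing to 1) and a point prediction.
models = [
    {"posterior_prob": 0.6, "prediction": 10.0},
    {"posterior_prob": 0.3, "prediction": 12.0},
    {"posterior_prob": 0.1, "prediction": 20.0},
]

bma_prediction = sum(m["posterior_prob"] * m["prediction"] for m in models)
print(bma_prediction)  # 0.6*10 + 0.3*12 + 0.1*20 = 11.6
```

An implausible model (low posterior probability) still contributes, but its influence is discounted, which is how BMA propagates model uncertainty into the final prediction.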

Bayesian Hypothesis Testing

A method of hypothesis testing within the Bayesian framework, where the focus is on updating beliefs about the relative plausibility of different hypotheses given the observed data.
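One common tool for this is the Bayes factor: the ratio of the likelihoods of the data under two competing hypotheses. The sketch below compares two simple point hypotheses about a coin's bias using hypothetical data (7 heads in 10 flips); it is illustrative only.

```python
from math import comb

def binomial_likelihood(p, k, n):
    """P(k heads in n flips) for a coin with heads probability p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

k, n = 7, 10  # hypothetical observed data

# Bayes factor for H2: p = 0.7 against H1: p = 0.5
bf = binomial_likelihood(0.7, k, n) / binomial_likelihood(0.5, k, n)
print(round(bf, 2))  # ~2.28: the data mildly favor p = 0.7
```

Multiplying the prior odds by the Bayes factor gives the posterior odds, so with equal prior odds the data shift belief modestly toward H2 rather than delivering a binary accept/reject verdict.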

Subjective Bayesianism

An approach to Bayesian statistics that emphasizes the incorporation of subjective beliefs and opinions into the analysis. It recognizes that prior probabilities may be subjective and can be based on individual or expert judgment.
