Become more data-driven around feature releases: use Split impression data in mParticle, and use mParticle event data to evaluate treatments in Split.
Use measures of impact, absolute value, and error margins to analyze whether an A/B test was a success.
At Split, we believe in the power of metrics and are always striving to improve the ways we help our users make more data-driven product decisions. In a previous post, we talked about the importance of understanding the impact of a new feature release via key and guardrail metrics. With…
With today’s release, you can now create a range of metrics from a single event, enabling deeper analysis of your experiment results. These additional insights into how people use your application are vital to shaping your product development priorities. For example, if you wanted to…
Split’s goal is to power the world’s product decisions, so we are always looking for new ways to enable our customers to be more data-driven. We believe in the power of metrics and strive to make sure our users have a holistic view of their experiments’ impact. Split’s existing metrics…
In this post, we will talk about key experimentation concepts, including how to choose the Overall Evaluation Criteria (OEC) for your experiments and how to increase the sensitivity of those metrics through metric filtering and metric capping.
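To make the capping idea concrete, here is a minimal sketch (not Split's implementation; the function name and percentile choice are illustrative assumptions) of capping a heavy-tailed per-user metric at a chosen percentile, which limits the influence of outliers and tightens error margins:

```python
import numpy as np

def cap_metric(values, percentile=99):
    """Cap per-user metric values at the given percentile.

    Outliers (e.g., a single user with very high revenue) inflate
    variance and make it harder to reach statistical significance;
    capping trades a small amount of bias for much lower variance.
    """
    cap = np.percentile(values, percentile)
    return np.minimum(values, cap)

# Hypothetical heavy-tailed revenue-per-user sample: one extreme outlier
revenue = np.array([1.0, 2.0, 3.0, 2.5, 500.0])
capped = cap_metric(revenue, percentile=90)
```

Choosing the percentile is a judgment call: too low distorts the metric, too high leaves the outlier problem in place.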
At Split, we are always improving how we help our customers make product decisions more efficiently across the full application stack. In this post, I will discuss best practices for achieving statistically significant results in your experiments and how Split can help you accomplish this.