In this post, we will talk about key experimentation concepts including how to choose your Overall Evaluation Criteria (OEC) for your experiments and how to increase the sensitivity of those metrics through metric filtering and metric capping.
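Metric capping, mentioned above, is a variance-reduction technique: extreme values in a metric's upper tail are clipped to a chosen percentile so a handful of outliers don't dominate the comparison. As a rough illustration (not Split's implementation — the helper name, percentile choice, and nearest-rank method here are all assumptions for the sketch):

```python
def cap_metric(values, percentile=0.99):
    """Cap extreme values at the given percentile to reduce variance.

    Hypothetical helper sketching metric capping: values above the
    cap are replaced by the cap itself (winsorizing the upper tail),
    using a simple nearest-rank percentile.
    """
    ordered = sorted(values)
    # Nearest-rank index of the chosen percentile, clamped to the list.
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    cap = ordered[idx]
    return [min(v, cap) for v in values]
```

For example, capping per-user revenue at the 50th percentile turns `[1, 2, 3, 1000]` into `[1, 2, 3, 3]`, shrinking the variance a single whale introduces and making a treatment-vs-control difference detectable with fewer users.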
Announcing the release of Split Workspaces, giving our customers the ability to easily separate feature flag management and feature experimentation across their products and applications. This comes as a direct result of our experience onboarding our customers onto the Split experimentation platform over the years.
At Split, we are always improving how we can help our customers make these decisions more efficiently across the full application stack. In this blog, I will discuss best practices to achieve statistically significant results in your experiments and how Split can help you accomplish this.
We built Split’s feature experimentation platform with this fundamental assumption: the data you capture to measure and understand your customer experience is collected across many touch points, and any tool you use to release feature flags and measure impact must be able to capture data from all of them.
As adoption of feature flags has spread, product teams have learned to release new features incrementally, exposing new functionality slowly to the user base with a careful eye towards ensuring the right customer experience. Feature flags have made automating the process of measuring product metrics much easier.
David Martin, Senior Solution Engineer at Split, gives a demonstration of Split’s feature experimentation capabilities with a real-world example. Imagine you’re a trip planner who takes customers through the Tour du Mont Blanc, and you want to optimize your customers’ experiences. Where do you start? By observing your customers’ reactions to the changes you’re testing to see which one fares better in the end.
Here at Split, we are devoted to the continuous improvement of our user experience. Split’s customer success team has been busy gathering feedback, and our engineers have been running experiments to home in on some important enhancements.
One of the best things about building product at Split is getting to use our experimentation and analytics capabilities to understand how our customers are both discovering new functionality and engaging with the product. In this blog post, we walk through how we used Split to add Starring capabilities to our own product.
Feature flags are just one piece of the puzzle when it comes to adopting a data-driven feature release strategy. By running controlled experiments, your team can make informed decisions before rolling a feature out to the rest of your user base.