Split’s goal is to power the world’s product decisions, so we are always looking for new ways to enable our customers to be more data-driven. We believe in the power of metrics and strive to make sure our users have a holistic view of their experiments’ impact. Split’s existing metrics…
Starting today, Split is an initial partner for Segment’s new Developer Center, making it even easier to send user events and customer identities tracked in Segment to Split for analysis in a feature-driven context.
In this post, we will cover key experimentation concepts, including how to choose the Overall Evaluation Criteria (OEC) for your experiments and how to increase the sensitivity of those metrics through metric filtering and metric capping.
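To make the two techniques concrete, here is a minimal, generic sketch (not Split's implementation): capping truncates extreme per-user values at a threshold so a few outliers don't dominate a metric's variance, and filtering restricts a metric to only the events relevant to the hypothesis.

```python
def cap_values(values, cap):
    """Metric capping: truncate extreme values at a threshold so a few
    outliers do not inflate the metric's variance (and widen its
    confidence intervals)."""
    return [min(v, cap) for v in values]

def filter_events(events, predicate):
    """Metric filtering: keep only the events that are actually
    relevant to the metric under test."""
    return [e for e in events if predicate(e)]

# One user spent $950; capping at $100 keeps the comparison sensitive.
spend = [5, 12, 8, 950, 7]
print(cap_values(spend, 100))  # [5, 12, 8, 100, 7]

# Only count purchase events toward a purchase metric.
events = [{"type": "purchase"}, {"type": "page_view"}, {"type": "purchase"}]
print(len(filter_events(events, lambda e: e["type"] == "purchase")))  # 2
```

Both transformations reduce noise before the statistical comparison runs, which is what makes the metric more sensitive to real treatment effects.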
Announcing the release of Split Workspaces, giving our customers the ability to easily separate feature flag management and feature experimentation across their products and applications. This comes as a direct result of our experience onboarding our customers onto the Split experimentation platform over the years.
At Split, we are always improving how we help our customers make product decisions more efficiently across the full application stack. In this blog, I will discuss best practices for achieving statistically significant results in your experiments and how Split can help you accomplish this.
We built Split’s feature experimentation platform with this fundamental assumption: the data you capture to measure and understand your customer experience is collected across many touch points, and any tool you use to release feature flags and measure impact must be able to capture data from all of them.
As adoption of feature flags has spread, product teams have learned to release new features incrementally, exposing new functionality slowly to the user base with a careful eye toward ensuring the right customer experience. Feature flags have also made it far easier to automate the measurement of product metrics.
David Martin, Senior Solution Engineer at Split, gives a demonstration of Split’s feature experimentation capabilities with a real-world example. Imagine you’re a trip planner who takes customers through the Tour du Mont Blanc, and you want to optimize your customers’ experiences. Where do you start? By observing your customers’ reactions to the changes you’re testing to see which one fares better in the end.
Here at Split, we are devoted to the continuous improvement of our user experience. Split’s customer success team has been busy gathering feedback, and our engineers have been running experiments to home in on some important enhancements.