When Split is developing new features, enabling our customers to be more data-driven is always top of mind. We believe in the power of metrics, and Split's existing metrics impact page allows you to make powerful product and engineering decisions and understand their effect on your business goals.
Split's metric details and trends view provides additional insight into the why behind your metric's impact. This view also helps you diagnose any problems with your experiment. For example, in the chart below the trend shows that the error margin has decreased over time and that the impact is positive over the baseline, both great indicators for an experiment.
We have two exciting new updates to the metric trends view: metric values over time, and multiple treatments compared on the same graph.
Impact versus value
In addition to the metric impact over time, you can now also view the metric value for each treatment in your split over time. The line chart in the 'impact over time' tab allows you to easily visualize the difference in impact, as well as how the error margin has changed over the life of a split version. The line chart represents the cumulative impact and is based on all the data we have received up until the last calculation update. The 'values over time' tab allows you to debug or spot-check any extreme fluctuations in your metrics over the course of the split version.
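To make "cumulative" concrete, here is a minimal sketch of how such a trend line could be computed: each point reflects all data received up to that calculation update, not just that day's data. The daily figures and field layout below are invented for illustration and are not Split's actual data model.

```python
# Hypothetical daily rollups: (day, baseline_sum, baseline_users,
# treatment_sum, treatment_users). All numbers are made up.
daily = [
    (1, 400.0, 100, 430.0, 100),
    (2, 390.0, 100, 425.0, 100),
    (3, 410.0, 100, 440.0, 100),
]

b_sum = b_n = t_sum = t_n = 0
for day, bs, bn, ts, tn in daily:
    # Accumulate everything seen so far, then recompute the impact,
    # so each point uses the full history up to that update.
    b_sum += bs; b_n += bn
    t_sum += ts; t_n += tn
    baseline_mean = b_sum / b_n
    treatment_mean = t_sum / t_n
    impact_pct = 100.0 * (treatment_mean - baseline_mean) / baseline_mean
    print(f"day {day}: cumulative impact {impact_pct:+.2f}%")
```

Because every point pools all prior data, the cumulative line typically gets smoother and its error margin narrower as the version runs, which is exactly the behavior the trend chart is meant to surface.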
We recommend reporting on the metric impact and its error margin at each review period day, typically the 7th or 14th day of a split version, and using the metric value trend line as secondary information.
Interpreting metric values over time
When viewing experiment results as a relative percentage impact, it can be difficult to understand what that percentage equates to. Showing the metric value in absolute terms over time provides this context and makes the percentage impact more meaningful. Whether you are digging into your key metrics or guardrail metrics, your product metrics or engineering metrics, you can visualize their true performance.
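The relationship between the two views is simple arithmetic. The sketch below uses made-up means to show how an absolute difference in metric values translates into the relative percentage impact:

```python
# Hypothetical example: the same result expressed in absolute and
# relative terms. Both numbers are invented for illustration.
baseline_mean = 4.0   # e.g. average page views per user, baseline treatment
treatment_mean = 4.3  # same metric, the new treatment

absolute_difference = treatment_mean - baseline_mean
relative_impact_pct = 100.0 * absolute_difference / baseline_mean

print(f"{absolute_difference:+.2f} absolute, {relative_impact_pct:+.1f}% relative")
```

A "+7.5% impact" on its own is hard to act on; seeing that it corresponds to roughly 0.3 extra page views per user is what makes the percentage meaningful.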
The metric value in the line chart corresponds to the mean value of the metric, which is shown in the table below the chart. The table also provides additional information about the metric's dispersion, such as its standard deviation, minimum value, maximum value, and its 95th percentile.
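As an illustration of what those summary statistics are, here is a minimal sketch computing them with Python's standard library over a made-up list of per-user metric values (the data and the dictionary keys are assumptions, not Split's schema):

```python
import statistics

# Made-up per-user metric values for one treatment.
metric_values = [1.2, 0.8, 2.5, 1.9, 0.4, 3.1, 1.1, 2.2, 0.9, 1.6]

summary = {
    "mean": statistics.mean(metric_values),
    "std_dev": statistics.stdev(metric_values),
    "min": min(metric_values),
    "max": max(metric_values),
    # quantiles(n=20) returns 19 cut points; the last one is the 95th
    # percentile. "inclusive" keeps the estimate inside the data range.
    "p95": statistics.quantiles(metric_values, n=20, method="inclusive")[-1],
}
print(summary)
```

Looking at the dispersion alongside the mean matters: two treatments with the same mean can behave very differently if one has a much wider spread or a heavier tail at the 95th percentile.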
Interpreting multiple treatments over time
The majority of our customers test at least two treatments within a split, with some testing more than two, which is now even easier with our dynamic configuration functionality. Therefore, with this release you are able to see multiple treatments' results against your baseline treatment in one view, allowing you to gain a more holistic understanding of your experiment. This will help you better interpret your experiment results and decide on the next steps of your rollout plan and testing cycle.