When Split is developing new features, enabling our customers to be more data-driven is always top of mind. We believe in the power of metrics, and Split’s existing metrics impact page allows you to make powerful product and engineering decisions and understand the effect on your business goals.
Split’s metric details and trends view provides additional insight to understand the why behind your metric’s impact. This view also helps you diagnose any problems with your experiment. For example, in the chart below, the trend shows that the error margin has decreased over time and that the impact is positive over the baseline, both great indicators for an experiment.
We have two exciting new updates for the metric trends view: we now show metric values over time, and we now compare multiple treatments on the same graph.
Impact versus value
In addition to the metric impact over time, you can now also view the metric value for each treatment in your split over time. The line chart in the ‘impact over time’ tab allows you to easily visualize the difference in the impact as well as how the error margin has changed throughout a split version. The line chart represents the cumulative impact and is based on all the data we have received up until the last calculation update. The ‘values over time’ tab allows you to debug or spot-check any extreme fluctuations in your metrics over the course of the split version.
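To build intuition for what a cumulative trend line means, here is a minimal sketch (made-up data, not Split’s actual calculation): each point on the line is computed from all data received up to that calculation update, so early points swing more than later ones as the sample grows.

```python
# Hypothetical sketch of a cumulative trend line. The daily values are
# invented for illustration; the point is that each plotted value
# aggregates everything observed so far, not just that day's data.
daily_values = [4.0, 5.0, 3.0, 6.0, 5.0]

cumulative_means = []
total = 0.0
for day, value in enumerate(daily_values, start=1):
    total += value                      # all data received so far
    cumulative_means.append(total / day)

print(cumulative_means)  # [4.0, 4.5, 4.0, 4.5, 4.6]
```

This is why the trend line tends to flatten, and the error margin tends to narrow, as a split version accumulates data.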
We recommend reporting on the metric impact and its error margin at each review period day, typically the 7th or 14th day of a split version, and using the metric value trend line as secondary information.
Interpreting metric values over time
When viewing experiment results as a relative percentage impact, it can be difficult to understand what that percentage equates to. Showing the metric value in absolute terms over time provides this context and makes the percentage impact more meaningful. Whether you are digging into your key metrics or your guardrail metrics, your product metrics or your engineering metrics, you can visualize their true performance.
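As a quick illustration of the relationship between absolute values and relative impact (a sketch with invented numbers, not Split’s internal calculation), the percentage impact is simply the treatment mean expressed relative to the baseline mean:

```python
# Hypothetical example: how absolute metric values map to the relative
# percentage impact. The conversion-rate numbers below are made up.

def relative_impact(baseline_mean: float, treatment_mean: float) -> float:
    """Percentage impact of the treatment relative to the baseline."""
    return (treatment_mean - baseline_mean) / baseline_mean * 100

# e.g. a checkout conversion rate moving from 0.20 to 0.23:
impact = relative_impact(0.20, 0.23)
print(f"{impact:+.1f}%")  # +15.0%
```

Seeing the underlying values (0.20 vs. 0.23) alongside the +15% makes it much easier to judge whether the lift is practically significant for your business.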
The metric value in the line chart corresponds to the mean value of the metric, which is shown in the table below the chart. The table also provides additional information about the metric’s dispersion, such as its standard deviation, minimum value, maximum value, and its 95th percentile.
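For readers who want to see how such dispersion statistics relate to the raw per-user values, here is a small sketch (the data and summary names are assumptions for illustration, not Split’s schema):

```python
import numpy as np

# Hypothetical per-user metric values for one treatment (made-up data).
# The summary mirrors the kinds of dispersion columns shown in the table.
values = np.array([120, 95, 87, 140, 110, 99, 250, 105, 90, 130], dtype=float)

summary = {
    "mean": values.mean(),
    "std_dev": values.std(ddof=1),     # sample standard deviation
    "min": values.min(),
    "max": values.max(),
    "p95": np.percentile(values, 95),  # 95th percentile
}
for stat, value in summary.items():
    print(f"{stat}: {value:.2f}")
```

Comparing the mean against the 95th percentile and the maximum is a quick way to spot the kind of extreme fluctuations the ‘values over time’ tab is designed to surface.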
Interpreting multiple treatments over time
The majority of our customers test at least two treatments within a split, and some test more than two, which is now even easier with our dynamic configuration functionality. With this release, you can see multiple treatments’ results against your baseline treatment in one view, allowing you to gain a more holistic understanding of your experiment. This will help you better interpret your experiment results and decide on the next steps of your rollout plan and testing cycle.