Do No Harm metrics are the metrics a team monitors to ensure a feature rollout isn't hurting its existing business metrics. Often in product experimentation, you release a feature through a canary release and monitor your metrics throughout. If your metrics show higher conversion rates and higher engagement, you can continue to roll the feature out to your entire user base. However, product managers sometimes also monitor metrics that are not tied to any specific feature release.
Product experimentation is a way to increase engineering impact and enable progressive delivery while reducing the risk of moving fast. A/B testing and multivariate testing can reveal user behavior and trends that you did not foresee. However, the most crucial part of running an experiment is not the variants but understanding why you’re running the experiment in the first place.
Testing to Learn vs. Testing to Launch
According to Sonali Sheel of Walmart Labs, there are generally two reasons to run an experiment: Testing to Learn and Testing to Launch. Testing to Learn is about the iterative discovery of what works and what does not, understanding customer behavior, and validating or invalidating a hypothesis. Testing to Launch, on the other hand, is about gradually rolling out a new feature to the entire population while keeping a close eye on metrics. Product owners launch a feature that’s expected to have a long-term, strategically significant impact and run an experiment to (hopefully) show that conversions and KPIs are not negatively impacted.
The Impact of Do No Harm Metrics
Often in experimentation, product owners and business stakeholders want to make sure that releasing a new feature does not negatively affect any existing metrics. They accomplish this with Do No Harm metrics: guardrails that ensure a change won’t make any existing user behavior worse.
The goal here is to watch your team’s Do No Harm metrics and stop the rollout early if they degrade. Your team can accomplish this with a percentage rollout. These metrics can include time to load, conversion rates, click rates, and so on. Test to Launch is first and foremost about mitigating risk: the idea is to launch the feature unless it does something unexpected that you don’t want.
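As a sketch of how such a guardrail check might work during a percentage rollout (the metric names, threshold, and `guardrail_ok` helper below are illustrative assumptions, not a specific platform’s API):

```python
# A minimal Do No Harm guardrail sketch: compare canary metrics against
# control and halt the rollout if anything degrades beyond a threshold.

def guardrail_ok(control: dict, canary: dict, max_drop: float = 0.05) -> bool:
    """Return True if no canary metric has dropped more than max_drop
    (relative) versus control. Assumes lower values are worse, as with
    conversion and click rates."""
    for metric, control_value in control.items():
        canary_value = canary[metric]
        if control_value > 0 and (control_value - canary_value) / control_value > max_drop:
            return False
    return True

# Hypothetical per-cohort metric snapshots from your analytics system.
control = {"conversion_rate": 0.042, "click_rate": 0.31}
canary = {"conversion_rate": 0.041, "click_rate": 0.30}

if guardrail_ok(control, canary):
    print("metrics healthy: continue rollout")  # e.g. 10% -> 25% -> 50% -> 100%
else:
    print("metrics degraded: halt rollout")
```

In a real rollout this check would run at each exposure step, with statistical significance taken into account rather than a raw threshold.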
Test to Launch uses the same underlying capabilities of an experimentation platform: managing selective exposure of a new feature and observing the differences in system and user behavior between those who get the feature and those who do not. If you’re performing a canary release, or percentage rollout, as a Test to Launch, your experimentation and analytics systems must be connected so that you can compare your Do No Harm metrics for the canary cohort against the control. In the analytics system, you should be able to differentiate between the traffic coming in for the experiment and the traffic coming in from the existing state. When you can make this differentiation, you can see what impacts your metrics and make more informed decisions.
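One lightweight way to make that differentiation is to tag every analytics event with the treatment the user received, so canary and control traffic can be separated downstream. The event shape and the `track` helper below are hypothetical, shown only to illustrate the idea:

```python
# Sketch: attach the experiment cohort label to each analytics event so
# metrics can later be grouped by treatment.

import json
import time
from typing import Optional

def track(user_id: str, event: str, treatment: str,
          properties: Optional[dict] = None) -> dict:
    """Build an analytics event that carries its experiment cohort label."""
    payload = {
        "user_id": user_id,
        "event": event,
        "treatment": treatment,  # e.g. "canary" or "control"
        "timestamp": time.time(),
        "properties": properties or {},
    }
    # In practice this payload would be sent to your analytics system;
    # serializing it here just shows the cohort label travels with the event.
    print(json.dumps(payload))
    return payload

# Downstream, metrics can be grouped by the "treatment" field to compare
# the canary cohort against the control.
track("user-123", "checkout_completed", "canary", {"cart_value": 42.5})
```

With cohort labels on every event, the analytics side can compute the same Do No Harm metrics per treatment instead of only in aggregate.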
It’s essential not just to release features for the sake of proclaiming them “done” but to ensure that features deliver impact and do no harm to key business metrics. Whether you are building a new business or growing an existing one, you can use the same tactics to ensure your engineering efforts make a difference you can be proud of.
Deliver Features That Matter, Faster. And Exhale.
Split is a feature management platform that attributes insightful data to everything you release. Whether your team is looking to test in production, perform gradual rollouts, or experiment with new features, Split ensures your efforts are safe, visible, and highly impactful. What a Release. Get going with a free account, schedule a demo to learn more, or contact us for more information.