What if you ran an experiment and nobody came? Any type of product experimentation, whether based on feature flags, A/B or multivariate tests, or a quick rollout of beta features, shares one essential element: users. Meaningful learnings from experiments require that users both engage with the experiment enough to produce valuable usage data and provide clear, detailed feedback.
Product teams that don’t build a user communication and feedback collection strategy into their experiments risk not gathering enough data to evaluate results. This article walks through a few key communication steps product teams should follow to ensure effective and timely product experiments.
1. Use Announcements In Conjunction With Experiment Goals
Generally, when running feature experiments you’re putting an early version or iteration of a new product feature in front of a subset of users. They might be your pre-arranged beta testers or just a random sample of your user base. If you are measuring the discoverability of the new feature and user engagement, don’t announce the release; instead, monitor usage data to see whether users find the feature on their own.
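A minimal sketch of how this could work, assuming a hypothetical in-memory event log rather than any particular analytics tool: hash-based bucketing gives each user a stable, random assignment to the sample, and a passive "feature discovered" event is recorded with no announcement shown.

```python
import hashlib

def in_experiment(user_id: str, experiment: str, sample_pct: float) -> bool:
    """Deterministically bucket a user into the experiment sample.

    Hashing user_id + experiment name yields a stable assignment,
    so the same user always lands in the same bucket.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < sample_pct

# Hypothetical event log: discovery is recorded passively so you can
# measure whether users find the feature without being told about it.
events = []

def on_feature_opened(user_id: str) -> None:
    if in_experiment(user_id, "new-search", sample_pct=0.10):
        events.append({"user": user_id, "event": "feature_discovered"})
```

Because assignment is deterministic, a returning user always sees the same variant, which keeps usage data consistent across sessions.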
Intuition tells us to announce our new features, since software users tend to be task-oriented while using a product and will often miss minor updates. However, if you want to measure discoverability and usage of the new functionality, it is best to resist this urge.
If you are not measuring discoverability and instead want to drive user attention to the new functionality as quickly as possible, or if you are testing a feature change that comes after the initial call to action, it may make sense to announce the feature enhancement. If that is the case, one of the best ways to reach users is by pushing your announcement in the product itself. Targeted banner or tooltip messages reach active users when the announcement is relevant to them and encourage them to try the feature right then and there, allowing you to collect as much usage data as possible.
2. Make Help Readily Available
It’s inevitable that users will have questions whenever they try out a new product or feature. The question is whether they ask your support team or find the answers on their own. Given that experiments are usually time sensitive, product teams will want to make sure users aren’t hitting roadblocks when trying the new functionality. If they are, make sure you can measure those roadblocks as part of the experiment, for example by tracking the number of help center searches.
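One lightweight way to capture such roadblocks, sketched here with a hypothetical in-memory event stream (the event names are illustrative, not from any specific tool), is to tally help center searches per experiment variant:

```python
from collections import Counter

# Hypothetical event stream: each record notes the user's variant
# and what they did while the experiment was live.
events = [
    {"variant": "control", "event": "help_center_search"},
    {"variant": "treatment", "event": "help_center_search"},
    {"variant": "treatment", "event": "help_center_search"},
    {"variant": "treatment", "event": "feature_used"},
]

def roadblock_counts(events):
    """Tally help center searches by variant as a proxy for confusion."""
    counts = Counter()
    for e in events:
        if e["event"] == "help_center_search":
            counts[e["variant"]] += 1
    return dict(counts)
```

A treatment arm searching the help center far more often than control is a signal that the new functionality needs better in-app guidance before a wider rollout.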
In the same way that in-app announcements are a great way to drive awareness, in-app help is a great way to drive education. When introducing new feature experiments to users, it can be helpful to link to documentation or tutorials in the announcement. With effective, contextual help available, you’re shortening the learning curve, and the time needed to get meaningful experiment data.
3. Reduce barriers to feedback
Measuring how users interact with feature experiments is an important source of insight, but just as important is direct feedback from participants. Did they understand how to use the feature? Did they think it was valuable? What suggestions do they have to improve it? Feedback like this is a goldmine for any product team looking to improve a feature. The challenge is that email surveys tend to have very low response rates, and if you’re serving an experiment to a relatively small sample, you may not get much in the way of actionable feedback.
Memberclicks, an association management platform based in Atlanta, overcame these challenges by collecting user feedback in-app. They were working to re-design the search function in their application and wanted to make sure the new design was providing value to customers. To do this, they ran experiments with several iterations of the feature to different subsets of users.
They triggered an in-app survey based on feature usage: once an individual user had used the feature variant 7 times, they were served a quick poll asking for feedback. The poll included a 1-to-5 rating scale and a request for open-ended feedback. By pushing the surveys in-app, the team at Memberclicks was able to get a 70% response rate, and they used the average numerical score to gauge when the feature was ready for a full release.
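The pattern described above can be sketched in a few lines. This is not Memberclicks’ actual implementation, just a minimal illustration of a usage-count trigger plus the two readiness signals mentioned (response rate and average score); the function names and in-memory state are assumptions.

```python
from statistics import mean

USAGE_THRESHOLD = 7  # per the example: show the poll on the 7th use

usage_counts = {}  # user_id -> number of times the variant was used
surveyed = set()   # users who have been shown the poll
responses = []     # (user_id, score 1-5, open-ended comment)

def record_feature_use(user_id: str) -> bool:
    """Count a use; return True exactly when the poll should be shown."""
    usage_counts[user_id] = usage_counts.get(user_id, 0) + 1
    if usage_counts[user_id] == USAGE_THRESHOLD and user_id not in surveyed:
        surveyed.add(user_id)
        return True  # show the in-app quick poll now
    return False

def record_response(user_id: str, score: int, comment: str = "") -> None:
    responses.append((user_id, score, comment))

def summary() -> dict:
    """Response rate and average score: the release-readiness signals."""
    rate = len(responses) / len(surveyed) if surveyed else 0.0
    avg = mean(score for _, score, _ in responses) if responses else None
    return {"response_rate": rate, "avg_score": avg}
```

Triggering on the 7th use means respondents have real experience with the variant, which tends to make both the numeric scores and the open-ended comments more informative than a cold survey.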
Communication for successful product experiments
Some of these suggestions seem simple, and that’s because they are. It doesn’t take a lot of additional effort to ensure that each product experiment is wrapped with an appropriate communication plan. Product teams that do so will see more rapid and meaningful results from their experiments.
Want to learn more about crafting great product experiments? Join Split and Pendo for an interactive webinar on April 26th where we’ll discuss how agile product teams use experimentation and user insights to deliver greater user value. Register today.