ChartMogul recently published a compelling piece highlighting how product and engineering teams across a company can draw the connection between individual contributions and one of the most important product metrics: monthly recurring revenue, or MRR. With that connection in view, every employee can see the tangible impact of their work on the company's goals.
However, understanding the impact of contributions is easier for some teams than for others. Traditional tools make it simple for sales and customer success teams to segment key product metrics like seats, MRR, and churn by sales rep or account manager, but how do engineering and product teams measure the direct impact of their work on the bottom line?
As the team at ChartMogul points out, new feature releases and the overall performance of the application can have a significant impact on MRR:
> New business can be tied back to new product features…[and]…churn could be tied to performance issues within the product, and thus the Product team's ability to launch features that run smoothly and release timely.
If launches and feature releases can be correlated with customer satisfaction or transaction success, product and engineering teams can understand how their decisions are affecting the business, for better or worse.
We're moving to a world where product ideas are iterated on quickly, decisions are measured, and teams are motivated by their impact on the business. Imagine if you could easily segment customers for feature experimentation and tangibly measure the impact on:
- Support requests via Desk.com or Zendesk
- Usage and engagement via Mixpanel or Heap
- Customer health via Totango
- Application performance via Datadog
What if a similar tool allowed engineering and product teams to understand their impact on seats, MRR, and churn?
At Split, we're helping customers answer these questions through a data-driven approach to software development: controlled rollouts. By using feature flags to gradually roll out features to a segment of users, targeted by attribute or by percentage, teams can deploy new features while monitoring success, whether that's machine performance or conversions, through integrations with the tools they already use, like Segment for analytics, Datadog for monitoring, or Slack for communication. And when a feature doesn't perform well, they can use the Split feature flags UI to roll it back with the click of a button, saving the engineering team the time a redeploy would traditionally take and sparing the business the impact typically associated with remediation.
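To make this concrete, here is a minimal sketch of what a controlled rollout looks like in application code, using Split's Python SDK. The SDK key, the flag name `new-checkout-flow`, and the event name `checkout-completed` are hypothetical placeholders; the actual targeting rules (serve the feature to 10% of users, or only to users with a given attribute) are configured in Split's UI, not in code.

```python
# Minimal controlled-rollout sketch using Split's Python SDK.
# The SDK key, flag name, and event name below are hypothetical.
from splitio import get_factory
from splitio.exceptions import TimeoutException

factory = get_factory("YOUR_SDK_KEY")
try:
    # Wait up to 5 seconds for the SDK to fetch flag definitions.
    factory.block_until_ready(5)
except TimeoutException:
    # If the SDK isn't ready, get_treatment returns "control",
    # which the code below treats as the existing behavior.
    pass

client = factory.client()

def render_checkout(user_id: str) -> str:
    # Split evaluates the rollout rules defined in the UI (for example,
    # "serve 'on' to 10% of users") and returns a treatment string.
    treatment = client.get_treatment(user_id, "new-checkout-flow")
    if treatment == "on":
        return "new checkout flow"
    # "off" and "control" both fall back to the existing behavior.
    return "existing checkout flow"

def record_conversion(user_id: str, revenue: float) -> None:
    # Events sent with track() can be joined against treatments to
    # measure each variant's impact on a metric like conversion or revenue.
    client.track(user_id, "user", "checkout-completed", revenue)
```

Because the flag is evaluated at runtime, killing a misbehaving feature is a configuration change in the Split UI rather than a redeploy: flip the flag off and every user falls back to the existing code path.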