Split manages more than 50 billion feature flag changes around the world every day. A recent study shows that about 5 to 6% of feature flags are rolled back (or killed, in Split terminology, in reference to a kill switch) within 30 days of being created. This speaks to the importance of making data available for short-term feature decisions, especially those that impact core KPIs.
The majority of our customers rely on external systems, such as APM services and integrations with analytics tools, to understand how a feature change impacts an area of their application they care about.
At Split we believe that feature flags are incomplete without data, so over the last few months we worked hard to build a real user monitoring (RUM) agent that provides insight into how a website’s key indicators are impacted immediately after a feature release. We paired it with our statistics engine to create alerts, even if only a subset of users experiences a degradation from the baseline.
Website metrics captured
When the Split RUM agent is installed on a website, customers automatically receive five metrics: page.load.time, time.to.first.byte, time.to.interactive, time.to.dom.interactive, and errors.
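These metric names map closely onto standard browser timings. As a rough sketch (not the agent’s actual internals), here is how such values can be derived from a Navigation Timing entry; the `deriveMetrics` helper and the hard-coded entry below are illustrative assumptions:

```javascript
// Derive RUM-style timings (in milliseconds, relative to navigation start)
// from a PerformanceNavigationTiming-like entry.
// time.to.interactive is omitted: it also depends on long-task data,
// not just navigation timing.
function deriveMetrics(nav) {
  return {
    // time.to.first.byte: first byte of the server response arrives
    'time.to.first.byte': nav.responseStart - nav.startTime,
    // time.to.dom.interactive: document parsed, readyState === 'interactive'
    'time.to.dom.interactive': nav.domInteractive - nav.startTime,
    // page.load.time: the load event has finished
    'page.load.time': nav.loadEventEnd - nav.startTime,
  };
}

// In a browser, the real entry comes from the Performance API:
//   const [nav] = performance.getEntriesByType('navigation');
// Here we use a hypothetical entry so the sketch is self-contained.
const metrics = deriveMetrics({
  startTime: 0,
  responseStart: 120,
  domInteractive: 800,
  loadEventEnd: 2300,
});
console.log(metrics['time.to.first.byte']); // 120
```

The point of an agent is that none of this bookkeeping is your code’s responsibility; it is collected and reported automatically.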
All of these metrics are available in our metrics section, allowing customers to define custom metrics that better suit their application and to be alerted on them when a feature is released.
Split then compares the features each user is exposed to against that user’s metrics (like page.load.time) to determine which features in your release are good, and which are rotten.
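The core of that attribution idea can be sketched in a few lines: group metric samples by the flag treatment each user saw, then compare the groups. Split’s statistics engine does far more (significance testing, monitoring windows, guarding against false positives); this toy `meanByTreatment` helper and its sample data are purely illustrative:

```javascript
// Toy attribution sketch: group page.load.time samples by flag treatment
// and compare the averages per group.
function meanByTreatment(samples) {
  const sums = {};
  for (const { treatment, value } of samples) {
    const s = sums[treatment] || (sums[treatment] = { total: 0, n: 0 });
    s.total += value;
    s.n += 1;
  }
  const means = {};
  for (const t of Object.keys(sums)) {
    means[t] = sums[t].total / sums[t].n;
  }
  return means;
}

// Hypothetical page.load.time samples (ms), tagged with the treatment
// the user was exposed to.
const samples = [
  { treatment: 'on', value: 2400 },
  { treatment: 'on', value: 2600 },
  { treatment: 'off', value: 1900 },
  { treatment: 'off', value: 2100 },
];
const means = meanByTreatment(samples);
console.log(means); // { on: 2500, off: 2000 }
```

Here the 'on' treatment averages 500 ms slower, which is the kind of signal that, once it clears statistical significance, would trigger an alert.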
Similar to our recently released Sentry integration, measuring these metrics in Split means our customers get automatic feedback whenever a newly released feature degrades any of them for a subset of users, and are alerted so they can take remedial action.
Our team is working hard to prioritize the next set of features for our RUM agent. Among them are mobile support, new metrics like bounce rate, single-page app (SPA) support, and the ability to attach properties to metrics from the RUM agent to allow more in-depth analysis. As always, we’d love to hear your thoughts!