Product experimentation is the most effective way to learn what works best for end users. You can measure engagement with a feature and hypothesize ways to improve it, which ultimately moves one or more critical business metrics. If you’re setting up an experiment, it’s mission-critical to implement A/B testing…
A/B testing, also known as split testing, is the process of testing two different versions of a web page, feature, user flow, or other resource to optimize for a metric or set of metrics (often conversion rate). Multivariate tests are run with more variables and…
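A common way to run a split test is to bucket each user deterministically, so the same user always sees the same variant across sessions. The sketch below shows one generic approach using a hash of the user and experiment name; it is an illustration only, not Split's actual bucketing algorithm, and the function and experiment names are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the experiment name together with the user ID keeps the
    assignment stable for a given experiment while still producing
    independent splits across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The assignment is stable: repeated calls return the same variant.
assert assign_variant("user-42", "new-checkout") == \
       assign_variant("user-42", "new-checkout")
```

With two variants this yields a roughly 50/50 split; adding entries to `variants` turns the same mechanism into a multivariate test.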
How do you know whether your rollouts are going well and should be ramped up, or whether they’re surfacing issues that need to be fixed before you ramp? That’s the focus of today’s post and video.
Feature launches in leading engineering teams increasingly look like a ramp rather than a one-time switch, progressing through dogfooding, debugging, max power ramp, scalability, and learning phases.
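Mechanically, a ramp like this is usually a percentage-based feature flag: at each phase you widen the slice of users who see the feature. The sketch below is a minimal, generic illustration, assuming a hash-based rollout check; the phase percentages are hypothetical examples, not Split's defaults, and this is not the Split SDK's API.

```python
import hashlib

# Hypothetical ramp schedule, named after the phases above.
RAMP_PHASES = [
    ("dogfooding", 1),       # internal users first
    ("debugging", 5),        # small external slice to catch issues
    ("max power ramp", 50),  # widen quickly once healthy
    ("scalability", 100),    # full rollout
]

def is_enabled(user_id: str, flag: str, rollout_percent: int) -> bool:
    """Return True if this user falls inside the current rollout slice.

    Each user maps to a stable bucket in [0, 100); users in buckets
    below the rollout percentage get the feature, so widening the
    percentage only ever adds users, never removes them.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent
```

Because buckets are stable, ramping from 5% to 50% keeps the original 5% enabled and simply adds more users, which is what makes phase-by-phase comparison of metrics meaningful.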
One of the best things about building product at Split is getting to use our experimentation and analytics capabilities to understand how our customers are both discovering new functionality and engaging with the product. In this blog post, we walk through how we used Split to add Starring capabilities to our own product.