Gatekeeper. LiX. Trebuchet. What do these names have in common? They’re all critical systems that Facebook, LinkedIn, and Airbnb have built in-house to help ensure customer satisfaction and drive successful product development.
Today, Dropbox adds another name to this list with their announcement of Stormcrow, a similar internal platform. Christopher Park, Tom McLaughlin, and the rest of the team at Dropbox have done a great job detailing their design decisions—and the impact of Stormcrow—in their blog post, which I highly recommend reading.
These systems are all part of an emerging category of application infrastructure called Controlled Rollout & Experimentation (CRE). With CRE platforms, teams gain granular control over their customers’ experiences and can continuously improve and optimize their product.
Understanding Controlled Rollout & Experimentation
To understand why this category is so important for anyone working in the cloud, let’s take a quick look at its two key concepts:
The ‘Controlled Rollout’ portion of CRE comes from the approach these systems take of looking at products as a collection of unique features, and not just as a set of (micro)services. By separating individual features from the whole, teams gain the flexibility to launch new, isolated functionality and assess its performance. Controlled Rollout platforms also include targeting, so these new features can be delivered to any audience segment. From a certain percentage of customers, to an interesting demographic group, to users who match multiple attribute requirements, teams can quickly get relevant insight into any new update.
Where CR truly shines, though, is in the opportunity it gives teams to instantly roll functionality back whenever necessary—without affecting the entire customer base or the rest of the features deployed in the microservice. And if the feature performs well? It can then be rolled out gradually to all users, having already proven itself.
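To make the mechanics above concrete, here is a minimal sketch of how a controlled-rollout evaluation could work: deterministic hash-based percentage rollout, attribute targeting, and a kill switch for instant rollback. All names (`is_enabled`, the flag fields, the `new-search` flag) are hypothetical illustrations, not the API of Stormcrow, Split, or any real platform.

```python
import hashlib

def bucket(user_id: str, salt: str) -> int:
    """Deterministically map a user to a bucket in [0, 100) via hashing."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(flag: dict, user: dict) -> bool:
    """Evaluate a flag: kill switch first, then targeting, then rollout %."""
    if flag.get("killed"):  # instant rollback: flip one field, no redeploy
        return False
    # attribute targeting: every required attribute must match
    for attr, allowed in flag.get("targeting", {}).items():
        if user.get(attr) not in allowed:
            return False
    # percentage rollout: the same user always lands in the same bucket,
    # so their experience is stable as the rollout percentage grows
    return bucket(user["id"], flag["name"]) < flag.get("rollout_pct", 0)

# hypothetical flag: 10% of paid-plan users see the new search feature
new_search = {
    "name": "new-search",
    "rollout_pct": 10,
    "targeting": {"plan": {"pro", "team"}},
    "killed": False,
}

print(is_enabled(new_search, {"id": "user-42", "plan": "pro"}))
print(is_enabled(new_search, {"id": "user-42", "plan": "free"}))
```

Rolling the feature out to everyone is then just a config change (`rollout_pct: 100`), and rolling it back is another (`killed: True`)—neither requires touching the rest of the service.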
Adding to this ‘Controlled Rollout’ concept is ‘Experimentation’: the idea that by deploying a feature or functionality to only some customers, a ‘control’ group is automatically created—allowing the true impact of a feature to be reliably measured. Teams can of course begin to experiment with variations, and ultimately make decisions that are informed by data. While this brings to mind traditional A/B tests, there are a few key differences. First, while standard A/B tests are marketing-led UI changes (think Optimizely), experimentation in a CRE platform is full-stack and product driven. Any application change, whether deep in the backend or in the UI, can be experimented upon. Second, instead of focusing on one or two metrics that happen to be important to a single PM, experiments in a CRE system can measure the global impact of functionality on customer metrics that matter to everyone in the company. To get a comprehensive view of a customer, teams have to consider elements like Revenue, Net Promoter Scores, Daily Active Users, Support Ticket Rates, Site Latency, or Error Rates. By using a strong CRE system, companies can begin to eliminate the blind spots that inevitably occur when important metrics aren’t being directly measured.
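The automatic control group described above typically falls out of deterministic assignment: hash each user into treatment or control, and the un-exposed users become the baseline for every metric. The sketch below illustrates that idea; the function name and experiment key are hypothetical, not any vendor's actual API.

```python
import hashlib
from collections import Counter

def assign_variant(user_id: str, experiment: str, treatment_pct: int = 50) -> str:
    """Deterministically assign a user to 'treatment' or 'control'."""
    h = int(hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest(), 16) % 100
    return "treatment" if h < treatment_pct else "control"

# a user always gets the same variant, so metrics like revenue, latency,
# or support-ticket rate can be attributed to a stable group for the
# whole life of the experiment
assert assign_variant("user-7", "new-search") == assign_variant("user-7", "new-search")

# across many users the split approximates the configured percentage
counts = Counter(assign_variant(f"user-{i}", "new-search") for i in range(10_000))
print(counts)
```

Because assignment is a pure function of user ID and experiment key, every service in the stack—backend or UI—computes the same answer without coordination, which is what makes full-stack experimentation practical.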
A Solution for Companies of Any Size
CRE clearly has roots in feature flagging, and complements existing continuous deployment (CD) techniques like blue-green deploys (a release technique where two identical production environments are run to reduce risk). So far, only larger companies like Facebook, LinkedIn, and Dropbox have been able to invest in building their own CRE platforms—though it’s a critical need at every cloud software company.
For any company trying to both “move fast” and “not break things,” a CRE platform is the ideal solution. To balance control with experimentation and optimization, you can invest in resources to build your own in-house solution (like Dropbox did)—or you can explore the emerging CRE marketplace.
Of course I’ll recommend trying Split. Our team comes from companies like LinkedIn and RelateIQ, where we experienced—and solved—the pain of not having a CRE solution. We created Split to offer any company a production-grade, turnkey CRE platform. To see how Split helps companies control their customers’ experiences, take a look at this 3-minute video walkthrough. Then, try it out for yourself with our 14-day free trial to see how a strong CRE platform can change your business.