Split Use Cases
Innovative development teams use Split’s feature flags to test, target, securely release, and monitor new features for customers. By using feature flags for experimentation-driven product development, developers can rapidly convert ideas into features, operations can continuously and securely deliver code, and product managers can measure the outcome of every feature release.
Continuous delivery and trunk development
Trunk-based development helps teams move faster by integrating their changes continually, avoiding the “merge hell” caused by long-lived branches. However, as developers continuously commit new code to trunk, the master branch can quickly become unstable. Feature flags are a requirement for trunk-based development: the only way to ensure that a shared branch is always releasable is to hide unfinished features behind flags that are turned off by default. With feature flags, developers can begin work on any new feature with its flag turned off, so master can be deployed at any time and merge conflicts are prevented before the feature is completed.
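The flags-off-by-default pattern can be shown in a few lines. This is a minimal illustrative sketch, not Split’s SDK; the flag store, flag name, and functions here are hypothetical:

```python
# Unfinished work hidden behind a flag that defaults to "off",
# so trunk stays releasable even while the feature is half-built.
FLAGS = {
    "new-checkout-flow": False,  # still under development, off by default
}

def is_enabled(flag_name: str) -> bool:
    """Return the flag state; unknown flags are treated as off."""
    return FLAGS.get(flag_name, False)

def render_checkout() -> str:
    if is_enabled("new-checkout-flow"):
        return "new checkout page"   # code path still in progress
    return "legacy checkout page"    # the proven path ships by default
```

Because the default is off, committing and deploying this code at any time leaves users on the proven path; flipping the flag later requires no new deployment.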
Canary launch
Also known as a staged rollout, a canary launch is a method of rolling a feature out to a subset of users to assess the reaction of the overall system. Once you’ve built a feature, you may want to first expose it to a small percentage of your traffic to ensure it works. In a canary launch, you can take the successful variation from an experiment and control the rollout of any feature to ensure that it outperforms the previous version of the experience, or at least doesn’t negatively impact metrics. Development teams use this to measure the reaction from real user “canaries” and look for early indicators of danger or success when releasing software. This is accomplished by starting with a small percentage of users, e.g. 10%, and gradually scaling up through 25% and 50% to 100% as confidence in the feature grows. If the user experience with the feature is negative, it can be rolled back or turned off completely.
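A percentage rollout like this is commonly implemented with deterministic hashing, so the same user gets the same decision every time and users admitted at 10% remain admitted at 25%. The sketch below is illustrative, not Split’s implementation:

```python
import hashlib

def rollout_bucket(user_id: str, flag_name: str) -> int:
    """Deterministically map a user to a bucket 0-99 for this flag."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def in_canary(user_id: str, flag_name: str, percentage: int) -> bool:
    """True if this user falls inside the current rollout percentage."""
    return rollout_bucket(user_id, flag_name) < percentage
```

Ramping the rollout is then just raising `percentage` from 10 to 25 to 50 to 100; no user flips back and forth, because each user’s bucket never changes.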
Testing in production
Live user testing provides real-time visibility into performance and provides dev teams with valuable feedback from alpha and beta groups. Typically, new feature releases in production environments are preceded by functional QA and performance testing. Feature flags allow teams to perform functional and performance tests directly in production with a subset of customers. Split enables organizations to use powerful user segmentation and targeting rules to set up test groups, apply percentage rollouts to slowly expose new features, and instantly turn any feature off without having to roll code back. This is a secure and performant way of understanding how a new feature will scale with customers.
By putting a feature behind a flag, you can not only roll it out to subsets of customers, but also remove it from all customers if it is degrading the customer experience. Without flags, if there is a problem, you often have to roll back the entire release to an earlier version, apply a hotfix to the problem, and re-deploy, or, even worse, wait to fix the problem in your next scheduled release. With Split feature flags, whether you’re testing in production, doing a staged rollout or a canary launch, or considering sunsetting a feature, your dev teams will have peace of mind with every feature release.
Dark launching
Dark launching is a go-live strategy in which a new feature is released to a subset of users in a production environment without those users being aware of it, while still exercising all the parts of your infrastructure involved in serving that feature. Product teams often plan the public launch of a new product or feature around a target date. Once the major feature or product is ready, you can validate it behind a flag by restricting access to internal users only. This gives the product team the flexibility to activate the new feature independently of a code deployment. Dark launches are a good strategy for massive, large-scale deployments: they give you visibility into how your infrastructure behaves in conditions as close to production as possible.
Granular user testing
Split feature flags enable you to perform powerful granular user targeting using customized metadata. With Split, you can target users based on any attribute and build target groups. This functionality gives you granular control over who sees what at any given time. Product teams regularly use Split to set targeting rules that help create beta testing groups, manage subscription models, and control how their users experience their products. The ability to granularly target users gives teams the opportunity to launch features early, obtain customer feedback, and further refine the user experience with real customers in a controlled environment, before rolling new features and products out widely.
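Attribute-based targeting boils down to matching user metadata against a rule. The sketch below, with hypothetical attribute names and a simple equality-based matcher (real targeting rules also support ranges, sets, and more), shows how such a rule builds a target group:

```python
# Illustrative attribute-based targeting: a rule is a dict of required
# attribute values, and a user matches when every attribute matches.
def matches_rule(user: dict, rule: dict) -> bool:
    return all(user.get(attr) == value for attr, value in rule.items())

# Hypothetical rule defining a beta group.
beta_rule = {"plan": "enterprise", "region": "us-east", "beta_opt_in": True}

users = [
    {"id": "u1", "plan": "enterprise", "region": "us-east", "beta_opt_in": True},
    {"id": "u2", "plan": "free", "region": "us-east", "beta_opt_in": True},
]

beta_group = [u["id"] for u in users if matches_rule(u, beta_rule)]
# beta_group → ["u1"]
```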
Migration to microservices
Migrating to microservices means breaking up a huge, monolithic release into many discrete services that you can roll out on independent release schedules. Like any large architectural shift, a monolith breakup is best tackled as a series of small steps that incrementally move the system toward the desired state. By taking advantage of the capabilities of a feature-flagging framework, you can make this transition safely and in a controlled manner.
You can use feature flags to permanently tie features to subscription types. For instance, a feature could be available to every customer as part of a free trial but is gated afterward by the customer buying a premium subscription.
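Gating by subscription type amounts to an entitlement table keyed by plan. The feature names and tiers below are hypothetical, used only to illustrate the trial-then-premium example above:

```python
# Illustrative entitlement table: each feature maps to the set of
# plans that may use it.
ENTITLEMENTS = {
    "advanced-reports": {"premium", "trial"},  # available during free trial
    "export-csv": {"premium"},                 # gated behind premium
}

def has_feature(plan: str, feature: str) -> bool:
    """A feature is on for a customer if their plan is entitled to it."""
    return plan in ENTITLEMENTS.get(feature, set())
```

When the trial ends, dropping "trial" from a feature’s entitlement set turns it off for those customers without any redeployment.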
Feature flags empower your product teams by giving them the ability to grant exclusive access to a new feature or product to a select group of customers or users. Many organizations use Split feature flags to manage their alpha and beta testing programs at scale. By using Split’s targeting rules to create specific user groups, your development teams will have the ability to include entire groups, or just a subset of a group, in any test at any given time. Invaluable feedback from early alpha and beta testers helps product teams validate new features and mitigate risks and issues before rolling out features to a broader user base.
Development teams use Split feature flags throughout the entire lifecycle of their software delivery process, from feature releases to sunsetting features. At some point, features that are older or little used, or that conflict with new features, will need to be retired. Split feature flags give teams visibility into which features are still in use, allowing them to make better decisions about which features should remain in their codebase and which should be retired.
Subscription management for feature releases
Advanced development teams use Split feature flags to manage permissions for specific user groups, building on groups that allow you to quickly create a collection of users with shared permissions. This capability is critical when managing multiple features across multiple product lines and subscription plans: it helps consolidate workflows and gives all team members, such as customer success managers, full transparency in a single environment.
A/B testing and multivariate testing
Industry-leading feature flagging solutions not only give you the ability to safely and reliably launch new features, but they also arm you with the ability to measure the impact of launching them. Product teams leverage Split feature flags for basic A/B/n testing to test functionality instead of just front-end changes. To ensure that the impact of a new feature is positive, development teams can leverage feature flags to test any given feature in an ‘on’ or ‘off’ state and monitor the difference in outcomes. With Split, all user events are tracked, providing teams with full visibility into how each feature variation is performing.
Best-in-class feature-flagging platforms tie feature flags to key performance indicators (KPIs), turning them into powerful experimentation systems. Such a system tracks user activity, builds data ingestion pipelines, and invests in statistical analysis capabilities to measure KPIs within the treatment and control groups of an experiment (on or off for a feature flag). Statistically significant differences between the groups can be used to decide whether an experiment was successful and should continue ramping toward 100% of customers. The anticipated outcome: ideas turned into products with speed from feature flags, and products turned into results with analytics from experimentation.
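For a conversion-rate KPI, one standard way to test for a significant difference between treatment (flag on) and control (flag off) is a two-proportion z-test. The sketch below uses made-up sample numbers for illustration; the source doesn’t specify Split’s statistical method:

```python
import math

def two_proportion_z(conv_t: int, n_t: int, conv_c: int, n_c: int) -> float:
    """Z statistic for the difference in conversion rates between
    treatment (flag on) and control (flag off)."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    pooled = (conv_t + conv_c) / (n_t + n_c)          # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_t + 1 / n_c))
    return (p_t - p_c) / se

# Hypothetical experiment: 130/1000 conversions with the flag on
# vs. 100/1000 with it off.
z = two_proportion_z(conv_t=130, n_t=1000, conv_c=100, n_c=1000)
```

With |z| above 1.96 (the 5% two-sided significance threshold), the lift is unlikely to be noise, supporting a decision to keep ramping the flag toward 100%.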