Feature Management Architecture & Security

What to Consider

Ask software engineers who practice CI/CD and most will agree: feature flags are a necessity. In many ways, they’re becoming a commodity. But don’t be fooled: not all feature management and experimentation platforms are created equal. Making comparisons across the market is more complex than perusing the milk cooler for parity products, because the wrong feature flagging tool carries real security risks, and there are no warning labels on these cartons. 

As it goes with any piece of technology, it’s all about the build. If a new feature management platform is on your consideration list, take a deeper look at its architecture and security. Even the most subtle nuances can mean the difference between accelerated software delivery and a potential leak of sensitive information. Let’s compare two main approaches to platform architecture & security, so you can measure, learn, and launch features with more confidence. 

A Thin Layer = Less Protection 

All feature management tools deliver feature flags and capture impression data (an impression represents a flag evaluation: when it happened and for whom) by way of software development kits (SDKs). These SDKs fall into two categories. Client-side SDKs sit behind web browsers, iOS, Android, and IoT devices. Server-side SDKs operate on a server inside your infrastructure or in a cloud-based server of the feature flagging system.

In most feature management platforms, the client-side SDKs are a thin layer. These platforms might argue that “thin” is a nimble design choice, but beware: a thin client is really just a proxy, incapable of evaluating feature flags locally. As a result, the data needed to evaluate flags (such as the user ID and its attributes) must be sent via encoded URLs to a cloud-based server for evaluation. Not only does this delay evaluation, it increases the risk of a data leak from the URLs left behind in access logs. If you’re relying on feature flags to power your banking application, for example, there’s a chance that personally identifiable information (PII) could wind up in these logs, and in the wrong hands. 
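
To make the risk concrete, here is a sketch of what a thin-client evaluation request can look like. The endpoint and parameter names are invented for illustration, not any real vendor’s API: because the SDK cannot evaluate locally, the user’s attributes have to travel to the cloud as part of the request.

```python
from urllib.parse import urlencode

# Hypothetical thin-client evaluation: the SDK is a proxy, so it ships
# the user's identity and attributes to a cloud endpoint for the answer.
def build_evaluation_url(base_url, flag_name, user_id, attributes):
    params = {"flag": flag_name, "userId": user_id, **attributes}
    return f"{base_url}/evaluate?{urlencode(params)}"

url = build_evaluation_url(
    "https://flags.example.com",
    "new-transfer-flow",
    "cust-8841",
    {"accountTier": "premium", "ssnLast4": "1234"},  # PII now rides in the URL
)
# Anything in this URL can end up verbatim in a server's access logs.
```

Query strings are routinely written to access logs by default, which is exactly how the attributes above could leak.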

For the most secure and private feature flagging capabilities, it’s crucial to limit the exposure of information across the internet. This particularly applies to companies at enterprise scale whose applications constantly exchange highly sensitive data that could be breached. 

Look to a Rules Engine for Maximum Security 

One unique approach to architecture and security starts with the foundation of a rules engine. What does that mean? The client-side and server-side SDKs are treated the same. They’re not thin; they’re robust and intuitive. Both are rules engines, which means they operate on the rules themselves, not on precomputed answers. This is an architectural difference that most feature management platforms don’t offer.

On the server side of the platform, feature flag rules are written and shared with the client-side SDKs, where they are cached for more intuitive and private evaluation, and the benefits are major. While traditional feature management platforms can’t evaluate feature flags within the client-side SDKs, a rules engine-based approach does it all locally, right inside your application: in your online banking app, a healthcare records portal, or any other system requiring a higher level of security and privacy. 

Because the inputs needed to make the feature flagging decision don’t leave your application to make that long, treacherous trek to a cloud-based server, neither does your customer’s PII. Social Security numbers, location information, date of birth: that information remains between you and your customers. The cloud never sees it, only the recipe for how each feature flag behaves and reaches your customers. The chances of a privacy leak are therefore minimized. Rule of thumb: trust a rules engine.
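
A minimal sketch of rules engine-style local evaluation makes the point. The ruleset shape and matcher names below are invented for illustration (a real platform’s cached rules are far richer), but the principle holds: the user’s attributes are evaluated in-process and never leave the application.

```python
# A cached ruleset is evaluated locally; no attribute crosses the network.
def evaluate(ruleset, attributes):
    """Walk the cached rules in order; the first matching rule wins."""
    for rule in ruleset["rules"]:
        actual = attributes.get(rule["attribute"])
        if rule["op"] == "equals" and actual == rule["value"]:
            return rule["treatment"]
        if rule["op"] == "in" and actual in rule["value"]:
            return rule["treatment"]
    return ruleset["default"]

# A ruleset shared by the server side and cached locally (illustrative shape).
new_checkout = {
    "rules": [
        {"attribute": "country", "op": "in", "value": ["US", "CA"], "treatment": "on"},
        {"attribute": "plan", "op": "equals", "value": "beta", "treatment": "on"},
    ],
    "default": "off",
}

# The decision happens inside the app; the sensitive attributes are inputs
# to the cached rules, not payload for a network request.
treatment = evaluate(new_checkout, {"country": "DE", "plan": "beta"})
```

Notice that the cloud only ever supplied the recipe (the ruleset); the attributes used to cook the answer stay on the device.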

Split’s Rules Engine 

From the beginning and through every update along the way, Split has been designed to be private, fast, resilient, secure, and versatile. By downloading and caching the user-defined ruleset locally, Split’s SDK is able to act as an autonomous rules engine and perform all evaluations locally. Beyond elevated privacy and security, this capability gives Split some additional key advantages.

Not just security, speed: Because all evaluations performed by the Split SDK rely on local data rather than a round trip to the cloud, processing time is virtually instantaneous, under a few milliseconds. These local evaluations can be re-run and updated at any time, making it possible to trigger tests, capture real-time data, and make smarter feature changes on the fly. 

More resilience: Split is hosted in multiple AWS regions for failover purposes. Rulesets are additionally cached in our CDN, Fastly, an edge cloud platform for optimized experiences, and would remain available even if the AWS-hosted Split Cloud were not. Furthermore, once a ruleset has been cached, the SDK can re-evaluate it as often as required against a set of ever-changing data attributes. This, in combination with our streaming support, ensures that feature flag updates are delivered to the SDKs as soon as they become available. And because the client SDK caches the rules, instant local decisions can be made even when there is no network connection, making Split an ideal choice for mobile applications, where a data connection is never guaranteed.
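
The offline behavior described above can be sketched in a few lines. The client class and fetch function here are stand-ins, not a real SDK’s internals: when a refresh fails, the last-known ruleset keeps serving local decisions.

```python
# Stand-in for a caching flag client: a failed refresh never erases the
# previously downloaded rules, so decisions keep working offline.
class CachedFlagClient:
    def __init__(self, fetch_ruleset):
        self._fetch = fetch_ruleset  # e.g. an HTTP or streaming update
        self._ruleset = None

    def refresh(self):
        try:
            self._ruleset = self._fetch()
        except ConnectionError:
            pass  # offline: keep serving the cached rules

    def get_treatment(self, attributes):
        if self._ruleset is None:
            return "control"  # no rules yet, so fall back to a safe default
        for rule in self._ruleset["rules"]:
            if attributes.get(rule["attribute"]) == rule["value"]:
                return rule["treatment"]
        return self._ruleset["default"]

# First fetch succeeds; every later fetch simulates a dropped connection.
responses = iter([{
    "rules": [{"attribute": "os", "value": "ios", "treatment": "on"}],
    "default": "off",
}])

def flaky_fetch():
    try:
        return next(responses)
    except StopIteration:
        raise ConnectionError("no data connection")

client = CachedFlagClient(flaky_fetch)
client.refresh()   # online: ruleset downloaded and cached
client.refresh()   # offline: refresh fails, but the cache survives
decision = client.get_treatment({"os": "ios"})
```

The design choice is the same one the paragraph describes: caching decouples evaluation from connectivity, so the network only matters for receiving updates, not for answering flag checks.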

Next-Level Versatility: With 14 unique SDKs and REST support (via the Split Evaluator), Split is compatible with virtually any programming language. Furthermore, Split is ideal for supporting multi-tenancy. 

To learn more about Split and its unique architecture and security, schedule a live product demo with our sales representatives. 

Image by Freepik