
Experimentation Discoveries From a Frontend Engineer


There is no such thing as a failed experiment, only experiments with unexpected outcomes

R. Buckminster Fuller

Split @ Split: Our Way to Practice What We Preach

We always talk about the importance of continuous delivery and measurement. This isn’t just to sell our Feature Data Platform. At Split, we practice what we preach. In fact, we’re currently using our own technology to build and scale the company, empowering a culture of measurement and experimentation from within. We call this effort Split @ Split.

Our Experimentation Advisors on the Customer Success Team are partnering with Split’s internal product and engineering squads to grow their usage of the Split platform and advance their experimentation best practices. As these teams use Split for feature flagging, progressive rollout, monitoring, measurement, and testing, our advisors check in to understand what they’re experiencing. The program is meant to surface some of the things our customers encounter on their journey to best practices, and every new perspective gained is extremely valuable.

The goal of Split @ Split is to continuously improve our platform. We achieve this by gathering best practices and tips from the pros, as well as strengthening our support for customers. The following article is a check-in with one of the engineers involved with the program.

Discoveries on Experimentation

Nicolino Ayerdi is a Frontend Engineer at Split who has been getting his hands dirty, experimenting with Split on the Split platform. Specifically, he’s tasked with building out an experiment around editing and deleting environments. Part of building out this functionality is adding analytics to properly measure the new feature.

Where did this experiment idea originate? The Split Nucleo team, in charge of application administration, uses both qualitative data (user feedback) and quantitative data (utilization analytics) to develop a hypothesis aimed at reducing friction for Split users who often edit or delete environments within their workspaces. While the functionality already exists, the team sees an opportunity to make it more user friendly.

The goal of the experiment is to learn whether or not refined functionality would be a valuable change for users. Either way, Ayerdi and the Nucleo team are gathering additional knowledge and ideas on other potential optimization tactics to implement for future features.

In the process, Ayerdi has discovered a few things about experimenting with Split @ Split. Here are a few tips he wants customers to remember before, during, and after launching experiments.

Before Launch

Before releasing an experiment, Ayerdi says to “remember that you don’t know what kind of code you will run into.” He goes on: “It can be simple, it can be tangled, it can have old patterns, new patterns. It’s hard to know until you roll up your sleeves and dig in. Be prepared. First of all, take a look at the code and evaluate how complicated it is to toggle that code. We call ‘toggle’ the splitting of a component in two: the old version and the new version with the new feature. After doing that, you’ll be able to determine if you need to do some pre-work to make the toggling easier. Then, estimate the scope of your job and get going.”
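
To make that concrete, here’s a minimal sketch of the kind of toggle Ayerdi describes, assuming the Split JavaScript SDK’s getTreatment call. The flag and function names are hypothetical stand-ins for the environment editor work, not the actual implementation.

```typescript
// A minimal sketch of "toggling" a component: the component is split in two,
// and a feature flag decides which version runs. Assumes the Split JavaScript
// SDK (@splitsoftware/splitio); flag and function names are hypothetical.
import { SplitFactory } from '@splitsoftware/splitio';

const factory = SplitFactory({
  core: {
    authorizationKey: 'YOUR_CLIENT_SIDE_SDK_KEY',
    key: 'user-123', // traffic key: the user being bucketed into the experiment
  },
});
const client = factory.client();

// Old and new implementations live side by side for the life of the experiment.
function renderLegacyEnvironmentEditor(): string {
  return 'legacy environment editor';
}
function renderNewEnvironmentEditor(): string {
  return 'redesigned environment editor';
}

export function renderEnvironmentEditor(): string {
  // 'environment_editor_redesign' is a hypothetical flag name.
  const treatment = client.getTreatment('environment_editor_redesign');

  // Anything other than 'on' (including 'control', the SDK's fallback when the
  // flag can't be evaluated) keeps users on the old version.
  return treatment === 'on'
    ? renderNewEnvironmentEditor()
    : renderLegacyEnvironmentEditor();
}
```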

In development work, this process is called “technical feasibility,” or simply “evaluation.” It’s important to spend time reviewing what’s already in your code to see what you can do. Establish what you’re working with so you can identify challenges in the early planning stages. This will also give teams a proper estimate of the work involved. Just like they say in building: measure twice, cut once.

Note: knowing where you might be doing experiments before starting implementation can make life a whole lot easier. By having this broad view ahead of time, you’ll be able to strategically design code in a way that’s simple to swap things in and out.
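
One hedged sketch of that pre-work, under the same hypothetical names: route every flag check through one small helper, so each component ends up with a single, obvious seam to toggle.

```typescript
// A sketch of pre-work that makes toggling easier: one tiny helper owns the
// treatment lookup, so call sites only ask a yes/no question and are easy to
// add now and delete later. Helper and flag names are hypothetical.
import { SplitFactory } from '@splitsoftware/splitio';

const client = SplitFactory({
  core: { authorizationKey: 'YOUR_CLIENT_SIDE_SDK_KEY', key: 'user-123' },
}).client();

// The single seam: everything else in the app calls isOn() instead of the SDK.
export function isOn(flagName: string): boolean {
  return client.getTreatment(flagName) === 'on';
}

// A call site stays a one-liner, which keeps the eventual cleanup small too:
// const editor = isOn('environment_editor_redesign') ? newEditor() : legacyEditor();
```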

During the Experiment

“Teamwork is essential during your experiments,” Ayerdi says. To work as a unit, communication and transparency are key. When teams bridge silos, they help reduce dependencies, which results in faster releases to production. Ayerdi notes that weekly meetings are a great way to communicate how experiments are going. Working as a group to understand the impact of the experiment as well as monitor performance metrics is a great way to quickly iterate to the next experiment.

A best practice in experimentation is to brainstorm and share findings in all stages of the process. For example, if a product manager discovers an experiment is too heavy a lift, then there’s time to pivot to a more reasonable approach early on. Without everyone on the same page, properly measuring the impact of an experiment is impossible.

“During an experiment, toggling a component shouldn’t be complicated,” Ayerdi explains. “Split even has an automated script to do that. However, if the code is tangled, that’s when things get complicated. Unfortunately, you’ll be paying the price of previous coding decisions. It’s nobody’s fault,” he continues. “It’s the nature of our work to fail, learn, and continue on.”

Failing isn’t always a bad thing in software development. With Split, failure can lead to success, and that’s part of the brilliance. By letting you fail fast, Split helps you learn and rapidly refine features.

Post Experiment

Once an experiment has concluded, here’s what Ayerdi says to do: “You should clean up the code. This probably means you have a component split in two. In this case, remove the version that’s no longer in use. Then, there’s less code to maintain. You don’t want too much unused code on your hands,” Ayerdi stresses.
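
Continuing the hypothetical sketch from earlier, the cleanup might look like this: once the new version wins, the flag check and the losing variant are deleted and only one code path remains.

```typescript
// A sketch of post-experiment cleanup, continuing the hypothetical example:
// the flag check and the losing variant are removed, leaving one code path.

// Before cleanup (while the experiment ran):
//   return treatment === 'on'
//     ? renderNewEnvironmentEditor()
//     : renderLegacyEnvironmentEditor();

// After cleanup: no flag, no dead branch, less code to maintain.
function renderNewEnvironmentEditor(): string {
  return 'redesigned environment editor';
}

export function renderEnvironmentEditor(): string {
  return renderNewEnvironmentEditor();
}
```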

This last step is arguably one of the most important. It’s part of what dev teams call “Lifecycle Management.” Much like a house: if you don’t maintain and clean regularly, major problems arise! When old, unused code is left behind, you run the risk of bloat, bugs, and expensive technical debt. Nobody wants that! Best practices in software development say to always create a clean-up ticket. If you’re an engineer who created that split, you can even leave comments for the next person on how to manage and maintain things right in the code. You’ll thank yourself later. Somebody else might thank you, too.
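
For illustration only, that hand-off comment might look like the snippet below; the ticket number and flag name are invented.

```typescript
// An invented example of the kind of comment Ayerdi suggests leaving next to a
// split, pointing at a hypothetical clean-up ticket for whoever removes it.

// TODO(NUCLEO-1234): this flag gates the environment editor redesign
// experiment. When it concludes, keep the winning variant, delete the other,
// and retire the flag in the Split workspace.
export const EDITOR_REDESIGN_FLAG = 'environment_editor_redesign';
```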

That’s it from Split @ Split for today. Stay tuned for more insights from the inside as the program progresses. Looking forward to sharing them with you on the Split blog page.
