Flagship Day 3 Highlights:
Experimentation is crucial for driving innovation and for mitigating risk when introducing new features like AI.
Joshua and Jessica from Split will share best practices for using experiments to introduce AI into applications and user bases:
- Split's dynamic configuration allows you to test and optimize AI parameters without deploying new code.
- A/B testing and measurement are essential for evaluating AI performance and making data-led decisions.
- Incorporating AI into products requires thoughtful implementation and attention to user experience.
- Taking an MVP (Minimum Viable Product) approach with phased testing saves time and resources while uncovering insights.
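The dynamic-configuration idea above can be sketched in a few lines. This is a minimal illustration, not Split's actual SDK: the flag payload shape (a treatment plus a JSON config string) and all names here are assumptions chosen to mirror the pattern described, where AI parameters live in the flag service rather than in code.

```python
import json

# Hypothetical flag store: in practice a flag service returns this payload
# at evaluation time, so changing these values requires no deploy.
FLAG_CONFIGS = {
    "ai_chatbot": {
        "treatment": "on",
        "config": json.dumps({"model": "gpt-4", "temperature": 0.2, "max_tokens": 512}),
    }
}

# Hardcoded fallback used when the flag is off or missing.
DEFAULT_PARAMS = {"model": "gpt-3.5-turbo", "temperature": 0.7, "max_tokens": 256}

def get_ai_params(flag_name: str) -> dict:
    """Read AI parameters from the flag's dynamic config, falling back to defaults."""
    flag = FLAG_CONFIGS.get(flag_name)
    if flag is None or flag["treatment"] != "on":
        return DEFAULT_PARAMS
    params = DEFAULT_PARAMS.copy()
    params.update(json.loads(flag["config"]))  # dynamic config overrides defaults
    return params
```

With this shape, tuning the model, temperature, or token budget is a configuration change in the flag service, not a code release.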
An example scenario shows how to build, measure, and learn from an AI chatbot feature:
- Incorporate hooks or switches for easy feature control using feature flags.
- Design the code base and infrastructure to use dynamic configuration or feature flags.
- Launch AI behind feature flags for quick feature toggling.
- Target specific user groups for limited exposure or gradual rollout.
- Use feature flags and targeting rules for more options and flexibility.
- Align AI-based feature metrics with business goals for evaluation.
- Continuously measure and iterate for improvement, then adapt and pivot with feature flags and experimentation.
- Stay up to date with the latest research, best practices, and ethical guidelines in AI development.
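The gating and gradual-rollout steps above can be sketched as follows. This is an illustrative stand-in, not Split's API: the beta-tester rule, the rollout percentage, and the hash-based bucketing are assumptions showing one common way flag services combine targeting rules with percentage rollouts.

```python
import hashlib

ROLLOUT_PERCENT = 10          # illustrative: expose the AI path to 10% of users
BETA_TESTERS = {"user-42"}    # illustrative targeting rule: always include beta users

def bucket(user_id: str) -> int:
    """Deterministically map a user to a 0-99 bucket so assignment is stable."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def chatbot_enabled(user_id: str) -> bool:
    if user_id in BETA_TESTERS:                # targeted group gets limited exposure first
        return True
    return bucket(user_id) < ROLLOUT_PERCENT   # then a gradual percentage rollout

def answer(user_id: str, question: str) -> str:
    # The flag gate wraps the AI call site, so the feature can be
    # toggled or rolled back instantly without a deploy.
    if chatbot_enabled(user_id):
        return f"[AI] response to: {question}"    # new AI path
    return f"[FAQ] canned answer to: {question}"  # existing fallback
```

Because the bucket is derived from a hash of the user ID, each user sees a consistent experience across sessions, and raising `ROLLOUT_PERCENT` widens exposure without reshuffling existing users.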
Collaboration between Split and Vercel on experimentation and edge compute topics
- Leverage edge compute capabilities to improve the front-end experience for users.
- Render experiments at the edge to avoid client-side burdens and improve performance.
- Weigh the trade-offs between client-side and server-side rendering for experiments.
- Prevent performance degradation and excessive code by running experiments at the edge.
- Use feature flags and experimentation at the edge to measure outcomes and trim front-end code.
- Watch Cumulative Layout Shift (CLS) to keep user experiences visually smooth.
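The edge-rendering idea above can be sketched as a request handler that decides the variant server-side. This is a simplified illustration, not Vercel's or Split's edge runtime: the experiment name, variants, and hash-based assignment are assumptions standing in for a real flag evaluation at the edge.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list) -> str:
    """Hash user + experiment so the same user always gets the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def handle_request(user_id: str) -> str:
    # Deciding the variant at the edge means the client downloads only one
    # branch of the experiment and never re-renders after the flag resolves,
    # which is the client-side pattern that causes layout shift (CLS).
    variant = assign_variant(user_id, "hero_copy", ["control", "ai_summary"])
    if variant == "ai_summary":
        return "<h1>Your AI-generated summary</h1>"
    return "<h1>Welcome back</h1>"
```

Compared with a client-side test, the response already contains the chosen variant, so there is no flash of the control content and no extra experiment code shipped to the browser.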
Using Feature Flags to Test & Iterate AI Capabilities
If you want to implement AI in your own application, you should do it carefully and thoughtfully. As part of that best practice, it is essential to leverage a tool like Split for any product launch. By combining feature management with measurement and learning capabilities, you can release new AI capabilities more confidently, adapt to user needs, and drive continuous improvement. Join this session, hosted by Split’s advisory experts, to learn more about measuring value, as well as game-changing use cases to train and deploy new models without deploying code.
Experiment at the Edge
Patricio “Pato” Echagüe is the CTO and co-founder of Split, bringing over 18 years of software engineering experience. Pato was one of the first three engineers at RelateIQ, which was acquired by Salesforce, and a lead committer at DataStax for Hector, the open-source Java client. Pato will interview Guillermo Rauch, CEO of Vercel, a Split technology partner that is extending the art of the possible for Split customers. Edge computing is an increasingly popular part of the developer toolkit. In this session, we’ll discuss the importance of having feature flags and experimentation capabilities available in the Edge toolkit, cover best practices, and more.