Drive smarter product decisions through online controlled experiments
Building an Experimentation Platform
Experimentation platforms consist of a robust targeting engine, a telemetry system, a statistics engine, and a management console.
- The targeting engine is responsible for dividing users across variants.
- Telemetry is the automatic capture of user interactions within the system.
- A statistics engine determines whether an observed change in your metrics is statistically significant and attributable to the treatment.
- The management console is where experiments are configured, metrics are created, and results of the statistics engine are consumed and visualized.
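To make the targeting engine concrete, here is a minimal sketch of deterministic hash-based bucketing, the common way to divide users across variants. The function name and weight schema are illustrative, not a specific platform's API; hashing the user ID together with the experiment name keeps each user's assignment stable within an experiment but independent across experiments.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, weights: dict[str, float]) -> str:
    """Deterministically bucket a user into a weighted variant.

    `weights` maps variant name -> traffic share (shares sum to 1.0).
    Illustrative schema; real platforms add layers, holdouts, etc.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the hash to a point in [0, 1), then walk the cumulative weights.
    point = int(digest[:15], 16) / 16**15
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if point < cumulative:
            return variant
    return variant  # guard against floating-point rounding at 1.0
```

Because assignment is a pure function of the inputs, a user who returns later, or whose events are processed on a different server, lands in the same variant without any shared state.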
Four types of metrics are important to experiments:
- The Overall Evaluation Criteria is a measure of long-term business value or user satisfaction.
- Feature Metrics are specific metrics that are important to the team in charge of the experiment.
- Guardrail Metrics should be directional and sensitive but not necessarily tie back to business value.
- Debugging Metrics must be sensitive but need not be directional or understandable.
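The four roles above could be captured in a metric registry so the statistics engine knows how to treat each metric. The schema and metric names below are hypothetical, purely to illustrate the taxonomy.

```python
# Hypothetical metric registry illustrating the four roles.
# All names and fields are illustrative, not a real platform's schema.
METRICS = [
    {"name": "sessions_per_user",         "role": "oec",       "directional": True},
    {"name": "checkout_conversion",       "role": "feature",   "directional": True},
    {"name": "page_load_time_p95",        "role": "guardrail", "directional": True},
    {"name": "telemetry_event_drop_rate", "role": "debugging", "directional": False},
]

def metrics_for_role(role: str) -> list[str]:
    """List the registered metrics serving a given experimental role."""
    return [m["name"] for m in METRICS if m["role"] == role]
```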
Running A/A Tests
In an A/A Test, both the treatment and control variants are served the same feature, confirming that the engine is statistically fair and that the implementations of the targeting and telemetry systems are unbiased.
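A quick way to see what a healthy A/A test looks like is to simulate one: when both arms draw from the same distribution, roughly alpha (e.g. 5%) of runs should still reject the null by chance. The sketch below uses a simple two-sample z-test, which is adequate at these sample sizes; the function names and parameters are illustrative.

```python
import random
from statistics import NormalDist, mean, stdev

def two_sample_p_value(a: list[float], b: list[float]) -> float:
    """Two-sided z-test p-value for a difference in means
    (a reasonable approximation for the large samples used here)."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def aa_false_positive_rate(runs: int = 500, n: int = 200, alpha: float = 0.05) -> float:
    """Simulate repeated A/A tests: both arms sample the same
    distribution, so about `alpha` of runs should (falsely) reject."""
    rng = random.Random(7)  # seeded for reproducibility
    rejections = 0
    for _ in range(runs):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        if two_sample_p_value(a, b) < alpha:
            rejections += 1
    return rejections / runs
```

If the observed false-positive rate drifts far from alpha, that points to a problem in the platform itself, e.g. biased bucketing or telemetry loss, rather than in any feature.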
Understanding Statistical Power
Statistical power measures an experiment’s ability to detect an effect when there is truly an effect to be detected.
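Power analysis is usually run in reverse: fix the significance level and desired power, then solve for the sample size needed to detect a given effect. The sketch below uses the standard two-sided z-test formula, n = 2((z_{1-α/2} + z_{power})·σ/δ)²; the function name and defaults are illustrative.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta: float, sigma: float,
                          alpha: float = 0.05, power: float = 0.8) -> int:
    """Minimum users per variant to detect a true mean shift of `delta`
    with the given power, assuming a two-sided z-test and per-user
    standard deviation `sigma`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ≈ 1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # ≈ 0.84 for power = 0.8
    return ceil(2 * ((z_alpha + z_power) * sigma / delta) ** 2)

# Detecting a 0.1-unit shift when sigma = 1 needs about 1,570 users per arm:
# sample_size_per_group(0.1, 1.0)  # → 1570
```

Note the inverse-square relationship: halving the detectable effect quadruples the required sample, which is why underpowered experiments so often return inconclusive results.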
Executing an Optimal Ramp Strategy
During a ramp-up, taking too many steps or lingering too long at any one step slows innovation; conversely, jumping in large increments or spending too little time at each step can lead to suboptimal outcomes.
Building Alerts and Automation
By configuring metric thresholds, you set limits within which the experimentation platform can detect anomalies and alert key stakeholders, not only identifying issues but attributing them to their source.
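A minimal sketch of such threshold checking, assuming a hypothetical schema in which each guardrail metric carries a largest tolerated relative drop:

```python
def check_guardrails(deltas: dict[str, float],
                     thresholds: dict[str, float]) -> list[str]:
    """Return alert messages for guardrail metrics that moved past
    their configured threshold.

    `deltas` maps metric name -> observed relative change (e.g. -0.05
    for a 5% drop); `thresholds` maps metric name -> largest tolerated
    drop. Both schemas are illustrative.
    """
    alerts = []
    for metric, limit in thresholds.items():
        delta = deltas.get(metric, 0.0)
        if delta < -limit:
            alerts.append(
                f"ALERT: {metric} moved {delta:+.1%}, "
                f"beyond the -{limit:.1%} guardrail"
            )
    return alerts
```

In a real platform the alert would also carry the experiment and variant that produced the regression, which is what makes attribution, not just detection, possible.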