Your team is busy shipping features and keeping a close eye on product metrics. While it’s easy (with Statsig) to understand the impact of individual feature launches, one question comes up often: what’s the collective impact of multiple features?
At Facebook, every team calculated the cumulative impact of all features shipped over the previous 6 months. To do that accurately, we used “holdouts” created at the beginning of each half. A holdout is usually small (1–5% of users) and, as the name implies, holds that set of people out of any new features launched during that half.
This gave us a baseline for measuring the cumulative impact of multiple launches over 6 months. At the end of the half, we released that holdout and created a new one for the next half. Holdouts are powerful and have many other uses, including measuring long-term effects, quantifying subtle ecosystem changes, and debugging metric movements.
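To make the mechanics concrete, here’s a minimal sketch of how deterministic holdout assignment could work; this is an illustration, not Statsig’s actual implementation, and the salt and bucket count are made up. Each user hashes to a stable bucket, and users in the bottom few percent of buckets are kept on the baseline experience for every new feature in the half.

```typescript
// Illustrative only: a hypothetical sketch of deterministic holdout bucketing.
// Each user hashes to a stable bucket in [0, 10000); users landing in the
// bottom N buckets are in the holdout and always see the baseline experience.
import { createHash } from "crypto";

const HOLDOUT_PERCENT = 2; // e.g. a 2% holdout created at the start of the half

function bucketFor(userID: string, salt: string): number {
  // Hash userID + salt to a stable bucket so assignment never changes mid-half.
  const digest = createHash("sha256").update(`${salt}:${userID}`).digest();
  return digest.readUInt32BE(0) % 10000;
}

function inHoldout(userID: string, holdoutSalt = "holdout_h1"): boolean {
  return bucketFor(userID, holdoutSalt) < HOLDOUT_PERCENT * 100;
}

function shouldGetFeature(userID: string, featureRollout: (id: string) => boolean): boolean {
  // Holdout users get the baseline experience regardless of any feature's rollout state.
  if (inHoldout(userID)) return false;
  return featureRollout(userID);
}
```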
Today, we’re making all this available via Holdouts on Statsig. Setting up a holdout is a cinch: you pick the size and the features you want to hold back from users. You can also make a holdout ‘global’, which means every new feature will automatically respect it. And occasionally (hopefully sparingly) you might want to run a “back-test”, which you can do by applying a holdout to an existing set of features.
Once set up, our Pulse engine will automatically compute the impact of all those features compared against the baseline. No additional configuration or code necessary.
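Your gate checks don’t change either. Here’s a minimal sketch using the statsig-node server SDK; the gate name is hypothetical and exact call signatures may vary across SDK versions. Users assigned to the holdout simply receive the gate’s default value.

```typescript
// Minimal sketch with the statsig-node server SDK. "new_checkout_flow" is a
// hypothetical gate name; holdout membership is handled by Statsig, so no
// holdout-specific branching is needed in application code.
import statsig from "statsig-node";

async function renderCheckout(userID: string): Promise<string> {
  await statsig.initialize(process.env.STATSIG_SERVER_SECRET!);

  // Same call as before the holdout existed; holdout users get the default (off) value.
  const useNewFlow = await statsig.checkGate({ userID }, "new_checkout_flow");

  return useNewFlow ? "new-checkout" : "classic-checkout";
}
```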
Go ahead, try it out today! We have a free plan that allows you to get going right away without needing to talk to any sales teams.