When you conclude an A/B test and decide to ship the winning variant, 100% of the experiment's traffic typically shifts to that variant. Statsig now lets you ramp this shift gradually if you want to avoid a sudden, large change.
Typical use case: Suppose you're testing 5 variants, each receiving 20% of traffic. When you make a ship decision, the winning variant jumps from 20% of traffic to 100%. If you'd rather ramp traffic gradually to understand the impact on CPU (or other resources), this feature lets you configure and schedule a multi-step, automated ramp.
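To make the idea concrete, here's a minimal sketch of what a multi-step ramp schedule looks like. The step count and even spacing are illustrative assumptions, not Statsig's actual defaults:

```python
# Hypothetical sketch: the winning variant moves from its experiment
# allocation (e.g. 20%) to full traffic in evenly spaced steps.
def ramp_schedule(start_pct: int, end_pct: int, steps: int) -> list[int]:
    """Return evenly spaced traffic percentages from start to end."""
    increment = (end_pct - start_pct) / steps
    return [round(start_pct + increment * i) for i in range(1, steps + 1)]

print(ramp_schedule(20, 100, 4))  # [40, 60, 80, 100]
```

Each step would hold for a configured duration, giving you time to watch resource metrics before the next increase.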
Learn more here.
Server Core is a full rewrite of our Server SDKs based on a shared, performance-focused Rust library. Today, we're launching the Rust library itself as a standalone, brand-new Rust SDK.
So far, we've launched Node, Java, PHP, Python, and Elixir Core SDKs. Each of those depends on the Rust core, which today we're launching as its own SDK. Given its usage across numerous languages, we've invested in performance optimization of this Rust library, with substantial improvements vs. our original Rust SDK. In addition, Rust Core has new-to-Statsig features like Parameter Stores, Contextual Multi-Armed Bandits, and more.
Rust Core is available on crates.io - read the docs to get started!
Quantifying your impact is critical. Statsig's Insights page provides a clear view of how experiments move a metric of interest. It lets you focus on a single metric, identify which experiments are driving it most, and estimate their aggregated impact. You can also filter down to a team or a tag. This is particularly useful for understanding your team's impact or setting a realistic goal for a future period.
Navigate to the Insights section in the Statsig console, or the Insights tab on each metric, to check it out. Learn more here.
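In spirit, the view ranks experiments by their effect on one metric and rolls the effects up. This toy sketch uses made-up experiment names and impact figures; Statsig's actual estimate accounts for overlap and statistical uncertainty:

```python
# Hypothetical per-experiment lift on a single metric of interest.
impacts = {
    "new_checkout_flow": 0.012,   # +1.2% lift
    "search_ranking_v2": 0.004,
    "onboarding_tweak": -0.002,
}

# Which experiment moved the metric most, and the naive aggregate.
top = max(impacts, key=lambda k: abs(impacts[k]))
aggregate = sum(impacts.values())
print(top, round(aggregate, 4))  # new_checkout_flow 0.014
```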
Want to monitor new sign-ups? Want to be alerted if feature performance is slow or a page fails to load? Topline Metric Alerts allow you to monitor a metric's topline value, independent of any Feature Gate or Experiment Rollout.
To configure a Topline Alert, head to Analytics -> Topline Alerts tab where you can find all your Topline Alerts and configure new ones. Choose the event you want to alert on and the aggregation you're most interested in, then craft a notification message and configure alert subscribers. To learn more, check out our docs here!
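Conceptually, a topline alert aggregates an event over a window and fires when the value crosses a threshold. The event names, aggregation, and threshold below are assumptions for illustration, not Statsig's implementation:

```python
from collections import Counter

def check_topline_alert(events: list[dict], event_name: str,
                        min_count: int) -> bool:
    """Fire when the count of `event_name` in the window drops below `min_count`."""
    counts = Counter(e["name"] for e in events)
    return counts[event_name] < min_count

# A window with only 2 sign-ups against an expected minimum of 5.
window = [{"name": "sign_up"}, {"name": "page_load"}, {"name": "sign_up"}]
print(check_topline_alert(window, "sign_up", min_count=5))  # True
```

In the real feature the aggregation (count, sum, unique users, etc.) and notification routing are configured in the console rather than in code.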
Note: this feature is in Beta and not broadly available (yet!) Please reach out to us if you'd like to be opted in for early access.
We're excited to start rolling out support for Informed Bayesian analysis. This new feature allows you to integrate meaningful priors into Bayesian analyses, leading to faster insights and smarter decision-making.
Reach out in Slack to opt your team in to this.
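To see why a meaningful prior speeds things up, here's a standard normal-normal conjugate update (a textbook sketch, not Statsig's internal model). The prior and sample values are made up:

```python
def posterior(prior_mean, prior_var, sample_mean, sample_var, n):
    """Posterior mean/variance for a normal mean with known sampling variance."""
    precision = 1 / prior_var + n / sample_var
    post_var = 1 / precision
    post_mean = post_var * (prior_mean / prior_var + n * sample_mean / sample_var)
    return post_mean, post_var

# Informative prior (past experiments suggest lift near 0) vs. a near-flat one,
# after the same 100 observations:
informed = posterior(0.0, 0.01, 0.05, 1.0, n=100)
flat = posterior(0.0, 1000.0, 0.05, 1.0, n=100)
print(informed, flat)
```

With the informative prior, the posterior is tighter at the same sample size, which is what lets you reach a confident read sooner.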
The Statsig Contentful app lets you create A/B/n tests and test different content blocks against each other - creating the experiment without leaving Contentful. Marketers can now optimize content, obtain insights, and iterate continuously right from within Contentful.
Learn more here.
Today we're announcing the Statsig Elixir Core SDK, in early Beta. Elixir Core is built on our performance-focused, Rust-based Statsig Server Core library, which we're rolling out across multiple languages and frameworks. The Server Core library receives feature updates and performance optimizations rapidly given its usage across multiple SDKs. Elixir Core is written with Rustler and has some operations that run on dirty schedulers, so it may not work in all Elixir environments. This SDK is in early Beta, and we'd be happy to hear your feedback in our Slack channel. Get started with Elixir Core in our docs!
Keeping a history of attempted experiments lets teams build institutional memory. When you create a new experiment with a hypothesis, Statsig now shows you existing, related experiments in case you want to explore what has already been done. You can always go to the knowledge base to look for this context explicitly - but it's delightful to have it surfaced in-context without being intrusive.
If an implementation issue means that you've over-exposed users to an experiment, you can retroactively apply a filter to only analyze people truly exposed to an experiment. This was previously available on Analyze Only experiments on Warehouse Native (to work around over-exposure from 3rd party assignment tools). It is now available for Assign and Analyze experiments (where you're using the Statsig SDKs for assignment).
To do this, use the Filter Exposures by Qualifying Event option on the experiment's setup page (under advanced settings).
This filtering is also possible to do in Custom Queries (under the Explore tab).
Example use case: Consider an experiment on search suggestions, visible only when people click into the search box. Users are currently exposed when the search bar renders, but this causes dilution, since all users see the search bar. In this case, we'd want to filter exposures down to users who actually clicked into the search box - so we'd point to a Qualifying Event (defined in SQL) that restricts exposures to this subset of users.
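The filtering logic behind the example above can be sketched like this. The record shapes and event name are assumptions for illustration; in practice the qualifying event is defined in SQL against your warehouse:

```python
def filter_exposures(exposures: list[dict], events: list[dict],
                     qualifying_event: str) -> list[dict]:
    """Keep only exposures whose user also logged the qualifying event."""
    qualified = {e["user_id"] for e in events if e["name"] == qualifying_event}
    return [x for x in exposures if x["user_id"] in qualified]

# Both users saw the search bar render, but only user "a" clicked into it.
exposures = [{"user_id": "a"}, {"user_id": "b"}]
events = [{"user_id": "a", "name": "search_box_click"}]
print(filter_exposures(exposures, events, "search_box_click"))
# [{'user_id': 'a'}]
```

Analysis then runs over the filtered set, removing the dilution from users who never engaged with the feature.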
The Experiment Quality Score is a metric designed to summarize the quality of an experiment configured in Statsig. It helps experimenters quickly identify potential issues in experiment setup, execution, and data collection, enabling more confident decision-making. Tracking this score across many experiments can help quantify improvements in experimentation maturity over time.
Learn more about enabling and configuring it here. This is rolling out now.