Product Updates

We help you ship faster. And we walk the walk.
Brock Lumbard
Product Manager, Statsig
2/15/2025

🟢 Node Server Core SDK

Server Core is a full rewrite of our Server SDKs with a shared, performance-focused Rust library at the core - and bindings for each language you'd like to deploy it in. Today, we're launching Node Server Core (Node Core).

Performance & functionality exclusive to Server Core

Node Core leverages the natural speed of a core written in Rust - but also benefits from all of our latest optimizations in a single place. Our initial benchmarking suggests that Node Server Core can evaluate 5-10x faster than our native Node SDK. Beyond that, Node Core supports new features like Contextual Multi-Armed Bandits, and advanced bootstrapping functionality, like bootstrapping Parameter Stores to your clients. Using Node Core with our Forward Proxy has even more benefits, as changes can be streamed, cutting CPU usage to roughly one-tenth.

Node Server Core is in open beta beginning today - see our docs to get started. In the coming months, we'll ship Server Core in Ruby, PHP, and more - if you're looking forward to a new language, let us know in Slack.

Brock Lumbard
Product Manager, Statsig
2/14/2025

🏁 Experiment at Startup

Evaluating at startup: an unsolved problem

One caveat of most experimentation implementations is the latency required to fetch experiment values for each user. Various approaches attempt to work around this, some of which Statsig provides - local evaluation SDKs, bootstrapping, and non-blocking initialization - but each has its own drawback: security, speed, and making sure you have the latest values (respectively).

The Local Eval Adapter

Today we're announcing a new feature that we believe resolves many of these concerns for experimenting at app startup: the Statsig Local Eval Adapter. With this approach, you can ship an app version or webpage with a set of config definitions that can be evaluated immediately on startup. After that initial evaluation, values from the network can take over.

Differences from other approaches

While local evaluation SDKs - which download the experiment ruleset for all users - could theoretically solve this problem by shipping that ruleset with the SDK, they couldn't switch into a "precomputed" mode after startup, meaning that shipping configurations with an app meant compromising security. With the Local Eval Adapter, you can be selective about the info it includes, ensuring security. Check out the Local Eval Adapter in our docs!
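The startup flow described above can be sketched as follows. This is an illustrative model only, not Statsig's actual API: the class, method names, and rule format are hypothetical, and the bundled "definitions" are stand-ins for the config definitions you'd ship with an app build.

```python
# Hypothetical sketch of the Local Eval Adapter flow: evaluate bundled
# config definitions at startup, then let network values take over.

class LocalEvalAdapter:
    def __init__(self, bundled_configs):
        # Definitions shipped with the app build, available immediately.
        self._configs = dict(bundled_configs)
        self._network_values = None  # precomputed values, once fetched

    def check_gate(self, user, gate_name):
        if self._network_values is not None:
            # After the first network response, prefer fresh values.
            return self._network_values.get(gate_name, False)
        rule = self._configs.get(gate_name)
        return bool(rule and rule(user))  # evaluate the bundled rule locally

    def on_network_update(self, values):
        self._network_values = values


adapter = LocalEvalAdapter({"new_checkout": lambda u: u["country"] == "US"})
# At startup: evaluated locally, with zero network latency.
assert adapter.check_gate({"country": "US"}, "new_checkout") is True
# Later: network values arrive and take over.
adapter.on_network_update({"new_checkout": False})
assert adapter.check_gate({"country": "US"}, "new_checkout") is False
```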

Vineeth Madhusudanan
Product Manager, Statsig
2/13/2025

Reuse Experiment Salts

The Statsig SDKs use deterministic hashing to bucket users. This means that the same user being evaluated for the same experiment will be bucketed identically - no matter where that happens. Every experiment has its own unique salt, so that assignments are independent across experiments.
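A minimal sketch of the idea, assuming a generic hash-based scheme (this is not Statsig's actual hashing algorithm or bucket count): hashing the salt together with the user ID yields a stable bucket, and changing the salt reshuffles assignments.

```python
import hashlib

def bucket(user_id: str, salt: str, num_buckets: int = 10000) -> int:
    # Deterministic: the same user + salt always lands in the same bucket,
    # no matter where the evaluation happens.
    digest = hashlib.sha256(f"{salt}.{user_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_buckets

# Same experiment salt -> identical bucketing everywhere.
assert bucket("user-42", "exp_a_salt") == bucket("user-42", "exp_a_salt")
# A different salt gives an independent (re-shuffled) assignment,
# which is why reusing a salt preserves control/test buckets.
```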

For advanced use cases - e.g., a series of related experiments that need to reuse the control and test buckets - we now expose the ability to copy and set the salts used for deterministic hashing. This is meant to be used with care, and is only available to Project Administrators. It is available in the Overflow (...) menu in Experiments.

Vineeth Madhusudanan
Product Manager, Statsig

Extreme Measurement with Min and Max on Statsig WHN

We’re excited to release Max/Min metrics on Statsig Warehouse Native. Max and Min metrics allow you to easily track users’ extremes during an experiment; this can be extremely useful for performance, score, or feedback use cases. For example, these easily let you:

  • Understand how your performance changes impacted users’ worst experiences in terms of latency

  • Understand if changes to your mobile game made users’ peak high scores change

  • Measure the count of users in your experiment who ever left a 2-star review or lower - using MIN(review_score) with a threshold setting

Mins and maxes can map directly onto users’ best and worst experiences, and now it’s just a few clicks to start measuring how they’re changing with any feature you test or release.
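The review-score example above reduces to a simple per-user aggregation. Here's a sketch of the underlying computation (illustrative only; on Warehouse Native this would run as SQL against your warehouse, not in Python):

```python
from collections import defaultdict

# (user_id, review_score) events logged during the experiment
events = [("u1", 5), ("u1", 2), ("u2", 4), ("u2", 3), ("u3", 1)]

# MIN(review_score) per user
per_user_min = defaultdict(lambda: float("inf"))
for user, score in events:
    per_user_min[user] = min(per_user_min[user], score)

# Threshold setting: count users whose minimum ever hit 2 stars or lower.
low_reviewers = sum(1 for m in per_user_min.values() if m <= 2)
assert low_reviewers == 2  # u1 (min 2) and u3 (min 1)
```

A MAX metric works the same way with `max` and `-inf`, capturing each user's peak (e.g., worst latency or highest score).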

Vineeth Madhusudanan
Product Manager, Statsig

Ship an experiment with an (experiment-specific) Holdback

When you're done with your experiment, you can now choose to ship it with an experiment-specific holdback. This is helpful when you're done with the test and are shipping a test group, but still want to measure impact on a small subset of the population to understand longer-term effects.

Example use case: when ending a 50% Control vs. 50% Test experiment, you can ship Test with a 5% experiment-specific holdback. Statsig will ship the Test experience to 95% of your users - and will continue to compute lift vs. the 5% holdback. It compares this 5% holdback (who don't get the test experience) to a similarly sized group who got the test experience when you made the ship decision. You can ship to the holdback when you conclude the experiment. See docs.
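The group construction in that example can be sketched deterministically. This is illustrative only, not Statsig's actual mechanism; the salt name and percentages are hypothetical placeholders:

```python
import hashlib

def pct(user_id: str, salt: str) -> float:
    # Stable 0.00-99.99 percentile per user, derived from a hash.
    digest = hashlib.sha256(f"{salt}.{user_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % 10000 / 100

def assign(user_id: str) -> str:
    p = pct(user_id, "ship_decision_salt")  # hypothetical salt name
    if p < 5:
        return "holdback"    # 5%: keeps the pre-ship (control) experience
    if p < 10:
        return "comparison"  # similarly sized 5%: gets the test experience
    return "shipped"         # remaining 90%: also gets the test experience

# 95% of users ("comparison" + "shipped") get the test experience;
# lift is then computed between "comparison" and "holdback".
groups = {assign(f"user-{i}") for i in range(1000)}
assert groups == {"holdback", "comparison", "shipped"}
```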

Statsig also natively supports Holdouts. These are typically used across features, and aren't experiment-specific.

Akin Olugbade
Product Manager, Statsig
1/31/2025

👤 User Profiles

Your user data just got more manageable. User Profiles now store user properties independently from events, creating a single source of truth for user attributes that can be joined with any event during analysis.

What You Can Do Now

  • Join user profile data with any event in Metrics Explorer without requiring properties on the events themselves

  • Send events with minimal payload by removing redundant user properties from .logEvent() calls

  • Maintain a centralized, always-current record of user attributes

  • Access a complete view of user properties for any user in the dedicated Users Tab

How It Works

  • User properties sent through .logEvent() automatically sync to the user's profile

  • New properties are added to the profile while existing ones update as values change

  • During analysis, user profile properties are available to join with any event, regardless of when the event occurred

Impact on Your Analysis

Let's say you run a social network and track a user's friend count. Instead of sending this property with every interaction event, you can:

  1. Store friend count once in the user profile

  2. Update it only when it changes

  3. Analyze any event (likes, comments, posts) by friend count segments

  4. Trust that you're always using the most current user data

This separation of user context from event data gives you cleaner event tracking and more reliable analytics, while reducing the complexity of your event logging code.

Akin Olugbade
Product Manager, Statsig
1/31/2025

⚙️ Custom Metrics in Funnels

Funnels become more powerful with the ability to use saved custom metrics as funnel steps. This integration eliminates the need to manually reconstruct complex event combinations or filtered events each time you build a funnel.

What You Can Do Now

  • Use saved custom metrics as steps in your conversion funnels

  • Apply filtered events and multi-event combinations consistently across your analyses

  • Build funnels faster by using your existing metric definitions

  • Maintain consistent event definitions across your team's funnel analyses

How It Works

  • When creating a funnel step, you can now select from both raw events and your saved custom metrics

  • Each custom metric maintains its original configuration, including filters and event combinations

  • Changes to a custom metric automatically reflect in any funnel using it as a step

  • Mix and match raw events and custom metrics within the same funnel
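As a rough sketch of how a funnel can mix raw events and a saved custom metric as steps (illustrative only; the step and metric structures here are hypothetical, not Statsig's data model):

```python
# A saved custom metric: multiple success events, minus test accounts.
completed_signup = {
    "events": {"email_signup", "oauth_signup"},
    "exclude": lambda e: e.get("is_test_account", False),
}

def step_matches(step, event: dict) -> bool:
    if isinstance(step, str):                      # raw event step
        return event["name"] == step
    return (event["name"] in step["events"]        # custom-metric step
            and not step["exclude"](event))

# Mix and match: a raw event followed by the saved custom metric.
funnel = ["visit_landing", completed_signup]
user_events = [
    {"name": "visit_landing"},
    {"name": "oauth_signup", "is_test_account": False},
]
assert all(step_matches(s, e) for s, e in zip(funnel, user_events))
```

Because the metric definition lives in one place, every funnel that uses it picks up changes automatically.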

Impact on Your Analysis

Say you're tracking signup conversion and your "Completed Signup" step needs to capture multiple success events while excluding test accounts. Instead of rebuilding this logic for each funnel:

  1. Use your saved custom metric that already has the correct configuration

  2. Drop it directly into your funnel as a step

  3. Trust that all your funnel analyses use consistent event definitions

This update reduces manual setup time and helps your team measure conversion points consistently across your analytics.

Akin Olugbade
Product Manager, Statsig
1/31/2025

📊 Distribution Charts++

Distribution Charts now offer three specialized views to help you uncover patterns in your user behavior and event data, along with smarter automatic binning.

What You Can Do Now

  • Analyze user engagement patterns with Per User Event Frequency distributions to see how often individual users perform specific actions

  • Explore value patterns across events using Event Property Value distributions to understand the range and clustering of numeric properties

  • Discover user-level patterns with Aggregated Property Value distributions, showing how property values sum or average per user over time

  • Let the system automatically optimize your distribution bins, or take full control with custom binning

How It Works

  • Per User Event Frequency shows you the spread of how often users perform an action, like revealing that most users share content 2-3 times per week while power users share 20+ times

  • Event Property Value examines all instances of a numeric property across events, such as seeing the distribution of order values across all purchases

  • Aggregated Property Value calculates either the sum or average of a property per user, helping you understand patterns like the distribution of total spend per customer

  • Smart binning automatically creates 30 optimized buckets by default, or you can set custom bucket ranges for more precise analysis
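The default binning behavior can be approximated with evenly spaced buckets over the observed range. This is a simple stand-in sketch, not the actual optimization Statsig applies:

```python
def auto_bins(values: list[float], num_buckets: int = 30) -> list[int]:
    # Evenly spaced buckets spanning the observed range; 30 by default,
    # mirroring the smart-binning default described above.
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_buckets or 1  # guard against a single value
    counts = [0] * num_buckets
    for v in values:
        idx = min(int((v - lo) / width), num_buckets - 1)  # clamp the max
        counts[idx] += 1
    return counts

counts = auto_bins([float(i) for i in range(300)])
assert len(counts) == 30 and sum(counts) == 300
```

Custom binning would simply replace the computed `width` and `lo` with user-specified bucket ranges.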

Impact on Your Analysis

These new distribution views help you answer critical questions about your product:

  • Is your feature reaching broad adoption or mainly used by power users?

  • What's the typical range for key metrics like transaction values or engagement counts?

  • How do value patterns differ when looking at individual instances versus per-user aggregates?

The combination of flexible viewing options and intelligent binning makes it easier to find meaningful patterns in your data, whether you're analyzing user behavior, transaction patterns, or engagement metrics.

distributionsv2
Brock Lumbard
Product Manager, Statsig
1/31/2025

🐍 Python Server Core SDK

Server Core is a full rewrite of our Server SDKs with a shared, performance-focused Rust library at the core - and bindings for each language you'd like to deploy it in. Today, we're launching Python Server Core (Python Core).

Performance & Python-threading optimized

Python Core leverages the natural speed of a core written in Rust - but also benefits from all of our latest optimizations in a single place. Our initial benchmarking suggests that Python Server Core can evaluate 5-10x faster than our native Python SDK. As an added benefit, Python Core's refresh mechanism is a background process, meaning it never needs to hold the GIL. Using Python Core with our Forward Proxy has even more benefits, as changes can be streamed, cutting CPU usage to roughly one-tenth.

Python Server Core is in open beta beginning today - see our docs to get started. In the coming months, we'll ship Server Core in Node, PHP, and more - if you're looking forward to a new language, let us know in Slack.

Vineeth Madhusudanan
Product Manager, Statsig
1/30/2025

Differential Impact Detection on Cloud

This feature automatically flags when sub-populations respond very differently to an experiment. This is sometimes referred to as Heterogeneous Effect Detection or Segments of Interest.

Overall results for an experiment can look "normal" even when there's a bug that causes crashes only on Firefox, or when a feature performs very poorly only for new users. You can now configure these "Segments of Interest", and Statsig will automatically analyze and flag experiments where we detect differential impact. You'll be able to see the analysis that resulted in the flag.
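The Firefox example above boils down to comparing per-segment lift against overall lift. Here is an illustrative sketch only (not Statsig's actual statistics, which would involve significance testing rather than a fixed divergence threshold):

```python
def lift(test_mean: float, control_mean: float) -> float:
    # Relative lift of test vs. control.
    return (test_mean - control_mean) / control_mean

# segment -> (control crash rate, test crash rate); numbers are made up.
results = {
    "overall": (0.010, 0.011),
    "Chrome":  (0.010, 0.010),
    "Firefox": (0.010, 0.055),  # a bug spikes crashes only on Firefox
}

overall_lift = lift(*reversed(results["overall"]))
# Flag segments whose effect diverges sharply from the overall effect.
flagged = [
    seg for seg, (c, t) in results.items()
    if seg != "overall" and abs(lift(t, c) - overall_lift) > 1.0
]
assert flagged == ["Firefox"]  # overall looks "normal"; Firefox does not
```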

Learn about how this works or see how to turn this on in docs. This feature shipped on Statsig Warehouse Native last summer and is now available on Statsig Cloud too!
