Frequently Asked Questions

A curated summary of the top questions asked in our Slack community, covering implementation, functionality, and building better products generally.
GENERAL

How can we investigate missing metrics and user allocations in a Statsig experiment?

Date of Slack thread: 8/9/24

Anonymous: Hey team, we are not getting 50% of metrics to our landing-page experiment, and it looks like we are not getting 50% of users into the experiment. Where is the best place in Statsig to see the number of users who passed and failed a specific targeting gate for a specific experiment, so we can investigate? I tried looking at the feature gate's diagnostics, but it shows nothing, and there is no way to filter by experiment. https://console.statsig.com/5y0nB2AafSaMtPRuyJo2oU/gates/country_us_excludes_traffic_from_native_lp/diagnostics

Anonymous: We have nothing there, yet the gate is used by several experiments that are live. Does that indicate an error in our setup? https://console.statsig.com/5y0nB2AafSaMtPRuyJo2oU/1N8MWbsaNCi6Fa8DGT4yYt

Anonymous: <@U059V0Y8W6B> FYI

Makris (Statsig): Hi Sergei, it looks like your screenshot cropped it out. What does the Cumulative Exposures chart at the top of the Pulse tab show?

Anonymous: Hey <@U0727PC0VM0> That’s all we have there. Please see a full screenshot:

Anonymous: That’s what we have in Diagnostics

Makris (Statsig): Oh, I'm sorry Sergei, I was looking on my phone and misread the screenshot you sent. Yes, it was included in the first one.

Makris (Statsig): Sergei, you mentioned that the initial issue was missing metrics on landing-page experiments. I should mention that landing-page experiments are being deprecated, although I'm unsure whether this is causing the issue you describe. Can you please share a link to the landing-page experiment that isn't showing metrics? I think you've only shared links to the gate above.

Makris (Statsig): https://docs.statsig.com/guides/landing-page-experiments/introduction

Makris (Statsig): <@U01RAN2FKJP> Sergei and team are using a feature gate for targeting in experiments. Sergei looked at the cumulative exposures tab for the feature gate to see what traffic was hitting it (to debug the experiments), but he saw essentially no traffic on the feature gate, even though the experiments using it have a lot of traffic. When experiments use a feature gate for targeting, do we log an exposure and/or check for the feature gate, or only for the experiment?

Anonymous: <@U0727PC0VM0> Here is the link to the experiment: https://console.statsig.com/5y0nB2AafSaMtPRuyJo2oU/experiments/22_07_2024_wis_v2277_ty_page_gg_based/results We are getting metrics there, but when we compare them with our DB it seems there should be roughly twice as many conversions as we see in Statsig, taking into account that the test is running on 50% of traffic.

Tore (Statsig): It looks like you are checking this experiment via the HTTP API, is that correct?
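A useful sanity check when allocations look off, as in this thread, is to verify what a deterministic hash-based bucketing scheme should produce. The sketch below is a generic illustration of salted-hash bucketing (the common approach in experimentation platforms), not Statsig's exact algorithm; the salt name and bucket count are hypothetical.

```python
import hashlib

def bucket(user_id: str, salt: str, buckets: int = 10000) -> int:
    """Deterministically map a user ID to a bucket via a salted hash."""
    digest = hashlib.sha256(f"{salt}.{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def in_experiment(user_id: str, salt: str, allocation_pct: float = 50.0) -> bool:
    """A user is allocated if their bucket falls below the allocation cutoff."""
    return bucket(user_id, salt) < allocation_pct / 100 * 10000

# Sanity check: over a large simulated population, the allocated share
# should sit very close to the configured 50%.
n = 100_000
allocated = sum(in_experiment(f"user_{i}", "ty_page_exp") for i in range(n))
share = allocated / n
```

If a simulation like this lands near 50% but the console's exposure counts do not, the discrepancy usually points to exposures not being logged (e.g., the SDK check never firing for some users) rather than to the bucketing itself.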
