Frequently Asked Questions

A curated summary of the top questions asked on our Slack community, often relating to implementation, functionality, and building better products generally.
GENERAL

How can I ensure the `enabled` value from `useExperiment` is correct before making a redirect decision in my React component?

Date of Slack thread: 5/9/24

**Anonymous:** I am using:

  • The React SDK
  • `StatsigProvider` with `waitForInitialization=false`
  • The `useExperiment` hook
  • An experiment that is currently configured to return `enabled=true` for 100% of users

My component has code like this:

```js
const enabled = config.get('enabled', false)
```
My assumption was that once `isLoading` becomes `false` (done loading), `enabled` would immediately and always be `true`.

However, I frequently see that when `isLoading` becomes `false`, `enabled` can be `false` for a few renders before it turns `true`.

Is this a bug or am I misunderstanding the return values of `useExperiment`?

Thanks!

**Anonymous:** To put it another way: I get the wrong value for a few renders, and only after that do I get the correct value. This is problematic because my experiment determines whether we should redirect the user to another page. If the first value is wrong, I do the wrong thing (redirect the user when I shouldn’t have) and I can’t correct it. Thanks.

**Vijaye (Statsig):** We don’t automatically update the config object. You will need to refetch the values after init. If you are making a redirect decision, you would want to do it synchronously, right? If you explain the scenario in more detail, we could suggest ways of doing this cleanly.

**Anonymous:** When page `/feature` loads, redirect the user to `/not-available` unless the feature is enabled for them.

Is it possible to know when the config is fully loaded and the `enabled` value is safe to use?
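The pattern implied by the thread can be sketched as a small gate that refuses to act until loading has finished. This is a minimal sketch, not the Statsig SDK's own API: `decideRedirect` is a hypothetical helper, and it assumes `useExperiment` exposes the `isLoading` flag described above. (The other option the thread points at is passing `waitForInitialization={true}` to `StatsigProvider`, so children never render before values are ready.)

```typescript
// Hypothetical helper: never make the redirect decision from a default
// value. While the experiment config is still loading, report 'pending';
// only after loading finishes, map `enabled` to a redirect decision.
type Decision = 'pending' | 'stay' | 'redirect';

function decideRedirect(isLoading: boolean, enabled: boolean): Decision {
  if (isLoading) return 'pending'; // config may still hold the default
  return enabled ? 'stay' : 'redirect';
}

// In the component (sketch; names below are illustrative, not SDK APIs):
//   const { config, isLoading } = useExperiment('feature_experiment');
//   const decision = decideRedirect(isLoading, config.get('enabled', false));
//   if (decision === 'pending') return <Spinner />;  // wait, don't redirect yet
//   if (decision === 'redirect') navigate('/not-available');
```

The key point: a `false` read while `isLoading` is `true` is indistinguishable from the fallback default, so the component should render a placeholder rather than redirect until the value is trustworthy.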
