Frequently Asked Questions

A curated summary of the top questions asked in our Slack community, typically about implementation, functionality, and building better products in general.
GENERAL

How can I ensure consistent experiment values across client and server components in a Next.js app using a Stable ID in Statsig?

Date of Slack thread: 8/12/24

Anonymous: So I’ve been trying to merge the buckets / make sure the client and server components of my Next.js app stay in the same experiment (i.e., in sync: both should be in test, or both in control). However, it seems like there isn’t an easy way to get a “source of truth” for this. I’ve managed to generate a universal “Stable ID” to help with this, but is there a way to ensure my experiment values stay the same as well?

tore (Statsig): Having a single identifier that is consistent across the front end and back end is the solution to this - Statsig experiment group evaluation is based on that identifier and the group size/rollout %. Have you tried checking on both the server and the client with your identifier to verify you get the same group?
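
For reference, a minimal sketch of that kind of sanity check, assuming the statsig-js client SDK and the statsig-node server SDK; “enabled” is a hypothetical parameter name and sharedId stands in for whatever identifier you generate. The two halves live in client and server code respectively and are shown side by side only for comparison; the important part is that both sides receive the same identifier in the same field of the user object:

import statsig from "statsig-js"; // client SDK (browser)
import Statsig from "statsig-node"; // server SDK (Node)

// A shared identifier, generated however your app generates it.
const sharedId = "the-shared-identifier";

// Client side: initialize with the identifier, then read the experiment.
await statsig.initialize("client-sdk-key", { userID: sharedId });
const clientValue = statsig
  .getExperiment("feed-as-homepage")
  .get("enabled", false);

// Server side: check the same experiment with the same user object.
await Statsig.initialize("server-secret-key");
const serverExperiment = await Statsig.getExperiment(
  { userID: sharedId },
  "feed-as-homepage"
);
const serverValue = serverExperiment.get("enabled", false);

// With the same identifier passed on both sides (and an experiment whose
// ID type matches that identifier), clientValue and serverValue should agree.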

Anonymous: The groups end up being different, but I’m fairly certain the identifier is the same. When I log my stable ID in either location they match, and they also show up in the Statsig log. E.g., I’m supposed to be in the Control group according to the Exposure Stream, but my server log (despite showing the same stable ID, which is what I then set my userID to with useExperiment) ends up logging that it is in a “Test” group.

Anonymous: In fact, now that I’m looking at this, it seems like my server experiment is consistently bucketing me as “Test”, even when I use incognito mode and rerun new versions with new stable IDs.

Anonymous: This is in local dev; is there a reason it would be locked in this state?

tore (Statsig): Is your experiment based on that ID? Which experiment are you checking?

Anonymous: What do you mean by “based on that ID”? My experiment is a 50/50 split, but since I’m using client + server Next.js code, my understanding is I need to set a stable ID (which then gets passed as a userID to server components).

Anonymous: More than happy to share any code / project id or whatever you need to get more info.

Anonymous: This is all in local dev by the way, no clue if that matters.

tore (Statsig): Every Statsig entity has an ID type. So the gate/experiment you are checking has an ID type - either stableID, userID, or a customID. When you check it on the client side, it will evaluate using that ID from the user object. Ditto on the server. Right now, it sounds like you pass the identifier around, and sometimes it’s considered the userID, and sometimes it’s the stableID.
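
For context, a Statsig user object can carry more than one identifier type at once, and each gate/experiment evaluates against whichever one is configured as its ID type. A minimal sketch (the values are illustrative):

const user = {
  // Used by gates/experiments whose ID type is userID.
  userID: "user-123",
  customIDs: {
    // Used by gates/experiments whose ID type is stableID.
    stableID: "a-generated-stable-id",
  },
};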

Anonymous: Oh, I think my assumption was to use the stable ID as the userID in the server-side user object, like so:

const useExperiment = await getExperimentSync(
  {
    userID: stableId ?? "",
  },
  "feed-as-homepage"
);

Is this incorrect syntax if my plan is to use the stable ID server-side?

tore (Statsig): Ah, I see! You need to specify it as a custom ID. So your user will look like this:

{
  customIDs: {
    stableID: stableId ?? ""
  }
}
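
Applied to the earlier getExperimentSync snippet, the server-side check would then look roughly like this (a sketch that reuses the helper and names from above and assumes it takes a Statsig user object as its first argument):

const experiment = await getExperimentSync(
  {
    // Pass the stable ID as a custom ID named "stableID" rather than as
    // userID, so the server evaluates against the same identifier the
    // client uses for stableID-based experiments.
    customIDs: {
      stableID: stableId ?? "",
    },
  },
  "feed-as-homepage"
);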

Anonymous: Interesting. Unfortunately, this change made Next.js very angry and it crashed. I wonder if it’s because the stable ID isn’t being set before this happens (though my assumption would be that the empty string would protect against a fatal error?).

tore (Statsig): Any exception trace?
