Date of Slack thread: 5/23/24
Anonymous: Hello, I’m new to Statsig, so I’m wondering if there is a better way to go about the following. When we run a multivariate experiment, since feature gates only support boolean values, we typically create a feature gate after the experiment concludes to deploy the winning variant to all users. The problem is that this usually takes several days, and in the interim we have to keep showing the underperforming variants as well. I don’t think altering the traffic allocation within the experiment is ideal, as it would impact the results (not sure)? Should we be approaching these experiments differently?
Makris (Statsig): Hi, I suspect there might be some confusion between gates and served parameters. This image is a good example of a multivariate XP (I got it from our docs here). You can see that the Targeting step has a spot for a Target Gate. This is where you can drop in your existing Feature Gate, which controls whether a user is allowed into this XP (a binary result). However, the Target Gate does not control which variant is served. If a user passes the Target Gate, they are then randomly assigned to any of the three variants (in this example). The parameters returned by the variants are not binary results but strings. This is the same for both AB tests and ABC tests.
If we were to select variant “Test #2” in this example and roll it out to all users, as Statbot suggested, the winning string would start being served to all customers that pass the Feature Gate going forward. This doc explains how your winning variant can be rolled out to all users.
Makris (Statsig): To summarize, while Feature Gates/Target Gates evaluate a user to a binary Pass/Fail, the parameter being returned need not be limited to a binary result.
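In client code, that distinction looks roughly like this. This is a minimal sketch assuming the statsig-js client SDK; the SDK key and the gate, experiment, and parameter names are placeholders for illustration, not real config from this thread:

```typescript
import statsig from "statsig-js";

// Placeholder names: "new_checkout_gate", "checkout_button_test", and
// "button_color" are hypothetical, as is the SDK key and user.
async function main(): Promise<void> {
  await statsig.initialize("client-xxxx", { userID: "user-123" });

  // A Feature Gate (or Target Gate) evaluates to a binary Pass/Fail.
  const passesGate: boolean = statsig.checkGate("new_checkout_gate");
  console.log("passes gate:", passesGate);

  // An experiment parameter is not limited to a boolean; here it is a string.
  // A user who fails the experiment's Target Gate simply gets the default value.
  const experiment = statsig.getExperiment("checkout_button_test");
  const variant: string = experiment.get("button_color", "control");
  console.log("serving variant:", variant);
}

main();
```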
Anonymous: Ah ok got it. This clears up my confusion. Thanks!