These SDKs reduce package sizes by up to 60%, support new features for web analytics and session replay, simplify initialization, and unify core functionality into a single updatable source.
When we first started writing SDKs, we were learning how to make Statsig work in every language and framework.
Over time, we learned from our customers about the variety of ways they needed to customize the SDK behavior to meet their use cases—from relying on cached values only, to evaluation callbacks, to evaluating gates and experiments client side, and much more.
We also created separate packages when we needed a larger change, rather than refactoring and potentially introducing a breaking change to existing customers.
This ultimately bloated our SDKs and made it more difficult to add features and maintain updates across the different languages and frameworks we supported. We offered packages for JavaScript, React, React Native, Expo, JavaScript “lite”, JavaScript on-device evaluation…and we even explored a js-superlite.
Our latest attempt rethinks how we should deliver feature gating, experimentation, analytics, and future requests while maintaining performant, modular packages.
Take, for example, the old js-lite SDK we released.
Our JavaScript SDK accumulated so many features that its size ballooned, but some customers needed a more lightweight version. We didn’t have a great way to share our JavaScript code, so we forked the repository and trimmed back as much as we could.
We got it down to 12kB, but we had to cut a lot of features.
Today, I’m proud to say that the new @statsig/js-client SDK beats the outgoing statsig-js-lite package size, while maintaining support for our session replay and web analytics library extensions.
For comparison, here are the outgoing packages:
| Library | Compressed Size (kB) | Download Time (Slow 3G, ms) |
|---|---|---|
| statsig-js | 30.8 | 620 |
| statsig-js-local-eval | 17.0 | 340 |
| statsig-js-lite | 12.4 | 247 |
And the new ones:
| Library | Compressed Size (kB) | Download Time (Slow 3G, ms) |
|---|---|---|
| @statsig/js-client | 11.9 | 238 |
| @statsig/js-on-device-eval-client | 15.3 | 305 |
| (no need for statsig-js-lite) | | |
Comparisons via Bundlephobia
Now, each of these packages shares a common interface and swaps out only the in-memory store and evaluation logic. They expose a similar API signature, share a batched logger for event and exposure logging, share a network class for initialization and event logging, and even share a common interface so that either “remote evaluation” or “local evaluation” works with the new @statsig/react-bindings.
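The shared-core idea can be sketched in plain TypeScript. This is an illustrative sketch, not the actual Statsig source: the names `EvaluationStore`, `BatchedLogger`, and `CoreClient` are hypothetical, and the point is only to show how one client shell can swap its evaluation layer while reusing a shared batched logger.

```typescript
// Hypothetical sketch: one client shell, swappable evaluation stores,
// and a shared batched logger (names are illustrative, not Statsig's).
interface EvaluationStore {
  checkGate(user: { userID: string }, gateName: string): boolean;
}

class BatchedLogger {
  private queue: string[] = [];
  log(event: string): void {
    this.queue.push(event);
    if (this.queue.length >= 10) this.flush(); // batch before sending
  }
  flush(): string[] {
    const batch = this.queue;
    this.queue = [];
    return batch; // a real SDK would POST this batch to an events endpoint
  }
}

class CoreClient {
  constructor(
    private store: EvaluationStore,      // remote- or local-eval store swapped in
    private logger = new BatchedLogger() // shared across all package variants
  ) {}
  checkGate(user: { userID: string }, gateName: string): boolean {
    const result = this.store.checkGate(user, gateName);
    this.logger.log(`gate_exposure:${gateName}:${result}`);
    return result;
  }
}

// "Remote evaluation" stand-in: gate results precomputed per user.
const remoteStore: EvaluationStore = {
  checkGate: (_user, gateName) => gateName === "new_checkout",
};

const client = new CoreClient(remoteStore);
console.log(client.checkGate({ userID: "u1" }, "new_checkout")); // true
```

A local-evaluation variant would implement the same `EvaluationStore` interface with on-device rule evaluation, which is why the two clients can stay small individually while sharing everything else.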
This also makes it easier for us to add first-party bindings for other frameworks; we’re working on Angular now, with Vue after that.
We intend to hold the line on package size for the core @statsig/js-client SDK; we’re still tweaking to get it below 10 kB (compressed), and you will always be able to get feature gating, experimentation, and event logging from it.
One of the major lessons from our client SDKs was that customers didn’t want to take a dependency on a network request to Statsig before rendering their app.
I don’t think I would either, if I were using Statsig on my website or app.
Over time, we added ways to bootstrap the client SDK from a server SDK, or deterministically hit cache only without asynchronous initialization. Now, we’ve taken that pattern further, inspired by our server data adapters.
The SDK can get its values from many places: a local client cache, a network request, or a set of generated values from a Statsig server SDK. These are all encapsulated in the dataAdapter you can set on the StatsigClient, or just use the one that comes packaged with the SDK.
If you want the most updated values without an additional network request to Statsig, we continue to recommend bootstrapping via a server SDK.
And as a last resort, you can of course initialize asynchronously via a network request.
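The priority order described above (cached or bootstrapped values first, network as a last resort) can be sketched as a small data-adapter pattern. This is a hedged sketch, not the real @statsig/js-client API: the `DataAdapter` interface, `CachedAdapter` class, and `initialize` helper here are hypothetical.

```typescript
// Hypothetical sketch of the data-adapter pattern: the client asks one
// adapter for values, and the adapter decides where they come from.
type StatsigValues = Record<string, boolean>;

interface DataAdapter {
  // Synchronous sources (cache, server bootstrap) let the app render immediately.
  getDataSync(): StatsigValues | null;
  // A network fetch is the asynchronous last resort.
  getDataAsync(): Promise<StatsigValues>;
}

class CachedAdapter implements DataAdapter {
  constructor(private cache: StatsigValues | null = null) {}
  // Bootstrapping: seed the adapter with values generated by a server SDK.
  bootstrap(values: StatsigValues): void {
    this.cache = values;
  }
  getDataSync(): StatsigValues | null {
    return this.cache;
  }
  async getDataAsync(): Promise<StatsigValues> {
    // Stand-in for a network request to Statsig.
    return { new_checkout: true };
  }
}

async function initialize(adapter: DataAdapter): Promise<StatsigValues> {
  // Prefer cached/bootstrapped values; fall back to the network.
  return adapter.getDataSync() ?? (await adapter.getDataAsync());
}

const adapter = new CachedAdapter();
adapter.bootstrap({ new_checkout: false }); // values from a server SDK
initialize(adapter).then((values) => console.log(values.new_checkout)); // false
```

With this shape, "cache only", "bootstrap from a server SDK", and "fetch over the network" are just different adapter configurations rather than separate initialization code paths.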
A common category of feature requests was the ability to hook into SDK lifecycle events, like when the internal values are updated, or callbacks for each evaluation. Each time we got a request like this, we used to add a parameter somewhere to handle it. Now, you can use the ClientEventEmitter to subscribe to the events you want callbacks for, and we have a scalable way to add more as we go.
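The subscription pattern works like a classic event emitter. Below is a minimal sketch of the idea; the class and event names (`values_updated`, `gate_evaluation`) are illustrative, not the actual ClientEventEmitter API.

```typescript
// Hypothetical sketch of the event-subscription pattern described above.
type Listener = (payload: unknown) => void;

class ClientEventEmitterSketch {
  private listeners = new Map<string, Listener[]>();
  on(event: string, listener: Listener): void {
    const existing = this.listeners.get(event) ?? [];
    this.listeners.set(event, [...existing, listener]);
  }
  emit(event: string, payload: unknown): void {
    for (const listener of this.listeners.get(event) ?? []) listener(payload);
  }
}

const emitter = new ClientEventEmitterSketch();
emitter.on("values_updated", () => console.log("re-render with fresh values"));
emitter.on("gate_evaluation", (p) => console.log("evaluated:", p));

// The SDK would emit these internally after a fetch or a gate check.
emitter.emit("values_updated", null);
emitter.emit("gate_evaluation", { gate: "new_checkout", value: true });
```

Adding a new lifecycle hook then means emitting one more event name, rather than threading another callback parameter through the public API.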
Along with this rewrite, we’ve colocated a set of sample apps and snippets that are shared across our console and docs pages. Previously, we’d spin up sample apps in their own repositories, if at all. One of the awesome side effects of doing it this way is that our samples and code are always in sync: the build steps will fail if a sample calls the SDK the wrong way.
We’ve also added better guides for popular frameworks like Next.js, with more to come for Angular and Vue. If there’s a framework you’d like to see an example in, let us know!
We try our hardest to avoid breaking changes whenever possible, but we hit a point where it was increasingly difficult to simultaneously meet the performance benchmarks we set, deliver high-quality support, and continue ongoing feature development.
For those of you who are already using one of our JavaScript SDKs, we know it can be a pain to adopt new packages into your products, so we have written migration guides to make the switch as seamless as possible (for JavaScript and React/React Native).
And we think having this more flexible and performant SDK in your hands will be a significant improvement over the SDKs it replaces.