Customer lifecycle and marketing automation platforms like Braze, Marketo, Salesforce Marketing Cloud, and HubSpot offer native A/B testing capabilities that empower marketers to design and run experiments on their customer communications.
Here are links to the relevant A/B test documentation for these providers: (Braze, SFMC, Marketo, HubSpot).
While these platforms provide the essential tools for configuring, designing, and launching email and push notification experiments, they offer only barebones tools for measuring and analyzing them.
The A/B test measurement capabilities these platforms offer lack the sophistication businesses need to confidently understand the impact of their tests and make data-driven decisions.
This is where Statsig comes in. It allows customers to apply the rigor of experimentation analysis to both the simple engagement metrics associated with these campaigns and downstream business metrics that take place later in the customer journey.
Most marketing platforms provide simple analytics that focus on engagement metrics, such as email opens and click-through rates.
However, these tools don’t incorporate metrics from subsequent phases in the journey, including web and mobile app interactions, purchase behavior, and other business outcomes. This gap can lead to a fragmented view of campaign success and make it difficult for marketers to understand the true impact of their experiments below the surface.
You can do better than this! 👇🏼
Statsig’s Warehouse Native Platform is uniquely positioned to sit on top of the data associated with your marketing campaigns and provide deep analysis of user metrics. These metrics can be derived in any application; Statsig is entirely agnostic to how the data was produced. As long as it lives in your data warehouse, it can be used for test analysis.
Businesses have rich datasets about their customers in their warehouses, going well beyond basic clickstream metrics. Leveraging your data warehouse for analysis allows you, for example, to understand how an email campaign impacts customer revenue and to segment results during analysis.
A very common use case with Warehouse Native is incorporating customer cohorts into the analysis, such as spend segments (high, medium, low). So now, instead of just a topline click-through metric per test group (as you’re limited to in marketing tools), you can also understand how the campaign impacted revenue and how each customer spend segment behaved as a result of the campaign.
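To make the idea concrete, here is a minimal sketch of that kind of segmented analysis in pandas. All table names, column names, and values are hypothetical stand-ins for data in your warehouse; in practice, Statsig Warehouse Native performs the equivalent joins and aggregations for you in SQL.

```python
import pandas as pd

# Hypothetical campaign exposure log: which test group each user was assigned to.
exposures = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "group":   ["control", "test", "control", "test", "control", "test"],
})

# Hypothetical downstream metrics joined from the warehouse: clicks from the
# email platform, revenue from an orders table, and a precomputed spend-segment
# cohort per user.
metrics = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "clicked": [0, 1, 0, 1, 1, 1],
    "revenue": [0.0, 25.0, 10.0, 40.0, 5.0, 30.0],
    "spend_segment": ["low", "high", "medium", "high", "low", "medium"],
})

df = exposures.merge(metrics, on="user_id")

# Topline per-group results: click-through rate AND mean revenue,
# rather than click-through alone.
topline = df.groupby("group")[["clicked", "revenue"]].mean()

# Segmented results: how each spend cohort responded to the campaign.
by_segment = df.groupby(["group", "spend_segment"])[["clicked", "revenue"]].mean()

print(topline)
print(by_segment)
```

The key point is the join: once exposure data and business metrics live in the same warehouse, any user-level dimension can become a segmentation axis for the same experiment.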