In a previous blog post, I touched on how major e-commerce companies employ different approaches to A/B testing. Today, I want to share an inside look into experimentation at a popular financial services company that offers payment processing services and APIs for e-commerce applications¹.
Conversion is a massive driver for the e-commerce businesses that this company supports. This naturally focuses a lot of the company’s efforts on improving conversion. Their core product, for example, optimizes for conversion in two ways:
Checkout: The buyer-facing widget optimizes the order of fields on the checkout page to drive the highest-converting experience. For example, making email optional or required, placing the email field at the top or bottom of the widget, and adding a “Remember me” option all contribute to conversion and are heavily A/B tested (a sketch of how variant assignment might work follows this list).
Adaptive Acceptance: The backend systems that submit payment information to card networks optimize for payment acceptance rates by identifying the best messaging and routing combinations. In addition to running internal A/B tests to develop smart logic that prevents card declines before they happen, they also show customers how much revenue is saved by turning on this feature for a percentage of their transactions.
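To make the checkout example concrete, here is a minimal sketch of how a user might be deterministically assigned to one of those layout variants. The variant names, experiment name, and split are illustrative assumptions, not the company’s actual configuration.

```python
import hashlib

# Hypothetical checkout layout variants under test (names are assumptions).
VARIANTS = ["email_optional_top", "email_required_top", "email_optional_bottom"]

def assign_variant(user_id: str, experiment: str = "checkout_layout") -> str:
    """Deterministically bucket a user so they see the same layout on every visit."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user_42"))  # the same user always gets the same variant
```

Hashing on the experiment name plus the user ID keeps assignments stable for each user while staying independent across experiments.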
While their core product processes billions of dollars in payments for millions of users, teams building new products for a much smaller number of users use experiments for essential product validation.
For example, the team working on the flow to add bank accounts sees a few thousand flows per day. They use feature gates to control new feature releases, to validate new functionality, and to measure conversion with the new product. They ask questions like: Are people adopting this new flow? What percentage of users engage with the flow with high intent? What parts of the product are they engaging with?
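A feature gate for the new bank-account flow might look roughly like the sketch below. The gate name, rollout fraction, and logging helper are hypothetical stand-ins, not any specific SDK’s API.

```python
import hashlib

# Hypothetical gate config: gate name -> rollout fraction (an assumption, not real config).
GATES = {"new_bank_account_flow": 0.10}

def check_gate(user_id: str, gate: str) -> bool:
    """Stable per-user pass/fail: hash the user into [0, 1) and compare to the rollout."""
    digest = hashlib.sha256(f"{gate}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF < GATES.get(gate, 0.0)

def log_event(user_id: str, event: str) -> None:
    """Stand-in for event logging; a real pipeline would ship this to analytics."""
    print({"user": user_id, "event": event})

user_id = "user_42"
if check_gate(user_id, "new_bank_account_flow"):
    log_event(user_id, "bank_flow_entered")  # downstream steps logged as the user progresses
else:
    log_event(user_id, "bank_flow_control")
```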
Each team chooses their target metrics based on the user experience that they directly drive and optimize. The checkout teams track completed payments whereas the teams working on the flow to add bank accounts track when the user receives the first payment.
Yet moving top-line business metrics, even those set at the team level, may not be easy. This is especially true when teams are making improvements upstream in the flow. For this reason, each team tracks metrics at each step in the flow. The team working on the bank account flow tracks how many users enter the flow, how many input their credentials, how many find their bank account, and so on. Their goal is to understand how users engage with the product, whether these are high-intent users, and whether they can complete the happy path.
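To show how per-step funnel metrics like these can be computed from raw events, here is a small sketch; the step names and sample events are illustrative, not the team’s actual schema.

```python
from collections import defaultdict

# Ordered funnel steps for the add-bank-account flow (illustrative names).
FUNNEL = ["entered_flow", "entered_credentials", "found_account", "linked_account"]

# Example raw events: (user_id, step).
events = [
    ("u1", "entered_flow"), ("u1", "entered_credentials"), ("u1", "found_account"),
    ("u2", "entered_flow"), ("u2", "entered_credentials"),
    ("u3", "entered_flow"),
]

# Collect the set of steps each user reached.
steps_by_user = defaultdict(set)
for user_id, step in events:
    steps_by_user[user_id].add(step)

# Count users reaching each step, then report step-over-step conversion.
reached = {step: sum(step in s for s in steps_by_user.values()) for step in FUNNEL}
for prev, curr in zip(FUNNEL, FUNNEL[1:]):
    rate = reached[curr] / reached[prev] if reached[prev] else 0.0
    print(f"{prev} -> {curr}: {rate:.0%} ({reached[curr]}/{reached[prev]})")
```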
In spite of wide usage at the company, experimentation isn’t free of challenges. Two of the most common challenges are (a) instrumenting the product to capture the right events, and (b) software engineers being wary of handling data.
For example, software engineers may take an initial pass at instrumenting their application. But when the data scientist sees the data and finds gaps in stringing together the signals, the engineers must often redo a lot of the instrumentation.
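One common source of such gaps is events that cannot be joined back together because they lack a shared identifier. A tiny sanity check like the sketch below (field names are assumptions) can surface the problem before a data scientist hits it in analysis.

```python
# Minimal check that every event carries the keys needed to join signals
# across the funnel; the field names here are illustrative assumptions.
REQUIRED_FIELDS = {"user_id", "session_id", "flow_step", "timestamp"}

def validate_event(event: dict) -> list[str]:
    """Return the missing fields so gaps surface at logging time, not during analysis."""
    return sorted(REQUIRED_FIELDS - event.keys())

event = {"user_id": "u1", "flow_step": "entered_credentials", "timestamp": 1700000000}
missing = validate_event(event)
if missing:
    print(f"event cannot be joined downstream, missing: {missing}")  # ['session_id']
```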
Software engineers are also hesitant about wading into the data to analyze the results. One leader at the company said, “The space is painful to play with. Seeing data coming in live, ensuring it is loaded reliably, and ensuring all data fields are treated correctly requires handling 50 different exceptions. We need data scientists to focus on these problems because engineers don’t.”
In the absence of data scientists, experiment analysis may fall apart upon deeper inspection because of noisy data. On one occasion, when the team was rolling out a product to new geographies, the conversion rate plummeted from the high 90s to less than 10%. The team initially rationalized that this wasn’t impossible; plenty of new factors could be driving conversion down. On closer inspection, however, they found that the metrics included a large volume of ineligible transactions in the denominator, particularly from countries where the product wasn’t yet available.
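A toy example of that failure mode, using made-up numbers: when ineligible traffic lands in the denominator, the measured rate collapses even though eligible users convert just as well as before.

```python
# Toy transactions: (country, converted). Counts and countries are made up for illustration.
transactions = [("US", True)] * 95 + [("US", False)] * 5 + [("BR", False)] * 900

ELIGIBLE_COUNTRIES = {"US"}  # product not yet launched elsewhere (assumption)

naive = sum(converted for _, converted in transactions) / len(transactions)
eligible = [t for t in transactions if t[0] in ELIGIBLE_COUNTRIES]
corrected = sum(converted for _, converted in eligible) / len(eligible)

print(f"naive conversion: {naive:.0%}")           # ~10%, dominated by ineligible traffic
print(f"eligible-only conversion: {corrected:.0%}")  # 95%, the real picture
```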
Regardless of the challenges, the company believes that experimental data is key to meaningfully moving their numbers and uncovering insights about user behavior. In a recent instance, the team found that a new, frictionless way to add bank accounts using OAuth (vs. manual entry) resulted in a 60% uptick in adoption even when users were offered no incentive. This cut short long discussions and reviews about which incentives to offer to change user behavior. The data naturally focused the team on what’s already proven to work!
As the company embarks on the next phase of growth, they’re all in on experimentation. For them, data trumps intuition every day.
Join the Statsig Slack channel to learn about how the most innovative companies use experiments to accelerate their growth.
[1] The company remains unnamed here at their request.