Is your organisation A/B testing its website?

Does it ever feel like you’re testing for testing’s sake?

In this article, I’ll argue that the over-emphasis on quantity (or ‘frequency’) of experimentation is misguided and dangerous, and offer a better way of running profitable optimisation programmes for your ecommerce website.

Obsession with Quantity

A/B testing is cheap. It’s easy to run lots of experiments, and the relative (technical) ease of A/B testing has contributed to a mentality of abundance: “keep churning out those A/B tests, something will work!” 

Whilst experimentation is easy, it’s a lot harder to run data-driven, PROFITABLE experiments:

65% of A/B tests fail (no statistically valid winner)*

*Econsultancy’s 2018 Optimisation Report
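To make "no statistically valid winner" concrete, here's a minimal sketch of the two-proportion z-test that's commonly used to call an A/B test. The function and the visitor/conversion numbers below are purely illustrative, not from the Econsultancy report:

```python
# Two-proportion z-test: a standard check for whether an A/B test
# has a statistically valid winner. All numbers are illustrative.
from math import sqrt, erfc

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return erfc(abs(z) / sqrt(2))                       # 2 * (1 - Phi(|z|))

# Hypothetical experiment: 10,000 visitors per variant,
# 5.0% vs 5.4% conversion - a difference that looks promising
p = ab_test_p_value(conv_a=500, n_a=10_000, conv_b=540, n_b=10_000)
print(f"p-value: {p:.3f}")  # well above 0.05: no statistically valid winner
```

An 8% relative lift on 20,000 visitors still fails the significance test here, which is exactly how tests end up in the "65% fail" bucket.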

Unfortunately, this focus on quantity is why so many businesses are stuck in A/B testing purgatory. They’re spinning their wheels but not going anywhere, and the bottom line doesn’t improve.

This ‘abundance’ mentality is propagated by agencies and tools that want businesses to run as many A/B tests as possible. But when you approach website optimisation in this way, you sacrifice customer understanding for speed. Website optimisation should be rooted in user psychology; it’s not a bag of UX tricks.

To improve a website, you DON’T need a high volume of changes or A/B tests. In fact, chasing volume can be detrimental: the quality of your website changes matters far more than their quantity.

It’s the hypothesis, stupid

If more user insight leads to more profits, how can you ensure that EVERY experiment that you run is data-driven and insight-led?

Firstly, ensure that each experiment is based on a hypothesis. Secondly, ensure that your hypothesis is evidence-based. The most reliable way to ensure that a hypothesis is truly data-driven is to use a framework. With a nod to the excellent Craig Sullivan:

Because we saw [data/feedback]

We expect that [change] will cause [impact]

We’ll measure this using [data metric]
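One way to make the framework operational is to encode it as a small record that refuses to exist without evidence, a change, an expected impact, and a success metric. This is my own sketch; the field names are illustrative, but the three slots mirror the template above:

```python
# A hypothesis record that enforces the evidence-first framework:
# no evidence, no hypothesis. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    evidence: list[str]   # "Because we saw [data/feedback]"
    change: str           # "We expect that [change] ..."
    impact: str           # "... will cause [impact]"
    metric: str           # "We'll measure this using [data metric]"

    def __post_init__(self):
        if not self.evidence:
            raise ValueError("A hypothesis needs at least one piece of evidence")

# Hypothetical example drawing on two overlapping insight sources
h = Hypothesis(
    evidence=["User tests: 6/8 users missed delivery costs",
              "Survey: 'unexpected costs' named as top abandonment reason"],
    change="show delivery cost on the product page",
    impact="fewer checkout abandonments",
    metric="checkout completion rate",
)
```

The point isn't the code itself, it's the discipline: an experiment idea that can't fill every slot isn't ready to test.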

Data insight about your users often comes from multiple inputs, such as Google Analytics, user testing, surveys, customer interviews, video recordings, etc. The more insight, the better. Overlapping insights are not as rare as you might think. For example, for my ecommerce clients I often see the same insights emerging from user testing AND surveys. That’s usually a sign that the insight is solid!

NOTE: some insights are more powerful than others. Since we are largely talking about qualitative, intangible, ‘voice of customer’ data, it’s not really about the volume of the evidence, but more about the strength of that evidence, and whether it comes from multiple sources. 

The takeaway

Data-driven conversion optimisation – it’s what many organisations aspire to. Leveraging a hypothesis framework that forces you to ‘bake’ insights into your A/B testing routine is a sure way to get there.

Want to increase return on investment from your A/B tests? Connect with me on LinkedIn or contact me here and we’ll discuss how to apply the ‘Conversion Velocity’ Formula to your website optimisation programme.