Stephen Pavlovich, CEO – Experimentation at GAIN, explores why A/B testing doesn’t just help brands create better products; it also allows them – and their agency partners – to try out their boldest ideas.

Despite the increasing amounts of time and money spent on research and data, the truth is that most people still make decisions based on gut instinct rather than evidence.

But making choices driven by personal motivation is far from the best option for brands looking to grow. In reality, it risks making their proposition worse.

The power of A/B testing 

A/B testing is often used as a way to validate changes a brand was going to make anyway – changing design elements, testing headlines, or even adding new functionality.

But its potential is far greater. Experimentation lets brands test out their boldest ideas – the ones they might otherwise be too nervous to roll out.

That’s why A/B testing has become a way to inform product strategy, and it’s relatively quick and easy to do.

In short, it involves creating different versions of a brand’s website to see what customers respond best to, and takes away one of the riskiest elements of launching a new product.

Many brands spend months or even years developing a product and bringing it to market, only to see it fail miserably because it wasn’t what people wanted. Even with research to back a launch up, brands often discover that the insight was out of date or misleading.

By testing it out on audiences first, the customer unknowingly plays an active role in the decision-making process, helping brands determine what works and what doesn’t.

Testing multiple ideas at the same time allows us to pick the highest performer, and if something doesn’t work, we can turn the test off without really having lost anything.

Case study: T.M. Lewin 

GAIN recently conducted research into buying behaviour for British shirtmaker T.M. Lewin. The brand has always offered a number of ways for people to customise their shirts, including the somewhat baffling ‘sleeve length’ choice.

I say baffling, because in all honesty, how many men in the UK actually know what their sleeve length is? According to our YouGov survey, 92% do not, and it’s fair to assume that the remaining 8% are lying.

With multiple choices for sleeve length, we decided to see what would happen if we marked one of the options as “regular” and another as “long”. Half of the people visiting the website would see this version, and the other half would see the usual version without any explainers.

The result was a 7% uptick in sales, without the brand having to make any other changes to its website or its marketing strategy. All it had to do was add a couple of words to the sizing options to offer clarity for customers.
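For the technically curious, the 50/50 split behind a test like this is typically implemented by deterministically bucketing each visitor, so the same person always sees the same version of the page. Here is a minimal Python sketch; the function name and experiment key are illustrative, not GAIN’s actual tooling:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "sleeve-length-labels") -> str:
    """Deterministically bucket a visitor into 'control' or 'variant'.

    Hashing the visitor id together with the experiment name gives a
    stable 50/50 split: the same visitor always lands in the same bucket,
    and different experiments split traffic independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 == 0 else "control"
```

In practice an experimentation platform handles this for you, but the principle is the same: assignment is random across visitors yet stable for each individual, which is what makes the comparison between the two groups fair.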

Case study: Testing the market, one slice at a time

Another fun example is when we tested out new pizza flavours for a leading pizza restaurant. Launching a new product is a huge undertaking for a brand, typically taking 12 months and involving market research, focus groups, taste tests, sourcing ingredients and working with the supply chain.

To speed things up, we tested consumer demand for new flavours by adding one new pizza to the online menu, chosen at random from five different options. The catch: none of these products existed. We just wanted to see if customers would show interest.

Half of the people browsing the website saw the new pizza and half didn’t, and we then analysed how many tried to buy it. Those who did were told that the product in question wasn’t available yet.

This is a world apart from traditional focus groups. We didn’t bring people into an artificial environment, feed them pizza and ask for feedback. Instead, we tested on real customers who were “in the moment” – hungry, on their sofa, on a Friday night. There’s no better form of evidence – and it’s immediate, too.

Smaller budgets, bigger impact 

There are multiple benefits to A/B testing, and the fastest-growing companies out there – the likes of Amazon, Meta, and eBay – do a huge amount of experimentation.

The good news for smaller businesses is that they don’t need a big budget to get started, as long as they have enough website traffic to get statistically significant results. Any brand spending a decent amount on paid search and paid social can and should be experimenting.

What we’ve found is that only around 20-30% of tests are successful – that is, they generate a statistically significant improvement in performance. This means that brands that run tests constantly are gathering user data and making safer choices about how they improve their website and their products.
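“Statistically significant” here has a precise meaning: the difference in conversion rate between the two groups is large enough that it is unlikely to be random noise. A common way to check this is a two-proportion z-test. The sketch below is illustrative – a dedicated testing platform would run this calculation (and more) for you:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test.

    conv_a / n_a: conversions and visitors in the control group.
    conv_b / n_b: conversions and visitors in the variant group.
    Returns (z, p_value); a small p-value (e.g. < 0.05) suggests the
    difference between the two conversion rates is not just noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-distribution p-value via the error function (stdlib only).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, 500 sales from 10,000 control visitors against 600 sales from 10,000 variant visitors gives a p-value well under 0.05 – a genuine winner rather than a lucky streak.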

It also means that brands that don’t experiment using A/B testing will fall behind. Considering that only around 30% of changes have a positive impact, the brands that aren’t experimenting will still be making these changes – they just don’t know what’s helping and what’s harming. And a lot of their efforts will have no impact at all.

Most importantly, once people realise that the test can be turned off if it doesn’t work, it helps them think about the big, bold changes that they’ve been scared to roll out.

Instead of settling for the safe option, they feel empowered to test something much more aggressive, something their rivals wouldn’t do. And that’s how they get a competitive advantage.
