Testing can seem daunting at first to many retailers. There can be a tendency to adopt an "if it's not broke, don't fix it" attitude. But if you don't continually test to make sure you're in tune with your ever-changing audience, you'll find that you stop evolving and fall behind your competitors. Also, it's fun! Testing should be seen as a positive way to continually grow and learn.
Or perhaps you have done a bit of testing but it hasn't really gone as planned? So obviously the most rational thing to do is to pack it in, right? Wrong. Very often I hear, "Oh, we tried doing a test but it didn't have enough uplift, so we've given up." Just because one test has failed, it doesn't mean they all will. If your car got a flat tyre, you wouldn't jump out and slash the other three, would you? Exactly.
The most frequent question I am asked by a client about to embark on an A/B test is, "How do we increase our conversion/revenue?" If I could give you a 100% guaranteed-to-work answer to that question, I would be writing this from the rooftop pool of my Manhattan penthouse, but I'm not! While an increase in conversion and revenue is always everyone's end goal, you need to look at it from the right perspective to hit your target. In the same way that your annoying high school maths teacher would insist that you couldn't just give the answer to the multiplication question and had to show your workings, the same is true when developing the logic for an A/B test. How are you going to get an increase in conversion and/or revenue? The answer will be different for each retailer, but it always comes down to giving your customers what they want.
So let's start with that and show the workings: what do your customers actually want?
1. Collect Insight
Don't tell yourself what you THINK your customer wants; find out what they REALLY want. The best way to do that is by asking and/or observing them. Ask them to fill out online surveys, give feedback on their shopping experience, take part in card sorting exercises, and so on. Or better yet, run a usability testing study with a group of real users. The feedback you gather from watching your customers interact with your site is priceless. You can then use these facts to fuel your decision-making.
2. Develop a Hypothesis
It's essential to develop a sound hypothesis as the basis for your test. What's a hypothesis? A hypothesis is a proposed explanation made on the basis of limited evidence, used as a starting point for further investigation. Or a theory, if you don't like big words. Without one you're just throwing random stuff out to test and seeing what sticks. This is never a good idea, as you'll most likely waste your time. How do you analyse why the test succeeded or failed if you don't fully understand why you chose to run it in the first place, never mind take the findings forward to develop further tests? And don't think you can cheat by inventing a random hypothesis! Make sure it is fully developed based on your user insight. If you put pointless information in, you'll get pointless information out. Make it mean something.
3. Create the Test
When it comes to setting up your test, bear in mind that changing a large number of variables at once means it will take a long time to reach a conclusive result. Also, if you're testing one set of sweeping changes against another, how definitive will the insight you gain be? How will you pinpoint exactly which variable caused the test to have the impact that it did? Make sure that you track the key interactions so that you can analyse them at the end of the test; a rough sketch of what that set-up might look like follows below.
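As a minimal sketch only (not any particular testing tool's API; the visitor ID, variant names and the track() stub are illustrative assumptions), bucketing visitors deterministically and logging the interactions you care about might look something like this:

```python
import hashlib

# Minimal sketch of a 50/50 split between a control and a single variation.
# The visitor_id, variant names and track() stub are illustrative assumptions,
# not a real testing platform's API.

VARIANTS = ["control", "variation"]

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor so they always see the same version."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]

def track(visitor_id: str, variant: str, event: str) -> None:
    """Stand-in for your analytics call: record which variant saw which event."""
    print(f"{visitor_id},{variant},{event}")

# Assign on first visit, then log the key interactions you care about.
visitor = "visitor-123"
variant = assign_variant(visitor)
track(visitor, variant, "viewed_product_page")
track(visitor, variant, "added_to_basket")
```

The important point is that assignment is consistent per visitor and that every key interaction is recorded against the variant that visitor saw. Which brings us to point 4…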
4. Analyse Results
Don't be tempted to call your test early: the results will often fluctuate, and it's important not to bottle out before the test has had a fair chance of reaching statistical significance. There are a number of A/B testing calculators available online that will help you establish whether your test has reached significance, based on your sample size and the number of conversions recorded for each variation. You should also make sure you are comparing the control and variation fairly (same traffic split, same time period) so that the results are not skewed.
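For illustration only, here is a back-of-the-envelope version of the kind of check those online calculators run (a two-proportion z-test); the visitor and conversion numbers in the example are made up.

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Example (made-up numbers): control converted 200 of 10,000 visitors,
# the variation 250 of 10,000.
z, p = ab_significance(200, 10_000, 250, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests the uplift is unlikely to be chance
```

A low p-value on its own still isn't a licence to stop early; decide your sample size up front and let the test run its course.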
5. Identify Learnings
So your test has reached statistical significance, great! B outperformed A? Amazing! Why was that? "Erm…"
It's important that you can pinpoint exactly which change caused the test to succeed or fail (remember back at point 3, where we covered using a large number of variables). Ensure that you fully document the insight gained from your test so that you can refer back to it in future, and look to build a hypothesis for a follow-up test based on what you have learned from these results.
Good luck and happy testing!