
If you’re not sure whether you’re carrying out A/B tests properly and effectively, this guide will help you get on the right track.
It’s a known fact that most marketers, particularly “newbie” marketers with little or no experience, don’t know how to perform A/B tests correctly, which not only leads to unnecessary changes to websites but also hurts their clients’ bottom line.
According to a report released by Qubit, a conversion optimisation company, most impressive A/B test results are actually misleading because the tests are carried out badly.
So why does this happen? And how can you, a “newbie” marketer or website owner, make sure your A/B tests (what many also call “split tests”) are carried out properly and effectively, and that the results are real?
Well, I’ll get to that, but first: what is A/B testing?
A/B testing is the process of comparing two or more versions of a web page (or landing page) to figure out which version performs better. Two groups of people are shown two different versions of the same page, and the results are assessed by how each group interacts with its version.
For instance, you may have a web page with a CTA button in a specific location (the top half of the page), which is compared against a page with a similar design but a CTA in a different location, with a different colour or wording.
(Image source: VWO)
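Under the hood, “performs better” comes down to comparing the conversion rates of the two groups and checking whether the difference is statistically meaningful. Here’s a minimal sketch of that comparison using a two-proportion z-test from the statsmodels Python library; the visitor and conversion counts are made-up numbers for illustration.

```python
# Compare conversion rates of two page versions with a two-proportion z-test.
# The counts below are hypothetical example numbers.
from statsmodels.stats.proportion import proportions_ztest

conversions = [320, 368]    # conversions for version A and version B
visitors = [10000, 10000]   # visitors shown version A and version B

z_stat, p_value = proportions_ztest(conversions, visitors)

rate_a = conversions[0] / visitors[0]
rate_b = conversions[1] / visitors[1]
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  p-value: {p_value:.3f}")

# A small p-value (commonly below 0.05) suggests the difference is unlikely
# to be down to chance alone; a large one means you can't tell yet.
```

Every A/B testing tool runs some version of this comparison for you behind the scenes; the point is simply that “better” means statistically better, not just a higher number on a dashboard.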
You can also test other elements of a web page, such as:
- Headline
- Page layouts
- Forms
- Images
- Special offers
- Web copy (long or short form)
- Buttons
And according to the Qubit report mentioned above, A/B tests like these are often carried out badly, so they give false results and the expected “uplift” (or boost in sales) is rarely realised.
But A/B testing is fun. There are plenty of easy-to-use tools that can help you run A/B tests on your web pages and landing pages. One thing is certain, though: running a proper and effective A/B test that gives “real” results goes beyond just setting up a test and running it. In fact, over 90% of companies make these simple yet deadly mistakes and flush millions in profits down the toilet.
(Image source: Mightycall)
Mistake #1: Most A/B tests are not run long enough
Let’s say you have a website that gets high traffic. You achieve a whopping 200 conversions for each variation in 2 days. Is your A/B test done? Not yet.
To conduct an A/B test properly, you need to let it run for a full week. That means if you started your test on Monday, you should end it after 7 days, on the following Monday. Why? Because conversion rates vary greatly depending on the day of the week.
So if you don’t give your test at least a week to run, you’re not giving it enough time, and you’re misrepresenting your results.
Here’s what to do instead: run a “conversions per day of the week” report in Google Analytics to see how much variation there is in your conversions. Look at the screenshot below, for example:
Notice anything? Conversions on Thursdays are almost twice as high as on Saturdays and Sundays.
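If you’d rather check this from a raw export instead of a Google Analytics report, here’s a minimal sketch, assuming you have a CSV of daily conversions with hypothetical “date” and “conversions” columns.

```python
# Average conversions per weekday from a daily export.
# The file name and column names below are hypothetical.
import pandas as pd

df = pd.read_csv("daily_conversions.csv", parse_dates=["date"])
df["weekday"] = df["date"].dt.day_name()

order = ["Monday", "Tuesday", "Wednesday", "Thursday",
         "Friday", "Saturday", "Sunday"]
by_weekday = df.groupby("weekday")["conversions"].mean().reindex(order)

print(by_weekday)
# A big gap between, say, Thursday and Sunday means a test that starts or
# ends mid-week will over- or under-count its strongest days.
```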
If we hadn’t allowed at least a week, the results would have been misleading. So this is what you’ll do from now on: run your A/B test for the first 7 days. If you’re not sure of the results, run it for another 7 days. If you’re still not confident in the results, run it for another 7 days.
There’s only one time you’ll break this rule: when all the data says with 100% confidence that the conversion rate is the same. Even then, it’s better to test one week at a time.
Mistake #2: A/B testing is done even when the website doesn’t get enough traffic (or conversions)
(Image source: Conversion XL)
If your website gets very little traffic and you make only 2-3 sales per month, and you run an A/B test and discover that version A yields 20% better results than version B, how can you be certain of the result?
Don’t get me wrong: A/B tests are incredibly powerful, but you cannot rely on them to figure out your website’s conversion rates if it gets very little traffic.
It may take you 6 months to reach a significant result, and by that time you’d have wasted plenty of money. In such cases, use paid advertising options such as Google Ads or Facebook Ads to drive up your traffic.
You don’t want to draw conclusions from a small sample size. A good ballpark is to aim for at least 350-400 conversions per variation (it can be less in certain circumstances, such as when the discrepancy between control and treatment is very large). BUT: magic numbers don’t exist. Don’t get stuck on a number; this is science, not magic.
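If you want a rough idea of how much traffic that ballpark implies before you start, here’s a minimal sketch of a sample-size estimate using statsmodels’ power analysis; the baseline conversion rate and the uplift you want to detect are hypothetical numbers, so plug in your own.

```python
# Estimate visitors needed per variation to detect a given uplift.
# Baseline rate and target rate are hypothetical; replace with your own.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.03   # current conversion rate (3%)
target_rate = 0.039    # rate you want to be able to detect (30% relative uplift)

effect_size = proportion_effectsize(baseline_rate, target_rate)
visitors_per_variation = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # 5% chance of a false positive
    power=0.8,    # 80% chance of detecting a real uplift
)

print(f"Roughly {visitors_per_variation:,.0f} visitors per variation")
# With only a handful of conversions a month, hitting this number can take
# many months, which is why low-traffic sites struggle with A/B testing.
```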
Mistake #3: Conducting tests based on a fuzzy hypothesis
Before you do anything (like running a “random” A/B test), you need to perform thorough conversion research so you can find where the problems are. Then do further analysis to understand what those problems are, and finally form a hypothesis for solving them.
If you conduct an A/B test without a clear hypothesis and A wins by 10%, that’s awesome, but what will you learn from it? Nothing! A/B testing is also about learning what your audience thinks, which helps you run even better A/B tests down the line.
If you want to learn how to write a killer A/B testing hypothesis, check out this great resource on Optimizely.
(Image source: Optimizely Blog)
Mistake #4: Time and traffic are wasted on “useless” tests
Are you still testing the colour of your CTA button? Don’t do it.
There is no single best colour; it’s all about visual hierarchy. Of course, you’ll find people running A/B tests on colours and reporting better results, but those are all no-brainers. You should not waste your precious time and traffic A/B testing no-brainers. Simply implement them.
You see, nobody has enough traffic to spare, including you. So use your traffic to test high-impact issues. Use it on a data-driven hypothesis.
Mistake #5: Giving up on A/B testing after the first few tests fail
You ran an A/B test and it didn’t give you any significant results? Right, let’s move on and test another landing page, shall we?
Don’t jump to a conclusion so fast, because first tests often fail. Yes, I know you’re tempted to move on, and so am I, but the fact is that iterative testing is where you get accurate results. You set up a test and run it, learn from the feedback, and improve your hypothesis. Then you run a follow-up test, get more feedback, learn from it, and keep improving your hypothesis. Then you run another follow-up A/B test, and so on. Get it?
Did you know that Conversion XL ran not 1 but 6 tests on the same page to get the kind of uplift they were seeking? That’s what “real” testing looks like in practice.
(Image source: Conversion XL)
If everyone expects the first test to send conversions through the roof, two things will happen: more money will be flushed down the toilet and more people will lose their jobs. It doesn’t have to be that way. In fact, there’s plenty of money to be made; it’s just a matter of running iterative tests, because that’s where all the cash is.
So how can you run a proper and effective A/B test that gives you BIG wins?
Here’s what you can do:
#1: Know what you want to test
As you plan to run A/B tests on your landing page or website, it’s crucial to plan all the elements you want to test. To get started, you could test common elements such as your landing pages, CTA buttons, emails, and paid search ads. And if you don’t want to do it manually, you can always use an appropriate A/B testing tool, such as Google Analytics.
(Image source: Smashing Magazine)
#2: Form a hypothesis that actually works for you.
In short, figure out your target metrics (your highly specific end goals). Only a few tests yield amazing results, because the rest are performed without taking specific end goals and consumer needs into account.
#3: Keep on testing your assumptions, and never stop.
(Image source: Safari Books online)
A lot of people think A/B testing is a one-time process. It’s not. A/B testing is an ongoing process, something that should never stop. Most people assume that at the end of their first A/B test they’ll have either a better, optimised campaign or a failed one. Don’t stop testing your hypotheses, even if you get better results. Keep running more A/B tests on the same high-impact elements of your page until you arrive at a nearly optimal version. So open your Google Analytics account, keep your eyes wide open, and keep testing, improving, and optimising.