What is A/B testing, what you can test and how to get started
An A/B test is one of the best ways to optimize the conversion rate of your pages and improve your Digital Marketing results.
One of the things we always say here on the blog and in our educational materials is that, although there are recommendations and good practices, each company has its own audience, with unique characteristics. Therefore, only by testing can you be sure which approach works best.
One of the best known and most efficient tests is the so-called A/B test, which consists of dividing the traffic of a given page into two versions: the current one and a “challenging” one, with modifications. Then, you measure which version has the highest conversion rate.
Let’s go over some introductory A/B testing concepts. If you want to know everything about the method, we have an even more complete, free resource: the A/B Test Guide Kit, which you can receive by filling out the form below.
Why A/B Testing is Effective
A/B testing is an excellent tool because it provides real market feedback, accurately measured and based on data rather than guesswork. It is not a simple survey, in which someone can answer one thing and do another in practice: the results are consolidated facts.
Because the different versions are distributed randomly over the same period of time, there is little risk of external factors (such as a lecture or a link from another website) skewing the comparison between conversion rates.
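In practice, this random split is often implemented by deterministically hashing a visitor identifier, so each visitor always sees the same version for the whole duration of the test. Here is a minimal sketch of the idea; the function name, the experiment label, and the 50/50 split are illustrative assumptions, not the API of any specific tool:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "landing-page-test") -> str:
    """Deterministically assign a visitor to version A or B.

    Hashing the visitor id together with an experiment name gives a
    stable, roughly 50/50 split without storing any state: the same
    visitor always lands in the same version.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor gets the same version on every visit:
print(assign_variant("visitor-42"))
```

Because the assignment depends only on the hash, no database lookup is needed to keep a visitor's experience consistent across sessions.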
Where A/B Tests Can Be Used
One of the advantages of using A/B tests is that they can be used in a variety of communication channels you have with users.
The most common are ads on Google Ads and social networks (such as Facebook and LinkedIn), Email Marketing, Landing Pages, and on the pages of a website, but the possibility of applying A/B tests is practically infinite.
It’s possible to do them even offline, but in this article, we’ll stick to testing in digital media.
When should I run an A/B test?
A/B tests should be applied in situations where there is a need or potential to optimize a metric that matters to the company, whether that is visits, opens, clicks, lead generation, etc.
For example, if you have an email whose click-through rate is very low, or a Landing Page with a low conversion rate, it’s worth creating a second version to run an A/B test and find out if the new version performs better.
Where You Need to Be Careful
In an older post, we pointed out that A/B testing is one piece of expert advice that newcomers to Digital Marketing should ignore. We still stand by that opinion, for two main reasons.
The first is the need for a good volume of access for the test to be statistically valid, something that few new companies in Digital Marketing manage to have. Lack of volume can lead to premature and incorrect decisions.
The second is that, while we are used to seeing lectures and case studies presenting conversion improvements as if they were simple, the truth is that it is hard to find variables that really move the numbers. What happens quite often is almost never reported: experiments fail, and the difference in conversion between one version and another cannot be considered relevant.
For someone who still has more basic, proven things to do in their company’s Digital Marketing, working with optimizations is unlikely to bring as many results as finishing building that structure.
How to know when to stop an A/B Test
It is important to measure the statistical relevance of an A/B test in practice, and to make a decision only when the test offers good reliability.
What is Confidence Interval?
Suppose you want to test whether or not a coin is biased toward heads or tails. Theoretically, if you flip this coin 200 times, half the flips should be heads and half tails, right?
You then decide to run the test and notice that you got heads 116 times, or 58% of the time. With this result, can you confidently say that the coin is biased?
To answer this question, statistics gives us a way to quantify the confidence we can have in this test. The so-called confidence interval indicates the probability that the variation between the control (50% heads) and the experiment (58% heads) actually represents the entire population, rather than a biased (and therefore unreal) sample chosen by mere chance.
In our example, the 58% heads result has a confidence interval of approximately 90%. That means there is a 90% chance that the results represent reality rather than the influence of chance. Although 90% may seem high, it is not considered statistically reliable; that is, it may give a false impression of a difference. The recommendation is to treat an experiment as valid only with a confidence interval of 95% or more, with 99% being an optimal index.
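To build intuition for why chance alone can mislead, you can simulate a perfectly fair coin and see how often it still produces a lopsided result. The sketch below is purely illustrative; the number of trials, the fixed seed, and the 110-heads threshold are arbitrary assumptions chosen for the demonstration:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

TRIALS = 20_000
FLIPS = 200

# Count how many simulated experiments with a perfectly fair coin
# still show 110 or more heads out of 200 flips.
lopsided = 0
for _ in range(TRIALS):
    heads = sum(random.random() < 0.5 for _ in range(FLIPS))
    if heads >= 110:
        lopsided += 1

print(f"Share of fair-coin runs with >= 110 heads: {lopsided / TRIALS:.3f}")
```

Even with no bias at all, a noticeable fraction of runs drift well away from 50%, which is exactly why a single "promising" result needs a high confidence level before you act on it.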
Why You Should Make Decisions Only When You Have Relevant Data
The same scenario applies to A/B testing: there is a page with a known conversion rate (the control), and another page with some element replaced or changed (the experiment), and we want to find out whether its conversion rate is higher or not.
Suppose, for example, you test a Landing Page without the phone field in the form and conclude the experiment was positive even though the confidence interval is only around 90%. As we have already said, that value has no statistical relevance. If you treat the test as complete and publish the new version anyway, the conversion rate may end up close to what was observed before, and your company loses useful information without gaining a greater volume of conversions in return.
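The confidence level of a Landing Page test can be estimated with a standard two-proportion z-test. The sketch below uses made-up visitor and conversion counts purely for illustration, and it is not the exact calculation any particular A/B testing tool performs:

```python
from math import erf, sqrt

def ab_confidence(conv_a: int, total_a: int, conv_b: int, total_b: int) -> float:
    """Approximate confidence level (1 minus the two-sided p-value)
    that the conversion rates of versions A and B genuinely differ,
    using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / total_a, conv_b / total_b
    pooled = (conv_a + conv_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = abs(p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return 1 - p_value

# Hypothetical example: 100 conversions out of 1,000 visits on the
# control page versus 125 out of 1,000 on the variant.
confidence = ab_confidence(100, 1000, 125, 1000)
print(f"Confidence: {confidence:.1%}")  # above 90% but still below 95%
```

In this hypothetical case, the variant looks better, yet the confidence level falls short of the 95% threshold, so the right move is to keep the test running and collect more volume before deciding.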
What to test
The first thing to pay attention to is this: it is not recommended to test more than one element at a time, since otherwise it is impossible to know which change was responsible for the results.
Generally speaking, the items that most often change a page’s conversion results are:
- Headline (highlighted title) of the page;
- Call-to-Action (buttons for conversion);
- Images or videos;
- Offer descriptions;
- Form size and fields;
- Reliability indicators (testimonials, certificates, etc.).
Of course, tests are not limited to these items: it is also possible to change the position of elements, colors, or even aspects of the offer itself (a 15- or 30-day free trial of a software product, for example). Still, the items above are good starting points.
A/B Test Tools
To perform an A/B test efficiently, we recommend using a tool to help you in the process. Some options:
- Google Optimize (which has taken over the A/B testing functions previously found in Google Analytics);
- Optimizely;
- VWO;
- RD Station Marketing: unlike the tools above, RD Station Marketing is a complete Digital Marketing platform with A/B testing functionality for Landing Pages and Email Marketing.