The term A/B testing gets thrown around whenever the topic of website conversions comes up. Many people talk about one form of A/B testing versus another, but few explain the difference, what A/B testing is ultimately for, or how to make the best use of it.
What is A/B Testing? What is it for?
The technical definition is that A/B testing is an experience-based research methodology. Simply put, it is a way of figuring out which version of something a visitor (consumer, buyer, purchaser) prefers. More importantly, when done right, an A/B test will show you which version causes a visitor to take the actions you want them to take.
Different Types of A/B Testing
There are several A/B testing methodologies. Let us start with the multi-armed bandit methodology, the one we use here at Crazy Egg.
A multi-armed bandit is a type of experiment where:
- The goal is to find the best or most profitable action
- The randomization distribution can be updated as the experiment progresses
The name "multi-armed bandit" describes a hypothetical experiment where you face several slot machines ("one-armed bandits") with potentially different expected payouts. You want to find the slot machine with the best payout rate, but you also want to maximize your winnings. The fundamental tension is between "exploiting" arms that have performed well in the past and "exploring" new or seemingly inferior arms in case they might perform even better. There are highly developed mathematical models for managing the bandit problem, which we use in Crazy Egg content experiments.
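The explore/exploit tension can be sketched with an epsilon-greedy strategy, one of the simplest bandit algorithms (Crazy Egg's actual model is more sophisticated and not spelled out here; the payout rates and parameters below are made up for illustration):

```python
import random

def epsilon_greedy(true_rates, epsilon=0.1, pulls=10_000, seed=42):
    """Simulate an epsilon-greedy bandit. With probability epsilon,
    explore a random arm; otherwise exploit the arm with the best
    observed payout rate so far."""
    rng = random.Random(seed)
    wins = [0] * len(true_rates)
    plays = [0] * len(true_rates)
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))  # explore
        else:
            # exploit: pick the arm with the best win rate so far
            arm = max(range(len(true_rates)),
                      key=lambda a: wins[a] / plays[a] if plays[a] else 0.0)
        plays[arm] += 1
        if rng.random() < true_rates[arm]:  # did this pull pay out?
            wins[arm] += 1
    return plays, wins

# Three slot machines paying out 2%, 10%, and 4% of the time.
plays, wins = epsilon_greedy([0.02, 0.10, 0.04])
# Over time, most pulls gravitate to the 10% arm, while the small
# epsilon keeps the other arms from being abandoned entirely.
```

Even this toy version shows the core idea: the algorithm earns while it learns, instead of waiting for the experiment to end.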
Now, how does this differ from the Classical (50/50) Split A/B Test?
Typical (Classical) A/B Test
Let us say you have a Control and one Variant. In a typical A/B test, traffic is split evenly until you turn off the test. If the Control is converting at 80% and the Variant at 20%, the test will still send 50% of your traffic to the variant that is performing poorly.
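The cost of that even split is easy to put in numbers. A small sketch, using the (deliberately exaggerated) 80%/20% rates from the example above and a hypothetical 10,000 visitors:

```python
def classic_split_conversions(visitors, control_rate, variant_rate):
    """In a classical A/B test, traffic stays split 50/50 for the whole
    run, no matter how each version performs."""
    half = visitors // 2
    return half * control_rate + half * variant_rate

# 10,000 visitors, Control converting at 80%, Variant at 20%.
total = classic_split_conversions(10_000, 0.80, 0.20)   # 5,000 conversions
# Had all traffic gone to the Control instead:
best_case = 10_000 * 0.80                               # 8,000 conversions
```

In this extreme example, the fixed split forfeits 3,000 conversions that a smarter allocation could have captured while the test was still running.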
Multi-Armed Bandit Test
With a multi-armed bandit, the conversion rates of your variants are constantly monitored, and an algorithm uses those rates to decide how to split the traffic to maximize your conversions. The result is that if the Control is performing better, more traffic is sent to the Control.
Each variation in each test has a weight, creation date, number of views, and number of conversions. We look at the number of views, conversions, and creation date to decide the weight (what percentage of visitors see the variation). These weights are adjusted every few hours based on the previous cumulative results.
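As a rough illustration of how views, conversions, and creation date could feed into a weight, here is a minimal sketch. This is not Crazy Egg's published formula; the smoothing, the newness bonus, and the minimum-share floor are all assumptions chosen to show the general shape of the idea:

```python
from datetime import datetime, timedelta, timezone

def recompute_weights(variations, now, newness_days=2, floor=0.05):
    """Reassign traffic weights from cumulative results (illustrative
    only). Weight is proportional to the observed conversion rate;
    recently created variations get a bonus so they are not starved of
    traffic, and every variation keeps a minimum share (`floor`)."""
    scores = []
    for v in variations:
        # Laplace smoothing gives brand-new variations a non-zero rate.
        rate = (v["conversions"] + 1) / (v["views"] + 2)
        age_days = (now - v["created"]).total_seconds() / 86_400
        if age_days < newness_days:  # boost newly added variations
            rate *= 1.5
        scores.append(rate)
    total = sum(scores)
    shares = [max(s / total, floor) for s in scores]
    norm = sum(shares)
    return [round(s / norm, 3) for s in shares]

now = datetime(2024, 1, 15, tzinfo=timezone.utc)
weights = recompute_weights([
    {"views": 1000, "conversions": 100, "created": now - timedelta(days=10)},
    {"views": 1000, "conversions": 20,  "created": now - timedelta(days=10)},
], now=now)
# The better-converting variation ends up with the larger traffic share.
```

Rerunning a function like this every few hours against the cumulative totals is what lets the traffic split drift toward the winner while the test is still live.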
The end result: you do not lose out on possible conversions from new traffic.
How to make the best use of A/B testing?
A/B testing yields the best results when you gather visitor intelligence. You need to gain a deep understanding of what makes visitors take action, and determine what stops qualified visitors from taking action. To do this, you need a process.
Check out The A/B Testing Process.