A/B Testing: 101 Guide

What is A/B Testing?

The term A/B testing comes up whenever people talk about website conversions.  Many people speak about one form of A/B testing versus another, but few explain the differences, what A/B testing is ultimately for, and how to make the best use of it.

What is A/B Testing? What is it for?

The technical definition is that A/B Testing is an experiment-based research methodology.  Simply put, though, it is a way of figuring out which version of something a visitor (consumer, buyer, purchaser) prefers.  More importantly, when done right, an A/B test will show you which version causes a visitor to take the actions you want them to take.

Different Types of A/B Testing

There are several A/B Test methodologies.  First, let’s start with the Multi-Armed Bandit methodology, one of the more popular ones to use.

A multi-armed bandit is a type of experiment where:

  • The goal is to find the best or most profitable action
  • The randomization distribution can be updated as the experiment progresses

The name “multi-armed bandit” describes a hypothetical experiment where you face several slot machines (“one-armed bandits”) with potentially different expected payouts. You want to find the slot machine with the best payout rate, but you also want to maximize your winnings. The fundamental tension is between “exploiting” arms that have performed well in the past and “exploring” new or seemingly inferior arms in case they might perform even better. There are highly developed mathematical models for managing the bandit problem, which we use in Crazy Egg content experiments.
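To make the explore/exploit trade-off concrete, here is a minimal sketch of one common bandit strategy, Thompson sampling. This is an illustration of the general technique, not Crazy Egg's implementation; the arm names and payout rates are made up.

```python
import random

def thompson_pick(arms):
    """Pick the arm with the highest draw from its Beta posterior.

    arms maps a name to [successes, failures]; a Beta(s+1, f+1) draw
    balances exploiting proven arms with exploring uncertain ones.
    """
    draws = {name: random.betavariate(s + 1, f + 1) for name, (s, f) in arms.items()}
    return max(draws, key=draws.get)

arms = {"A": [0, 0], "B": [0, 0]}
true_rates = {"A": 0.05, "B": 0.10}  # hypothetical slot-machine payout rates

random.seed(42)
for _ in range(5000):
    arm = thompson_pick(arms)
    if random.random() < true_rates[arm]:
        arms[arm][0] += 1  # success
    else:
        arms[arm][1] += 1  # failure

# The better-paying arm should accumulate most of the pulls.
pulls = {name: s + f for name, (s, f) in arms.items()}
print(pulls)
```

Notice that the algorithm never fully abandons the weaker arm: it keeps exploring it occasionally, in case its early results were just bad luck.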

Now, how does this differ from the Classical (50/50) Split A/B Test?

Typical (Classical) A/B Test
Let us say you have a Control and one Variant. In a typical A/B test, traffic is split evenly until you turn off the test. If the Control is performing with an 80% conversion rate and the Variant with a 20% conversion rate, the test will still send 50% of your traffic to the poorly performing Variant.
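A toy simulation makes the cost of the fixed split visible. The conversion rates below are illustrative, not real benchmarks.

```python
import random

# Fixed 50/50 split: the allocation never reacts to the results.
random.seed(7)
rates = {"control": 0.08, "variant": 0.02}  # hypothetical conversion rates
traffic = {"control": 0, "variant": 0}
conversions = {"control": 0, "variant": 0}

for i in range(10000):
    arm = "control" if i % 2 == 0 else "variant"  # even split, regardless of performance
    traffic[arm] += 1
    if random.random() < rates[arm]:
        conversions[arm] += 1

# Half the visitors saw the weaker variant for the entire test.
print(traffic, conversions)
```

Every visitor sent to the weaker variant after its poor performance became obvious is a conversion you potentially gave up.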

Multi-Armed Bandit Test
With a multi-armed bandit, the conversion rates of your variants are constantly monitored. An algorithm uses these rates to determine how to split the traffic to maximize your conversions. The result is that if the Control is performing better, more traffic will be sent to the Control.

Each variation in each test has a weight, a creation date, a number of views, and a number of conversions. We look at the views, conversions, and creation date to decide the weight (what percentage of visitors see the variation). These weights are adjusted every few hours based on the cumulative results so far.

The end result: you do not lose out on possible conversions from new traffic.
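The weight-recomputation step described above can be sketched as follows. Crazy Egg's actual algorithm is not public, so this is only a hedged illustration: it turns each variation's (views, conversions) into a traffic weight by estimating how often that variation wins a draw from its posterior.

```python
import random

def recompute_weights(variations, draws=10000):
    """Estimate each variation's chance of being best; use that as its weight.

    variations maps a name to (views, conversions). Each draw samples a
    plausible conversion rate for every variation from a Beta posterior
    and credits a win to whichever rate comes out highest.
    """
    wins = {name: 0 for name in variations}
    for _ in range(draws):
        samples = {
            name: random.betavariate(conv + 1, views - conv + 1)
            for name, (views, conv) in variations.items()
        }
        wins[max(samples, key=samples.get)] += 1
    return {name: w / draws for name, w in wins.items()}

random.seed(0)
stats = {"control": (1000, 80), "variant": (1000, 20)}  # (views, conversions), made up
weights = recompute_weights(stats)
print(weights)
```

Rerunning a function like this every few hours on the cumulative numbers is what lets the better-performing variation steadily absorb more of the traffic.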

How to make the best use of A/B testing?
A/B testing yields the best results when you gather visitor intelligence.  You need to gain a deep understanding of what makes visitors take action and what stops qualified visitors from taking action.  To do this, you need a process.

The A/B Testing Process

A/B Testing works best when you establish a process.  Having an established process will help you focus your energy and quickly determine what is working and what is not.

Why are processes important? Processes describe how things are done, provide the focus for making the outcome better, and align energies and efforts.  Most importantly though, a process determines how successful the outcome will be.  If you focus on the right processes (and steps within the process), in the right way, you can design your way to success.

Example Process
Mike Loveridge (the Head of Conversion Rate Optimization at TSheets by QuickBooks) wanted to ramp up Intuit’s A/B testing velocity and build an experimentation-forward culture.  To do this, though, he needed to establish a process that would allow for both speed and ease of implementation.  He and his team developed this three-step process.

  1. Reviewing data from Google Analytics and past A/B tests to find out where website visitors might be getting stuck in the funnel and on specific page sections.
  2. Using Crazy Egg Snapshots and Recordings to observe the user experience firsthand and further identify areas of opportunity.
  3. Running experiments (A/B tests) and using Crazy Egg Heatmaps and Confetti reports as part of the analysis during and after the test.

The result, he says: “We recently scaled to 20 tests a month.  It’s amazing how different everything is at the company when you have that additional customer data to work with and an established process that allows us to design our way to success.”

The Process in Action
Mike’s team completed several experiments, one of which was on their Pricing Page.  Following their process, the team noticed that website visitors showed fantastic engagement with the content, yet half the traffic still did not convert.

Using Crazy Egg’s user session Recordings, they discovered a high percentage of prospects were checking out the pricing information, then scrolling back up to the header and using the navigation to leave the page and go elsewhere on the site.  Essentially, they had an unintended escape hatch.  “We wouldn’t have known this if we hadn’t watched several video recordings for ourselves,” says Mike.

Armed with this insight, the team concluded that visitors were either (1) getting distracted or (2) looking for more information (or curious about other choices).  Next, the team will A/B test a few ways to reduce the noise on the page.

For another example with proven results, check out the Case Study: Wall Monkey’s 550% Conversion.
