In a previous post about quantitative vs. qualitative research, I briefly mentioned A/B testing as a type of quantitative research product managers should be familiar with. This post will cover A/B testing in more detail - we'll take a look at what it is, why it's important, how it's done, and some examples.
What Is A/B Testing?
A/B testing compares a changed page design against the current design, which allows the team to pick the design that produces better results. To do this, the two versions of the page are shown to similar visitors at the same time. For example, half of the visitors would see version A of the page, and the other half would see version B.
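In practice, the 50/50 split is often done by hashing a visitor ID rather than flipping a coin on every page load, so a returning visitor always sees the same version. A minimal sketch in Python, with hypothetical visitor IDs:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing the visitor ID (instead of choosing randomly each time)
    guarantees the same visitor always lands in the same bucket.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Across many visitors the split comes out roughly 50/50.
visitors = [f"visitor-{i}" for i in range(1000)]
counts = {"A": 0, "B": 0}
for v in visitors:
    counts[assign_variant(v)] += 1
```

Real A/B testing tools handle this assignment for you, but the underlying idea is the same: each visitor sees exactly one version, consistently.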
Why Is It Important?
A/B testing is important because it lets you improve your site's key metrics by testing a hypothesis on a certain sample size and then rolling out the winning variation to your entire audience. The idea is that even small changes to a page can have a large impact on whether a customer decides to purchase an item, sign up for emails, or complete whatever other action your page focuses on.
How Is It Done?
You can test pretty much anything on a page that the visitor interacts with, including headlines, text, call to action buttons, images, links, and social proof. The best method I've come across for effective A/B testing is to use the scientific method:
- Ask a Question
- Do Background Research
- Come Up with a Hypothesis
- Determine the Number of Visitors
- Analyze Data and Draw Conclusions
- Report Results
Quick Sprout has a very helpful resource page with links that cover how to get started on A/B testing as well as the nitty-gritty details.
What Are Some Examples?
Say you're the product manager at an ecommerce company and you've noticed that mobile shoppers are abandoning their online carts at a higher rate than the industry average. You ask the question "Why are mobile customers abandoning the cart when they reach this page?" You continue with some background research by looking at Google Analytics to confirm the trend and see if you can spot similar issues on other pages. After further research you're confident in your hypothesis: "Customers are abandoning their carts because the checkout button is below the screen fold."
With this in mind you prepare the A/B test with two versions - A is the current design with the checkout button below the screen fold, and B is the new design with the checkout button at the top of the page. You decide you want to split your visitors 50/50 and run the test with A/B testing software. Once you look at the data, you confirm your initial hypothesis: conversion was about twice as high with B as it was with A. Finally, you report these results to your team and update your page design.
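Confirming the hypothesis means more than eyeballing the two conversion rates - a quick significance check guards against the difference being noise. A minimal sketch of a two-proportion z-test in Python, with hypothetical visitor and conversion counts:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int,
                           conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion
    rates (two-proportion z-test with a pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: 50/1000 checkouts on A vs. 100/1000 on B.
# A tiny p-value (conventionally below 0.05) means the doubling in
# conversion is very unlikely to be random chance.
p = two_proportion_p_value(50, 1000, 100, 1000)
```

Most A/B testing software reports this kind of significance figure for you; the point is to wait for it before declaring a winner.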
Curious to learn more about A/B testing? Meet, learn from, and chat with other product managers around the world in our Product Manager HQ Community.