
What is A/B testing?

Team Storyly
March 16, 2023

What is A/B Testing?

A/B testing is when a business compares two versions of something to see which performs better. It's most often used in marketing. In an A/B test, both versions are shown at the same time to randomly selected users, and whichever version performs better on the metric being measured wins.

Why is A/B testing important?

Not even the best marketers or designers get the copy, design, and messaging perfect on the first try. In fact, the best in those professions constantly work to improve their efforts over time. Some may simply trust that the new version of their work is better than the old one, but those hunches aren't always right.

A/B tests provide quantitative evidence that the new version actually is better than the old one. They let teams in the early stages of rolling out a product see which of two candidates performs better, and they offer a simple way to maximize ROI.

How to do an A/B test?

With the definition of A/B testing out of the way, let's look at how the process is actually implemented. The six steps below provide a broad overview of how it works. The exact steps will depend on the A/B testing tools you've chosen to use.

  1. Identify a Specific Element for Testing
    A/B testing is only useful for comparing very specific things. For example, you can see whether one version of your call-to-action button works better than the other. Broad goals, such as increasing conversion rates, need to be narrowed down to small things that can be tested. 
  2. Narrow Down the Variables
    Narrowing the test to a specific element, such as a call-to-action button, still requires further refinement. Will you change the color, or the text? Maybe the shape? Whatever you decide to test, limit it to as few variables as possible; otherwise, you won't know which change made the difference if the metrics move.
  3. Decide Which Variables to Test
    With the list of potential variables to test narrowed down, you can finally decide which ones will make the cut. For each test, you should change as few variables as possible. Choose the ones most likely to make an impact and save the rest for future refinements.
  4. Run Two Versions of the Variable
    Now you need to randomly show some of your visitors the first version of the element and some of them the other version. For example, if you change the text on your CTA button, half of your visitors will see version A, and the other half will see version B. The choice of which visitors see which version should be as random as possible to avoid bias.
  5. Compare the Results of Each
    After the test has run for a suitable period of time, compare the A/B testing statistics from the two versions to see which performed better. For the CTA button, the metric would be how many people followed through on the call-to-action and moved further into the sales funnel. 
  6. Repeat the Process
    A/B testing shouldn't be viewed as a one-time thing. Good marketers know that continually updating their strategy is the best way to grow a business. Not only is there always room for improvement, but markets change over time and what once worked best might not remain the best option.
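The assignment and comparison steps above can be sketched in a few lines of Python. This is a minimal illustration, not tied to any particular testing tool; the experiment name, user IDs, and conversion counts are made up for the example.

```python
import hashlib
from statistics import NormalDist

def assign_variant(user_id: str, experiment: str = "cta-text") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing (experiment name + user_id) gives each user a stable,
    effectively random assignment without storing any state, so a
    returning visitor always sees the same version.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example: version B converted 120 of 1,000 visitors vs. A's 90 of 1,000.
z, p = two_proportion_z_test(90, 1000, 120, 1000)
```

If the p-value comes out below your chosen significance threshold (commonly 0.05), the difference is unlikely to be random noise and the winning variant can be rolled out.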

Why should you consider A/B testing?

With A/B testing, marketing departments can dramatically improve their results. Below are some of the main benefits you can hope to see by implementing a well-thought-out A/B testing strategy.

  • Increase conversions - By finding the best-performing calls-to-action and marketing copy, you'll be pushing more users further into the funnel. This means more qualified leads and more sales conversions. 
  • Improve customer experience - Often, the reason a particular element of the page performs poorly is that it provides a poor user experience. A button may be placed in a location that isn't ideal, or the copy on the page might be confusing. As poor-performing elements are replaced with better performers through A/B testing, user experience improves.
  • Reduce bounce rate - Bounce is one of the worst things that can happen to a marketer. This is especially true if the user came to the page through a paid ad. By creating better-performing landing pages, A/B testing reduces bounce rates and improves the ROI of the marketing efforts that drive visitors to the landing page.

What are the different types of A/B tests?

There are some variations on traditional A/B tests worth mentioning. Although technically distinct from A/B testing, marketing departments use these types of comparison tests in a similar way.

  • Split URL testing - With traditional A/B testing, Javascript or some other method is used to show different variations of the same page to visitors. Split URL testing relies instead on sending users to an entirely different URL, depending on which version of the tested element they should be seeing. Typically, this is to test two very different page designs.
  • Multivariate testing - We've emphasized the importance of changing as few variables as possible; controlling variables is a basic principle of experimental design. With enough traffic and sufficiently sophisticated statistical analysis, however, more than one variable can be tested at once. When this is done, A/B testing becomes multivariate testing.
  • Multipage testing - Sometimes the test a company wants to perform requires changes to more than one page. For example, a business might want to test how an entirely different funnel flow affects its conversion rates. When users are sent down an entirely different set of pages, it's known as multipage testing.

What are the difficulties of A/B testing?

A/B testing seems simple, but there are a number of things that can go wrong during the test that make the results less accurate than they could be. This could result in making changes that don't actually improve things, or worse, have a negative impact on the tested metric. Some of the common problems with A/B testing are listed below:

  • Not enough traffic - One of the most important concepts in statistics is sample size. If only one person sees each variant, you can't conclude much about how successful either is. The sample size you need depends on what you're testing, the size of the effect you hope to detect, and how much confidence you want in the results. In general, you want at least 1,000 impressions.
  • Time constraints - Depending on how much traffic the page you're testing gets, it could take a long time to get to the desired sample size. A related mistake is to stop the test early as soon as one of the competitors starts pulling ahead. There's no guarantee an early lead will finish first. It's important to let the test run its course. 
  • Ensuring random samples - If there isn't an equal representation of people seeing each version, then you can't tell whether it was the variable that made a difference or the difference in visitor demographics. The samples also must match your typical visitor. Some companies spend big on an ad campaign to drive enough traffic to meet sample size requirements. If these visitors differ greatly from those who would normally visit your site, then the data is contaminated and will be less accurate. 
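To make the sample-size point concrete, here is a rough per-variant calculation using the standard normal approximation for comparing two conversion rates. This is a simplified sketch; real testing tools may use more refined formulas, and the baseline rate and lift below are illustrative numbers.

```python
import math
from statistics import NormalDist

def required_sample_size(baseline_rate, min_detectable_lift,
                         alpha=0.05, power=0.80):
    """Approximate per-variant sample size for detecting a lift between
    two conversion rates (two-sided test, normal approximation)."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Detecting a 2-point lift over a 10% baseline takes a few thousand
# visitors per variant - far more than 1,000 total impressions.
n = required_sample_size(0.10, 0.02)
```

Note how the required sample shrinks as the effect you want to detect grows: small improvements need much more traffic to verify than large ones.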

A/B testing and SEO

Making changes to your website could also make changes to your search engine ranking. It would be undesirable for your A/B tests to cause you to lose ranking on major search engines. Search engine companies are aware of the need for testing, however. Google has put together a helpful list of things to keep in mind when testing to ensure your SEO isn't impacted:

  • Don't block Googlebot - Showing one version of a page to Google's bot and another to actual people is a technique often used by those trying to game the system. Because of this, Google punishes sites that do this. Keep this in mind when determining how to split your samples.
  • Use the canonical attribute - The canonical attribute tells the search engine crawler that the page it's looking at isn't the preferred version to index and points them to the one that is. This is useful for multipage and split-URL testing. 
  • Use temporary redirects - There are two types of redirects you can use: temporary and permanent. When redirecting visitors to split your sample, be sure to use temporary redirects so the search engine bot doesn't mistake the test for the new permanent content. 
  • Keep it short - Although your test needs to run long enough to hit your sample size target, running it too long may increase the impact it has on your search rankings. Stop the test as soon as you hit your desired sample size or time frame.
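The redirect and canonical advice above can be sketched in code. Assuming a split-URL test where visitors are bucketed by user ID (the function names and URLs here are hypothetical placeholders, not any particular framework's API), the key details are the 302 status line and the canonical tag on the variant page:

```python
import hashlib

def split_url_redirect(user_id: str, control_url: str, variant_url: str):
    """Pick a destination for this visitor and return the HTTP status
    line and headers for the redirect. "302 Found" marks the redirect
    as temporary, so crawlers keep indexing the original URL rather
    than treating the variant as a permanent move."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    target = control_url if bucket == 0 else variant_url
    return "302 Found", [("Location", target), ("Cache-Control", "no-store")]

def canonical_tag(control_url: str) -> str:
    """Tag to place in the variant page's <head> so search engines
    treat the control page as the preferred version to index."""
    return f'<link rel="canonical" href="{control_url}">'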

Some A/B testing examples

Now that we've covered how A/B testing works, let's look at the kinds of things marketers tend to test. The possibilities are nearly endless, since anything you can change on your site can serve as a test variable. The categories below offer some A/B testing examples to draw ideas from.

Copy
This is the actual copy of your site. Does the tone of your message resonate with customers? Is your call-to-action as effective as it could be? Are you presenting the information in the right order? All of these examples can be checked with A/B testing.

Appearance
Designers know the importance of choosing the right fonts, color schemes, and shapes. Variations of all these things will appeal to different audiences. Finding the one that appeals the most to your audience is important to improving site performance. A series of A/B tests can make dramatic improvements in this area.

Layout
The layout of your site is a subset of its appearance. Rather than focusing on colors, fonts, and shapes, layout is concerned with the positioning of elements. Where on the page do buttons perform best? Where do forms generate the most submissions? Are the navigation elements arranged to give the customer the easiest flow? A/B tests on layouts can improve customer experience.