A/B testing—presenting two versions of an item to a test audience to determine which performs better—is a valuable tool ecommerce business owners can use to accomplish a variety of goals. Not only does it provide insight into what your target audience likes, but it also reduces the guesswork involved in making key decisions about your site, increases conversion rates, and more.
While most A/B testing tools include solid in-product instructions and knowledge base articles explaining how to use them, there are a few more general steps you should take to ensure that the results these tools produce are actually useful. Below, we detail the steps to take before and after running an A/B test to get the most accurate (and productive) results.
Before Running Your A/B Test
As you prepare to set up your A/B test in the testing tool of your choice (Optimizely and Visual Website Optimizer (VWO) are two of the most popular), follow these steps to ensure that your test is optimized for a successful run:
1. Define the overall goal of your A/B test
Prior to your test, it’s important to choose which metric to monitor so you know how to judge success or failure. Think about what area of your site you would most like to improve. Are your clickthrough rates falling short? Are you having trouble building a list for email marketing? Are you just not seeing many conversions? Pick one area or metric and set a specific goal for yourself (e.g., “I want to raise my conversion rate by X%”).
2. Record your site’s current performance
Before you make any changes to your site, make sure you record your site’s existing metrics so that you have a picture of how it performs under normal circumstances. What is your conversion rate? How many clicks do your CTAs typically receive? How many sales do you usually make in a given period of time? Noting these metrics beforehand gives you a baseline to compare against at the conclusion of your test, so you can see exactly how effective the winning variation was.
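If you track raw traffic and order counts, these baseline figures take only a quick calculation. Here is a minimal sketch using made-up numbers for visits, orders, and CTA clicks; swap in your own analytics data.

```python
# Baseline metrics from hypothetical 30-day traffic counts; replace with your own numbers.
visits = 48_200      # total sessions
orders = 1_157       # completed purchases
cta_clicks = 5_301   # clicks on the primary call-to-action

conversion_rate = orders / visits
cta_clickthrough_rate = cta_clicks / visits

print(f"Baseline conversion rate: {conversion_rate:.2%}")               # ~2.40%
print(f"Baseline CTA click-through rate: {cta_clickthrough_rate:.2%}")  # ~11.00%
```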
3. Identify which items you want to test
Choose a variable to test that you believe will help you achieve your chosen goal. Some of the most commonly tested variables in ecommerce A/B tests include headlines, body copy, CTAs, forms, site design, navigation, and SEO tactics (meta descriptions, keyword density, etc.). While it may be tempting to make multiple changes to find a successful combination more quickly, it’s important not to test too many things at once—if you do, it will be difficult to determine which of your variables was actually responsible for higher or lower performance.
4. Set up your two test variations
Document the specific differences you will test and apply each variation consistently across all pages. For example, if you are testing a horizontal site menu (A) against a vertical one (B), make sure that variation A includes a horizontal site menu on every page and that variation B uses a vertical menu on each page. This keeps the customer’s experience consistent throughout the site and leaves no confusion about which factors caused which results.
5. Test both versions (A and B) concurrently
To make things easier, you may be tempted to run variation A for a certain amount of time and then switch to variation B for the same amount of time. However, because conditions on the web are not guaranteed to be identical for two separate time periods, this approach can skew your results in unpredictable ways. For example, say you are selling grills. If you run variation A for two weeks, and then switch to variation B at the same time as a safety campaign about fatal grilling accidents goes viral, you may see significantly lower sales during variation B’s run that have nothing to do with the changes you made to your site.
6. Enter your information into your A/B testing software
Once you have determined and set up the above items, plug all of your information into your A/B testing software and set the test to run. Most software should do the work of evenly and randomly dividing your test audience into groups A and B for you, so you shouldn’t have to worry about this part of the testing process.
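If you are curious how that even, random split typically works under the hood, a common approach is to hash a stable visitor ID so each visitor is consistently assigned to the same group. A minimal sketch (the experiment name and visitor IDs here are hypothetical):

```python
import hashlib

def assign_variation(visitor_id: str, experiment: str = "menu-layout-test") -> str:
    """Deterministically bucket a visitor into variation A or B.

    Hashing the visitor ID together with the experiment name produces a
    roughly even split and ensures the same visitor always sees the same
    variation on every visit.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example with made-up visitor IDs
for vid in ["user-1001", "user-1002", "user-1003"]:
    print(vid, "->", assign_variation(vid))
```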
7. Let the test run long enough to produce meaningful results
The amount of time your test should run can vary by the type of variable, your usual site traffic, and other factors. For instance, SEO updates take longer to take effect and may need a few months of testing to yield significant results, whereas a high-traffic site testing the placement of a CTA button will likely need far less time to determine which variation works better. Consider your traffic volume and the size of the improvement you hope to detect when deciding how long to let your test run.
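One rough way to put a number on “long enough” is to estimate how many visitors each variation needs before you start, then divide by your daily traffic. The sketch below uses the standard two-proportion sample size formula; the baseline rate, target lift, and daily traffic figures are assumptions you would replace with your own.

```python
from math import ceil
from statistics import NormalDist

def visitors_per_variation(baseline_rate: float, min_lift: float,
                           alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed in each group to detect `min_lift`
    (an absolute change in conversion rate) at the given significance
    level and statistical power, using the two-proportion formula."""
    p1, p2 = baseline_rate, baseline_rate + min_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / min_lift ** 2)

# Assumed example: 2.4% baseline conversion rate, hoping to detect a lift to 3.0%
needed = visitors_per_variation(0.024, 0.006)
daily_visitors_per_group = 800  # hypothetical traffic, split evenly between A and B
print(f"{needed} visitors per variation -> about {ceil(needed / daily_visitors_per_group)} days")
```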
After Running Your A/B Test
After your test has concluded, there are a few additional steps you should take to ensure you are getting the most out of the process:
8. Analyze the results of your test
Of course, once a test has concluded, you should look at the results. Most testing tools will clearly spell out which variation was the more successful of the two, but you should also dig into any more granular data provided to give yourself a clearer picture. If the results surprised you, consider what may have caused them to turn out this way. Perhaps you assumed a cleaner design would appeal to your customers, but they actually went for a variation with more abundant, striking imagery. Do your products simply require more visual representation to be enticing? Does having more imagery make your business seem more trustworthy?
9. Pay attention to statistical significance
When conducting A/B testing, there is always a risk that random chance influenced the outcome. However, today’s testing platforms automatically account for this variance using statistical calculations that factor in relevant data, such as the number of site visitors included in your experiment (sample size) and the difference in conversion rates between your test groups. The statistical significance report typically includes two figures (a quick way to reproduce them yourself is sketched after this list):
- A confidence level, expressed as a percentage, that indicates how likely it is that the difference you observed reflects a real effect rather than random chance; 90% or higher is generally considered reliable, while anything lower should be treated as inconclusive.
- A range of conversion rates known as the “confidence interval.” This represents the range of conversion rates you might observe long-term or in repeated sampling of your site visitors; at a 95% confidence level, roughly 95 out of 100 similar samples would produce a result within this range.
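Your testing platform reports these figures for you, but if you ever want to sanity-check them against exported counts, both the significance level and the confidence interval can be reproduced with a standard two-proportion z-test. A minimal sketch with made-up conversion and visitor counts:

```python
from math import sqrt
from statistics import NormalDist

def ab_test_summary(conv_a: int, n_a: int, conv_b: int, n_b: int, confidence: float = 0.95):
    """Two-proportion z-test plus a confidence interval for the difference
    in conversion rates between variation A and variation B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_b - p_a

    # Significance: z-test using the pooled conversion rate
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    p_value = 2 * (1 - NormalDist().cdf(abs(diff / se_pool)))  # two-sided

    # Confidence interval for the observed difference in rates
    se_diff = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z_crit = NormalDist().inv_cdf(0.5 + confidence / 2)
    interval = (diff - z_crit * se_diff, diff + z_crit * se_diff)

    return {"rate_a": p_a, "rate_b": p_b, "p_value": p_value, "confidence_interval": interval}

# Hypothetical exported counts: conversions and visitors for each variation
print(ab_test_summary(conv_a=290, n_a=12_000, conv_b=345, n_b=12_000))
```

A p-value below 0.05 corresponds to the 95% confidence level that most platforms use as their default threshold.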
10. Decide what action(s) to take
Now that your test is complete and you have looked through the results with a critical eye, you can decide which action to take next. If you’ve got a clear winner, update your site to reflect the improved experience and enjoy the added dollars your site will make as a result. If your test results were very close for both variations, consider whether the work to implement a change is really needed, or whether you need to re-analyze your variables and run another test.
In Conclusion
While A/B testing may seem like a complicated process at first glance, it’s not too difficult to get started—once you’ve set up the groundwork, most A/B testing platforms manage the logistics behind running a successful test for you. With a clearer understanding of how to approach A/B testing, you can run tests that provide you with meaningful and actionable results.