A/B testing is a scientific method of comparing the performance of two different versions of a marketing campaign and using the results to improve your methods and increase your return on ad spend.
Although it takes creativity to market or advertise well, it also takes scientific research. That’s because successful marketing and advertising aren’t subjective. Great marketing and advertising aren’t driven by our whims, no matter how genius they may seem. They’re driven by data.
You get real results from your campaigns, good or bad. And those results should inform your marketing and advertising decisions. In Scientific Advertising, advertising pioneer Claude C. Hopkins says, “Almost any question can be answered, cheaply, quickly and finally, by a test campaign. And that’s the way to answer them – not by arguments around a table.”
Test Marketing Campaigns with A/B Testing
A/B testing measures and validates your marketing campaigns by pitting two different versions of the same campaign against each other. The versions are randomly shown to equally split audiences, such as website visitors or email subscribers. The results of an A/B test then tell marketers which version resonates most with the audience.
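To make the mechanics concrete, here’s a minimal Python sketch of how a testing tool might randomly split an audience 50/50. Hash-based bucketing is a common technique, but the function and visitor IDs here are hypothetical, not any particular vendor’s implementation.

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into version A or B.

    Hashing the user ID together with the experiment name gives each
    visitor a stable assignment, so returning visitors always see
    the same version of the campaign.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "A" if bucket < 50 else "B"  # equal 50/50 split

# Hypothetical visitor IDs
for uid in ["visitor-1", "visitor-2", "visitor-3"]:
    print(uid, "->", assign_variant(uid, "homepage-cta-test"))
```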
Here are some of the ways you can use A/B testing to improve your marketing:
- Reduce bounce rate
- Improve click-through rate (CTR)
- Increase subscribers/leads
- Increase sales
How to Perform A/B Testing
- Start with a successful marketing asset that already has baseline statistics. This is your control, or version A. Typical marketing assets to test include an email campaign, a web page, a landing page, a social post, an online ad, and so on.
- Choose a marketing tool. A/B tests are possible through several marketing tools, but the asset you’re testing will generally determine the choice for you. For example, to run an A/B test on a Google Ad, you’ll set up a campaign experiment, and individual social channels support A/B testing when you run a paid ad. You can also use Hootsuite to A/B test social ads, Constant Contact to run a simple test on email subject lines, or a tool like HubSpot or Optimizely to run complete A/B tests on emails, landing pages and CTAs.
- Have a goal. Before you start, know what you’re hoping to achieve from your A/B test. This could be a higher CTR, more subscribers, longer average session length (time spent per visitor) on your web page, more purchases and so on. Choosing a goal ahead of time helps you form your hypothesis.
- Form a hypothesis. A/B tests prove or disprove a hypothesis. Don’t just throw spaghetti at a wall to see what sticks; you’ll waste time testing unimportant variables. Use market research and/or analytics (from your website, social media channels and email marketing platform) to gain insight into which variables matter to your audience, then form your hypothesis.
For example: if your social posts get more engagement when they feature pets, you can theorize that pets resonate with your audience and, moreover, that your social ad will perform better if you feature a pet in the product photo.
- Pick one variable to test, such as an email subject line, an ad headline, content length or tone, CTA location or text, or an image.
- Create two versions of the same campaign, changing only that one variable. You can test more than two versions of the same campaign by splitting the test three or more ways (called A/B/n testing).
- Determine how statistically significant your results must be. Most A/B testing tools default to a 95% confidence level, which means there’s only a 5% probability that the observed difference occurred by chance or error. It’s important to know when your own results are statistically significant; a minimal significance check is sketched after this list.
- Run your test for at least 4 full weeks or until you reach statistical significance, whichever comes later. Conversion rates fluctuate with the day of the week, so if you need to run your test longer, extend it in 7-day increments, making sure to end the test on the same day of the week it began (see the scheduling sketch after this list).
- Analyze your results. When your test has ended, your marketing tool will show you how your two campaigns performed and let you know if the results are statistically significant.
- Run another test. With each A/B test, you can improve another variable of your marketing campaign until you have nearly perfect marketing assets.
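As promised in step 7, here’s a minimal Python sketch of the kind of significance check testing tools run behind the scenes: a two-proportion z-test on conversion counts. The visitor and conversion numbers are made up for illustration.

```python
from math import sqrt
from statistics import NormalDist

def significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: is B's conversion rate really different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))                # two-sided p-value

# Hypothetical results: 1,000 visitors saw each version
p = significance(conv_a=50, n_a=1000, conv_b=72, n_b=1000)
print(f"p-value: {p:.4f}, significant at 95%: {p < 0.05}")
```

A p-value below 0.05 is what “statistically significant at a 95% confidence level” means in practice.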
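And for step 8’s timing rule, a small sketch (with a hypothetical start date) of extending a test in 7-day increments so it always ends on the weekday it began:

```python
from datetime import date, timedelta

def extend(end: date, weeks: int = 1) -> date:
    """Extend a running test in whole-week steps, keeping the end
    on the same day of the week as the start."""
    return end + timedelta(weeks=weeks)

start = date(2024, 3, 4)            # a Monday (hypothetical start date)
end = start + timedelta(weeks=4)    # minimum run: 4 full weeks, also a Monday
end = extend(end)                   # not significant yet? add one more week
print(start.strftime("%A"), "->", end.strftime("%A"))  # Monday -> Monday
```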
A/B Testing Mistakes to Avoid
- Splitting your A/B test between dissimilar parameters. For example, your results will be skewed if you run variation A on a different day/time than variation B, or if you split your audience by gender, region, age, income, etc.
- Testing too many variables at once. The important thing to remember about A/B testing is to limit the different variables in version A and version B. The more variables you change, the more difficult it will be to know which variable made the most difference or how one affected the other.
For example: if you change both the location and the text of a CTA button, you won’t know whether the new text would have succeeded in the original location or whether the button is simply being overlooked in its new spot.
- Testing low numbers. It will be difficult to get statistically significant results or, more importantly, meaningful improvements if you’re working with a small audience. In fact, many marketing tools require a minimum (traffic, subscribers, followers) before you can perform a test; the sample-size sketch after this list shows why.
- Not accounting for holidays or events in your testing timeline. Seasonal and unanticipated events that change browsing and shopping habits can skew your results, including events that affect your brand reputation. Make sure you take these into consideration when analyzing the test results.
- Declaring an early winner. Don’t end a test early because you believe the results are statistically significant. You need at least a full 4 weeks of data to determine a real winner.
- Not retesting. Test results at a 95% confidence level carry a 5% false-positive rate: run 20 tests and, on average, one “winner” will be a fluke. Verify your findings regularly.
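To see why low numbers are a problem, here’s a rough Python sketch of the standard sample-size formula for comparing two conversion rates. The 5% baseline rate and the 1-point lift below are assumptions you’d replace with your own numbers.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p_base: float, lift: float,
                            confidence: float = 0.95, power: float = 0.80) -> int:
    """Visitors needed per variant to reliably detect `lift` over baseline `p_base`."""
    p_new = p_base + lift
    p_avg = (p_base + p_new) / 2
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ≈ 1.96
    z_beta = NormalDist().inv_cdf(power)                      # ≈ 0.84
    n = (z_alpha * sqrt(2 * p_avg * (1 - p_avg))
         + z_beta * sqrt(p_base * (1 - p_base) + p_new * (1 - p_new))) ** 2 / lift ** 2
    return ceil(n)

# Hypothetical: 5% baseline conversion, hoping to detect a 1-point lift to 6%
print(sample_size_per_variant(p_base=0.05, lift=0.01))  # prints 8158: ~8,000 per variant
```

If your whole audience is a few hundred people, even a large improvement can’t reach significance, which is exactly why many tools enforce minimums.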
Bottom Line
If you’re not A/B testing, you’re missing an opportunity to improve your marketing campaigns. When you back your marketing decisions with data, you get the biggest bang for your buck and a better return on every dollar you spend.