A/B testing is one of the best—and lowest cost—ways to improve your marketing. Comparing one message’s real-world performance against another gives you hard data about what your customers actually respond to. And if you leverage A/B testing to the fullest, you can really boost your KPIs.
I talk to our customers every day about A/B testing best practices—here are some of my favorite tips.
For the most part, marketers test subject lines, calls to action, and occasionally plaintext versus HTML. There are other use cases (like send day, send time, personalized content, email layouts, etc.) but we’ll focus on these first three. Let’s dive in!
What you’re measuring: open rates
If you’re trying to move the needle on open rates, your subject line is prime territory for optimization.
You can A/B test aspects of your subject line like:
Tip: I get a lot of questions about emojis in subject lines, and there are no hard-and-fast rules. They can add a nice touch in the right context, like an envelope with a heart on Valentine’s Day. But don’t overdo it. Lots of emojis can be distracting.
It’s also important to know your audience. In some situations—like B2B marketing, formal industries, serious topics—they can strike the wrong note.
What you’re measuring: click-through rates (CTRs)
You can write an amazing subject line, but it won’t translate to better CTRs if your call to action (CTA) doesn’t land. And a successful CTA can increase your conversion rates by 42%.
Here are some factors you might want to test:
Tip: When testing CTAs (and as a general rule), don’t include too many links in your email. If your customer can’t tell which link is the CTA, you won’t get accurate test results. I suggest no more than three (not counting footer links)—and only one of them should be a button.
What you’re measuring: deliverability, inbox performance, CTRs
This kind of testing is straightforward: is plaintext or HTML email better for your audience? You might assume HTML is always the way to go—who doesn’t want more design control and better tracking? But it can be worth testing to get a read on some key metrics.
Tip: Plaintext versus HTML isn’t necessarily an all-or-nothing game. Various messages and audience segments might call for different approaches. For example, you might test to see if customers and non-customers respond differently or how a personalized welcome email performs in different formats.
Here’s the key to A/B testing: pick one thing.
That is to say, you can test as many variables as you want—but only one at a time.
Every A/B test should include one control and one variation. Here’s an example subject line test for that imaginary Valentine’s Day email I mentioned earlier:
To run the test, you’d send the exact same email, but 50% of your audience would get the control subject line, and 50% would get the variation.
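Your platform handles that split for you, but if you want a feel for what’s happening under the hood, here’s a minimal sketch of a random 50/50 assignment (the audience list and addresses are made up for illustration):

```python
# A rough sketch of a 50/50 split. In practice your email platform
# does this for you; the addresses below are hypothetical.
import random

audience = ["ada@example.com", "grace@example.com",
            "alan@example.com", "katherine@example.com"]

random.shuffle(audience)              # randomize the order
midpoint = len(audience) // 2

control_group = audience[:midpoint]   # gets the control subject line
variation_group = audience[midpoint:] # gets the variation subject line

print(f"Control: {len(control_group)} recipients")
print(f"Variation: {len(variation_group)} recipients")
```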
Let’s say the variation outperforms the control with a statistically significant increase in open rates. That tells you pretty definitively that a personalized subject line with an emoji boosted your open rates!
But if you changed the subject line and the CTA in your variation—how would you know which change had what effect? Say the CTR was higher in the variation. Maybe people responded better to that CTA, maybe they were more engaged because they liked the subject line—who knows?
Tip: Testing multiple variables in one kind of message is easy if you do your tests in sequence:
For your test results to be meaningful, they have to be statistically significant. We represent this as the Chance to Beat Original (CTBO): the probability that the variation genuinely outperforms the control, rather than the difference coming down to random chance.
Sometimes your A/B test results are clear: a CTBO > 95% means your variation outperforms your control, and <5% means the control performed best.
But CTBO values that fall somewhere in the middle—that are not statistically significant—can be confusing. Here’s what’s usually happening:
Tip: Picking a winner is easier when there’s a clear KPI associated with each test, because you’ll know why you’re testing—and that can help you optimize for what matters most.
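If you’re curious how a chance-to-beat number can be calculated, here’s a rough sketch using one common Bayesian approach (Beta posteriors over open rates). This isn’t necessarily the exact math behind your platform’s CTBO, and the send and open counts below are hypothetical:

```python
# A rough sketch of estimating "chance to beat original" with a
# Bayesian approach. The counts are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical results: sends and opens for each version
control_sends, control_opens = 5_000, 1_000      # 20% open rate
variation_sends, variation_opens = 5_000, 1_100  # 22% open rate

# Beta(1, 1) prior + observed opens/non-opens = posterior over each open rate
control_samples = rng.beta(1 + control_opens,
                           1 + control_sends - control_opens, 100_000)
variation_samples = rng.beta(1 + variation_opens,
                             1 + variation_sends - variation_opens, 100_000)

# CTBO ~ how often the variation's sampled open rate beats the control's
ctbo = (variation_samples > control_samples).mean()
print(f"Chance to beat original: {ctbo:.1%}")
```

In this made-up example, a 22% open rate against a 20% baseline on 5,000 sends each comes out well above the 95% threshold—a clear win for the variation.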
Did you know that you can test entire journeys? For example, instead of A/B testing a single email in a workflow, you could explore how SMS performs compared to email throughout a customer journey.
To test a journey, you’ll need to set up random cohort branches. If you were testing performance with SMS versus email, for example, you’d set up two random cohort branches: everyone in Branch A would get emails, and everyone in Branch B would get SMS.
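Random cohort branches are a built-in feature, but if you were wiring up something similar yourself, a common approach is hash-based assignment: hashing each customer ID makes the split look random while keeping every person in the same branch for the entire journey. The function and customer IDs below are hypothetical:

```python
# A rough sketch of stable cohort assignment. The point: each person
# is assigned once and stays in the same branch for the whole journey.
import hashlib

def assign_branch(customer_id: str, test_name: str = "sms-vs-email") -> str:
    """Hash the customer + test name so assignment is random-looking
    but consistent across every message in the journey."""
    digest = hashlib.sha256(f"{test_name}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "Branch A (email)" if bucket < 50 else "Branch B (SMS)"

for customer_id in ["cust_001", "cust_002", "cust_003"]:
    print(customer_id, "->", assign_branch(customer_id))
```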
A lot of marketers get stuck trying to find the perfect solution, to create the perfect email. But marketing is dynamic! And your customer base is always changing. That’s why you should run no more than three A/B tests on any individual variable.
For example, if you’re trying to find the right subject line and you’ve already tried three variations, stop and go with the one that performed best. By the fourth test (and beyond), you’re getting into the weeds, piling up data that may no longer be relevant.
Tip: You can preserve your historical metrics within your workflow by creating a new A/B test underneath the old one each time you do another test. Make sure to label the old one “do not send”! That way, you have historical data that can inform future iterations.
Putting your data to work with A/B testing can make a huge difference in your KPIs. I hope these tips will help you get started on the right foot. And if you want to dive deeper, check these out: