How to Build a Better A/B Test: Tips and Tricks

A/B testing is one of the best—and lowest cost—ways to improve your marketing. Comparing one message’s real-world performance against another gives you hard data about what your customers actually respond to. And if you leverage A/B testing to the fullest, you can really boost your KPIs.

I talk to our customers every day about A/B testing best practices—here are some of my favorite tips. 

What can you A/B test?

For the most part, marketers test subject lines, calls to action, and occasionally plain text versus HTML. There are other use cases (like send day, send time, personalized content, email layouts, etc.), but we’ll focus on these first three. Let’s dive in!

Subject lines

What you’re measuring: open rates

If you’re trying to move the needle on open rates, your subject line is prime territory for optimization. 

You can A/B test aspects of your subject line like:

  • Length. Generally, avoid going over 50 characters, but there’s a lot of room to experiment. Even very short ones can be effective—it totally depends on your customer base.
  • Wording. You need to make those 50 characters count, so test out what lands for your audience: things like emojis, numbers, punctuation, personalization, and tone.
  • Offers/promos. Are your customers more likely to open your email if they know it contains an offer? Many people assume yes, but it’s not always the case for every audience.

Tip: I get a lot of questions about emojis in subject lines, and there are no hard-and-fast rules. They can add a nice touch in the right context, like an envelope with a heart on Valentine’s Day. But, don’t overdo it. Lots of emojis can be distracting.

It’s also important to know your audience. In some situations (like B2B marketing, formal industries, or serious topics), emojis can strike the wrong note.

Calls to action

What you’re measuring: click-through rates (CTRs)

You can write an amazing subject line, but it won’t translate to better CTRs if your call to action (CTA) doesn’t land. And a successful CTA could up your conversion rates by 42%.

Here are some factors you might want to test:

  • Wording. Specific CTAs almost always outperform generic ones. You’ll probably get more mileage out of something like “Explore FAQs” than “Click here.” 
  • Button color. First of all, make sure people see your CTA button in the layout. Different colors can also get different reactions. Use color psychology to your advantage!
  • Placement. CTA location can make a big difference. Customer.io email layouts give you lots of customization options, so you can try out a few placements. 

Tip: When testing CTAs (and as a general rule), don’t include too many links in your email. If your customer can’t tell which link is the CTA, you won’t get accurate test results. I suggest no more than three (not counting footer links)—and only one of them should be a button. 

Plain text versus HTML

What you’re measuring: deliverability, inbox performance, CTRs

This kind of testing is straightforward: is plain text or HTML email better for your audience? You might assume HTML is always the way to go—who doesn’t want more design control and better tracking? But it can be worth testing to get a read on some key metrics.

  • Deliverability. Inbox providers all handle HTML email a little differently, and some things (like large images or funky code) can land your messages in the spam folder.
  • Inbox performance. HTML can look amazing on one device and a mess on another; the same thing goes for different email clients. There’s also the question of accessibility, since HTML can cause problems for screen readers.
  • CTRs. Marketers love to send HTML emails, and customers supposedly like to get them. But what people say they want and what they actually engage with can be very different.

Tip: Plain text versus HTML isn’t necessarily an all-or-nothing game. Various messages and audience segments might call for different approaches. For example, you might test to see if customers and non-customers respond differently or how a personalized welcome email performs in different formats. 

Test one variable at a time

Here’s the key to A/B testing: pick one thing.

That is to say, you can test as many variables as you want—but only one at a time.

Every A/B test should include one control and one variation. Here’s an example subject line test for that imaginary Valentine’s Day email I mentioned earlier: 

  • Control: Send your Valentine some love
  • Variation: Sam, send your Valentine some 💌 

To run the test, you’d send the exact same email, but 50% of your audience would get the control subject line, and 50% would get the variation. 

Let’s say the variation outperforms the control with a statistically significant increase in open rates. That tells you pretty definitively that a personalized subject line with an emoji boosted your open rates!
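
Customer.io crunches these numbers for you (more on that below), but if you like to see the math, here’s a minimal Python sketch of one standard way to check whether a lift like this is more than noise: a two-proportion z-test. All of the send and open counts below are made up for illustration.

```python
from statistics import NormalDist

# Hypothetical results from the 50/50 Valentine's Day subject line test above.
# (These send and open counts are made up for illustration.)
control_sends, control_opens = 5000, 1100      # "Send your Valentine some love"
variation_sends, variation_opens = 5000, 1240  # "Sam, send your Valentine some 💌"

control_rate = control_opens / control_sends
variation_rate = variation_opens / variation_sends

# Two-proportion z-test: is the lift in open rate bigger than random noise?
pooled = (control_opens + variation_opens) / (control_sends + variation_sends)
std_err = (pooled * (1 - pooled) * (1 / control_sends + 1 / variation_sends)) ** 0.5
z = (variation_rate - control_rate) / std_err
p_value = 1 - NormalDist().cdf(z)  # one-sided: did the variation beat the control?

print(f"Control open rate:   {control_rate:.1%}")
print(f"Variation open rate: {variation_rate:.1%}")
print(f"p-value:             {p_value:.4f}")  # below 0.05 is conventionally "significant"
```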

But if you changed the subject line and the CTA in your variation—how would you know which change had what effect? Say the CTR was higher in the variation. Maybe people responded better to that CTA, maybe they were more engaged because they liked the subject line—who knows?

Tip: Testing multiple variables in one kind of message is easy if you do your tests in sequence:

  1. Test one variable against your control 
  2. Pick a winner; the winner becomes your new control
  3. Test a new variable against the new control, and so on

Crunch the stats

For your test results to be meaningful, they have to be statistically significant. We represent this as the Chance to Beat Original (CTBO): the probability that the difference between the control and the variation reflects a real effect rather than random chance.

Sometimes your A/B test results are clear: a CTBO above 95% means your variation outperformed your control, and below 5% means the control performed best.
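
If you’re curious what sits behind a number like that, here’s a rough sketch of how a chance-to-beat-original can be estimated for a click-through test. It’s an illustrative Bayesian simulation with made-up counts, not necessarily the exact calculation Customer.io runs under the hood.

```python
import random

# Made-up click counts from a hypothetical 50/50 CTA test.
control_sends, control_clicks = 4000, 520
variation_sends, variation_clicks = 4000, 585

# Model each click-through rate with a Beta posterior and sample repeatedly:
# the fraction of samples where the variation wins approximates a
# "chance to beat original" style probability.
samples = 100_000
wins = 0
for _ in range(samples):
    control_ctr = random.betavariate(1 + control_clicks, 1 + control_sends - control_clicks)
    variation_ctr = random.betavariate(1 + variation_clicks, 1 + variation_sends - variation_clicks)
    if variation_ctr > control_ctr:
        wins += 1

ctbo = wins / samples
print(f"Chance to Beat Original: {ctbo:.1%}")  # above 95% or below 5% is a clear result
```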

But CTBO values that fall somewhere in the middle—that are not statistically significant—can be confusing. Here’s what’s usually happening:

  • Your variation didn’t vary enough. That is to say, it was so similar to your control that customers couldn’t distinguish between them. The fix? Retest with more distinct content.
  • The change you made didn’t matter to the audience. If they responded well to both messages, the change won’t affect how people interact with your email. You can test another variable or go with the message as-is.

Tip: Picking a winner is easier when there’s a clear KPI associated with each test, because you’ll know why you’re testing—and that can help you optimize for what matters most.

Think big with random cohort branches

Did you know that you can test entire journeys? For example, instead of A/B testing a single email in a workflow, you could explore how SMS performs compared to email throughout a customer journey. 

To test a journey, you’ll need to set up random cohort branches. If you were testing performance with SMS versus email, for example, you’d set up two random cohort branches: everyone in Branch A would get emails, and everyone in Branch B would get SMS.
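
In Customer.io you configure this visually in the workflow builder, but conceptually a random cohort branch is just a coin flip for each person who enters the journey. Here’s a tiny sketch of the idea, with made-up addresses and branch names:

```python
import random

# A random cohort branch is conceptually a coin flip for each person entering
# the workflow. (In Customer.io you configure this visually, not in code.)
people = ["ada@example.com", "grace@example.com", "alan@example.com", "joan@example.com"]

branches = {"A: email journey": [], "B: SMS journey": []}
for person in people:
    branch = random.choice(list(branches))  # 50/50 split between the two branches
    branches[branch].append(person)

for name, members in branches.items():
    print(name, "->", members)
```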

Hit the three-test sweet spot

A lot of marketers get stuck chasing the perfect solution, trying to create the perfect email. But marketing is dynamic! And your customer base is always changing. That’s why you should run no more than three A/B tests on any individual variable.

For example, if you’re trying to find the right subject line and you’ve already tried three variations, stop and go with the one that performed best. Once you do that fourth test (and beyond), you’re getting into the weeds, piling up data that may not even be relevant anymore.

Tip: You can preserve your historical metrics within your workflow by creating a new A/B test underneath the old one each time you do another test. Make sure to label the old one “do not send”! That way, you have historical data that can inform future iterations. 

Ready, set, test!

Putting your data to work with A/B testing can make a huge difference in your KPIs. I hope these tips will help you get started on the right foot. And if you want to dive deeper, check these out: