A/B testing campaign messages
A/B testing allows you to test various aspects of both email and push messages in a campaign, and measure whether the change you’ve made performs better than the original. For email, this means you can test subject lines against one another, compare call-to-action copy, or try out a new design. For push notifications, this means you can test content, imagery, or even deep linking.
You can test multiple messages in every triggered campaign, though we recommend making just one change in your variation. This gives you an accurate picture of how that change measures up against the original, without other variables muddying the results.
The process is the same for email, push, SMS, and Urban Airship push messages. First things first, head to the campaign where you want to run your test, and create the type of message you want to test—email or push notification.
Then, on the message you’d like to add a test to, you’ll see a Turn into A/B Test button.
Want to run a hold-out test instead?
Clicking this creates an A/B test workflow item. Here’s an example with SMS:
By default, 100% of the traffic goes to the original, and 0% to the variation.
Make your changes to the variation. You can change almost anything about the message. For email, this could be your from address, subject, body, or even the sending mode (“Queue Draft” vs. “Send Automatically”). For push, feel free to test your push title, body content, or get creative with custom payloads. Any delays and time windows are shared across both versions, though.
If you have a low volume of messages being sent or are just starting with A/B testing, try testing a small subject line or content change, and measuring opens.
Once you have an A/B test running in an active campaign, an A/B Test tab appears in your campaign overview. That’s where you can go to see the results of the test and pick a winner.
When you have statistically significant results—or if you want to end the test before that—you can pick one of the options as the winner by clicking the “Select winner” button. We’ll remove the losing option and that particular A/B test from the screen.
Want more information about what statistical significance means or how we’re calculating different things? Read about it in more detail here.
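If you’re curious how a significance check like this can work in general, a common approach for comparing two open rates is a two-proportion z-test. This is a hedged illustration, not necessarily the exact calculation we use; the function name and sample numbers below are made up for the example:

```python
from statistics import NormalDist

def two_proportion_z_test(opens_a, sends_a, opens_b, sends_b):
    """Compare the open rates of two variants with a two-proportion z-test."""
    rate_a = opens_a / sends_a
    rate_b = opens_b / sends_b
    # Pooled open rate under the null hypothesis that both variants perform equally.
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    std_err = (pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b)) ** 0.5
    z = (rate_a - rate_b) / std_err
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: variant B's subject line lifts opens from 18% to 24%.
z, p = two_proportion_z_test(opens_a=180, sends_a=1000, opens_b=240, sends_b=1000)
```

A p-value below 0.05 is a common (though arbitrary) threshold for declaring a winner; with low send volumes, expect to wait longer before the test reaches significance.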
What if you want to perform an A/B test in which some of your audience gets a message and the rest of your audience doesn’t? You use a Random Cohort Branch!
A random cohort branch lets you perform an arbitrary split in your workflow. In this case, you’ll split your audience: some people will receive a message and the rest won’t.
- Drag a Random Cohort Branch into your workflow.
- Set the percentage of the audience that you want to send down each path. For this test, we’re splitting the audience cohort 50-50; half the audience gets a message and half doesn’t.
- Add a message to one of the paths.
- (Optional) Add people to a segment based on whether or not they received the message if you want to track who received the message and who didn’t.
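Under the hood, a split like this only needs each person to be assigned to a bucket consistently. As a rough sketch of the idea—not how the platform actually implements it—here’s a deterministic hash-based assignment, where the function name and salt are illustrative:

```python
import hashlib

def assign_cohort(customer_id: str, message_percent: int = 50,
                  salt: str = "holdout-test-1") -> str:
    """Deterministically bucket a customer into 'message' or 'holdout'.

    Hashing the id with a per-test salt keeps each person's assignment
    stable across runs while producing a roughly uniform split.
    """
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # A number from 0 to 99.
    return "message" if bucket < message_percent else "holdout"

cohort = assign_cohort("cust_42")  # Same id, same salt -> same cohort every time.
```

Because assignment is deterministic, a customer who re-enters the workflow lands in the same cohort, which keeps the optional tracking segments in step 4 accurate.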