Process: Journeys at Customer.io

Some UX thoughts on building a big feature

At Customer.io, I work on an app which helps businesses make their communication (email and otherwise) more human. In short, they send us data on what their users are doing, and we enable them to send messaging based on that data. Their communication is actually relevant to their users, because it’s based on their activity.

It sounds counterintuitive: “personalised messaging at scale.” But for me as a UX designer, it’s a cool challenge, encompassing what our customers’ end users are doing in aggregate and individually, and representing that data in actionable ways.

In the Customer.io app, businesses create messaging campaigns based on trigger segments and events, and their users move through and exit those campaigns based on conditions that can be super-simple or super-complicated. A good example is an onboarding campaign: “when a user signs up [trigger], send them a series of email and SMS messages with three-day delays between them [campaign], and stop messaging them if they set up their profile.”
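
As an aside, here’s roughly how you could think of that onboarding campaign as data. This is a purely illustrative Python sketch included for clarity (the names are made up), not how Customer.io actually represents campaigns:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Campaign:
        trigger: str                   # the event that pulls a person into the campaign
        exit_condition: str            # what lets them leave early
        steps: List[Tuple[str, str]]   # an ordered mix of messages and delays

    # The onboarding example above, expressed as data.
    onboarding = Campaign(
        trigger="user signed up",
        exit_condition="profile set up",
        steps=[
            ("email", "welcome email"),
            ("delay", "3 days"),
            ("sms", "getting-started SMS"),
            ("delay", "3 days"),
            ("email", "tips & tricks email"),
        ],
    )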

The problem we wanted to solve

When the above is working well, it’s great! But how do we support our customers when things don’t go as expected— when their users get the wrong messages, or no messages at all?

In good UI/UX design, we always want our users to know where they are, how they got there, and to have confidence in where they’re going. This is similar, but with a twist: we want our customers to be able to see their users’ campaign journey, and understand:

  • which messaging they’ve received as part of the campaign;
  • where they are in the campaign right now; and
  • which messages they’re due to receive.

Campaigns can often be based on a very complex set of entry conditions. In each case, our customers want to be confident that their users will

  • get (or skip) the right message…
  • at the right time…
  • and if they haven’t gotten a message, or they’ve left the campaign early, understand why: Did that one person leave a campaign because they met a conversion goal? Did they leave because a trigger condition was set up incorrectly? Are they being unintentionally filtered out?

Until recently, that last part especially was difficult for folks to figure out. That’s why we built our Journeys feature, and while I wrote a quick announcement on our blog, I wanted to write something a bit more in-depth about the design and user experience challenges it presented.

The process

We usually start big features like this with a Pitch: someone at the company advocates for a piece of work they want to do— a problem they’d like to solve. In my Journeys pitch, I included the following speculative design, but little else in the way of visuals:

Journeys v0.00001; what I envisioned it looking like before really digging in: “completed stuff” “current stuff” and “upcoming stuff”

This embryonic view gave an idea of the problem I wanted to solve, but definitely wasn’t hugely thought-through yet.

After we decided that we wanted to work on Journeys in our next cycle of product work, the first thing I needed to know was: what data do we have available about a given campaign? How much do we know about the people who move through these campaigns? And if there are changes to those campaigns that then impact the people in them, how do we surface that?

Scenarios

The best way to understand what would be helpful to show our customers was to imagine specific campaign scenarios, each with some changes, and the resulting behaviour. Our engineer Ian put together a really helpful list of these.

Delay Scenario #1
Campaign: 30-minute delay + email
Change: user 5 minutes into delay, delay shortened to 10 minutes
Result: campaign matched at date/time, 10-minute delay — exited after 10 minutes, email

We set up a development environment where we could create some campaigns, make changes to them, and see what the output was. So I started a test campaign with a trigger and a delay of 30 minutes, followed by an email. The output initially looked like this:

Wow, that’s not really very clear at all

Then, when I shortened the delay, this happened.

That’s even less clear

What the above means is that a user was scheduled to wait thirty minutes. After two minutes, the delay was shortened to ten minutes (a few seconds ago). They’re still waiting and will move out of that delay in eight minutes.
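
To make that arithmetic concrete, here’s a minimal sketch of the delay logic in Python. It’s purely illustrative (the function and variable names are mine), not how our system is actually implemented:

    from datetime import datetime, timedelta

    def remaining_wait(matched_at: datetime, new_delay: timedelta, now: datetime) -> timedelta:
        """Time left in the delay, measured from when the person matched the trigger."""
        exit_at = matched_at + new_delay         # the new delay replaces the old one entirely
        return max(exit_at - now, timedelta(0))  # never negative: if it has already passed, they move on

    # The scenario above: matched two minutes ago, delay shortened from 30 to 10 minutes.
    now = datetime(2019, 6, 3, 9, 2)
    matched_at = now - timedelta(minutes=2)
    print(remaining_wait(matched_at, timedelta(minutes=10), now))  # 0:08:00, i.e. eight minutes left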

Then, once the email was drafted:

This information was initially really hard to parse, but doing this for a lot of campaigns helped me understand what our customers needed to know, and what data we had available to represent it.

For the above (and every other scenario) I created a quick “What we should show” that looked like this:

Ivana’s “What we should show”

This is all good until it gets complicated: users stop matching campaign conditions, but then re-match them within a grace period. This happens, and our customers want to know about it. So check out this output:

This was a campaign with an email set to send in a time window of Mon/Wed/Fri 9–11AM. What it means is:

Cool, right?
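
If you’re curious about the kind of logic a time window like that implies, here’s a hypothetical Python sketch (again, purely illustrative, not our implementation) of finding the next eligible send time for a Mon/Wed/Fri 9–11 AM window:

    from datetime import datetime, timedelta

    SEND_DAYS = {0, 2, 4}             # Monday, Wednesday, Friday (Python's weekday numbering)
    WINDOW_START, WINDOW_END = 9, 11  # 9 AM to 11 AM

    def next_send_time(now: datetime) -> datetime:
        """Earliest moment inside the window, at or after `now`."""
        candidate = now
        for _ in range(8):  # at most a week of days to check
            start = candidate.replace(hour=WINDOW_START, minute=0, second=0, microsecond=0)
            end = candidate.replace(hour=WINDOW_END, minute=0, second=0, microsecond=0)
            if candidate.weekday() in SEND_DAYS and candidate < end:
                return max(candidate, start)
            # Otherwise, try again from midnight on the next day.
            candidate = (candidate + timedelta(days=1)).replace(hour=0, minute=0, second=0, microsecond=0)
        raise RuntimeError("no send window found")

    # Someone reaching this step on a Tuesday afternoon waits until Wednesday at 9 AM.
    print(next_send_time(datetime(2019, 6, 4, 15, 30)))  # 2019-06-05 09:00:00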

This is what those scenarios looked like in each case:

LOTS OF SCENARIOS!

As a UX exercise, this was amazing. There were complex campaign scenarios with changes and user movements that we wanted to make:

  • readable
  • actionable
  • scalable (this one is particularly challenging when it comes to language, as you can imagine)

Sketching

Once I had this understanding, I felt safe enough to start doing some sketching with the goals of the project in mind. So we imagined a particular end-user moving through a Welcome Campaign:

We wanted to show the state of the campaign (are they still in it, has it been completed, or something else), entry conditions, when they started, when they’re due to end (and what will happen before they do), and where they are now. You can see all of that in the sketch above. Below, you can see an initial example of a campaign that an end-user left early.

A “left early” situation.
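
One way to think about the data behind views like these is a journey as an ordered list of steps, each with a status. This is a hypothetical sketch with names of my own choosing, not our actual data model:

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum
    from typing import List, Optional

    class StepStatus(Enum):
        COMPLETED = "completed"   # e.g. an email that was already sent
        SKIPPED = "skipped"       # conditions weren't met, so the message was skipped
        CURRENT = "current"       # where the person is right now, e.g. mid-delay
        UPCOMING = "upcoming"     # what they're due to receive next

    @dataclass
    class JourneyStep:
        label: str                # "Welcome email", "3-day delay", ...
        status: StepStatus
        detail: str = ""          # the human-readable explanation of what happened and why

    @dataclass
    class Journey:
        campaign: str
        entered_at: datetime
        exited_at: Optional[datetime] = None   # None while they're still in the campaign
        steps: List[JourneyStep] = field(default_factory=list)

Everything else in the feature (the icons, the phrasing) is really a way of rendering something like this so that a customer can read it at a glance.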

We also thought about iconography. We wanted to echo the iconography from the rest of the app, but also introduce new items where appropriate (for skipped messages, current states, future states, etc):

There were some constraints already in our interface: we knew the space in which we’d have to present this, what data we had available, and the other UI conventions of our app. We could flex design and UX muscles to some degree, but clarity was paramount.

Communicating progress

Most of the work took place in Basecamp docs and files, and in a Slack channel confined to the team working specifically on the feature. There, we could work out fine details, iterate on things, ideate, and throw out ideas.

Then, when we had made our decisions and felt it was time to communicate progress to the rest of the team, we wrote a Progress Report that gave a quick run-down of what we’d done and where we’d arrived. Nothing more. If folks wanted to know how we’d arrived there, we provided links and explanations, but didn’t clutter the post with them too much. We wanted people to know where the feature was and, if we needed feedback, to be able to offer it directly and purposefully.

Higher-fidelity + prototyping

After sketching, we moved on to some higher-fidelity work:

One of our first attempts at something higher-fidelity, translating sketch to real Customer.io assets and seeing what it would look like.

We went through many more detailed versions here, as prototyping continued and ramped up in the background.

Here, we started looking at things like: when a message was skipped, how do we best show why? How do we adjust the phrasing around time windows and delays, which differ subtly in functionality?

This is also where we introduced the sidebar with campaign information, such as the Trigger, any filters, and a campaign goal.

We also worried about details like phrasing (“Customer”? “User”?) and iconography once more:

Variations on The Row

After this, though, we mostly moved to an interactive prototype of the feature with real data, where we could actually test the scenarios and see whether we were getting back the information and the interface we expected.

What we landed on

I’m really proud of the Journeys feature we shipped. Yes, there are aspects of it that we want to improve and gather feedback on, of course. But now, our customers can look at any given person in their system, for any given campaign they’re in, and see their progress (or lack thereof, maybe) through it.

Remember, this was the Very First Pitch:

The very first drawing of what Journeys might be. Contrast it with what we shipped, below.

And this is Journeys V1.0:

You can see that this person waited, then met the conditions for a time window; an email was drafted as part of an A/B test; they skipped another email because they didn’t meet certain conditions; and now they’re waiting. In the future, they’ll get some emails and then leave the campaign!

When changes are made to a delay, for example, we can show that, and our customers have clarity over why a given person moved through a campaign faster than expected:

Connecting the dots

Throughout the rest of the application, too, the experience had to connect and make sense. From where should our customers access Journeys, and where would they be useful?

We have a Journeys index page on each person, so that customers can see which campaigns this person is a part of, and their state in each:

We can click straight through to a user’s journey from the Campaign page, too:

Clicking any of the emails will take you to that specific user’s Journey.

This process of connection is obviously just as critical to the user experience as the rest of Journeys; we want journeys to be not only useful, but accessible where and when appropriate— so that when our customers need answers to their questions, they can dig into their campaigns on a granular level and understand how they’re working and why.

Where we’re going

As we went, we kept a list of things we didn’t do for Journeys: things that were out of scope, or pieces of feature work where we thought, “hey, that’d be fun!” but kept them out of the MVP. We’re also gathering user feedback on the feature as it arises— if phrasing isn’t clear, or if we could be better about surfacing different explanations, and so on.

For me as a designer, this was a great chance to dig into a feature that was complex in many ways:

  • I had to deal with LOTS OF DATA: we had to make it human and readable, but also scalable. An explanation for one changed delay had to scale for almost every other changed delay.
  • Understanding the outputs of our system was a big challenge for me, not just in terms of the language itself, but in terms of how it really worked, and the potentially huge impact on even the smallest piece of phrasing.
  • Our customers are relying on us to send thousands of messages to people, messages that mean something. Our interface needs to enable them to do that safely, without errors, in simple situations and super-complex ones alike.

Documenting the process in this particular way was also something we hadn’t done before at C.io, and I enjoyed it. The team got to see progress on a cool feature without feeling overwhelmed by it (I hope!), and our small feature team drove this from start to finish. I’m hugely proud of it.


Thanks to everyone who gave feedback and invaluable insight along the way, but especially to the engineers who built this with me and exercised endless patience: Alisdair, Ian, and Stephen.