8 Steps to Running a Successful A/B Test

INBOUND 2018 HubSpot Overit Marketing

This morning we heard from HubSpot’s Director of Marketing, Rachel Leister.

A pint-sized powerhouse, Rachel is full of incredible insights and information. In this presentation, she’s talking about how to grow and manage your growth marketing team.

In her talk, she shares the keys to running a successful marketing experiment.

As marketers, we know that part of growing is testing and optimizing our content, landing pages, copy, emails, and pretty much anything else our audience interacts with so we can improve conversions at each point of the marketing funnel.

Rachel has a concept she calls the “MVT” – minimum viable testing. Similar to the familiar “MVP” (minimum viable product), the idea is to find a way of testing something that requires the least in terms of resources, effort, and execution.

Rachel shared the example of changing a CTA within a product (pretty high requirements in terms of resources, effort, and execution) vs. on a landing page (much more minimal requirements from you or your team).

Then Rachel outlined the eight steps her team takes when running split-tests.

1. Establish an Objective

What is your goal in running the experiment? This should be written at a high level. For example, your objective might be to determine if an email with a video converts better than an email with a photo.

2. Create Your Hypothesis

What will happen if you run the experiment? At this point, we’re still not getting into specific metrics, but just creating an idea of what we think will happen as a result of this experiment.

3. Structure the Experiment

How are you setting up the experiment? Outline each step of the process – what you’re testing, how you’re testing, etc.

You’ll know you’ve done this right if you could hand a doc describing the experiment to a new person and they’d completely understand how it was set up, tested, and measured, and how the results were determined.

Important to note here is that you want to change only ONE variable. If you change the copy, the landing page, the subject line, etc., you’re going to get inaccurate results, because the change in performance could be attributed to any of those things, or a combination of those changes, not the one variable you’re testing.

4. Make Your Prediction

What will the results of the experiment be? This is important, because if your prediction shows that the results won’t move the needle much, it might not be worth doing the experiment in the first place. As marketers, we’re all busy and have a lot we could be doing. Running an experiment that won’t produce meaningful results won’t help your long-term business goals.
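One way to sanity-check whether an experiment is worth running is a back-of-the-envelope sample-size estimate: if detecting the lift you predict would take far more email sends than you have, skip the test. This sketch isn’t from Rachel’s talk – the baseline rate and hoped-for lift below are made-up numbers for illustration:

```python
import math

def sample_size_per_variant(baseline, expected):
    """Approximate audience needed per variant to detect a lift from
    `baseline` to `expected` conversion rate, using the standard
    two-proportion formula at 95% confidence and 80% power."""
    z_alpha = 1.96    # two-sided 95% confidence
    z_beta = 0.8416   # 80% power
    variance = baseline * (1 - baseline) + expected * (1 - expected)
    delta = expected - baseline
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Hypothetical numbers: 5% baseline click rate, hoping the video
# version lifts it to 6.5%.
n = sample_size_per_variant(0.05, 0.065)
print(n)  # roughly 3,800 sends per variant
```

If your list only has a few hundred subscribers, a lift that small is undetectable and the experiment isn’t worth the effort – which is exactly the point of making a prediction first.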

5. Set Up KPIs

What are the key performance indicators for the test? This is also the step in the process where you actually run the test and report on those key metrics.

6. Report on Results

What are the results of the experiment? How do they compare to your hypothesis? Don’t worry if your hypothesis was wrong. Part of the marketing game is that we’re not always right! It’s OK to be wrong and it’s OK to fail. As Rachel reminded us: Failure = Data, and Data = Learning.
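When reporting results, it also helps to check whether the difference between variants is statistically meaningful rather than noise. Here’s a minimal sketch using a standard two-proportion z-test – the send and click counts are hypothetical, not figures from the talk:

```python
import math

def two_proportion_z_test(clicks_a, sends_a, clicks_b, sends_b):
    """Return (z, p_value) for the difference in click rates between
    variant A and variant B (two-sided test, normal approximation)."""
    rate_a, rate_b = clicks_a / sends_a, clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_a - rate_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: the video email got 70 clicks out of 1,000 sends,
# the photo email got 42 out of 1,000.
z, p = two_proportion_z_test(70, 1000, 42, 1000)
print(round(z, 2), round(p, 4))
```

If the p-value comes in under your chosen threshold (0.05 is common), you can treat the winning variant as a real result rather than chance; if not, that non-result is still data.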

7. Think About the Learnings

What do you know that you didn’t before? Regardless of whether your hypothesis was right or wrong, there are things you can glean from your experiment.

For example, maybe the video got more clicks, but people didn’t watch the video all the way through. Next time, you might want to test a shorter video. Or you might have gotten emails from some of your subscribers with their thoughts about the video that you can take into consideration for your next test.

8. Determine Your Next Steps

You’ve run this test, you’ve reported on the results, and you’ve thought about the different things you’ve learned from the experiment.

Now what are you going to do with that information?

Rachel provides some great ideas, such as:

  • Will the same thing work on another email flow?
  • Does this change translate to more revenue at the bottom of the funnel? (If it increases lead conversions, but doesn’t turn into more customers, was it really that effective?)
  • Could we apply this same logic somewhere else?

There you have it – 8 steps to running a successful A/B test!

Rachel would want me to remind you one more time, before we wrap up, that failing is OK. As Elon Musk says, “Failure is an option here. If things are not failing, you are not innovating enough.”