Four Things You Can Do With an Inconclusive A/B Test

How to handle blah test results.

It is true that we can learn important things from an “inconclusive” AB test. But that doesn’t mean we like inconclusive tests. Inconclusive tests occur when you put two or three good options out for an AB test, drive traffic to these options and — meh — none of the choices is preferred by your visitors. An inconclusive result means one of two things:

  1. Our visitors like the page the way it is (we call this page the “control”), and reject our changed pages.
  2. Our visitors don’t seem to care whether they get the control or the changed pages.

Basically, it means we tried to make things better for our visitors, and they found us wanting. Back to the drawing board.

Teenagers have a word for this.

It is a sonic mix of indecision, ambivalence, condescension and that sound your finger makes when you pet a frog. It is less committal than a shrug, less positive than a “Yes,” less negative than a “No” and is designed to prevent any decision whatsoever from being reached.

It comes out something like, “Meh” It is a word so flaccid that it doesn’t even deserve any punctuation. A period would clearly be too conclusive.

If you’ve done any testing at all, you know that your traffic can give you a collective “Meh” as well. We scientists call this an inconclusive test.
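
If you like to see that collective “Meh” in numbers, here is a minimal sketch of the kind of two-proportion test most testing tools run under the hood. The visitor counts, conversion counts and the 0.05 cutoff are invented for illustration; they do not come from any test described in this article.

```python
# A minimal sketch of a two-proportion z-test on made-up A/B results.
# "Inconclusive" simply means the p-value never drops below the
# significance threshold you committed to before the test started.
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for B vs. A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Control: 200 conversions from 10,000 visitors. Treatment: 212 from 10,000.
z, p = two_proportion_z(200, 10_000, 212, 10_000)
print(f"z = {z:.2f}, p = {p:.2f}")  # p is roughly 0.55, a collective "Meh"
```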

Whether you’re testing ad copy, landing pages, offers or keywords, there is nothing that will deflate a conversion testing plan more than a series of inconclusive tests. This is especially true when your optimization program is young. Here are some things to consider in the face of an inconclusive test.

1. Add Something Really Different To The Mix

Subtlety is not the split tester’s friend. Your audience may not care if your headline is in 16 point or 18 point font. If you’re getting frequent inconclusive tests, one of two things is going on:

  1. You have a great “control” that is hard to beat, or
  2. You’re not stretching enough.

Craft another treatment, something unexpected, and throw it into the mix. Consider a “well-crafted absurdity” a la Groupon. Make the call to action button really big. Offer something you think your audience wouldn’t want.

2. Segment Your Test

We recently spent several weeks of preparation, a full day of shooting, and thousands of dollars on talent and equipment to capture some tightly controlled footage for video tests on an apparel site. This is the sort of test that is “too big to be inconclusive.” After all, video is currently a very good bet for converting more search traffic.

Yet, our initial results showed that the pages with video weren’t converting at a significantly higher rate than the pages without video. Things changed when we looked at individual segments, however.

New visitors liked long videos while returning visitors liked shorter ones. Subscribers converted at much higher rates when shown a video recipe with close-ups on the products. Visitors who entered on product pages converted better with one kind of video, while those coming in through the home page preferred another.

It became clear that, when lumped together, one segment’s behavior was cancelling out gains by other segments.

How can you dice up your traffic? How do different segments behave on your site?

Your analytics package can help you explore the different segments of your traffic. If you have buyer personas, target them with your ads and create a test just for them. Here are some ways to segment; a short numeric sketch follows the list:

  • New vs. returning visitors
  • Buyers vs. prospects
  • Which page did they land on?
  • Which product line did they visit?
  • Mobile vs. desktop
  • Mac vs. Windows
  • Members vs. non-members
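
To picture how one segment can cancel out another, here is a minimal sketch of the same comparison read pooled and then per segment. The segment names, variants and counts are hypothetical, and the `rate` helper is just a convenience for the example, not part of any analytics package.

```python
# A minimal sketch of a segmented read of made-up test results.
# Pooled together, "control" and "video" convert identically; split by
# segment, video wins with new visitors and loses with returning ones.
results = [
    # (segment, variant, conversions, visitors)
    ("new",       "control", 150, 6_000),
    ("new",       "video",   195, 6_000),
    ("returning", "control", 260, 4_000),
    ("returning", "video",   215, 4_000),
]

def rate(rows, segment=None, variant=None):
    """Conversion rate, optionally filtered by segment and/or variant."""
    conv = sum(c for s, v, c, n in rows if segment in (None, s) and variant in (None, v))
    vis = sum(n for s, v, c, n in rows if segment in (None, s) and variant in (None, v))
    return conv / vis

for v in ("control", "video"):
    print(f"pooled     {v}: {rate(results, variant=v):.2%}")
for s in ("new", "returning"):
    for v in ("control", "video"):
        print(f"{s:10s} {v}: {rate(results, segment=s, variant=v):.2%}")
```

When the pooled numbers come out flat but the segments diverge like this, the test isn’t telling you “nothing works.” It’s telling you that different audiences want different pages.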


3. Measure Beyond the Click

Here’s a news flash: we often see a drop in conversion rates for a treatment that has higher engagement. This may be counter-intuitive. If people are spending more time on our site and clicking more — two definitions of “engagement” — then shouldn’t they find more reasons to act?

Apparently not. Higher engagement may mean that they are delaying. Higher engagement may mean that they aren’t finding what they are looking for. Higher engagement may mean that they are lost. So, if you’re running your tests to increase engagement, you may be hurting your conversion rate. In this case, “Meh” may be a good thing.

In an email test we conducted for a major energy company, we wanted to know if a change in the subject line would impact sales of a smart home thermostat. Everything else about the emails and the landing pages was identical.

The two best-performing emails had very different subject lines, but identical open rates and click-through rates. However, sales for one of the email treatments were significantly higher. The winning subject line had delivered the same number of clicks, but had primed the visitors in some way, making them more likely to buy.

If you are measuring the success of your tests based on clicks, you may be missing the true results. Yes, it is often more difficult to measure through to purchase, subscription or registration. However, it really does tell you which version of a test is delivering to the bottom line. Clicks are only predictive.
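
As a minimal sketch of what measuring beyond the click can look like, the snippet below compares two hypothetical email treatments. The subject line labels, send counts and purchase counts are invented; the only point is the shape of the data: identical click-through, very different sales.

```python
# A minimal sketch of carrying measurement past the click on made-up data.
# Both treatments look identical on clicks; only the purchase numbers
# separate them.
emails = {
    "subject_a": {"sent": 50_000, "clicks": 1_500, "purchases": 45},
    "subject_b": {"sent": 50_000, "clicks": 1_500, "purchases": 90},
}

for name, m in emails.items():
    ctr = m["clicks"] / m["sent"]              # click-through rate
    per_click = m["purchases"] / m["clicks"]   # purchases per click
    per_send = m["purchases"] / m["sent"]      # purchases per email sent
    print(f"{name}: CTR {ctr:.2%}, buy/click {per_click:.2%}, buy/send {per_send:.3%}")
```

Judged on click-through rate alone, this test looks like another “Meh.” Judged on purchases, one subject line is twice as effective.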

4. Print A T-shirt That Says “My Control Is Unbeatable”

Ultimately, you may just have to live with your inconclusive tests. Every test tells you something about your audience. If your audience didn’t care how big the product image was, you’ve learned that they may care more about changes in copy. If they don’t know the difference between 50% off and $15.00 off, test offers that aren’t price-oriented.

Make sure that the organization knows you’ve learned something, and celebrate the fact that you have an unbeatable control. Don’t let “Meh” slow your momentum. Keep plugging away until that unexpected test gives you a big win.


Brian Massey is the Founder and Conversion Scientist at Conversion Sciences. He is the author of Your Customer Creation Equation. His rare combination of interests, experience and neuroses was developed over almost 20 years as a computer programmer, entrepreneur, corporate marketer, international speaker and writer.

This was adapted from an article that appeared on Search Engine Land.

Categories: AB Testing
  • Agree with the points you made above Brian. Quite insightful.


  • Hi Kristi,

    I can respond with a lengthy comment, arguing the futility and wastefulness of running A/A or A/A/B etc. tests, but I’ve already done an article on that back in 2014, so I’ll just share that: http://blog.analytics-toolkit.com/2014/aa-aab-aabb-tests-cro/ If you’d like to check it out and, hopefully, respond to it, I think it will be beneficial for the readers of this blog.

    Kind Regards
    Georgi

    • Thanks, Georgiev. When you have a test setup that spans multiple domains, servers and security features, an A/A test is critical. We have been saved by A/A tests. In response to your excellent article I ask, “Which is more wasteful: Running a series of A/A tests or running a series of A/B tests that result in the wrong decisions?” The latter can impact sales for months or years.

      • Sounds like an unusually complicated test setup there, Brian. What kind of problems did those many A/A tests reveal? Randomization issues? User experience uniformity issues? Statistical engine issues? I’m just thinking there has to be a better way to detect & debug most of these, but the statistical engine ones…

        • We never really found the smoking gun, but we suspected cookie persistence issues, iframe security delays, page load times, etc. We redesigned the approach and verified the setup with an A/A test.

  • nrennie

    Thanks for removing my comments @bmassey.

    Surely constructive criticism is part of making things better, and excluding a market leader from your “Top tools” was exactly this?

    So my valid point was why not include Maxymiser? It’s a huge gap in your post.

    • Cut the sarcasm, @nrennie. It’s never appropriate. You commented on the wrong post here. I assume you meant to post on “The Most Recommended AB Testing Tools by Leading Experts”. I’ll reply to your comment there, but we didn’t list Maxymiser because nobody recommended it. Our team used it for one client and found it lacking on several key features.