Let’s dive into a brand new online marketing concept: Contextualization. Thanks to AI and ML, we have come a long way from creating customer segments. To improve conversions, we also need to understand context. Read on.

I predicted years ago that my business would be using machine learning for much of what we do manually today.

When I talk to people like Olcan Sercinoglu, I know that day is coming. Olcan is the CEO of a company called Scaled Inference. He studied and worked under Peter Norvig at Google – one of the people who literally wrote the book on machine learning – and has spent the last 25 years as a software engineer. Scaled Inference focuses on applying machine learning to online user interactions, personalizing experiences in ways we could never do by ourselves.

If we can understand how machine learning is different, we can understand how our digital marketing will be changed in the near future.

And so my interest was, “OK, this is great, but how do we build a platform that is useful to others?”

Olcan Sercinoglu | Why Context Matters

Subscribe to the Podcast

iTunes | Spotify | Stitcher | Google Podcasts | RSS

From Segmentation to Contextualization: The New Way to Look at Marketing Key Takeaways

  1. Moore’s Law. Back in 1965, Gordon Moore predicted that we’d be able to fit twice as many transistors on a microchip every year. We are experiencing a golden age of tools – the tools are getting better, less expensive, and easier to use.
  2. The future of AI marketing. Is it all about personalization? Are the metrics you’re optimizing for clear? And if not, can AI even work for you? How do we take all this data and make it matter?
  3. Contextualization. We take the idea of personalization and introduce a new term – contextualization. Everything you do as a marketer should flow from optimization. By understanding the metric first, you can then ideate and create based on the context that emerges from the data.

How do we use AI to make us better marketers?

AI Optimization-Why context matters with Olcan Sercinoglu


But in terms of what companies actually want out of all this, there hasn’t been much progress. I think a lot of progress is going to happen as machine learning shifts toward metrics and these easier modes of integration.

Moore’s Law: As Valid Today as it was a Few Decades Ago

In 1965, a man named Gordon Moore made a bold prediction, a prediction that has been expected to fail almost every year since. It is a prediction that helps to explain the dizzying speed with which our lives are being upended by new tech.

What Moore said in 1965 is that we’d be able to fit two times more transistors on a microchip every year, year after year. What this meant for the semiconductor industry is that microchips would get twice as fast and cost half as much to produce every single year.

This, they thought, was crazy talk.

A Grain of Rice and a Chessboard

Take a typical chess board. On the first square place a grain of rice. On the next square put two grains of rice. On the next square, four. And double the number of grains of rice on each subsequent square.

By the time you reach the final square, number 64, you would need 210 billion tons of rice – more than the entire surface of the earth, oceans included, could grow.

That’s the power of compounding.
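
The arithmetic behind that claim is easy to check. Here is a quick sketch; the per-grain weight of roughly 11.4 mg is an assumption chosen for illustration:

```python
# Compounding on a chessboard: one grain on square 1, doubling each square.
total_grains = sum(2**square for square in range(64))  # squares 0..63
assert total_grains == 2**64 - 1  # same as the closed form

# Assuming roughly 11.4 mg per grain of rice (an illustrative figure),
# convert milligrams -> grams -> kilograms -> metric tons.
GRAIN_MG = 11.4
total_tons = total_grains * GRAIN_MG / 1_000 / 1_000 / 1_000
print(f"{total_tons / 1e9:.0f} billion tons")  # about 210 billion tons
```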

Every few years, the skeptics declared that we had reached the end of our ability to shrink these tiny transistors any more. “It’s just not physically possible,” they said.

And every time, Moore’s prediction was proven more or less true.

Even today, as the wires that run across microchips approach the width of an atom, engineers find ways to make things half the size.

Do not miss: Can AI Marketing Tools Increase Your Website’s Conversion Rates?

Why should you care? As microchips shrink and drop in cost, so do the things we build with them. For example, the camera found in any laptop has HD resolution and costs the manufacturer a few dollars. The cost of servers and storage space has plummeted as well. Hence, most of our computing and storage is done in the proverbial cloud.

All of this has created a golden age of technology — for consumers, for businesses, and especially for marketers. Entrepreneurs are using the cloud and cheap computing power to make digital marketing cheaper, easier, and more predictable.

It is now more expensive to ignore the amazing data we can collect than it is to buckle down and put it to use.

While we’re sitting around wondering what to do with all of this data, entrepreneurs and engineers are using it to teach machines to learn.

The Era of Neural Networks: Is the Future of AI marketing all about Personalization?

Neural networks are computer programs that work like human neurons. Like the human brain, they are designed to learn. Neural nets have been around for decades, but only in recent years have we had enough data to teach them anything useful.

Machine learning is lumped together with Artificial Intelligence, or AI, but it’s really much simpler than building an intelligent machine. If you have enough data, it’s relatively easy to teach a machine how to learn and to get insights from it.

In fact, machine learning is being used all around you and you probably don’t even know it.

In this episode, I am going to change the fundamental question you ask as a marketer. You will no longer ask, “Will this creative work for my audience?” You will ask, “Which people in my audience will this creative work for?”

And we’ll ask some more tactical questions.

  • How do we pull meaningful things out of our data in a reasonable amount of time?
  • How do we understand the information that the machine pulls for us?
  • Are you optimizing for the right things? And if not, can machine learning even work for you?
  • How do we take all this data and make it matter?
  • How do we as marketers, become better at using the tools and resources available to us in the age of Moore’s law?

I start the conversation, asking Olcan, “Is the future of AI marketing all about personalization?”

From Segmentation to Contextualization: Focus on the Context that Your Visitors Arrive In

My favorite takeaway from my conversation with Olcan Sercinoglu is that context matters.

There is one big context that you don’t need machine learning to address: It is the context of your mobile visitors.

You may say that your website is responsive, and that you’ve already addressed the smartphone context. But, you haven’t.

Do you want proof? Check your analytics. Your smartphone conversion rates are probably half or a quarter of your desktop site’s, even with that responsive design. I know this without looking at your analytics.

Mobile visitors are coming in a completely different context than desktop visitors. They don’t need a shrunk down version of your website. They need a different website.

Fortunately, you don’t need a machine learning program to identify these visitors. You can start personalizing your mobile site to deal with this new context.

Try this as a contextualization exercise: Reduce the number of fields on the mobile forms, or eliminate the forms altogether. Replace them with click-to-call. If you have an eCommerce site, make “Add to Cart” secondary and build your mobile subscriber list. Email is the life’s blood of eCommerce.

If your website is generating millions of visits, you may want to consider putting that data to work for you. Not every business is ready for machine learning, but you don’t want to be the last business in your market to start using it.

When You Get Back to the Office

When you get back to the office, I recommend that you share this episode of Intended Consequences with someone else in your company. It’ll make you look smart and forward thinking.

If not, I have a challenge for you.

Here’s my challenge to you this week – start to really think about how you define success. Answer the question, “I’ve done a great job because…” and fill in the blank. Answer this question three ways. Everything you do as a marketer should flow from optimization.

Then ask, “How do I measure each of those with the data I’m collecting today?” Once you understand your metrics, you can begin to prioritize your data gathering and create based on the context that emerges from the data.

Alright scientists, that’s it for this week.


In this episode of Intended Consequences, we discover how to implement website surveys without affecting conversions and we evaluate some great tools to measure and analyze the gathered data.

Implementing website surveys is always a great idea. Unfortunately, if wrongly implemented, they may lower conversions. Our visitors may decide to respond to the survey and forget what they added to the shopping cart. Today, we’ll analyze the importance of well-crafted website exit survey questions that will yield results. We will also share with you some AI-powered tools that can help you diagnose your webpages and get visitors past the obstacles that most of us unintentionally create.

Intended Consequences: Interview with Curtis Morris of Qualaroo

Subscribe to the Podcast

iTunes | Spotify | Stitcher | Google Podcasts | RSS

How to Implement Website Surveys without Affecting Conversions Key Takeaways

  1. Thank you page survey: Find out why this should be a part of every website that processes sales, subscriptions or registrations of any kind.
  2. What almost kept you from buying today?: In this episode, learn what’s more effective than Net Promoter scores or pre-sale feedback queries.
  3. “Liking” In Action: Curtis shows us the best time to ask someone to take a desired action.
  4. Data Tools: Find out which tools to use that allow you to be more creative, all while gathering data to be effective.

An Interview with Curtis Morris of Qualaroo

Our guest, Curtis Morris formerly with Qualaroo


Qualaroo lets you discover issues — good and bad — that are affecting your prospects and customers. It provides a business with the ability to ask website visitors questions, collect answers, and process high quantities of input. The tool uses sentiment analysis and AI-driven text recognition to summarize inputs from hundreds or thousands of participants.

There is no better focus group than your prospects and customers. Qualaroo keeps you in touch with them.

How Automatic Solved Their Sales Problem with a Website Exit Survey

The people at a company called Automatic had an idea: what if we created a device that would connect your smartphone to your car’s computer? The idea was great, but then they ran into a problem.

How do you get people to buy the more profitable version of your product? How do you get people to click on the things you want them to click on? How do you get them to choose the option that’s actually right for them?

When you take any car built since 1996 to a mechanic, one of the first things they will do is plug your car into a computer. The mechanic’s computer will essentially ask your car what’s been going on.

This makes me think of Star Wars, when Han Solo tells C-3PO that he needs him to talk to the Millennium Falcon.

It turns out that there’s a lot that your car can tell the mechanic, most of it uninteresting to the mechanic.

When one of the many sensors around your car detects a problem — your oil is low, or your engine temperature is getting high — your car shows you a “check engine” light, as if you couldn’t handle the details.

But your car knows more. Much more.

Your car knows how fast you’re accelerating. It knows how fast you’re slowing down. It knows if your airbags have been deployed. It knows the levels of all of the fluids, the pressure in the tires, even the quality of the emissions coming out the tailpipe.

For your mechanic, all of this information becomes available through a special port in your car, called the OBD-II port. They get an engine code from your car’s computer and can look up the problem, probably online.

The people at a company called Automatic had an idea. What if we created a device that would plug into the port on your car, and connect your smartphone to your car’s computer. Then your pocket C-3PO could talk to your four-wheeled Millennium Falcon, translating engine codes and much more.

It turns out that Automatic was on to something. Their device connected your car’s computer to your phone, and then their app tracked your trips, monitored your acceleration and deceleration — to help you save gas — and even connected to a variety of apps so you could expense travel miles and turn on your Nest thermostat when you pulled into the driveway.

How Implementing Website Exit Surveys Increased Conversions

In 2016 the company released a more advanced version of the product. Automatic Pro had its own always-on 3G connection. This meant that it didn’t need your smartphone to communicate with the internet. This opened up new opportunities.

Automatic Pro could alert someone if your airbags deployed, even if your phone was broken in an accident. If your car was stolen, you would know exactly where it is. The site touted “event-based apps” and “streaming apps” and “parking tracking.”

The old device was recast as Automatic Lite and sold online for $80 beside the Automatic Pro at $130.

And most people bought the Lite version.

This was a bit of a problem as the Lite version was a lower margin product. Why weren’t people buying the clearly superior Pro version of the product? Should Automatic just accept that car owners are cheap and adjust their expectations?

Fortunately, Conversion Sciences was working with them, and tackled this problem for them. Using our sophisticated scientific minds, we devised a strategy for finding out why buyers weren’t jumping on the Pro product. We asked them.

Whenever someone bought an Automatic Lite, we served up one question in a popup box: “Why didn’t you choose the Automatic Pro?”

Within two weeks, we had over 150 responses. And these responses were from people who had already been all the way through the purchase process. The popup had no negative effect on conversion, because it appeared AFTER THE SALE.

And it told us what was wrong.

After analyzing the responses, one comment really summed things up.

“I don’t think I need crash alert. I have apps that track where I park just fine, nor have I ever needed it. I don’t know what Live vehicle tracking means. I don’t know what event-based apps means. I don’t know what streaming apps mean, either.”

In short, the site wasn’t doing a good job of helping them choose the right solution for them. So they defaulted to the cheapest option. This is the classic problem of the Pricing Page. The job of the pricing page is not to show off all of the features. It’s to help the buyer choose the right plan, the right level or the right feature set.

By modifying the way the features were presented on Automatic’s pricing page, we were able to significantly increase the number of units sold overall, and increase sales of the profitable Automatic Pro as a percentage. This was proven with an AB split test.

Things were going well enough that Automatic was acquired by SiriusXM, the satellite radio people, for 100 million dollars.

This is the power of qualitative data. Qualitative data is that delicious, juicy input that comes directly from buyers, prospects and pretenders. It’s typically gathered in surveys, focus groups and polls. These can deliver quantitative data, but qualitative data is prized for its messiness. It helps us understand how people think about products, how they talk about their problems, and what really is important to them.

The downside of this kind of data is that it is harder to process. We had 150 responses to analyze for Automatic. Imagine if you got thousands a day. Every day.

These are the problems that Curtis Morris thinks about. He is CEO of Qualaroo, and believes, as I do, that quantitative data means nothing if it’s not supplemented with qualitative data. So, listen to the podcast for all the juicy details on how to implement website surveys without affecting conversions.


Discover how AI marketing tools truly work and find the answer to the question: Can they really increase your website’s conversion rates?

Do you know how machine learning is impacting conversion rate optimization for marketers? We all know what the acronym “AI” stands for: “As If”. Data scientists are telling us that by using AI, they’ll be able to create a predictive model of the visitors to your website that will tell you exactly who is ready to buy.

I say, “As if.”

We may marvel that such things can be done, but we also recognize that these things require a great deal of data and the skills of some serious brainiacs to get a machine to tell us something we don’t already know.

The truth is, you are probably already using “AI”, or more accurately, machine learning in your marketing. It’s hiding in the tools we use, like monsters under our bed. Machine learning and the more sciencey-sounding AI will change the way you take products to market, but your human mind will still be needed and loved.

Unless you resist – “as if.”

 

Augmenting Our Brains: AI-powered conversion optimization

Things like AI-driven predictive models are exciting, because our job as marketers is to predict the future. We’re like that exotic fortune teller gazing into an empty tea cup or a crystal ball.

We say things like, “If you give me a budget, I’ll generate six times that amount in revenue.” This is like saying, “If you put a chicken foot under your pillow, you will find true love.” As if.

But this is what we do, and the data on which we base our predictions is often no more valid than the layout of tea leaves at the bottom of a cup. Our brains are wired to find patterns in anything, even when a pattern isn’t really there.

If I came into your office and said, “The last three leads we generated were all visiting the website using a Firefox browser,” your brain would jump to the conclusion, “If I can get more Firefox users to visit our website, I’ll generate tons of leads.”

Do AI marketing tools impact your website conversion rates?


Purveyors of AI, or more accurately Machine Learning (ML), would tell us that the machine doesn’t make mistakes like this. Our 100% genuine intelligence just doesn’t stack up to their Artificial Intelligence.

The problem is that machines will make exactly the same mistake if we don’t give them lots of data.

Just as machines need data, we know that we need more data before we start an ad campaign targeting Firefox users. We’ll ask our analytics person to pull together all website visits for the last year, and calculate the conversion rate for each. This increases the size of our dataset from three to many.
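
With the visit data in a table, that conversion-rate-by-browser pull is only a few lines of pandas. A minimal sketch, where the column names and the tiny dataset are made up for illustration:

```python
import pandas as pd

# Hypothetical analytics export: one row per visit, with the visitor's
# browser and a 0/1 flag for whether the visit converted.
visits = pd.DataFrame({
    "browser":   ["Firefox", "Chrome", "Safari", "Chrome", "Firefox", "Safari"],
    "converted": [1, 0, 0, 1, 0, 0],
})

# The conversion rate per browser is just the mean of the 0/1 flag.
rates = visits.groupby("browser")["converted"].agg(["count", "mean"])
print(rates)
```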

If this analysis goes the way of most analyses, we’ll find that there’s not a meaningful difference in conversion rates among browsers. Most experiments end up being inconclusive. That’s just the way it is.

In this scenario, we “wasted” an hour of our data scientist’s time, an hour of our time, and another twenty minutes explaining to our boss why we were so unproductive today.

“What if,” the AI crowd says, “you could get a machine to sort through your data looking for clues and figuring out who’s more likely to buy? You don’t have to waste your time. Let the machine do it.”

This is an exciting proposition. The machine wouldn’t just look at the browser. It would look at the time of day, day of week, and week of the year that visitors converted. It could consider the device being used, screen size and operating system. It could add in the source of the visit, the number of times a visitor has been to the website, and whether the visitor has bought before.

After crunching through all of your analytics data, the machine would give you a percentage chance that the next visitor to your website will convert. And here comes a person with a Safari browser on a Mac computer at 3:30pm EST on a warm Tuesday afternoon who’s never been to the site before.

The machine might spit out, “There is a 51% chance this person will complete the lead form.” Actually, the machine will just say, “0.51”. Machines are so boring.
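
Under the hood, a model like this is just a classifier trained on past visits that outputs a probability for each new one. A toy sketch with scikit-learn; every feature name and data point here is invented, and a real model would train on far more visits:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each past visit as a dict of features; 1 means the visit converted.
# These features and values are purely illustrative.
visits = [
    {"browser": "Safari",  "hour": 15, "returning": 0},
    {"browser": "Chrome",  "hour": 9,  "returning": 1},
    {"browser": "Firefox", "hour": 22, "returning": 0},
    {"browser": "Chrome",  "hour": 15, "returning": 1},
    {"browser": "Safari",  "hour": 9,  "returning": 0},
    {"browser": "Firefox", "hour": 15, "returning": 1},
]
converted = [1, 1, 0, 1, 0, 0]

# One-hot encode the categorical features, then fit a logistic model.
model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
model.fit(visits, converted)

# The new Safari visitor at 3:30pm who has never been here before:
new_visit = {"browser": "Safari", "hour": 15, "returning": 0}
prob = model.predict_proba([new_visit])[0, 1]
print(round(prob, 2))  # the machine's whole answer: just a number
```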

It’s amazing that a machine can so accurately predict a human being’s behavior. This is incredible.

But, is 51% good? And if this is true, what should my website do differently to make this Safari visitor more likely to buy? Do I reduce the price by 49%? Do I flatter this visitor for being above average? Do I ignore them?

This is “the rub” with machine learning. The machine can’t tell us what to do with the data it gives us. There are systems that will tell us if a visitor is “at the top of the funnel” or “in the consideration phase.” Still, what do we do with that? A price-sensitive buyer may want to see a discount when “at the top” of their purchase process. A relational buyer may not care about discounts until they’re “at the bottom,” ready to buy.

The machine won’t tell us, “Target Internet Explorer visitors coming late at night on a Windows computer during the springtime months with a picture of a cat.” It spits out the probability for each visit: “0.51, 0.34, 0.71, 0.92”.

Wait! A 92% probability? Is that important!? Well, no. They’re probably going to buy no matter what we do. “As if.”

AI-Driven Results

Scoring customers in a customer relationship management (CRM) platform has required that marketers hand-code the algorithm. We decide which actions indicate that a prospect is moving closer to buying. We decide how to value each action. It can work, but it isn’t rocket science – or AI.

Alternatively, we can dump sales data into a machine learning algorithm and let it calculate the probability that each prospect will turn into a customer. The sales force can focus on those high-probability clients and disregard the low-scoring leads. It’s using past performance to predict the future, and should be more accurate than arbitrary assignment of values to actions.
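
The hand-coded approach described above is simple enough to sketch; the machine learning alternative essentially replaces these hand-picked point values with weights learned from historical sales outcomes. The action names and weights below are invented:

```python
# Hand-coded lead scoring: the marketer assigns point values to actions.
# Actions and weights here are purely illustrative.
ACTION_POINTS = {
    "visited_pricing_page": 10,
    "downloaded_whitepaper": 5,
    "opened_email": 1,
}

def score_lead(actions):
    """Sum the hand-assigned points for everything this lead has done."""
    return sum(ACTION_POINTS.get(action, 0) for action in actions)

print(score_lead(["opened_email", "visited_pricing_page"]))  # 11
```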

This is how machine learning is entering your life as a marketer.

Can AI Marketing Tools Increase Your Website’s Conversion Rates?

Amazon famously introduced product suggestions to the eCommerce world. “People who bought this also bought that and that and that.”

It’s not an easy problem to solve. There are a lot of variables to crunch and it has to be done quickly. This is a prime area for AI.

Mailchimp launched a similar tool to add product suggestions to the emails of its eCommerce clients. Every time you send an email to someone, Mailchimp will include a few product suggestions at the bottom of the email. The machine learning algorithm will compute the probability that one or more products will appeal to a subscriber, based on the behavior of all email recipients. Those products with the highest probability get added to the email. This prompts the visitor to buy.

As if.

It’s hard to know how well the machine has learned what your visitors buy collectively. This is the limitation of AI. We can’t really see what is inside the box. All we get is a number.

If you implement a suggestion engine on your website, we recommend running an A/B test to measure its effectiveness. This is done by adding the “Also bought” suggestions for half the visitors and hiding it for the other half. This will give us some conclusive data about how the suggestion AI is performing. Is it increasing the order size on average, or reducing it?
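
The mechanics of such a test are straightforward: assign each visitor to a group deterministically, then compare average order size between the two groups. A sketch, with made-up visitor ids and order values:

```python
import hashlib

def assign_group(visitor_id: str) -> str:
    """Deterministic 50/50 split: hashing the visitor id means the same
    visitor always lands in the same group on every visit."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "suggestions" if int(digest, 16) % 2 == 0 else "control"

# After the test runs, compare average order size per group.
# These order values are invented for the example.
orders = {"suggestions": [54.0, 61.5, 47.0], "control": [49.0, 52.5, 44.0]}
for group, values in orders.items():
    print(group, round(sum(values) / len(values), 2))
```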

Multivariate testing offers high-traffic websites the ability to find the right combination of features and creative ideas to maximize conversion rates.

However, it is not sufficient to simply throw a bunch of ideas into a pot and start testing. This article answers the question, What is a multivariate test?, explains the advantages and pitfalls of multivariate testing, and offers some new ideas for the future.

If you run a relatively high-traffic site, consider this question: Will I profit from running multivariate tests?

Before we dive into the question, let’s be sure to define the terms. I’ll talk about the dangers of doing multivariate tests (MVT) and when you should consider using them.

What Is Multivariate Testing?

Multivariate testing is a technique for testing a hypothesis in which multiple variables are modified.

Multivariate testing is distinct from A/B testing in that it changes more than one variable at a time and observes every combination. Instead of measuring A against B, you are measuring A, B, C, D & E all at once.

Whereas A/B testing is typically used to measure the effect of more substantial changes, multivariate testing is often used to measure the incremental effect of numerous changes at once.

This process can be further subdivided in a number of ways, which we’ll discuss in the next section.

Multivariate, Multi-variant or Multi-variable

For this article, we are focusing on a specific way of testing in which elements are changed on a webpage. Before we dive into our discussion of multivariate testing, we should identify what we are talking about and what we are not talking about.

One of the most frequently tested items is the landing page headline. Getting the headline right can significantly increase conversion rates for your landing pages. When testing a headline, we often come up with several variants of the wording, testing them individually to see which generates the best results.

A multi-variant test tests multiple variants of one variable or element.

This is a multi-variant test. It changes one thing–one variable–but provides a number of different variants of that element.

Now suppose we thought we could improve one of our landing pages by changing the “hero image” as well as the headline. We would test our original version against a new page that changed both the image and the headline.

An example of a multi-variable test. Here we are testing the control against a variation with two changes, or two variables.


This is a multi-variable test. The image is one variable and the headline is a second variable. Technically, this is an AB test with two variables changing. If Variation (B) generated more leads, we wouldn’t know whether the image or the headline was the bigger contributor to the increase in conversions.

To thoroughly test all combinations, we would want to produce multiple variations, each with a different variant of the variable.

Two variables with two variants each yield four page variations in this multivariate testing example.


In the image above, we have four variations of the page, based on two variables (image and headline) each having two variants. Two variables times two variants each equals four variations.

Confused yet?

A multivariate test, then, is a test that tests multiple variants of variables found on a page or website.

To expand on our example, we might want to find the right hero image and headline on our landing page. Here’s the control:

The Control in our multivariate test is the current page design.

We will propose two additional variants of the hero image–for a total of three variants including the control–and two additional variants of the headline, three including the control.

Here are the three images:

We want to vary the hero image on our page. This variable in our test has three variants.


Here are three headlines, including the existing one.

  1. Debt Relief that Works
  2. Free Yourself from the Burden of Debt
  3. Get Relief from Debt

A true multivariate test will test all combinations. Given two variables with three variants each, we would expect nine possible combinations: three images x three headlines.
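
Enumerating those combinations is exactly what `itertools.product` does. A sketch using the three headlines above; the image descriptions are placeholders:

```python
from itertools import product

# Three variants of each variable (the image names are placeholders).
images = ["hero image A (control)", "hero image B", "hero image C"]
headlines = [
    "Debt Relief that Works",
    "Free Yourself from the Burden of Debt",
    "Get Relief from Debt",
]

# Every (image, headline) pair is one page variation to test.
variations = list(product(images, headlines))
print(len(variations))  # 3 images x 3 headlines = 9 variations
```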

Here’s another example that will help you understand how variables, variants and variations relate. An ecommerce company believes that visitors are not completing their checkout process for any of three reasons:

  1. The return policy is not visible
  2. They are required to register for an account
  3. They don’t have security trust symbols on the pages

While these all seem like reasonable things to place in a shopping cart, sometimes they can work against you. Providing this kind of information may make a page cluttered and increase abandonment of the checkout process.

The only way to know is to test.

How many variables do we have here? We have three: return policy, registration and security symbols.

How many variants do we have? We have two of each variable, one variant in which the item is shown and one variant in which it is not shown.

This is 2 x 2 x 2, or eight combinations. If we had three different security trust symbols to choose from, that variable would have four variants: the three symbols and none. That is 2 x 2 x 4, or sixteen combinations.

We’ll continue to use this example as we explore multivariate testing.

Why Multivariate Testing Isn’t Valuable In Most Scenarios

A multivariate test seeks to test every possible combination of variants for a website given one or more variables.

If we ran an MVT for our ecommerce checkout example above, it would look something like this:

Variations multiply with multivariate tests requiring more traffic and conversions.


There are many reasons that multivariate testing is often the wrong choice for a given business, but today, I’m going to focus on five. These are the five reasons multivariate tests (MVTs) are not worth doing compared to A/B/n tests:

  1. A lack of time or traffic
  2. Crazy (and crappy) combinations
  3. Burning up precious resources
  4. Missing out on the learning process
  5. Failing to use MVT as a part of a system

Let’s take a closer look at each reason.

1. Multivariate Tests Take a Long Time or a Whole Lot of Traffic

Traffic to each variation is a small percentage of the overall traffic. This means that it takes longer to run an MVT. Lower traffic means it takes longer to reach statistical significance, and we can’t believe the data until we reach this magical place.

Statistical significance is the point at which we are confident that the results reported in a test will hold in the future: winning variations will deliver more conversions and losing variations will deliver fewer conversions over time. Read 2 Questions That Will Make You A Statistically Significant Marketer or hear the audio.
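
For a conversion test, significance is commonly checked with a two-proportion z-test. A sketch using only the Python standard library; the conversion counts are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: the control converts 120 of 2,000 visits,
# the variation converts 150 of 2,000.
z, p = two_proportion_z(120, 2000, 150, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p above 0.05: not yet significant
```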

Furthermore, statistical significance is driven less by raw traffic than by the number of successful transactions you process.

For example, MXToolbox offers free tools for IT people who are managing email servers, DNS servers and more. They also offer paid plans with more advanced features. MXToolbox gets millions of visitors every month, and many of them purchase the paid plans. Even with millions of visits, they don’t have enough transactions to justify multivariate testing.

It’s not just about traffic.

This is why MVTs should be reserved for sites with a great deal of traffic and transactions. Otherwise, the tests take a long time to run.

2. Variations Multiply Like Rabbits

As we saw, just three variables with two variants resulted in eight variations, and adding two more security trust symbols to the mix brought this to sixteen combinations. Traffic to each variation would be reduced to just 6.25%.
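
The combinatorial arithmetic is easy to sketch in a few lines of Python. The variable names and variants below are illustrative, not the actual test:

```python
from itertools import product

# Three variables, two variants each: 2 * 2 * 2 = 8 combinations.
variables = {
    "return_policy": ["control", "prominent"],
    "registration": ["required", "guest checkout"],
    "trust_symbol": ["none", "security seal"],
}
combos = list(product(*variables.values()))
assert len(combos) == 8

# Adding two more trust symbols gives that variable four variants,
# doubling the total to 16 and cutting each variation's traffic share:
variables["trust_symbol"] += ["money-back badge", "SSL badge"]
combos = list(product(*variables.values()))
print(len(combos), f"{100 / len(combos):.2f}% of traffic each")  # 16, 6.25%
```

Each added variable multiplies the combination count, so per-variation traffic shrinks geometrically.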

Multivariate testing tools like VWO and Optimizely offer options to test only a sample of combinations (called partial, or fractional factorial, testing) instead of testing them all (full factorial testing). We won’t dive into the mathematics of full and partial factorial tests; it gets a little messy. It’s sufficient to know that partial factorial (fractional factorial) testing may introduce inaccuracies that foil your tests.

What’s important is that more variations mean larger errors… because statistics.

Every time you add another variation to an AB test, you increase the margin of error for the test slightly. As a rule, Conversion Sciences allows no more than six variations for any AB test because the margin of error becomes a problem.

In an AB test with two variations, we may be able to reach statistical significance in two weeks, and bank a 10% increase in conversions. However, in a test with six variations, we may have to run for four weeks before we can believe that the 10% lift is real. The margin of error is larger with six variations, requiring more time to reach statistical significance.

Now think about a multivariate test with dozens of variations. Larger and larger margins of error mean the need for even more traffic and some special calculations to ensure we can believe our results aren’t just random.
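
To see why more variations demand so much more traffic, here is a rough per-variation sample-size sketch using the standard normal approximation for comparing two proportions. The 3% conversion rate and 10% lift are illustrative:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variation(base_rate, lift, alpha=0.05, power=0.8):
    """Visitors needed in each variation to detect `lift` over `base_rate`."""
    p1, p2 = base_rate, base_rate * (1 + lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # statistical power
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

n = sample_size_per_variation(0.03, 0.10)  # 3% base rate, 10% lift
# A two-variation A/B test needs 2 * n visitors in total; a 16-variation
# MVT needs 16 * n -- eight times the traffic for the same sensitivity.
print(f"{n:,} visitors per variation")
```

The exact number depends on your baseline rate and the lift you want to detect, but the per-variation requirement is fixed, so total traffic scales linearly with the number of variations.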

Ultimately, most of these variations aren’t worth testing.

All eight variations in our example make sense together. As you add variations, however, you can end up with some crazy combinations.

Picture this:

It’s pouring down rain. You are camping with your son.

While huddled in your tent, you fire up your phone’s browser to find a place to stay. While flipping through your search results on Google, your son proclaims over your shoulder, “That one has a buffet! Let’s go there, Dad!”

Ugh.

The last time he ate at an all-you-can-eat buffet, he was stuck in the restroom for an hour. Not a pretty picture.

Then again, neither is staying out in the wretched weather. So you click to check out the site.

Something is off.

The website’s headline says, “All you can eat buffet.” But nothing else seems to match. The main picture shows two smiling people at the front desk, ready to check you in.

As you scroll to the bottom, the button reads “Book Your Massage Today”.

Is this some kind of joke?

As strange as this scenario sounds, one problem with MVTs is that you will get combinations like this example that simply don’t make sense.

This leaves you with two possibilities:

  1. Start losing your customers to variations you should not even test (not recommended).
  2. Spend some of your time making sure each variation makes sense together.

The second option will take more time and restrict your creativity. But even worse, now you need more traffic in order for your test to reach significance.

With an A/B/n test, you pick and choose which variations to include and which to exclude.

Some may argue it can be time-consuming to create each A/B/n variation, while a multivariate test is an easy way to test all variations at once.

Think of a multivariate test as a system that automatically creates all possible combinations to help you find the best outcome. So on the surface, it sounds appealing.

But as you dig into what’s really going on, you may think twice before using an MVT.

3. Losing Variations are Expensive

Optimization testing can be fun. The chance of a breakthrough discovery that could make you thousands of dollars is quite appealing. Unfortunately, those variations that underperform the Control reduce the number of completed transactions, and fewer transactions mean less revenue.

Every test — AB or Multivariate — has a built-in cost.

Ideally, we would let losing variations run their course. Statistically, there is a chance they will turn around and be a big winner when we reach statistical significance. At Conversion Sciences, we monitor tests to see if any variations turn south. If a losing variation is costing us too many conversions, we’ll stop it before it reaches statistical significance. This is how we control the cost of testing.

This has two advantages.

  1. We can control the “cost” of an AB test.
  2. We can direct more traffic to the other variations, meaning the test will take less time to reach significance.

When tests run faster, we can test more frequently.
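
A minimal sketch of such a monitoring rule, with illustrative thresholds (this is not Conversion Sciences’ actual stopping criterion):

```python
# Stop a variation early if its observed conversion rate falls far enough
# below the control's before significance is reached. Thresholds are
# illustrative, not a published rule.

def should_stop(control_conv, control_visits, var_conv, var_visits,
                min_visits=1000, max_relative_drop=0.20):
    """Return True if the variation is losing badly enough to prune."""
    if var_visits < min_visits:
        return False  # too early to judge
    control_rate = control_conv / control_visits
    var_rate = var_conv / var_visits
    return var_rate < control_rate * (1 - max_relative_drop)

# Control: 3.0% rate. Variation: 2.2% after 1,500 visits -> prune it.
print(should_stop(300, 10_000, 33, 1_500))  # True: 2.2% is below the 2.4% cutoff
```

A real implementation would also account for statistical noise before pruning, but the principle is the same: cap the cost of losers.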

On the other hand, multivariate tests run through all variations, or a large sample of variations. Losers run to statistical significance and this can be very expensive.

Lars Lofgren, former Director of Growth at KISSmetrics, mentioned that if a test drops below a 10% lift, you should kill it. Here’s why:

What would you rather have?

  • A confirmed 5% winner that took 6 months to reach
  • A 20% winner after cycling through 6-12 tests in that same 6 month period

Forget that 5% win, give me the 20%!

So the longer we let a test run, the higher our opportunity costs stack up. If we wait too long, we’re foregoing serious wins that we could have found by launching other tests.

If a test drops below a 10% lift, it’s now too small to matter. Kill it. Shut it down and move on to your next test.
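
Lofgren’s math is easy to reproduce. The revenue figure and win rates below are illustrative:

```python
# Opportunity cost of one slow test vs. cycling through faster tests
# over the same six months. All numbers are illustrative.

monthly_revenue = 100_000

# Option A: one test that confirms a 5% lift after 6 months.
lift_a = 1.05

# Option B: six faster tests; suppose two of them win 10% each.
# Wins compound, because each winner becomes the new control.
lift_b = 1.10 * 1.10  # roughly the "20% winner"

print(f"A: +{(lift_a - 1) * monthly_revenue:,.0f}/mo going forward")
print(f"B: +{(lift_b - 1) * monthly_revenue:,.0f}/mo going forward")
```

The compounding is the key point: modest wins from frequent tests beat one slow, confirmed winner.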

Keeping track of all the MVT variations isn’t easy to do (and is also time-consuming). But time spent on sub-par tests is not the only resource you lose.

4. It’s Harder to Learn from Multivariate Tests

Optimization works best when you learn why your customers behave the way that they do. Perhaps with an MVT you may find the best performing combination, but what have you learned?

When you run your tests all at one time, you miss out on understanding your audience.

Let’s take the example from the beginning of this article. Suppose our multivariate test reported that this was the winning combination:

If this combination wins, can we know why?

What can we deduce from this? Which element was most important to our visitors? The return policy? Removing the registration? Adding trust symbols?

And why does it matter?

For starters, it makes it easier to come up with good test hypotheses later on. If we knew that adding trust symbols was the biggest influence, we might decide to add even more trust symbols to the page. Unfortunately, we don’t know.

When you learn something from an experiment, you can apply that concept to other elements of your website. If we knew that the return policy was a major factor, we might try adding the return policy on all pages. We might even test adding it to our promotional emails.

Testing is not just about finding more revenue. It is about understanding your visitors. This is a problem for multivariate tests.

5. Seeing What Sticks Is Not An Effective Testing System

Multivariate tests are seductive. They can tempt you into testing lots of things, just because you can. This isn’t really testing. It’s fishing. Throwing a bunch of ideas into a multivariate test means you’re testing a lot of unnecessary hypotheses.

Testing follows the Scientific Method:

  1. Research the problem.
  2. Develop hypotheses.
  3. Select the most likely hypotheses.
  4. Design experiments to test your hypotheses.
  5. Run the experiment in a controlled environment.
  6. Evaluate your results.
  7. Develop new hypotheses based on your learnings.

The danger of a multivariate test is that you skip steps 3, 4 and 7, that you:

  1. Research the problem
  2. Develop hypotheses.
  3. Throw them into the MVT blender
  4. See what happens.

Andrew Anderson said it well:

The question is never what can you do, but what SHOULD you do.

Just because I can test a massive amount of permutations does not mean that I am being efficient or getting the return on my efforts that I should. We can’t just ignore the context of the output to make you feel better about your results.
You will get a result no matter what you do, the trick is constantly getting better results for fewer resources.

When used with the scientific method, an A/B/n test can give you the direction you need to continually optimize your website.

Machine Learning and Multivariate Testing

Multivariate testing is now getting a hand from artificial intelligence. For decades, a kind of program called a neural network has allowed computers to learn as they collect data, eventually making decisions more accurately than humans can, and with less data. Until recently, these neural networks were practical only for very specific kinds of problems.

Now, software company Sentient Ascend has brought a kind of neural network into the world of multivariate testing. It’s called an evolutionary neural network or a genetic neural network. This approach uses machine learning to sort through possible variations, selecting what to test so that we don’t have to test all combinations.

These evolutionary algorithms follow branches of patterns through the fabric of possible variations, learning which are most likely to lead to the highest converting combination. Poor performing branches are pruned in favor of more likely winners. Over time, the highest performer emerges and can be captured as the new control.

These algorithms also introduce mutations. Variants that were pruned away earlier are reintroduced into the combinations to see if they might be successful in better-performing combinations.

This organic approach promises results faster and with less traffic.
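
A toy sketch of the evolutionary idea, pruning and mutating combinations instead of testing all of them. The fitness function here stands in for observed conversion rate; this is not Sentient Ascend’s actual algorithm:

```python
import random

VARIANTS = [2, 2, 2, 2]  # four variables, two variants each (16 combos)

def random_combo():
    return tuple(random.randrange(n) for n in VARIANTS)

def fitness(combo):
    # Stand-in for a measured conversion rate; real systems use live traffic.
    return sum(combo) + random.random() * 0.5

def evolve(generations=10, pop_size=6):
    population = [random_combo() for _ in range(pop_size)]
    for _ in range(generations):
        # Prune poor performers, keeping the best half.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Breed survivors; mutation reintroduces pruned variants.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = tuple(random.choice(pair) for pair in zip(a, b))
            if random.random() < 0.2:  # mutation
                i = random.randrange(len(VARIANTS))
                child = child[:i] + (random.randrange(VARIANTS[i]),) + child[i + 1:]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best)
```

In production, fitness would be measured from live traffic, so each generation costs real visitors; the payoff is never having to test all sixteen combinations at once.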

Evolutionary neural networks allow testing tools to learn what combinations will work without testing all multivariate combinations.

With machine learning, websites that had too little traffic for pure multivariate testing can seriously consider it as an option. At Conversion Sciences, we leverage the use of machine learning tools for AI Optimization of high-traffic websites with outstanding success.

If you would like to explore how our CRO with machine learning services can give you a more accurate targeting of your audience and a revenue lift, schedule your call.

Final Thoughts: Is There Ever a Case for Doing MVTs?

There are instances when testing many variables at once is difficult to avoid, or even the better approach.

Chris Goward of WiderFunnel gives four advantages to doing MVTs over A/B/n tests:

  1. Easily isolate many small page elements and measure their individual effects on conversion rate
  2. Measure interaction effects between independent elements to find compound effects
  3. Follow a more conservative path of incremental conversion rate improvement
  4. Facilitate interesting statistical analysis of interaction effects

He later admits, “At WiderFunnel, we run one Multivariate Test for every 8-10 A/B/n Test Rounds.”

Both methods are valuable learning tools.

What is Your Experience?

It is a bit of a heated subject among optimization experts. I’d be curious to hear your ideas and experience on what matters most. Please leave a comment.