
Discover how to identify what keeps visitors from converting on your site. Five factors you MUST look into to improve online conversions right now.

There’s one thing, one thing that’s keeping your visitors from converting on your site.

It may not be the only thing, but it is the primary reason your online business isn’t delivering the results you expect. It’s where you start when you optimize your website.

So, traffic but not conversions? It’s one of these five things:

  1. The Value Proposition and Messaging aren’t clear.
  2. They perceive risk when considering taking an action.
  3. You aren’t showing up as credible and authoritative.
  4. They want to know if others have benefited from you.
  5. Your design and layout aren’t helping them digest the buffet of content you’re presenting.

Find out what keeps visitors from converting on your site and start testing to increase your conversions right now.

How to identify what keeps visitors from converting on your site.


Value Proposition & Messaging

Do you think your value proposition is the one thing that keeps visitors from converting on your site? Let’s take a look at the anatomy of a value proposition. Your value proposition is composed of all of the things you do to solve a problem and is communicated by:

  1. Brand awareness
  2. Content and Copy
  3. Images
  4. Pricing
  5. Shipping policy
  6. Words used in your navigation
  7. Design elements

All of these website elements are used to let your visitors know how you solve a set of problems, and why your solution is the best choice. The one that will save the most time and money, or that will deliver the most satisfaction.

But your value proposition doesn’t have to be communicated through words and images alone. Video, audio and animations are proven ways to communicate your value to a prospect.

And herein lies the rub.

Digital media gives us the amazing ability to put anything our hearts desire onto a landing page. And if you can do anything, how do you know which is the right element to use?

How to know if your value proposition is what keeps visitors from converting on your site

A high bounce rate is a sign of one of three things:

  1. You’re bringing the wrong traffic
  2. Your lead isn’t hitting the mark
  3. You’ve been attacked by bots

If your landing page suffers from a high bounce rate, look at the source of your traffic. Does the page deliver on the specific offer made in the paid ad, email, or organic search query that enticed the visitors to click through to your site? If it’s your homepage, the answer is almost certainly, “No.”

If you feel that your traffic is good, and is coming to a relevant page, then we should ask if the lead is hitting the mark. By “lead” I am referring to the headline + hero image.

Often, hero images are wasted on something non-concrete. The headline should act as the caption for the image it accompanies.

Don’t show a city skyline. Don’t show a person smiling at a computer. These things don’t scream for meaningful captions and don’t help conversions either.

You should also look at the words you use in your main navigation. These should communicate what your site is about in the words of the visitor, not just the structure of your website.

Still don’t know what’s keeping them from converting? Ask your visitors

If you still don’t know what is keeping visitors from converting on your site, consider using an exit-intent popup that asks one open-ended question: “What were you looking for when you came to our site?” or “Why didn’t you purchase?”

We are also big fans of putting an open-ended question on your thank-you page or receipt page: “What almost kept you from buying?” or “What almost kept you from signing up?”

Discover How Our Conversion Rate Optimization Analysis Services Work

You May Be Scaring Visitors Away: Use and Misuse of Risk Reversal

In general, more people make decisions based on fear than on opportunity. So, your amazing value proposition is destined to die in the minds of many of your prospects because of fear.

  • What if I don’t like the product?
  • What if my identity gets stolen?
  • Will a pushy salesman call?
  • Will I have to deal with tons of email?

At the heart of it all is, “Will I feel stupid if I take action right now?”

Risk reversal (and most of the following) is a set of tactics that puts the visitor’s fears at rest. It consists of things like:

  • Guarantees
  • Warranties
  • Privacy policies
  • Explicit permissions
  • Return policies

Placing these items in clear view near a call to action can do wonders for your conversion rates.

Don’t put fears into their mind

There is a potential danger. Your risk reversal tactics can actually put fear into their mind.

For example, stating, “We will never spam you,” can actually place the concept in the mind of someone who wasn’t concerned about it. You might say instead, “We respect your privacy,” with a link to your privacy policy.

Traffic but not Conversions? Help Visitors Convert on your Site with Social Proof

Social proof demonstrates that others have had a positive experience with your brand. It takes forms such as:

  • Testimonials
  • On-site ratings and reviews
  • Third party reviews
  • Case studies
  • Social media shares, likes and comments
  • Comments

If social proof is the one problem that keeps visitors from converting on your site, visitors don’t feel that you’re right for someone like them. Make sure you show them that they are in the group of people who benefit from you.

Negative Reviews Help

Ironically, social proof also serves to answer the question, “Just how bad was a bad experience with this company?” This is why negative reviews have proven to increase conversion rates on eCommerce sites. Scrubbing your reviews or posting only good reviews can shoot you in the foot.

Is Lack of Credibility & Authority What Keeps Visitors from Converting on your Site?

If you are in an industry with lots of competition, or with “bad actors” who manipulate to get sales, your one problem may be credibility and authority.

The design of your website is one of the first things that communicate credibility. But be careful. A fancy, overly-designed site may communicate the wrong idea to visitors. It may convey that you’re expensive or too big for your prospects.

Credibility can be established by emphasizing things about your company, and by borrowing credibility from other sources such as your clients, your payment methods, your media appearances, and the like.

Brand Credibility

You gain credibility by building confidence with your brand and value proposition. How long have you been in business? How many customers have you served? How many products have you sold? How many dollars have you saved?

Brand credibility generally takes the form of implied proof.

Borrowed Credibility

Your website or landing page can borrow credibility and authority from third-party sources. Placing symbols and logos on your website borrows from these credible sources. Ask yourself:

  • Have you been interviewed or reviewed in well-known publications?
  • Have you been interviewed on broadcast media outlets?
  • What associations are you a member of?
  • What awards have you been nominated for or won?
  • Has your business been rated by consumer organizations like Consumer Reports or the Better Business Bureau?
  • Have your products been reported on by analysts such as Forrester?

Place proof of your associations on your site’s landing pages to borrow authority and credibility from them.

User Interface & User Experience: Factors that Keep Visitors from Converting on your Site

Nothing works if your visitors’ eyes aren’t guided through your pages.

No value proposition, no risk reversal, no social proof, no credibility stands a chance if the layout and user experience don’t help the reader understand where they’ve landed or where to go from there.

Long load time equals poor experience

The first thing to look at is site performance. If your pages load slowly, your visitors may be bouncing away. If any element requires a loading icon of any sort, you are probably providing a poor user experience.

Clutter means bad visual hierarchy

When a visitor looks at a page, it should be very obvious which element is most important and what can be looked at later. This is called a visual hierarchy.

For example, we like to make call to action buttons highly visible, so that it is clear to the reader that they are being asked to do something.

Designers use their knowledge of whitespace, negative space, font, font size, color, and placement to design an experience that is easy for the visitors’ eyes to digest.

Don’t add surprises

A good user experience has little place for novelty. Arbitrarily added animations, fades, parallax images, or scroll-triggered effects are generally unnecessary, can cause technical glitches, and may actually hurt conversion rates.

How to Know “what” is Hurting your Conversion Rate

We recommend this process to determine the primary problem that keeps visitors from converting on your website.

1. Gather all of your conversion optimization ideas

Begin recording all of the ideas you have for improving the site in a spreadsheet. Sources for these ideas include:

  1. Ask your team
  2. Read your customer reviews
  3. Read your customer surveys
  4. Pull from your marketing reports
  5. Read your live chat transcripts
  6. Generate heatmap reports for your key pages
  7. Watch recorded sessions

Don’t be surprised to have dozens of ideas for a website or landing page.

2. Categorize each of your ideas

The ROI Prioritized Hypothesis List spreadsheet has a column for classifying each idea.

  1. Messaging
  2. Layout/UX
  3. Social Proof
  4. Risk Reversal
  5. Credibility

There will also be some things that you just want to fix.

3. Count your conversion optimization ideas

Count how many ideas you have for each category. The category with the most ideas is probably the one problem you should address first. We use a pie chart to illustrate the different issues.


This site’s one problem is Value Proposition and Messaging followed by Layout and UX
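The tally step above can be sketched in a few lines of code. This is a minimal illustration, not a Conversion Sciences tool: the five categories come from the list above, while the ideas themselves are hypothetical examples.

```python
from collections import Counter

# Hypothetical ideas, each already classified into one of the five categories
ideas = [
    ("Rewrite hero headline to match ad copy", "Messaging"),
    ("Move guarantee badge next to Add to Cart button", "Risk Reversal"),
    ("Add customer reviews to product pages", "Social Proof"),
    ("Shorten the checkout form", "Layout/UX"),
    ("Clarify the pricing table", "Messaging"),
    ("Add press logos to the landing page", "Credibility"),
    ("Use visitor language in navigation labels", "Messaging"),
]

# Count ideas per category; the largest bucket is where to start working
counts = Counter(category for _, category in ideas)
top_category, top_count = counts.most_common(1)[0]
print(f"Start with: {top_category} ({top_count} ideas)")
```

With this sample data, Messaging wins with three ideas, so that is the category to address first.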

4. Start working

Begin working on the ideas in the category with the most ideas.

This is a great time to start AB testing to see which of your ideas really are important to your visitors.

Your visitors will demonstrate their approval through more sales, more leads, and higher conversion rates overall.

This sounds like a lot of work

It is a lot of work. But you could consider hiring us to identify what keeps visitors from converting on your site and we will test our way to your success.

You can request a free consultation with us.

This article is an updated and revised version of our original article published on Search Engine Land.

Brian Massey

Let’s see why knowing your customer is key to marketing and conversion success, and how from this insight you can begin to find opportunities for growth.

Valentin Radu is a businessman, a successful businessman, who believes knowing your customer is fundamental. He built the first online car insurance company in Romania and sold it within a few years.

So, if you’re Valentin, what do you do for an encore?

You build the tools you wish you had when you were building your business and offer them to other businesses so that they can be successful.

You can lead a horse to water, but he still won’t look good in a bikini.

Valentin Radu believes we spend too much time chasing new customers, when we should be spending our time and energy on our “true lovers”. Listen and see if you agree.

Knowing Your Customer with Valentin Radu

Subscribe to the Podcast

iTunes | Spotify | Stitcher | Google Podcasts | RSS

Resources and Links Discussed

Key Takeaways

  1. The Human Biases Holding You Back. Learn more about human biases, how they work together, and how they impact your role as a marketer.
  2. Gain Executive Buy-In. How do you know what made your customer buy to begin with? Who is your buyer? And when you get the answers to these questions – how do you get buy-in from leaders in the organization to make the pivots needed based on the data?
  3. Understanding What Drives. It’s important to know which calls to action tend to drive the most clicks, and which pages (Instagram, Facebook, Twitter, etc.) are getting the most ad traffic.

It turns out that these exciting tools bump up against something less procedural, and more… human.

Imagine this: You are offered a magical machine that lets you read the thoughts of the people coming to your website. Not the personal stuff, just the stuff that applies to your business.

You can see how they solve problems. You can try different designs, different copy, different calls to action to see if they find it easier to buy. And you don’t have to redesign your website.

You can hear what they are trying to do and what is confusing them.

You can point them to the information they need at any time.

And the magic tools wouldn’t violate their privacy in any way.

You might be skeptical, of course. But would you be resistant to this?

The answer is that you probably would be. This is human. There are a number of biases that all humans harbor. These biases — confirmation bias, availability bias, novelty bias, survivorship bias — work together to keep us doing what we’ve always done, even when we clearly need change.

Fortunately, humans are also social animals. Our biases can be up-ended by the behaviors of others. When we talk about using social signals to change human behavior, we are talking about Culture.

In a company, culture is a huge, powerful lever. This also makes it difficult to move, especially if you are not a leader in your company. You can feel like Sisyphus, pushing that boulder up the hill. Over and over again.

The opportunity, however, is great. Marketing has always been about knowing your customer. We’ve never had access to more information about our customers. Will you be an agent of knowledge or will you remain mired in your biases?

Understanding Your Customers

When a visitor arrives on your site, what is it that you want them to do? Well, most marketers would say first, you want them to buy. And then you want them to come back.

This is the charge.

So how do we take our customers — our site visitors — and turn them into ‘true lovers’ as our guest today, Valentin Radu from Omniconvert calls them?

Getting them to buy and come back is the charge. But here’s the challenge.

How do you know what made your customer buy to begin with? Who is your buyer? How do you know the action they took when they first landed on your site? How do you get the freedom as a marketer to experiment, to look at the data, to understand the data in order to make decisions to increase conversions?

And when you get the answers to those questions, how do you get buy-in from leaders in the organization to make the pivots needed based on the data?

Knowing your customer is key to marketing and conversion success.

Knowing your customer is key to marketing and conversion success.

Experimenting with Your Marketing

These are the questions we explore in this episode. Experimenting with your marketing is the only way that you can truly know what is working. It’s the only way you can succeed. Marketing and status quo cannot go together. At least for my listeners.

You might be thinking, that all sounds great, Brian, but how do I influence change to allow for more experimentation and effect true company growth?

Omniconvert is a CRO tool that helps marketers increase conversion rates. From surveys to overlays – it’s a marketer’s sandbox. You can find out more by connecting with me or head on over to omniconvert dot com.

When you get back to the office.

When you get back to the office, I suggest that you start using a little data in your decision-making process. You can start with some data that is already “laying around.”

When was the last time you looked at what your PPC and Facebook ad team were doing? Many digital marketers don’t spend a lot of time with the advertising, but there are some real gems of growth here.

And most of us are doing some sort of advertising.

Call down to your ad team and ask them for a spreadsheet of all of the ads they’ve been running. Go back six months or even a year. Ask for the ad text, the number of impressions, the number of clicks, the cost per click and the link URL. This is easy for them to generate. If they can track conversions, definitely ask for conversions for each ad.

Then spend some time with this data. You’ll understand:

  • Which calls to action tend to drive the most clicks.
  • What pages are getting the most ad traffic. You’ll want to go and see how these pages are performing in analytics.
  • How many ads are sending traffic to the home page.

From this, you can begin to find opportunities for growth.

Are you using the words from your best-clicked ads? Are you sending good clicks to bad pages? And is there a better place to send traffic than the home page? The answer is yes, by the way.
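The review of the ad team’s spreadsheet can be sketched as follows. This is a minimal illustration under stated assumptions: the column names and the `example.com` URLs are hypothetical, and a real export would come from your ad platform rather than an inline string.

```python
import csv
import io

# Hypothetical export from the ad team; column names are assumptions
ad_csv = """ad_text,impressions,clicks,url
Get a Free Quote Today,10000,250,https://example.com/quote
Our Story,8000,40,https://example.com/
Download the Pricing Guide,5000,200,https://example.com/pricing
Shop the Sale,12000,360,https://example.com/
"""

rows = list(csv.DictReader(io.StringIO(ad_csv)))

# Click-through rate per ad: which calls to action drive the most clicks?
for row in rows:
    ctr = int(row["clicks"]) / int(row["impressions"])
    print(f"{row['ad_text']}: CTR {ctr:.1%}")

# How many ads send traffic to the home page?
home_ads = [r for r in rows if r["url"].rstrip("/") == "https://example.com"]
print(f"{len(home_ads)} of {len(rows)} ads point at the home page")
```

Even on toy data, the pattern is the spreadsheet exercise described above: rank ads by click-through rate, then flag how much of your paid traffic lands on the home page instead of a relevant landing page.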

Then share your findings with at least one other person.

You have just begun culture change. You radical, you.

Alright scientists, that’s it for this week.


If we had to pick one thing that has made Conversion Sciences a successful AB testing agency, it would be this: We are very good at picking what to test.

This isn’t because the team is made up of geniuses (except you Brian, we all know you’re a genius). It’s because we have a consistent methodology for conducting AB testing research. In other words, we do our homework.

Like we talked about in our rundown of the best AB testing tools, “your AB tests are only as good as the hypotheses you are testing.”

With the proper research, we can consistently make better hypotheses, leading to more profitable testing results and a better experience for our visitors.

100 Million Neurons in Our Guts

Would you believe there are 100 million neurons in the human gut? This concentration is second only to our brains, even prompting scientists to refer to it as our “second brain”.

While we don’t use our gut to make conscious decisions, it can greatly influence our mental state and is likely the reason we have “gut reactions” or “gut feelings”.

There are times when “going with your gut” makes sense. That may happen when you don’t have any other options, or when your gut is trying to tell you something but you are unable to rationally identify it. If there is no information available to you, your gut may be a good second opinion for your brain.

On the web, there is rarely a need to go with your gut due to lack of information. So, let’s redefine these terms.

  • Whenever someone says “My gut reaction is…” you should hear, “I don’t really know. Let’s do some more research.”
  • Whenever someone says, “I have a gut feeling that…” you should hear, “I don’t have enough information. How can we better inform ourselves before making this decision?”

We are living in a golden age of digital marketing information. With such easy access to research methods, there is no good reason to ever go from the gut on web design, copywriting, value proposition, or conversion optimization.

You don’t need your intestines to design your website.

After all, the primary output of a healthy gut is… well… crap.

What research skills can keep you from resorting to your colon for inspiration?

To answer that question, we worked with KlientBoost to capture many of the key AB testing research methods, so you can enjoy the satisfying feeling of winning AB tests.


Conversion Research Evidence Infographic.

Research + Framework = Growth

AB testing research feeds an AB testing framework for test results that are consistently positive and repeatable. Feed it well, and it will poop out revenue growth month after month. This is the only resemblance to your gut I could think of.

It doesn’t make sense to test an idea without some evidence that it will make a difference. Good research is full of nutrients, vitamins and fiber. And that is the last time I’ll refer to the digestive system in this article.

Related: What Keeps Visitors from Converting on your Site?

The Heart of AB Testing Research

Before we get into the details, it’s important to understand the core of testing research, and ultimately, the core of conversion optimization itself:

“The definition of optimization boils down to understanding your visitors.” – Brian Massey

Optimization is just a fancy word for bettering our understanding of our customers and giving them more of what they want.

Behavioral data is the best, most reliable source for split testing. With it, we can eliminate tripping points and optimize the visitor’s experience.

We find this data in our analytics databases. But you may notice that much of our AB testing research will not be behavioral. And that is fine.

Generally speaking, there are two types of research:

  1. Quantitative Research
  2. Qualitative Research

Both kinds of research will provide us with good data to form hypotheses and test variations. Let’s review their pros and cons.

Quantitative Research Data for AB Testing

Quantitative data is generated from large sample sizes. Quantitative data tells us how large numbers of visitors and potential visitors behave. It’s generated from analytics databases (like Google Analytics), trials, and AB tests.

The primary goal of evaluating quantitative data is to find where the weak points are in our funnel. The data gives us objective specifics to research further.

There are a few different types of quantitative data we’ll want to collect and review:

  • Backend analytics
  • Transactional data
  • User intelligence

Understanding Qualitative Research

Qualitative data is generated from individuals or small groups. It is collected through heuristic analysis, surveys, focus groups, phone or chat transcripts, and customer reviews.

It can uncover the feelings and reactions of your users as they visit a landing page, and the motives or reasons why they interact with your website in a certain way.

As qualitative data is often self-reported, we should take it with a grain of salt. Humans are good at making up rationalizations for how they behave in a situation. These qualitative research studies are not conducted at great scale, which limits their statistical significance. However, qualitative data is a great source of hypotheses for future testing that can’t be discerned from quantitative behavioral data.

There are a number of tools we can use to obtain this information:

  • Surveys and other direct feedback
  • Customer service transcripts
  • Interviews with sales and customer service reps
  • Session Recording
  • Heat maps

In summary, quantitative data tells us what is happening in our funnel and qualitative data tells us why visitors behave the way they do. Both types of data give us a better understanding of what we should test.

Usability and User Experience

Why are we going through all this data to perform AB testing research? Because two of our key goals are to evaluate our website’s usability and user experience.

  • Usability deals with how easy it is for someone to learn and use the functions of our site. If we can make any part of that customer journey easier or more intuitive, we are increasing Usability.
  • User Experience deals with the emotions and attitudes users experience as they use our site. If we can make the customer journey more enjoyable, we are improving UX or User Experience.

While these two concepts often go hand in hand, they are not the same, and both need to be kept in mind when collecting data.

The Importance of Segmentation

It’s not enough to simply know that “visitors” are doing ____ when they visit a given webpage or flow through a given funnel.
Which visitors?

  • Are they on mobile or desktop?
  • Are they here via paid ads or organic search?
  • Are they using Chrome or Firefox?
  • Did they click-through via a Facebook post or a Tweet?
  • Are they a new or returning visitor?

In order to properly understand and evaluate our visitors and customers, we must divide them into strategic segments and understand the differences across each segment.

It’s especially important to know what these key segments are before we run our AB tests, because otherwise, our tests won’t tell us anything about them.
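As a sketch of why segmentation matters, the snippet below computes conversion rate per (device, source) segment from a toy visit log. The data is hypothetical; in practice these numbers would come from your analytics tool.

```python
from collections import defaultdict

# Hypothetical visit log: (device, traffic source, converted?)
visits = [
    ("mobile", "paid", True), ("mobile", "paid", False),
    ("mobile", "organic", False), ("mobile", "organic", False),
    ("desktop", "paid", True), ("desktop", "paid", True),
    ("desktop", "organic", True), ("desktop", "organic", False),
]

# Aggregate conversions and totals per (device, source) segment
totals = defaultdict(lambda: [0, 0])  # segment -> [conversions, visits]
for device, source, converted in visits:
    totals[(device, source)][0] += int(converted)
    totals[(device, source)][1] += 1

for seg, (conv, n) in sorted(totals.items()):
    print(f"{seg[0]}/{seg[1]}: {conv}/{n} = {conv / n:.0%}")
```

The blended conversion rate here is 50%, which hides that desktop/paid converts every visit while mobile/organic converts none. That is exactly the kind of difference an unsegmented AB test result would paper over.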

Follow the Proven System A/B Testing Agencies Use

AB testing research is a fundamental part of any proven CRO framework, and it’s an important part of what separates an ROI-generating A/B testing agency from a waste of money.

As you finish up the year and move into a brand new one, it’s time to take things up a notch. In the past, hundreds of businesses drastically improved their bottom lines via a proven, systematic CRO process.

Why not join the party? Take our free gift and click here to schedule a call with one of our CRO professionals.

References

Think Twice: How the Gut’s “Second Brain” Influences Mood and Well-Being

There are a ton of AB testing tools on the market right now, and that number is only going to increase.

When evaluating these tools for use in your own business, it can be difficult to wade through the marketing rhetoric and identify exactly which tools are a good fit. That’s why we reached out to our network of CRO specialists in order to bring you a comprehensive look at the best AB testing tools on the market.

Our goal here isn’t necessarily to give you a complete review of each tool, but rather, to show you which split testing tools are preferred by full-time CRO experts – people whose businesses depend completely on the results they are able to deliver to their clients.

We’ll cover two primary categories of tools:

  1. Tools for running the actual AB tests
  2. Tools for collecting data in order to make good hypotheses

At the end of the day, the “right” tool is going to vary depending on the business. As Paul Rouke explains:

We see it time and time again: companies sign up to multi-year contracts for feature rich, enterprise level tools which have a fantastic looking client list, and it ends up burning through their entire CRO budget. Companies invest without considering the need for resource and skills, or they are simply sold on the tool’s ‘ease of use’.

Many companies don’t have the internal skills in place yet to actually utilize this tool, and so the all-singing, all-dancing tool hardly gets used. Also, people using the tool don’t understand the need for or cost of customer research, data, psychology, design, UX principles etc., meaning they’re ultimately testing the wrong things.

The tools that in my experience deliver the most long-term value are those which are reasonably priced, allowing companies to spend more of their budget on making sure they are testing intelligently and developing an effective testing process.

No tool on this list will be the right fit for every business. That said, without breaking up our list into tiers, we would like to note 4 tools that came up very consistently from the experts we queried.

The two most popular AB testing tools by a wide margin were Optimizely and VWO. These are the most common AB testing tools used by Conversion Sciences clients, and virtually every single expert we chatted with is using both of these tools on a regular basis.

Another two tools that came up frequently (in about a third of responses), were Convert Experiences and UsabilityHub. Both of these tools received consistently strong reviews from the experts who used them and fill key needs in the CRO space, which we’ll discuss in their respective entries.

Without further ado, let’s take a look at our list of recommended AB testing tools.

Most Recommended AB Testing Tools

The following tools are our experts’ recommended options for running AB tests. We’ve listed them in order of the frequency with which they were mentioned by our experts. This should not be confused with a ranking by quality.

  1. Optimizely
  2. VWO
  3. Convert Experiences
  4. SiteSpect
  5. AB Tasty
  6. Sentient Ascend
  7. Google Experiments
  8. Qubit
  9. Adobe Target
  10. Marketing Tools With Built-In Testing

Optimizely


Optimizely is the leading A/B testing tool.

Optimizely is basically the big kid on campus. It’s our experts’ go-to choice for working with enterprise level clients, and despite the significant price increases over the years, it remains the king.

It’s also reasonably user friendly for such a complex tool, as Shanelle Mullin summarizes:

Optimizely is the leading A/B testing tool by a fairly large margin. It’s easy to use – you don’t need to be technical to launch small tests – and the Stats Engine makes testing easier for beginners.

Since the Conversion Sciences team uses this tool every single day, I asked them to give me a few thoughts on what they like and dislike about it.

According to the team, Optimizely offers some of the following benefits.

  • Easy editing access through the dashboard
  • Retroactive filtering (i.e. IP addresses)
  • Intuitive data display and goal comparison
  • Saved Audiences (not available in VWO)
  • Great integration with 3rd party tools

AB testing software Optimizely dashboard with AB Test Experiments highlighted and Edit highlighted.

On the flip side, Optimizely is a bit lacking in these ways:

  • Test setup is not as intuitive compared to other tools
  • Slow updates for saved changes to the CDN
  • Doesn’t carry through query params/cookies within a certain test
  • Targeting is more difficult

Optimizely’s multivariate testing setup is simple and intuitive, and it’s the leading split testing tool for a reason. For businesses with the budget and team to utilize Optimizely to its fullest potential, it is clearly a must-own.

VWO


AB Testing Tool VWO Dashboard.

Coming in just behind Optimizely in the AB testing pantheon is Visual Website Optimizer (VWO). VWO is incredibly popular in the marketing space, and in addition to serving as a top choice for businesses with smaller budgets, it is also frequently used in conjunction with Optimizely by businesses who run complex testing campaigns.

According to the Conversion Sciences team, VWO offers some of the following benefits as compared to Optimizely:

  • More intuitive interface with color coding

  • Faster updates
  • Easier goal setup
  • Easier to download data
  • Better customer support

On the flip side, VWO is lacking in the following areas:

  • Can’t view goal reports all at once, which makes them harder to compare
  • No saved targeting, so you must start fresh with each test unless you clone
  • No cumulative CR graph if you have low traffic (or what VWO considers low traffic). Instead it gives CR ranges. You must export the data to get any usable information.

This perspective is mostly shared by the ConversionXL team as well, as explained by Shanelle Mullin:

VWO is very easy to use, especially with its WYSIWYG editor. They have something similar to Optimizely’s Stats Engine called Smart Stats, which is based on Bayesian decisions. VWO also offers heatmaps, clickmaps, personalization tools and on-page surveys.

Overall, VWO is an intriguing solo option for small to midsized businesses and also works very well in conjunction with Optimizely for enterprise clients.

Convert Experiences

Screenshot from Convert Experiences showing the testing dashboard for the Etsy product page.

AB testing tool Convert Experiences.

While Optimizely and VWO were the tools most commonly mentioned, Convert Experiences received some of the most effusive praise from those who had worked with it.

It seems to have hit a sweet spot for SME/SMBs, combining an exceptional power-to-price ratio with an intuitive interface and highly regarded customer support.

We are platform agnostic, so if our client already has a tool in use, then we try to use that.  But in cases where the client has never done any testing before, we typically look first to use Convert (convert.com).  I like Convert for a number of reasons.  From the very beginning, it has been one of the easiest tools to integrate with Google Analytics.  Also, for tricky variations, I’ve had better luck with Convert than others (Optimizely) at getting the variation to display just the way we want.  And the support at Convert has always been excellent—again, better than most of their competitors.

We focus on small to medium size clients, and Convert is excellent for that segment with flexible pricing.  It’s a great solution for small businesses doing in-house conversion optimization, but it can also work very well for agencies.

– Tom Bowen, Web Site Optimizers

Convert Experiences also stood out as the type of tool that catches new fans wherever it’s discovered, leading me to believe that it will continue to grow and pick up market share.

We have come across convert.com more and more in recent months working on client campaigns.  If you are a true marketer and want actionable data then they are a good choice.  The user interface is actually pretty good and you can actually understand the data they give you on experiments.  They run on the typical drag and drop style experiment setup engine that most others do and can be manipulated even if you aren’t a technical wizard.

The price isn’t too bad either as they fall somewhere in the middle of Optimizely and VWO.  I would recommend them to someone who has a bit of budget constraints but wants a bit more testing power.  We have used them on multi million dollar per month campaigns with much success.

– Justin Christianson, Conversion Fanatics

Convert Experiences is known for having some of the most robust multivariate testing options in its class. At the same time, it is also one of the few tools in its class that does not offer any sort of email split testing capabilities.

Overall, it’s a highly recommended AB testing tool that is worth trying out.

Convert has great customer support (via live chat) and is easy to use. We’d recommend it to the same people who are considering using Optimizely and VWO.

– Karl Blanks, Conversion Rate Experts

SiteSpect

AB Testing software SiteSpect Report Screen Capture

AB testing software SiteSpect report.

SiteSpect initially distinguished itself as one of the first server-side testing solutions on the market, and it has remained a top choice for more technically sophisticated users and security-conscious clients.

For a long period, SiteSpect was one of the few platforms offering a server-side solution. This has given them a huge advantage by allowing more complex testing, by adapting to newer JavaScript technologies, and by accommodating security-conscious clients.

– Stephen Pavlovich, Conversion

SiteSpect has the advantage that it works in a different way. It’s tag-free. SiteSpect edits the HTML before it even leaves the server, rather than after it has hit the user’s browser. It tends to be popular with companies that want to self-host and are technically sophisticated.

– Karl Blanks, Conversion Rate Experts

As a server-side testing solution, SiteSpect avoids many of the issues that can arise with the more typical browser-based testing platforms that rely on JavaScript tags.

  • Tag-based solutions typically charge by the number of tag calls you make, even if those tags don’t end up being used.
  • Tag-based solutions often require third-party cookies, which certain browsers or browser settings might not support, causing you to lose the ability to test a large percentage of traffic.
  • Tag-based solutions can have imprecise reporting because the JavaScript doesn’t always fire.

While this value proposition won’t be the deciding factor for many businesses, for those requiring a server-side solution, SiteSpect is one of the best options on the market.

AB Tasty

AB Testing software ABTasty Reports Screen Capture

AB testing software ABTasty reports screen capture

AB Tasty is a solution for testing, re-engagement of users, and content personalisation, designed for marketing teams. Paul Rouke had a good bit to say here, so I’m going to let him take it away.

The tools that in my experience deliver the most long-term value are those which are reasonably priced, allowing companies to spend more of their budget on making sure they are testing intelligently and developing an effective testing process. I talk about this in-depth in my article The Great Divide Between BS and Intelligent Optimization.

On this note, my favorite tool would be something like AB Tasty, which is priced sensibly, yet has a powerful platform that facilitates a wide range of testing, from simple iterative tests through to innovative tests, along with strategic tests which can help evolve a business proposition and market positioning.

I would recommend AB Tasty (and similarly Convert.com) to the following types of companies:

(1) Companies just starting to invest in conversion optimisation – they won’t break the bank, they won’t overwhelm you with add-ons you will never use as you’re starting out, but they have the capability to match your progress as you scale up your testing output

(2) Companies who have been investing in conversion optimisation but who want to start using a higher portion of their budget (75% or more) on people, skills, process and methodology in order to deliver a greater impact and ROI

(3) Companies frustrated at investing significant amounts of money in enterprise testing platforms, which aren’t being used anywhere near their potential and are taking away from the budget for investing in people, skills and developing an intelligent process for strategic optimisation

Sentient Ascend

AB Testing Software Ascend showing results on a computer screen and a mobile phone screen.

AB testing software Ascend uses machine learning.

Sentient Ascend (formerly Digital Certainty) is a new player bringing advanced machine learning algorithms to the CRO space. Conversion Science’s own Brian Massey explains why this is a big deal:

Sentient Ascend is one of the new generation testing tools that utilize machine learning algorithms to speed multivariate testing. Evolutionary, or genetic algorithms do a better job of finding optimum combinations, isolating the richest local maximum for a solution set.

We love being able to assemble our highest rated hypotheses and throw them in the mix to have the machine sort them for us.

In the future, tools like this will let us optimize for multiple segments simultaneously. We believe this is the final step toward full-time personalization solutions.

Google Optimize

Screen capture of the reports provided by the AB testing software Google Optimize

AB testing software Google Optimize example

Google Optimize is a free split testing tool that integrates tightly with Google Analytics. If you are looking for a reasonably powerful AB testing solution with no monetary cost, look no further.

Although I am not normally a big fan of Google for the sake of split testing, Google Experiments has its advantages.  The first one is that you can’t beat the price.  The second is you can have all your data in one place under your Google analytics account. There are some downfalls in that you are going to have to leverage a bit more technical gumption in setting up your experiments and the overall process might take a little longer, but if you are looking for a low barrier to entry in your testing then this is a good place to start.

– Justin Christianson, Conversion Fanatics

As Justin alluded to, Google Experiments is going to be particularly useful to avid Google Analytics users who have the ability to utilize its more complicated features.

Qubit

Screen capture of testing platform Qubit with sample reports shown.

Testing Platform Qubit Example Screen Capture

Qubit is a testing platform focused primarily on personalization. Accordingly, it has some of the strongest segmentation capabilities of any tool on this list.

Qubit has a strong focus on granular segmentation – and the suite covering analytics through to testing gives it an advantage. They’ve now broken out of their traditional retail focus to become a strong personalisation platform across sectors.

– Stephen Pavlovich, Conversion

If advanced segmentation or personalization are a priority for your business or clients, Qubit is a tool worth checking out.

Adobe Target

AB Testing Software Adobe Target Screen Capture

AB testing software Adobe Target

Long known for being the most expensive AB testing tool on the market, the benefits of using Adobe Target can be summed up in this one sentence from Alex Harris:

Adobe Target works great with sites that already use Adobe Analytics.

There’s honestly not much more to say here. If your business is already paying for Adobe Analytics, adding Adobe Target is virtually a no-brainer. If your business is not using Adobe Analytics, ignoring Adobe Target is virtually a no-brainer.

Just for good measure, here’s Stephen Pavlovich to reiterate the point:

I like Adobe Target. The integration of Adobe Analytics and Target is strong – especially being able to push data two-ways. And the fact that Target is normally an inexpensive upsell for Analytics customers is a bonus.

Marketing Tools With Built-In AB Testing

In addition to dedicated AB testing tools, there are some great marketing tools out there that include built-in split testing capabilities. This is fairly common with tools like landing page builders, email service providers, or lead capture solutions.

As Justin Christianson explains, there are some positives and negatives to relying on these built-in tools:

Most page builders out there such as LeadPages and Instapage have split testing capabilities built into their platforms.  The problem is you don’t have much control over the goals measured and the adaptability to test more complex elements.  The good thing is they are extremely easy to setup and use for those quick and dirty type tests.  I recommend the use of this to just get some tests up and running, as constantly testing is extremely important.  If you are currently using a platform with these native testing capabilities then this is a good place to start.

One particular tool that was highlighted by several of our experts was Unbounce, one of the web’s more popular landing page builders.

I also like Unbounce, and not just because I like Oli Gardner. It seems most everyone there lives and breathes landing pages, so the expertise that comes with the tool is virtually unmatched.  Their support is also excellent.  Unbounce works really well when we’re creating a new landing page from scratch and want to try different variations, since it’s so easy to create brand new pages using the tool.

– Tom Bowen, Web Site Optimizers

 

Unbounce is an excellent tool for A/B testing your landing pages. While many landing page tools also offer A/B testing, I think Unbounce has the best and most flexible page editor when creating variations of your pages to be tested, and their landing page templates have the most CRO best practices included already.

Unbounce is outstanding for online marketing teams that want the most flexibility when creating and A/B testing their landing pages – many other landing page tools are limited to a fixed grid system which makes it much harder to make changes.

– Rich Page

Another popular tool was OptinMonster, which began as a popular popup tool and has evolved into a more fully featured lead generation software.

Optin Monster is an outstanding tool that lets you easily A/B test visitor opt-in incentives to see which converts best – not only headlines, images and CTAs, but also which types perform best (like a discount versus a free guide). In particular it offers great customization options and many popup styles, and exit intent popups.

Optin Monster is particularly useful for the many website marketers who don’t have enough traffic to do formal A/B testing (using tools like Optimizely or VWO) but still want to get a better idea of their best performing content variations. It has great pricing options suitable for online businesses on a low budget.

– Rich Page

Tools For Gathering Data

As every good split tester knows, your AB tests are only as good as the hypotheses you are testing. The following tools represent our experts’ favorite choices for collecting data to fuel effective AB tests.

  1. UsabilityHub
  2. Google Analytics
  3. Crazy Egg
  4. UserTesting.com
  5. Lucky Orange
  6. ClickTale
  7. HotJar
  8. Inspectlet
  9. SessionCam
  10. Adobe Analytics

UsabilityHub

User testing platform UsabilityHub Screen Capture

User testing platform UsabilityHub

UsabilityHub was by far the analytics tool most frequently mentioned by our group of CRO experts. It is a collection of five usability tests that can be administered to visitors in order to collect key insights.

UsabilityHub is great for clarity testing and getting quick indications of potential improvements. It is also great for uncovering personal biases in the creation of page variations. I would recommend it to anyone doing conversion optimization or even basic usability testing.

– Craig Andrews, allies4me

While many of the tools on this list deal primarily with quantitative data, UsabilityHub offers uniquely efficient ways to collect valuable qualitative data.

Once I’ve identified underperforming pages, the next step is to figure out what’s wrong with those pages by gathering qualitative data. For top landing pages, including the homepage, I like to run one of UsabilityHub’s “5 Second Tests” to gauge whether people understand the product or service offered. The first question I always ask is “what do you think this company sells?”. I’ve gotten some surprisingly bad results, where large numbers of respondents gave the wrong answer. In these cases, running a simple A/B test on a headline and/or hero shot to clarify what the company does is an easy win.

– Theresa Baiocco, Conversion Max

It also can be a cost-effective alternative if your website doesn’t get enough traffic to facilitate use of an actual split testing tool.

UsabilityHub is essential if you want to do A/B testing but your website doesn’t have enough traffic to do so. Instead it enables you to show your proposed page improvements to testers (including your visitors) to get their quick feedback, particularly using the highly useful ‘Question Test’ and ‘Preference Test’ features.

UsabilityHub can be particularly useful for the many website marketers who don’t have enough traffic to do formal A/B testing (using tools like Optimizely or VWO) but still want to get a better idea of their best performing content variations.

– Rich Page

Google Analytics

Analytics platform Google Analytics screen capture

To the surprise of exactly no one, Google Analytics was high up on the list of recommended analytics tools. Yet despite its popularity, very few marketers or business owners are using this free tool to its full potential.

Theresa Baiocco makes the following recommendations for getting started:

There’s so much data in Google Analytics that it’s easy to suffer from paralysis by analysis. It helps to have a few reports you use regularly and know what you’re looking for before jumping in. The obvious reports for finding the most problematic pages in your funnel are the funnel visualization and goal flow reports. But I also like to look at top landing pages, and using the “comparison” view, I see which of them have higher bounce rates than average for the site. Those 3 reports together are a good starting point for identifying which pages to work on first.

When it comes to applying Google Analytics to your AB testing efforts, John Ekman of Conversionista offers some advice:

Most of the AB testing tools provide an easy integration with Google Analytics. Do not miss this opportunity in your AB testing setup!

When you integrate your testing tool with GA it means that you will be able to break down your test results and look at A vs. B in all dimensions available in GA. You will be able to see behavior segmented by device, returning vs new visitors, geography etc.

For example: if you are using Enhanced Ecommerce setup for GA you will be able to compare your E-commerce funnel for the A version vs. the B version. Maybe the A version gets more add to carts, but then that effect withers off and the result in the checkout is the same?!

Example of Google Analytics ecommerce report for AB test variation.

Word of warning: as soon as you start segmenting your data, you might lose statistical significance in the underlying segments. Even if your AB test results are statistically significant at the overall level, that does not mean the deviations you see in smaller segments of your test data are significant. The smaller the sample size, the harder it is to reach significance. What looks like a strong signal may be nothing more than noise in the data.
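This warning can be made concrete with a two-proportion z-test. The sketch below uses hypothetical traffic numbers and a simplified formula, not the stats engine of any particular tool: the same conversion rates that are clearly significant over all traffic fail to reach significance in a smaller segment.

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the z statistic for B vs. A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Overall traffic: 10.0% vs 11.5% with 10,000 visitors per variation
print(z_test(1000, 10_000, 1150, 10_000))  # ~3.42, above 1.96: significant at 95%
# The same rates in a segment of only 1,000 visitors per variation
print(z_test(100, 1_000, 115, 1_000))      # ~1.08, below 1.96: not significant
```

Roughly ten times the traffic at the same conversion rates shrinks the standard error enough to cross the significance threshold; the smaller segment simply doesn’t have the data.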

For those interested in tapping into the full potential of Google Analytics, here are some resources you may need.

Crazy Egg

User intelligence tool Crazy Egg confetti report screen capture.

Crazy Egg is one of the more popular heatmap and click-tracking tools online, thanks to an attractive interface, an affordable price point, and a deceptively powerful feature set.

Crazy Egg is a highly recommended budget tool by Brian Massey and the Conversion Sciences team, who had the following to say:

Crazy Egg offers tools to help you visually identify the most popular areas of your page, help you see which parts of your pages are working and which ones are not, and give you greater insight as to what your users are doing on your pages via both mobile and desktop sites.

UserTesting.com

User testing platform UserTesting Screen Shot

User testing platform UserTesting.com

UserTesting.com is a unique service that provides videos of real users in your target market experiencing your site and talking through what they’re thinking.

This service is recommended by Craig Andrews, who had the following to say:

UserTesting.com is great for hypothesis generation and uncovering personal biases. It is an absolutely fantastic tool for persuading clients on the reality and importance of certain site issues, and I would recommend it to anyone doing conversion optimization or even basic usability testing

Lucky Orange

Lucky Orange is kind of like Crazy Egg with a bit of UserTesting.com, a bit of The Godfather, and a bit of a hundred other things. It’s a surprisingly diverse package of conversion features that makes you start to believe their claim to be “the original all-in-one conversion optimization suite,” despite the incredibly low price point.

Despite the hundred new tools that have popped up since Lucky Orange hit the market, Theresa Baiocco still swears by the original:

No testing program is complete without analyzing how users behave on the site. Optimizers all have their favorite tools for gathering this data, and while the newest and hottest kid on the block is Hotjar, I still like using my old go-to: Lucky Orange. Starting at just $10/month, Lucky Orange gives you visitor recordings, conversion funnel reports, form analytics, polls, chat, and heat maps of clicks, scroll depth, and mouse movements – all in one place.

ClickTale

Heatmapping and session recording tool ClickTale dashboard screen capture

Heatmapping and session recording tool ClickTale dashboard

Clicktale is a cloud-based analytic system that allows you to visualize your customer’s experience on your website from their perspective. It’s an enterprise-level tool that combines session recording with click and scroll tracking, and while it comes with an enterprise price tag, it’s made some significant quality strides over the last few years.

As Dieter Davis summarized recently for UX Magazine:

There has been a huge improvement in Clicktale over the past three years, in tracking, reporting and accuracy. If you want “any old session recording JS”, boxed-product application out there, there are a variety of options. If you want accurate rendering that is linked to your existing analytics and a company that will help you tune as your own website evolves, then Clicktale is a good choice. It’s the one I’ve chosen as I wouldn’t want to risk the privacy of my customers or risk degrading the performance of my website. Clicktale also gives me a representative sample that is accurate by resolution and responsive design.

Hotjar

Hotjar offers heatmap reports, session recordings, polls, surveys and more.

HotJar is the latest SaaS success story to blaze its way across the web. It’s a jack of all trades type tool: an all-in-one tool that does heatmaps, scroll tracking, recordings, funnel tracking, form analysis, feedback polls, surveys, and more.

And from what a few of our conversion experts have seen so far, it does all of those things about as well as you would expect from a jack of all trades.

On the plus side, Hotjar has prioritized creating an exceptional user experience, so if you are a solo blogger wanting a feature-rich, easy-to-use toolkit in one place with a reasonable price tag, Hotjar might be the perfect choice for you.

Stephen Esketzis had the following to say about his experience with the tool:

So overall, HotJar really is a great tool with a lot of value to offer any online business (or website in general at that). There’s not many businesses that work online I wouldn’t recommend this tool to.

With a no-brainer price point (and even a free plan) it’s pretty hard to go wrong.

Inspectlet

Session recording software Inspectlet screen capture

Session recording software Inspectlet.

Inspectlet is primarily a session recording tool with additional heatmaps as well. Here’s what Anders Toxboe had to say about it in a recent review:

Inspectlet is simple to use. It gets out of the way in order to let the user do what he or she needs. The simple funnel analysis and filtering options are a breeze to use and covered my basic needs. Inspectlet does what it does well, with a few minor glitches. It doesn’t have the newer features that have started appearing lately, such as watching live recordings, live chatting, surveys, and polls.

In other words, Inspectlet is an easy-to-use, budget-friendly session recording tool that might be right for you depending on your needs.

SessionCam

Session recording software SessionCam offers a Suffer Score.

SessionCam is a session recording tool that has also added heatmaps and form analytics to its offering. It’s a classic example of a tool that combines better-than-average functionality with a more-difficult-than-average user interface.

Peter Hornsby had the following to say in his review for UXmatters:

SessionCam provides a lot of useful functionality, but its user interface isn’t the easiest to learn or use. Getting the most out of it requires a nontrivial investment of time.

And later:

UX designers have long known that, where there is internal resistance to change, showing stakeholders clear evidence of users experiencing problems can be a powerful tool in persuading them to recognize and address issues. SessionCam meets the need for a tool that provides this data in a much more dynamic, cost-effective way than using traditional observation techniques.

SessionCam [also] manages [to protect user data] effectively by masking the data that users enter into form fields, so you can put their concerns to rest.

If you are looking for a more robust session recording and form analytics tool that keeps user data safe, SessionCam is worth checking out.

Adobe Analytics

Screen capture of Analytics platform Adobe Analytics Site Overview.

Analytics platform Adobe Analytics site overview.

Adobe Analytics is a big data analysis tool that helps CMOs understand the performance of their businesses across all digital channels. It enables real time web, mobile and social analytics across online channels, and data integration with offline and third-party sources.

In other words, Adobe Analytics is a $100k+ per year, enterprise level analytics tool that has some serious firepower. Here’s what David Williams of ASOS.com had to say about it:

After a thorough review of the market, we chose Adobe Analytics to satisfy our current and future analytics and optimization needs. We needed a solution that could scale globally with our business, improve productivity, and offer out-of-the box integration with our key partners to deliver more value from our existing investments. Adobe’s constant pace of innovation continues to deliver value for our business, and live stream (the event firehose) is the latest capability that opens up exciting opportunities for how we engage with customers.

AB Testing Tools Conclusion

Well that’s that: 20 of the most recommended AB testing tools from a diverse collection of the web’s leading CRO experts.

Have you used any of these tools before? Do you have a favorite that wasn’t included? We’d love to hear your thoughts in the comments.

And if you are looking for a quick way to calculate how a conversion lift could increase your bottom line, be sure to check out our Optimization Calculator.

A/B testing statistics made simple. A guide that will clear up some of the more confusing concepts while providing you with a solid framework to AB test effectively.

Here’s the deal. You simply cannot A/B test effectively without a sound understanding of A/B testing statistics.

And while there has been a lot of exceptional content written on AB testing statistics, I’ve found that most of these articles are either overly simplistic or they get very complex without anchoring each concept to a bigger picture.

Today, I’m going to explain the statistics of AB testing within a linear, easy-to-follow narrative. It will cover everything you need to use AB testing software effectively.

You might have been told that plugging a few numbers into a statistical significance calculator is enough to validate a test. Or perhaps you see the green “test is significant” checkmark popup on your testing dashboard and immediately begin preparing the success reports for your boss.

In other words, you might know just enough about split testing statistics to dupe yourself into making major errors, and that’s exactly what I’m hoping to save you from today.

Here’s my best attempt at making statistics intuitive.

Why Statistics Are So Important To A/B Testing

The first question that has to be asked is “Why are statistics important to AB testing?”

The answer is that AB testing is inherently a statistics-based process. The two are inseparable from each other.

An AB test is an example of statistical hypothesis testing, a process whereby a hypothesis is made about the relationship between two data sets and those data sets are then compared against each other to determine if there is a statistically significant relationship or not.

To put this in more practical terms, a prediction is made that Page Variation #B will perform better than Page Variation #A, and then data sets from both pages are observed and compared to determine if Page Variation #B is a statistically significant improvement over Page Variation #A.

This process is an example of statistical hypothesis testing.

But that’s not the whole story. The point of AB testing has absolutely nothing to do with how variations #A or #B perform. We don’t care about that.

What we care about is how our page will ultimately perform with our entire audience.

And from this bird’s-eye view, the answer to our original question is that statistical analysis is our best tool for predicting outcomes we don’t know using information we do know.

For example, we have no way of knowing with 100% accuracy how the next 100,000 people who visit our website will behave. That is information we cannot know today, and if we were to wait until those 100,000 people visited our site, it would be too late to optimize their experience.

What we can do is observe the next 1,000 people who visit our site and then use statistical analysis to predict how the following 99,000 will behave.

If we set things up properly, we can make that prediction with incredible accuracy, which allows us to optimize how we interact with those 99,000 visitors. This is why AB testing can be so valuable to businesses.

In short, statistical analysis allows us to use information we know to predict outcomes we don’t know with a reasonable level of accuracy.

The Complexities Of Sampling, Simplified

That seems fairly straightforward, so where does it get complicated?

The complexity arises from all the ways a given “sample” can misrepresent the overall “population”, and from everything we have to do to ensure that our sample accurately represents that population.

Let’s define some terminology real quick.

AB testing statistics: The Complexities Of Sampling, Simplified

The “population” is the group we want information about. It’s the next 100,000 visitors in my previous example. When we’re testing a webpage, the true population is every future individual who will visit that page.

The “sample” is a small portion of the larger population. It’s the first 1,000 visitors we observe in my previous example.

In a perfect world, the sample would be 100% representative of the overall population.

For example:

Let’s say 10,000 out of those 100,000 visitors are going to ultimately convert into sales. Our true conversion rate would then be 10%.

In a tester’s perfect world, the mean (average) conversion rate of any sample(s) we select from the population would always be identical to the population’s true conversion rate. In other words, if you selected a sample of 10 visitors, 1 of them (10%) would buy, and if you selected a sample of 100 visitors, then 10 would buy.

But that’s not how things work in real life.

In real life, you might have only 2 out of the first 100 buy or you might have 20… or even zero. You could have a single purchase from Monday through Friday and then 30 on Saturday.

This variability across samples is expressed as a unit called the “variance”, which measures how far a random sample can differ from the true mean (average).
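To see variance in action, here is a small simulation with hypothetical numbers: we draw repeated samples of 100 visitors from a population whose true conversion rate is 10% and watch how much the observed rates scatter around that mean.

```python
import random

random.seed(42)   # fixed seed so the run is repeatable
TRUE_RATE = 0.10  # the population's true conversion rate

def sample_conversion_rate(n):
    """Conversion rate observed in one random sample of n visitors."""
    conversions = sum(random.random() < TRUE_RATE for _ in range(n))
    return conversions / n

# Ten samples of 100 visitors each: individual results scatter around 0.10
print([sample_conversion_rate(100) for _ in range(10)])
```

Any single sample of 100 visitors can land well above or below 10%, even though the population rate never changes; that spread is exactly what variance measures.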

The Freakonomics podcast makes an excellent point about what “random” really is. If you have one person flip a coin 100 times, you would have a random list of heads or tails with a high variance.

If we write these results down, we would expect to see several examples of long streaks, five or seven or even ten heads in a row. When we think of randomness, we imagine that these streaks would be rare. Statistically, they are quite possible in such a dataset with high variance.

The higher the variance, the more variable the mean will be across samples. Variance is, in some ways, the reason statistical analysis isn’t a simple process. It’s the reason I need to write an article like this in the first place.

So it would not be impossible to take a sample of ten results that contain one of these streaks. This would certainly not be representative of the entire 100 flips of the coin, however.
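A quick simulation (purely illustrative) shows just how common those streaks are across 100 fair coin flips:

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(7)
# Simulate 1,000 sequences of 100 fair coin flips each
trials = [[random.choice("HT") for _ in range(100)] for _ in range(1_000)]
share = sum(longest_streak(t) >= 5 for t in trials) / len(trials)
print(f"{share:.0%} of 100-flip sequences contain a streak of 5 or more")
```

The overwhelming majority of sequences contain at least one streak of five identical flips, which is exactly why a small sample that happens to catch a streak can be so misleading.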

Fortunately, we have a phenomenon that helps us account for variance called “regression toward the mean”.

Regression toward the mean is “the phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the average on its second measurement.”

Ultimately, this ensures that as we continue increasing the sample size and the length of observation, the mean of our observations will get closer and closer to the true mean of the population.

In other words, if we test a big enough sample for a sufficient length of time, we will get accurate “enough” results.
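This convergence is easy to see in a quick simulation. The sketch below is purely illustrative (plain Python, an arbitrary seed, and a hypothetical 10% true conversion rate): small samples swing widely, while larger samples settle near the true mean.

```python
import random

def sample_mean(true_rate: float, n: int, rng: random.Random) -> float:
    """Simulate n visitors, each converting with probability true_rate,
    and return the observed conversion rate of the sample."""
    conversions = sum(1 for _ in range(n) if rng.random() < true_rate)
    return conversions / n

rng = random.Random(42)  # fixed seed so the sketch is reproducible
true_rate = 0.10         # hypothetical true conversion rate of the population

# Small samples can land far from 10%; big samples get close.
for n in (10, 100, 10_000):
    observed = sample_mean(true_rate, n, rng)
    print(f"n={n:>6}: observed rate = {observed:.3f}")
```

Running this a few times with different seeds makes the point viscerally: the n=10 line jumps all over the place, while the n=10,000 line barely moves.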

So what do I mean by accurate “enough”?

Understanding Confidence Intervals & Margin of Error

In order to compare two pages against each other in an A/B test, we have to first collect data on each page individually.

Typically, whatever AB testing tool you are using will handle this for you automatically, but some important details can affect how you interpret results. Since this is the foundation of statistical hypothesis testing, I want to go ahead and cover this part of the process.

Let’s say you test your original page with 3,662 visitors and get 378 conversions. What is the conversion rate?

You are probably tempted to say 10.3%, but that’s inaccurate. 10.3% is simply the mean of our sample. There’s a lot more to the story.

To understand the full story, we need to understand two key terms:

  1. Confidence Interval
  2. Margin of Error

You may have seen something like this before in your split testing dashboard.

The original page above has a conversion rate of 10.3% plus or minus 1.0%. The 10.3% conversion rate value is the mean. The ± 1.0% is the margin of error, and this gives us a confidence interval spanning from 9.3% to 11.3%.

10.3% ± 1.0% at 95% confidence is our estimate of the conversion rate for this page.

What we are saying here is that we are 95% confident that the true mean of this page is between 9.3% and 11.3%. From another angle, if we were to take 20 samples like this one, we would expect the sample conversion rate to fall between 9.3% and 11.3% in roughly 19 of them.

The confidence interval is the range in which we expect a given percentage of sample outcomes to fall. We select our desired confidence level at the beginning of our test, and the size of the sample we need is based on that confidence level.

The range of our confidence interval is then calculated using the mean and the margin of error.

The easiest way to demonstrate this is with a visual.

Confidence interval example | A/B Testing Statistics

The confidence level is decided upon ahead of time, before any data is collected. In the above example, we are saying that we expect roughly 19 out of every 20 samples tested to have an observed mean between 9.3% and 11.3%.

The upper bound of the confidence interval is found by adding the margin of error to the mean. The lower bound is found by subtracting the margin of error from the mean.
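For readers who want to see the arithmetic, here is a minimal sketch of the standard normal-approximation calculation, using the numbers from the example above (378 conversions out of 3,662 visitors) and z = 1.96 for 95% confidence. Real testing tools may use slightly different formulas.

```python
import math

def confidence_interval(conversions: int, visitors: int, z: float = 1.96):
    """Normal-approximation confidence interval for a conversion rate.
    z = 1.96 corresponds to 95% confidence."""
    p = conversions / visitors                # the sample mean
    se = math.sqrt(p * (1 - p) / visitors)    # standard error
    margin = z * se                           # margin of error
    return p, margin, (p - margin, p + margin)

p, margin, (low, high) = confidence_interval(378, 3662)
print(f"{p:.1%} ± {margin:.1%}  ->  [{low:.1%}, {high:.1%}]")
# For these numbers this works out to roughly 10.3% ± 1.0%.
```

Note how the margin of error shrinks as the visitor count grows: quadrupling the sample roughly halves the interval.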

The margin of error is a function of the standard deviation, which in turn is a function of the variance. Really, all you need to know is that all of these terms are measures of variability across samples.

Confidence levels are often confused with significance levels (which we’ll discuss in the next section) due to the fact that the significance level is set based on the confidence level, usually at 95%.

You can set the confidence level to be whatever you like. If you want 99% certainty, you can achieve it, BUT it will require a significantly larger sample size. As the chart below demonstrates, diminishing returns make 99% impractical for most marketers, and 95% or even 90% is often used instead for a cost-efficient level of accuracy.

In high-stakes scenarios (life-saving medicine, for example), testers will often use 99% confidence intervals, but for the purposes of the typical CRO specialist, 95% is almost always sufficient.

Advanced testing tools will use this process to measure the sample conversion rate for both the original page AND Variation B, so it’s not something you will ever really have to calculate on your own. But this is how our process starts, and as we’ll see in a bit, it can impact how we compare the performance of our pages.

Once we have our conversion rates for both the pages we are testing against each other, we use statistical hypothesis testing to compare these pages and determine whether the difference is statistically significant.

Important Note About Confidence Intervals

It’s important to understand the confidence levels your AB testing tools are using and to keep an eye on the confidence intervals of your pages’ conversion rates.

If the confidence intervals of your original page and Variation B overlap, you need to keep testing even if your testing tool is saying that one is a statistically significant winner.
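A quick way to check this is to compare the two intervals directly. The helper below is purely illustrative, and the variation’s numbers are made up for the example.

```python
def intervals_overlap(a: tuple, b: tuple) -> bool:
    """True if two (low, high) confidence intervals share any values."""
    (a_low, a_high), (b_low, b_high) = a, b
    return a_low <= b_high and b_low <= a_high

original  = (0.093, 0.113)   # 10.3% ± 1.0%
variation = (0.105, 0.127)   # hypothetical 11.6% ± 1.1%

if intervals_overlap(original, variation):
    print("Intervals overlap: keep the test running.")
else:
    print("No overlap: the difference is more trustworthy.")
```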

Significance, Errors, & How To Achieve The Former While Avoiding The Latter

Remember, our goal here isn’t to identify the true conversion rate of our population. That’s impossible.

When running an AB test, we are making a hypothesis that Variation B will convert at a higher rate for our overall population than Variation A will. Instead of displaying both pages to all 100,000 visitors, we display them to a sample instead and observe what happens.

  • If Variation A (the original) had a better conversion rate with our sample of visitors, then no further actions need to be taken as Variation A is already our permanent page.
  • If Variation B had a better conversion rate, then we need to determine whether the improvement was statistically large “enough” for us to conclude that the change would be reflected in the larger population and thus warrant changing our page to Variation B.

So why can’t we take the results at face value?

The answer is variability across samples. Thanks to the variance, there are a number of things that can happen when we run our AB test.

  1. Test says Variation B is better & Variation B is actually better
  2. Test says Variation B is better & Variation B is not actually better (type I error)
  3. Test says Variation B is not better & Variation B is actually better (type II error)
  4. Test says Variation B is not better & Variation B is not actually better

As you can see, there are two different types of errors that can occur. In examining how we avoid these errors, we will simultaneously be examining how we run a successful AB test.

Before we continue, I need to quickly explain a concept called the null hypothesis.

The null hypothesis is a baseline assumption that there is no relationship between two data sets. When a statistical hypothesis test is run, the results either disprove the null hypothesis or they fail to disprove the null hypothesis.

This concept is similar to “innocent until proven guilty”: A defendant’s innocence is legally supposed to be the underlying assumption unless proven otherwise.

For the purposes of our AB test, it means that we automatically assume Variation B is NOT a meaningful improvement over Variation A. That is our null hypothesis. Either we disprove it by showing that Variation B’s conversion rate is a statistically significant improvement over Variation A, or we fail to disprove it.

And speaking of statistical significance…

Type I Errors & Statistical Significance

A type I error occurs when we incorrectly reject the null hypothesis.

To put this in AB testing terms, a type I error would occur if we concluded that Variation B was “better” than Variation A when it actually was not.

Remember that by “better”, we aren’t talking about the sample. The point of testing our samples is to predict how a new page variation will perform with the overall population. Variation B may have a higher conversion rate than Variation A within our sample, but we don’t truly care about the sample results. We care about whether or not those results allow us to predict overall population behavior with a reasonable level of accuracy.

So let’s say that Variation B performs better in our sample. How do we know whether or not that improvement will translate to the overall population? How do we avoid making a type I error?

Statistical significance.

Statistical significance is attained when the p-value is less than the significance level. And that is way too many new words in one sentence, so let’s break down these terms real quick and then we’ll summarize the entire concept in plain English.

The p-value is the probability of obtaining results at least as extreme as those observed, given that the null hypothesis is true.

In other words, the p-value is the expected fluctuation in a given sample, similar to the variance. Imagine running an A/A test, where you displayed your page to 1,000 people and then displayed the exact same page to another 1,000 people.

You wouldn’t expect the sample conversion rates to be identical. We know there will be variability across samples. But you also wouldn’t expect it to be drastically higher or lower. There is a range of variability that you would expect to see across samples, and that, in essence, is our p-value.

The significance level is the probability of rejecting the null hypothesis given that it is true.

Essentially, the significance level is a value we set based on the level of accuracy we deem acceptable. The industry standard significance level is 5%, which means we are seeking results with 95% accuracy.

So, to answer our original question:

We achieve statistical significance in our test when we can say with 95% certainty that the increase in Variation B’s conversion rate falls outside the expected range of sample variability.

Or, looking at it another way, we are using statistical inference to determine that if we were to display Variation A to 20 different samples, at least 19 of them would convert at lower rates than Variation B.
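In practice, this comparison is usually done with a two-proportion z-test, which your testing tool runs for you behind the scenes. Here is a rough sketch of what happens under the hood; the Variation B conversion counts are invented for illustration.

```python
import math

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """One-sided two-proportion z-test: p-value for 'B converts better than A'.
    Uses the normal approximation, which is fine at AB-test sample sizes."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))   # one-sided, via normal CDF
    return z, p_value

# Original: 378/3662. Hypothetical Variation B: 440/3670.
z, p = two_proportion_p_value(378, 3662, 440, 3670)
print(f"z = {z:.2f}, p-value = {p:.4f}")
print("significant at 5%?" , p < 0.05)
```

With these illustrative numbers the p-value lands below the 5% significance level, so we would reject the null hypothesis and call Variation B the winner.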

Type II Errors & Statistical Power

A type II error occurs when the null hypothesis is false, but we incorrectly fail to reject it.

To put this in AB testing terms, a type II error would occur if we concluded that Variation B was not “better” than Variation A when it actually was better.

Just as type I errors are related to statistical significance, type II errors are related to statistical power, which is the probability that a test correctly rejects the null hypothesis.

For our purposes as split testers, the main takeaway is that larger sample sizes over longer testing periods equal more accurate tests. Or as Ton Wesseling of Testing.Agency says here:

You want to test as long as possible – at least 1 purchase cycle – the more data, the higher the Statistical Power of your test! More traffic means you have a higher chance of recognizing your winner on the significance level you’re testing on!

Because…small changes can make a big impact, but big impacts don’t happen too often – most of the times, your variation is slightly better – so you need much data to be able to notice a significant winner.

Statistical significance is typically the primary concern for AB testers, but it’s important to understand that tests will oscillate between being significant and not significant over the course of a test. This is why it’s important to have a sufficiently large sample size and to test over a set time period that accounts for the full spectrum of population variability.
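The relationship between lift size, power, and sample size can be sketched with a common normal-approximation formula. Treat this as a rough planning estimate under assumed baseline and target rates, not a substitute for your testing tool’s own calculator.

```python
import math

def required_sample_size(p1: float, p2: float,
                         z_alpha: float = 1.96,      # 95% confidence (two-sided)
                         z_beta: float = 0.84) -> int:  # 80% statistical power
    """Rough per-variation sample size needed to detect a lift from p1 to p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# A small lift (10% -> 11.5%) needs roughly 10x the traffic
# of a large lift (10% -> 15%).
print(required_sample_size(0.10, 0.115))  # thousands per variation
print(required_sample_size(0.10, 0.15))   # hundreds per variation
```

This is exactly what Ton means above: small improvements are the common case, and small improvements demand much more data before a winner becomes visible.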

For example, if you are testing a business that has noticeable changes in visitor behavior on the 1st and 15th of the month, you need to run your test for at least a full calendar month.  This is your best defense against one of the most common mistakes in AB testing… getting seduced by the novelty effect.

Peter Borden explains the novelty effect in this post:

Sometimes there’s a “novelty effect” at work. Any change you make to your website will cause your existing user base to pay more attention. Changing that big call-to-action button on your site from green to orange will make returning visitors more likely to see it, if only because they had tuned it out previously. Any change helps to disrupt the banner blindness they’ve developed and should move the needle, if only temporarily.

More likely is that your results were false positives in the first place. This usually happens because someone runs a one-tailed test that ends up being overpowered. The testing tool eventually flags the results as passing their minimum significance level. A big green button appears: “Ding ding! We have a winner!” And the marketer turns the test off, never realizing that the promised uplift was a mirage.

By testing a large sample size that runs long enough to account for time-based variability, you can avoid falling victim to the novelty effect.

Important Note About Statistical Significance

It’s important to note that whether we are talking about the sample size or the length of time a test is run, the parameters for the test MUST be decided on in advance.

Statistical significance cannot be used as a stopping point or, as Evan Miller details, your results will be meaningless.

As Peter alludes to above, many AB testing tools will notify you when a test’s results become statistically significant. Ignore this. Your results will often oscillate between being statistically significant and not being statistically significant.

The only point at which you should evaluate significance is the endpoint that you predetermined for your test.

Terminology Cheat Sheet

We’ve covered quite a bit today.

For those of you who have just been smiling and nodding whenever statistics are brought up, I hope this guide has cleared up some of the more confusing concepts while providing you with a solid framework from which to pursue deeper understanding.

If you’re anything like me, reading through it once won’t be enough, so I’ve gone ahead and put together a terminology cheat sheet that you can grab. It lists concise definitions for all the statistics terms and concepts we covered in this article.

  • Download The Cheat Sheet

    testing-statistics-cheat-sheet
    A concise list of statistics terminology to take with you for easy reference.

What is an A/B Test? How does split testing work? Who should run AB tests? Discover the Conversion Scientists’ secrets to AB testing.

AB testing, also referred to as “split testing”, “A/B testing” or “A/B/n testing”, is the process of testing multiple variations of a web page in order to identify higher-performing variations and improve the page’s conversion rate.

As the web has become increasingly competitive and traffic has become increasingly expensive, the rate at which online businesses are able to convert incoming visitors to customers has become more and more important.

In fact, it has led to an entirely new industry, called Conversion Rate Optimization (CRO), and the centerpiece of this new CRO industry is AB testing.

More than anything else a business can do, AB testing reveals what will increase online revenue and by how much. This is why we recommend it.

What Is An A/B Test in Digital Marketing?

An A/B test is an experiment in which a web page (Page A) is compared against a new variation of that page (Page B) by alternately displaying both versions to a live audience.

The number of visitors who convert on each page is recorded as a percentage of conversions per visitor, referred to as the “conversion rate”. The conversion rates for each page variation are then compared against each other to determine which page performs better.

What Is An A/B Test?

Using the above image as an example, since Page B has a higher conversion rate, it would be selected as the winning test and replace the original as the permanent page displayed to visitors.

(There are several very important statistical requirements Page B would have to meet in order to truly be declared the winner, but we’re keeping it simple for the purposes of this article)

How Does Split Testing Work?

Split testing is a conceptually simple process, and thanks to an abundance of high-powered software tools, it is now very easy for marketers to run A/B tests on a regular basis.

1. Select A Page To Improve

The process begins by identifying the page that you want to improve. Online landing pages are commonly tested, but you can test any page of a website. AB testing can even be applied to emails, display ads and any number of other things.

2. Hypothesize A Better Variation of the Page

Once you have selected your target page, it’s time to create a new variation that can be compared against the original. Your new page will be based on your best hypothesis about what will convert with your target audience, so the better you understand that audience, the better results you will get from AB testing.

3. Display Both Pages To A Live Audience via the A/B Test Tool

The next step is to display both pages to a live audience. In order to keep everything else equal, you’ll want to use split testing software to alternately display Page A (original) and Page B (variation) via the same URL.

4. Collect A/B Test Conversion Data

Collect data on both pages. Monitor how many visitors are viewing each page, where they are clicking, and how often they are taking the desired action (usually converting into leads or sales). Tests must be run long enough to achieve statistically significant results.

5. Select The Winning Page

Once one page has proven to have a statistically higher conversion rate, implement it as the permanent page for that URL. The A/B test is now complete, and a new one can be started by returning to Step #2 and hypothesizing a new page variation.

Who Should Run AB Tests?

Now that you understand what an A/B test is, the next question is whether or not YOU should invest in running A/B tests on your webpages.

There are three primary factors that determine whether AB testing is right for your website:

  1. Number of transactions (purchases, leads or subscribers) per month.
  2. The speed with which you want to test.
  3. The average value of each sale, lead or subscriber to the business.

We’ve created a very helpful calculator called the Conversion Upside Calculator to help you understand what each small increase in your conversion rate will deliver in additional annual income.

Based on how much you stand to earn from improvements, you can decide whether it makes sense to purchase a suite of AB testing tools and experiment on your own or hire a dedicated CRO agency to maximize your results.

Want To Learn More About AB Testing?

21 Quick and Easy CRO Copywriting Hacks to Skyrocket Conversions

21 Quick and Easy CRO Copywriting Hacks

Keep these proven copywriting hacks in mind to make your copy convert.

  • 43 Pages with Examples
  • Assumptive Phrasing
  • "We" vs. "You"
  • Pattern Interrupts
  • The Power of Three

Nothing gives you confidence and swagger like AB testing. And nothing will end your swagger faster than bad data. In order to do testing right, there are some things you need to know about AB testing statistics. Otherwise, you’ll spend a lot of time trying to get answers and end up either more confused or convinced you have an answer when you really have nothing. An A/A test ensures that the data you’re getting can be used to make decisions with confidence.

What’s worse than working with no data? Working with bad data.

We’re going to introduce you to a test that, if successful, will teach you nothing about your visitors. Instead, it will give you something that is more valuable than raw data: confidence.

What is an A/A Test?

The first thing you should test, before your headlines, your subheads, your colors, your calls to action, your video scripts, your designs, etc., is your testing software itself. This is done very easily by testing one page against itself. One would think this is pointless because surely the same page tested against the same page is going to produce the same results, right?

Not necessarily.

After three days of testing, this A/A test showed that the variation identical to the Original was delivering 35.7% less revenue. This is a swagger killer.

This A/A Test didn’t instill confidence after three days.

This can be caused by any of these issues:

  1. The AB testing tool you’re using is broken.
  2. The data being reported by your website is wrong or duplicated.
  3. The AA test needs to run longer.

Our first clue to the puzzle is the small size of the sample. While there were 345 or more visits to each page, there were only 22 and 34 transactions. This is too small by a large factor. In AB testing statistics, transactions are more important than traffic in building statistical confidence. Having fewer than 200 transactions per treatment often delivers meaningless results.

Clearly, this test needs to run longer.

Your first instinct may be to hurry through the A/A testing so you can get to the fun stuff – the AB testing. But that’s going to be a mistake, and the above shows why.

An A/A test serves to calibrate your tools

Had the difference between these two identical pages continued over time, we would call off any plans for AB testing altogether until we figured out if the tool implementation or website were the source of the problem. We would also have to retest anything done prior to discovering this AA test anomaly.

In this case, running the A/A test for a longer stretch of time increased our sample size and the results evened out, as they should in an A/A test. A difference of 3.5% is acceptable for an AA test. We also learned that a minimum sample size approaching 200 transactions per treatment was necessary before we could start evaluating results.

This is a great lesson in how statistical significance and sample size can build or devastate our confidence.

An A/A Test Tells You Your Minimum Sample Size

The reason the A/A test panned out evenly in the end was that it took that much time for a good amount of traffic to finally come through the website and see both “variations” in the test. And it’s not just about a lot of traffic, but a good sample size.

  • Your shoppers on a Monday morning are statistically completely different people from your shoppers on a Saturday night.
  • Your shoppers during a holiday season are statistically different from your shoppers during a non-holiday season.
  • Your desktop shoppers are statistically different from your mobile shoppers.
  • Your shoppers at work are different from your shoppers at home.
  • Your shoppers from paid ads are different from your shoppers from word of mouth referrals.

It’s amazing the differences you may find if you dig into your results, down to specifics like devices and browsers. Of course, if you only have a small sample size, you may not be able to trust the results.

This is because a small overall sample size means that you may have segments of your data allocated unevenly. Here is a sample of data from the same A/A test. At this point, fewer than 300 sessions per variation have been tested. You can see that, for visitors using the Safari browser–Mac visitors–there is an uneven allocation: 85 visitors for the variation and 65 for the control. Remember that both are identical. Furthermore, there is an even bigger divide between Internet Explorer visitors, 27 to 16.

This unevenness is just the law of averages. It is not unreasonable to imagine this kind of unevenness. But, we expect it to go away with larger sample sizes.

You might have different conversion rates with different browsers.

You might have different conversion rates with different browsers.

Statistically, an uneven allocation leads to different results, even when all variations are equal. If the allocation of visits is so off, imagine that the allocation of visitors that are ready to convert is also allocated unevenly. This would lead to a variation in conversion rate.

And we see that in the figure above. For visitors coming with the Internet Explorer browser, none of sixteen visitors converted. Yet two converting visitors were sent to the calibration variation for a conversion rate of 7.41%.

In the case of Safari, the same number of converting visitors was allocated to the Control and the calibration variation, but only 65 visits overall were sent to the Control, compared to the 85 visitors sent to the Calibration Variation. It appears that the Control has a much higher conversion rate.

But it can’t because both pages are identical.

Over time, we expect most of these inconsistencies to even out. Until then they often add up to uneven results.

These forces are at work when you’re testing different pages in an AB test. Do you see why your testing tool can tell you to keep the wrong version if your sample size is too small?
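You can check how surprising these lopsided splits really are with a simple binomial test against a fair 50/50 allocation. As the sketch below suggests, neither split would be statistically unusual on its own, which is exactly why small samples are so misleading.

```python
from math import comb

def binomial_two_sided_p(k: int, n: int) -> float:
    """Exact two-sided p-value for observing k out of n on one side
    of a fair 50/50 split. Answers: how surprising is this allocation?"""
    lo = min(k, n - k)
    tail = sum(comb(n, i) for i in range(0, lo + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Safari: 85 visits to the variation vs 65 to the control.
print(f"Safari 85/65 split: p = {binomial_two_sided_p(85, 150):.3f}")
# Internet Explorer: 27 vs 16.
print(f"IE 27/16 split:     p = {binomial_two_sided_p(27, 43):.3f}")
```

Both p-values come out well above 0.05: chance alone comfortably explains splits this uneven at these sample sizes.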

Calculating Test Duration

You have to test until you’ve received a large enough sample size from different segments of your audience to determine if one variation of your web page performs better on the audience type you want to learn about. The A/A test can demonstrate the time it takes to reach statistical significance.

The duration of an AB test is a function of two factors.

  1. The time it takes to reach an acceptable sample size.
  2. The difference between the performance of the variations.

If a variation is beating the control by 50%, the test doesn’t have to run as long. The large margin of “victory”, also called “chance to beat” or “confidence”, is larger than the margin of error, even at smaller sample sizes.

So, an A/A test should demonstrate a worst case scenario, in which a variation has little chance to beat the control because it is identical. In fact, the A/A test may never reach statistical significance.

In our example above, the test has not reached statistical significance, and there is very little chance that it ever will. However, we see the Calibration Variation and Control draw together after fifteen days.

These identical pages took fifteen days to come together in this A/A Test.

This tells us that we should run our tests a minimum of 15 days to ensure we have a good sample set. Regardless of the chance to beat margin, a test should never run for less than a week, and two weeks is preferable.

Setting up an A/A Test

The good thing about an A/A test is that there is no creative or development work to be done. When setting up an AB test, you program the AB testing software to change, hide or remove some part of the page. This is not necessary for an A/A test, by definition.

For an A/A test, the challenge is to choose the right page on which to run the test. Your A/A test page should have two characteristics:

  1. Relatively high traffic. The more traffic you get to a page, the faster you’ll see alignment between the variations.
  2. Visitors can buy or signup from the page. We want to calibrate our AB testing tool all the way through to the end goal.

For these reasons, we often set up A/A tests on the home page of a website.

You will also want to integrate your AB testing tool with your analytics package. It is possible for your AB testing tool to be set up incorrectly and yet have both variations behave similarly. By pumping A/A test data into your analytics package, you can compare conversions and revenue reported by the testing tool to those reported by analytics. They should correlate.

Can I Run an A/A Test at the Same Time as an AB Test?

Statistically, you can run an A/A test on a site which is running an AB test. If the tool is working well, then your visitors won’t be significantly affected by the A/A test. You will, however, be introducing additional error into your AB test, and should expect it to take longer to reach statistical significance.

And if the A/A test does not “even out” over time, you’ll have to throw out your AB test results.

You may also have to run your AB test past statistical significance while you wait for the A/A test to run its course. You don’t want to change anything at all during the A/A test.

The Cost of Running an A/A Test

There is a cost to running an A/A test: opportunity cost. The time and traffic you put toward an A/A test could be used for an AB test variation. You could be learning something valuable about your visitors.

The only times you should consider running an A/A test are:

  1. You’ve just installed a new testing tool or changed the setup of your testing tool.
  2. You find a difference between the data reported by your testing tool and that reported by analytics.

Running an A/A test should be a relatively rare occurrence.

There are two kinds of A/A test:

  1. A “Pure” two variation test
  2. An AB test with a “Calibration Variation”

Here are some of the advantages and disadvantages of these kinds of A/A tests.

The Pure Two-Variation A/A Test

With this approach, you select a high-traffic page and set up a test in your AB testing tool. It will have the Control variation and a second variation with no changes.

Advantages: This test will complete in the shortest timeframe because all traffic is dedicated to the test.

Disadvantages: Nothing is learned about your visitors–well, almost. See below.

The Calibration Variation A/A Test

This approach involves adding what we call a “Calibration Variation” to the design of an AB test. This test will have a Control variation, one or more “B” variations that are being tested, and another variation with no changes from the Control. When the test is complete, you will have learned something from the “B” variations and will also have “calibrated” the tool with an A/A test variation.

Advantages: You can do an A/A test without stopping your AB testing program.

Disadvantages: This approach is statistically tricky. The more variations you add to a test, the larger the margin of error you would expect. It will also drain traffic from the AB test variations, requiring the test to run longer to statistical significance.

AA Test Calibration Variation in an AB Test (Optimizely)

Unfortunately, in the test above, our AB test variation, “Under ‘Package’ CTAs”, isn’t outperforming the A/A test Calibration Variation.

You Can Learn Something More From an A/A Test

One of the more powerful capabilities of AB testing tools is the ability to track a variety of visitor actions across the website. The major AB testing tools can track a number of actions that can tell you something about your visitors.

  1. Which steps of your registration or purchase process caused them to abandon your site
  2. How many visitors started to fill out a form
  3. Which images visitors clicked on
  4. Which navigation items were most frequently clicked

Go ahead and set up some of these minor actions–usually called “custom goals”–and then examine the behavior when the test has run its course.

In Conclusion

Hopefully, if nothing else, you were amused a little throughout this article while learning a bit more about how to ensure a successful AB test. Yes, it requires patience, which I will be the first to admit I don’t have very much of. But it doesn’t mean you have to wait a year before you switch over to your winning variation.

You can always take your winner a month or two in and use it for PPC while you continue testing and tweaking on your organic traffic. That way you get the best of both worlds: the assurance that you’re using your best possible option on your paid traffic while taking the time to do more tests on your free traffic.

And that, my friends, is AB testing success in a nutshell. Now go find some stuff to test and tools to test with!

21 Quick and Easy CRO Copywriting Hacks to Skyrocket Conversions

21 Quick and Easy CRO Copywriting Hacks

Keep these proven copywriting hacks in mind to make your copy convert.

  • 43 Pages with Examples
  • Assumptive Phrasing
  • "We" vs. "You"
  • Pattern Interrupts
  • The Power of Three

You’ve read the blog posts and you’ve heard from the vendors. A/B testing is a lot more difficult than you might imagine, and you can unintentionally wreak havoc on your online business if you aren’t careful.

Fortunately, you can learn how to avoid these awful A/B testing mistakes from 10 CRO experts who tell all in this Content Verve article. Here’s a quick look at some of their greatest pitfalls:

Joel Harvey, Conversion Sciences Worst A/B Testing Mistake

“Because of a QA breakdown we didn’t notice that the last 4-digits of one of the variation phone numbers displayed to visitors was 3576 when it should have been 3567. In the short time that the offending variation was live, we lost at least 100 phone calls.”

Peep Laja, ConversionXL Worst A/B Testing Mistake

“Ending tests too early is the #1 mistake I see. You can’t “spot a trend”, that’s total bullshit.”

Craig Sullivan, Optimise or Die Worst A/B Testing Mistake

“When it comes to split testing, the most dangerous mistakes are the ones you don’t realise you’re making.”

Alhan Keser, Widerfunnel.com Worst A/B Testing Mistakes

“I had been allocated a designer and developer to get the job done, with the expectation of delivering at least a 20% increase in leads. Alas, the test went terribly and I was left with few insights.”

Andre Morys, WebArts.de Worst A/B Testing Mistake

“I recommend everybody to do a cohort analysis after you test things in ecommerce with high contrast – there could be some differences…”

Ton Wesseling, Online Dialogue Worst A/B Testing Mistake

“People tend to say: I’ve tested that idea – and it had no effect. YOU CAN NOT SAY THAT! You can only say – we were not able to tell if the variation was better. BUT in reality it can still be better!”

John Ekman, Conversionista Worst A/B Testing Mistake

“AB-testing is not a game for nervous business people, (maybe that’s why so few people do it?!). You will come up with bad hypotheses that reduce conversions!! And you will mess up the testing software and tracking.”

Paul Rouke, PRWD Worst A/B Testing Mistake

“One of the biggest lessons I have learnt is making sure we fully engage, and build relationships with the people responsible for the technical delivery of a website, right from the start of any project.”

Matt Gershoff, Conductrics Worst A/B Testing Mistake

“One of the traps of testing is that if you aren’t careful, you can get hung up on just seeing what you DID in the past, but not finding out anything useful about what you can DO in the future.”

Michael Aagaard, ContentVerve.com Worst A/B Testing Mistakes

“After years of trial and error, it finally dawned on me that that the most successful tests were the ones based on data, insight and solid hypotheses – not impulse, personal preference or pure guesswork.”


