A fundamental concept in A/B testing is statistical hypothesis testing. It involves forming a hypothesis about the relationship between two data sets and then comparing those data sets to determine whether there is a statistically significant difference. It may sound complicated, but it is the foundation of how A/B testing works.

Here’s a high-level look at how statistical hypothesis testing works.

Null and Alternative Hypotheses

In A/B testing, you typically start with two types of hypotheses:

First, a Null Hypothesis (H0). This hypothesis assumes that there is no significant difference between the two variations. For example, “Page Variation A and Page Variation B have the same conversion rate.”

Second, an Alternative Hypothesis (H1). This hypothesis assumes there is a significant difference between the two variations. For example, “Page Variation B has a higher conversion rate than Page Variation A.”

Additional reading: How to Create Testing Hypotheses that Drive Real Profits

Disproving the Hypothesis

The primary goal of A/B testing is not to prove the alternative hypothesis but to gather enough evidence to reject the null hypothesis. Here’s how it works in practical terms:

Step 1: We formulate a hypothesis predicting that one version (e.g., Page Variation B) will perform better than another (e.g., Page Variation A).

Step 2: We collect data. By randomly assigning visitors to either the control (original page) or the treatment (modified page), we can collect data on their interactions with the website.

Step 3: We analyze the results, comparing the performance of both versions to see if there is a statistically significant difference.

Step 4: If the data shows a significant difference, you can reject the null hypothesis and conclude that the alternative hypothesis is likely true. If there is no significant difference in conversion rates, you fail to reject the null hypothesis; the test hasn’t proven the variations are equal, only that it found no reliable difference between them.

Example

To illustrate this process, consider an example where you want to test whether changing the call-to-action (CTA) button from “Purchase” to “Buy Now” will increase the conversion rate.

  • Null Hypothesis: The conversion rates for “Purchase” and “Buy Now” are the same.
  • Alternative Hypothesis: The “Buy Now” CTA button will have a higher conversion rate than the “Purchase” button.
  • Test and Analyze: Run the A/B test, collecting data on the conversion rates for both versions.
  • Conclusion: If the data shows a statistically significant increase in conversions for the “Buy Now” button, you can reject the null hypothesis and conclude that the “Buy Now” button is more effective.
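
Under the hood, this comparison is typically a statistical test on the two conversion rates. Here’s a minimal sketch of a one-sided two-proportion z-test in Python, using hypothetical visitor and conversion counts for the “Purchase” and “Buy Now” buttons (the function and numbers are illustrative, not taken from any specific testing tool):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """One-sided z-test: is variation B's conversion rate higher than A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 0.5 * (1 - erf(z / sqrt(2)))          # P(Z >= z), one-sided
    return z, p_value

# Hypothetical counts: 20,000 visitors per variation
z, p = two_proportion_z_test(conv_a=400, n_a=20_000,   # "Purchase"
                             conv_b=470, n_b=20_000)   # "Buy Now"
print(f"z = {z:.2f}, p = {p:.4f}")  # reject H0 at the 5% level if p < 0.05
```

With these made-up numbers, p comes out around 0.008, so you would reject the null hypothesis and conclude that “Buy Now” likely converts better.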

Importance of Statistical Significance in A/B Testing

Statistical significance tells you whether the results of a test are real or just random.

When you run an A/B test, for example, and Version B gets more conversions than Version A, statistical significance tells you whether that difference is big enough (and consistent enough) that it likely didn’t happen by chance.

It’s the difference between saying:

“This headline seems to work better…”

vs.

“We’re 95% confident this headline works better—and it’s worth making the change.”

In simple terms:


✅ If your test reaches statistical significance, you can trust the results.
❌ If it doesn’t, the outcome might just be noise—and not worth acting on yet.

We achieve statistical significance by ensuring our sample size is large enough to account for chance and randomness in the results.

Imagine flipping a coin 50 times. While probability suggests you’ll get 25 heads and 25 tails, the actual outcome might skew because of random variation. In A/B testing, the same principle applies. One version might accidentally get more primed buyers, or a subset of visitors might have a bias against an image.
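
A quick simulation makes the point (an illustrative sketch only):

```python
import random

def head_ratio(n_flips):
    """Flip a fair coin n_flips times; return the fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

for n in (50, 500, 5_000, 50_000):
    print(n, round(head_ratio(n), 3))
# Small samples swing widely around 0.5; larger samples converge on it.
```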

To reduce the impact of these chance variables, you need a large enough sample. Once your results reach statistical significance, you can trust that what you’re seeing is a real pattern—not just noise.

That’s why it’s crucial not to conclude an A/B test until you have reached statistically significant results. You can use tools to check if your sample sizes are sufficient.

While it appears that one version is doing better than the other, the results overlap too much.

Additional Reading: Four Things You Can Do With an Inconclusive A/B Test

How Much Traffic Do You Need to Reach Statistical Significance?

The amount of traffic you need depends on several factors, but most A/B tests require at least 1,000–2,000 conversions per variation to reach reliable statistical significance. That could mean tens of thousands of visitors, especially if your conversion rate is low.

Here’s what affects your sample size requirement:

  • Baseline conversion rate – The lower it is, the more traffic you’ll need.
  • Minimum detectable effect (MDE) – The smaller the lift you want to detect (e.g., a 2% increase), the more traffic is needed.
  • Confidence level – Most tests aim for 95% statistical confidence.
  • Statistical power – A standard power level is 80%, which ensures a low chance of false negatives.

Rule of thumb: If your site doesn’t get at least 1,000 conversions per month, you may struggle to run statistically sound tests—unless you’re testing big changes that could yield large effect sizes.
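
For a rough idea of where those numbers come from, here’s the standard normal-approximation formula for comparing two proportions, sketched in Python (your testing tool’s calculator may use a slightly different formula):

```python
from math import ceil, sqrt

def sample_size_per_variation(baseline_rate, relative_mde,
                              z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation.

    z_alpha=1.96 gives 95% confidence (two-sided); z_beta=0.84 gives 80% power.
    relative_mde is the relative lift you want to detect (0.10 = a 10% lift).
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    n = ((z_alpha * sqrt(2 * p1 * (1 - p1))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)

# Hypothetical: 2% baseline conversion rate, detect a 10% relative lift
print(sample_size_per_variation(0.02, 0.10))  # roughly 78,000 visitors per arm
```

Note how quickly the requirement grows as the baseline rate or the detectable lift shrinks, which is why low-traffic sites should test bigger changes.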

How A/B Testing Tools Work

The tools that make A/B testing possible provide an incredible amount of power. If we wanted, we could use them to make your website different for every single visitor. They can do this because they change your site in the visitors’ browsers.

When these tools are installed on your website, they send JavaScript code along with the HTML that defines a page. As the page is rendered, this JavaScript changes it. It can do almost anything:

  • Change the headlines and text on the page.
  • Hide images or copy.
  • Move elements above the fold.
  • Change the site navigation.

When testing a page, we create an alternative variation of the page with one or more elements changed for testing purposes. In an A/B test, we limit the test to one element so we can easily understand the impact of that change on conversion rates. The testing tool then does the heavy lifting for us, segmenting the traffic and serving the control (or existing page) or the test variation.

It’s also possible to test more than one element at a time—a process called multivariate testing. However, this approach is more complex and requires rigorous planning and analysis. If you’re considering a multivariate test, we recommend letting a Conversion Scientist™ design and run it to ensure valid, reliable results.

Primary Functions of A/B Testing Tools

A/B testing software has the following primary functions.

Serve Different Webpages to Visitors

The first job of A/B testing tools is to show different webpages to certain visitors. The person who designed your test determines what gets shown.

An A/B test will have a “control,” or the current page, and at least one “treatment,” or the page with some change. The design and development team will work together to create a different treatment. JavaScript must be written to transform the control into the treatment.

It is important that the JavaScript works on all devices and in all browsers used by the visitors to a site. This requires a committed QA effort. At Conversion Sciences, we maintain a library of devices of varying ages that allows us to test our JavaScript for all visitors.

Split Traffic Evenly

Once we have JavaScript to display one or more treatments, our A/B testing software must determine which visitors see the control and which see the treatments.

Visitors are distributed evenly across variations—control, then treatment A, then B, then back to control, and so on—ensuring balanced traffic. Around it goes until enough visitors have been tested to achieve statistical significance.

It is important that the groups of visitors seeing each version are about the same size. The software tries to enforce this.
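
Implementations vary by tool, but a common assignment scheme is deterministic hashing: it keeps the split roughly even while guaranteeing that a returning visitor always sees the same version. A minimal illustrative sketch:

```python
import hashlib

def assign_variation(visitor_id, variations=("control", "treatment")):
    """Hash the visitor ID into a bucket so the same visitor
    always gets the same variation, with traffic split roughly evenly."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

print(assign_variation("visitor-12345"))  # same ID, same variation, every time
```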

Measure Results

The A/B testing software tracks results by monitoring goals. Goals can be any of a number of measurable things:

  1. Products bought by each visitor and the amount paid
  2. Subscriptions and signups completed by visitors
  3. Forms completed by visitors
  4. Documents downloaded by visitors

While nearly anything can be measured, it’s the business-building metrics—purchases, leads, signups—that matter most.

The software remembers which test page was seen. It calculates the amount of revenue generated by those who saw the control, by those who saw treatment one, and so on.
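
Conceptually, the bookkeeping is a per-variation tally like this one (hypothetical event data, not any real tool’s API):

```python
# Each event: (variation the visitor saw, revenue from that visitor)
events = [("control", 0.0), ("control", 49.0), ("treatment", 49.0),
          ("treatment", 98.0), ("control", 0.0), ("treatment", 0.0)]

totals, counts = {}, {}
for variation, revenue in events:
    totals[variation] = totals.get(variation, 0.0) + revenue
    counts[variation] = counts.get(variation, 0) + 1

for variation in totals:
    rpv = totals[variation] / counts[variation]  # revenue per visitor
    print(variation, round(rpv, 2))
```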

At the end of the test, we can answer one very important question: which page generated the most revenue, subscriptions or leads? If one of the treatments wins, it becomes the new control.

And the process starts over.

Do Statistical Analysis

The tools are always calculating the confidence that a result will predict the future. We don’t trust any test that doesn’t have at least a 95% confidence level. This means that we are 95% confident that a new change will generate more revenue, subscriptions or leads.

Sometimes it’s hard to wait for statistical significance, but it’s important lest we make the wrong decision and start reducing the website’s conversion rate.

Report Results

Finally, the software communicates results to us. These come as graphs and statistics that not only show results but also help you decide what to implement—and what to test next.

A/B testing tools deliver data in the form of graphs and statistics.

It’s easy to see that the treatment won this test, giving us an estimated 90.9% lift in revenue per visitor with a 98% confidence.

This is a rather large win for this client.

Selecting The Right Tools

Of course, there are a lot of A/B testing tools out there, with new versions hitting the market every year. While there are certainly some industry favorites, the tools you select should come down to what your specific business requires.

In order to help make the selection process easier, we reached out to our network of CRO specialists and put together a list of the top-rated tools in the industry. We rely on these tools to perform for multi-million dollar clients and campaigns, and we are confident they will perform for you as well.

Check out the full list of tools here: The 20 Most Recommended A/B Testing Tools By Leading CRO Experts

While it’s true we can learn important things from an “inconclusive” A/B test, that doesn’t mean we like inconclusive tests. Inconclusive tests occur when you put two or three good options out for an A/B test, drive traffic to these options and — meh — none of the choices is preferred by your visitors. An inconclusive result typically means one of two things:

  1. Our visitors like the page the way it is (we call this page the “control”), and reject our changed pages.
  2. Our visitors don’t seem to care whether they get the control or the changed pages.

Basically, it means we tried to make things better for our visitor, and they found us wanting. Back to the drawing board.

Teenagers have a word for this. It is a sonic mix of indecision, ambivalence, condescension and that sound your finger makes when you pet a frog. It is less committal than a shrug, less positive than a “Yes,” less negative than a “No,” and is designed to prevent any decision whatsoever from being reached.

It comes out something like, “Meh” — a word so flaccid that it doesn’t even deserve any punctuation. A period would clearly be too conclusive.

If you’ve done any testing at all, you know your traffic can give you a collective “Meh” as well. We scientists call this an inconclusive test.

Whether you’re testing ad copy, landing pages, offers, or keywords, there is nothing that will deflate a conversion testing plan more than a series of inconclusive tests, especially if your optimization program is young.

Here are some things to consider in the face of an inconclusive test. Or, if you’d like immediate help from skilled Conversion Scientists™, get a free conversion consultation.

1. Add Something Really Different To The Mix

Subtlety is not the split tester’s friend. Your audience may not care if your headline is in a 16-point or 18-point font. If you’re getting frequent inconclusive tests, one of two things is going on:

  1. You have a great “control” that is hard to beat, or
  2. You’re not stretching enough

Craft another treatment, something unexpected, and throw it into the mix. Consider a “well-crafted absurdity,” as Groupon did in its early days: make the call-to-action button really big, or offer something you think your audience wouldn’t want.

Or be a little edgy, addressing the unspoken reaction your visitor is likely having to your call to action, like OptinMonster does in this popup.

OptinMonster popup with edgy messaging.

OptinMonster speaks directly to the visitor’s disdain for popups and transforms disinterest into interest.

2. Segment Your Test

We recently spent several weeks of preparation, a full day of shooting, and thousands of dollars on talent and equipment to capture some tightly controlled footage for video tests on an apparel site. This is the sort of test that is “too big to be inconclusive,” and video is currently a very good bet for converting more search traffic.

Despite this, our initial results showed that the pages with video weren’t converting significantly higher than the pages without video. However, things changed when we looked at individual segments.

New visitors liked long videos, while returning visitors liked shorter ones. Subscribers converted at much higher rates when shown a video recipe with close-ups on the products. Visitors who entered on product pages converted for one kind of video while those coming in through the home page preferred another.

It became clear that, when lumped together, one segment’s behavior was canceling out gains by other segments.

How can you dice up your traffic? How do different segments behave on your site?

Your analytics package can help you explore the different segments of your traffic. If you have buyer personas, target them with your ads and create a test just for them. Here are some ways to segment:

  • New vs. Returning visitors
  • Buyers vs. prospects
  • Which page did they land on?
  • Which product line did they visit?
  • Mobile vs. computer
  • Mac vs. Windows
  • Members vs. non-members

3. Measure Beyond the Click

Here’s a news flash: We often see a drop in conversion rates for a treatment that has higher engagement. This may be counterintuitive. If people are spending more time on our site and clicking more — two definitions of “engagement” — then shouldn’t they find more reasons to act?

Apparently not. Higher engagement may mean that they are delaying. Higher engagement may mean that they aren’t finding what they are looking for. Higher engagement may mean that they are lost.

If you’re running your tests to increase engagement, you may be hurting your conversion rate. In this case, “Meh” may be a good thing.

In an email test we conducted for a major energy company, we wanted to know if a change in the subject line would impact sales of a smart home thermostat. Everything else about the emails and the landing pages was identical.

The two best-performing emails had very different subject lines but identical open rates and click-through rates. However, sales for one of the email treatments were significantly higher. The winning subject line had delivered the same number of clicks but had primed the visitors in some way, making them more likely to buy.

If you are measuring the success of your tests based on clicks, you may be missing the true results. Yes, it is often more difficult to measure through to purchase, subscription, or registration. However, it really does tell you which version of a test is delivering to the bottom line. Clicks are only predictive.

4. Print A T-shirt That Says, “My Control Is Unbeatable”

Ultimately, you may just have to live with your inconclusive tests.

Every test tells you something about your audience. If your audience didn’t care how big the product image was, you’ve learned that they may care more about changes in copy. If they don’t know the difference between 50% off and $15.00 off, test offers that aren’t price-oriented.

Make sure that the organization knows you’ve learned something, and celebrate the fact that you have an unbeatable control. Don’t let “Meh” slow your momentum. Keep plugging away until that unexpected test that gives you a big win.

Need help making your tests more conclusive? Explore our turnkey CRO services.

This post was originally published on March 1, 2016, and was adapted from an article that appeared on Search Engine Land. It has been updated with current research and examples.

Welcome to the ultimate guide to A/B testing, a comprehensive resource designed to help you master the art of optimizing your digital experiences. Whether you’re a marketer, a product manager, or a business owner, understanding and implementing A/B testing can be a game-changer for your online presence.

Effective CRO services include A/B testing because it’s the only data-driven method for increasing conversions and powering growth.

Key Takeaways

  • Data-Driven Decision Making: A/B testing removes guesswork by providing clear, measurable insights into what works and what doesn’t.
  • Continuous Improvement: Optimization is an ongoing process. Each test builds on previous learnings to create a culture of experimentation and innovation.
  • Understanding Your Audience: Pre-test research and segmentation help you tailor your tests to specific user behaviors and preferences for more impactful results.
  • Statistical Rigor: Achieving statistical significance ensures that your conclusions are reliable and not influenced by chance.
  • Collaboration and Communication: Sharing results with stakeholders and aligning them with business goals ensures long-term success.

A/B Testing Meaning

A/B testing, also known as split testing or A/B/n testing, is the process of comparing two or more versions of a webpage, app, or marketing asset to determine which one performs better. It takes a scientific approach to optimizing digital experiences, removing the biases inherent in decision-making based on personal preference.

A/B testing randomly divides users into two groups, serving each group a different variation (A or B) to determine which gets the best results — essentially letting your visitors tell you what they prefer. 

What to Expect from This Guide

This guide is designed to be your go-to resource for everything related to A/B testing. Here’s what you’ll learn:

  • The A/B Testing Process: Step-by-step guidance on conducting pre-test research, setting up tests, ensuring quality assurance, and analyzing results.
  • Common A/B Testing Strategies: Discover various testing strategies such as Gum Trampoline, Completion Optimization, Flow Optimization, and more.
  • Tips on Implementing Test Results: How to implement winning variations and document your findings for future improvements.
  • Common Challenges with A/B Testing: Tips on handling timing issues, visitor segmentation, and achieving statistical significance. 

By the end of this guide, you will have a thorough understanding of the entire A/B testing process and a framework for running your own A/B tests. Whether you are just starting out or upleveling your game, this guide will help you optimize your website and drive business growth through data-backed decisions.

Need help right away? Schedule a Strategy Session with the Conversion Scientists™.

Benefits of A/B Testing

A/B testing can significantly improve your online presence and drive business growth. Here are some of the key advantages of incorporating A/B testing into your optimization strategy:

Improved Performance and Engagement

  • Increased Conversion Rates: By comparing different versions of a web page, you can identify which page elements lead to higher conversions. 
  • Reduced Bounce Rates: Testing helps optimize content and design to keep users engaged and on-site longer. 
  • Enhanced User Experience: Through iterative testing, you can tailor your digital assets to user preferences, resulting in more satisfying interactions.

Data-Driven Decision Making

  • Quantitative Insights: A/B tests yield clear, measurable results that can be easily analyzed and presented to stakeholders. 
  • Risk Minimization: By testing changes before full implementation, you can avoid making page or website updates that could have a negative impact on key metrics. 
  • Informed Product Development: Test results can guide future product roadmaps and feature prioritization.

Business Growth and Innovation

  • Increased Revenue: Optimizing user experiences through A/B testing often leads to higher sales and improved return on investment (ROI). Even small improvements in conversion rates can translate into significant revenue increases over time.
  • Competitive Advantage: A culture of continuous testing and improvement can set your business apart from competitors who rely on guesswork. 
  • Deeper Audience Insights: A/B tests reveal valuable information about user behavior and preferences, enabling more targeted marketing strategies.

Operational Efficiency

  • Traffic Optimization: As traffic becomes more expensive, the rate at which online businesses are able to convert incoming visitors becomes more critical. A/B testing helps in optimizing this conversion rate, ensuring that you get the most out of your traffic.
  • Incremental Improvements: A/B testing allows you to make small improvements that are more cost-effective than major overhauls. 
  • Continuous Improvement: Regular testing enables ongoing refinement of your digital assets and marketing strategies. This continuous cycle of testing and improvement ensures that your website or app remains optimized and aligned with user needs over time.

By leveraging A/B testing, you can make informed decisions, improve user experiences, and ultimately drive business growth through data-backed optimizations. This scientific approach to optimization empowers you to move beyond guesswork and subjective opinions, ensuring that your digital strategies are always aligned with user needs and market trends.

The Key to A/B Testing Success

To enjoy the benefits of A/B testing, we need to be confident that our results are valid and can be used to make informed decisions. That’s where statistical significance comes in.

Statistical significance measures the probability that the observed difference between two variations (A and B) is due to chance rather than a real effect. It is typically expressed as a p-value. When the p-value is below 0.05, there is less than a 5% chance of seeing a difference this large if no real effect exists.

In practical terms, statistical significance confirms that the changes made in the winning variation do, in fact, perform better than the losing variation and are not random fluctuations in the data. That’s why we always run a test until statistical significance is achieved.

It’s tempting to end a test when one variation begins to outperform the other, but test results will fluctuate throughout the test. Only when statistical significance has been achieved can you be sure of the winner. That’s why most A/B testing platforms have built-in calculators. If yours doesn’t, you can find one online — here’s one by VWO.

Keep in mind, statistical significance does not indicate the impact or the practical importance of the change you’re testing. It simply gives you 95% confidence that the observed difference is real.

Now, let’s dive into the A/B testing process. We’ll start with a high-level look at the steps to A/B testing and then review each step in detail.

The A/B Testing Process: Step-by-Step Guide

The A/B testing process is a systematic and methodical approach to optimizing digital experiences. Here’s the six-step testing process used by CRO professionals:

Step 1: Conduct Pre-test Research. A/B testing starts with assessing your current data. Gather both quantitative data (numbers and metrics) and qualitative data (user feedback and behavior) to gain a comprehensive understanding of your website’s performance and user experience.

Jump to: How to perform pre-test research

Step 2: Generate Hypotheses. Based on your data analysis, create hypotheses about what changes might improve your website’s performance. This step marks the beginning of experimentation, where you identify potential page improvements to test. 

Jump to: How to generate a hypothesis

Step 3: Design A/B Split Tests. Create your control (the original, or A version) and variation (the B version) based on your hypotheses. Ensure that you’re testing one variable at a time to isolate its impact.

Jump to: How to design A/B split tests

Step 4: Run and Monitor Tests. Use A/B testing software to split your traffic between variations until statistical significance is achieved. Monitor the test closely to ensure data accuracy and catch any potential issues early.

Jump to: How to run and monitor tests

Step 5: Analyze Test Results. Analyze your test results for statistical significance. Look beyond just the overall results to understand how different segments performed.

Jump to: How to evaluate test results

Step 6: Implement Results. If you have a clear winner, implement the changes across your website. Continue to monitor performance to ensure the improvements are maintained over time.

Jump to: How to implement results

Now let’s explore each of these steps more deeply:

Step 1: Pre-Test Research

Pre-test research is the process of gathering and analyzing data to understand your visitors, their behaviors, and their needs before you start designing test variations. 

This step is key because, at its core, optimization is about understanding your visitors. To create effective A/B test variations, you need to know who is visiting your website, what they like and don’t like about your existing site, and what they want instead. This insight allows you to formulate hypotheses that have a higher likelihood of improving user experience and driving conversions.

Steps to Conduct Pre-Test Research

Gather Existing Data. Start by analyzing the data you already have. Use tools like Google Analytics, heatmaps, and session recordings to understand user behavior on your website. Look for patterns in:

  • Traffic sources: Where are your visitors coming from?
  • Bounce rates: Which pages are causing visitors to leave quickly?
  • Conversion rates: Which pages or elements are performing well or poorly?
  • User flow: How are visitors navigating your site?

For example, you might discover that visitors from social media platforms have a higher bounce rate on your product pages compared to visitors who click through from search engines. This insight could lead to a hypothesis about tailoring product page content for different traffic sources.

Conduct User Research. Quantitative data tells you what is happening, but qualitative research helps you understand why. These methods can help you gather input from your visitors:

  • Surveys: Ask visitors about their experience on your site.
  • Feedback forms: Collect specific comments about certain pages or features.
  • Usability testing: Observe how real users interact with your site in real time.

For instance, a survey might reveal that customers find your checkout process confusing, leading your team to a hypothesis about simplifying the checkout flow.

Analyze Competitor Data. Study your competitors’ websites to benchmark your own performance and set realistic goals. This can inspire new ideas and opportunities for testing. Look at their:

  • Design and layout
  • Messaging and value propositions
  • Features and functionality

This analysis might show that competitors are using trust badges prominently in their checkout process, suggesting a test idea for your own site.

Identify Key Metrics and Goals. Define the specific metrics you want to improve through A/B testing. For example, for an ecommerce site, you might aim to improve:

  • Conversion rate
  • Average order value
  • Revenue per visit
  • Click-through rate

Ensure these metrics align with your overall business objectives. If your goal is to increase customer lifetime value, you might focus on metrics related to repeat purchases or subscription sign-ups.

Segment Your Audience. Identify different user segments that behave differently or have different needs, such as:

  • New vs. returning visitors
  • Mobile vs. desktop users
  • Customers vs. strangers
  • Traffic source

Understanding these segments allows you to create more targeted tests and analyze results more effectively. For instance, you might find that a certain call-to-action performs better for new visitors but worse for returning customers.

Step 2: Hypothesis Generation

Once you’ve completed your pre-test research, it’s time to formulate educated guesses, or hypotheses, about what changes might improve your website’s performance. This is where data-driven insights meet creative problem-solving.

What Is a Hypothesis in A/B Testing?

In A/B testing, a hypothesis is a testable idea. It’s typically structured as an if-then statement that includes a proposed change, the predicted outcome if the change is implemented, and how success will be measured:

“If ___, then ___, as measured by ___.”

For example: “If we move the form above the fold, then sign-ups will increase by 10%, as measured by on-page form completions.”
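
If you want to keep hypotheses structured for later documentation, a simple record works well. Here’s a sketch (the field names are our own invention, not a standard):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One testable idea: 'If ___, then ___, as measured by ___.'"""
    change: str      # the "if": what you will modify
    prediction: str  # the "then": the outcome you expect
    metric: str      # how success will be measured
    rationale: str   # the "because": why you expect it to work

h = Hypothesis(
    change="we move the form above the fold",
    prediction="sign-ups will increase by 10%",
    metric="on-page form completions",
    rationale="heatmaps show most visitors never scroll to the form",
)
print(f"If {h.change}, then {h.prediction}, as measured by {h.metric}.")
```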

For more examples, read How to Create Testing Hypotheses That Drive Real Profits

The Hypothesis Generation Process

When creating a hypothesis, it’s a good idea to follow this process:

  1. Identify Problems and Opportunities. Pinpoint specific issues that need addressing or areas with potential for improvement.
  2. Analyze Data. Thoroughly review data related to the issue. Look for patterns, anomalies, and areas of underperformance.
  3. Brainstorm Solutions. For each problem or opportunity, brainstorm potential solutions. Think outside the box and collect a wide range of ideas.
  4. Formulate Hypotheses. Turn your proposed solutions into structured hypotheses.
  5. Prioritize. Not all hypotheses are created equal. Each hypothesis should be prioritized according to its potential impact, ease of implementation, and available resources.

Note: While hypothesis creation is listed as step #4 above, in many scenarios, hypotheses are developed as part of step #1. Data is then collected to better understand the problem and potential solutions. When this approach is taken, you may uncover data or possible solutions that weren’t considered in step #1, invalidating the original hypothesis. When this happens, you can revise and reprioritize the original hypothesis based on the new data.

Tips for Effective Hypothesis Generation

It’s easy to generate hypotheses. The challenge is to create good hypotheses. Here’s how you can ensure your hypothesis pipeline is filled with test-worthy ideas.

  1. Focus on One Change. A test can be designed to test multiple hypotheses, but each hypothesis should be specific and unambiguous.
  2. Be Specific. Clearly state the change you’re making and the outcome you expect.
  3. Make it Measurable. Ensure your predicted outcome can be quantified and measured.
  4. Base it on Data. Your hypothesis should be grounded in research and data analysis, not just hunches.
  5. Keep it Realistic. Keep your predictions within the realm of possibility.
  6. Consider Multiple Factors. Remember that user behavior is complex. Consider various aspects like user psychology, design principles, and industry best practices when forming your hypotheses.
  7. Document Your Reasoning. Always include the “because” part of your hypothesis. This helps you track your thought process and can inform future tests.

Get the Conversion Sciences Hypothesis Tracker here.

Step 3: A/B Split Test Design

An A/B test is often called a “split” test because it involves splitting or dividing website traffic between two versions of a webpage or feature. The term “split” refers to how the audience or traffic is divided between the control version (A) and the variation (B).

A/B split testing is most effective when you know how to design the test to achieve your goals. This includes your testing strategy, objectives, the variables you’ll test, your segmentation plan, and the set-up of your tracking. 

In a minute, I’ll show you how to design a split test, but first, we need to look at seven common A/B testing strategies.

Common A/B Testing Strategies

An A/B testing strategy is a systematic approach to planning, executing, and analyzing experiments. At Conversion Sciences, we use a variety of strategies, each appropriate for different situations. 

Will you test big, disruptive changes to the webpage or make small, incremental tweaks? Will you optimize for general outcomes, such as completion rate or flow, or target specific pain points that create friction for visitors? Your strategy choice is based on risk tolerance, the current performance of the website, the resources available, and your specific goals. 

Most optimizers choose one of these seven testing strategies.

Gum Trampoline

  • What: Focus on small, incremental changes to improve user experience and performance.
  • When: When bounce rates are high, especially from new visitors, or when you need to make continuous, minor adjustments.
  • How: Implement small, iterative tests on elements like button colors, font sizes, or minor layout adjustments to accumulate incremental improvements over time.

Completion Optimization

  • What: Improving the completion rate of a specific process or user journey.
  • When: When a high percentage of people are abandoning a checkout process, lead generation form, or registration form.
  • How: Improve the completion rate of that action by testing variations such as simplified forms, clearer instructions, or reduced steps in the process.

Flow Optimization

  • What: Enhancing the user journey or flow through a website, app, or any digital experience.
  • When: When you notice high drop-off rates or friction points in the user journey.
  • How: Streamline processes, improve navigation and usability, and test different layouts to make the user experience more seamless and efficient.

Minesweeper 

  • What: Identifying and fixing pain points in the user journey.
  • When: When things are broken all over the site, or when there are multiple areas of friction that need immediate attention.
  • How: Use analytics and user feedback to identify critical pain points and then test fixes for these issues to improve overall user satisfaction and reduce drop-offs.

Big Rocks 

  • What: Addressing significant issues that have a large impact on user experience and conversion rates.
  • When: Your site has a long history of optimization and ample evidence that an important component is missing or underperforming.
  • How: Identify key areas that, when improved, can have a substantial impact on performance. Test significant changes such as new features, major layout overhauls, or critical functionality improvements.

Big Swings

  • What: Testing more radical changes to see substantial improvements.
  • When: You are looking for significant improvements and are willing to take calculated risks. This could be during periods of low traffic or when you have a robust testing infrastructure in place.
  • How: Implement bold, innovative changes such as completely new designs, alternative user flows, or experimental features to see if they can drive substantial improvements in key metrics.

Go Nuclear

  • What: Making drastic, fundamental changes to the digital experience.
  • When: When you’re changing the backend platform, rebranding the company or product, or undergoing a major redesign.
  • How: Conduct thorough A/B testing before and after the major change to ensure the new version performs better than the old one. This involves extensive planning, testing of critical components, and careful analysis to mitigate risks and maximize benefits.

Dive more deeply into the 7 core testing strategies essential to optimization.

How to Design an A/B Split Test: 7 Steps

Effective A/B testing follows a structured, scientific approach to ensure objectivity and validity. Here’s how to set up a robust A/B test:

Choose Your A/B Testing Tool. There are several A/B testing platforms on the market, each with its own strengths, so choose one that aligns with your specific needs and technical capabilities. Here are three popular options:

  • Optimizely: A robust platform offering advanced features and integrations, ideal for larger enterprises with complex testing needs.
  • VWO: Known for its user-friendly interface and comprehensive features, VWO is suitable for businesses of all sizes.
  • AB Tasty: Combines A/B testing with personalization features, making it a good choice for businesses focusing on tailored user experiences.

Note: Read pro optimizers’ recommendations for A/B testing platforms here. 

Define Clear Objectives. Start by clearly defining your test goals and hypotheses. Your goals should be specific, measurable, and aligned with your overall business objectives. For example:

  • Goal: Increase newsletter sign-ups by 20%
  • Hypothesis: Changing the CTA button color from blue to green will increase sign-ups by 10%, as measured by form completions because people associate green with “go.” 

Note: Your objectives will inform the type of A/B test you choose from the seven options we reviewed above.

Choose Variables to Test. Select one primary variable to change in your variation. Common test elements include:

  • Headlines or copy
  • Call-to-action buttons (color, text, placement)
  • Images or videos
  • Page layout
  • Form fields

Create Variations. Design your control (original version) and variation(s) based on your hypothesis. Remember to change only one element to isolate its impact.

Determine Sample Size and Test Duration. Calculate the required sample size to achieve statistical significance. Your testing tool likely has a calculator, but you can also use this one. Keep in mind, your sample size depends on several factors:

  • Your current conversion rate: Lower conversion rates usually require larger sample sizes to detect significant changes
  • The minimum detectable effect (MDE): This is the smallest improvement you want to be able to detect. Smaller effects require larger sample sizes.
  • Statistical significance level: To achieve at least 95% confidence that the results are not due to chance, you’ll need larger sample sizes.
  • Statistical power: Usually set at 80%, this is the probability of detecting a real effect. Higher statistical power requires larger sample sizes.
  • Population size: For smaller target populations, you’ll need a larger sample size to ensure representativeness. (If you don’t have enough traffic, we don’t recommend A/B tests. Listen to this episode of the Two Guys on Your Website podcast to learn conversion optimization techniques that work for small populations.)
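
Once you know the required sample size, test duration is simple arithmetic: total sample needed divided by eligible daily traffic. An illustrative sketch with hypothetical numbers:

```python
def test_duration_days(sample_per_arm, n_variations, daily_visitors,
                       coverage=1.0):
    """Rough test length in days; coverage is the share of traffic
    actually entered into the test (1.0 = all of it)."""
    return sample_per_arm * n_variations / (daily_visitors * coverage)

# Hypothetical: 78,000 visitors per arm, 2 arms, 10,000 visitors per day
print(round(test_duration_days(78_000, 2, 10_000), 1))  # about 15.6 days
```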

Plan for Segmentation. Consider how you’ll analyze results for different user segments (e.g., new vs. returning visitors, mobile vs. desktop users). 

Set Up Tracking. Accurate tracking ensures meaningful results. Here are the key tracking elements that need to be in place:

  • Analytics: Set up Google Analytics event tracking for key user actions, and configure goals and conversions relevant to your test objectives.
  • A/B testing tool integration: Make sure your chosen testing platform is connected to Google Analytics, your content management system (CMS) and customer relationship management (CRM) systems, and any development tools you use.
  • Data validation: Before launching your test, perform an A/A test to validate your tracking setup is working correctly and not introducing any bias or errors.
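
One useful validation to run alongside the A/A test is a sample-ratio-mismatch check, which flags a traffic split that deviates from what you configured. A minimal sketch (illustrative, not a feature of any specific platform):

```python
from math import erfc, sqrt

def srm_check(visitors_a, visitors_b, expected_share_a=0.5):
    """Chi-square test (1 df): does the observed split match the config?"""
    total = visitors_a + visitors_b
    exp_a = total * expected_share_a
    exp_b = total - exp_a
    chi2 = (visitors_a - exp_a) ** 2 / exp_a + (visitors_b - exp_b) ** 2 / exp_b
    p_value = erfc(sqrt(chi2 / 2))  # two-sided tail for 1 degree of freedom
    return chi2, p_value

chi2, p = srm_check(10_240, 9_760)  # hypothetical counts from a 50/50 split
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a tiny p suggests a broken split
```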

If prioritizing hypotheses and designing tests seems challenging, consider outsourcing it to the CRO experts at Conversion Sciences.

Step 4: Running an A/B Test

Once you’ve designed your A/B test, you’re ready to set it up and run it. This process involves careful planning, execution, and monitoring to ensure reliable results.

  1. Create the test versions. The original page is your control (Version A). Create a variation (Version B) to test your hypothesis. 
  2. Set up the test in A/B testing software. The software will split your traffic between these versions. In most cases, each page gets 50% of the traffic, but you can set the percentages in the software. 
  3. Run the test until you reach statistical significance. Most tests run for at least two weeks, but a test may go longer if it hasn’t reached statistical significance yet.

A/B testing software VWO’s test report showing the winning variation.

How to Ensure Quality Assurance

Quality assurance (QA) safeguards the integrity of your experiments and the reliability of your results, minimizing errors and data discrepancies. These QA processes address the technical user experience (UX) and data integrity aspects of a successful A/B test:

  1. Pre-test Validation. Before launching a test, thoroughly check both the control and variation(s) across different devices, browsers, and operating systems to ensure they display and function correctly.
  2. Consistent User Experience. Verify that users have a consistent experience regardless of which version they see. This includes checking that all links work, forms submit properly, and there are no visual glitches.
  3. Data Accuracy and Tracking. Data is key to valid A/B tests. Double-check that your analytics tools are correctly set up to track the metrics you’re testing.
  4. Fallback and Error Handling. Prepare for potential technical issues by implementing fallback options. If the variation fails to load, ensure users still see the control version to avoid lost conversions.
  5. Continuous Monitoring. Once the test is live, continuously monitor its performance. Look for any anomalies in the data or unexpected user behavior that might indicate a problem with the test setup.

Step 5: Test Results Analysis

Proper analysis ensures that you draw accurate conclusions and make data-driven decisions. Here’s a comprehensive guide on how to evaluate your A/B test results:

Statistical Significance. The first and most crucial aspect of evaluating test results is determining whether you reached statistical significance. This indicates whether the observed differences between variations are likely due to chance or represent a real effect.

  • Confidence Level: Typically, a 95% confidence level is used in A/B testing. This means there’s only a 5% chance that the observed difference is due to random variation.
  • P-value: Look for a p-value of less than 0.05, which corresponds to the 95% confidence level. A lower p-value indicates stronger evidence against the null hypothesis.
  • Sample Size: Ensure your sample size is large enough to detect meaningful differences. Smaller sample sizes can lead to inconclusive or misleading results.

Note: Take a deep dive into statistical significance in this guide on statistical hypothesis testing.

Practical Significance. While statistical significance tells you if results are reliable, practical significance determines if the change is worth implementing.

  • Effect Size: Consider the magnitude of the improvement. A 0.5% increase might be statistically significant but may not justify the effort of implementation.
  • Business Impact: Translate the percentage improvement into actual revenue or other key business metrics to assess the real-world impact.
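
One way to look at reliability and magnitude together is a confidence interval around the observed lift. A minimal sketch with hypothetical counts:

```python
from math import sqrt

def lift_confidence_interval(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% CI for the absolute difference in conversion rates (B minus A)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

lo, hi = lift_confidence_interval(400, 20_000, 470, 20_000)
print(f"absolute lift: {lo:.4f} to {hi:.4f}")  # excludes 0 = significant
```

If the whole interval sits above zero but its low end is a negligible lift, the result can be statistically significant without being practically significant.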

Segment Analysis. Don’t just look at overall results. Analyze how different segments performed:

  • Device Types: Compare results across desktop, mobile, and tablet users.
  • Traffic Sources: Examine how organic, paid, and direct traffic responded to changes.
  • New vs. Returning Visitors: These groups often behave differently and may respond differently to variations.
  • Geographic Locations: If applicable, look at performance across different regions.

Secondary Metrics. While focusing on your primary conversion goal, it’s also important to evaluate secondary metrics:

  • Engagement Metrics: Time on page, bounce rate, pages per session.
  • Micro-conversions: Newsletter signups, add-to-carts, or other steps in the funnel.
  • Long-term Metrics: Customer lifetime value, retention rates (if data is available).

Validity Checks. Ensure your test results are valid:

  • Test Duration: Confirm the test ran for an appropriate length of time, typically at least two weeks and covering full business cycles.
  • External Factors: Consider any external events (e.g., marketing campaigns, seasonality) that might have influenced results.
  • Technical Issues: Verify there were no technical problems during the test that could have skewed results.

Analyzing Inconclusive Results. If your test is inconclusive:

  • Review Segments: Look for any segments where there were significant differences.
  • Power Analysis: Determine if you need a larger sample size to detect an effect.
  • Refine Hypothesis: Consider if your initial hypothesis needs adjustment based on the data collected.

Documentation and Learning. Regardless of the outcome:

  • Document Findings: Record all aspects of the test, including hypothesis, variations, results, and insights gained.
  • Share Insights: Communicate results with stakeholders, explaining both what happened and potential reasons why.
  • Iterate: Use learnings to inform future test ideas and hypotheses.

By thoroughly evaluating your A/B test results using these methods, you can ensure that your optimization efforts are based on solid data and insights, leading to more effective improvements in your digital experiences.

Step 6: Implementing Test Results

After conducting a successful A/B test, the next crucial step is to document the results, update stakeholders, and, if you have a winner, implement it effectively. 

Documenting and Learning

Proper documentation of your A/B tests builds institutional knowledge and informs future optimization efforts. That’s why optimizers always record test results and insights, including:

  • Background and context
  • Problem statement and hypothesis
  • Test setup and methodology
  • Results and analysis
  • Insights gained, including any unexpected findings

Documentation should be comprehensive yet concise. We try to strike a balance between providing essential information and avoiding unnecessary details. Use spreadsheets to prioritize test ideas, track experiments, and calculate test results. Docs are preferred for an in-depth analysis of each test — they allow you to capture your insights and learnings and share them with stakeholders.

Insights from previous tests will fuel future optimization efforts. Consider creating a testing roadmap that builds on your learnings. For instance, if highlighting reviews was successful, your next test might explore different ways of presenting these reviews or testing review-based elements on other pages of your site.

Updating Decision-Makers

To ensure the long-term success of your optimization program, it’s crucial to communicate your results effectively to stakeholders. Here’s how to do that:

Communicating Results and Plans to Stakeholders. Prepare a clear, concise presentation of your test results, focusing on the metrics that matter most to your business. For example, instead of just reporting a 15% increase in click-through rate, translate this into estimated revenue impact.

Linking Results to Business Goals. Show how your A/B testing program aligns with and contributes to broader business objectives. For instance, if a key company goal is to increase customer lifetime value, demonstrate how your optimizations in the onboarding process are leading to higher retention rates.

Proposing Next Steps. Based on your results and learnings, present a clear plan for future tests and optimizations. This might include a prioritized list of upcoming tests, resource requirements, and projected impacts.

In many organizations, only 10% to 20% of experiments generate positive results, with just one in eight A/B tests driving significant change.

As mentioned above, if a winning test won’t drive significant conversion rate improvements, you may opt not to implement the change — it simply isn’t worth the effort. However, when you do implement a change, you need to be sure the improvements you saw in the test translate into real-world performance gains. Here’s how to do that.

Implementing the Winning Version

Once you’ve identified a statistically significant winner in your A/B test, it’s time to implement this version across your entire user base. You can do this in one of two ways:

Option 1: Replacing the Original with the Winning Variant

This step involves updating your website or application to reflect the changes tested in the winning variation. For example, if your test showed that changing a call-to-action button from “Sign Up” to “Start Free Trial” increased conversions by 15%, you would update all instances of this button across your site.

Option 2: Gradual Rollout

In some cases, especially for high-traffic websites or significant changes, it’s wise to implement the winning version gradually. Initially, you would roll out the change to a small percentage of users (say, 10%) and slowly increase the percentage of visitors who see it until it’s fully launched. This approach allows you to monitor for any unexpected issues that might not have shown up during the test.
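
Gradual rollouts are often implemented with the same hashing idea used for test assignment: bucket each visitor from 0 to 99 and serve the new version to buckets below the rollout percentage. An illustrative sketch:

```python
import hashlib

def in_rollout(visitor_id, rollout_percent):
    """Deterministically include ~rollout_percent% of visitors.

    Raising the percentage only adds visitors; everyone already
    in the rollout stays in it."""
    bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent

print(in_rollout("visitor-12345", 10))  # True for roughly 10% of visitor IDs
```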

Monitoring Post-Implementation Performance

After implementing the winning version, continue to monitor performance to be sure the improvements seen during the test phase are maintained in the long term. Make sure you’re tracking key metrics such as conversion rates, engagement levels, and revenue figures.

False positives occur for a variety of reasons. A test might fail to reach statistical significance or include too many variations. Or maybe it falls prey to the history effect — something in the outside world that affects people’s perception of your test. We’ve seen this happen during holiday seasons, when a news event captured people’s attention, and even when a competitor launched a new product.

If you implement a change and see a drop in site metrics, you need to respond quickly. First, revert to your control page so you can stop the bleeding. Then try to identify the issue:

  • Study your post-implementation data to confirm that the drop in metrics was due to the change and not other factors. 
  • Review the original test to find issues that might have led to a false positive.
  • Check for external factors that could have influenced the test: seasonal trends, marketing campaigns, competitor actions, or technical issues.
  • Consider running a follow-up test to verify the original findings and explore alternative hypotheses.

By following these steps, you ensure that the insights gained from your A/B tests are effectively implemented, documented, and leveraged for ongoing optimization. This systematic approach not only improves your current performance but also builds a foundation for continuous improvement in your digital experiences.

Common Challenges in A/B Testing

A/B testing is a powerful tool for optimization, but it comes with its own set of challenges. Understanding and addressing these challenges empowers you to conduct effective tests and obtain reliable results.

Timing Issues and External Factors

As mentioned above, we need to be aware of timing issues and external factors that could deliver fast positives or negatives. Here are a few of the external issues that can impact the validity of an A/B test:

Seasonal Variations: User behavior often changes based on seasons, holidays, or specific times of the year. For example, an e-commerce site might see drastically different conversion rates during the holiday shopping season compared to other times of the year.

Market Fluctuations: Economic changes, industry trends, or sudden market shifts can impact user behavior and skew test results. A stock market crash, for instance, could significantly affect conversion rates on a financial services website, and we all experienced the impact of the pandemic and the uncertain economy that followed.

Competitor Actions: User behavior is easily impacted by competitor promotions or new product launches. If a major competitor starts a significant discount campaign during your test, it could affect your results.

Media Events: News cycles, viral content, or major world events can temporarily change user behavior and attention. A breaking news story related to your industry could suddenly increase or decrease engagement with your site.

Technical Issues: Unexpected technical problems, such as server downtime or slow loading speeds, can impact user experience and skew test results if they occur unevenly across variations.

Marketing Campaign Changes: Changes in your own marketing efforts, such as starting or stopping an ad campaign, can lead to different types of traffic arriving at your site, potentially affecting test results.

Timing issues can potentially lead to false positives or negatives, but in most cases, A/B testing mitigates them by testing the variations simultaneously. In a sequential (before-and-after) test, changes made at different times are compared, so different external conditions exist in each test period. A/B testing ensures both the control and treatment groups are exposed to the same external factors during the testing period, allowing you to isolate the impact of the elements you’re testing.

Visitor Segmentation

Another significant challenge in A/B testing is ensuring your results accurately represent your entire audience. This challenge results from the complexity and diversity of user behaviors across different segments of your audience. For example: 

Audience Diversity. Your website attracts a wide variety of visitors. Some are tire kickers, students, or researchers. Only a percentage are potential customers, and even these are not a homogeneous group. They can vary significantly based on factors such as:

  • Demographics (age, gender, location)
  • Device types (desktop, mobile, tablet)
  • Traffic sources (organic search, paid ads, social media, email)
  • User intent (browsing, researching, ready to purchase)
  • New vs. returning visitors

Each of these segments may interact with your website differently and respond to changes in unique ways.

Sample Representation. When conducting an A/B test, you’re typically working with a sample of your overall audience. It’s challenging to ensure the test sample accurately represents your entire user base. If your sample is skewed towards a particular segment, your test results may not apply to your broader audience.

Time-based Variations. User behavior can vary based on time of day, day of the week, or season. For example, B2B websites might see different behaviors during business hours compared to evenings or weekends. Ecommerce sites often experience seasonal fluctuations. These temporal variations can impact test results if not properly accounted for.

Geographic and Cultural Differences. For websites with a global audience, cultural differences and regional preferences can significantly impact user behavior. A change that resonates with users in one country might not have the same effect in another. 

Here’s how to ensure your A/B tests accurately represent your entire audience: 

  1. Use proper segmentation in your analysis to understand how different user groups respond to changes.
  2. Run tests long enough to capture a representative sample of your audience.
  3. Consider running separate tests for significantly different audience segments.
  4. Use stratified sampling techniques to ensure all important segments are proportionally represented in your test.

Here are some common ways to segment your traffic:

  • Traffic source (e.g., organic search, paid ads, social media, email campaigns, direct)
  • Device type (desktop, mobile, tablet)
  • New vs. returning visitors
  • Geographic location
  • Time of day or day of the week
  • Customer lifecycle stage (e.g., first-time visitor, repeat customer, loyal customer)
  • Referring websites
  • Landing pages
  • User demographics (age, gender, income level, etc.)
  • Browser type
  • Operating system

By analyzing these segments separately, you can uncover valuable insights about how different groups interact with your site and tailor your optimization efforts accordingly.
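To make the segment-level analysis concrete, here’s a minimal sketch with hypothetical data (pandas is assumed) that breaks A/B results down by device segment:

```python
# A minimal sketch (hypothetical data) of breaking A/B results down by
# segment to spot groups that respond differently to the variation.
import pandas as pd

visits = pd.DataFrame({
    "segment":     ["mobile", "mobile", "desktop", "desktop"],
    "variation":   ["A", "B", "A", "B"],
    "visitors":    [5_000, 5_000, 5_000, 5_000],
    "conversions": [150, 140, 200, 290],
})

visits["conv_rate"] = visits["conversions"] / visits["visitors"]
print(visits.pivot(index="segment", columns="variation", values="conv_rate"))
# Here B wins overall (4.3% vs. 3.5%) but loses slightly on mobile --
# a cue to dig deeper, since blended averages can hide segment-level effects.
```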

Creating a Culture of Testing

For more than a decade, we’ve advocated for a culture of optimization — equipping everyone to optimize their results through experimentation and A/B testing — because the data it provides can quickly scale growth and profits.

It’s easy to fall into the trap of limiting A/B testing to professional conversion optimizers. But practitioners aren’t the only people capable of running successful tests, especially with today’s technology. The key is to give your team the training, tools and workflows they need.

Booking.com is a good example. For more than ten years, they’ve allowed anyone to test anything, without requiring approval. As a result, they run more than 1,000 rigorous tests at a time, which means that at any given moment, two visitors are likely to see two different variations of the same page.

Comparison of two booking.com pages

I opened Booking.com from two computers at the same time and got two variations of the page.

Experimentation has become ingrained in Booking.com’s culture, allowing every employee to test their ideas for improved results. Booking.com keeps a repository of tests, both failures and successes, so people can verify the test hasn’t been previously performed.

Conclusion

For optimizers, A/B testing is one of our most powerful strategies for improving website performance, driving conversions, and making data-driven decisions. In this guide, we’ve walked through the entire A/B testing process — from understanding the statistical foundations of hypothesis testing to running tests effectively and overcoming common challenges. 

Next Steps

Now that you have a comprehensive understanding of A/B testing, it’s time to put this knowledge into action. Start by identifying areas of your website that could benefit from optimization. Conduct thorough pre-test research, formulate hypotheses based on data, and use the tools and strategies outlined in this guide to run effective tests.

Remember, A/B testing is not a one-time activity — it’s a continuous process. By committing to regular experimentation and learning from every test, you can stay ahead of competitors, delight your users, and drive meaningful business growth.

Whether you’re just starting out or looking to refine your existing program, this guide serves as a roadmap for mastering A/B testing. The journey of optimization is ongoing, but with the right approach, tools, and mindset, the rewards are well worth the effort. 

Need help implementing A/B testing on your website? At Conversion Sciences, we have a proven mix of conversion optimization services that can be implemented for sites in every industry. 

To learn more, book a conversion strategy session today.

A well-crafted landing page is one of the most powerful tools in digital marketing, capable of turning casual visitors into loyal customers. But if you’ve ever created a landing page, you know how much strategic planning and execution they require. 

At Conversion Sciences, after years of perfecting the art and science of landing page optimization, we’ve identified six essential elements that can transform an ordinary landing page into a high-performing asset. This guide will explore these elements in detail, share actionable tips, and provide real-world insights to help you master landing page optimization.

What Makes a Successful Landing Page?

Landing pages are stand-alone web pages designed to perform two jobs:

  1. Keep the promise made in the ad, email, social post, or link that preceded the page.
  2. Ask your visitor to do something, like filling out a form, downloading a resource, or purchasing. 

Unlike homepages, which serve multiple purposes, landing pages focus on a single goal. This specificity makes them one of your most effective tools for converting visitors into leads or customers.

Need some inspiration? Browse our Lead Generation Landing Page Examples.

No time to learn it all on your own? Check out our turn-key Conversion Rate Optimization Services and book a consultation to see how we can help you.

The Benefits of Well-Optimized Landing Pages

We’ve seen the difference even small improvements in landing page performance can make. Here are three of the top outcomes you’ll experience when your landing pages are doing their job.

1. Maximize ROI

Most web pages don’t convert — they’re not designed to. But landing pages are designed to convert. They outperform popups, signup boxes, and wheels of fortune for opt-ins, with a 23% conversion rate.

landing page performance compared to popups, signup boxes, and wheels of fortune

Landing pages outperform other lead generation tactics with a 23% conversion rate.

 Across all industries, the average landing page conversion rate is 2.35%, according to Wordstream. But the top 10% of landing pages achieve conversion rates of 11.45% or higher, depending on the landing page type.

Landing Page Conversion Rate Benchmarks

Benchmarks for landing page conversion rates vary widely by industry, but these are the averages.

2. Simplify Decision-Making

The human brain craves simplicity, especially when faced with choices. High-converting landing pages capitalize on this by reducing cognitive load. They use clear calls-to-action (CTAs), concise copy, and a streamlined design to guide visitors toward a single desired outcome.

Imagine landing on a page that bombards you with navigation menus, multiple CTAs, and walls of text. It’s overwhelming, right? Now compare that to a page with a single, bold CTA — like “Sign Up Now” — and enough information to convince you to act. This simple page removes distractions, making it easier for visitors to decide.

You can thank Hick’s Law for this insight. It tells us that the more choices you offer, the longer it takes for someone to make a decision. And it explains why a simplified landing page can move visitors quickly down the conversion funnel. 
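Hick’s Law is commonly stated as a simple formula, where T is the average decision time, n is the number of equally likely choices, and b is an empirically fitted constant that varies by context:

T = b · log2(n + 1)

As a rough worked example, trimming a page from seven competing CTAs to one drops the log term from log2(8) = 3 to log2(2) = 1, cutting choice-driven decision time by about two-thirds if b stays constant.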

💡 In one of our projects, simplifying the layout of an eCommerce landing page led to a 15% reduction in bounce rates and a corresponding increase in sales.

3. Enhance Engagement

Grabbing and holding a visitor’s attention is no small feat. High-converting landing pages achieve this by delivering tailored content that speaks directly to the visitor’s needs and interests.

Take, for instance, an online retailer promoting a seasonal sale. A landing page promoting winter apparel with targeted messaging — like “Stay Warm and Save Big This Winter” — is far more engaging than a generic sales page. Add to that personalized content, such as product recommendations based on browsing history, and the page becomes a magnet for conversions.

Engagement isn’t just about content, though. The design and user experience play crucial roles. Eye-tracking studies show that users focus on areas with clean layouts, contrasting colors, and compelling visuals. By optimizing these elements, you can create a page that captivates visitors and keeps them moving toward your goal. For example, when we improved UX/UI for Boingo, they saw a 2x increase in sales and a 38% lift in overall revenue.

Boingo landing page

Optimizing UX/UI gave Boingo a 2x increase in sales and a 38% lift in overall revenue.

Creating a Successful Landing Page

It’s easy to obsess over landing page best practices: Is this the right font? Is the color theme too loud, not loud enough? Too many options? Not enough options? What is with this border?

But bottom line, a landing page is ultra-simple. It keeps a promise and entices visitors to take action. That’s it. So let’s break down the basic components of a landing page and what you need to think about for each.

1. A Compelling Offer

Your offer is the foundation of your landing page. It’s the “why” behind the action you want visitors to take. To make your offer irresistible:

  • Make it About Them: Focus on your visitor and what they want.
  • Address Pain Points: Let them know you understand their challenges.
  • Present a Solution: Give them a painless way to solve their problem.

A compelling offer is clear and specific. At its most basic, it tells visitors what they’ll get if they take action. For example, a SaaS company might offer a free trial with messaging like, “Get 14 days of unlimited access—no credit card required.”

Look at how simply Netflix presents its offer:

  • What you want/get: Unlimited movies, TV shows, and more
  • Cost: “Starts at $6.99.”
  • Risk reduction: “Cancel anytime.”
Netflix landing page, which highlights an offer, cost, risk reduction, and opt-in form.

Netflix has a streamlined landing page that makes it easy to say yes.

2. A Simple Form

Forms are where conversions happen, but they can also be where visitors drop off. To increase form completion rates:

  • Keep It Minimal: Ask only for essential information. An email address is often enough to start a relationship.
  • Provide Context: Use microcopy to explain why you’re asking for certain details.
  • Use an “invitational” CTA on the button: “Get Started” creates less friction than a transactional CTA like “Buy Now.”

Pro Tip: Our clients have seen conversion rates increase by up to 40% after reducing their forms from six fields to three. The more form fields your visitor has to fill out, the more friction they feel, and the less likely they are to convert. 

If in doubt, look at Google’s homepage:

Google's plain, white landing page

Google’s landing page is ultra simple with no friction

The offer is implied: search for anything. The form has just one field. And its buttons, “Google Search” and “I’m Feeling Lucky,” are invitational. 

This simple white page makes a complete landing page. But of course, for your landing pages, you’ll want a few more conversion elements, so let’s explore those elements and how they will help you create a successful landing page.

3. Persuasive Copy

Not all visitors arrive ready to convert. Even slight objections can prevent them from taking action. Clear, compelling copy answers objections, builds trust, and explains why taking action today is smarter than waiting.

Short, text-based descriptions of the landing page offer can make a difference. Keep in mind persuasive copy doesn’t focus on your business or even the product. Always talk about what’s in it for the visitor. 

  • Speak directly to their concerns: use empathetic language to acknowledge potential doubts.
  • Build confidence: highlight benefits with stats or testimonials.
  • Emphasize simplicity: reassure them that completing the action is quick and easy.

At Conversion Sciences, we’ve seen as much as a 42% lift in conversions after increasing a landing page’s persuasive power. (Here are some of our persuasive writing techniques.)

But persuasive copy isn’t your only option. Media can also make a difference. Studies have found that 9 out of 10 people want companies to create more videos, and 38.6% of marketers report that videos positively impact their conversion rate.

Whether you’re using text or video, try to create urgency so people take immediate action. Phrases like “Limited Time Offer” or “Only 5 Spots Left” can help — just make sure the urgency is real. If you’re selling a digital product, you can’t run out of inventory, but you can make your offer available only until midnight.

Quick Tip: Generic statements don’t drive action. To persuade, you need to be ultra-specific. Notice how the numbers in these examples draw the eye:

  • A fitness coach promoting a meal plan: “Discover the secret to losing 10 pounds in 30 days—no fad diets required.”
  • A trainer promoting an online course: “Join 11,377 professionals who’ve leveled up their careers with our training.”

4. Proof and Trust Elements

Trust is the linchpin of online conversions. Without it, even the most compelling offer can fail. To establish trust:

  • Leverage Social Proof: Add testimonials, case studies, or customer reviews.
  • Show Trust Badges: Highlight SSL certificates, security seals, or payment logos.
  • Use Recognizable Logos: Display well-known client or partner brands to boost credibility.

Example: After adding trust elements to an ecommerce client’s product pages and optimizing key elements of their value proposition, they saw an 18% lift in revenue per session, a 12% growth in customer acquisition rate, and a 6% increase in average order value. Proof and trust can significantly improve a landing page’s performance.

Galeton ecommerce landing page

This Galeton ecommerce landing page highlights quality ratings to prove the quality of their products and increase trust.

5. Relevant Images

Visuals aren’t just about aesthetics; they play a critical role in conveying value and enhancing understanding. Effective images:

  • Illustrate the Offer: Use screenshots, product photos, or mockups to make intangible offers feel tangible.
  • Enhance Emotional Appeal: Choose images that resonate with your audience’s aspirations or pain points.
  • Avoid Stock Clichés: Generic visuals can undermine trust and authenticity.

It doesn’t matter whether you sell a physical or digital product — an image makes it “real.” Rendering downloadable resources, like eBooks or white papers, as physical items in images helps users perceive them as tangible and valuable.

Example: Amy Porterfield presents screenshots from her online business courses on desktops, tablets, and phones. This quickly communicates the depth and accessibility of her training.

Amy Porterfield's landing page with a graphic of her course in different sized devices

Even if you sell a digital product, you must “make it real” with graphics

6. Clean and Focused Design

Design can make or break your landing page. A cluttered layout confuses visitors, while a clean design keeps them engaged. Key principles include:

  • Use Contrast for CTAs: Make your call-to-action button stand out with bold colors.
  • Optimize for Mobile: With over half of web traffic coming from mobile devices, responsive design is a must.
  • Reduce Distractions: Remove unnecessary navigation or external links that might lead visitors away.

Pro Tip: Even if you love your design, your visitors may not. Take Jaguar’s logo redesign.

Jaguar's 2024 logo redesign

Jaguar’s new logo met with resistance from their loyal fans

As it enters “The New Era” of electric cars, Jaguar felt a redesign was in order. Many consumers disagreed. LinkedIn posts called it a “car crash” and “downgrade branding” and predicted harm to the brand.

Most landing pages don’t get the backlash Jaguar’s rebrand got, but it’s a good illustration of why we design pages for our visitors, not our own preferences.

To understand what visitors like (and don’t like), our Conversion Scientists® use heatmaps and user testing tools like Hotjar. If people avoid any part of the page, there’s likely something wrong with the layout or design, and these tools tell them where they can refine the landing page design to improve conversions. 

Why guess when you can know? CRO tools provide the insights and data you need to make smart optimization decisions. 

Here’s our scientific breakdown of these elements:

Here are the steps you can take to create a controlled Landing Page reaction in your digital laboratory.

The Chemistry of a Successful Landing Page Infographic


Landing Page Best Practices: Expert Tips for Landing Page Mastery

Creating a landing page that converts is only the first step. To maximize results, you need to continually refine and optimize every aspect of the page. This second half of the article dives deeper into the strategies and tools used by top marketers to fine-tune their landing pages for peak performance.

Optimization requires regular testing, data analysis, and a willingness to iterate based on user behavior and industry trends. The difference between a good landing page and a great one often lies in the details — small adjustments that have a disproportionate impact on conversions. From mastering A/B testing to understanding how different metrics influence your strategy, this section equips you with the knowledge and tools to take your landing pages to the next level.

Note: A/B testing isn’t always the most appropriate CRO technique. Listen in as our Scientists discuss the different types of CRO for different needs.

Landing Pages for Different Goals

Not all landing pages are created equal. Tailoring your approach to specific objectives can enhance performance:

  • Lead Generation: Offer a clear value exchange, like an eBook or webinar for minimal information.
  • eCommerce: Showcase product images, highlight key features, and include trust signals like secure payment icons.
  • Affiliate Marketing: Focus on relevance and clarity to drive click-throughs and conversions.

Example: One of our clients in the SaaS space increased lead generation by 35% after optimizing their landing page with a simplified form and enhanced trust elements.

Metrics That Matter

Short description of six marketing metrics: impressions, clicks, visits, bounces, abandons, and conversions

To measure the success of your landing page, track these key performance indicators (KPIs):

  • Conversion Rate: The percentage of visitors who complete the desired action.
  • Bounce Rate: The percentage of visitors who leave without taking action.
  • Time on Page: A higher duration indicates strong engagement.

We recommend using tools like Google Analytics, Optimizely, or Crazy Egg to monitor these metrics and identify areas for improvement.
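If you want to compute these KPIs yourself from raw session data, here’s a minimal sketch (hypothetical data, pandas assumed):

```python
# A minimal sketch of computing the three KPIs above from raw session data.
# The five sessions here are hypothetical.
import pandas as pd

sessions = pd.DataFrame({
    "session_id":      [1, 2, 3, 4, 5],
    "converted":       [True, False, False, True, False],
    "bounced":         [False, True, False, False, True],
    "seconds_on_page": [95, 8, 240, 130, 5],
})

conversion_rate = sessions["converted"].mean()
bounce_rate = sessions["bounced"].mean()
avg_time_on_page = sessions["seconds_on_page"].mean()

print(f"Conversion rate:  {conversion_rate:.1%}")    # 40.0%
print(f"Bounce rate:      {bounce_rate:.1%}")        # 40.0%
print(f"Avg time on page: {avg_time_on_page:.0f}s")  # 96s
```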

Tools and Resources for Landing Page Success

Some basic tools that will help you optimize your landing pages are:

  • Unbounce: For creating and testing landing pages without coding.
  • Hotjar: To analyze user behavior with heatmaps and session recordings.
  • VWO: For A/B testing to see what resonates with your audience.

When you’re ready to up your conversion game, check out our list of 20 AB testing tools recommended by CRO pros.

Best Practices for Ongoing Optimization

Creating a successful landing page isn’t a one-time task. Continuous improvement is key. Here’s how to stay ahead:

  • Regular Testing: Use A/B testing to evaluate headlines, CTAs, and layouts.
  • User Feedback: Gather insights from surveys or heatmaps to understand user behavior.
  • Content Updates: Keep your page fresh by aligning it with current campaigns or trends.

We’ve seen the impact of incremental improvements on a landing page’s performance. But if you’re curious to know how much they could improve your results, fill out the Conversion Optimization Upside Calculator, and we’ll do the calculations for you. 

Final Thoughts: Creating a Landing Page That Converts

Creating a high-performing landing page doesn’t happen by chance — it requires strategy, testing, and attention to detail. By focusing on the six essential elements outlined above, you can craft simple landing pages that consistently deliver results.

If you’re ready to take your landing pages to the next level, let us help you. Level up your skills with our Conversion Rate Optimization Training. Or ask about our fully-managed CRO services.

Contact us today for a free consultation.

Every business owner I know is unhappy with their website conversions. They invest in quality traffic but struggle to convert it into leads and sales. 

Usually, when they ask us for conversion rate optimization services, they’re worried about how persuasive their messaging is or whether the design of their website is hurting results. 

What they don’t realize is that there’s another, more critical key to unlocking conversions.

We humans have won the evolution lottery in many ways (thank you, opposable thumbs), but we’ve never overcome our tendency to take shortcuts. And of all the shortcuts we take, confirmation bias is probably the biggest.

Let’s take a closer look at what confirmation bias is, how it affects us as humans, and more particularly, as marketers. Then we’ll uncover the only marketing approach that will beat our biases and improve our website conversion rates.

Learn how our fully managed CRO services can remove the biases that impact your website’s performance.

What Is Confirmation Bias?

Confirmation bias is the tendency to search for, favor, and recall information that supports our existing beliefs or hypotheses — while disregarding or undervaluing evidence that contradicts them — so we can confirm that we’re right or process information faster.

Confirmation bias isn’t intentional. It’s more of a mental management system than deliberate self-deception.

Why does confirmation bias occur? Why are we so prone to it?

This marketing statistic may provide some insight: We are exposed to anywhere from 6,000 to 10,000 ads a day. Add to that our busy lifestyles. We’re constantly juggling careers, families, friends, and self-care. Our brains are processing so much information at any given moment, we need decision-making shortcuts, or heuristics, to avoid overwhelm.

We like to be right. So we seek information that confirms and supports our personal beliefs and habits while ignoring information we disagree with.

Three Types of Bias

There are three types of confirmation bias:

  1. Biased search for information
  2. Biased interpretation of information
  3. Biased recall of information

Biased search for information

People tend to test hypotheses, or ideas, by searching for evidence that’s consistent with their current beliefs. They phrase their search queries to find information supporting their expectations and gather data to prove their preexisting ideas are true. 

We see this a lot in conversion optimization. When running an AB test for the first time, a marketer is likely to design the test to prove their idea right. By contrast, experienced optimizers know to design AB tests to disprove their hypothesis.

Interestingly, when we’re asked for information, the way the question is phrased can influence the way we answer. If asked, “Are you happy with your job?” we’re more likely to answer positively. If asked, “Are you unhappy with your job?” we’ll be more likely to disclose the things we don’t like.

Biased interpretation of information

We often use logic to defend illogical beliefs, and you can see that in the way we interpret information. For example, a 1979 Stanford study found that, when given compelling evidence for and against capital punishment, people used the data to support their original viewpoint and gave more credence to information that supported their beliefs.

This study also found that “disconfirmation bias” makes us more resistant to information or viewpoints that contradict our existing beliefs. We set a higher standard of proof for any hypothesis that contradicts our current expectations. We also work harder to disconfirm evidence by questioning the validity of the source or looking for flaws in the argument.

Biased recall of information

You’ve likely heard the term selective recall. It exists because our brains store new information in pre-existing mental “folders,” and information that fits those folders is easiest to file and retrieve.

Studies have found that information that aligns with prior expectations and beliefs is easier to store and recall than information that does not align with our beliefs. As a result, we tend to remember information that reinforces our expectations. 

Confirmation Bias Examples

When your brother calls you for advice but only accepts it if you tell him what he wants to hear, you’re seeing confirmation bias in action. He believes he found the solution and wants your stamp of approval.

You see confirmation bias in business when a company cuts a sponsorship deal with a controversial figure, and the brand is boycotted. Like Nike’s 2018 ad campaign featuring Colin Kaepernick, the NFL quarterback who, two years earlier, began kneeling for the national anthem in pre-game ceremonies to protest racial injustice.

Example of confirmation bias: Nike's ad campaign featuring Colin Kaepernick with a close-up of Colin's face and the words, "Believe in something, even if it means sacrificing everything.

Nike’s ad campaign featuring Colin Kaepernick

Now, let’s look at some examples of how confirmation bias impacts our ability to process and parse data — which is key to our ability to parse your website data and understand your visitor’s behavior.  

We all know how easy it is for bad actors to distort or manipulate data. What we don’t realize is how often we do this to ourselves, interpreting information in a way that distorts our own understanding. 

Remember, we like to take shortcuts, so when we see this graph of ice cream sales compared to forest fires, what do we conclude?

Graph of ice cream sales compared to forest fires, showing a high correlation

Ice cream sales and forest fires are so highly correlated, ice cream must cause forest fires.

Meanwhile, looking at this next graph of ice cream sales vs. weight loss, you might assume ice cream will help you lose weight.

Ice cream sales compared to weight loss - showing no correlation

Ice cream may be the dieter’s dream dessert!

This is a ridiculous assumption, so we don’t take it seriously. But what happens when our conclusions are less obvious? Bias affects our ability to see the truth. It leads us to ignore gaps in our data. And it leads to poor decision-making.

For example, both of our ice cream graphs ignore a third issue, which is seasonality. In the summer, we tend to watch our weight so we’ll look good in our bathing suits, and it’s hot, so we love to eat ice cream. 

With all of this in mind, let’s apply this same tendency to our marketing and conversion rate optimization process.

The Effects of Confirmation Bias on Your Marketing Results

The Cognitive Bias Codex, rendered by John Manoogian, lists 188 cognitive biases, grouped into four categories.

a circular graph listing many cognitive biases, sorted into 4 categories

The Cognitive Bias Codex illustrates the number of biases we can fall prey to.

According to the codex, cognitive bias tends to kick in in four scenarios:

  • We’re processing too much information
  • We don’t have enough context or meaning for that information
  • The information we’re processing has a higher priority or is used frequently
  • We need to make a decision or act quickly

Show me one marketer who isn’t processing too much information and making quick decisions without full context! Our work is most definitely the product of unconscious confirmation bias.  

And it may explain why we put things on our sites that support what we believe instead of what our customers need to hear to convert. The typical marketing project includes researchers, copywriters, designers, and stakeholders, all with their own preferences and biases. In a worst-case scenario, this can be disastrous:

Researchers who gather information about the market and competitor products have a tendency to search for the information they expect to find, confirming their preexisting beliefs.

The copywriter may or may not evaluate the research, but if they do, they look for evidence that supports their preconceived ideas. Then, when drafting the copy, they choose words and phrases that speak to their learning styles and convey their biases. 

The designer creates a visual design based on their own preferences about color and fonts. Then they bring in an executive and team members to review the design — who will all ask for changes based on their biases.

This is a worst-case scenario, but it does show the potential for confirmation bias to dampen your website conversions. You and your team are not immune. You have a tendency to favor information that supports your individual biases. And each of you works in a way that feels comfortable to you. 

But these biases can kill a good landing page. They affect your organization at the deepest levels. And they can keep you from achieving the results you know are possible — which is why our Conversion Scientists® rely on science and the scientific method.

The Only Way to Avoid Confirmation Bias

Because everyone is susceptible to confirmation bias, we need a workaround. We need a way to get our biases out of the way so we can make better decisions and get better results.

For marketers, that means discovering what’s really going on with our visitors by thinking like a behavioral scientist. 

The good news is you’re already doing it. Every time you post something on social media and check how many likes, comments, and shares you have, and then do more of those posts, you’re using behavioral science.

You’re using other people’s behaviors to determine what you’re going to do next. We just want to formalize that process to ensure our biases don’t override our natural tendencies. 

That’s where the scientific method comes into play. When you come up with an idea for a landing page, instead of creating three mock-ups and choosing the one you like best or is most similar to your competitor’s page, you test which one works.

A scientific approach to conversion optimization, or CRO, removes personal biases, opinions, and preferences from the process and forces us to make data-driven decisions. It involves formulating hypotheses, conducting experiments, and analyzing results to draw conclusions. 

Here’s what the scientific process looks like when applied to the CRO process:

  • Do some research.
  • Generate and prioritize some ideas. 
  • Research an idea.
  • Design a landing page that tries to prove our promising idea wrong. 
  • Run tests. 
  • Listen to what the data says.
  • Evaluate.
  • Iterate: Generate more ideas, investigate, test, listen, repeat.

When we follow this process, we can keep ourselves honest. Instead of adopting the highest paid opinion in the room or implementing web designs that may or may not work, we can use data to increase conversions on the website or marketing campaign.

It’s time to remove the biases that interfere with your website conversions. Get started today by scheduling a free consultation with one of our Conversion Scientists®.

Have you ever rebuilt a landing page or updated a website, only to realize that web conversions are low or non-existent?

It’s more common than you think, which is why we’re sharing the exact conversion rate optimization process and CRO strategies we use at Conversion Sciences to optimize ecommerce pages, conversion funnels, digital marketing campaigns, and more. 

In this guide, you’ll learn core elements of the conversion rate optimization process:

  • The biggest reason you’re struggling to get people to take the desired action you ask for on your site (It’s not what you think!)
  • The steps of conversion rate optimization (CRO), and why they matter
  • How to generate test ideas (or hypotheses, as we like to call them) that can improve pages with lower conversion rates
  • How to prioritize those ideas based on their ability to drive results
  • The exact process our conversion experts use to develop messaging and web designs that convert
  • Tips for experimenting and split testing to ensure your optimization efforts pay off

Note: If you’re looking for marketing strategies, how to set up Google Analytics, a list of CRO tools, or even the benefits of conversion rate optimization, we have those for you as well. But in this guide, we’re inviting you into the lab so you can see how experienced optimizers do their job: generating ideas that are worth testing and then getting those ideas ready for AB testing. 

After learning the optimization strategies we reveal here, you’ll know how professional optimizers design pages that persuade people to fill out your forms, improve your checkout process, or simply take the desired action for that page. 

Of course, if this feels like too much work and you’d prefer fully-managed conversion optimization services, we can help with that as well.

Why Website Conversion Rates Aren’t What They Ought to Be

In a minute, I’ll walk you through the website design process we use to improve conversions for our clients. But first, we need to acknowledge the elephant in the room: confirmation bias.

Confirmation bias is just one of the shortcuts our brains use. It causes us to favor information that aligns with our existing beliefs or hypotheses while disregarding or undervaluing evidence that contradicts them. CRO is designed to combat this.

An elephant, labeled confirmation bias, walking through an office, labeled conversion lab, illustrating our need for the conversion rate optimization process.

Confirmation bias is everywhere, even the conversion lab. That’s why we need the conversion rate optimization process.

We struggle to build websites, landing pages, and digital experiences that convert because we like to take shortcuts.

Our brains are always looking for the shortest route possible when making decisions. As marketers, that means we tend to put things on our sites that worked for us in the past and to avoid things that didn’t work in the past, regardless of what our customers need and want. 

We need to find a way to circumvent our biases so we can include the page elements our visitors want and exclude the elements they don’t want — so we can get the results we’re looking for. 

That’s where science comes in

For centuries, science — including conversion optimization — has been used to get our biases out of the way so we can make better decisions and drive higher conversion rates. 

For marketers, that means discovering what’s really going on with our visitors. We do that by thinking like a behavioral scientist — using people’s behaviors to determine what their pain points are and what they like and don’t like.

We just need to formalize that process to keep confirmation bias out of the picture — which is where the scientific method comes into play. Instead of designing a page we think will convert, we run experiments and tests to build a page that actually converts. 

Here’s what the scientific process looks like when applied to the CRO web design process:

  • Do some research.
  • Generate and prioritize some ideas. 
  • Select an idea.
  • Design a change to our landing page to test our promising idea. 
  • Run the experiment. 
  • Listen to what the data says.
  • Evaluate and learn.
  • Iterate: Generate more ideas, investigate, test, listen, repeat.

When we follow this process, we can keep ourselves honest. Instead of adopting the highest paid bias in the room or relying on the biases of an expert designer, we can experiment to increase conversions on the website or marketing campaign.

Now, let’s look at how our Conversion Scientists® do that.

The (Science-Based) CRO Process for High Converting Websites

Most people think conversion optimization and conversion focused web design is about AB testing. And yes, optimizers love AB testing. It’s a core feature of the conversion optimization process. But in reality, we deal with ideas.

When a page is not performing as expected, the first step is to identify why. We have to generate ideas and conduct research to identify the underlying issues.

The insights you gain from this process help you understand the website visitor on a deeper level. You come away with better ideas for how you can improve the user experience for them. You also understand what it will take to improve conversions and sales. 

I’ve condensed the scientific process into five actionable steps you can take to improve your average conversion rate — erasing any worries about confirmation bias or mental shortcuts.

The Optimizer’s Process: How to Optimize Your Website or Landing Page

  • Generate Optimization Ideas
  • Prioritize Ideas
  • Develop the Messaging
  • Design the Page for Conversions
  • Test and Optimize

Step 1. Generate Optimization Ideas

Improving a webpage starts with ideas about what’s not working right or what can be improved on the website or web page. For example:

  • Put a call to action at the top of [this landing page].
  • Redesign [this page] because it’s too cluttered.
  • Let’s produce videos for all of our products to show how they get used.

We call these ideas hypotheses, and you want to collect as many ideas as you can that could potentially boost conversions. Here are some sources for generating test ideas:

Data You Already Have

Use data from ad platforms and paid search to understand what language and offers get attention.

A/B Testing

Conduct A/B tests to determine if an idea will improve conversions.

Before and After Testing (BA Testing)

Compare performance of changes made to a page or site to a similar period before the change. This method doesn’t control for external factors like market changes.

Online Focus Groups

Use online focus groups to get input from a larger number of people. Use this to narrow down messaging and design options.

Surveys

Conduct surveys of customers or prospects to answer important questions about their motivations, needs, and questions.

Analytics

Analyze website analytics to discover problem areas, evaluate traffic sources, grade landing pages and much more.

Site Feedback

Collect feedback from your website visitors to help identify why they are struggling.

Chat Transcripts

Review chat transcripts to discover common questions that your website could perhaps answer better.

Talk to Salespeople and Customer Service Reps

Salespeople and service reps can tell you the kinds of questions customers and prospects are asking when they talk.

Authoritative Blogs

Look at industry blogs for ideas that have worked for others and for research.

Customer Knowledge

Use your own experience and knowledge of your customers to generate ideas.

By collecting ideas in this way, you will avoid the brainstorming sessions that provide only limited ideas and hypotheses. These sessions can help generate ideas but suffer from the biases of the group. Instead, use a structured CRO approach to choose the right ideas to research and test.

As you gather ideas, add them to a spreadsheet like this one. 

spreadsheet filled with testing ideas

Download Conversion Sciences’ Hypothesis Prioritization Framework spreadsheet here.

Start by capturing your ideas in the “Hypothesis” column.

For each idea, record:

  • The page or section it appears on
  • The design element on the page you want to address (component)
  • The idea itself, written in hypothesis format

The Hypothesis Format

If I [make this change], I expect [behavior] to change as measured by [metric].

Then put that idea into one of six buckets that describe its impact on the page’s performance:

  1. Messaging
  2. Layout/UX
  3. Credibility
  4. Social proof
  5. Security
  6. Fix it (e.g., the page is broken)

You should have ideas for each of these six buckets. 

Step 2. Prioritize Your Ideas

Once you’ve collected your ideas, evaluate them on a scale of 1 to 5 based on the following criteria:

  1. Evidence: How much evidence supports the idea.
  2. Impact: The potential impact of the idea.
  3. Effort: The level of effort required to implement the idea.
  4. Traffic: How much traffic is affected.
  5. ROI: The potential return on investment.

Being consistent with the way you rate each of these ideas is more important than being accurate with your rating.
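If you prefer code to spreadsheets, here’s a minimal sketch of the same prioritization framework. The scoring rule (sum the 1-5 ratings, counting effort inversely) is an assumption, one common convention rather than our official formula; adjust the weights to fit your program:

```python
# A minimal sketch of the prioritization spreadsheet as code. The scoring
# rule is an assumption: one common convention, not the only option.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    page: str
    component: str
    hypothesis: str  # "If I ..., I expect ... as measured by ..."
    evidence: int    # 1-5
    impact: int      # 1-5
    effort: int      # 1-5 (5 = hardest)
    traffic: int     # 1-5
    roi: int         # 1-5

    @property
    def score(self) -> int:
        # Higher evidence, impact, traffic, and ROI raise priority;
        # higher effort lowers it.
        return self.evidence + self.impact + self.traffic + self.roi + (6 - self.effort)

ideas = [
    Hypothesis("Product page", "CTA", "If I add a guarantee next to the CTA, I "
               "expect conversions to rise as measured by add-to-cart rate",
               evidence=4, impact=4, effort=1, traffic=5, roi=4),
    Hypothesis("Footer", "Links", "If I reorder footer links, I expect "
               "engagement to rise as measured by footer clicks",
               evidence=2, impact=1, effort=2, traffic=1, roi=1),
]

# Test the highest-scoring ideas first.
for idea in sorted(ideas, key=lambda h: h.score, reverse=True):
    print(idea.score, idea.page, "-", idea.hypothesis[:50])
```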

Time is an issue when you’re optimizing website performance. We can’t test everything, and in truth, not all good ideas are worth pursuing. So how do you identify the ideas you don’t want to spend time on? Start by deprioritizing the ideas that aren’t worth testing.

Here are four reasons to kill a good idea:

1. Too Few People Will See It: If not enough traffic sees the change, it’s not worth testing. 

  • It’s on a page that doesn’t get a lot of traffic
  • It’s in the footer, and heat maps tell you that people don’t scroll that far

To rise to the top of your list, the change should be very visible or in a key location.

If an idea is not visible, score it lower.

2. It’s too Much Work: Ideas that require a lot of website design, video production, or development pull a lot of resources before you know they will work. 

If the idea requires too much preparation, score it lower.

3. It’s too Small of an Idea: The idea needs to have a significant impact to score highly.

Changing one word in a headline may not have enough of an impact. However, there are small changes that could make a large impact. For example, on this product page for the Paul Frederick website, we added a little guarantee statement next to the “Add to Bag” button. 

control and variation of a product page. Variation has a guarantee statement next to the Add to Bag button.

A small change can have a big impact.

This looks like a small change, but it’s next to the CTA, which is a high-impact design element. Visually, it appears to be low impact, but it delivered an 11% lift in an AB test.

If an idea has the potential to make a big impact, score it higher.

4. You Don’t Have Any Supporting Data: You need evidence that you’re addressing a real problem that, if addressed, could improve conversion rates. As a result, you need to find data that supports your hypothesis and justifies an experiment.

Turn to your conversion rate optimization tools, customer research, and competitive research for this:

  • Analytics
  • Customer surveys
  • Site feedback
  • Chat transcripts
  • Sales conversations
  • Support conversations
  • Competitor websites and campaigns 

The more evidence you can find to support your idea, the higher it will rise on your list.

Step 3. Select a High-Ranking Idea and Design a Test

Once you’ve prioritized your ideas, you’re ready to begin experimenting. Having the right messaging and value proposition is table stakes for any persuasive website. This is a good place to start.

Example: Testing Copy

Keep in mind, testing long versus short copy will only get you so far. Instead, test different ways of writing copy to engage the greatest number of visitors.  

There are four personalities that you need to optimize for, and they align with four research modes outlined in Bryan and Jeffrey Eisenberg’s book, Waiting for Your Cat to Bark? I talk in depth about these buyer personas here, but let’s look at a quick overview:

Competitives 

  • Decision making: quick and logical
  • Expectation: Help them make a smart decision quickly. 
  • Their learning mode: “I need to know what’s in it for me.”
  • Tip: Put key information at the top of the page.
  • Myers-Briggs equivalent: NT

Methodicals 

  • Decision making: deliberate and logical
  • Expectation: Give them enough information to make their own decision. 
  • Their learning mode: “I understand the processes and details.”
  • Tip: Include a logical navigation that helps them find additional pages with more information.
  • Myers-Briggs equivalent: SJ

Humanists

  • Decision making: deliberate and emotional
  • Expectation: To know, like, and trust you, as a company. 
  • Their learning mode: “I want to know how I will feel if I take action.”
  • Tip: Use social proof and trust symbols on the page.
  • Myers-Briggs equivalent: NF

Spontaneous 

  • Decision making: quick and emotional  
  • Expectation: The basic information they need to take action, and an obvious way to respond quickly. 
  • Their learning mode: “I tend to just give things a try.”
  • Tip: Put the call-to-action form in the hero area.
  • Myers-Briggs equivalent: SP

Be aware, when you craft your web copy, you will be battling your own biases. I’m a very humanist writer. My head of content is more of a competitive. Both of us write copy from our own quadrants.

But there are workarounds that we use to design messaging that appeals to everyone who visits the website. 

How to Optimize a Page to Speak to All Modalities

Here’s an example of a page from our website that attempts to speak to all four types.

a web page with language from each modality highlighted in a different color

Unfortunately, when we try to speak to everyone, we end up speaking to no one.

Conversion-optimized messaging will appeal to the personality types of your target buyers. So start by evaluating your ideal buyers against the four personality types I listed above. That may be all four types or just a few. For example, at Conversion Sciences, our clients don’t align with the Myers-Briggs “SP” personality, so we don’t optimize for that.

Once you’ve identified the personalities you need to engage with, you can use this process to rewrite the copy to align with other modalities.

1. Write the web copy as you normally would.

As I mentioned above, your copywriter will likely write from their own modality. That’s okay. Just make sure it’s persuasive copy that can drive visitors to take a desired action.

2. Rewrite the copy in another voice.

This has always been a challenge for writers. We tend to write in our own style and have difficulty writing for the specific modes.

With AI, however, this is easy. Since AI recognizes the Myers-Briggs personality types, you can ask it to rewrite your copy in another voice to ensure it appeals to these modalities:

  • Competitive: NT
  • Methodical: SJ
  • Humanist: NF
  • Spontaneous: SP

For instance, if you’re a humanist writer (an NF) like me, you’ll ask AI to rewrite the copy for another modality. 

“Please rewrite this copy for a digital marketer who has an SJ Myers Briggs Type.”

Two variations, one that is written to appeal to everyone (as shown by highlights in multiple colors), the other that speaks to SJ personality types.

Note: One of the best AI models for copywriting is Claude, though Google’s Gemini was recently found to outperform ChatGPT. AI is evolving quickly and “what’s best” is likely to change. So use whatever tool you’re most comfortable with.
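If you’re rewriting many pages, you can script this step. Here’s a minimal sketch using the OpenAI Python client; other vendors’ SDKs follow the same chat pattern, and the model name is purely illustrative:

```python
# A minimal sketch of automating the rewrite prompt above. Assumes the
# openai package (v1+) is installed and OPENAI_API_KEY is set; the model
# name is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

original_copy = "Your landing page copy here..."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Please rewrite this copy for a digital marketer who "
                    f"has an SJ Myers-Briggs type:\n\n{original_copy}"},
    ],
)
print(response.choices[0].message.content)  # the SJ-voiced variation
```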

3. Rewrite the copy again for each personality.

This gives you substantially different versions of the same web copy, and that’s exactly what you want when you’re AB testing messaging. You want very different results, so you can make big impacts.

4. Create a variation of the copy that could appeal to all personalities.

Take elements from the winning variations of the copy you generated above and consolidate them into one version that speaks to all personalities.

This process helps you create messaging that’s specifically designed to increase the percentage of people who convert, since it speaks to every visitor’s personality type. Once you have conversion-optimized copy, you’re ready to focus on the design of the page.

Note: You need to check AI outputs because they can hallucinate, making claims in your copy that aren’t true. Use a diff checker like Editpad to see where the AI is making changes to your copy.
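Editpad works well for one-off checks; for a scriptable alternative, here’s a minimal sketch using Python’s built-in difflib to surface every change the AI made:

```python
# A minimal sketch of diffing your original copy against the AI rewrite
# to spot hallucinated claims. The example lines are hypothetical.
import difflib

original = ["Our plan starts at $29/month.", "Cancel anytime."]
rewritten = ["Our plan starts at $19/month.", "Cancel anytime.",
             "Rated #1 by Forbes."]

for line in difflib.unified_diff(original, rewritten, lineterm=""):
    print(line)
# The diff flags the changed price and the added claim -- both need a
# human fact-check before the copy ships.
```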

Step 4. Design Landing Pages That Convert

When a visitor opens a landing page, we want them to know two things:

Number one, we want them to know they’re in the right place. So the landing page has to keep the promise that was made in the ad, email, or social media post that brought them to the page.

Number two, we want to give them a reason to read on. Highlight something that is unique or unexpected about your product or brand. Make them curious about how you solve a problem or what makes your product unique.

We also want them to know we’re asking them to do something. It needs to be visually clear that there’s a next step in their journey with us.

Good web design ensures nothing on the page distracts users from achieving the goal you’ve set for them on the page, whether that’s to fill out a form, make a purchase, or click a button.

But when most people hear the term “designing landing pages” or “website design,” they tend to focus on the layout, colors, fonts and images. They aren’t thinking about the conversion potential of a page.

Remember, we want a conversion-focused website design that draws the visitor’s eye to the information that builds the value proposition of the page or site. Designers use techniques that create a visual hierarchy, leading the eye through the page in a way that highlights key messages and calls to action. 

Too many design teams are trained to design for the business executives, not the business’s customers.

Keep in mind, like the rest of the optimization team, designers have their own confirmation biases. They’re likely to create a visual design based on their own preferences. When they’re done, they’ll bring in an executive and other people on the team who review the landing page or website’s design — who will ask for changes based on their biases.

At the end of the day, this process will result in a webpage that works for the organization, but not necessarily the end user. 

Conversion designers, like conversion copywriters, know how to design for conversion. They understand their own biases. They understand the importance of the user experience and how to use design to help you meet your conversion goals. 

Here are some of the tools designers use to create a successful visual hierarchy for a conversion-focused page design. These techniques provide a better user experience and boost your conversion rate:

  • White space – can make an element stand out; it’s often used around your call to action to bump it up in the visual hierarchy
  • Negative space – blank areas that guide the user’s eyes
  • Font size and color – communicate key messages when visitors scan the page
  • Juxtaposition – putting design elements together in a way that amplifies the message
  • Color – a powerful way to make page elements like buttons “pop”
  • Highlights – make the pieces of your value proposition and messaging that drive conversions stand out

But conversion-focused web design doesn’t stop there. You also need to think about how you can visually communicate credibility, authority, and trustworthiness.  Here are a few ideas:

Credibility signals:

  • The number of years you’ve been in business
  • The number of products you’ve sold
  • Your experience
  • Your awards
  • Membership in industry associations 
  • Trust organizations, such as the Better Business Bureau

Social proof signals:

  • Testimonials from happy customers
  • Ratings and reviews
  • Mentions by media outlets, like Forbes or Inc.
  • Customer logos

Risk reversal elements:

  • Links to your privacy policy
  • A lock symbol on your order button
  • A guarantee or return policy

Value proposition:

  • A navigation bar that answers, “Am I in the right place?”
  • Logo, company name, and tagline in the top banner
  • Make key product/service categories visible in the navigation bar

Step 5. Test and Optimize

By now, you’ve already done the hardest parts of an optimizer’s job. You’ve created some ideas for optimizing the performance of the page. You’ve developed messaging that speaks to every personality and learning modality represented by your ideal customer. And you’ve used a design approach that can lift landing page conversion.

You’re ready to start running experiments and AB tests and letting your data guide your decisions. 

Illustration of AB testing, which is key to the conversion rate optimization process

The conversion rate optimization process at work

To help, we’ve created this guide that covers everything you need to know about AB testing. Read it next if you want to start experimenting — or if you’d prefer to get some professional help, explore our conversion optimization services.

Before starting any tests, however, it’s important to adopt the right mindset. So I’m going to give you a quick look inside the optimizer’s brain. (Scary, I know!) 

Optimizers are only concerned with two questions:

  1. Am I in the right place for what I’m trying to do or for the problem I’m trying to solve?
  2. Is there a reason for me to keep reading?

When your visitors arrive at your website, they immediately scan the page to answer the first question. This is why your website design needs to make it easy to understand what your website is about. 

It’s also why functional headlines work better than clever headlines. For example, which of these headlines makes you understand where you are?

  • “A Place of New Beginnings”
  • “Addiction Torments the Addict and Their Loved Ones”

You might find the first headline on any number of websites: from a hospital’s maternity page to a home builder’s website. The second one is obviously from an addiction treatment center.

That orients the visitor. They either leave or stay because they know they’re in the right place. Now they try to answer question 2: Is there a reason to keep reading?

To answer that question, they scan your headlines, subhead, and navigation labels. They’re looking for that one part of your value proposition that says, “There’s something different here, something you need to understand.” 

This is why your unique selling proposition should be clearly communicated on every page of your website. It should also be subtly conveyed through your messaging and graphics. 

For example, at Conversion Sciences, our use of the scientific method sets us apart from other CRO agencies. Because of that, we infuse every page with scientific ideas and the scientific method. Science is in our DNA. And if you spend enough time on our website, you’ll understand that. We even call our optimizers Conversion Scientists and have trademarked the name.

Your visitors are looking for your DNA, so you need to communicate it on every page of your website. That means you’ll avoid “lazy design.” By that, I mean:

  • Copy that’s written by AI alone, without human editing
  • Landing page builders with generic templates
  • Stock photos
  • Novel design trends
  • Design services that focus primarily on how the page looks

Stock photos are a big issue for me. Most websites show images of people smiling, walking through the park, or typing on a laptop. These are “lifestyle images” that don’t move the value proposition forward. 

They don’t express your unique DNA.

I could talk for hours about this (and I do in my workshops). But I’ll save it for another article.

For now, I want you to start thinking like an optimizer. To do that, you’ll prioritize the two questions above. You’ll evaluate your website by how well it answers those two questions. And you’ll redesign your website and build new pages with those same two questions in mind.

Then, when running experiments and tests for conversion-focused web design, follow these tips to get better conversion rates:

Increase Sample Size: Ensure your experiments have a large enough sample size for reliable results. I talk more about that in this article on behavioral data.

Increase the Quality of Experiments: Focus on conducting high-quality experiments with impactful hypotheses and a methodology that keeps your program from getting derailed.

Optimize for the right things: You want to improve the number of conversions, or actions taken by your customers. But the best way to do that is to optimize your conversion rate (the number of conversions as a percentage of total visitors). 
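On the first tip, sample size, here’s a minimal sketch of the standard pre-test estimate for comparing two conversion rates. The baseline rate and target lift are hypothetical, and most testing tools run a similar calculation for you:

```python
# A minimal sketch of a pre-test sample size estimate for two conversion
# rates (two-sided alpha = 0.05, power = 0.80). Numbers are hypothetical;
# scipy is assumed for the normal quantiles.
from math import ceil
from scipy.stats import norm

def sample_size_per_variant(p_base, lift, alpha=0.05, power=0.80):
    p_var = p_base * (1 + lift)        # expected treatment conversion rate
    z_a = norm.ppf(1 - alpha / 2)      # critical value for significance
    z_b = norm.ppf(power)              # critical value for power
    var_sum = p_base * (1 - p_base) + p_var * (1 - p_var)
    return ceil((z_a + z_b) ** 2 * var_sum / (p_base - p_var) ** 2)

# Detecting a 10% relative lift on a 3% baseline takes a lot of traffic:
print(sample_size_per_variant(0.03, 0.10))  # roughly 53,000 visitors per variant
```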

The Conversion Rate Optimization Process: Key to High Converting Websites 

Conversion rate optimization is an ongoing process that requires a scientific approach. Otherwise, you’ll fall prey to confirmation biases that lower your conversion potential.

By following the CRO process I’ve shared here, you can significantly improve your conversion rates, website performance, and business growth.

Embrace the scientific method, leverage available data, and continually test and refine your strategies to achieve the best results. Ready to take a data-first approach to your business growth? Let’s talk about how our Conversion Scientists® can apply our proven approach to conversion optimization to your website. Schedule your free consultation here.

Google’s document leak uncovered surprising connections between conversion rate optimization (CRO), search engine optimization (SEO) and user experience (UX). Listen in as Conversion Scientists® Joel Harvey and Brian Massey talk about these connections and what they mean for optimizers. 



Subscribe to the Podcast

iTunes | Spotify | RSS

All Episodes

TLDR Summary

  • The Interplay of CRO and SEO (01:00 – 04:00)
  • Fundamental Building Blocks of SEO and CRO (04:00 – 06:00)
  • Strategies for Great Content and User Experience (06:00 – 11:00)
  • Balancing Personal Voice with SEO Requirements (11:00 – 14:00)
  • Differences Between Web Design and UX Design (14:00 – 19:00)
  • Importance of User Research in UX Design (19:00 – 23:00)
  • The Holistic Approach to User Experience (23:00 – 26:00)
  • Summarizing the Conversation (26:00 – 28:00)

***

Conversion Rate Optimization (CRO) and Search Engine Optimization (SEO) are often seen as separate entities. But there’s a surprising amount of overlap between the two: Both aim to improve user experience (UX) and deliver great content, ultimately leading to higher engagement and conversions. 

Google’s document leak made this abundantly clear. In fact, we’re excited about the connection between CRO, SEO, and user experience.

Let’s explore how these disciplines intersect and how you can leverage their synergy to boost your online performance.

The Interplay of CRO and SEO

When considering the relationship between CRO and SEO, think of them as two sides of the same coin. CRO is SEO. The things that fundamentally improve your SEO are also the things that fundamentally help you to improve your conversion rate.

Google’s recent revelations make this undeniable. The core elements of successful SEO are great content and an excellent user experience. It’s not about keyword stuffing; it’s about quality.

There was a time when SEO was all about exact match domains and keyword stuffing. But those days are long gone. 

Today, SEO is about understanding and meeting user needs, which is precisely where CRO comes into play. 

“It’s not just about keyword stuffing. It’s about having the best content and a great user experience. Those are the real fundamentals of SEO and CRO.”

Fundamental Building Blocks of SEO and CRO

At the heart of both SEO and CRO is a deep understanding of user needs and behaviors. Whether you’re offering content, a product, or a service, the key is to provide something valuable that addresses a problem or fulfills a desire. 

Without this fundamental understanding, your optimization efforts will only go so far. The era of gaming the system with keyword tricks is long gone. Genuine engagement is now the cornerstone of success.

This approach applies to both SEO and CRO. To succeed today, you must adopt a user-centric mindset.

“If people don’t like the content, no matter what you’ve done from the keyword and link perspective, it probably isn’t going to work anyway, because other people aren’t going to be talking about it,” Brian emphasizes.

Strategies for Great Content and User Experience

Creating great content and a seamless user experience requires a balanced approach. On one hand, you need to be yourself and communicate authentically. On the other, you must adhere to the data-driven demands of SEO, such as keyword density and topic coverage. Reconciling these strategies can be challenging, but it’s essential.

Consider this advice from Ann Handley’s newsletter: “Be yourself, be your brand, and talk the way you talk.”

This encourages a more relaxed, authentic approach to content creation. However, there’s also the technical side of SEO, which often requires precise keyword usage and structured content to rank well.

Start by embracing your unique voice and passion for the subject. Write as if you’re speaking directly to your audience, sharing your insights and experiences in a way that feels natural. 

Joel captures this balance well: “The argument for writing with your own voice is that it has energy and passion. The content is fun. By contrast, whenever you’re writing for parameters to feed an SEO algorithm, it isn’t fun.”

Once you have your core content, refine it to incorporate SEO best practices. This means integrating relevant keywords naturally, ensuring the content flows well and remains reader-friendly. 

By doing so, you’re not only creating content that is optimized for search engines but also maintaining the authenticity and flow of your original message.

Balancing Personal Voice with SEO Requirements

Balancing a personal, authentic voice with the technical requirements of SEO is one of the biggest challenges in content creation. 

Content infused with passion and personality is more engaging and resonates more deeply with users. While SEO is crucial for driving traffic, it shouldn’t overshadow the need for genuine, compelling content.

As Brian says, “If you’re letting SEO lead it completely, that is the tail wagging the dog.” 

Instead, aim for a harmonious blend where SEO insights inform but don’t dictate your content. 

Use data to inform and influence your decisions. Not only will you be able to maintain an authentic voice, you’ll also build a stronger connection with your audience.

Differences Between Web Design and UX Design

Understanding the distinction between web design and UX design is critical. While both aim to enhance user interaction with a website, they do so in fundamentally different ways. 

Web design often centers around aesthetics and layout, focusing on how the site looks and feels. This involves creating visually appealing elements, choosing color schemes, and ensuring the site is attractive to visitors.

In contrast, UX (User Experience) design delves deeper into how users interact with and experience your site. A UX designer’s role involves continuous research and testing to ensure every element on the site meets user expectations and enhances their experience. 

As Brian explains, the UX designer is “designing to the content.” A web designer is generally laying out a page and leaving space for images and copy to be added after they’ve done their job.  

For example, a web designer might create a visually stunning homepage, but a UX designer will take it further by testing how users navigate that page, identifying friction points, and making adjustments based on user feedback. Their process ensures that the design is not only attractive but also functional and user-friendly.

Importance of User Research in UX Design

User research is a cornerstone of effective UX design. It’s not just about creating visually appealing designs, it’s about ensuring every interaction aligns with user needs and expectations. 

User research helps identify and rectify any friction points in the user journey, leading to a smoother and more satisfying experience.

Think about the difference between designing a beautiful website and designing a website that users find intuitive and enjoyable. The latter requires a deep understanding of your users, which comes from thorough research.

By gathering insights into user behavior, preferences, and pain points, you can design experiences that are not only visually appealing but also highly functional and user-friendly. 

“User research isn’t just a one-time activity. It’s an ongoing process that involves continuously gathering feedback and making iterative improvements,” Brian emphasizes.

By continuously optimizing each touchpoint, you create a cohesive and engaging journey that fosters loyalty and drives conversions.

For instance, conducting user surveys, interviews, and usability testing can reveal valuable insights about how users interact with your site. These insights can then inform design decisions, leading to a more intuitive and satisfying user experience.

The Holistic Approach to User Experience

Think of user experience like the role of a flight attendant. A flight attendant’s job isn’t just about serving drinks or demonstrating safety procedures. It encompasses the entire journey of the passenger, ensuring comfort, safety, and a pleasant experience from the moment they board to the time they disembark.

User experience works the same way. It’s not just about avoiding errors; it’s about creating delightful, memorable interactions at every touchpoint. 

From the initial website visit to the final purchase, every interaction should enhance user satisfaction. This involves addressing potential issues, eliminating friction, and finding opportunities to delight users and exceed their expectations.

“Nothing exists in a vacuum,” says Harvey. “Nothing exists without its own context. So experience is a holistic thing. Everything you do and show and say to people, as well as how it makes them feel — that’s user experience.”

It’s like a flight attendant who is attentive to small details, like remembering a passenger’s preference or providing reassurance during turbulence. When optimizers pay attention to details in UX design — providing intuitive navigation, fast load times, and personalized content — it can significantly impact user satisfaction and conversion rates.

Your Takeaways

Understanding the deep connections between CRO and SEO is crucial for any digital marketer. Here are the key takeaways:

  • Great Content and User Experience: Focus on delivering valuable, engaging content and a seamless user experience.
  • Authentic Voice: Balance SEO requirements with authentic, passionate content creation.
  • User Research: Incorporate user research into UX design to ensure every interaction meets user expectations.
  • Holistic Approach: Treat user experience as a comprehensive journey, from first interaction to final conversion—just like the holistic care a flight attendant provides throughout a passenger’s journey.

By implementing these principles, you can enhance your digital marketing strategy and achieve better results. Stay tuned for more insights in our next episode! Optimize your user experience: Get a free conversion consultation.

Conversion rate optimization (CRO) is important at every stage of your business. But if you have a low-volume site, you may not be able to do the A/B testing that is the hallmark of so many CRO projects. Here are conversion optimization techniques that work no matter where you are on the CRO spectrum. 

Subscribe to the Podcast

iTunes | Spotify | RSS

All Episodes

TLDR Summary

  • Different types of CRO: Pre-post testing vs. A/B testing (00:00 – 5:03)
  • Challenges with low traffic sites and optimizing for them (5:03 – 11:55)
  • Importance of understanding your data and setting expectations (11:55 – 18:31)
  • Role of heuristic analysis and its limitations (18:31 – 24:06)
  • Value of session recordings and heatmaps (24:06 – 28:37)
  • Knowing what and where to test (28:37 – 32:05)
  • Importance of having a conversion strategist and the right team (32:05 – 37:07)
  • Emphasis on systematic experimentation and continuous improvement (37:07 – 45:04)

Download the Transcript

***

Conversion rate optimization is the process of improving your website to increase the percentage of visitors who take a desired action. Whether it’s making a purchase, filling out a form, or signing up for a newsletter, effective CRO can dramatically enhance your online performance. Here are your best options for optimizing your website.

The Spectrum of Optimization

Brian and Joel explain that conversion rate optimization can be viewed on a spectrum, ranging from low-volume sites to advanced data-driven strategies. Each stage requires different approaches and considerations. In this podcast, they explore the different conversion optimization techniques that work for each stage of the spectrum.

Optimizing a Low-Volume Site

On one end of the spectrum is the lower volume website. To optimize these sites, you have to turn up your risk tolerance dial. Since you don’t have a big enough sample size to run accurate tests, you run into the optimization paradox: you have less data to understand how to bring about meaningful change, but you have to drive meaningful change in order to find detectable change.

The conversion rate is essentially a ratio. The smaller your sample size, the more subject the ratio is to fluctuation.
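
To see how much that ratio can swing, here’s a quick simulation sketch in Python. The 5% “true” conversion rate and the sample sizes are made up for illustration:

```python
import random

def observed_rate(n_visitors, true_rate=0.05):
    """Simulate n_visitors against a fixed 5% true conversion rate."""
    conversions = sum(random.random() < true_rate for _ in range(n_visitors))
    return conversions / n_visitors

random.seed(7)
for n in (100, 1_000, 10_000):
    samples = [f"{observed_rate(n):.2%}" for _ in range(5)]
    print(f"{n:>6} visitors: {samples}")
```

At 100 visitors, the observed rate can easily land several points away from the true 5%; at 10,000 visitors it settles close to it.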

The truth is, it’s very difficult to make changes that win on any website. Even for the best in the business, the batting average is about three winners out of every ten tests, perhaps four out of ten when wins are measured in dollars. Nobody knows exactly what’s going to work. That’s the puzzle of it. You have to fail systematically to uncover what’s going to work. For smaller websites, that’s even more challenging.

Conversion Optimization Techniques for Low-Volume Sites

Before and After Testing (BA Testing): Change something on the website and wait to see whether it improves results. It’s important to know which tools are needed for the occasion and how many tools you can be using at the same time. And, of course, if you aren’t measuring results, it’s not really optimization.

Home Run Testing: Also known as big swings, where you run an A/B test but apply your results to before and after testing. With this approach, you’re looking for signature wins of 50% to 70% lift. With a relatively small sample size, the math works out because the lift is so big. But you have to be willing to make an optimization error, calling a test a loser because it only had 20 or 30% lift — even if it could have improved things if you had been able to run the test long enough to get the right sample size.
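
Here’s a rough sketch of why big swings work on small sites, using the standard sample-size approximation for a two-proportion test. The 3% baseline conversion rate is an assumption for illustration:

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_arm(base_rate, lift, alpha=0.05, power=0.80):
    """Rough sample size per variation for a two-proportion z-test."""
    p1, p2 = base_rate, base_rate * (1 + lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
    return ceil(n)

for lift in (0.10, 0.30, 0.50, 0.70):
    print(f"{lift:.0%} lift: ~{visitors_per_arm(0.03, lift):,} visitors per arm")
```

Detecting a 10% lift takes tens of thousands of visitors per arm, while a 50% lift needs only a few thousand. That’s the math behind swinging big on low-volume sites.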

Data-Informed Gut Decisions: On a lower volume website, you have to know your customers. You can have 30 to 40 conversions with the conversion rate showing a delta, but that’s still low volume. You don’t have statistical significance, but if there is no evidence that the change is going to hurt you and there is evidence that it will help you, then do it. Do it and move on.

Choosing an Agency for Low-Volume Sites

If you’re a smaller volume website, there’s bad news. Our full-team approach probably isn’t the best fit for you. The ROI won’t be there, and we don’t do deals unless we feel we can provide measurable value. 

Optimization is about peeling back layer after layer of the onion — different types of onions and different types of data. You have to discover your customers’ preferences. Then you make meaningful changes for them.

Heuristic Analysis

Moving along the spectrum, Brian and Joel discuss heuristic analysis, which involves evaluating a site based on established best practices and design principles.

Heuristic analysis is useful for identifying low-hanging fruit and common issues in website design.

Fix the things that are broken: There’s only one rock-solid best practice. Make sure your site is not broken in a way that prevents people from taking the action you want them to take. 

Context matters: Each website is unique, and what works for one may not work for another. Contextual understanding is essential.

Heuristic analysis has its limitations. What seems like an objectively good idea will have secondary effects. Fixing one issue may break the experience for your visitors. 

Heuristic CRO is also more project-oriented. Over the long term, you must maintain constant upward pressure on the conversion rate, because everything else is putting downward pressure on you: ad costs, competition, market fluctuations, new technology. Optimizers watch your data, recognize what’s working and what’s not, and keep moving forward.

Heuristics Plus Message Testing

To take heuristics to the next level, you can use online focus groups or online survey services that let you test your designs. An example is the five-second test. For this type of testing, you design two to four versions of a page and put each variation in front of 25 people for five seconds. Then you ask questions like:

  • Do you know what this company does? 
  • What would you do if you wanted to take action? 
  • Do you think this company is credible?

You’re trying to learn whether visitors can grasp your message at a glance.

The challenge with this type of testing is that people aren’t always honest. They give you the answers they think you want to hear. We like to stick with questions about how well we’re communicating rather than how well we’re presenting the product. That distinction makes sense.

Collecting Data on the Site

As you move further along the spectrum, the focus shifts to more advanced, data-driven strategies. You want to flesh out your heuristic ideas with data, which will reveal the things that tend to lead to the most meaningful hypotheses and ideas.

Here, everything starts with asking good questions and running them through various sources of data. You can leverage session recordings, and for higher traffic sites, you can do heat map reports, which tell you where visitors are clicking and how far they’re scrolling on the page. 

This gives you feedback on where problems exist and gives you pointed ideas at the highest level:

  • Where are people clicking on things that aren’t clickable?
  • Where aren’t people clicking on things that are clickable?

You can use this data to build a better visual hierarchy. This is important because the most important element on the page may be halfway down the page, and only 50% of visitors actually scroll that far. Or you may be overthinking things and making the page more complicated than it needs to be. 

Hypothesis-Driven Testing

Once your site qualifies for AB testing — you can get a reasonable sample size for testing at least one good variation in the space of four to six weeks — you can layer heuristics and data for a full-blown conversion audit assessment. 

A conversion optimization audit reviews your website through the eyes of the visitor. It’s notoriously difficult to put yourself in someone else’s shoes, which is one of the advantages of having an external CRO agency.

An in-house CRO team is prone to test the wrong things. They let group-think interfere with their optimization efforts. 

At Conversion Sciences, we:

  • Follow the scientific method
  • Do a lot of research
  • Collect ideas, score them, and rank them 

A systematic, scientific approach to optimization — experimenting, trying things, and using data to fuel your hypotheses — minimizes risk. The most risky thing is to do nothing. The second most risky thing is to apply group-think to your testing.

The Optimization Spectrum

On one side of the spectrum, you have individual contributors and consultants who do a heuristic review. These people are often able to optimize the low-hanging fruit.

At Conversion Sciences, we work with businesses that have already picked their low-hanging fruit. Because of that, signature wins of 50% to 60% are rare. Our tests generally have 10% to 15% lifts. At higher volumes, we’re able to achieve 3% to 5% lifts that drive constant upward pressure. 

With more sophisticated clients who are already using data effectively, we often discover blind spots. There are things they can’t see or answer questions about. Our Conversion Scientists® can see into those blind spots, and our development team can implement the technical solutions.

The challenge for any optimizer is to balance data collection with forward movement. It’s important to keep testing.

Knowing What and Where to Test

Optimization isn’t just about running tests; it’s about choosing the right elements to test and focusing on high-impact areas. It’s better to get a 2% increase on 100% of conversions than a 100% increase on 1% of the traffic or conversions.

When running an AB test, you want to keep the velocity up. Make sure you’ve always got tests in the water. That’s why it’s helpful to have a CRO agency that can handle development, design, and analytics. 

The Importance of a Conversion Strategist

It’s hard to maintain an in-house optimization program. There are very few conversion strategists — we call them “Conversion Scientists®” — who are actually good at optimization.

A successful CRO program requires a skilled team, and the Conversion Scientist is at its core. This person synthesizes data, develops hypotheses, and leads the optimization efforts.

  • Conversion Scientist: The hub of your CRO efforts. Needs to be experienced and data-driven.
  • Support Team: Includes developers, designers, and copywriters who are adept at testing and iterating based on data.

When we do a conversion consultation, we aim to understand where folks are with their business from a conversion volume perspective. We help them understand what their best path forward is, which may or may not involve us. 

If it doesn’t involve us, we take the time to be present in the moment and give them the best possible advice we can.

If there is a fit, we do our due diligence to determine how we can provide value. We talk about our core values and our North Star word: longevity. This is key because we can only achieve longevity by working with people we truly believe we can help and putting constant upward pressure on their conversion rates.

We have client relationships that have lasted six years or more. Often, by the time we’ve worked together for six years, we are affecting them internally. They begin to use data and experimentation on other parts of their business. That’s longevity.

Your Takeaway

Your optimization strategy exists on a spectrum. In early stages, you can evaluate the data to understand where friction exists. Perhaps you can perform simple tests. But when you’re ready to hire a CRO agency, you need to find an agency that uses the scientific method, understands how to ask the right questions and run the right tests, and seeks a long-term relationship. 

Stay tuned for more ground rules from Two Guys on Your Website.

AB testing tactics can tell you whether your website changes are having a meaningful effect on your visitors. Before and after testing, sometimes referred to as BA testing, is similar, and it’s one of the easiest ways to learn how your design changes are impacting your visitors.

In this guide, you’ll learn what BA testing is, how it works, and how to get reliable results from before and after analysis.

What Is Before and After Testing?

Every change you make to your website is a test. Yet, changes are often made without analyzing the impact. Even small changes can have a material impact on your conversion rates.

Before and after testing (BA testing) is a type of conversion testing that evaluates the impact of changes to your website.  As the name implies, it means comparing the performance of a new website or webpage to the previous version, so you know whether conversions increased or decreased after the change was pushed to production.

This can be as simple as looking at the number of conversions before the new webpage was launched and comparing it to the conversions after.

However, there are some problems with this approach that can lead you to the wrong conclusions. You might decide that the new design is an improvement, when in fact it materially lowered sales or leads. Alternatively, you may not notice a winning design because it looked like conversions went down when you launched it.

We are going to show you how to minimize the chances of making a bad call by using AB testing tactics for analyzing your changes.

Read the Complete AB Testing Guide here.

The Problem with Before and After Testing

The problem with before and after testing is that the results can be influenced by things that have nothing to do with the design change itself.

Here are some of the reasons leads or sales might drop after publishing a change to your website.

Your new design is not as good as the old one.

Thanks to AB testing, we have learned that even small changes to a page’s layout, copy, or images can have a surprising effect on your visitors.

Your new design may not seem as credible to your visitors, making them hesitant to buy or submit a form.

You may tighten up your copy, eliminating information you felt was insignificant. Yet that information may have been important to a large segment of your visitors, leaving an important question unanswered for them.

Seasonality caused a drop in leads.

Many businesses have seasonal increases and decreases in their conversion rates. For example, most ecommerce websites enjoy more eager traffic during the holiday shopping season.

Businesses selling to other businesses (B2B) may experience an increase in leads at the end of the month, end of the quarter, or end of the year due to budgetary influences.

If you release your new design as these seasonal changes are happening, you may decide the new design is performing more poorly, when in fact the change was not the reason for the drop.

Your business changed its traffic mix.

We’d love to believe that the teams bringing traffic to the site are in close communication with the web development team. We know this is not always true.

For example, your paid search agency may change the bidding strategy, add keywords, or change ad copy at the same time you release your new design. This can have a material effect on the quality of the traffic, reducing your conversion rates.

Your email team may have changed the email schedule, reducing this highly qualified traffic.

Again, the change in the performance was not related to your design.

Your business ended a promotion.

It is not uncommon for new designs to be released at the beginning and end of promotional periods. This muddies the water when trying to ascertain if a new layout is truly worse, or if a discount was the reason for a change in sales.

A competitor increased their ad spend or started a promotion.

Your competitors will impact your traffic mix, and you may never know. If a competitor is siphoning off your prized ad traffic, you would expect your conversion rates to drop.

If this happened at the same time as your new design launched, you might conclude the new design is inferior to the original.

There was a technical error on the new design.

Sometimes a great new design is crippled by a bug, a glitch, or a longer load time. Your visitors may have preferred the new design and copy, but wouldn’t tolerate the slow or broken page.

AB Testing vs. BA Testing

The solution to the problems listed above is AB testing. In an AB test, the new design is shown to half of the traffic and the other half of the traffic sees the original. This ensures that any seasonal effects, changes in traffic, promotions, and competitor shenanigans impact both versions equally.

AB tests are designed to ensure that nothing other than the design itself differs between the original and the new version. We can use the same AB testing tactics to give us more confidence in our before and after analysis.

With a little discipline, we can evaluate a new page against its predecessor with greater confidence in the conclusions.

Why Not Just Do an AB Test?

There are many reasons we will do a before and after test instead of an AB test.

  • We do not have the tools and team to do AB testing.
  • There are too many other AB tests running.
  • We don’t think a change is impactful enough for an AB test.
  • The change is temporary, but we still want to learn from it.
  • The site doesn’t get enough traffic to run effective AB tests.

The Benefits of Before and After Analysis

There are some serious advantages to doing before and after analysis.

Before and after tests don’t require sophisticated testing tools.

The benefit of a BA test is that you don’t need to use any fancy AB testing tools. Your Analytics tool (such as Google Analytics) has the data you need.

Before and after tests don’t require sophisticated planning and setup.

While we are going to apply AB testing tactics to our before and after analysis, before and after testing doesn’t require all of the planning and setup required by an AB test.

You can go back and evaluate past changes.

As long as you know the dates of the changes made to your website, and the specifics of those changes, you can determine how they impacted your conversion rates. This is not true of AB testing.

Before and after analysis uses analytics reports, and you probably have data going back a year or more.

AB Testing Tactics to Use for Before and After Analysis

To get the best analysis of the change, we want to follow the rules that an AB test follows.

  • The sample sizes should be similar.
  • The traffic should be similar.
  • We want to control for seasonality.
  • We want to compare conversion rates, not conversions.

Create a segment for your change.

Not every change will be a sitewide change. Often your change will affect only a single page or a page template.

To increase the accuracy of the analysis, only consider traffic that has seen the pages on which the change was made.

For example, if you change the layout of the product page template for your ecommerce website, you will want to create a segment of visitors that includes only those sessions that saw at least one product page. If you changed something in the design of your site’s header, you will use all of the traffic in your analysis.
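
As an illustration, here’s one way to build that kind of segment from exported session data using pandas. The column names and data here are hypothetical:

```python
import pandas as pd

# Hypothetical export: one row per session, with the pages viewed in it.
sessions = pd.DataFrame({
    "session_id": [101, 102, 103, 104],
    "pages": [["/home"], ["/home", "/product/a"], ["/product/b"], ["/cart"]],
    "converted": [False, True, False, True],
})

# Keep only sessions that saw at least one product page.
saw_product = sessions["pages"].apply(
    lambda pages: any(p.startswith("/product/") for p in pages)
)
segment = sessions[saw_product]
print(f"Segment conversion rate: {segment['converted'].mean():.1%}")
```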

Select the proper parameters for your analysis.

You are going to compare the performance of a page or website for a period before and a period after the launch of a change. If you’ve just launched a design change to production, you will have to wait to generate data for the new design.

How long?

Ideally, you would wait a period of time long enough to generate at least 100 conversions. Since before and after testing is prone to errors, you may wish to double this number.

For example, if you had 80 conversions before your BA test and 120 conversions afterwards, it appears that the changes have had a positive impact on the conversion rate. But with such a small sample size, it’s hard to be sure. By waiting until you reach 200 conversions after your changes, you may see a different result. At the very least, it minimizes the margin of error in before and after testing.
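
A quick back-of-the-envelope helper for estimating the wait, assuming steady traffic (the example numbers are hypothetical):

```python
from math import ceil

def days_to_wait(target_conversions, daily_visitors, conversion_rate):
    """Rough wait time to accumulate a target number of conversions."""
    return ceil(target_conversions / (daily_visitors * conversion_rate))

# e.g., 400 visitors/day converting at 3% -> about 17 days to reach 200
print(days_to_wait(200, daily_visitors=400, conversion_rate=0.03))
```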

Select appropriate before and after periods.

To gain reliable insights from BA testing, you must run the test for a sufficient duration. Select a period of time that’s long enough to capture a significant amount of data and account for any short-term fluctuations. 

Consider selecting durations of at least a month or longer to ensure an adequate sample size of conversions.

Compare similar timespans.

The length of the time that you choose for the original “before” version of the page should be the same as for the new page.

Use similar time periods in BA testing.

Assuming that traffic is pretty consistent, this should give you a similar sample size for the before and after analysis.

Choose similar timeframes that have a similar number of conversions.

BA Testing Tips

Evaluate similar traffic.

The kind of traffic you are receiving and the quality of the traffic coming to your changed page will impact your before and after analysis.

In the following example, the company began sending email at the same time that changes were launched. Email traffic typically converts at rates several times the site average. (Hint: Are you doing enough with email?)

Make sure traffic is consistent in a BA test.

Because of this, it will appear that the change made to the page caused a jump in conversions.

The solution here is to use only those traffic channels that remained consistent. In Google Analytics, use the Channels report to see if any traffic source changed.

Watch out for intra-week and intra-month seasonality.

Google Analytics 4 (GA4) allows you to compare two periods quite easily. But, you should be aware that there is “seasonality” within each week and, often, within a month.

In the following example, the before period has five weekend days. The after period only has four. Since conversions typically drop on weekends, this may skew your analysis in favor of the original design.

Two calendars with 18 days highlighted on each. One contains four weekend days and the other contains five.

In this example, the before period covers the end of the month while the after period covers the beginning of the month. This can cause skew in certain kinds of businesses.
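
Before you lock in your two periods, it’s worth counting the weekend days in each. A small standard-library sketch, with hypothetical dates that mirror the five-versus-four example above:

```python
from datetime import date, timedelta

def weekend_days(start, end):
    """Count Saturdays and Sundays in an inclusive date range."""
    total = (end - start).days + 1
    return sum((start + timedelta(d)).weekday() >= 5 for d in range(total))

before = (date(2024, 5, 1), date(2024, 5, 18))   # 18 days, 5 weekend days
after = (date(2024, 5, 20), date(2024, 6, 6))    # 18 days, 4 weekend days
print(weekend_days(*before), "vs", weekend_days(*after))
```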

Beware of year-over-year analysis.

Year-over-year analysis is one way to control for seasonality when doing a before and after test. With this analysis, you choose the period from last year that maps onto the period from this year that includes your site changes.

Be sure to take into account big changes in site traffic and buyer behavior, such as those we saw in the pandemic. Visitor behavior changed when we were encouraged to stay at home. And as the virus surged and waned, that behavior changed.

These types of behavioral changes lead us to put less faith in year-over-year analysis.

Compare conversion rates, not conversions.

While we hope to minimize changes in traffic when selecting time periods for our before and after test, there will be changes. A drop in traffic may reduce conversions, even as changes to our pages make the conversion rate go up.

We don’t want to make this mistake.

Track conversion rate (dark blue line) instead of conversions (light blue line) in BA testing.

Be sure to choose the conversion rate to evaluate your change, not just the number of conversions.

For ecommerce businesses, you should evaluate Revenue per Visit. This metric combines changes in conversion rate plus changes in average order value (AOV) that your change may have impacted.
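
Here’s a tiny worked example of why the distinction matters; the numbers are invented. Traffic and raw conversions both fall, yet conversion rate and Revenue per Visit improve:

```python
def conversion_rate(conversions, visits):
    return conversions / visits

def revenue_per_visit(revenue, visits):
    """Folds conversion rate and average order value into one metric."""
    return revenue / visits

# Hypothetical periods: traffic fell, so raw conversions fell too...
before = {"visits": 50_000, "orders": 1_500, "revenue": 120_000}
after = {"visits": 40_000, "orders": 1_280, "revenue": 108_800}

# ...but the rate and revenue per visit actually improved.
print(f"CR:  {conversion_rate(before['orders'], before['visits']):.2%}"
      f" -> {conversion_rate(after['orders'], after['visits']):.2%}")
print(f"RPV: ${revenue_per_visit(before['revenue'], before['visits']):.2f}"
      f" -> ${revenue_per_visit(after['revenue'], after['visits']):.2f}")
```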

Look for BIG changes in performance.

Even with all of this careful selection of time periods, we are subject to changes over time that we cannot control. As I’ve pointed out, we cannot know whether competitor behavior or visitor sentiment has shifted.

As a result, we cannot put faith in small changes from our before and after analysis.

For a well-designed A/B test, we set our maximum P-value at 0.05. This ensures we have a 95% or better confidence that any change we’re seeing between the two periods is not due to random chance. For a before-and-after analysis, I recommend setting the P-value to 0.01. This means that we want to be 99% sure that a change in conversion rate (or other relevant metric) is not just random chance.

How can we calculate this P-value? We’ll use an A/B test calculator.
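
If you want to sanity-check the calculator, here’s a minimal sketch of a pooled two-proportion z-test in Python. It’s one common way to arrive at a P-value for this kind of comparison; the conversion counts below are hypothetical, and online calculators may use slightly different methods:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b, two_tailed=True):
    """Pooled two-proportion z-test on before/after conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    tail = 1 - NormalDist().cdf(abs(z))
    return 2 * tail if two_tailed else tail

# Hypothetical counts: 900/30,000 before vs 1,041/30,000 after (~15.7% lift)
print(f"P-value: {two_proportion_p_value(900, 30_000, 1041, 30_000):.4f}")
```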

Use an A/B Test Calculator for before and after analysis.

To get a statistical calculation of how our change affected our conversion rate, we can use one of the free A/B Test Calculators available on the web. We like the CXL Test Calculator.

A screen shot of the CXL AB Test Calculator with the tab "Test Analysis" highlighted.

Since we are analyzing existing data as if we had run an A/B test, we’ll choose “Test Analysis.”

We can then go to Google Analytics and create a report with the date ranges we’ve selected for the before and after periods. It might look something like this:

GA4 report showing Users and Conversions for two periods of time.

In this case, our report shows Conversions (purchases), Users and the conversion rate for purchases.

We will plug these numbers into our A/B test calculator.

A screenshot of the test calculator with the data from the GA4 report entered into it.

We can see in the report that:

  1. We have a 15.8% lift from the original page (control) to the new page (variation). Is this a believable lift?
  2. Our P-value is less than 0.001, which corresponds to a confidence level higher than 99%.

In this case we would be proud of our change and would keep the new variation on the site.

Here’s an example of a test that went the other way:

Screen capture of the AB test calculator. In this case the Control was worse than the Variation.

In this case, the change we made had a negative impact on the purchase conversion rate. We can switch to a “two-tailed” test to see if the variation is statistically worse. In this case, the P-value is less than 0.001, meaning the variation underperforms the control with more than 99% confidence.

We would recommend that you go back to the original page.

What if the P-value isn’t < 0.01? For instance, what if the control was better, but with a P-value of 0.04?

We call this “inconclusive.” There is not enough evidence that the variation is worse than the control to eliminate it. In such a situation you can choose to keep the variation or roll back to the original. Even if the variation had a lower conversion rate, if it wasn’t statistically significant, either could be the long-term winner.

However, most of the time, people feel safer keeping the one with the higher conversion rate.

When to Use Before and After Testing

Before and after testing, or pre-post analysis, is like having design insurance. Designers and IT people are all too eager to make changes to the website. However, many of those changes will inadvertently decrease marketing performance. This is a statistically valid way to ensure others aren’t working against the business.

You can learn something about your visitors from these tests. When you understand what increases (or decreases) the business performance of the website, you can infer the preferences of your visitors and customers.

Finally, before and after testing lets you “sign the flowers,” a way of saying that you can take credit for conversion-improving changes. If you’ve earned it, take it.

Get Help from the Conversion Scientists®

BA testing, AB testing, and designing experiments can be complicated and confusing. With Conversion Sciences, you have a skilled team of Conversion Scientists who can work with you or provide a done-for-you conversion optimization program.

Looking for AI optimization services from people who understand the critical importance of data? When you partner with Conversion Sciences, you get better conversion rates as well as a well-optimized process. Let’s chat.

Here are the top 20 CRO-worthy AB testing tools to consider to help you increase your conversion rates in 2024.

There are a ton of AB testing tools on the market right now, and that number is only going to increase. When evaluating these tools for use in your own business, it can be difficult to wade through the marketing rhetoric and identify exactly which tools are a good fit. That’s why we reached out to our network of CRO specialists in order to bring you a comprehensive look at the best AB testing tools on the market.

Our goal here isn’t necessarily to give you a complete review of each tool, but rather, to show you which split testing tools are preferred by full-time CRO experts — people whose businesses depend completely on the results they are able to deliver to their clients.

We’ll cover two primary categories of tools:

  1. Tools for running the actual AB tests: Most Recommended AB Testing Tools
  2. Tools for collecting data in order to make good hypotheses: 12 Tools For Gathering Data

At the end of the day, the “right” tool is going to vary depending on the business. As Paul Rouke, former Founder and Director of Customer-Centricity at CX agency PRWD, explains:

We see it time and time again: companies sign up to multi-year contracts for feature rich, enterprise level tools which have a fantastic looking client list, and it ends up burning through their entire CRO budget. Companies invest without considering the need for resource and skills, or they are simply sold on the tool’s ‘ease of use’.

Many companies don’t have the internal skills in place yet to actually utilize this tool, and so the all-singing, all-dancing tool hardly gets used. Also, people using the tool don’t understand the need for or cost of customer research, data, psychology, design, UX principles, etc., meaning they’re ultimately testing the wrong things.

The tools that in my experience deliver the most long-term value are those which are reasonably priced, allowing companies to spend more of their budget on making sure they are testing intelligently and developing an effective testing process.

No tool on this list will be the right fit for every business. That said, without breaking up our list into tiers, we would like to note 4 tools that came up very consistently from the experts we queried.

The two most popular AB testing tools by a wide margin were Optimizely and VWO. These are the most common AB testing tools used by Conversion Sciences clients, and virtually every single expert we chatted with is using both of these tools on a regular basis.

Another two tools that came up in our original poll (in about a third of responses) were Convert Experiences and UsabilityHub. Both of these tools received consistently strong reviews from the experts who used them and fill key needs in the CRO space, which we’ll discuss in their respective entries.

Without further ado, let’s take a look at our list of recommended AB testing tools.

What We Use at Conversion Sciences

Our use of AB testing tools allows us to do post-test analysis of different segments.

Our post-test analysis stack:

So important is this stack to us that we’ve created a Google Sheets add-on for Google Analytics 4 (GA4).

Most Recommended AB Testing Tools

Now, let’s look at our experts’ recommended options for running AB tests. We’ve listed these in order of frequency with which they were mentioned by our experts. This is not to be confused with a quality ranking.

  1. Optimizely
  2. VWO
  3. Convert Experiences
  4. SiteSpect
  5. AB Tasty
  6. Evolv
  7. Kameleoon
  8. Qubit
  9. Adobe Target
  10. Marketing Tools With Built-In Testing

1. Optimizely

Optimizely is the leading A/B testing tool.

Optimizely is basically the big kid on campus. It’s our experts’ go-to choice for working with enterprise level clients, and despite the significant price increases over the years, it remains the king.

It’s also reasonably user friendly for such a complex tool, as Shanelle Mullin summarizes:

Optimizely is the leading A/B testing tool by a fairly large margin. It’s easy to use – you don’t need to be technical to launch small tests – and the Stats Engine makes testing easier for beginners.

Here at Conversion Sciences, we use this tool every single day, so I asked them to give me a few thoughts on what they like and dislike about it.

According to the team, Optimizely offers some of the following benefits.

  • Easy editing access through the dashboard
  • Retroactive filtering (e.g., IP addresses)
  • Intuitive data display and goal comparison
  • Saved Audiences (not available in VWO)
  • Great integration with 3rd-party tools

AB testing software Optimizely dashboard with AB Test Experiments highlighted and Edit highlighted.

On the flip side, Optimizely is a bit lacking in these ways:

  • Test setup is not as intuitive compared to other tools
  • Slow updates for saved changes to the CDN
  • Doesn’t carry through query params/cookies within a certain test
  • Targeting is more difficult

Optimizely’s multivariate testing setup is simple and intuitive, and it’s the leading split testing tool for a reason. For businesses with the budget and team to utilize Optimizely to its fullest potential, it is clearly a must-own.

2. VWO

AB Testing Tool VWO Dashboard.

Coming in just behind Optimizely in the AB testing pantheon is Visual Website Optimizer (VWO). VWO is incredibly popular in the marketing space, and in addition to serving as a top choice for businesses with smaller budgets, it is also frequently used in conjunction with Optimizely by businesses who run complex testing campaigns.

Our Conversion Scientists® feel VWO offers some of the following benefits as compared to Optimizely:

  • More intuitive interface with color coding
  • Faster updates
  • Easier goal setup
  • Easier to download data
  • Better customer support

On the flip side, VWO is lacking in the following areas:

  • Can’t view goal reports all at once, which makes them harder to compare
  • No saved targeting, so you must start fresh with each test unless you clone
  • No cumulative CR graph if you have low traffic (or what VWO considers low traffic). Instead it gives CR ranges. You must export the data to get any usable information.

This perspective is mostly shared by the ConversionXL team as well, as explained by Shanelle Mullin:

VWO is very easy to use, especially with its WYSIWYG editor. They have something similar to Optimizely’s Stats Engine called Smart Stats, which is based on Bayesian decisions. VWO also offers heatmaps, clickmaps, personalization tools and on-page surveys.

Overall, VWO is an intriguing solo option for smaller to midsized businesses and also works very well in conjunction with Optimizely for enterprise clients.

3. Convert Experiences

AB testing tool Convert Experiences.

While Optimizely and VWO were the tools most commonly mentioned, Convert Experiences received some of the most effusive praise from those who had worked with it.

It seems to have hit a sweet spot for SME/SMBs, combining an exceptional power-to-price ratio with an intuitive interface and highly regarded customer support.

We are platform agnostic, so if our client already has a tool in use, then we try to use that.  But in cases where the client has never done any testing before, we typically look first to use Convert (convert.com).  I like Convert for a number of reasons.  From the very beginning, it has been one of the easiest tools to integrate with Google Analytics.  Also, for tricky variations, I’ve had better luck with Convert than others (Optimizely) at getting the variation to display just the way we want.  And the support at Convert has always been excellent—again, better than most of their competitors.

We focus on small to medium size clients, and Convert is excellent for that segment with flexible pricing.  It’s a great solution for small businesses doing in-house conversion optimization, but it can also work very well for agencies.

– Tom Bowen, Web Site Optimizers

Convert Experiences also stood out as the type of tool that catches new fans wherever it’s discovered, leading me to believe that it will continue to grow and pick up market share.

We have come across convert.com more and more in recent months working on client campaigns.  If you are a true marketer and want actionable data then they are a good choice.  The user interface is actually pretty good and you can actually understand the data they give you on experiments.  They run on the typical drag and drop style experiment setup engine that most others do and can be manipulated even if you aren’t a technical wizard.

The price isn’t too bad either as they fall somewhere in the middle of Optimizely and VWO.  I would recommend them to someone who has a bit of budget constraints but wants a bit more testing power.  We have used them on multi million dollar per month campaigns with much success.

– Justin Christianson, Conversion Fanatics

Convert Experiences is known for having some of the most robust multivariate testing options in its class. At the same time, it is also one of the few tools in its class not to offer any sort of email split testing capabilities.

Overall, it’s a highly recommended AB testing tool that is worth trying out.

Convert has great customer support (via live chat) and is easy to use. We’d recommend it to the same people who are considering using Optimizely and VWO.

– Karl Blanks, Conversion Rate Experts

4. SiteSpect

AB testing software SiteSpect report.

SiteSpect initially distinguished itself as one of the first server-side testing solutions on the market, and it has remained a top choice for more technically sophisticated users and security-conscious clients.

For a long period, SiteSpect was one of the few platforms offering a server-side solution. This has given them a huge advantage by allowing more complex testing, by adapting to newer JavaScript technologies, and by accommodating security-conscious clients. – Stephen Pavlovich, Conversion

SiteSpect has the advantage that it works in a different way. It’s tag-free. SiteSpect edits the HTML before it even leaves the server, rather than after it has hit the user’s browser. It tends to be popular with companies that want to self-host and are technically sophisticated. – Karl Blanks, Conversion Rate Experts

As a server-side testing solution, SiteSpect avoids many of the issues that can arise with the more typical browser-based testing platforms that utilize javascript tags.

  • Tag-based solutions typically charge by the number of tag calls you make, even if those tags don’t end up being used.
  • Tag-based solutions often require third-party cookies, which certain browsers or browser settings might not support, causing you to lose the ability to test a large percentage of traffic.
  • Tag-based solutions can have imprecise reporting because the javascript doesn’t always fire.

While this value proposition won’t be the deciding factor for many businesses, for those requiring a server-side solution, SiteSpect is one of the best options on the market.

5. AB Tasty

AB testing software AB Tasty reports screen capture.

AB Tasty is a solution for testing, re-engagement of users, and content personalisation, designed for marketing teams. Paul Rouke had a good bit to say here, so I’ll let him take it away.

The tools that in my experience deliver the most long-term value are those which are reasonably priced, allowing companies to spend more of their budget on making sure they are testing intelligently and developing an effective testing process. I talk about this in-depth in my article The Great Divide Between BS and Intelligent Optimization.

On this note, my favorite tool would be something like AB Tasty, which is priced sensibly, yet has a powerful platform that facilitates a wide range of testing, from simple iterative tests through to innovative tests, along with strategic tests which can help evolve a business proposition and market positioning.

I would recommend AB Tasty (and similarly Convert.com) to the following types of companies:

(1) Companies just starting to invest in conversion optimisation – they won’t break the bank, they won’t overwhelm you with add-ons you will never use as you’re starting out, but they have the capability to match your progress as you scale up your testing output

(2) Companies who have been investing in conversion optimisation but who want to start using a higher portion of their budget (75% or more) on people, skills, process and methodology in order to deliver a greater impact and ROI

(3) Companies frustrated at investing significant amounts of money in enterprise testing platforms, which aren’t being used anywhere near their potential and are taking away from the budget for investing in people, skills and developing an intelligent process for strategic optimisation.

6. Evolv AI

AB testing software Ascend uses machine learning.

Evolv brings advanced machine learning algorithms to the CRO space, helping you identify exactly why your customers aren’t converting, how to fix it, and the potential revenue impact. It was one of the first conversion optimization apps to leverage AI, and it’s becoming exponentially more precise over time.

This is important because it speeds up multivariate testing. Evolutionary, or genetic, algorithms do a better job of finding optimum combinations, isolating the richest local maximum for a solution set.

Our team of scientists loves being able to assemble our highest rated hypotheses and throw them into the mix to have the machine sort them for us. This really is the future of conversion optimization.

7. Kameleoon

Kameleoon is a web experimentation tool that offers some of the most well-thought-out reporting of any tool we’ve used. They offer features for websites and apps with a dash of AI to identify segments and predict conversions.

Kameleoon makes it easy for our product managers and marketing teams to build experiments. It fits into our tech stack and our existing product release process. Developers get feature flagging and we get to experiment without taking up all their time.

– Alexandre Suon, Head of Experimentation, Accor Group

8. Qubit

Testing platform Qubit example screen capture.

Qubit is a testing platform focused primarily on personalization. Accordingly, it has some of the strongest segmentation capabilities of any tool on this list.

Qubit has a strong focus on granular segmentation – and the suite covering analytics through to testing gives it an advantage. They’ve now broken out of their traditional retail focus to become a strong personalisation platform across sectors.

– Stephen Pavlovich, Conversion

If advanced segmentation or personalization are a priority for your business or clients, Qubit is a tool worth checking out.

9. Adobe Target

AB testing software Adobe Target screen capture.

Long known for being the most expensive AB testing tool on the market, we’ve found that Adobe Target works best with sites that already use Adobe Analytics.

If your business is already paying for Adobe Analytics, adding Adobe Target is virtually a no-brainer. If your business is not using Adobe Analytics, ignoring Adobe Target is virtually a no-brainer.

Here’s how Stephen Pavlovich feels about it:

I like Adobe Target. The integration of Adobe Analytics and Target is strong – especially being able to push data two-ways. And the fact that Target is normally an inexpensive upsell for Analytics customers is a bonus.

2 Marketing Software Tools With Built-In AB Testing

In addition to dedicated AB testing tools, there are some great marketing software tools out there that include built-in split testing capabilities. This is fairly common with tools like landing page builders, email service providers, and lead capture solutions.

As Justin Christianson explains, there are some positives and negatives to relying on these built-in tools:

Most page builders out there such as LeadPages and Instapage have split testing capabilities built into their platforms. The problem is you don’t have much control over the goals measured and the adaptability to test more complex elements. The good thing is they are extremely easy to set up and use for those quick and dirty type tests. I recommend the use of this to just get some tests up and running, as constantly testing is extremely important. If you are currently using a platform with these native testing capabilities then this is a good place to start.

1. Unbounce

One particular tool that was highlighted by several of our experts was Unbounce, one of the web’s more popular landing page builders.

I also like Unbounce, and not just because I like Oli Gardner. It seems most everyone there lives and breathes landing pages, so the expertise that comes with the tool is virtually unmatched.  Their support is also excellent.  Unbounce works really well when we’re creating a new landing page from scratch and want to try different variations, since it’s so easy to create brand new pages using the tool.

– Tom Bowen, Web Site Optimizers

Unbounce is an excellent tool for A/B testing your landing pages. While many landing page tools also offer A/B testing, I think Unbounce has the best and most flexible page editor when creating variations of your pages to be tested, and their landing page templates have the most CRO best practices included already.

Unbounce is outstanding for online marketing teams that want the most flexibility when creating and A/B testing their landing pages – many other landing page tools are limited to a fixed grid system which makes it much harder to make changes.

– Rich Page

2. OptinMonster

Another popular tool was OptinMonster, which began as a popular popup tool and has evolved into a more fully featured lead generation software.

Optin Monster is an outstanding tool that lets you easily A/B test visitor opt-in incentives to see which converts best – not only headlines, images and CTAs, but also which types perform best (like a discount versus a free guide). In particular it offers great customization options and many popup styles, and exit intent popups.

Optin Monster is particularly useful for the many website marketers who don’t have enough traffic to do formal A/B testing (using tools like Optimizely or VWO) but still want to get a better idea of their best performing content variations. It has great pricing options suitable for online businesses on a low budget.

– Rich Page

12 Tools For Gathering Data

As every good split tester knows, your AB tests are only as good as the hypotheses you are testing. The following tools represent our experts’ favorite choices for collecting data to fuel effective AB tests.

  1. UsabilityHub
  2. Google Analytics
  3. Crazy Egg
  4. UserTesting.com
  5. UserZoom
  6. ClickTale
  7. HotJar
  8. Mouseflow
  9. Inspectlet
  10. SessionCam
  11. Lucky Orange
  12. Adobe Analytics

1. UsabilityHub

User testing platform UsabilityHub.

UsabilityHub was by far the analytics tool most frequently mentioned by our group of CRO experts. It is a collection of 5 usability tests that can be administered to visitors in order to collect key insights.

UsabilityHub is great for clarity testing and getting quick indications of potential improvements. It is also great for uncovering personal biases in the creation of page variations. I would recommend it to anyone doing conversion optimization or even basic usability testing.

– Craig Andrews, allies4me

While many of the tools on this list deal primarily with quantitative data, UsabilityHub offers uniquely efficient ways to collect valuable qualitative data.

Once I’ve identified underperforming pages, the next step is to figure out what’s wrong with those pages by gathering qualitative data. For top landing pages, including the homepage, I like to run one of UsabilityHub’s “5 Second Tests” to gauge whether people understand the product or service offered. The first question I always ask is “what do you think this company sells?”. I’ve gotten some surprisingly bad results, where large numbers of respondents gave the wrong answer. In these cases, running a simple A/B test on a headline and/or hero shot to clarify what the company does is an easy win.

– Theresa Baiocco, Conversion Max

It can also be a cost-effective alternative if your website doesn’t get enough traffic to make use of an actual split testing tool.

UsabilityHub is essential if you want to do A/B testing but your website doesn’t have enough traffic to do so. Instead it enables you to show your proposed page improvements to testers (including your visitors) to get their quick feedback, particularly using the highly useful ‘Question Test’ and ‘Preference Test’ features.

UsabilityHub can be particularly useful for the many website marketers who don’t have enough traffic to do formal A/B testing (using tools like Optimizely or VWO) but still want to get a better idea of their best performing content variations.

– Rich Page

2. Google Analytics

Analytics platform Google Analytics.

To the surprise of exactly no one, Google Analytics was high up on the list of recommended analytics tools. Yet despite its popularity, very few marketers or business owners are using this free tool to its full potential.

Theresa Baiocco makes the following recommendations for getting started:

There’s so much data in Google Analytics that it’s easy to suffer from paralysis by analysis. It helps to have a few reports you use regularly and know what you’re looking for before jumping in. The obvious reports for finding the most problematic pages in your funnel are the funnel visualization and goal flow reports. But I also like to look at top landing pages, and using the “comparison” view, I see which of them have higher bounce rates than average for the site. Those 3 reports together are a good starting point for identifying which pages to work on first.
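
If you’d rather pull those landing-page numbers programmatically than click through reports, the (now-legacy) Universal Analytics Reporting API v4 exposes the same data. Here’s a minimal sketch, not an official Conversion Sciences workflow; it assumes the google-api-python-client package, a service account with read access to your view, and placeholder values for the key file and view ID:

```python
# Hedged sketch: shortlist top landing pages and their bounce rates via the
# Universal Analytics Reporting API v4. KEY_FILE and VIEW_ID are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]
KEY_FILE = "ga-service-account.json"  # placeholder path to your key file
VIEW_ID = "123456789"                 # placeholder GA view ID

creds = service_account.Credentials.from_service_account_file(KEY_FILE, scopes=SCOPES)
analytics = build("analyticsreporting", "v4", credentials=creds)

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:sessions"},
                    {"expression": "ga:bounceRate"}],
        "dimensions": [{"name": "ga:landingPagePath"}],
        "orderBys": [{"fieldName": "ga:sessions", "sortOrder": "DESCENDING"}],
    }]
}).execute()

# Print the top landing pages so you can compare each against the site average
for row in response["reports"][0]["data"].get("rows", [])[:10]:
    page = row["dimensions"][0]
    sessions, bounce = row["metrics"][0]["values"]
    print(f"{page}: {sessions} sessions, {float(bounce):.1f}% bounce")
```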

When it comes to applying Google Analytics to your AB testing efforts, John Ekman of Conversionista offers some advice:

Most of the AB testing tools provide an easy integration with Google Analytics. Do not miss this opportunity in your AB testing setup!

When you integrate your testing tool with GA, it means that you will be able to break down your test results and look at A vs. B in all dimensions available in GA. You will be able to see behavior segmented by device, returning vs. new visitors, geography, etc.

For example: if you are using the Enhanced Ecommerce setup for GA, you will be able to compare your ecommerce funnel for the A version vs. the B version. Maybe the A version gets more add-to-carts, but then that effect wears off and the result in the checkout is the same?!

Example of Google Analytics ecommerce report for AB test variation.
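
If your testing tool lacks a native GA integration, you can push the variant assignment yourself. Here’s a minimal server-side sketch, assuming a (now-legacy) Universal Analytics property; the property ID, client ID, and the cd1 custom-dimension slot are placeholders you would swap for your own configuration:

```python
# Hedged sketch: record which AB test variation a visitor saw as a GA event
# plus a custom dimension, via the Universal Analytics Measurement Protocol.
import requests

GA_ENDPOINT = "https://www.google-analytics.com/collect"

def report_variant(client_id: str, variant: str) -> None:
    payload = {
        "v": "1",             # Measurement Protocol version
        "tid": "UA-XXXXX-Y",  # placeholder property ID
        "cid": client_id,     # anonymous client ID for this visitor
        "t": "event",         # hit type
        "ec": "ab-test",      # event category
        "ea": "assignment",   # event action
        "el": variant,        # event label: "A" or "B"
        "cd1": variant,       # custom dimension 1 = variant seen (your index may differ)
    }
    requests.post(GA_ENDPOINT, data=payload, timeout=5)

# Tag a visitor as having been served variation B
report_variant(client_id="35009a79-1a05-49d7-b876-2b884d0f825b", variant="B")
```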

Word of warning: as soon as you start segmenting your data, you might lose statistical significance in the underlying segments. Even if your AB test results are statistically significant at the overall level, that does not mean that the deviations you see in smaller segments of your test data are significant. The smaller the sample size, the harder it is to reach significance. What looks like a strong signal may just be data noise.
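
John’s warning is easy to demonstrate with numbers. Below is a minimal sketch (the figures are invented for illustration) using a standard two-sided two-proportion z-test: the same relative lift that is significant across the full sample fails to reach significance in a segment a tenth the size.

```python
# Two-sided two-proportion z-test -- a standard significance test for
# comparing conversion rates. All figures are invented for illustration.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) given conversions and visitors per arm."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
    return z, p_value

# Overall result: 10,000 visitors per arm, 5.0% vs 5.8% conversion
print(z_test(500, 10_000, 580, 10_000))  # z ~ 2.50, p ~ 0.012 -> significant

# Mobile-only segment: 1,000 visitors per arm, the SAME relative lift
print(z_test(50, 1_000, 58, 1_000))      # z ~ 0.79, p ~ 0.43 -> just noise
```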

For those interested in tapping into the full potential of Google Analytics, here are some resources you may need.

3. Crazy Egg

User intelligence tool Crazy Egg confetti report.

Crazy Egg is one of the more popular heatmap and click-tracking tools online, thanks to an attractive interface, an affordable price point, and a deceptively powerful feature set.

Our Conversion Scientists not only use Crazy Egg, we highly recommend it. Here’s what they say about it:

Crazy Egg offers tools to help you visually identify the most popular areas of your page, help you see which parts of your pages are working and which ones are not, and give you greater insight as to what your users are doing on your pages via both mobile and desktop sites.

4. UserTesting.com

User testing platform UserTesting.com.

UserTesting.com is a unique service that provides videos of real users in your target market experiencing your site and talking through what they’re thinking.

This service is recommended by Craig Andrews, who had the following to say:

UserTesting.com is great for hypothesis generation and uncovering personal biases. It is an absolutely fantastic tool for persuading clients of the reality and importance of certain site issues, and I would recommend it to anyone doing conversion optimization or even basic usability testing.

5. UserZoom (formerly Validately)

UserZoom (formerly Validately) user testing video.

UserZoom provides a complete online user testing service.

For a somewhat less expensive alternative to UserTesting.com, we have found UserZoom to be an effective solution. The quality of the panel members is good, and their panel is large enough that user tests are completed quickly.

6. ClickTale

Heatmapping and session recording tool ClickTale dashboard.

Clicktale is a cloud-based analytics system that allows you to visualize your customers’ experience on your website from their perspective. It’s an enterprise-level tool that combines session recording with click and scroll tracking, and while it comes with an enterprise price tag, it’s made some significant quality strides over the last few years.

As Dieter Davis summarized recently for UX Magazine:

There has been a huge improvement in Clicktale over the past three years, in tracking, reporting and accuracy. If you want an “any old session recording JS” boxed-product application, there are a variety of options. If you want accurate rendering that is linked to your existing analytics and a company that will help you tune as your own website evolves, then Clicktale is a good choice. It’s the one I’ve chosen as I wouldn’t want to risk the privacy of my customers or risk degrading the performance of my website. Clicktale also gives me a representative sample that is accurate by resolution and responsive design.

7. Hotjar

Hotjar offers heatmap reports, session recordings, polls, surveys and more.

Hotjar is a jack-of-all-trades: an all-in-one tool that does heatmaps, scroll tracking, recordings, funnel tracking, form analysis, feedback polls, surveys, and more.

And from what a few of our Conversion Scientists have seen so far, it does all of those things about as well as you would expect from a jack of all trades.

On the plus side, Hotjar has prioritized creating an exceptional user experience, so if you are a solo blogger wanting a feature-rich, easy-to-use toolkit in one place with a reasonable price tag, Hotjar might be the perfect choice for you.

Stephen Esketzis had the following to say about his experience with the tool:

So overall, Hotjar really is a great tool with a lot of value to offer any online business (or website in general, at that). There aren’t many online businesses I wouldn’t recommend this tool to.

With a no-brainer price point (and even a free plan) it’s pretty hard to go wrong.

8. Mouseflow

Mouseflow is another Swiss army knife of user intelligence. The service bundles screen recording, heatmap reports, on-site surveys, funnel tracking and form analysis.

Mouseflow user behavior analytics tool.

We like it because it provides advanced segmentation. Filters include traffic source, platform, location, and more. It also supports segmentation by custom variables.

On the Intended Consequences podcast, Evan Hill said of the power of data:

“I think that’s one of the most exciting things for a marketer who finally grabs this tool and installs it, because they’re about to get the data they need to have really, really interesting meetings.”

9. Inspectlet

Session recording software Inspectlet.

Inspectlet is primarily a session recording tool, with heatmaps layered on top. Here’s what Anders Toxboe had to say about it in a recent review:

Inspectlet is simple to use. It gets out of the way in order to let the user do what he or she needs. The simple funnel analysis and filtering options are a breeze to use and covered my basic needs. Inspectlet does what it does well, with a few minor glitches. It doesn’t have the newer features that have started appearing lately, such as watching live recordings, live chatting, surveys, and polls.

In other words, Inspectlet is an easy-to-use, budget-friendly session recording tool that might be right for you depending on your needs.

10. SessionCam

Session recording software SessionCam offers a Suffer Score.

SessionCam is a session recording tool that has also added heatmaps and form analytics to its offering. It’s a classic example of a tool that combines better-than-average functionality with a more-difficult-than-average user interface.

Peter Hornsby had the following to say in his review for UXmatters:

SessionCam provides a lot of useful functionality, but its user interface isn’t the easiest to learn or use. Getting the most out of it requires a nontrivial investment of time.

And later:

UX designers have long known that, where there is internal resistance to change, showing stakeholders clear evidence of users experiencing problems can be a powerful tool in persuading them to recognize and address issues. SessionCam meets the need for a tool that provides this data in a much more dynamic, cost-effective way than using traditional observation techniques.

SessionCam [also] manages [to protect user data] effectively by masking the data that users enter into form fields, so you can put their concerns to rest.

If you are looking for a more robust session recording and form analytics tool that keeps user data safe, SessionCam is worth checking out.

11. Lucky Orange

Lucky Orange is kind of like Crazy Egg with a bit of UserTesting.com, a bit of The Godfather, and a bit of a hundred other things. It’s a surprisingly diverse package of conversion features that make you start to believe their claim to be “the original all-in-one conversion optimization suite”, despite the incredibly low price point.

Despite the hundred new tools that have popped up since Lucky Orange hit the market, Theresa Baiocco still swears by the original:

No testing program is complete without analyzing how users behave on the site. Optimizers all have their favorite tools for gathering this data, and while the newest and hottest kid on the block is Hotjar, I still like using my old go-to: Lucky Orange. Starting at just $10/month, Lucky Orange gives you visitor recordings, conversion funnel reports, form analytics, polls, chat, and heat maps of clicks, scroll depth, and mouse movements – all in one place.

12. Adobe Analytics

Analytics platform Adobe Analytics site overview.

Adobe Analytics is a big data analysis tool that helps CMOs understand the performance of their businesses across all digital channels. It enables real-time web, mobile and social analytics across online channels, and data integration with offline and third-party sources.

In other words, Adobe Analytics is a $100k+ per year, enterprise-level analytics tool that has some serious firepower. Here’s what David Williams of ASOS.com had to say about it:

After a thorough review of the market, we chose Adobe Analytics to satisfy our current and future analytics and optimization needs. We needed a solution that could scale globally with our business, improve productivity, and offer out-of-the box integration with our key partners to deliver more value from our existing investments. Adobe’s constant pace of innovation continues to deliver value for our business, and live stream (the event firehose) is the latest capability that opens up exciting opportunities for how we engage with customers.

AB Testing Tools Conclusion

Well that’s that: 20 of the most recommended AB testing tools from a diverse collection of the web’s leading CRO experts.

Have you used any of these tools before? Do you have a favorite that wasn’t included? We’d love to hear your thoughts in the comments.

And if you are looking for a quick way to calculate how a conversion lift could increase your bottom line, be sure to check out our Optimization Calculator.
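
If you’d rather sanity-check the arithmetic by hand first: monthly revenue is visitors × conversion rate × average order value, so a lift is worth the difference between the before and after figures. A quick sketch with made-up numbers:

```python
# Back-of-the-envelope value of a conversion lift. All numbers are invented.
visitors = 50_000      # monthly visitors
aov = 80.00            # average order value ($)
baseline_cr = 0.020    # 2.0% conversion rate today
lifted_cr = 0.023      # 2.3% after a winning test

baseline_rev = visitors * baseline_cr * aov  # $80,000 / month
lifted_rev = visitors * lifted_cr * aov      # $92,000 / month
print(f"Monthly gain: ${lifted_rev - baseline_rev:,.0f}")  # Monthly gain: $12,000
```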
