
Get Scientific With These Multivariate Testing Best Practices

Multivariate testing can empower marketers to discover effective content that drives KPIs, but only when they follow best practices and scientific rigor.

Multivariate testing provides a powerful tool for marketers willing to dive into experimental design and optimization best practices.

Because marketers aren’t mind readers, they need tools and methodologies to guide their approach to optimizing conversion rates for online campaigns. They might find common advice for conversion rate optimization, such as “red CTA (call to action) buttons get more clicks than blue ones,” but how do they verify whether this is true for their particular brand, audience, and use case? Furthermore, how do they know that it was the red color vs. the CTA copy that drove incremental performance?

The answer is to get scientific. By comparing test groups against control groups, marketers can gather data that confirms or rejects their hypotheses. With the data generated from this experimentation, they can move forward with a high likelihood of reaping their intended gains.

From a testing standpoint, marketers are pretty familiar with A/B testing, where two nearly identical online marketing assets have one variable changed, such as the headline copy, to see which one drives better results. However, marketers can go further with multivariate testing, which tests multiple variables and variable combinations at once. It is more complicated to set up, but it can reveal which individual elements drive performance rather than just which whole asset wins, and 53% of marketers use it.

Yet many of these marketers may not be using multivariate testing correctly, hurting their ability to reveal an optimal design. Those struggling with this issue can implement the following best practices to structure their tests more effectively and obtain quality results to guide marketing decisions.

Develop a “Learning Agenda”

Before getting started, confirm that multivariate testing is the right approach and that a simple A/B test would not better suit your needs. Patrick Robertson, our Director of Experience Management at Annalect, explains that marketers must first begin by developing a "learning agenda" to define what they hope to learn from the campaign. The agenda acts as a type of blueprint: brands establish their hypotheses, determine which variables they would like to test for which audiences, and prioritize learning objectives accordingly. This comes in handy down the road when ensuring that adjustments made to the test, such as dropping images or headlines in the case of low impression volume, emphasize the appropriate content components.

Don’t Test Every Possible Combination Just Because You Can

The first truth of multivariate testing is that marketers should exercise restraint in selecting variables. Otherwise, the number of creative permutations will add up quickly.

Say, for example, that a display ad has four possible images, three possible headlines, and two possible button colors, totaling 24 variations of the ad (4 x 3 x 2). Every variant tested splits the available audience, so 24 variations mean each ad gets about 4.2% of total traffic. A campaign ad that would normally receive 100,000 impressions now delivers roughly 4,200 impressions per variant combination.
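The arithmetic is easy to sketch. The block below uses hypothetical image, headline, and color options standing in for the example above; only the counts matter.

```python
# A minimal sketch of the traffic-split arithmetic above, using
# placeholder names for the hypothetical ad elements.
from itertools import product

images = ["img_1", "img_2", "img_3", "img_4"]
headlines = ["head_1", "head_2", "head_3"]
button_colors = ["red", "blue"]

variants = list(product(images, headlines, button_colors))
total_impressions = 100_000

share = 1 / len(variants)
print(f"{len(variants)} variants")                    # 24
print(f"{share:.1%} of traffic per variant")          # 4.2%
print(f"~{round(total_impressions * share):,} impressions per variant")  # ~4,167
```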

Each variant also needs a minimum number of impressions to form a representative sample and generate statistically significant results. The more you split your audience, the longer it takes to reach that threshold. Stop short of it, and you risk drawing conclusions that would not hold up if a single variant were rolled out on its own.

Additionally, some combinations may not make sense from a design standpoint. A red image with a red button may not read well on a mobile screen, for instance. Use good judgment to select an attainable number of variants, and set rules to eliminate specific combinations up front. When choosing trust signals for a landing page, for example, marketers should cap how many appear so the page is not too cluttered.
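Those exclusion rules can be encoded directly when enumerating combinations. The sketch below is a hypothetical illustration; the option names and the single red-on-red rule are assumptions, not a prescribed implementation.

```python
# A hedged sketch of pruning design-invalid combinations before testing.
# Option names and the exclusion rule are illustrative assumptions.
from itertools import product

images = ["red_hero", "blue_hero", "green_hero", "lifestyle_photo"]
headlines = ["head_1", "head_2", "head_3"]
buttons = ["red", "blue"]

def is_valid(image: str, headline: str, button: str) -> bool:
    # Rule from the text: a red image with a red button reads poorly.
    return not (image == "red_hero" and button == "red")

variants = [
    combo for combo in product(images, headlines, buttons)
    if is_valid(*combo)
]
print(f"{len(variants)} variants after design rules")  # 21 of 24 remain
```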

Keep Statistical Significance, Margin of Error, and Other Experimental Factors in Mind

The key to experimental design is scientific rigor. Those without grounding in rigorous experimental design may observe a preference trend between two variants when, in fact, the difference is not significant. In statistical analysis, "significance" means the results are likely to be repeatable rather than the product of randomness. In multivariate testing, a 10% lift in conversions is a generally accepted significance point.
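Whether a measured difference clears that bar can be checked directly. Below is a minimal sketch using the two-proportion z-test from statsmodels; the conversion and impression counts are made-up numbers for illustration.

```python
# A hedged sketch of a significance check between control and one variant.
# Conversion and impression counts are illustrative assumptions.
from statsmodels.stats.proportion import proportions_ztest

conversions = [100, 130]      # control, variant
impressions = [2_000, 2_000]  # sample size for each group

z_stat, p_value = proportions_ztest(conversions, impressions)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 suggests the lift is unlikely to be random noise.
```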

Additionally, statistical testing invokes the concept of confidence intervals. A 95% confidence interval, a generally accepted standard for scientific testing, means that 95% of tests will find results that fall within the stated range. So if a test found that a particular ad variation drove a 36% lift in conversions over the original version, with a +/-3% margin of error, then 95% of re-tests would find improvements between 33% and 39%.

For a 95% confidence interval and a low margin of error, your test requires a minimum sample size per variant, making constraints on the number of variants tested all the more important.
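As a rough illustration of why, the standard sample-size formula for estimating a proportion shows how quickly the required impressions add up; the 5% baseline conversion rate and one-point margin of error below are assumptions for the example.

```python
# A minimal sketch of the sample-size requirement above: impressions
# needed per variant to estimate a conversion rate within a chosen
# margin of error at 95% confidence. Baseline rate is an assumption.
from math import ceil

z = 1.96        # z-score for a 95% confidence interval
p = 0.05        # assumed baseline conversion rate
margin = 0.01   # target margin of error (+/- 1 percentage point)

n = ceil(z**2 * p * (1 - p) / margin**2)
print(f"~{n:,} impressions needed per variant")  # ~1,825
```

Multiply that per-variant requirement by 24 variants and the total audience needed grows well past what many campaigns can deliver, which is why restraint in variant selection matters.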

Generate Ideas from Data to Balance Experimental Variety With Relevance

While you don’t want to test every possible idea, you also don’t want to ignore possibilities that could impact conversion rates. To determine variations worth sampling, generate ideas from multiple data sources, including:

1) First-party audience data on segment demographics, interests, and behaviors

2) Third-party data from data providers for additional audience information such as transactional data, purchase behaviors, or industry-specific data

3) Historical performance based on previous campaigns targeting similar audiences

Start Killing Low Performers Once the Minimum Sample Size Is Reached

Your multivariate testing experiment need not end the moment you have an adequate sample, but it should end for non-performing variations. Shut down variants showing only 0-8% movement against the control group once they have reached the needed representative sample size. This shifts impressions toward higher-performing variants, allowing you to reach higher-quality results faster.
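A pruning rule like this is straightforward to sketch in code. The thresholds, variant names, and counts below are illustrative assumptions, not prescribed values.

```python
# A hedged sketch of the pruning rule above: once a variant reaches its
# minimum sample, cut it if its lift over control falls in the 0-8% band.
MIN_SAMPLE = 2_000       # minimum impressions before judging a variant
KILL_BELOW_LIFT = 0.08   # cut variants under 8% lift vs. control

control_rate = 0.050     # control group's conversion rate

variants = {
    "img_a/head_1/red":  {"impressions": 2_400, "conversions": 126},
    "img_b/head_2/blue": {"impressions": 2_100, "conversions": 132},
    "img_c/head_1/red":  {"impressions": 1_500, "conversions": 80},
}

for name, stats in variants.items():
    if stats["impressions"] < MIN_SAMPLE:
        print(f"{name}: keep running (sample still too small)")
        continue
    rate = stats["conversions"] / stats["impressions"]
    lift = rate / control_rate - 1
    verdict = "kill" if lift < KILL_BELOW_LIFT else "keep"
    print(f"{name}: {lift:+.1%} lift -> {verdict}")
```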

Break Out High-Performing Combinations for Further Testing

Once you have identified a few promising variants, restructure testing to fine-tune their elements. If a certain headline on a homepage is handily outperforming the others, come up with a fresh set of variations around that headline.

You can even arrange discrete A/B/n trials with limited experimental groups to produce more informative results in less time.

“When you learn something from an experiment, you can apply that concept to other elements of your website,” one expert group recommends. “Testing is not just about finding more revenue. It is about understanding your visitors.”

With this information in hand, your marketing team can approach multivariate testing with a higher degree of success, yielding results that help you drive conversions and meet metric goals.
