
Marketing Incrementality Testing: A Complete Guide to Measuring What Matters

Published

June 2, 2025

In today's complex marketing landscape, understanding the true impact of your marketing efforts has never been more challenging or more important. Traditional attribution models often paint an incomplete and sometimes misleading picture of campaign performance. That's where incrementality testing comes in.

What is Incrementality in Marketing?

At its heart, incrementality testing answers one fundamental question: Would this conversion have happened anyway, without my marketing?

It's a simple question, but the answer can completely change how you view your marketing performance. Think about it: How many of your customers would have purchased even if they never saw your ad? How many would have visited your website regardless of that email campaign?

Incrementality is defined as the lift in desired outcomes that wouldn't have occurred without a specific marketing intervention. In other words, it's the true impact of your marketing. We can think of it as a simple equation:

Incrementality = (Results with marketing) - (Results without marketing)

This sounds straightforward, but the tricky part is figuring out what would have happened without your marketing—and that's what incrementality testing helps us determine.
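If it helps to see that equation in code form, here's a tiny Python sketch with made-up numbers (the function name and figures are purely illustrative):

```python
# Toy illustration of the incrementality equation.
def incremental_conversions(results_with_marketing: int,
                            results_without_marketing: int) -> int:
    """True impact = results with marketing minus the counterfactual."""
    return results_with_marketing - results_without_marketing

# Hypothetical example: 1,200 conversions with the campaign running,
# versus an estimated 1,000 had it not run.
print(incremental_conversions(1200, 1000))  # 200 incremental conversions
```

The hard part, of course, isn't the subtraction; it's credibly estimating that second number.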

Correlation vs. Causation

We've all heard this one before, but it's worth repeating: Correlation is not causation! Just because someone saw your ad and then made a purchase doesn't mean the ad caused the purchase.

Here's a classic example: Ice cream sales and drownings both increase in summer, but ice cream doesn't cause drownings—the hot weather influences both! In marketing, we face this challenge all the time. Did your retargeting ad cause that sale, or was that person already planning to buy anyway? Traditional attribution gives credit to your retargeting ad, while incrementality testing helps reveal the truth.

Marketing Has an Attribution Problem

The Multi-Touch Reality

Today's customer journey is anything but simple. Before making a purchase, customers might see your search ad, open your email, view a social media post, click on a display ad, visit your store... the list goes on. Each of these touchpoints plays some role in the final conversion.

But here's the big question: How much credit should each touchpoint get? This is what we call the attribution problem, and it's one of the biggest challenges in marketing measurement.

The Credit-Claiming Game

A customer often interacts with your brand multiple times before converting. Their journey could look something like this:

  • Day 1: Person sees a Meta ad, clicks, but doesn't convert.
  • Day 3: Person does some comparison shopping and clicks on a Google ad, but doesn't convert, though you capture their email address.
  • Day 5: Person clicks on an email and converts with an offer.

[Related: Get Katie’s latest analysis on Meta and Google CPMs to optimize your campaigns.]

In traditional attribution, each of these channels might claim credit for that conversion. Your email platform shows the sale in its dashboard. Facebook claims the conversion. Google takes credit too. Each platform is essentially saying, “Look at the great results I delivered!”

But they can't all be responsible for the same sale, right? This creates the attribution problem. When everyone claims credit, how do you know who really deserves it?

Several Attribution Models Attempt to Solve the Problem

Over the years, marketers have developed various attribution models to address this problem:

  • Last-click gives all credit to the final touchpoint before conversion—it is simple but deeply flawed.
  • First-click does the opposite, crediting the touchpoint that started the journey.
  • Linear attribution spreads credit equally.
  • Time decay gives more weight to more recent touchpoints.

These models give you some idea of what's working, but they're still just rule-based guesses and don’t take into account channels that aren’t click-based. They don't actually measure the causal impact of each touchpoint; they just distribute credit based on assumptions. That's where incrementality testing comes in: It measures actual impact through experiments rather than relying on arbitrary rules.
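To make the differences between these rule-based models concrete, here's a toy Python sketch of how each one would split credit for a single conversion (it assumes each touchpoint appears only once in the journey, and all names are illustrative):

```python
# How four rule-based attribution models split credit for one conversion.
# Each returns {touchpoint: share of credit}; shares always sum to 1.0.
def last_click(touchpoints):
    credit = {t: 0.0 for t in touchpoints}
    credit[touchpoints[-1]] = 1.0  # everything goes to the final touch
    return credit

def first_click(touchpoints):
    credit = {t: 0.0 for t in touchpoints}
    credit[touchpoints[0]] = 1.0   # everything goes to the first touch
    return credit

def linear(touchpoints):
    share = 1.0 / len(touchpoints)  # equal split
    return {t: share for t in touchpoints}

def time_decay(touchpoints):
    # Weight doubles with each step closer to the conversion.
    weights = [2.0 ** i for i in range(len(touchpoints))]
    total = sum(weights)
    return {t: w / total for t, w in zip(touchpoints, weights)}

journey = ["meta_ad", "google_ad", "email"]
print(last_click(journey))  # all credit to "email"
print(time_decay(journey))  # "email" weighted most, "meta_ad" least
```

Notice that every model produces a confident-looking answer from the same journey, yet none of them measured anything; the credit split is baked into the rule you picked.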

The "Would Have Happened Anyway" Problem

Here's another critical issue: Many of your conversions would have happened regardless of your marketing. Think about your most loyal customers, who might buy from you whether or not they see your ads. Yet your marketing platforms happily take credit for these conversions!

Traditional measurement can't distinguish between customers who would have converted anyway and those who needed your marketing to convince them. Incrementality testing can help you see which marketing efforts are actually creating new conversions versus which ones are just along for the ride.

Last-Click Limitations

Many companies still rely on last-click attribution, which is like giving all the credit to the last runner in a relay race. Sure, they crossed the finish line, but could they have done it without the other runners?

Last-click typically overvalues bottom-of-funnel channels like retargeting or branded search, while completely ignoring the impact of awareness campaigns or content marketing. It's a bit like only measuring the last piece of a puzzle and ignoring all the other pieces that made the completion possible.

Why Incrementality Testing?

Beyond Attribution Models

So why should we move beyond attribution models to incrementality testing? Attribution models are fundamentally backward-looking—they take events that already happened and assign credit after the fact. Incrementality testing is forward-looking and uses scientific experiments to determine what actually causes conversions.

It's the difference between guessing what would have happened versus measuring what actually happens under controlled conditions. Attribution is like a courtroom where every channel claims credit after the fact. Incrementality testing is like a scientific lab where we set up experiments to determine what actually causes what.

The Business Value

Incrementality testing delivers three big business benefits:

  1. Better Budget Allocation: When you know which channels are truly driving incremental conversions, you can shift money toward what's working and away from what's not.
  2. True ROAS Measurement: Instead of inflated ROAS figures where multiple channels claim credit for the same conversion, you'll see the genuine return on your spending.
  3. Focus on What Works: You can focus your strategy on what actually moves the needle. Incrementality testing often reveals surprising insights: channels you thought were performing well might not be, while underappreciated channels might be driving significant incremental value.

[Related: Learn how to define, choose, and optimize your CAC and ROAS metrics.]

Without incrementality testing, you're essentially flying blind, making decisions based on incomplete or misleading information.

The Incrementality Shift

Let me give you a real-world example of the incrementality shift. In traditional attribution, retargeting campaigns often look amazing, showing ROAS of 10X or even higher! But when measured through incrementality testing, many of these campaigns reveal much lower true impact—maybe 1.5X or even negative in some cases.

Why? Because they're targeting people who were likely to convert anyway! Conversely, upper-funnel prospecting campaigns might look mediocre in attribution but show strong incremental lift when properly tested.

This shift in perspective can completely change where you allocate your budget and how you evaluate success. What looks best isn't always what works best.

Common Surprising Findings

When companies start implementing incrementality testing, they often encounter some surprising findings:

  1. Retargeting: Typically shows lower incremental impact than attribution metrics suggest
  2. Branded Search: Often gets too much credit—many people searching for your brand would have found you anyway
  3. Awareness Campaigns: Frequently deliver more incremental value than they're credited for in traditional models

These revelations can feel uncomfortable at first, especially if you've been optimizing toward metrics that suddenly look less meaningful! However, embracing these insights ultimately leads to better marketing decisions and improved results.

Methodologies & Approaches

Testing Overview

There's no one-size-fits-all approach to incrementality testing; you can select the right methodology based on the channels you're running and each platform's specific capabilities:

  • Digital Advertising: Randomized experiments (PSA/Ghost ads)
  • Email Marketing: Audience holdouts (RCT implementation)
  • Paid Search: Geo testing
  • Social Media: Platform lift studies (RCT-based)
  • Traditional Media: Match market experiments

As we explore each method, think about which of your channels would benefit most from incrementality testing and what approaches align with their technical capabilities.

Randomized Experiments

Randomized experiments are considered the gold standard of incrementality testing. The concept is simple: You randomly divide your audience into two groups, a test group that sees your marketing and a control group that doesn't.

Because the assignment is random, the only difference between the groups should be exposure to your marketing. Any difference in conversion rates can then be attributed to your marketing efforts.

This randomized approach can be implemented in different ways across channels, from audience holdouts in email to conversion lift studies in social media. The core principle remains the same though: random assignment creating comparable groups.

PSA/Ghost Ads

PSA tests or ghost ads are a specialized form of randomized control trial that's popular in digital advertising. Instead of simply not showing ads to your control group, you show them a public service announcement or a non-branded ad.

This approach has a key advantage: Both groups are still shown some type of advertising, which helps control for the mere presence of an ad. For example, Meta's Conversion Lift tool implements this type of test automatically for you. The platform randomly holds out a portion of your target audience, shows them alternative content, and measures the difference in conversion rates.

Geographic Testing

Geographic testing is exactly what it sounds like: You run your marketing in some geographic areas but not others, then compare the results. For example, you might run TV ads in Phoenix and Denver, but not in similar cities like Portland and Sacramento. Then you compare sales lift between these markets.

This approach works well for marketing that's difficult to randomize at the user level, like outdoor advertising, radio, or TV. The key challenge is selecting comparable geographic areas; you want markets that would perform similarly if there were no marketing intervention. That's why we often look at historical data to identify markets that have behaved similarly in the past.

Match Market Tests

Match market tests are a refined version of geographic testing where we carefully pair test and control markets based on their similarities. Think of it as finding “twin” markets that have historically performed very similarly.

For instance, you might pair Denver with Sacramento and Phoenix with Portland based on population size, demographics, historical sales patterns, and other factors. By comparing highly similar markets, you increase your confidence that any differences in performance are due to your marketing, not other factors.

Audience Holdout Tests

Audience holdout tests are one of the most practical implementations of randomized experiments and a great place to start. The idea is simple: You reserve a small portion of your audience—maybe 10-20%—and don't show them your marketing.

Then you compare how this “holdout” group performs compared to those who did see your marketing. Many platforms now make this easy. For example, you can create a 10% holdout in your email marketing platform by randomly excluding some subscribers from a campaign.
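If your platform doesn't offer a built-in holdout feature, the assignment itself is easy to do yourself. Here's a hedged Python sketch using hash-based bucketing, which keeps each subscriber in the same group across every send (the subscriber IDs and campaign salt are hypothetical):

```python
# Deterministic 10% holdout: hash each subscriber ID so the same person
# always lands in the same group, send after send.
import hashlib

def assign_group(subscriber_id: str, holdout_pct: float = 0.10,
                 salt: str = "spring-campaign") -> str:
    """Assign a subscriber to 'holdout' or 'test' based on a stable hash."""
    digest = hashlib.sha256(f"{salt}:{subscriber_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000
    return "holdout" if bucket < holdout_pct * 10_000 else "test"

subscribers = [f"user_{i}" for i in range(10_000)]
holdout = [s for s in subscribers if assign_group(s) == "holdout"]
print(f"{len(holdout) / len(subscribers):.1%} held out")  # roughly 10%
```

The salt matters: change it per campaign and you get a fresh random split, keep it fixed and you get a stable long-term holdout for measuring cumulative impact.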

Pre/Post Testing Approaches

Pre/post testing is the simplest approach but also the least rigorous. Here, you compare performance before, during, and after your marketing intervention. For example, you might look at sales for the three weeks before your TV campaign, the three weeks during, and the three weeks after.

While this approach is straightforward, it comes with significant limitations. Namely, it doesn't control for other factors that might have changed during your campaign period, like seasonality, competitor activities, or economic conditions. I recommend pre/post testing only when other approaches aren't feasible, and always with a healthy dose of caution when interpreting the results.

Channel vs. Audience vs. Campaign Type Testing

There are different dimensions of incrementality testing you can explore:

  • Channel incrementality measures the unique impact of different marketing platforms or channels (e.g., Meta vs. Criteo)
  • Audience incrementality tests different audience segments (e.g., remarketing vs. prospecting)

You can also test various audience segmentation approaches:

  • Email engaged vs. unengaged
  • Prospecting score
  • Buyer propensity score
  • Time since user has taken action (e.g., last 7d vs. > 14d ago)
  • Received direct mail vs. no direct mail

Both dimensions are valuable to test, and they answer different strategic questions about where and how to focus your marketing efforts.

Simple Approach Examples

The good news is that many platforms now offer built-in tools that make incrementality testing much easier than it used to be:

  • Meta: RCT
  • Display: Ghost Ads
  • Search: Geo Testing
  • Email: Holdout Testing
  • OOH: Match Market

You don't need sophisticated custom technology to get started. These platform tools, though imperfect, are a great place to begin your incrementality journey.

Interpreting Results

Key Metrics to Measure

When running incrementality tests, there are three key metrics you'll want to focus on:

  1. Incremental lift: The percentage improvement in your test group compared to your control group. For example, if your test group converts at 3% and your control at 2%, that's a 50% incremental lift. (Sample goal: 50% incremental lift)
  2. Incremental conversions: The actual number of extra conversions generated by your marketing. (Sample goal: 100 incremental conversions)
  3. Incremental ROAS: Your return on ad spend based on true incremental revenue, not attributed revenue. (Sample goal: 1.45X incremental ROAS)

These metrics give you the true picture of your marketing's impact, rather than the potentially inflated view that traditional attribution might provide.
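Here's a quick Python sketch showing how all three metrics fall out of raw test and control counts (every number here is hypothetical):

```python
# Deriving lift, incremental conversions, and incremental ROAS from a test.
test_users, control_users = 50_000, 50_000
test_convs, control_convs = 1_500, 1_000      # 3% vs. 2% conversion rates
spend, revenue_per_conv = 25_000.0, 72.50     # hypothetical campaign figures

test_rate = test_convs / test_users
control_rate = control_convs / control_users

incremental_lift = (test_rate - control_rate) / control_rate   # 50%
incremental_convs = (test_rate - control_rate) * test_users    # 500
incremental_roas = incremental_convs * revenue_per_conv / spend

print(f"lift={incremental_lift:.0%}, "
      f"incremental conversions={incremental_convs:.0f}, "
      f"iROAS={incremental_roas:.2f}X")
```

Note that incremental ROAS is computed only on the 500 conversions your marketing actually caused, not the 1,500 the platform would happily attribute to itself.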

Understanding Statistical Significance

One of the most important concepts when interpreting test results is statistical significance. In simple terms, this tells you whether your observed lift is likely to be real or could just be random chance.

Think of it like rolling dice. If you roll a six once in just a few rolls, that doesn't tell you much—it could simply be luck. But if you consistently roll more sixes than expected across hundreds of rolls, you'd suspect the dice might be loaded.

Similarly, if your test group performs 5% better than your control group in a small sample, that might be due to your marketing, or it might just be random variation. Statistical significance gives you confidence that your results reflect actual impact.

Sample Size Matters

When it comes to incrementality testing, size matters! The larger your test and control groups, the more reliable your results will be. With small samples, random chance can have a big impact—like how flipping a coin 10 times might give you 7 heads, but flipping it 10,000 times will get you much closer to 50% heads.

In marketing terms, if you only have 100 people in each of your test and control groups, a few random conversions can dramatically skew your results. As a general rule, aim for at least a few thousand people in each group if possible.
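To see why sample size matters so much, here's a rough significance check using a standard two-proportion z-test, built from the Python standard library only (this is a sketch, not a substitute for your platform's own lift-study statistics):

```python
# Same 3% vs. 2% conversion rates, two different sample sizes: only the
# larger test gives you confidence the lift is real.
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

small = two_proportion_p_value(30, 1_000, 20, 1_000)
large = two_proportion_p_value(300, 10_000, 200, 10_000)
print(f"n=1,000 per group:  p = {small:.3f}")   # inconclusive
print(f"n=10,000 per group: p = {large:.6f}")   # significant at the 0.05 level
```

The observed lift is identical in both cases; only the sample size changed. That's the coin-flip intuition above, expressed as a p-value.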

Common Pitfalls to Avoid

Let's talk about some common pitfalls to avoid when interpreting your incrementality test results:

  • Stopping tests too early: Give your tests enough time to collect sufficient data before drawing conclusions
  • Ignoring external factors: Things like seasonality, competitor actions, or market changes can impact your results
  • Conflating correlation with causation: Just because two things happen together doesn't mean one caused the other
  • Cherry-picking favorable results: It's tempting to focus only on the positives, but honest assessment of all results leads to better decisions

Being aware of these pitfalls will help you interpret your results more accurately and make better marketing decisions.

From Insights to Action

The whole point of incrementality testing is to drive better marketing decisions. Here's a simple framework for turning your test insights into action:

  1. Identify high-incremental channels: These are your marketing workhorses.
  2. Shift budget toward what works: Gradually move spending from low-incremental to high-incremental channels.
  3. Test new hypotheses: Maybe certain creative approaches or audience combinations will deliver even better results.
  4. Rinse and repeat: Create a virtuous cycle of continuous testing and improvement.

Remember: Incrementality testing isn't a one-and-done activity, but an ongoing process that helps you refine and improve your marketing over time.

Getting Started

Ready to embark on your incrementality testing journey? Here are some practical next steps:

  1. Identify one area of your marketing where you suspect traditional metrics might be misleading.
  2. Choose a simple methodology like a holdout test to start with.
  3. Set clear metrics for what success looks like.
  4. Learn from your results and iterate.

Your first test doesn't need to be perfect; the important thing is to start measuring what really matters. The insights you gain will likely more than pay for the effort involved.

Ready to learn more about incrementality testing and how to unlock growth through better measurement? Contact Katie at katie.freiberg@rightsideup.co or visit rightsideup.com/contact.

Katie Freiberg is a growth marketer with over 12 years of experience leading teams and building best-in-class marketing strategies. Most recently, she was an operating partner at TSG Consumer where she advised their portfolio of D2C companies, including well-known brands like Corepower Yoga, Backcountry, VICI Collection, and Rough Country. Prior to that, her experience included ThirdLove, MachineZone, Thumbtack, One Kings Lane, and more. In her free time, you can find her playing ice hockey or working on a woodworking project.

Let's talk growth

Get in touch
