Why Attribution Alone Isn't Enough
Marketing attribution tells you a useful but incomplete story. It tracks which touchpoints a customer interacted with and assigns credit based on a model. But it can't answer the most important question: Did those ads actually cause the conversion?
Consider this scenario: A user visits your website, browses products, leaves, and then sees a retargeting ad the next day. They click the ad and purchase. Last-click attribution gives 100% credit to retargeting.
But what if that user was already planning to buy? What if they would have come back and purchased regardless of the retargeting ad? Attribution can't distinguish between conversions caused by ads and conversions that would have happened anyway.
This distinction matters enormously. If 60% of your retargeting conversions are non-incremental (would have happened without the ad), your actual ROAS on that channel is dramatically lower than reported.
Incrementality testing provides the answer by using controlled experiments to isolate the true causal impact of advertising.
How Incrementality Testing Works
The concept borrows from scientific experimentation. You split your audience into two groups:
- Test group: Sees your ads (business as usual)
- Control group: Does NOT see your ads (or sees a placebo)
After the test period, you compare conversion rates between the two groups. The difference represents the incremental lift—conversions directly caused by your advertising.
The Math
Incremental lift = Test group conversion rate - Control group conversion rate
Incrementality rate = (Test conversions - Control conversions) / Test conversions
Example:
- Test group: 1,000 people, 50 conversions (5.0% conversion rate)
- Control group: 1,000 people, 30 conversions (3.0% conversion rate)
- Incremental lift: 2.0 percentage points
- Incremental conversions: 20 out of 50
- Incrementality rate: 40% (meaning 40% of attributed conversions are truly incremental)
In this example, the ad channel is driving real value—but 60% of conversions credited to it would have happened anyway. Your true ROAS is less than half what your attribution model suggests.
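The arithmetic above can be sketched in a few lines. This is a minimal illustration of the formulas in "The Math" using the same example numbers; the function name is mine, not from any library.

```python
def incrementality_metrics(test_conv, test_n, ctrl_conv, ctrl_n):
    """Return absolute lift and incrementality rate for a holdout test."""
    test_rate = test_conv / test_n
    ctrl_rate = ctrl_conv / ctrl_n
    lift = test_rate - ctrl_rate                              # absolute lift
    # Scale control conversions to the test group's size before subtracting,
    # so unequal splits (e.g. 90/10) are handled correctly.
    incremental = test_conv - ctrl_conv * (test_n / ctrl_n)
    incrementality_rate = incremental / test_conv
    return lift, incrementality_rate

lift, rate = incrementality_metrics(50, 1000, 30, 1000)
print(f"Lift: {lift:.1%}, incrementality rate: {rate:.0%}")
# With equal group sizes this matches the example: 2.0-point lift, 40% incremental
```

The same function works for uneven splits because the control count is rescaled to the test group's size before computing incremental conversions.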
Types of Incrementality Tests
1. Geo-Based Holdout Tests
How it works: Select matched geographic regions. Run ads in test regions and suppress ads in control regions. Compare conversion rates between the two.
Example setup:
- Test markets: Dallas, Phoenix, Atlanta (ads running normally)
- Control markets: Houston, Denver, Charlotte (ads turned off)
- Duration: 3-4 weeks
- Measurement: Compare conversion rates, revenue, and other KPIs between test and control markets
Advantages:
- Doesn't require user-level tracking
- Privacy-compliant (no individual-level data needed)
- Works for any channel, including offline
- Easy to explain to stakeholders
Limitations:
- Requires sufficient conversion volume in each market
- Geographic differences can introduce noise
- You lose some conversions in control markets during the test
- Harder to match markets perfectly
Best practices:
- Match markets by historical conversion rates, demographics, and seasonality
- Use at least 3-4 markets per group for statistical reliability
- Account for spillover effects (someone in a control market might see an ad while traveling)
- Run during stable periods (avoid major holidays or promotions)
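Market matching can start as simply as comparing historical conversion trends. The sketch below picks the candidate control market whose weekly conversion history correlates most strongly with the test market's; all market names and numbers are invented for illustration.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly conversion counts per market
test_market = [100, 110, 120, 130]          # e.g. Dallas
candidates = {
    "Houston": [90, 100, 108, 118],         # similar upward trend
    "Denver":  [130, 95, 140, 90],          # noisy, dissimilar trend
}

best = max(candidates, key=lambda m: pearson(test_market, candidates[m]))
```

In practice you would match on more than one series (demographics, seasonality), but a correlation screen on historical conversions is a reasonable first filter.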
2. Audience-Based Holdout Tests
How it works: Within a platform, randomly split your target audience. Show ads to the test group and suppress ads for the control group.
Example setup on Meta:
- Create a test campaign with a 90/10 split
- 90% of the audience sees your ads (test group)
- 10% sees nothing or a PSA (control group)
- Compare conversion rates between groups
Advantages:
- More precise than geo-based (same geography, same time)
- Easier to achieve statistical significance
- Can test specific campaigns or ad sets
Limitations:
- Requires platform support for holdout groups
- Only measures incrementality within that platform
- Can be contaminated by cross-device exposure
- Some platforms charge for impressions in the control group
3. Platform Conversion Lift Studies
How it works: Major ad platforms offer built-in tools for incrementality measurement.
Meta Conversion Lift:
- Randomly assigns users to test and control groups
- Measures incremental conversions attributed to Meta ads
- Requires minimum spend thresholds (typically $5,000+ per test)
- Results available in 2-4 weeks
Google Conversion Lift:
- Available for YouTube and Display campaigns
- Uses geo-based or user-based holdouts
- Requires partnership with Google team for setup
- Free to run but requires minimum spend levels
Advantages:
- Professional-grade methodology
- Built into the platform (easy setup)
- Free (no additional cost beyond ad spend)
Limitations:
- Platform-controlled (potential bias toward showing positive results)
- Minimum spend and volume requirements
- Limited customization options
- Only measures that specific platform's incrementality
4. Ghost Ads / PSA Tests
How it works: Instead of completely hiding ads from the control group, you show them a different ad (a public service announcement or unrelated ad). This controls for the effect of seeing any ad, isolating the impact of your specific creative/offer.
Advantages:
- Controls for the "ad exposure effect" (some people convert simply because they're reminded of the brand)
- More rigorous than simple holdout tests
- Better for measuring creative impact
Limitations:
- More complex to set up
- Requires creating and serving placebo ads
- May still have some control group contamination
5. Switchback Tests
How it works: Alternate ads on and off in the same market over multiple time periods: ads run one week, off the next, on again, and so on. The pattern of conversion changes reveals incremental impact.
Advantages:
- Each market serves as its own control
- Accounts for market-specific factors
- Good for channels with quick impact (like paid search)
Limitations:
- Requires longer time periods
- Doesn't work for channels with delayed conversion impact
- Harder to analyze due to carryover effects
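In its simplest form, a switchback analysis compares average conversions in on-weeks against off-weeks. The sketch below assumes the easiest case, no carryover between periods, and uses invented weekly numbers.

```python
from statistics import mean

# Hypothetical alternating schedule and weekly conversion counts for one market
schedule    = ["on", "off", "on", "off", "on", "off"]
conversions = [120,  100,   118,  98,    125,  102]

on_weeks  = [c for s, c in zip(schedule, conversions) if s == "on"]
off_weeks = [c for s, c in zip(schedule, conversions) if s == "off"]

# Average incremental conversions per week while ads are running
weekly_lift = mean(on_weeks) - mean(off_weeks)
```

Real switchback analyses need to model carryover (last week's ads still driving this week's conversions), which is why the technique is listed as harder to analyze.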
Running Your First Incrementality Test
Step 1: Choose What to Test
Start with the channel or campaign where you have the most uncertainty or the highest spend. Common first tests:
- Retargeting campaigns — Often the most over-attributed channel
- Branded search — Are you paying for clicks you'd get organically?
- Largest prospecting channel — Validate your biggest investment
Step 2: Define Success Metrics
Before running the test, define what you'll measure:
- Primary metric: Conversions or revenue (must be measurable independent of ad platform)
- Secondary metrics: Website visits, lead quality, average order value
- Confidence level: Typically 90-95% statistical significance
Step 3: Size Your Test
Use a power analysis to determine how long the test needs to run and how large your test/control groups need to be. Key inputs:
- Baseline conversion rate — Your current conversion rate
- Minimum detectable effect — The smallest lift you want to be able to detect (typically 10-20%)
- Statistical significance level — Usually 95% (p < 0.05)
- Power — Usually 80% (probability of detecting a real effect)
Online calculators like Evan Miller's A/B test calculator can help. As a rule of thumb:
- 2-4 weeks for most tests
- 500+ conversions per group for reliable results
- 90/10 or 80/20 split if you can't afford to suppress ads for half your audience
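The power analysis in Step 3 can be approximated with the standard normal-approximation formula for comparing two proportions. This is a back-of-envelope sketch, not a substitute for a proper calculator; the function and its defaults (two-sided 95% significance, 80% power) are my own framing of the inputs listed above.

```python
from math import ceil, sqrt

def sample_size_per_group(p1, relative_mde, z_alpha=1.96, z_beta=0.8416):
    """Users needed per group to detect a relative lift of `relative_mde`
    over baseline rate `p1`. z_alpha = 1.96 (two-sided 95% significance),
    z_beta = 0.8416 (80% power)."""
    p2 = p1 * (1 + relative_mde)          # expected test-group rate
    p_bar = (p1 + p2) / 2                 # pooled rate under H0
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# 3% baseline conversion rate, 20% relative minimum detectable effect
n = sample_size_per_group(0.03, 0.20)     # roughly 14,000 users per group
```

Note how quickly the requirement grows: halving the minimum detectable effect roughly quadruples the sample size, which is why small lifts on low-volume channels are expensive to test.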
Step 4: Run the Test
During the test period:
- Don't change anything else — Keep creative, targeting, and budgets constant
- Monitor daily for any anomalies but don't end the test early
- Track your measurement independently — Use your CRM/order system, not the ad platform, to count conversions
Step 5: Analyze Results
After the test concludes:
- Calculate the conversion rate for test and control groups
- Determine the incremental lift (test rate minus control rate)
- Calculate the incrementality rate (what percentage of attributed conversions are truly incremental)
- Calculate incremental ROAS — Revenue from incremental conversions divided by ad spend
- Assess statistical significance — Is the result reliable or could it be random noise?
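One common way to do the significance check in Step 5 is a two-proportion z-test on your independently tracked conversion counts. The sketch below applies it to the 50-vs-30 example from earlier; the helper function is mine, not a library API.

```python
from math import erfc, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))              # two-sided via normal tail
    return z, p_value

z, p = two_proportion_z(50, 1000, 30, 1000)       # p < 0.05: significant at 95%
```

If the p-value exceeds your chosen threshold, treat the measured lift as inconclusive rather than zero: the honest next step is usually a longer test or a larger split, not a budget decision.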
Step 6: Act on the Results
The results should directly inform your budget allocation:
High incrementality (70%+):
- The channel is driving real value
- Consider increasing investment
- Attribution models are roughly aligned with reality
Moderate incrementality:
- The channel provides some value but less than attributed
- Adjust your ROAS targets to reflect true incremental performance
- Look for ways to improve incrementality (better targeting, creative)
Low incrementality:
- Most attributed conversions would happen anyway
- Reduce spend and reallocate to higher-incrementality channels
- Consider whether the brand awareness value justifies continued investment
Incrementality Benchmarks by Channel
Based on published studies and industry data, here are typical incrementality rates:
| Channel | Typical Incrementality Rate | Notes |
|---------|---------------------------|-------|
| Prospecting display | 20-40% | Many impressions reach unlikely converters |
| Social prospecting | 30-50% | Better targeting improves incrementality |
| Paid search (non-brand) | 50-70% | High intent means more incremental conversions |
| Paid search (brand) | 10-30% | Many would have found you via organic search |
| Retargeting | 10-30% | Users were already considering purchase |
| Email marketing | 40-60% | Highly dependent on list quality and offer |
| Connected TV | 30-50% | Growing channel with moderate incrementality |
Important caveat: These are generalizations. Your actual incrementality will vary based on your brand awareness, product category, competition, and creative quality.
Building an Ongoing Incrementality Program
Incrementality testing shouldn't be a one-time exercise. Build a regular testing cadence:
Quarterly Testing Calendar
Q1: Test your largest paid social channel
Q2: Test branded search campaigns
Q3: Test retargeting/remarketing campaigns
Q4: Test emerging or experimental channels
Annual Review
Once a year, synthesize all incrementality test results to:
- Recalibrate your attribution models based on actual incrementality data
- Adjust ROAS targets per channel to reflect incremental performance
- Reallocate budget toward higher-incrementality channels
- Identify channels that need creative or targeting improvements
Combining Incrementality with Other Measurement
The most accurate measurement combines incrementality testing with other approaches:
- Use multi-touch attribution for day-to-day optimization
- Use incrementality testing to calibrate attribution models quarterly
- Use marketing mix modeling for top-down budget allocation
- When all three methods agree, you can act with high confidence
Incrementality testing takes effort, but it provides something no other measurement method can: proof of causation. In a world of increasingly noisy attribution data, knowing what your ads actually cause—versus what they merely correlate with—is one of the most valuable insights a marketer can have.