Attribution · 18 min read · 8 chapters

The Complete Guide to Marketing Attribution for Ecommerce

Why platform-reported ROAS is wrong, how holdout testing works, and how to find true incremental value per channel.

Cresva Team

Chapter 1: The Attribution Crisis

Marketing attribution is broken. Not slightly off: fundamentally, structurally broken. The numbers your platforms report are systematically inflated, and the entire industry allocates billions of dollars based on data that, on average, overclaims by a meaningful margin.

At a glance:

  • Average overclaim (cross-platform): material
  • Meta inflation (avg across verticals): higher
  • Google inflation (including branded): lower
  • TikTok inflation (most variance): highest

Here's the core problem: every ad platform is both the seller of advertising AND the measurer of advertising effectiveness. Meta tells you Meta works great. Google tells you Google works great. TikTok tells you TikTok works great. And because they all use different attribution methodologies (view-through windows, click windows, self-attributed conversions), you can add up all the platform-reported conversions and get a number far higher than your actual total conversions.

We call this the “sum problem.” If Meta claims 1,000 conversions, Google claims 800, and TikTok claims 400, that's 2,200 total. But your Shopify shows 1,500 actual orders. Someone is wrong. In reality, everyone is wrong: they're all overcounting, just by different amounts.
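The sum problem is easy to sketch in code; the numbers below are the illustrative ones from this example, not real benchmarks:

```python
# Each platform's self-attributed conversions vs. actual orders in Shopify.
platform_claims = {"Meta": 1000, "Google": 800, "TikTok": 400}
actual_orders = 1500  # ground truth from the order system

claimed_total = sum(platform_claims.values())  # 2200
overclaim = claimed_total - actual_orders      # 700 "phantom" conversions
overclaim_rate = overclaim / actual_orders     # platforms claim ~47% more than exists

print(f"Claimed: {claimed_total}, actual: {actual_orders}, "
      f"overclaim rate: {overclaim_rate:.0%}")
```

The individual platform numbers can't tell you who is overcounting by how much; only the gap between the claimed total and the order system's total is observable without a holdout test.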

The platforms that sell you ads are the same ones measuring whether those ads work. This fundamental conflict of interest means every ROAS number you see is inflated. The question isn't whether it's wrong; it's how wrong.

Chapter 2: Why Platforms Lie

“Lie” is strong. The platforms aren't deliberately fabricating numbers. They're using attribution methodologies that systematically favor themselves. There are four primary mechanisms:

View-Through Attribution

Meta counts a conversion if someone saw your ad and purchased within the attribution window, even if they never clicked, never engaged, and would have purchased anyway. If someone scrolls past your ad at 2am and buys from a Google search at noon, Meta claims that conversion.

Impact: High, the largest driver of Meta's overclaim

Multi-Platform Double Counting

A customer sees a Meta ad, clicks a Google ad, and buys. Both platforms claim the full conversion. Neither reports 0.5 conversions. The same sale is counted twice.

Impact: Medium, affects a meaningful share of conversions

Organic Cannibalization

Your most loyal customers were going to buy anyway. But they happened to see an ad or click a branded search result on the way to your site. The platform claims that sale as ad-driven.

Impact: High, especially for branded search campaigns

Algorithmic Attribution Windows

Platforms use different attribution windows (1-day, 7-day, 28-day) and default to the most generous. Longer windows capture more coincidental correlations, not causal relationships.

Impact: Medium, inflates depending on window choice
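A minimal sketch of how window choice alone changes the count, using hypothetical exposure and purchase timestamps: each longer window sweeps in purchases ever more distant from the ad exposure.

```python
from datetime import datetime, timedelta

# Hypothetical event log: when each customer last saw the ad, and when they bought.
events = [
    ("cust_a", datetime(2024, 5, 1), datetime(2024, 5, 1)),   # bought same day
    ("cust_b", datetime(2024, 5, 1), datetime(2024, 5, 5)),   # bought 4 days later
    ("cust_c", datetime(2024, 5, 1), datetime(2024, 5, 20)),  # bought 19 days later
]

def attributed(events, window_days):
    """Count purchases that fall inside the attribution window after exposure."""
    return sum(
        1 for _, seen, bought in events
        if timedelta(0) <= bought - seen <= timedelta(days=window_days)
    )

for days in (1, 7, 28):
    print(f"{days}-day window: {attributed(events, days)} conversions")
# 1-day claims 1 conversion, 7-day claims 2, 28-day claims 3.
```

The mechanism is purely definitional: nothing about the causal effect of the ad changed between the three runs, only the counting rule.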

The iOS 14.5 factor

Apple's ATT framework made tracking harder, but it didn't fix attribution; it made platforms more creative about claiming conversions. Modeled conversions, probabilistic matching, and broadened attribution windows mean the overclaim problem got worse post-iOS 14.5, not better. Platforms now “estimate” conversions they can't directly track, adding another layer of inflation.

Chapter 3: Attribution Models Compared

Before diving into solutions, you need to understand the landscape. There are four main attribution approaches, each with distinct tradeoffs. The industry is shifting from simpler models toward incrementality-based approaches.

Attribution model comparison: last-click

  • How it works: 100% of credit goes to the last touchpoint before conversion.
  • Strengths: Simple, easy to implement, no ambiguity.
  • Weaknesses: Ignores discovery channels; heavily biases toward branded search and retargeting.
  • Our verdict: Materially undervalues awareness and consideration. Will lead you to over-invest in bottom-funnel.

The ideal approach combines methods: use MMM for strategic quarterly allocation, incrementality testing for validating channel effectiveness, and corrected MTA for daily optimization. No single model is sufficient on its own.

What Parker does

Parker uses a hybrid approach, running continuous incrementality calibration against platform-reported data, applying correction factors per channel, and feeding corrected numbers to Felix's forecasting models. The result: attribution numbers you can trust for budget decisions.

Chapter 4: Overclaim by Platform

Not all platforms overclaim equally. Based on holdout testing across a portfolio of ecommerce brands on Cresva, here are the typical inflation rates:

Platform overclaim calculator

See how much your platform is likely inflating ROAS. For example: a reported ROAS of 4.2x with a 28% overclaim implies a true ROAS of roughly 3.0x.

Illustrative example of how correction factors work. Actual overclaim varies by vertical, audience, and campaign type; run a holdout test to measure your own.
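A sketch of the arithmetic behind this example, assuming “overclaim” is defined as the share of reported value that is not incremental (the 4.2x and 28% figures are the illustrative ones above):

```python
def true_roas(reported_roas, overclaim_share):
    """Back out incremental ROAS, treating `overclaim_share` as the fraction
    of reported conversions that the ads did not actually cause."""
    return reported_roas * (1 - overclaim_share)

# Illustrative figures: reported 4.2x with 28% overclaim -> ~3.0x true.
print(round(true_roas(4.2, 0.28), 1))  # 3.0
```

If your holdout test instead reports overclaim as a multiple of true ROAS, divide rather than multiply; the definition matters more than the formula.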

| Platform | Relative Overclaim | Primary Driver | Worst Category |
| --- | --- | --- | --- |
| Meta Ads | Higher | View-through attribution | Retargeting campaigns |
| Google Ads | Lower | Branded search cannibalization | Brand campaigns |
| TikTok Ads | Highest | View-through + broad attribution | Awareness campaigns |
| Pinterest Ads | Material | View-through windows | Home & lifestyle |
| Snap Ads | Higher | View attribution defaults | Younger demographics |
TikTok tends to overclaim most because its content format (autoplay video) generates view-through attribution even when users aren't paying attention. Meta follows, driven primarily by view-through and organic cannibalization. Google overclaims least, but branded search cannibalization means the true incremental value of Google Brand campaigns is often near zero.

Chapter 5: Holdout Testing, the Gold Standard

The most reliable way to measure true attribution is to stop showing ads to a subset of your audience and measure the difference. This is holdout testing, the gold standard of incrementality measurement.

  1. Define your holdout

    Select 10-20% of your audience (by geo, cohort, or random split) to receive zero ads from the channel you're testing.

  2. Run for 2-4 weeks

    The test needs enough time to capture full purchase cycles. For higher-AOV products, run longer.

  3. Measure the delta

    Compare conversion rates between the exposed group and holdout group. The difference is your true incremental lift.

  4. Calculate true ROAS

    Incremental revenue (exposed - holdout) ÷ ad spend = true incremental ROAS. This is always lower than platform-reported.

  5. Apply correction factor

    Platform-reported ROAS ÷ true ROAS = your correction factor. Apply this to all future platform data.

Budget consideration

Holdout testing means deliberately not showing ads to some potential customers. For a brand spending $100K/month on Meta, a 15% holdout means ~$15K of “foregone” impressions for 3 weeks. The short-term cost is real, but the long-term value of accurate attribution data saves multiples of that amount in misallocated spend.

Chapter 6: Building Your Attribution Model

You don't need a data science team to build a reliable attribution model. Here's the practical framework:

  1. Step 1: Baseline

    Run holdout tests on your top 2-3 channels to establish correction factors. Start with your biggest spend channels, the overclaim there costs the most money.

  2. Step 2: Correct

    Apply correction factors to all platform-reported data. Divide each platform's reported ROAS by its measured correction factor to get the de-biased number.

  3. Step 3: Unify

    Create a single source of truth combining corrected platform data with Shopify/revenue data. This is your de-biased view.

  4. Step 4: Iterate

    Re-run holdout tests quarterly. Overclaim rates change with audience saturation, creative mix, and platform algorithm updates.

The key insight: you don't need perfect attribution. You need attribution that's directionally correct enough to make better allocation decisions. Even a rough correction factor (simply knowing that Meta overclaims meaningfully) materially improves your budget decisions compared to trusting raw platform numbers.
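Steps 2 and 3 can be sketched as a simple de-biasing pass. The reported ROAS values and correction factors below are hypothetical; the factors follow the Chapter 5 definition (reported ROAS ÷ true ROAS), so correction means dividing:

```python
# Hypothetical platform-reported ROAS and holdout-derived correction factors.
reported = {"Meta": 4.5, "Google": 5.1, "TikTok": 2.9}
correction = {"Meta": 1.7, "Google": 1.3, "TikTok": 1.9}

# De-biased view: divide each reported ROAS by its correction factor.
corrected = {ch: round(roas / correction[ch], 2) for ch, roas in reported.items()}
print(corrected)  # {'Meta': 2.65, 'Google': 3.92, 'TikTok': 1.53}
```

Joining this dictionary against Shopify revenue per channel gives the single source of truth from Step 3; re-running the holdout tests quarterly (Step 4) just refreshes the `correction` values.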

Chapter 7: Platform-Specific Correction Factors

Based on holdout testing across a portfolio of ecommerce brands, here are the correction factors by platform and campaign type. Apply these to platform-reported ROAS to estimate true incremental ROAS.

| Platform | Campaign Type | Correction Direction | Pattern |
| --- | --- | --- | --- |
| Meta | Prospecting (broad) | Modest haircut | Reported ROAS overstates true value |
| Meta | Retargeting | Large haircut | Significant share would have bought anyway |
| Meta | Advantage+ Shopping | Modest haircut | Reported ROAS overstates true value |
| Google | Non-brand Search | Small haircut | Closer to true incremental |
| Google | Branded Search | Large haircut | Most customers would have found you |
| Google | Performance Max | Modest haircut | Reported ROAS overstates true value |
| TikTok | Spark Ads | Modest haircut | View-through inflation |
| TikTok | In-Feed Video | Modest haircut | View-through inflation |
The most surprising pattern: branded search typically has very low true incrementality. Those high-ROAS branded campaigns? Most of those customers would have found you anyway. This is the single biggest misallocation we see across ecommerce brands: over-investing in branded search because the reported ROAS looks attractive.

Chapter 8: Automating Attribution with Parker

Everything in this guide is what Parker, Cresva's attribution agent, executes automatically. Parker continuously monitors platform-reported data, applies correction factors derived from ongoing holdout testing, and feeds de-biased numbers to Felix (forecasting) and Sam (strategy).

What Parker does, continuously

  • Monitors platform-reported ROAS across Meta, Google, TikTok, and Pinterest

  • Applies and updates correction factors based on ongoing holdout calibration

  • Detects when overclaim rates change (seasonal shifts, algorithm updates, audience saturation)

  • Feeds corrected numbers to Felix for forecasting and Sam for budget allocation

  • Alerts you when a channel's true incremental ROAS drops below your target

  • Generates attribution reports comparing platform-reported vs corrected performance

The result: you make budget decisions based on corrected, not inflated, platform numbers. Every ROAS figure you see in Cresva has been corrected for platform overclaim, giving you the true incremental picture.

Parker runs this entire methodology 24/7 on your data. Holdout test orchestration, correction-factor updates, and platform-honesty checks all happen continuously, so accurate attribution feeds every other decision in the system.

Written by the Cresva Team. Questions? Email us.