The Attribution Window Mismatch: How Standard BI Infrastructure Killed a 4.2x ROAS Channel
The Core Problem
Standard attribution infrastructure systematically undervalues high-adstock channels because reporting systems are built for transactional e-commerce (24-48 hour conversion cycles), not considered-purchase B2B sales with 30-90 day decision windows.
This isn't a strategy problem. It's a measurement architecture failure.
The Context
I worked with a veterinary equipment manufacturer selling a $1,200 automated prescription dispensing system. The competitive landscape was brutal: we were 6x the price of manual alternatives, and the average sales cycle ran 45-60 days from initial awareness to PO approval.
The Strategic Constraint:
- Target audience: Veterinary practice managers (not individual pet owners)
- Decision-making unit: 1 buyer controlling 500-2,000 units across multiple clinic locations
- LTV: $47,000 (average customer placed 3 reorders over 18 months)
- Acceptable CAC ceiling: $8,500 (5.5:1 LTV:CAC target)
The Measurement Constraint
Management's existing BI stack defaulted to 7-day attribution windows because that's what the Google Ads API and Meta CAPI returned natively. The executive dashboard was inherited from a prior B2C product line with same-day purchase behavior.
The Experiment & The Failure
We launched a $45,000 Connected TV campaign via The Trade Desk targeting veterinary practice decision-makers across 12 DMAs. Platform tracking showed:
Week 1 Performance:
- Impressions: 2.3M
- Estimated reach: 14,000 decision-makers
- Attributed revenue (7-day window): $4,200
- Dashboard ROAS: 0.09x
We killed the campaign on Day 9.
We reallocated the remaining $32,000 budget to Meta retargeting, which showed immediate "success":
- Week 1 ROAS: 2.1x
- Add-to-cart rate: 8.7%
- Purchase conversion rate: 0.4%
The $95,000 Mistake
What we didn't see in the Meta dashboard: 87% of retargeting conversions came from users who had already engaged with sales reps or requested quotes directly. We were paying $180 CPA to intercept our own organic pipeline.
Meanwhile, the CTV campaign we killed had seeded 340 demo requests that converted over the next 6 weeks. We just weren't measuring on that timescale.
The Post-Mortem: Diagnosing the Measurement Failure
The problem wasn't the channel. It was the attribution window/decay mismatch.
What the infrastructure measured:
- CTV Spend (Day 1): $45,000
- Revenue (Days 1-7): $4,200
- ROAS: 0.09x

What had actually happened:
- CTV Spend (Day 1): $45,000
Revenue attribution across decay curve:
- Week 1: $4,200
- Week 2: $18,900 (adstock decay ~72%)
- Week 3: $31,400 (decay ~58%)
- Week 4: $28,700 (decay ~48%)
- Week 5: $19,200 (decay ~35%)
- Week 6: $12,800 (decay ~22%)

Total attributed revenue: $115,200
Actual ROAS: 2.56x
Compounding the problem, successive runs of the same attribution model produced contradictory channel recommendations:

| Model Run | Facebook ROAS | Google ROAS | Recommendation |
|---|---|---|---|
| Run 1 (Monday) | 3.2x | 0.1x | Kill Google, go all-in Facebook |
| Run 2 (Tuesday) | 0.2x | 3.1x | Kill Facebook, go all-in Google |
| Run 3 (Wednesday) | 1.6x | 1.5x | Keep both roughly equal |
The Technical Fix: Bayesian MMM with Geo-Holdouts
Six months later, working with a different client in a similar position, I architected the measurement system differently:
1. Geo-Based Incrementality Design
- Split 24 comparable DMAs into treatment (CTV) vs. control (suppressed), as sketched below
- 60/40 split (a political compromise; the CFO wouldn't accept 50/50 revenue risk)
- 8-week flight to capture 85% of the historical conversion window
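A minimal sketch of how such a split can be automated, assuming a `dmas` list pre-sorted by historical conversion volume (the blocking heuristic and function name are illustrative, not the exact assignment we used):

```python
import random

def split_dmas(dmas: list[str], treatment_share: float = 0.6,
               seed: int = 42) -> tuple[list[str], list[str]]:
    """Stratified ~60/40 split: walk the volume-sorted list in blocks of 5
    and randomly assign ~3 per block to treatment, so both arms span the
    full range of market sizes."""
    rng = random.Random(seed)
    treatment, control = [], []
    for i in range(0, len(dmas), 5):
        block = dmas[i:i + 5]
        rng.shuffle(block)
        k = round(len(block) * treatment_share)
        treatment.extend(block[:k])
        control.extend(block[k:])
    return treatment, control
```

Blocking on historical volume matters here: a pure random split of only 24 DMAs can easily land all the large markets in one arm and invalidate the comparison.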
2. Custom Attribution Infrastructure
Built a Python-based MMM using PyMC3 with:
- Adstock transformation: Geometric decay with estimated half-life of 12 days (derived from historical cohort analysis)
- Saturation curves: Logistic function to model diminishing returns
- Bayesian priors: Informed by industry benchmarks for B2B CTV (typical ROAS range: 1.8x - 4.5x)
Model Structure:
```python
# Simplified representation
adstock_rate = 0.6        # 60% retention week-over-week
saturation_point = 85000  # spend level where returns diminish

transformed_spend = apply_adstock(raw_spend, decay=adstock_rate)
saturated_spend = apply_saturation(transformed_spend, alpha=saturation_point)
predicted_revenue = baseline + (coefficient * saturated_spend)
```
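The two transform helpers above are assumed, not PyMC3 built-ins. A self-contained sketch of what they might look like (all constants are illustrative):

```python
import numpy as np

def apply_adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Geometric adstock: each week's effect carries `decay` of the
    previous week's accumulated effect forward."""
    adstocked = np.zeros_like(spend, dtype=float)
    carryover = 0.0
    for t, x in enumerate(spend):
        carryover = x + decay * carryover
        adstocked[t] = carryover
    return adstocked

def apply_saturation(spend: np.ndarray, alpha: float) -> np.ndarray:
    """Rescaled logistic: maps zero spend to 0 and saturates toward 1
    as spend grows past `alpha`, modeling diminishing returns."""
    return 2.0 / (1.0 + np.exp(-spend / alpha)) - 1.0

# Illustrative 8-week flight: a single $45k burst in week 1
raw_spend = np.array([45_000, 0, 0, 0, 0, 0, 0, 0], dtype=float)
saturated = apply_saturation(apply_adstock(raw_spend, decay=0.6), alpha=85_000)
predicted_revenue = 20_000 + 150_000 * saturated  # illustrative baseline + coefficient
```

In the real model the decay and saturation parameters are not hard-coded; they get priors and are estimated jointly with the channel coefficients during sampling.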
3. Reporting Layer Rebuild
Replaced Google Analytics' 7-day default with custom BigQuery views that:
- Aggregated conversions across 90-day windows
- Attributed revenue using a position-based model (40% first-touch, 40% last-touch, 20% distributed across middle touches); see the sketch after this list
- Separated "assisted" vs "last-click" revenue to avoid double-counting retargeting cannibalization
The Results
Geo-Holdout Findings (8 weeks):
- Treatment DMAs: 127 conversions
- Control DMAs: 41 conversions (scaled to equivalent population: 68 expected)
- Incremental lift: 59 conversions attributable to CTV
Economic Analysis:
- Incremental revenue: $247,800 (59 conversions × $4,200 AOV; arithmetic reproduced in the sketch below)
- Total CTV spend: $58,000
- True incremental ROAS: 4.2x
- CAC: $983 (well below $8,500 ceiling)
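The arithmetic, as a quick sanity-check script (the population-scaled control figure is taken from the holdout design above, not recomputed here):

```python
treatment_conversions = 127
expected_without_ctv = 68   # 41 raw control conversions, scaled to treatment population
incremental = treatment_conversions - expected_without_ctv  # 59

aov = 4_200
ctv_spend = 58_000
incremental_revenue = incremental * aov   # $247,800
roas = incremental_revenue / ctv_spend    # ~4.27x
cac = ctv_spend / incremental             # ~$983
print(f"lift={incremental}  revenue=${incremental_revenue:,}  "
      f"ROAS={roas:.2f}x  CAC=${cac:,.0f}")
```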
The Counterfactual: If we'd maintained the original strategy (killing CTV, doubling down on retargeting):
- Estimated retargeting spend: $90,000
- Incremental conversions: ~12 (based on incrementality test showing 89% cannibalization)
- Incremental revenue: $50,400
- Actual ROAS: 0.56x
- Opportunity cost: $197,400 in foregone revenue
What I'd Do Differently Today
Infrastructure First:
- Build attribution windows into the BI layer from day one: default to a 90-day lookback for any product over $500 AOV (see the sketch after this list)
- Implement geo-based holdouts as standard practice for any brand awareness channel (CTV, audio, OOH)
- Use Robyn (Meta's open-source MMM) or Meridian (Google's Bayesian MMM) instead of building custom; established tooling wins stakeholder buy-in faster
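A minimal sketch of the 90-day lookback join, assuming pandas DataFrames of ad exposures and conversions keyed by account (column names are illustrative):

```python
import pandas as pd

def conversions_with_exposures(exposures: pd.DataFrame, conversions: pd.DataFrame,
                               lookback_days: int = 90) -> pd.DataFrame:
    """For each conversion, keep every exposure for the same account that
    occurred within the lookback window before the conversion.
    Both frames: columns ['account_id', 'ts'] with timestamp 'ts'."""
    merged = conversions.merge(exposures, on="account_id", suffixes=("_conv", "_exp"))
    lag = merged["ts_conv"] - merged["ts_exp"]
    in_window = (lag >= pd.Timedelta(0)) & (lag <= pd.Timedelta(days=lookback_days))
    return merged[in_window]
```

The point is that the window lives in your own warehouse logic, not in whatever default the ad platform's API happens to return.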
KPI Reframing: The right success metric wasn't "ROAS in Week 1." It was:
- Cost per qualified enterprise demo (measured at 45 days): Target <$850
- Pipeline velocity: % of CTV-exposed accounts that entered sales conversations within 60 days
Stakeholder Management: The executive dashboard issue wasn't a training problem—it was an API constraint. Platform APIs don't expose adstock-adjusted metrics, so leadership sees "failure" in Week 1 and pulls budget before the curve materializes.
Solution: Pre-commit to measurement windows in writing. "We will evaluate this channel at 8 weeks, not 8 days. Here's the statistical power calculation showing why."
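That power calculation doesn't need to be exotic. A sketch using the standard two-proportion sample-size formula (the baseline and lift figures are illustrative assumptions, not the client's actuals):

```python
from scipy.stats import norm

def required_n_per_group(p_control: float, p_treatment: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group sample size for a two-sided two-proportion z-test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p_control * (1 - p_control)
                          + p_treatment * (1 - p_treatment)) ** 0.5) ** 2
    return int(numerator / (p_treatment - p_control) ** 2) + 1

# e.g. detecting a lift from 0.5% to 0.8% conversion among reached decision-makers
print(required_n_per_group(0.005, 0.008))  # ~11,263 per arm
```

Showing leadership that the test is hopelessly underpowered at Day 9 is far more persuasive than asking them to trust the channel on faith.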
The Broader Lesson
Most marketing teams aren't failing because they pick the wrong channels. They're failing because their measurement infrastructure makes long-term channels look like short-term failures.
If your BI stack defaults to 7-day attribution, you will systematically:
- Overinvest in retargeting (cannibalization masked as performance)
- Underinvest in brand/awareness (delayed conversions appear as "no ROAS")
- Optimize for harvesting existing demand instead of creating new demand
The fix isn't better media buying. It's better measurement architecture.
Need a System Blueprint for Your Growth Architecture?
I build measurement systems that capture the true value of your marketing spend.