
Marketing Mix Modeling

Marketing Mix Modeling (MMM) is a statistical technique that uses historical data to measure the contribution of each marketing channel to revenue, while controlling for external factors like seasonality, pricing, and competitor activity. Unlike multi-touch attribution (which tracks individual user journeys), MMM operates at the aggregate level, fitting a regression model to weekly or monthly data on spend by channel against revenue. MMM is enjoying a renaissance because it doesn't require user-level tracking: iOS 14, cookie deprecation, and GDPR have broken digital attribution, while MMM never needed cookies in the first place. Procter & Gamble and Coca-Cola have used MMM for 50+ years; Meta, Google, and modern startups are scrambling to rebuild it.

Also known as: MMM, Media Mix Modeling, Marketing Econometrics, Top-Down Attribution

The Trap

The trap is treating MMM as a replacement for digital attribution rather than a complement. MMM is great at high-level budget allocation (should we spend more on TV or YouTube?) but terrible at granular optimization (which keyword bid should I raise?). Companies that bet entirely on MMM lose campaign-level optimization speed; companies that bet entirely on MTA are flying blind on offline and brand. The right answer is a triangulation framework: MMM for strategic budget allocation, MTA for tactical campaign optimization, incrementality tests as the tiebreaker.

What to Do

Get an MMM in production within 6 months; it doesn't have to be perfect. Build the simplest version: 24+ months of weekly data on spend by channel, plus revenue and key external variables (seasonality, holidays, pricing changes). Open-source tools like Meta's Robyn or Google's LightweightMMM let you build this without an army of econometricians. Use it quarterly for strategic budget allocation. Pair with monthly incrementality tests on your top 3 channels to calibrate.
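As a rough sketch of what that minimal dataset looks like, here is a hypothetical weekly panel assembled with pandas. Every column name, the date range, and the holiday-week list are illustrative assumptions, not the schema of Robyn or LightweightMMM:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_weeks = 104  # 24 months of weekly observations

# Placeholder values; in practice these come from ad platforms and finance.
df = pd.DataFrame({
    "week": pd.date_range("2023-01-02", periods=n_weeks, freq="W-MON"),
    "tv_spend": rng.uniform(50_000, 200_000, n_weeks),
    "search_spend": rng.uniform(20_000, 80_000, n_weeks),
    "social_spend": rng.uniform(10_000, 60_000, n_weeks),
    "avg_price": rng.normal(9.99, 0.5, n_weeks),
    "revenue": rng.uniform(1_000_000, 2_000_000, n_weeks),
})

# External controls: seasonality via ISO week-of-year plus a holiday flag
# (the flagged weeks are an assumed Black Friday / Christmas window).
df["week_of_year"] = df["week"].dt.isocalendar().week.astype(int)
df["is_holiday_week"] = df["week_of_year"].isin([47, 48, 51, 52]).astype(int)
```

The point of the structure: one row per week, one column per channel's spend, plus the external variables the model must control for before it can credit any channel.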

Formula

Revenue = β₀ + β₁(TV) + β₂(Search) + β₃(Social) + ... + β_n(Other Factors) + ε
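A minimal sketch of fitting that equation by ordinary least squares on synthetic weekly data (numpy only; the coefficients and the seasonality proxy are made up, and a production MMM would apply adstock and saturation transforms to spend first):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 104  # two years of weekly observations

# Synthetic normalized spend by channel plus one external control.
tv = rng.uniform(0, 1, n)
search = rng.uniform(0, 1, n)
social = rng.uniform(0, 1, n)
season = np.sin(2 * np.pi * np.arange(n) / 52)  # crude annual-cycle proxy

# Ground-truth coefficients the regression should recover.
revenue = (10 + 3.0 * tv + 5.0 * search + 1.5 * social
           + 2.0 * season + rng.normal(0, 0.1, n))

# Design matrix: intercept column, then one column per regressor.
X = np.column_stack([np.ones(n), tv, search, social, season])
beta, *_ = np.linalg.lstsq(X, revenue, rcond=None)
# beta is approximately [10, 3.0, 5.0, 1.5, 2.0]: each slope is that
# channel's modeled revenue contribution per unit spend, holding the rest fixed.
```

This is the whole idea of MMM in five lines: the β coefficients, not user-level tracking, are what attribute revenue to channels.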

In Practice

Procter & Gamble has used MMM since the 1960s and continues to invest heavily in it. In 2017, P&G publicly announced cutting $200M in 'wasteful' digital ad spend that MMM had identified as low-incrementality, much of it spent on display ads with high attributed conversions but low actual lift. Sales did not decline. The cut was a public turning point: a major advertiser declaring that digital attribution had been overstating impact, and the long-trusted aggregate models had been right. Other CPG giants (Unilever, Kraft Heinz) followed with similar cuts based on their own MMM analyses.

Pro Tips

1. MMM exposes diminishing returns in a way attribution can't. The model fits a saturation curve to each channel, so you'll see exactly where the next $1M of TV spend produces dramatically less revenue than the previous $1M. This is the single most useful output of MMM and the one most companies ignore.

2. Calibrate MMM with incrementality tests at least quarterly. Run a controlled holdout (turn off one channel in one geo for 4-8 weeks) and check whether your MMM coefficient predicted the revenue impact correctly. If not, retune. MMM is only as good as its calibration.

3. Don't trust an MMM with less than 24 months of data. Seasonal effects, price changes, and macro factors require multiple cycles to disentangle. With 12 months of data the model can't tell what's seasonal vs. structural.
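The diminishing-returns point in the first tip can be illustrated with a Hill-type saturation curve, a common functional form in MMM tools. The parameters below are hypothetical, chosen so the numbers come out in round millions:

```python
import numpy as np

def hill_response(spend, top, half_sat, shape=1.0):
    """Hill saturation curve: revenue approaches `top` as spend grows,
    reaching top/2 at spend == half_sat."""
    return top * spend**shape / (half_sat**shape + spend**shape)

# Hypothetical TV channel: $60M maximum attainable revenue,
# half-saturation at $7M of spend.
spend = np.array([5e6, 10e6, 15e6])
rev = hill_response(spend, top=60e6, half_sat=7e6)

avg_roas = rev / spend                          # falls as spend rises
marginal_roas = np.diff(rev) / np.diff(spend)   # what the NEXT dollar returns
# marginal_roas drops well below avg_roas as the curve flattens, so
# average ROAS overstates what incremental budget will actually earn.
```

Reading the curve's marginal slope, not the blended average, is how MMM tells you where to stop adding budget to a channel.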

Myth vs Reality

Myth

"MMM is for big enterprises only; startups can't afford it."

Reality

Open-source MMM tools (Meta's Robyn, Google's LightweightMMM, PyMC-Marketing) are free. A startup with one analyst and 18 months of clean data can run a credible MMM in 6 weeks. The bigger barrier is data hygiene, not cost.

Myth

"MMM and MTA contradict each other; pick one."

Reality

They answer different questions. MMM tells you 'how to allocate $10M across channels for next quarter.' MTA tells you 'which campaigns within paid search are working.' Best-in-class marketers use BOTH and reconcile them with incrementality tests. Treating them as competing is a category error.


Knowledge Check

Your MMM model shows your TV channel has a saturation curve where $5M of spend drives $25M revenue (5x ROAS), $10M drives $35M revenue (3.5x ROAS), and $15M drives $40M revenue (2.7x ROAS). What's the marginal ROAS of going from $10M to $15M of TV spend?
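A worked answer, since this is the trap most people fall into: marginal ROAS is incremental revenue over incremental spend, not total over total.

```python
# Revenue at the two spend levels from the scenario above.
rev_at_10m, rev_at_15m = 35e6, 40e6
spend_delta = 15e6 - 10e6

marginal_roas = (rev_at_15m - rev_at_10m) / spend_delta
print(marginal_roas)  # 1.0: the last $5M returns only $5M, roughly break-even
```

So despite a blended 2.7x ROAS at $15M, the incremental $5M is barely paying for itself, which is exactly the saturation signal MMM exists to surface.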

Industry benchmarks

Is your number good?

Calibrate against real-world tiers. Use these ranges as targets, not absolutes.

MMM Model Quality (R-squared on holdout data)

Out-of-sample R² for a 24-month MMM with weekly granularity

Production-Ready: > 0.85

Useful with Caveats: 0.70 - 0.85

Directional Only: 0.55 - 0.70

Don't Trust Allocations: < 0.55

Source: Meta Robyn / Google LightweightMMM Documentation
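"Out-of-sample" here means fitting on early weeks and scoring on held-out later weeks. A numpy sketch on synthetic data, with an assumed 75/25 time-based split (the split ratio and noise level are illustrative):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Standard R²: 1 minus residual variance over total variance."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(7)
n = 104
X = np.column_stack([np.ones(n), rng.uniform(0, 1, (n, 3))])
y = X @ np.array([10.0, 3.0, 5.0, 1.5]) + rng.normal(0, 0.2, n)

# Time-based split: never shuffle weekly data, or you leak the future
# into the training set and inflate the holdout score.
train, test = slice(0, 78), slice(78, None)
beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
holdout_r2 = r_squared(y[test], X[test] @ beta)
```

Score the model only on weeks it never saw; an in-sample R² will always look flattering and tells you nothing about which benchmark tier you are actually in.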

Real-world cases

Companies that lived this.

Verified narratives with the numbers that prove (or break) the concept.


Procter & Gamble

2017-2018

success

P&G has used MMM since the 1960s. In 2017, after running fresh MMM analyses across digital channels, CMO Marc Pritchard publicly announced cutting $200M in 'wasteful' digital advertising, primarily display and ad-tech spend that the MMM had identified as low-incrementality despite high attributed conversions. The market expected a sales hit. Sales did not decline. The cut was a watershed moment that forced the entire digital advertising industry to confront the gap between attributed performance and actual incrementality.

Digital Ad Cut: $200M+

Sales Impact: Negligible / positive

MMM Vintage: Used since 1960s

Industry Reaction: Forced reckoning

When your MMM disagrees with your attribution dashboard, MMM is usually right at the strategic level. Attribution measures who 'touched' a sale; MMM measures what actually CAUSED a sale.


Nielsen MMM Customers (CPG Aggregate)

2020-2023

success

Nielsen analyzed MMM results across hundreds of CPG advertisers between 2020-2023. They found that, on average, brands using MMM identified 15-25% of their marketing spend as 'wasteful' (below break-even marginal ROAS) once saturation curves were properly modeled. The most common over-saturation: programmatic display ads (often funded heavily because of inflated last-click attribution) and bottom-funnel paid search at scale (where the next dollar mostly cannibalizes organic search). Reallocation following MMM analysis typically lifted marketing-driven revenue by 10-20% on flat budgets.

Avg Wasteful Spend Identified: 15-25%

Most Over-Saturated Channel: Programmatic Display

Typical Revenue Lift Post-Reallocation: 10-20% on flat budget

MMM consistently finds that 1 in 5 marketing dollars is producing near-zero incremental revenue. The channels at fault are almost always the ones with the most attractive last-click numbers, because attribution and incrementality diverge most where signal-to-noise is worst.



Beyond the concept

Turn Marketing Mix Modeling into a live operating decision.

Use this concept as the framing layer, then move into a diagnostic if it maps directly to a current bottleneck.

Typical response time: 24h · No retainer required
