Schaefer — Consumer Research Framework

The Audience
Assumption Test.

Most brands are spending against a buyer they invented. Not out of negligence — out of the natural tendency to build targeting around who seems most likely to buy, rather than who actually does. The Audience Assumption Test is a four-question diagnostic that surfaces the gap.

Tags: Audience Research · Consumer Insight · F&B · CPG · Layer 4 · Diagnostic

The Problem

Every brand has an assumed audience.
Almost none have tested it.

Assumed audiences aren't guesses made carelessly. They come from somewhere — from the brand's origin story, from what the product looks like, from what the category has always done, from what a platform algorithm suggested. They feel reasonable. They pass the common-sense check. And they're often wrong in ways that cost millions before anyone notices.

Source 1 — The founding story
"We built this for busy parents who care about nutrition."
Where it comes from: the customer the founder originally had in mind, based on personal experience or early feedback rather than systematic research. Often reflects who the brand wanted to serve, not who bought first.
Verdict: partially true. Never verified at scale.

Source 2 — The category playbook
"Protein brands target fitness-oriented males 25–40."
Where it comes from: inherited from what other brands in the space do. Feels safe because competitors use it. But competitive targeting convergence is one of the most common reasons CPAs rise — everyone fishing the same small pond.
Verdict: often wrong. Validated only by repetition, not results.

Source 3 — The platform suggestion
"The platform's lookalike audience from our best customers."
Where it comes from: built from behavioral signals — who clicked, who bought — but not from motivational data. If the original audience had wrong-fit buyers, the lookalike compounds the error. Garbage in, garbage audience out.
Verdict: mathematically precise. Directionally unreliable.

The core problem isn't that brands target the wrong person. It's that they've never specifically tested whether their audience assumption is correct — so they optimize harder and harder against a target that's slightly or substantially off. The Audience Assumption Test doesn't require new research to start. It requires honest answers to four questions about the evidence you already have.

How Assumptions Fail

There are four patterns.
Each one wastes money differently.

Wrong audience assumptions don't all look the same. Understanding which failure pattern you're in determines what the fix looks like. Some require research. Some require creative testing. Some require a full segmentation rebuild.

Pattern 1
The Wrong Demo

The demographic profile is wrong. Age, income, gender, or household composition doesn't match who actually converts. Media dollars are reaching the right category of person on paper — but the actual buyer is a different demographic entirely. MeatWorks is the canonical example: the brand targeted 30–45-year-old suburban dads, while the real buyers were retirees 65 and older cooking for one or two. Same product motivation — reward and comfort — completely different person. The creative, the channel mix, and the price framing were all built for the wrong human.

Pattern 2
The Wrong Motivator

The demographic is approximately right, but the assumed motivation is wrong. The brand believes buyers purchase for performance — so ads lead with protein content, clinical benefits, and macros. The actual buyer purchases for reward and afternoon ritual. Same person, wrong emotional register. The ads reach them. The message doesn't land. CTR is mediocre. Conversion is weak. The team tests new creative — still performance-led — and wonders why nothing moves the needle. The demographic data looks fine. The motivator data is missing entirely.

Pattern 3
The Invisible Segment

The assumed audience exists and converts — but there's a second, higher-value segment that's never been identified because it doesn't match the brand's mental model of its own customer. The brand is successfully reaching and converting Segment A while unknowingly leaving Segment B entirely unaddressed. Segment B often has higher LTV, stronger loyalty, and greater word-of-mouth potential. Research reveals it. Assumption never would — because you can't look for what you don't know exists.

Pattern 4
The Correct Assumption

The assumed audience is broadly correct — but the brand doesn't know it, so it underinvests in serving that segment as well as it could. Confirmation through research isn't just a negative check. It frees up budget that was being hedged against other assumptions, unlocks confidence to deepen creative investment in proven segments, and generates the motivational data needed to build creative that actually converts the confirmed segment at a higher rate than generic messaging was achieving.

The Test

Four questions.
Run them before the next campaign.

The Audience Assumption Test isn't a research project. It's a structured audit of the evidence you already have — or should have — about who your buyer actually is. Each question has three possible answers: pass, warn, or fail. Two or more fails means you're spending against an assumption, not a confirmed audience.

1
Can you name your buyer's dominant purchase motivation — not their demographic?
The answer must be a psychological driver, not a product feature or an audience label.
What you're testing

Whether your audience definition contains motivational data — the why behind the purchase — or whether it's purely demographic and behavioral. A real audience definition names the emotional or identity driver that triggers the purchase.

A demographic label ("women 30–45 who care about health") is not a motivation. A motivation is: "reward after a hard day," "identity validation as someone who takes their nutrition seriously," or "affordable luxury they look forward to."

How to answer

Write down your answer in one sentence without using age, gender, income, or behavioral descriptors. If the sentence collapses without those things — if you can't describe why your buyer buys without describing who they are — you don't have a motivation. You have a demographic.

Fails if answer sounds like: "Our buyer is a health-conscious woman aged 30–45 with a household income over $75K."
Passes if answer sounds like: "Our buyer purchases for reward and daily ritual — a small indulgence they feel good about."
Scoring
✓ Pass
Motivation is specific, emotionally grounded, and sourced from consumer research or direct buyer interviews.
⚑ Warn
Motivation exists but is inferred from category convention or product features, not validated buyer data.
✗ Fail
No motivation defined. Audience is demographic only. Creative brief has no emotional anchor.
2
Do your highest-LTV buyers match your assumed target audience?
Compare your top 20% of buyers by lifetime value against the profile you're actively targeting.
What you're testing

Whether the audience generating the most long-term revenue for your brand looks like the audience you're spending media dollars to reach. It's common to target the most obvious buyer while the highest-value buyer is someone else entirely — or the same demographic but for completely different reasons that require different messaging.

High-LTV buyers are the Kingpin segment. If they don't match the assumed audience, you're optimizing for acquisition of the wrong buyer type.

How to answer

Pull your customer data and segment by LTV. Look at your top 20%. Cross-reference with demographic and behavioral data. Ask: if you showed this profile to your media team, would they recognize it as your target? If not — there's a mismatch between who you're acquiring and who you should be acquiring more of.

Red flag signal: "Our top LTV customers are 65+, but we're running all our ads targeting 30–45-year-olds."
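The LTV cross-check above can be sketched in a few lines of pandas. This is a minimal illustration, not part of the framework: the table, column names, and the 30–45 target band are all hypothetical stand-ins for whatever your customer export actually contains.

```python
import pandas as pd

# Hypothetical customer export — one row per customer with lifetime value
# and a demographic attribute. Column names are illustrative only.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "ltv":         [120, 95, 880, 60, 940, 70, 1010, 55, 105, 90],
    "age":         [34, 41, 68, 29, 71, 38, 66, 33, 44, 36],
})

# Top 20% of customers by lifetime value.
cutoff = customers["ltv"].quantile(0.80)
top = customers[customers["ltv"] >= cutoff]

# Assumed target is 30–45: what share of the top-LTV cohort actually fits it?
in_target = top["age"].between(30, 45).mean()
print(f"Top-LTV median age: {top['age'].median():.1f}")
print(f"Share of top-LTV cohort inside the assumed 30–45 target: {in_target:.0%}")
```

If that final share is low — as in this toy data, where the highest-value buyers are in their late 60s — you have the red-flag mismatch in miniature: acquisition spend pointed at one demographic while revenue concentrates in another.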
Scoring
✓ Pass
Top LTV segment matches or closely overlaps the assumed target audience.
⚑ Warn
Top LTV segment partially matches — some overlap but meaningful divergence on key attributes.
✗ Fail
Top LTV segment clearly doesn't match the assumed target. Budget is acquiring the wrong buyers.
3
Have you asked your buyers — directly — why they buy?
Platform data, purchase history, and analytics tell you what happened. Only direct research tells you why.
What you're testing

Whether the motivational data in your audience definition comes from actual consumer research — surveys, interviews, the Replacement Model — or from inference. Inferred motivation is a hypothesis. Validated motivation is a fact.

Most brands answer "yes, we know why they buy" and mean "we believe we know why they buy based on what seems logical." Those are different things. One is research. One is assumption wearing a confidence costume.

How to answer

The bar is specific: have you run surveys or interviews with current buyers that directly asked about purchase motivation — not product satisfaction, not net promoter score, but the emotional and psychological reason they chose you? Can you quote a buyer's exact language back?

The bar for pass: "In interviews, buyers consistently said 'it's my little treat for myself' and 'I don't feel guilty about it.' That language is in our briefs."
Scoring
✓ Pass
Direct consumer research on motivation exists. Buyer language is documented and used in creative briefs.
⚑ Warn
Satisfaction surveys or NPS data exists, but no direct motivation research. Motivation is inferred from indirect signals.
✗ Fail
No direct buyer research on motivation. All audience definition is inferred from analytics, category convention, or founder intuition.
4
Would each of your segments produce a completely distinct creative brief?
If two segments would respond to the same ad, they're not real segments — they're the same assumption with two names.
What you're testing

Whether your audience segmentation is real — meaning each segment has a distinct motivator that requires distinct creative — or cosmetic, where the segments share the same underlying assumption and just have different demographic labels attached.

This is the Brief Test from Segmentation 101 applied directly to your current audience structure. It's the fastest way to identify whether your segments are real or manufactured.

How to answer

Take each segment you're currently targeting. Write one hook for each one — the actual opening line you'd use to stop that buyer's scroll. Now read them back. If the hooks are substantially different — not just tonally different, but opening on different emotional states — your segments are real. If they sound like variations of the same message, you have one segment wearing multiple hats.

Same assumption, two labels: Seg A: "Clean fuel for your active day."  |  Seg B: "Real nutrition for real results." — Same message. One segment.
Real segments, distinct briefs: Seg A: "You made it to 3pm. This one's yours."  |  Seg B: "Built for people who don't cut corners." — Different motivators. Real segments.
Scoring
✓ Pass
Each segment produces a genuinely distinct creative brief with a different hook, copy register, and CTA frame.
⚑ Warn
Segments produce different-sounding briefs but share the same underlying motivator. Real differentiation is tonal, not motivational.
✗ Fail
Segments produce nearly identical briefs. The audience structure is cosmetic — one assumption split into demographic buckets.

Scoring guide: Four passes means your audience assumption is validated and your segmentation is real. Three or more fails means you're spending against an invented buyer — and the investment needed to fix it is a fraction of what the wrong assumption is costing you in wasted media. One or two fails, or multiple warns, means you have a directionally sound assumption that needs research to sharpen into a precise, motivationally grounded target.
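The scoring bands reduce to a small decision rule. As a minimal sketch — the function name and return strings are illustrative, not part of the framework; band boundaries follow the result cards (3–4 fails, 1–2 fails or multiple warns, 3–4 passes):

```python
from collections import Counter

def score_audience_test(answers):
    """Map the four pass/warn/fail answers to a result band.

    Bands mirror the framework's own guide: 3-4 fails means the assumption
    is not validated, 3-4 passes means it is, and everything in between is
    directionally correct but unconfirmed.
    """
    if len(answers) != 4 or not set(answers) <= {"pass", "warn", "fail"}:
        raise ValueError("expected four answers, each 'pass', 'warn', or 'fail'")
    counts = Counter(answers)
    if counts["fail"] >= 3:
        return "NOT VALIDATED — pause scaling, run motivation research first"
    if counts["pass"] >= 3:
        return "VALIDATED — scale with confidence"
    return "DIRECTIONALLY CORRECT — validate before scaling further"

# The MeatWorks worked example below scored 3 fails and 1 warn:
print(score_audience_test(["fail", "warn", "fail", "fail"]))
```

Checking fails before passes means a profile like three passes and one fail still lands in the validated band, matching the "3–4 passes" card rather than being dragged down by a single miss.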

Worked Examples

Three failure patterns.
Three real-world versions.

Each example runs the Audience Assumption Test against a real or constructed F&B brand scenario. They represent the three most common failure patterns Schaefer encounters when auditing a new client's audience strategy.

Real example — Schaefer client · Pattern 1
MeatWorks — The wrong demographic entirely
Wrong Demo
The assumed audience
Male, 30–45, suburban, high income
MeatWorks targeted the Grillmaster — a 30–45-year-old suburban dad with high household income, big backyard, and a passion for performance grilling. The product looked right for him. The category said he was the buyer. The creative was built entirely for his motivator: outdoor performance, bold protein, cookout culture.
Test result: 3 fails, 1 warn. No validated motivation (Q1). LTV data wasn't segmented by age (Q2 warn). No direct buyer research — category convention only (Q3). Creative briefs all pointed at the same grillmaster assumption (Q4).
What research revealed
Retirees 65+, cooking for 1–2
Why People Buy research uncovered the real buyer: retirees 65 and older, cooking for themselves or a partner, spending modest discretionary income on a small daily luxury. Motivator: reward and comfort — a Tier 2 WPB driver. Not performance, not outdoor culture. Warmth, indulgence, and the pleasure of something worth looking forward to.
The result: Creative rebuilt around the real buyer. Channel mix shifted. Messaging reframed from performance to reward. 8× revenue growth in 12 months on the same media budget.
Q1 — Motivation
✗ Fail
Q2 — LTV match
⚑ Warn
Q3 — Direct research
✗ Fail
Q4 — Distinct briefs
✗ Fail

The demographic assumption was entirely wrong — and it took direct consumer research to reveal it. No amount of creative testing or media optimization would have found the real buyer. The answer was in the research, not the dashboard.

Constructed example — functional beverage brand · Pattern 2
PeakDrive — The right person, entirely wrong reason
Wrong Motivator
The assumed audience
Fitness-oriented professionals, 25–40
PeakDrive, a clean-ingredient energy drink, built its entire audience strategy around performance-driven buyers. The assumed motivator: athletic output, training support, competitive edge. Creative led with "clean fuel for peak performance." Channels skewed toward fitness content and sports-adjacent placements.
Test result: 2 fails, 2 warns. Motivation defined but inferred from product attributes, not buyer research (Q1 warn). LTV data showed slightly lower retention than expected — a signal of weak emotional attachment (Q2 warn). No direct motivation research (Q3 fail). Creative briefs all pointed at performance — no motivational variation (Q4 fail).
What research revealed
Afternoon rechargers, identity-conscious
Buyer interviews revealed the dominant purchase occasion wasn't pre-workout — it was the 2–4pm window at a desk. The real motivator wasn't athletic performance. It was cognitive rescue + identity: "I'm someone who doesn't reach for a Red Bull." The buyers were health-conscious professionals choosing PeakDrive as a statement about who they were, not what they could physically achieve.
The fix: Hook shifted from "peak performance" to "still sharp at 4pm." Visual language moved from gym to office. Identity framing ("for people who've outgrown their coffee habit") replaced performance framing. CTR improved 2.8× in the first test cycle.
Q1 — Motivation
⚑ Warn
Q2 — LTV match
⚑ Warn
Q3 — Direct research
✗ Fail
Q4 — Distinct briefs
✗ Fail

The demographic was right. The motivator was wrong. This is the hardest pattern to see from the outside — because the ads are reaching the right person, which looks like success. The failure shows up in conversion rate and retention, not reach. Only buyer research surfaces the motivator gap.

Constructed example — plant-based snack brand · Pattern 3
GroundUp Bars — The high-value buyer nobody was targeting
Invisible Segment
The assumed audience
Vegan / plant-based enthusiasts, 25–38
GroundUp Bars, a clean plant-based protein bar, targeted the obvious segment: committed vegans and plant-based lifestyle adopters. The creative led with ethical sourcing, environmental values, and the identity statement of plant-based eating. The audience was real and converting — so no one questioned it.
Test result: 1 pass, 2 warns, 1 fail. Motivation defined for the assumed segment (Q1 pass). LTV data existed but wasn't segmented by buyer type (Q2 warn). Surveys existed but only asked about product satisfaction (Q3 warn). Creative briefs all centered the values segment — no other motivational territory explored (Q4 fail).
What research revealed
Flexitarians seeking clean convenience
Buyer interviews uncovered a second segment buying at higher frequency and higher AOV: flexitarians — non-vegans choosing plant-based for convenience and ingredient quality, not ethics. Motivator: clean convenience + absence of guilt — not environmental values. These buyers never saw a GroundUp ad because every ad was coded for values-driven identity. The brand was invisible to its highest-LTV segment.
The fix: New creative track built for the flexitarian: "No labels required. Just good ingredients." Ethical language removed. Convenience and taste foregrounded. New segment converted at 34% lower CPA than the values segment and showed 2.1× higher 90-day retention.
Q1 — Motivation
✓ Pass
Q2 — LTV match
⚑ Warn
Q3 — Direct research
⚑ Warn
Q4 — Distinct briefs
✗ Fail

The assumed segment was real. The invisible segment was better. This is the pattern where "it's working" becomes the enemy of "it could be working twice as well." The test revealed not a wrong audience but a missing one — and the missing one was worth more.

What to Do With the Results

The test tells you where you are.
This is what happens next.

The test result determines the starting point — not the full solution. Each outcome points toward a different type of research investment and a different timeline to resolution.

Result
3–4 Fails
AUDIENCE ASSUMPTION NOT VALIDATED

You're spending against an invented buyer. Every dollar of optimization is making a wrong target more efficiently wrong. The right move is to pause scaling and run Why People Buy research before the next budget cycle.

1
Consumer surveys to identify dominant motivators across buyer base
2
Segment interviews to capture buyer language and validate motivator clusters
3
Replacement Model deployment to surface competitive set and switching triggers
4
Full brief rebuild from WPB tier assignments before creative production
Result
1–2 Fails / Multiple Warns
ASSUMPTION DIRECTIONALLY CORRECT, NOT VALIDATED

You have a reasonable audience hypothesis that hasn't been confirmed with direct motivational research. Current performance may be acceptable — but it's running on inference, not insight. The risk is hidden: you don't know what you're missing.

1
Run buyer interviews with top 20% LTV customers to validate or refine motivator
2
Deploy Replacement Model question to confirm brand role and emotional attachment
3
Test one motivator-matched creative against current best performer before scaling
Result
3–4 Passes
ASSUMPTION VALIDATED — SCALE WITH CONFIDENCE

Your audience assumption is research-backed and your segmentation is real. The opportunity now is optimization and expansion: deeper creative investment in proven motivational territory, Kingpin scoring to identify which segment to concentrate on, and testing for invisible adjacent segments.

1
Apply Kingpin rubric to determine which validated segment to concentrate on first
2
Run Segment Creative Framework to build full motivator-matched brief per segment
3
Test for Pattern 3 (Invisible Segment) — who else is buying that you haven't targeted yet?
A note on all results
The test doesn't replace research

The Audience Assumption Test is a diagnostic — it surfaces where the gaps are and how severe they are. Even a full-pass result doesn't mean there's no work to do. It means the foundation is solid enough to build on.

The test answers the question: should I run research before scaling? Why People Buy research then answers the question: what do I actually need to know? One surfaces the gap. The other fills it.

Where This Connects

The test is the entry point
to the entire Schaefer system.

The Audience Assumption Test doesn't produce the answer. It identifies whether you have one. Every framework downstream depends on having a validated, motivationally grounded audience as its foundation.

Surfaces the gap for
The Data Layers — Layer 4

A failed test result means Layer 4 data — motivational, psychological, identity-level — is missing from your audience definition. The test tells you which layers you have. Layer 4 research is what fills the gap the test identifies.

Validates input for
Why People Buy Pyramid

WPB Pyramid mapping requires validated motivational data. A failed Q1 or Q3 means the tier assignment is an assumption, not a derived finding. The test confirms whether WPB research has been done — and whether the tier assignment it produced is based on real buyer data.

Identifies candidates for
Kingpin Strategy

Kingpin scoring requires real segments with distinct motivators. A failed Q4 means there are no real segments to score — just demographic buckets. The test is the prerequisite check before the Kingpin rubric can be applied meaningfully.

Confirms briefs for
Creative Is the New Targeting

Motivator-matched creative can only be written from validated motivational data. A failed test means the creative brief is built on assumption — which means the targeting signal the algorithm receives is also built on assumption. The test is the quality gate for the entire creative-as-targeting system.

Most direct input
Segmentation 101

Q4 of the Audience Assumption Test is the Brief Test from Segmentation 101 applied directly. A failed Q4 means your segments fail the most fundamental validity check. Segmentation 101 then provides the framework for rebuilding them correctly — starting with motivational research rather than demographic convention.

The Schaefer entry point: Most new clients come to Schaefer because performance has plateaued or CPA is climbing. In the majority of cases, the root cause isn't the media buy or the creative execution. It's an audience assumption that was never tested. The Audience Assumption Test is often the first thing we run — because it tells us exactly where the work needs to start, and whether the problem is a creative problem, a segmentation problem, or a research problem. Usually it's all three.