Diagnose Meta Ads campaign performance using Meta's actual system mechanics — Breakdown Effect, Learning Phase, Auction Overlap, Pacing, and Creative Fatigue — and produce structured, testable recommendations that judge segments by marginal efficiency rather than average CPA.
npx gooseworks install --claude # Then in your agent: /gooseworks <prompt> --skill meta-ads-analyzer
Most "Meta Ads analysis" stops at "this CPA is high, pause it." That's wrong more often than it's right. Meta's delivery system optimizes for marginal efficiency — the cost of the next conversion — not average efficiency across a snapshot. A segment with a higher average CPA is often the one keeping your overall campaign cheap. Pausing it makes things worse.
This skill diagnoses Meta campaigns the way a senior media buyer would: at the right evaluation level, accounting for learning state, separating noise from signal, and explaining why the system is making the decisions it's making before recommending any change.
Core principle: Holistic first, then drill down. Marginal over average. Dynamic over static. Every recommendation is a testable hypothesis with expected impact, not a directive.
This is the most important step. Evaluating at the wrong level is the #1 source of wrong recommendations.
| Campaign Setup | Correct Evaluation Level | Why |
|---|---|---|
| Advantage+ Campaign Budget (CBO) | Campaign level | System pools budget across ad sets — only campaign totals reflect reality |
| Automatic placements (no CBO) | Ad Set level | System pools budget across placements within the ad set |
| Multiple ads in 1 ad set | Ad Set level | System pools delivery across ads |
| Manual placements + ABO | Placement / Ad Set level | Each is independent |
Output for this phase: State the evaluation level explicitly and explain why before any metric is interpreted.
If asked "is this Meta placement underperforming?" on a CBO campaign, the answer is "wrong question — at CBO the placement-level CPA is misleading. Here's the campaign total..."
Before judging anything, check delivery state per ad set.
Learning state checklist:
- Learning (delivery less stable, CPA typically higher, results not predictive)
- Learning Limited = can't get enough events → flag as a structural issue, not a performance issue
- Significant edits that reset learning: changes to targeting, creative, the optimization event, or bid strategy, and large budget changes
Output for this phase: Per ad set, mark Active / Learning / Learning Limited. Caveat all conclusions for anything in learning. Do not recommend pausing a Learning ad set based on CPA alone.
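A rough classifier for that check, assuming the optimization-event count and last significant edit are known per ad set. Field names are illustrative; the ~50-events-in-7-days threshold is the one described in the Learning Phase notes further down.

```python
from datetime import datetime, timedelta

def learning_state(events_since_last_edit: int, last_significant_edit: datetime,
                   learning_limited: bool, now: datetime | None = None) -> str:
    """Classify an ad set as Active / Learning / Learning Limited per the checklist."""
    now = now or datetime.now()
    if learning_limited:
        return "Learning Limited"   # structural issue: not enough events, not a CPA problem
    if events_since_last_edit < 50 and now - last_significant_edit <= timedelta(days=7):
        return "Learning"           # caveat all conclusions; never pause on CPA alone
    return "Active"
```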
Run the diagnosis through these five lenses. Each one explains a different class of "weird" behavior.
The Breakdown Effect: the system shifts budget toward segments where the next conversion is cheapest, not where the average conversion is cheapest. A segment can have a high average CPA in a breakdown report and still be the right place for budget.
How to spot it: budget keeps shifting toward a segment whose average CPA looks worse in the breakdown, while the time series shows the seemingly cheaper segment's CPA rising as it absorbs more spend and the campaign total holding steady or improving.
Mandatory framing in the report: Never recommend pausing a segment based solely on higher average CPA/CPM in a breakdown report. Removing it will often raise total cost. Frame any cut as a hypothesis to test with a holdout, not an instruction.
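A small sketch of how a report can surface this, assuming daily {spend, conversions} rows per segment. It uses recent-window CPA as a crude proxy for marginal cost; that is an analyst's approximation, not how Meta's delivery system computes anything.

```python
def recent_cpa(daily: list[dict], window: int = 3) -> float | None:
    """CPA over the last `window` days -- a crude stand-in for marginal cost."""
    rows = daily[-window:]
    conv = sum(r["conversions"] for r in rows)
    return sum(r["spend"] for r in rows) / conv if conv else None

def breakdown_effect_table(segments: dict[str, list[dict]]) -> dict[str, dict]:
    """Average vs recent CPA per segment, so 'bad average, fine margin' cases stay visible."""
    out = {}
    for name, daily in segments.items():
        conv = sum(r["conversions"] for r in daily)
        out[name] = {
            "average_cpa": sum(r["spend"] for r in daily) / conv if conv else None,
            "recent_cpa": recent_cpa(daily),
        }
    return out
```

A segment whose average CPA is high but whose recent CPA is in line with the rest is exactly the case the warning above protects against.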
For each ad with sufficient impressions (~500+), check the three rankings:
| Ranking | Below Average → | Action |
|---|---|---|
| Quality Ranking | Creative is the problem | Test new creative formats / hooks |
| Engagement Rate Ranking | Hook isn't pulling | Test new opener / first 3 seconds |
| Conversion Rate Ranking | Post-click is leaking | Audit landing page (use ad-to-landing-page-auditor) |
Two below average + one average = creative refresh. All three below average = scrap and rebuild.
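The same decision table as a sketch, assuming each ranking is reported as a string that starts with "below" when below average; the 500-impression floor and the refresh/rebuild rules come from the text above.

```python
ACTIONS = {
    "quality": "Test new creative formats / hooks",
    "engagement_rate": "Test a new opener / first 3 seconds",
    "conversion_rate": "Audit the landing page (ad-to-landing-page-auditor)",
}

def ranking_diagnosis(rankings: dict[str, str], impressions: int) -> list[str]:
    """Apply the ranking table to one ad's three relevance rankings."""
    if impressions < 500:
        return ["Insufficient impressions (~500+ needed); rankings not reliable yet"]
    below = [k for k, v in rankings.items() if v.lower().startswith("below")]
    if len(below) == 3:
        return ["All three below average: scrap and rebuild"]
    notes = [ACTIONS[k] for k in below]
    if len(below) == 2:
        notes.insert(0, "Two below average: creative refresh")
    return notes
```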
Symptoms: ad sets in the same campaign chronically Learning Limited, underspending budget, or showing erratic delivery.
Causes: Overlapping audiences within the same ad account / Page mean only one of your ads enters each auction (Meta picks the highest-value one; the others are excluded — you don't bid against yourself, but the suppressed ad sets can't learn).
Action: consolidate overlapping ad sets, or pause the lower-performing overlapping ones to free up auction entries and let the remaining ad sets exit Learning Limited.
Pacing = the system smoothing budget across the day/period to capture the best opportunities. Daily snapshots will look uneven by design.
How to read it: daily under- or overspend is by design; only sustained underspend (3+ days) is a real signal.
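A minimal check for that rule; the 3-day window is from the text, while the 80% threshold is an assumption for illustration.

```python
def sustained_underspend(daily_spend: list[float], daily_budget: float,
                         days: int = 3, threshold: float = 0.8) -> bool:
    """True only when spend sits well below budget for `days` consecutive days."""
    recent = daily_spend[-days:]
    return len(recent) == days and all(s < threshold * daily_budget for s in recent)
```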
Distinguish noise from trend before recommending anything.
| Signal | Verdict |
|---|---|
| Day-to-day CPA swing within 20–30% | Normal — ignore |
| Weekend vs. weekday delta | Normal — control for it |
| Gradual change over weeks | Trend — investigate |
| Sudden ≥50% cost increase sustained 3+ days | Real problem — diagnose |
| Delivery near zero | Account/asset/policy issue — check first |
| Conv rate dropping while spend rises | Creative fatigue or LP regression |
Always check sample size. A 1-conversion difference at low volume is meaningless.
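A rough triage function built from that table (daily series, oldest first; the 30% / 50% / 3-day numbers are the ones stated above, everything else is a sketch):

```python
def classify_fluctuation(daily_cpa: list[float], daily_impressions: list[int]) -> str:
    """Separate normal noise from patterns worth diagnosing, per the signal table."""
    if sum(daily_impressions[-3:]) == 0:
        return "Near-zero delivery: check account / asset / policy status first"
    if len(daily_cpa) < 6:
        return "Not enough history: check sample size before concluding anything"
    baseline = sum(daily_cpa[:-3]) / (len(daily_cpa) - 3)
    recent = daily_cpa[-3:]
    if all(c >= 1.5 * baseline for c in recent):
        return "Sustained >=50% cost increase over 3+ days: diagnose as a real problem"
    if max(recent) <= 1.3 * baseline:
        return "Within the normal 20-30% day-to-day swing: ignore"
    return "Ambiguous: control for weekday/weekend mix and watch for a gradual trend"
```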
Before writing the report, restate every finding from Phase 3 in terms of what the system is trying to do:
"Placement A shows $10 average CPA vs Placement B's $15. Time-series shows A's CPA rising. The system is correctly shifting toward B because B's marginal CPA is now lower. Recommendation: do nothing on placements; test new creative in A to lower its marginal CPA."
If a finding can't be restated in marginal/system-mechanics terms, it's probably noise — drop it.
Use this exact structure. No deviation.
1. EXECUTIVE SUMMARY
- 2–3 sentences on overall health
- Top 1 thing to do, top 1 thing NOT to do
2. EVALUATION LEVEL
- Stated explicitly with the reason
3. LEARNING STATUS
- Per-ad-set table: Active / Learning / Learning Limited
- Caveats applied to any in-learning analysis
4. PERFORMANCE OVERVIEW
- Standardized metric naming (see table below)
- Aggregate first, then drill-down
- Compare to target where given, benchmarks otherwise
5. DIAGNOSIS
- Findings from Phase 3, each tagged to its lens
(Marginal / Relevance / Overlap / Pacing / Fluctuation)
- Each finding cites specific data
6. RECOMMENDATIONS
- Each = hypothesis + expected impact + how to test
- Marked Critical / High / Medium / Low priority
- Anything paused/scaled has a rollback plan
7. BREAKDOWN EFFECT NOTES
- Explicit callouts where average ≠ marginal
- "Do not do X" warnings if the data tempts a wrong moveThese are not style suggestions. Violating them produces wrong analysis.
Call get_recommendations first if you have live API access. If your recommendation diverges from Meta's, explicitly explain why.

Always rename raw metric names to these standardized display names in any output:
| Raw | Display |
|---|---|
| impressions | Impressions |
| reach | Reach (Accounts Center accounts) |
| frequency | Frequency |
| spend | Amount Spent |
| cpm | CPM |
| clicks | Clicks (all) |
| cpc | CPC (all) |
| ctr | CTR (all) |
| cost_per_action_type:link_click | CPC (Link Click) |
| outbound_clicks_ctr | Outbound CTR |
| actions:purchase | Purchases |
| action_values:purchase | Purchase Value |
| cost_per_action_type:purchase | Cost per Purchase |
| purchase_roas | Purchase ROAS (return on ad spend) |
| video_thruplay_watched_actions | ThruPlays |
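A lookup that mirrors the table, usable wherever metrics are printed. The flattened keys like cost_per_action_type:purchase follow this sketch's own convention for composite fields, not literal Marketing API field names.

```python
METRIC_DISPLAY_NAMES = {
    "impressions": "Impressions",
    "reach": "Reach (Accounts Center accounts)",
    "frequency": "Frequency",
    "spend": "Amount Spent",
    "cpm": "CPM",
    "clicks": "Clicks (all)",
    "cpc": "CPC (all)",
    "ctr": "CTR (all)",
    "cost_per_action_type:link_click": "CPC (Link Click)",
    "outbound_clicks_ctr": "Outbound CTR",
    "actions:purchase": "Purchases",
    "action_values:purchase": "Purchase Value",
    "cost_per_action_type:purchase": "Cost per Purchase",
    "purchase_roas": "Purchase ROAS (return on ad spend)",
    "video_thruplay_watched_actions": "ThruPlays",
}

def rename_metrics(row: dict) -> dict:
    """Rename raw keys to display names; unknown keys pass through unchanged."""
    return {METRIC_DISPLAY_NAMES.get(k, k): v for k, v in row.items()}
```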
Breakdown Effect: Often misread as Meta's system shifting budget into "underperforming" segments. In reality the system maximizes total results by optimizing for marginal efficiency. A breakdown report sliced by placement, demographic, or device shows averages, but the system optimizes for the next dollar, not the average. A segment with a high average CPA may be protecting overall campaign efficiency by preventing even higher marginal cost elsewhere.

Learning Phase: Delivery state where the system is exploring how to deliver a new or significantly edited ad set. Performance is less stable, CPA is typically higher, and results are not predictive of long-term performance. Exits after ~50 optimization events within 7 days of the last significant edit. Don't edit during learning (it resets the clock). Don't fragment into too many ad sets (each needs its own 50 events). Use realistic budgets; too small or too large gives bad signal.

Auction Overlap: When ad sets share overlapping audiences within the same ad account, only the highest-value ad from your portfolio enters each auction. The others are excluded. Symptoms: chronic Learning Limited, underspending, erratic delivery. Fix: consolidate ad sets, or pause the lower-performing overlapping ones to free up auction entries.

Pacing: The system spreads spend across the day/period to capture the best opportunities. Daily under- or overspend is by design; only sustained underspend (3+ days) is a real signal.
Creative Fatigue: Effectiveness decreases as the same audience sees the same creative repeatedly. Watch frequency (>3–4 in a 7-day window for prospecting) and conversion-rate decline while spend stays flat. Refresh creative on a rotation rather than waiting for fatigue to show in CPA.
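A sketch of that watch, assuming daily conversion-rate and spend series plus a 7-day frequency figure. The 3–4 frequency zone and the declining-conversions-on-steady-spend pattern come from the definition above; the 10% decline cutoff is an assumption.

```python
def creative_fatigue_flags(frequency_7d: float, daily_conv_rate: list[float],
                           daily_spend: list[float]) -> list[str]:
    """Return fatigue warnings for one creative; an empty list means nothing flagged."""
    flags = []
    if frequency_7d > 3.5:
        flags.append(f"7-day frequency {frequency_7d:.1f} is in the 3-4+ fatigue zone")
    if len(daily_conv_rate) >= 6 and len(daily_spend) >= 6:
        early, late = daily_conv_rate[:3], daily_conv_rate[-3:]
        declining = sum(late) / 3 < 0.9 * (sum(early) / 3)
        spend_holding = sum(daily_spend[-3:]) >= sum(daily_spend[:3])
        if declining and spend_holding:
            flags.append("Conversion rate declining while spend is flat or rising")
    return flags
```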
Fluctuation: Day-to-day CPA variation within 20–30% is normal. Weekend/weekday differences are normal. Sudden ≥50% cost increases sustained over 3+ days, near-zero delivery, or conversion-rate drops while spend rises are the only patterns worth diagnosing as "problems."
Related skills:

- ad-campaign-analyzer — Multi-platform performance review and budget reallocation. Run this first if you have multiple channels; run meta-ads-analyzer after for the Meta-specific deep dive.
- ad-to-landing-page-auditor — Always pair with this when Conversion Rate Ranking is below average.
- messaging-ab-tester — Generate variants when creative fatigue is the diagnosis; pair with ad-angle-miner for source material.
- meta-ads-campaign-builder — Architect a new campaign when the diagnosis points to "rebuild, don't fix".

Meta system-mechanics framing (Breakdown Effect, Learning Phase, Auction Overlap reference content) adapted from an MIT-licensed Meta ads analyzer project by Mathias Chu.