For paid lead-gen and participant-recruitment ads, this skill replaces vanity CPA with true CAC per qualified lead by joining ad-platform data with downstream funnel events, surfaces tracking gaps, and classifies every creative into Scale / Keep / Investigate / Cut.
```shell
npx gooseworks install --claude
# Then in your agent:
/gooseworks <prompt> --skill ad-lead-quality-analyzer
```
Meta optimizes for whatever conversion event you fire. For lead-gen and participant-recruitment campaigns that's almost always "signup" — but a signup is worthless if the lead never qualifies, never completes the requested action, or never gets paid out. The lowest-CPA campaign is often the one bringing in the worst leads.
This skill joins what the ad platform knows (spend, signups) with what your own product knows (downstream funnel) and replaces vanity CPA with true CAC per qualified lead. It then classifies every creative into actionable buckets so you stop scaling the wrong winners.
Core principle: The ad platform's CPA is a half-truth. Real optimization needs both halves of the funnel — pre-signup (the platform has it) and post-signup (you have it). Until they're joined, you're flying blind.
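A toy calculation makes the gap between the two halves concrete. All figures are illustrative, and the "quality multiplier" definition (signups per qualified lead) is an assumption for the sketch:

```python
# Illustrative numbers -- not from a real campaign.
spend = 1200.0   # ad spend in the window (platform side)
signups = 80     # what the platform counts as conversions
qualified = 16   # leads that completed the qualifying action (your DB)

platform_cpa = spend / signups            # 15.0 -- looks like a winner
true_cac = spend / qualified              # 75.0 -- the number that matters
quality_multiplier = signups / qualified  # 5.0 signups bought per qualified lead

print(platform_cpa, true_cac, quality_multiplier)
```

The same spend can produce a flattering CPA and a painful CAC at once; only the joined view exposes it.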
This skill is opinionated about what to measure (true CAC per qualified lead, with cohort maturation and vanity scoring) and agnostic about how the data is sourced.
It assumes one of three standard attribution patterns:
| Pattern | Setup | Join Key |
|---|---|---|
| A. UTM-only (most common) | UTM params captured on signup form, stored on lead/user record. Downstream events joined by user_id inside your DB. | utm_content (typically the ad ID) on both sides, or fbclid |
| B. UTM + CAPI send-back (best) | Same as A, plus your app fires Conversions API events back to Meta when downstream stages hit. Meta then optimizes for quality, not signups. | event_id / external_id |
| C. Meta Lead Ads + CRM sync | Meta-hosted lead form, lead_id syncs to CRM/DB, joined there. | lead_id |
If none of these patterns is wired up, the skill switches to tracking-gap mode — it produces a fix-the-tracking report instead of an analysis.
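For Pattern A, the join itself is a lookup of the lead's captured `utm_content` against the ad ID on the platform side. A minimal stdlib sketch, with hypothetical field names you would adapt to your actual export and schema:

```python
from collections import defaultdict

# Hypothetical records -- adapt field names to your export/schema.
ads = {                       # keyed by ad ID, from the Meta export
    "123": {"spend": 900.0, "signups_reported": 60},
    "456": {"spend": 300.0, "signups_reported": 10},
}
leads = [                     # from your own DB; utm_content carries the ad ID
    {"user_id": 1, "utm_content": "123", "qualified": True},
    {"user_id": 2, "utm_content": "123", "qualified": False},
    {"user_id": 3, "utm_content": "456", "qualified": True},
    {"user_id": 4, "utm_content": None,  "qualified": False},  # orphan: no join key
]

# Pattern A join: match each lead to an ad on utm_content == ad ID.
qualified_per_ad = defaultdict(int)
orphans = 0
for lead in leads:
    key = lead["utm_content"]
    if key in ads:
        qualified_per_ad[key] += lead["qualified"]
    else:
        orphans += 1

print(dict(qualified_per_ad), orphans)
```

Patterns B and C are the same shape with `event_id` / `external_id` or `lead_id` as the key.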
Phase 0 asks 6 short questions. Don't proceed until each is answered (the default answer is "I don't know — let's find out").
- utm_* params or fbclid? ("I don't know" → inspect the signup form's HTML / network requests)

Output of Phase 0: a one-paragraph Pipeline Brief stating the assumed pattern (A/B/C), the join key, the qualification definition, and any unknowns.
Pull a sample of 10–20 recent signups from the downstream source. For each, check:
- utm_source / utm_campaign / utm_content present? (Or fbclid? Or lead_id?)

Coverage thresholds:
| Coverage | Action |
|---|---|
| ≥80% joinable | Proceed to Phase 2 (analysis mode) |
| 50–80% joinable | Proceed with explicit confidence caveat on every finding |
| <50% joinable | Switch to tracking-gap mode. Skip Phases 2–6. Output the gap report. |
Output of Phase 1: a Data Quality Report with coverage %, sample of orphan records, and exact field-level findings.
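The coverage check is mechanical: a record is joinable if any join key is populated. A sketch over a hypothetical sample (field names assumed, thresholds from the table above):

```python
# Hypothetical sample of recent signups pulled from the downstream DB.
sample = [
    {"user_id": 1, "utm_content": "123", "fbclid": None},
    {"user_id": 2, "utm_content": None,  "fbclid": "IwAR..."},
    {"user_id": 3, "utm_content": None,  "fbclid": None},   # orphan
    {"user_id": 4, "utm_content": "456", "fbclid": None},
]

JOIN_KEYS = ("utm_content", "fbclid", "lead_id")

def joinable(rec):
    """A record is joinable if any recognized join key is populated."""
    return any(rec.get(k) for k in JOIN_KEYS)

coverage = sum(joinable(r) for r in sample) / len(sample)
orphans = [r["user_id"] for r in sample if not joinable(r)]

# Thresholds from the coverage table: >=80% analyze, 50-80% caveat, <50% gap mode.
if coverage >= 0.80:
    mode = "analysis"
elif coverage >= 0.50:
    mode = "analysis-with-caveat"
else:
    mode = "tracking-gap"

print(f"coverage={coverage:.0%} mode={mode} orphans={orphans}")
```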
For every ad / ad set / campaign with statistical volume (default ≥30 signups in the window), construct:
| Stage | Count | Conv. from prev. | What a drop here means |
|---|---|---|---|
| Impressions | n | — | — |
| Link Clicks | n | CTR | Hook / placement issue |
| Signups | n | Click → Signup | LP / form friction (use ad-to-landing-page-auditor) |
| Qualified action started | n | Signup → Started | Vanity signups — wrong promise in the ad |
| Qualified action approved | n | Started → Approved | Wrong audience or fraud |
| Payout / value event | n | Approved → Paid | The "real" conversion |
| Repeat action (configurable window) | n | Retention | One-and-done quality |
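Given per-stage counts, the step conversions fall out of adjacent-pair division. A sketch with invented counts for one creative, mirroring the funnel table above:

```python
# Hypothetical stage counts for one creative, ordered top to bottom
# to mirror the funnel table.
funnel = [
    ("Impressions", 50000),
    ("Link Clicks", 1500),
    ("Signups", 120),
    ("Started", 40),
    ("Approved", 30),
    ("Paid", 24),
]

# Step conversion from each stage to the next; each drop maps onto the
# diagnosis column of the table (hook issue, LP friction, vanity signups, ...).
rates = []
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    rates.append((f"{prev_name} -> {name}", n / prev_n))

for step, rate in rates:
    print(f"{step}: {rate:.2%}")
```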
The skill should pull Meta-side data via the existing Meta Marketing API connection (MCP, native API, or pasted CSV) and downstream-side data via whichever source Phase 0 identified.
Per creative / ad set / campaign:
Compute three quality scores per creative with sufficient volume:
Then classify into action buckets:
| Bucket | Rule | Action |
|---|---|---|
| Scale | Low True CAC + good quality + sufficient volume | Increase budget, watch for diminishing returns |
| Keep | Mid True CAC + acceptable quality | Hold |
| Investigate | High True CAC but high quality (often low volume) | Give it more budget before deciding |
| Cut | Low Platform CPA + high vanity score (the dangerous one — looks like a winner) | Pause and replace |
| Insufficient data | Below volume threshold | Wait, do not act |
Every classification cites the data and gets a confidence flag (sample size + CI on True CAC).
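The bucket rules can be sketched as a small decision function. The thresholds below (target CAC, qualification-rate cutoffs, the "dangerous winner" test) are illustrative assumptions, not the skill's actual defaults:

```python
# Illustrative thresholds -- the skill's real cutoffs are configurable.
TARGET_CAC = 100.0
MIN_SIGNUPS = 30  # volume threshold from Phase 2

def classify(spend, signups, qualified):
    if signups < MIN_SIGNUPS:
        return "Insufficient data"
    platform_cpa = spend / signups
    qual_rate = qualified / signups
    true_cac = spend / qualified if qualified else float("inf")
    # The dangerous winner: cheap signups, almost none of which qualify.
    if platform_cpa < TARGET_CAC * 0.25 and qual_rate < 0.10:
        return "Cut"
    if true_cac <= TARGET_CAC and qual_rate >= 0.25:
        return "Scale"
    if true_cac <= TARGET_CAC * 1.5:
        return "Keep"
    return "Investigate"

print(classify(900, 60, 30))  # true CAC 30, half qualify -> Scale
print(classify(500, 50, 2))   # CPA 10 looks great, 4% qualify -> Cut
print(classify(400, 10, 5))   # below volume threshold
```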
The biggest analysis trap: judging signups before they've had time to complete the funnel.
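One way to avoid the trap is to score only cohorts old enough to have matured, and report younger signups separately rather than counting them as failures. A sketch assuming a hypothetical 14-day time-to-qualify:

```python
from datetime import date, timedelta

# Assumption for the sketch: qualification typically lands within 14 days.
MATURATION_DAYS = 14
today = date(2024, 6, 30)

signups = [
    {"signup_date": date(2024, 6, 1),  "qualified": True},
    {"signup_date": date(2024, 6, 5),  "qualified": False},
    {"signup_date": date(2024, 6, 25), "qualified": False},  # too young to judge
]

cutoff = today - timedelta(days=MATURATION_DAYS)
mature = [s for s in signups if s["signup_date"] <= cutoff]

# Qualification rate over mature cohorts only; immature signups are
# reported separately, never counted as failures.
rate = sum(s["qualified"] for s in mature) / len(mature)
print(f"mature={len(mature)} immature={len(signups) - len(mature)} qual_rate={rate:.0%}")
```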
Use this exact structure for the report:
1. PIPELINE BRIEF
- Pattern (A/B/C), join key, qualification definition, unknowns
2. DATA QUALITY
- Coverage %, orphan counts, confidence level
3. HEADLINE
- Overall True CAC vs. Platform CPA
- Overall Quality Multiplier
- Period-over-period delta
4. PER-CREATIVE TABLE
- Ad ID | Spend | Signups | Qualified | Platform CPA | True CAC | Quality Mult. | Vanity | Class
5. ACTION LIST (prioritized)
- Cut (dangerous winners) → Scale (proven quality) → Investigate (low-vol promising) → Keep
- Each action: hypothesis + expected impact + rollback plan
6. AUDIENCE / PLACEMENT PATTERNS
- Which interests / lookalikes / geos / placements correlate with qualified leads
- Which correlate with vanity signups
7. TRACKING GAPS (if any from Phase 1)
- Specific fields, code locations, or events to wire up

If <50% of signups are joinable, the skill stops the analysis and outputs:
1. WHAT'S BROKEN
- Specific symptoms (e.g. "0 signups have utm_content; signup form's hidden fields are empty")
2. WHAT TO ADD
- Code-level recommendations (e.g. "preserve URL params on form submit and POST to /signup as utm_source, utm_campaign, utm_content, fbclid")
- Schema changes (e.g. "add columns to leads table: utm_source, utm_campaign, utm_content, fbclid, signup_timestamp")
- CAPI event setup (recommended, not required)
3. HOW TO VERIFY
- The 5-minute test: drop a tagged URL, complete signup, query DB, confirm fields populated
4. EXPECTED IMPACT
- "Once fixed, re-run this skill in `analysis` mode in N days when you have enough signups for statistical volume"

Related skills:
- meta-ads-analyzer — Run after this skill to interpret why a creative's quality is low using Meta's system mechanics
- ad-campaign-analyzer — Use for cross-channel budget reallocation once true CAC is known
- ad-to-landing-page-auditor — Pair with this when "Click → Signup" drop-off is the leak
- messaging-ab-tester — Use to generate replacement creatives for anything in the Cut bucket