Your competitors' unhappy users are broadcasting exactly which search queries you should own.
They're posting in Reddit threads, writing G2 reviews, venting in product forums — and every frustration contains the seed of a search someone is about to make. The problem isn't that this intelligence doesn't exist.
It's that no human has the bandwidth to compile it at the scale you actually need.
Claude reads that map systematically. Give it three competitors and a one-paragraph description of your customer, and it surfaces 80–120 actionable queries in an afternoon — filtered by intent, ranked by opportunity, ready to write against. You review once, approve what fits, and the pipeline writes the posts.
The Skills That Make This Work
Two skills from the SEO Content Engine handle the full discovery-to-content pipeline:
- seo-content-audit — mines forums, review sites, and support communities for competitor pain signals, then converts them into search queries with clear intent and opportunity worth targeting.
- content-asset-creator — writes the post for each approved query: intent-first intro, complete answer, soft CTA. Post 150 stays as specific as post 1.
The audit is the critical piece. Everything downstream — briefing, writing, publishing — depends on the quality of the query list it produces. A weak list means 200 posts nobody finds.
How It Works
Step 1: Define the competitive surface
Give Claude two inputs: a list of 3–5 tools your customers currently use, and one paragraph describing who those customers are and what job they're hiring a tool to do. That's the full scope of human input required. Claude uses it to define exactly where to look, what competitor communities to scan, and which pain signals are worth surfacing.
Step 2: Map the pain surface
Claude scans every public channel where your competitors' users vent — Reddit communities, G2 and Capterra reviews, product forums, YouTube comment sections, support documentation threads. It categorizes complaints into five buckets: functionality gaps, performance complaints, workflow friction, pricing frustration, and migration anxiety. Each bucket maps to a different query archetype. "Airtable gets slow past 10,000 rows" is a performance complaint. "How do I get my Asana projects into a new tool" is a migration query. Both are searches someone is about to make.
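The bucketing step can be sketched as a simple keyword classifier. This is a minimal illustration, not the actual skill's logic — the bucket names come from the list above, but the marker phrases and the `classify` function are assumptions for demonstration:

```python
# Illustrative marker phrases for each of the five complaint buckets.
BUCKETS = {
    "performance": ["slow", "lag", "crash", "timeout"],
    "functionality gap": ["missing", "can't", "no way to", "doesn't support"],
    "workflow friction": ["clicks", "tedious", "manual", "every time"],
    "pricing": ["expensive", "price", "per seat", "paywall"],
    "migration": ["export", "move to", "switch from", "into a new tool"],
}

def classify(complaint: str) -> str:
    """Assign a raw complaint to the first bucket whose marker appears in it."""
    text = complaint.lower()
    for bucket, markers in BUCKETS.items():
        if any(m in text for m in markers):
            return bucket
    return "uncategorized"
```

In practice the real pipeline reads context rather than matching strings, but the output shape is the same: every raw complaint lands in exactly one bucket, and each bucket feeds a different query archetype.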
Step 3: Translate pain into queries
Forum posts are emotional — they're not search queries. This is where most manual research breaks down. A human captures the pain but misses half the query variants. Claude translates every signal into the actual strings people type into Google when they decide to act: the direct complaint version, the "fix" version, the "alternative" version, the how-to version. One frustrated review generates four queries. None get missed. By the time it finishes scanning three competitors, Claude has mapped a query landscape that would take a human researcher weeks to build.
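The four-variant expansion can be sketched as a template function. The templates and function name below are illustrative assumptions, not the pipeline's actual implementation — real query phrasing would be tuned per vertical:

```python
def expand_pain_signal(competitor: str, pain: str, topic: str) -> list[str]:
    """Turn one complaint into the four query archetypes a searcher might type."""
    return [
        f"{competitor} {pain}",                    # direct complaint version
        f"how to fix {competitor} {pain}",         # "fix" version
        f"{competitor} alternative for {topic}",   # "alternative" version
        f"how to {topic} without {competitor}",    # how-to version
    ]

queries = expand_pain_signal("Airtable", "slow with large tables", "handle large datasets")
```

One review about slowness yields four distinct strings to target, which is exactly where manual research tends to stop at one.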
Step 4: Filter for intent and opportunity
Raw volume is a trap. Claude cross-references the query list against search competition data and intent signals. High-frustration queries with manageable competition and clear buying intent rise to the top. "Airtable alternative for large datasets" with 800 searches per month and no strong challenger in the top 5 is worth targeting. A broad comparison query dominated by established review sites is not. The filtered list — typically 80–120 queries — goes to you for a single 30-minute review pass. You cut anything off-target and approve the rest.
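The filtering logic reduces to a scoring pass. The field names, weights, and the 50-searches floor below are illustrative assumptions — the real criteria depend on whatever competition data your tools provide:

```python
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    monthly_searches: int
    frustration: float    # 0-1: strength of the pain signal behind the query
    competition: float    # 0-1: strength of the current top-5 results
    buying_intent: float  # 0-1: likelihood the searcher is ready to act

def opportunity(q: Query) -> float:
    # High frustration and intent, low competition, score highest.
    return q.frustration * q.buying_intent * (1 - q.competition)

def shortlist(queries: list[Query], keep: int = 120) -> list[Query]:
    viable = [q for q in queries if q.monthly_searches >= 50]
    return sorted(viable, key=opportunity, reverse=True)[:keep]

ranked = shortlist([
    Query("airtable alternative for large datasets", 800, 0.9, 0.3, 0.8),
    Query("best project management software", 40000, 0.2, 0.95, 0.4),
])
```

Note how the broad, high-volume query scores near zero: its competition term wipes out the volume advantage, which is the whole point of the filter.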
Step 5: Build a brief for each post
For every approved query, Claude generates a content brief: the intent behind the search, what the post needs to answer, which current results it's competing against, and the angle that beats them. The brief is what prevents posts from drifting into generality. Without it, posts start sounding like content produced to fill a keyword slot rather than answer a real question. With it, every post is specific enough to rank against the exact query it's targeting.
Step 6: Write and publish on a schedule
Each post is 600–900 words. Intent-first intro, complete answer in the body, one soft CTA line. The publishing pipeline exports finished drafts to your CMS over 4–6 weeks so indexing looks organic. You spot-check the first batch, confirm the pattern is right, and let the rest run without touching it again. The human review window is one afternoon. The publishing window is six weeks. After that, it runs.
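The staggered export can be sketched as a randomized schedule. This is a minimal illustration under assumed parameters (a five-week window, a fixed seed for reproducibility) rather than the pipeline's actual scheduler:

```python
import random
from datetime import date, timedelta

def publish_schedule(n_posts: int, start: date,
                     weeks: int = 5, seed: int = 0) -> list[date]:
    """Spread n_posts across the window on random days so indexing looks organic."""
    rng = random.Random(seed)
    # Draw day offsets with replacement: several posts may share a day.
    offsets = sorted(rng.choices(range(weeks * 7), k=n_posts))
    return [start + timedelta(days=o) for o in offsets]

dates = publish_schedule(100, date(2025, 1, 6))
```

Randomized spacing avoids the telltale burst of a hundred pages appearing on the same day.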
Step 7: Let it compound
Posts take 60–90 days to index. By month six, you're pulling sign-ups from pain-point queries without touching the system. A founder on r/SaaS documented exactly this — 200 posts targeting the most boring questions their customers Google at 3pm on a Tuesday, still pulling 50+ sign-ups a month eight months later, no ad spend. Every complaint their competitors' users posted publicly became a page that ranks when the next frustrated user searches for the same thing.
Every complaint your competitors' users post publicly is a search query you can own. Claude finds all of them, writes the posts, and publishes them on a schedule.
Start your competitive intelligence pipeline with Gooseworks → gooseworks.ai
