Monitors competitor content across blogs, LinkedIn, and Twitter/X on a recurring basis. Surfaces new posts, trending topics, and content gaps you can own. Chains blog-scraper, linkedin-profile-post-scraper, and twitter-scraper. Use when you want a weekly digest of what competitors are publishing and which topics are generating engagement.
```shell
npx gooseworks install --claude
# Then in your agent:
/gooseworks <prompt> --skill competitor-content-tracker
```
Monitor competitor content activity across three channels — blog, LinkedIn, Twitter/X — and produce a consolidated digest highlighting what's new, what's getting traction, and where you have a content gap.
Save the config to `clients/<client-name>/configs/competitor-content-tracker.json`. Example:

```json
{
  "competitors": [
    {
      "name": "Clay",
      "blog_url": "https://clay.com/blog",
      "linkedin_profiles": ["https://www.linkedin.com/in/kareem-amin/"],
      "twitter_handles": ["@clay_hq", "@kareemamin"]
    }
  ],
  "days_back": 7,
  "keywords": ["GTM", "outbound", "AI agents", "growth"],
  "output_mode": "highlights"
}
```

Run blog-scraper for each competitor blog URL:
```shell
python3 skills/blog-scraper/scripts/scrape_blogs.py \
  --urls "<competitor_blog_url>" \
  --days <days_back> \
  --keywords "<keywords>" \
  --output summary
```

Collect: post title, publish date, URL, excerpt.
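The blog-scraper command runs once per competitor. A minimal Python sketch of that loop, assuming the config shape shown earlier; the `build_blog_cmd` helper is hypothetical and only assembles the argument list the skill would pass to `scrape_blogs.py`:

```python
import json

def build_blog_cmd(config: dict, competitor: dict) -> list[str]:
    """Assemble the blog-scraper argument list for one competitor.
    Script path and flags mirror the command shown above."""
    return [
        "python3", "skills/blog-scraper/scripts/scrape_blogs.py",
        "--urls", competitor["blog_url"],
        "--days", str(config["days_back"]),
        "--keywords", ",".join(config["keywords"]),
        "--output", "summary",
    ]

config = json.loads("""{
  "competitors": [{"name": "Clay", "blog_url": "https://clay.com/blog"}],
  "days_back": 7,
  "keywords": ["GTM", "outbound"]
}""")

for comp in config["competitors"]:
    # Print the command instead of executing it; the real skill would run it.
    print(" ".join(build_blog_cmd(config, comp)))
```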
Run linkedin-profile-post-scraper for each tracked founder/executive LinkedIn URL:
```shell
python3 skills/linkedin-profile-post-scraper/scripts/scrape_linkedin_posts.py \
  --profiles "<linkedin_url_1>,<linkedin_url_2>" \
  --days <days_back> \
  --max-posts 20 \
  --output summary
```

Collect: post text preview, date, reactions, comments, post URL.
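Selecting the "top post" for the digest can be sketched as below. The post fields (`reactions`, `comments`) follow the Collect list above; the equal weighting of reactions and comments is an assumption, not the skill's documented ranking:

```python
def top_post(posts: list[dict]) -> dict:
    """Pick the highest-engagement post; reactions and comments
    are weighted equally (a simple, adjustable heuristic)."""
    return max(posts, key=lambda p: p.get("reactions", 0) + p.get("comments", 0))

posts = [
    {"text": "GTM playbook", "reactions": 120, "comments": 14},
    {"text": "Hiring update", "reactions": 45, "comments": 3},
]
print(top_post(posts)["text"])  # prints "GTM playbook"
```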
Run twitter-scraper for each handle:
```shell
python3 skills/twitter-scraper/scripts/search_twitter.py \
  --query "from:<handle>" \
  --since <YYYY-MM-DD> \
  --until <YYYY-MM-DD> \
  --max-tweets 20 \
  --output summary
```

Collect: tweet text, date, likes, retweets, URL.
After collecting raw data, synthesize across all channels:
Produce a structured markdown digest:

```markdown
# Competitor Content Digest — Week of [DATE]

## Summary
- [N] new blog posts tracked across [N] competitors
- Top trending topic: [topic]
- Biggest content gap for you: [topic]

---

## [Competitor Name]

### Blog
- [Post Title] — [Date] — [URL]
  > [One-sentence summary]

### LinkedIn (top post)
> "[Post preview...]"

— [Author], [Date] | [Reactions] reactions, [Comments] comments
[URL]

### Twitter/X (top tweet)
> "[Tweet text]"

— [@handle], [Date] | [Likes] likes
[URL]

### Themes this week: [tag1], [tag2], [tag3]

---

## Content Gap Analysis

| Topic | Competitors covering | You covering |
|-------|----------------------|--------------|
| [topic] | Clay, Apollo | ❌ No |
| [topic] | Nobody | ✅ Yes |

## Recommended Actions
1. [Specific content opportunity to act on this week]
2. [Topic to consider writing a response/alternative take on]
```

Save the digest to `clients/<client-name>/intelligence/competitor-content-[YYYY-MM-DD].md`.
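The gap-analysis table can be derived mechanically from per-competitor topic sets. A sketch; the `content_gaps` helper and its example inputs are illustrative, not part of the skill's scripts:

```python
def content_gaps(
    competitor_topics: dict[str, set[str]], your_topics: set[str]
) -> list[tuple[str, list[str], bool]]:
    """For each topic any competitor covers, list who covers it and
    whether you do — the raw rows behind the gap table."""
    rows = []
    all_topics = set().union(*competitor_topics.values())
    for topic in sorted(all_topics):
        covering = sorted(n for n, t in competitor_topics.items() if topic in t)
        rows.append((topic, covering, topic in your_topics))
    return rows

rows = content_gaps(
    {"Clay": {"AI agents", "outbound"}, "Apollo": {"outbound"}},
    {"AI agents"},
)
for topic, who, covered in rows:
    print(topic, who, "✅ Yes" if covered else "❌ No")
```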
This skill is designed to run weekly (Mondays recommended). Set up a cron job:
```shell
# Every Monday at 8am
0 8 * * 1 python3 run_skill.py competitor-content-tracker --client <client-name>
```

Approximate cost per run:

| Component | Cost |
|---|---|
| Blog scraping (RSS mode) | Free |
| LinkedIn post scraping | ~$0.05-0.20/profile (Apify) |
| Twitter scraping | ~$0.01-0.05 per run |
| Total per weekly run | ~$0.10-0.50 depending on scope |
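A rough per-run cost estimate can be computed from the table's ranges; the `estimate_weekly_cost` helper below is hypothetical and simply multiplies the per-profile and per-run figures (blog RSS scraping is free):

```python
def estimate_weekly_cost(n_profiles: int) -> tuple[float, float]:
    """Min/max weekly cost from the per-component ranges above."""
    linkedin = (0.05 * n_profiles, 0.20 * n_profiles)  # per LinkedIn profile
    twitter = (0.01, 0.05)                             # per run
    return (linkedin[0] + twitter[0], linkedin[1] + twitter[1])

lo, hi = estimate_weekly_cost(2)
print(f"${lo:.2f}-${hi:.2f}")  # $0.11-$0.45
```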
Requires the `APIFY_API_TOKEN` env var. Depends on: blog-scraper, linkedin-profile-post-scraper, twitter-scraper.

Related skills:
- Check and improve your brand's visibility across AI search engines (ChatGPT, Perplexity, Gemini, Grok, Claude, DeepSeek). Set up tracking, run visibility analyses, audit your website for AI readability, and get actionable recommendations. Uses the `npx goose-aeo@latest` CLI.
- Extract competitor and customer intelligence from any company's landing page HTML. Discovers tech stack, analytics tools, ad pixels, customer logos, SEO metadata, CTAs, hidden elements, and more. No API keys required.
- Discover all customers of a given company by scanning websites, case studies, review sites, press, social media, job postings, and more. Use when you need competitive intelligence on who a company sells to.