Search LinkedIn posts by keywords using Crustdata API directly, deduplicate and sort by engagement. Outputs to CSV or JSON. Use when researching LinkedIn content around specific topics.
npx gooseworks install --claude # Then in your agent: /gooseworks <prompt> --skill linkedin-post-research
Search LinkedIn for posts matching keywords via the Crustdata API, deduplicate results, and output sorted by engagement.
Requires the requests package and the CRUSTDATA_API_TOKEN environment variable.
# Single keyword search
python3 skills/linkedin-post-research/scripts/search_posts.py \
--keyword "AI sourcing" \
--time-frame past-week
# Multiple keywords, output CSV
python3 skills/linkedin-post-research/scripts/search_posts.py \
--keyword "talent sourcing tools" \
--keyword "recruiting automation" \
--keyword "AI sourcing" \
--time-frame past-week \
--output csv \
--output-file results.csv
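The CSV output can be post-processed with the standard library alone. A minimal sketch (not part of the bundled scripts), assuming the column names listed in the output-columns table in this document:

```python
import csv

# Minimal sketch: load results.csv (written by the command above) and
# return the n most-reacted-to posts. The "reactions" column holds the
# total reaction count per the output-columns table in this document.
def top_posts(path, n=5):
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda r: int(r["reactions"]), reverse=True)
    return rows[:n]
```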
# Keywords from file, multiple pages
python3 skills/linkedin-post-research/scripts/search_posts.py \
--keywords-file keywords.txt \
--time-frame past-week \
--pages 3 \
--output json \
--output-file results.json
# Summary only (prints to stderr)
python3 skills/linkedin-post-research/scripts/search_posts.py \
--keyword "recruiting stack" \
--output summary

Keywords: one or more --keyword flags or --keywords-file (one per line).
Time frames: past-day, past-week, past-month, past-quarter, past-year, all-time (default: past-month).
Sort: relevance or date (default: relevance).

| Flag | Default | Description |
|---|---|---|
| --keyword, -k | required | Keyword to search (repeatable) |
| --keywords-file, -f | — | File with one keyword per line (lines starting with # are ignored) |
| --time-frame, -t | past-month | Time filter |
| --sort-by, -s | relevance | Sort order |
| --pages, -p | 1 | Pages per keyword (~5 posts per page) |
| --limit, -l | — | Exact number of posts per API call (1-100) |
| --output, -o | json | Output format: json, csv, summary |
| --output-file | stdout | Write output to file |
| --max-workers | 6 | Max parallel API calls |
The script calls the /screener/linkedin_posts/keyword_search API in parallel for all (keyword × page) combinations, deduplicates results by backend_urn, and sorts by total_reactions descending.

Example JSON record:

{
"author": "Jane Smith",
"keyword": "AI sourcing",
"reactions": 142,
"comments": 28,
"date": "2026-02-20",
"post_preview": "First 200 chars of the post text...",
"url": "https://www.linkedin.com/posts/...",
"backend_urn": "urn:li:activity:123456789",
"num_shares": 12,
"reactions_by_type": "{\"LIKE\": 100, \"EMPATHY\": 30, \"PRAISE\": 12}",
"is_repost": false
}

| Column | Description |
|---|---|
| author | LinkedIn post author name |
| keyword | Which search keyword matched this post |
| reactions | Total reaction count |
| comments | Total comment count |
| date | Post date (YYYY-MM-DD) |
| post_preview | First ~200 characters of the post |
| url | Direct link to the LinkedIn post |
| backend_urn | Unique post identifier |
| num_shares | Number of shares |
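The dedupe-and-sort step described above can be sketched in a few lines. An illustrative sketch, not the bundled script itself: keep the first record seen for each backend_urn, then order by the reactions field, highest first.

```python
# Illustrative dedupe-and-sort: first record seen per backend_urn wins,
# then results are ordered by total reactions, descending.
def dedupe_and_sort(posts):
    seen = {}
    for post in posts:
        seen.setdefault(post["backend_urn"], post)
    return sorted(seen.values(), key=lambda p: p["reactions"], reverse=True)
```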
Endpoint: GET https://api.crustdata.com/screener/linkedin_posts/keyword_search
Parameters:
keyword — Search term
page — Page number (1-based, ~5 posts per page)
sort_by — relevance or date
date_posted — Time filter (past-day, past-week, etc.)
limit — Exact number of posts to return (1-100)

Auth: Authorization: Token <CRUSTDATA_API_TOKEN>
Credit usage: 1 credit per post returned.
Rate limits: Searches run in parallel (default 6 workers). The script handles 429 responses with automatic retry.
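A direct call can be sketched with the standard library alone (the bundled script uses requests). Parameter names follow the list above; this is a minimal sketch, not the script's implementation:

```python
import json
import os
import urllib.parse
import urllib.request

API_URL = "https://api.crustdata.com/screener/linkedin_posts/keyword_search"

def build_params(keyword, page=1, sort_by="relevance",
                 date_posted="past-month", limit=None):
    # Query parameters per the list above; limit is optional (1-100).
    params = {"keyword": keyword, "page": page,
              "sort_by": sort_by, "date_posted": date_posted}
    if limit is not None:
        params["limit"] = limit
    return params

def search_posts(keyword, **kwargs):
    # One GET per (keyword, page); auth via the token env variable.
    query = urllib.parse.urlencode(build_params(keyword, **kwargs))
    req = urllib.request.Request(
        f"{API_URL}?{query}",
        headers={"Authorization": f"Token {os.environ['CRUSTDATA_API_TOKEN']}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```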
| Variable | Required | Description |
|---|---|---|
| CRUSTDATA_API_TOKEN | Yes | Crustdata API token |
~1 credit per post returned. Searching 10 keywords × 1 page = ~50 posts = ~50 credits.
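That arithmetic is easy to wrap in a helper. A sketch assuming ~5 posts per page unless --limit is set, per the flags table above:

```python
POSTS_PER_PAGE = 5  # approximate page size noted above

def estimate_credits(num_keywords, pages=1, limit=None):
    # 1 credit per post returned; --limit caps posts per call when set.
    posts_per_call = limit if limit is not None else POSTS_PER_PAGE
    return num_keywords * pages * posts_per_call
```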
After getting the post list, common next steps:
linkedin-commenter-extractor — pass post URLs to find warm leads
lead-qualification — score extracted people against ICP

Example pipeline:
# Step 1: Search posts
python3 skills/linkedin-post-research/scripts/search_posts.py \
--keyword "AI sourcing" --keyword "recruiting automation" \
--time-frame past-week --output json --output-file posts.json
# Step 2: Extract post URLs for commenter extraction
cat posts.json | python3 -c "
import json, sys
posts = json.load(sys.stdin)
# Filter: keep posts with 5+ comments
for p in posts:
if p['comments'] >= 5:
print(p['url'])
" > post_urls.txt
# Step 3: Extract commenters from those posts
while read -r url; do
  python3 skills/linkedin-commenter-extractor/scripts/extract_commenters.py --post-url "$url" --output json
done < post_urls.txt