Scrapes LinkedIn job postings using the JobSpy library (python-jobspy). Use this skill whenever the user wants to find jobs on LinkedIn, search for open roles, pull job listings, build a job pipeline, source job targets for GTM research, or monitor hiring signals. Even if the user just says "find me some jobs" or "what roles is [company] hiring for", use this skill. It runs a local Python script that outputs a CSV of job postings with title, company, location, salary, job type, description, and direct URLs.
```shell
npx gooseworks install --claude
# Then in your agent:
/gooseworks <prompt> --skill linkedin-job-scraper
```
This skill finds LinkedIn job postings by running tools/jobspy_scraper.py, a thin wrapper
around the JobSpy library. It handles installation,
parameter construction, execution, and result interpretation.
Install the dependency once (requires Python 3.10+):

```shell
python3.12 -m pip install -U python-jobspy --break-system-packages
```

Run the scraper:

```shell
python3.12 tools/jobspy_scraper.py \
  --search "software engineer" \
  --location "San Francisco, CA" \
  --results 25 \
  --output .tmp/jobs.csv
```

Results are saved as CSV and printed as a summary table.
Identify from the user's message:

- `--hours-old` filter if the user wants recent posts (e.g. "last 48 hours")
- `--company-ids` if targeting a specific company
- `--fetch-descriptions` if the user needs job description text

If anything is ambiguous (e.g. "find AI jobs"), pick reasonable defaults and tell the user what you used.
Build the tools/jobspy_scraper.py command using the parameters below.
Always save output to .tmp/ so it's disposable and easy to find.
```shell
python tools/jobspy_scraper.py \
  --search "<term>" \
  --location "<location>" \
  --results <N> \
  [--hours-old <N>] \
  [--fetch-descriptions] \
  [--company-ids <id1,id2>] \
  [--job-type fulltime|parttime|contract|internship] \
  [--remote] \
  --output .tmp/<descriptive_filename>.csv
```

Note: `--hours-old` and `--easy-apply` cannot be used together (LinkedIn API constraint).
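When assembling this command programmatically (e.g. looping over several companies), it can help to keep the flag rules in one place. The sketch below is illustrative only; `build_command` is a hypothetical helper, not part of `jobspy_scraper.py`:

```python
from typing import Optional

def build_command(search: str, output: str, location: Optional[str] = None,
                  results: int = 25, hours_old: Optional[int] = None,
                  easy_apply: bool = False, fetch_descriptions: bool = False,
                  company_ids: Optional[list[str]] = None,
                  job_type: Optional[str] = None, remote: bool = False) -> list[str]:
    """Assemble the argv list for tools/jobspy_scraper.py (hypothetical helper)."""
    if hours_old is not None and easy_apply:
        # LinkedIn rejects this combination, so fail fast locally.
        raise ValueError("--hours-old and --easy-apply cannot be used together")
    cmd = ["python", "tools/jobspy_scraper.py", "--search", search,
           "--results", str(results), "--output", output]
    if location:
        cmd += ["--location", location]
    if hours_old is not None:
        cmd += ["--hours-old", str(hours_old)]
    if fetch_descriptions:
        cmd.append("--fetch-descriptions")
    if company_ids:
        cmd += ["--company-ids", ",".join(company_ids)]
    if job_type:
        cmd += ["--job-type", job_type]
    if remote:
        cmd.append("--remote")
    return cmd
```

The returned list can be passed directly to `subprocess.run(cmd, check=True)`, avoiding shell-quoting issues with search terms that contain spaces.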
Execute the command. The script will print a progress message and a summary of results found.
If the script is not found at tools/jobspy_scraper.py, check whether the file needs to be created
by reading skills/linkedin-job-scraper/scripts/jobspy_scraper.py and copying it to tools/.
After the run, report the result count and CSV path to the user.

The script accepts the following flags:
| Flag | Description | Default |
|---|---|---|
| `--search` | Job title / keywords | required |
| `--location` | City, state, or country | none |
| `--results` | Number of results to fetch | 25 |
| `--hours-old` | Only jobs posted within N hours | none |
| `--fetch-descriptions` | Fetch full job descriptions (slower) | false |
| `--company-ids` | Comma-separated LinkedIn company IDs | none |
| `--job-type` | fulltime, parttime, contract, internship | any |
| `--remote` | Filter for remote jobs only | false |
| `--output` | Path for CSV output | .tmp/jobs.csv |
The CSV output includes:
| Column | Description |
|---|---|
| TITLE | Job title |
| COMPANY | Employer name |
| LOCATION | City / State / Country |
| IS_REMOTE | True/False |
| JOB_TYPE | fulltime, contract, etc. |
| DATE_POSTED | When the listing was posted |
| MIN_AMOUNT | Minimum salary |
| MAX_AMOUNT | Maximum salary |
| CURRENCY | Currency code |
| JOB_URL | Direct link to the LinkedIn posting |
| DESCRIPTION | Full job description (if `--fetch-descriptions` used) |
| JOB_LEVEL | Seniority level (LinkedIn-specific) |
| COMPANY_INDUSTRY | Industry classification |
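As a quick post-processing sketch (assuming the column names above and a CSV produced by the scraper), the output can be summarized with only the standard library; `summarize_jobs` is a hypothetical helper, not part of the skill:

```python
import csv
from collections import Counter

def summarize_jobs(csv_path: str) -> dict:
    """Count postings per company and collect remote titles from a scraper CSV."""
    companies = Counter()
    remote_titles = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            companies[row["COMPANY"]] += 1
            if row.get("IS_REMOTE", "").strip().lower() == "true":
                remote_titles.append(row["TITLE"])
    return {"total": sum(companies.values()),
            "top_companies": companies.most_common(5),
            "remote_titles": remote_titles}
```

For example, `summarize_jobs(".tmp/jobs.csv")` after a run gives a compact view of who is hiring most and which roles are remote.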
Find recent engineering roles at a startup:
```shell
python tools/jobspy_scraper.py --search "growth engineer" --location "New York" \
  --results 50 --hours-old 72 --output .tmp/growth_eng_nyc.csv
```

Monitor what a specific company is hiring for:
```shell
# First find the LinkedIn company ID from the company's LinkedIn URL
python tools/jobspy_scraper.py --search "engineer" --company-ids 1234567 \
  --results 100 --fetch-descriptions --output .tmp/company_hiring.csv
```

Find remote contract roles:
```shell
python tools/jobspy_scraper.py --search "data analyst" --remote \
  --job-type contract --results 30 --output .tmp/remote_contracts.csv
```

| Error | Fix |
|---|---|
| `ModuleNotFoundError: jobspy` | Run `pip install -U python-jobspy` |
| 0 results returned | Broaden search term, remove location, increase --results |
| Rate limited / blocked | Wait a few minutes; avoid running back-to-back large scrapes |
| `hours_old` and `easy_apply` cannot both be set | Remove one of those flags |
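For the rate-limit case, a simple retry-with-backoff wrapper around the scraper invocation can help. This is an illustrative sketch (the helper name and delay schedule are arbitrary choices, not part of the skill):

```python
import subprocess
import time

def run_with_backoff(cmd: list[str], attempts: int = 3, base_delay: float = 60.0) -> bool:
    """Run the scraper command, waiting progressively longer after each failure."""
    for attempt in range(attempts):
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return True
        if attempt < attempts - 1:
            # Back off: 60s, 120s, ... before the next try.
            time.sleep(base_delay * (attempt + 1))
    return False
```

A linear backoff keeps total wait time predictable for an interactive agent; exponential backoff is the usual alternative when runs are unattended.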
The scraper script lives at tools/jobspy_scraper.py.
If it doesn't exist, copy it from skills/linkedin-job-scraper/scripts/jobspy_scraper.py to tools/:
```shell
cp skills/linkedin-job-scraper/scripts/jobspy_scraper.py tools/
```