mirror of https://github.com/Bunsly/JobSpy.git (synced 2026-03-05 03:54:31 -08:00)

Compare commits: 2 commits, 13c74a0fed ... v.1.1.16

| Author | SHA1 | Date |
|---|---|---|
|  | 78c1ec8e9f |  |
|  | a2dd93aca1 |  |

.github/workflows/publish-to-pypi.yml (vendored, 41 lines changed)
@@ -1,50 +1,33 @@
name: Publish Python 🐍 distributions 📦 to PyPI
on:
  pull_request:
    types:
      - closed

permissions:
  contents: write
on: push

jobs:
  build-n-publish:
    name: Build and publish Python 🐍 distributions 📦 to PyPI
    runs-on: ubuntu-latest

    if: github.event.pull_request.merged == true && github.event.pull_request.base.ref == 'main'

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"

      - name: Install dependencies
        run: pip install toml

      - name: Increment version
        run: python increment_version.py

      - name: Commit version increment
        run: |
          git config --global user.name 'github-actions'
          git config --global user.email 'github-actions@github.com'
          git add pyproject.toml
          git commit -m 'Increment version'

      - name: Push changes
        run: git push

      - name: Install poetry
        run: pip install poetry --user
        run: >-
          python3 -m
          pip install
          poetry
          --user

      - name: Build distribution 📦
        run: poetry build
        run: >-
          python3 -m
          poetry
          build

      - name: Publish distribution 📦 to PyPI
        if: startsWith(github.ref, 'refs/tags')
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}
          password: ${{ secrets.PYPI_API_TOKEN }}
.github/workflows/python-test.yml (vendored, 22 lines changed)

@@ -1,22 +0,0 @@
name: Python Tests

on:
  pull_request:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install dependencies
        run: |
          pip install poetry
          poetry install
      - name: Run tests
        run: poetry run pytest tests/test_all.py
.pre-commit-config.yaml

@@ -1,7 +0,0 @@
repos:
  - repo: https://github.com/psf/black
    rev: 24.2.0
    hooks:
      - id: black
        language_version: python
        args: [--line-length=88, --quiet]
README.md (247 lines changed)
@@ -2,18 +2,29 @@

**JobSpy** is a simple, yet comprehensive, job scraping library.

**Not technical?** Try out the web scraping tool on our site at [usejobspy.com](https://usejobspy.com).

*Looking to build a data-focused software product?* **[Book a call](https://calendly.com/bunsly/15min)** *to
work with us.*
\
Check out another project we wrote: ***[HomeHarvest](https://github.com/Bunsly/HomeHarvest)** – a Python package
for real estate scraping*

## Features

- Scrapes job postings from **LinkedIn**, **Indeed**, **Glassdoor**, **Google**, & **ZipRecruiter** simultaneously
- Aggregates the job postings in a dataframe
- Proxies support to bypass blocking
- Scrapes job postings from **LinkedIn**, **Indeed** & **ZipRecruiter** simultaneously
- Aggregates the job postings in a Pandas DataFrame
- Proxy support (HTTP/S, SOCKS)

[Video Guide for JobSpy](https://www.youtube.com/watch?v=RuP1HrAZnxs&pp=ygUgam9icyBzY3JhcGVyIGJvdCBsaW5rZWRpbiBpbmRlZWQ%3D) -
Updated for release v1.1.3

![jobspy](https://github.com/cullenwatson/JobSpy/assets/78247585/ec7ef355-05f6-4fd3-8161-a817e31c5c57)

### Installation

```
pip install -U python-jobspy
pip install python-jobspy
```

_Python version >= [3.10](https://www.python.org/downloads/release/python-3100/) required_

@@ -21,30 +32,24 @@ _Python version >= [3.10](https://www.python.org/downloads/release/python-3100/)

### Usage

```python
import csv
from jobspy import scrape_jobs

jobs = scrape_jobs(
    site_name=["indeed", "linkedin", "zip_recruiter", "glassdoor", "google"],
    site_name=["indeed", "linkedin", "zip_recruiter"],
    search_term="software engineer",
    google_search_term="software engineer jobs near San Francisco, CA since yesterday",
    location="San Francisco, CA",
    results_wanted=20,
    hours_old=72,
    country_indeed='USA',

    # linkedin_fetch_description=True # gets more info such as description, direct job url (slower)
    # proxies=["208.195.175.46:65095", "208.195.175.45:65095", "localhost"],
    location="Dallas, TX",
    results_wanted=10,
    country_indeed='USA'  # only needed for indeed
)
print(f"Found {len(jobs)} jobs")
print(jobs.head())
jobs.to_csv("jobs.csv", quoting=csv.QUOTE_NONNUMERIC, escapechar="\\", index=False) # to_excel
jobs.to_csv("jobs.csv", index=False) # / to_xlsx
```

### Output

```
SITE      TITLE                             COMPANY           CITY          STATE  JOB_TYPE  INTERVAL  MIN_AMOUNT  MAX_AMOUNT  JOB_URL                                            DESCRIPTION
SITE      TITLE                             COMPANY_NAME      CITY          STATE  JOB_TYPE  INTERVAL  MIN_AMOUNT  MAX_AMOUNT  JOB_URL                                            DESCRIPTION
indeed    Software Engineer                 AMERICAN SYSTEMS  Arlington     VA    None      yearly    200000      150000      https://www.indeed.com/viewjob?jk=5e409e577046...  THIS POSITION COMES WITH A 10K SIGNING BONUS!...
indeed    Senior Software Engineer          TherapyNotes.com  Philadelphia  PA    fulltime  yearly    135000      110000      https://www.indeed.com/viewjob?jk=da39574a40cb...  About Us TherapyNotes is the national leader i...
linkedin  Software Engineer - Early Career  Lockheed Martin   Sunnyvale     CA    fulltime  yearly    None        None        https://www.linkedin.com/jobs/view/3693012711      Description:By bringing together people that u...

@@ -56,188 +61,112 @@ zip_recruiter Software Developer TEKsystems Phoenix

### Parameters for `scrape_jobs()`

```plaintext
Required
├── site_type (List[enum]): linkedin, zip_recruiter, indeed
└── search_term (str)
Optional
├── site_name (list|str):
|  linkedin, zip_recruiter, indeed, glassdoor, google
|  (default is all)
│
├── search_term (str)
|
├── google_search_term (str)
|  search term for google jobs. This is the only param for filtering google jobs.
│
├── location (str)
│
├── distance (int):
|  in miles, default 50
│
├── job_type (str):
|  fulltime, parttime, internship, contract
│
├── proxies (list):
|  in format ['user:pass@host:port', 'localhost']
|  each job board scraper will round robin through the proxies
|
├── location (int)
├── distance (int): in miles
├── job_type (enum): fulltime, parttime, internship, contract
├── proxy (str): in format 'http://user:pass@host:port' or [https, socks]
├── is_remote (bool)
│
├── results_wanted (int):
|  number of job results to retrieve for each site specified in 'site_name'
│
├── easy_apply (bool):
|  filters for jobs that are hosted on the job board site (LinkedIn easy apply filter no longer works)
│
├── description_format (str):
|  markdown, html (Format type of the job descriptions. Default is markdown.)
│
├── offset (int):
|  starts the search from an offset (e.g. 25 will start the search from the 25th result)
│
├── hours_old (int):
|  filters jobs by the number of hours since the job was posted
|  (ZipRecruiter and Glassdoor round up to next day.)
│
├── verbose (int) {0, 1, 2}:
|  Controls the verbosity of the runtime printouts
|  (0 prints only errors, 1 is errors+warnings, 2 is all logs. Default is 2.)

├── linkedin_fetch_description (bool):
|  fetches full description and direct job url for LinkedIn (Increases requests by O(n))
│
├── linkedin_company_ids (list[int]):
|  searches for linkedin jobs with specific company ids
|
├── country_indeed (str):
|  filters the country on Indeed & Glassdoor (see below for correct spelling)
|
├── enforce_annual_salary (bool):
|  converts wages to annual salary
|
├── ca_cert (str)
|  path to CA Certificate file for proxies
├── results_wanted (int): number of job results to retrieve for each site specified in 'site_type'
├── easy_apply (bool): filters for jobs that are hosted on LinkedIn
├── country_indeed (enum): filters the country on Indeed (see below for correct spelling)
├── offset (num): starts the search from an offset (e.g. 25 will start the search from the 25th result)
```
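Since the newer-side `proxies` parameter is just a list that each scraper rotates through, a minimal sketch of a proxied search looks like this (the proxy endpoints are placeholders, not working servers):

```python
from jobspy import scrape_jobs

# hypothetical endpoints in the documented 'user:pass@host:port' format
proxies = [
    "user:pass@203.0.113.10:8080",
    "user:pass@203.0.113.11:8080",
    "localhost",  # a 'localhost' entry makes that turn use your own IP
]

jobs = scrape_jobs(
    site_name=["indeed", "linkedin"],
    search_term="software engineer",
    location="San Francisco, CA",
    results_wanted=20,
    proxies=proxies,  # each job board scraper round-robins through these
)
```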

### JobPost Schema

```plaintext
JobPost
├── title (str)
├── company (str)
├── job_url (str)
├── location (object)
│ ├── country (str)
│ ├── city (str)
│ ├── state (str)
├── description (str)
├── job_type (str): fulltime, parttime, internship, contract
├── compensation (object)
│ ├── interval (str): yearly, monthly, weekly, daily, hourly
│ ├── min_amount (int)
│ ├── max_amount (int)
│ └── currency (enum)
└── date_posted (date)
└── emails (str)
└── num_urgent_words (int)
└── is_remote (bool)
```
├── Indeed limitations:
|  Only one from this list can be used in a search:
|  - hours_old
|  - job_type & is_remote
|  - easy_apply
│
└── LinkedIn limitations:
|  Only one from this list can be used in a search:
|  - hours_old
|  - easy_apply
```

### Exceptions

The following exceptions may be raised when using JobSpy:

* `LinkedInException`
* `IndeedException`
* `ZipRecruiterException`

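A sketch of catching one of these, assuming the classes are importable from `jobspy.scrapers.exceptions` as the package layout later in this diff suggests:

```python
from jobspy import scrape_jobs
from jobspy.scrapers.exceptions import LinkedInException

try:
    jobs = scrape_jobs(site_name=["linkedin"], search_term="software engineer")
except LinkedInException as e:
    # LinkedIn is the quickest to rate-limit; back off or switch proxies here
    print(f"LinkedIn scrape failed: {e}")
```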
## Supported Countries for Job Searching

### **LinkedIn**

LinkedIn searches globally & uses only the `location` parameter.
LinkedIn searches globally & uses only the `location` parameter.

### **ZipRecruiter**

ZipRecruiter searches for jobs in **US/Canada** & uses only the `location` parameter.

### **Indeed / Glassdoor**
### **Indeed**

Indeed & Glassdoor supports most countries, but the `country_indeed` parameter is required. Additionally, use the `location`
parameter to narrow down the location, e.g. city & state if necessary.
Indeed supports most countries, but the `country_indeed` parameter is required. Additionally, use the `location`
parameter to narrow down the location, e.g. city & state if necessary.
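For instance, a search scoped to a non-US Indeed site might look like this sketch (the values are illustrative):

```python
from jobspy import scrape_jobs

jobs = scrape_jobs(
    site_name=["indeed"],
    search_term="data engineer",
    country_indeed="Germany",  # must be spelled exactly as in the table below
    location="Berlin",         # narrows the search within that country
    results_wanted=10,
)
```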

You can specify the following countries when searching on Indeed (use the exact name, * indicates support for Glassdoor):
You can specify the following countries when searching on Indeed (use the exact name):

|                      |              |            |                |
|----------------------|--------------|------------|----------------|
| Argentina            | Australia*   | Austria*   | Bahrain        |
| Belgium*             | Brazil*      | Canada*    | Chile          |
| Argentina            | Australia    | Austria    | Bahrain        |
| Belgium              | Brazil       | Canada     | Chile          |
| China                | Colombia     | Costa Rica | Czech Republic |
| Denmark              | Ecuador      | Egypt      | Finland        |
| France*              | Germany*     | Greece     | Hong Kong*     |
| Hungary              | India*       | Indonesia  | Ireland*       |
| Israel               | Italy*       | Japan      | Kuwait         |
| Luxembourg           | Malaysia     | Mexico*    | Morocco        |
| Netherlands*         | New Zealand* | Nigeria    | Norway         |
| France               | Germany      | Greece     | Hong Kong      |
| Hungary              | India        | Indonesia  | Ireland        |
| Israel               | Italy        | Japan      | Kuwait         |
| Luxembourg           | Malaysia     | Mexico     | Morocco        |
| Netherlands          | New Zealand  | Nigeria    | Norway         |
| Oman                 | Pakistan     | Panama     | Peru           |
| Philippines          | Poland       | Portugal   | Qatar          |
| Romania              | Saudi Arabia | Singapore* | South Africa   |
| South Korea          | Spain*       | Sweden     | Switzerland*   |
| Romania              | Saudi Arabia | Singapore  | South Africa   |
| South Korea          | Spain        | Sweden     | Switzerland    |
| Taiwan               | Thailand     | Turkey     | Ukraine        |
| United Arab Emirates | UK*          | USA*       | Uruguay        |
| Venezuela            | Vietnam*     |            |                |


## Notes
* Indeed is the best scraper currently with no rate limiting.
* All the job board endpoints are capped at around 1000 jobs on a given search.
* LinkedIn is the most restrictive and usually rate limits around the 10th page with one ip. Proxies are a must basically.
| United Arab Emirates | UK           | USA        | Uruguay        |
| Venezuela            | Vietnam      |            |                |

## Frequently Asked Questions

---

**Q: Why is Indeed giving unrelated roles?**
**A:** Indeed searches the description too.

- use - to remove words
- "" for exact match

Example of a good Indeed query

```py
search_term='"engineering intern" software summer (java OR python OR c++) 2025 -tax -marketing'
```

This searches the description/title and must include software, summer, 2025, one of the languages, engineering intern exactly, no tax, no marketing.

---

**Q: No results when using "google"?**
**A:** You have to use super specific syntax. Search for google jobs on your browser and then whatever pops up in the google jobs search box after applying some filters is what you need to copy & paste into the google_search_term.
**Q: Encountering issues with your queries?**
**A:** Try reducing the number of `results_wanted` and/or broadening the filters. If problems
persist, [submit an issue](https://github.com/Bunsly/JobSpy/issues).

---

**Q: Received a response code 429?**
**A:** This indicates that you have been blocked by the job board site for sending too many requests. All of the job board sites are aggressive with blocking. We recommend:

- Wait some time between scrapes (site-dependent).
- Try using the proxies param to change your IP address.
- Waiting a few seconds between requests.
- Trying a VPN or proxy to change your IP address.

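A minimal way to apply that advice is to pace your own calls; the interval below is an arbitrary choice, not a documented threshold:

```python
import time

import pandas as pd
from jobspy import scrape_jobs

frames = []
for term in ["python developer", "data engineer"]:
    frames.append(scrape_jobs(site_name=["indeed"], search_term=term, results_wanted=20))
    time.sleep(60)  # arbitrary cool-down between scrapes to avoid 429s

jobs = pd.concat(frames, ignore_index=True)
```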
---

### JobPost Schema
**Q: Experiencing a "Segmentation fault: 11" on macOS Catalina?**
**A:** This is due to `tls_client` dependency not supporting your architecture. Solutions and workarounds include:

```plaintext
JobPost
├── title
├── company
├── company_url
├── job_url
├── location
│ ├── country
│ ├── city
│ ├── state
├── description
├── job_type: fulltime, parttime, internship, contract
├── job_function
│ ├── interval: yearly, monthly, weekly, daily, hourly
│ ├── min_amount
│ ├── max_amount
│ ├── currency
│ └── salary_source: direct_data, description (parsed from posting)
├── date_posted
├── emails
└── is_remote
- Upgrade to a newer version of MacOS
- Reach out to the maintainers of [tls_client](https://github.com/bogdanfinn/tls-client) for fixes

Linkedin specific
└── job_level

Linkedin & Indeed specific
└── company_industry

Indeed specific
├── company_country
├── company_addresses
├── company_employees_label
├── company_revenue_label
├── company_description
└── company_logo
```

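Since `scrape_jobs()` flattens this schema into DataFrame columns, post-processing is plain pandas; a small sketch, assuming the lowercase column names the scraper emits:

```python
# keep remote postings that report a salary, highest-paying first
remote = jobs[(jobs["is_remote"] == True) & jobs["min_amount"].notna()]
cols = ["title", "company", "location", "min_amount", "max_amount", "interval"]
print(remote.sort_values("min_amount", ascending=False)[cols].head())
```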
examples/JobSpy_Demo.ipynb (new file, 167 lines)

@@ -0,0 +1,167 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "00a94b47-f47b-420f-ba7e-714ef219c006",
   "metadata": {},
   "outputs": [],
   "source": [
    "from jobspy import scrape_jobs\n",
    "import pandas as pd\n",
    "from IPython.display import display, HTML"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f773e6c-d9fc-42cc-b0ef-63b739e78435",
   "metadata": {},
   "outputs": [],
   "source": [
    "pd.set_option('display.max_columns', None)\n",
    "pd.set_option('display.max_rows', None)\n",
    "pd.set_option('display.width', None)\n",
    "pd.set_option('display.max_colwidth', 50)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1253c1f8-9437-492e-9dd3-e7fe51099420",
   "metadata": {},
   "outputs": [],
   "source": [
    "# example 1 (no hyperlinks, USA)\n",
    "jobs = scrape_jobs(\n",
    "    site_name=[\"linkedin\"],\n",
    "    location='san francisco',\n",
    "    search_term=\"engineer\",\n",
    "    results_wanted=5,\n",
    "\n",
    "    # use if you want to use a proxy\n",
    "    # proxy=\"socks5://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
    "    proxy=\"http://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
    "    #proxy=\"https://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
    ")\n",
    "display(jobs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6a581b2d-f7da-4fac-868d-9efe143ee20a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# example 2 - remote USA & hyperlinks\n",
    "jobs = scrape_jobs(\n",
    "    site_name=[\"linkedin\", \"zip_recruiter\", \"indeed\"],\n",
    "    # location='san francisco',\n",
    "    search_term=\"software engineer\",\n",
    "    country_indeed=\"USA\",\n",
    "    hyperlinks=True,\n",
    "    is_remote=True,\n",
    "    results_wanted=5, \n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fe8289bc-5b64-4202-9a64-7c117c83fd9a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# use if hyperlinks=True\n",
    "html = jobs.to_html(escape=False)\n",
    "# change max-width: 200px to show more or less of the content\n",
    "truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
    "display(HTML(truncate_width))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "951c2fe1-52ff-407d-8bb1-068049b36777",
   "metadata": {},
   "outputs": [],
   "source": [
    "# example 3 - with hyperlinks, international - linkedin (no zip_recruiter)\n",
    "jobs = scrape_jobs(\n",
    "    site_name=[\"linkedin\"],\n",
    "    location='berlin',\n",
    "    search_term=\"engineer\",\n",
    "    hyperlinks=True,\n",
    "    results_wanted=5,\n",
    "    easy_apply=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1e37a521-caef-441c-8fc2-2eb5b2e7da62",
   "metadata": {},
   "outputs": [],
   "source": [
    "# use if hyperlinks=True\n",
    "html = jobs.to_html(escape=False)\n",
    "# change max-width: 200px to show more or less of the content\n",
    "truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
    "display(HTML(truncate_width))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0650e608-0b58-4bf5-ae86-68348035b16a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# example 4 - international indeed (no zip_recruiter)\n",
    "jobs = scrape_jobs(\n",
    "    site_name=[\"indeed\"],\n",
    "    search_term=\"engineer\",\n",
    "    country_indeed = \"China\",\n",
    "    hyperlinks=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "40913ac8-3f8a-4d7e-ac47-afb88316432b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# use if hyperlinks=True\n",
    "html = jobs.to_html(escape=False)\n",
    "# change max-width: 200px to show more or less of the content\n",
    "truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
    "display(HTML(truncate_width))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
examples/JobSpy_Demo.py (new file, 31 lines)

@@ -0,0 +1,31 @@
from jobspy import scrape_jobs
import pandas as pd

jobs: pd.DataFrame = scrape_jobs(
    site_name=["indeed", "linkedin", "zip_recruiter"],
    search_term="software engineer",
    location="Dallas, TX",
    results_wanted=50,  # be wary: the higher it is, the more likely you'll get blocked (a rotating proxy should work though)
    country_indeed="USA",
    offset=25  # start jobs from an offset (use if search failed and want to continue)
    # proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
)

# formatting for pandas
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
pd.set_option("display.width", None)
pd.set_option("display.max_colwidth", 50)  # set to 0 to see full job url / desc

# 1: output to console
print(jobs)

# 2: output to .csv
jobs.to_csv("./jobs.csv", index=False)
print("outputted to jobs.csv")

# 3: output to .xlsx
# jobs.to_excel('jobs.xlsx', index=False)  # pandas writes .xlsx via to_excel

# 4: display in Jupyter Notebook (1. pip install jupyter 2. jupyter notebook)
# display(jobs)
increment_version.py (deleted; this is the script the publish workflow above runs)

@@ -1,21 +0,0 @@
import toml

def increment_version(version):
    major, minor, patch = map(int, version.split('.'))
    patch += 1
    return f"{major}.{minor}.{patch}"

# Load pyproject.toml
with open('pyproject.toml', 'r') as file:
    pyproject = toml.load(file)

# Increment the version
current_version = pyproject['tool']['poetry']['version']
new_version = increment_version(current_version)
pyproject['tool']['poetry']['version'] = new_version

# Save the updated pyproject.toml
with open('pyproject.toml', 'w') as file:
    toml.dump(pyproject, file)

print(f"Version updated from {current_version} to {new_version}")
poetry.lock (generated, 2682 lines changed): diff suppressed because it is too large
pyproject.toml

@@ -1,35 +1,29 @@
[build-system]
requires = [ "poetry-core",]
build-backend = "poetry.core.masonry.api"

[tool.poetry]
name = "python-jobspy"
version = "1.1.76"
description = "Job scraper for LinkedIn, Indeed, Glassdoor & ZipRecruiter"
authors = [ "Zachary Hampton <zachary@bunsly.com>", "Cullen Watson <cullen@bunsly.com>",]
version = "1.1.16"
description = "Job scraper for LinkedIn, Indeed & ZipRecruiter"
authors = ["Zachary Hampton <zachary@bunsly.com>", "Cullen Watson <cullen@bunsly.com>"]
homepage = "https://github.com/Bunsly/JobSpy"
readme = "README.md"
keywords = [ "jobs-scraper", "linkedin", "indeed", "glassdoor", "ziprecruiter",]
[[tool.poetry.packages]]
include = "jobspy"
from = "src"

[tool.black]
line-length = 88
packages = [
    { include = "jobspy", from = "src" }
]

[tool.poetry.dependencies]
python = "^3.10"
requests = "^2.31.0"
tls-client = "^0.2.1"
beautifulsoup4 = "^4.12.2"
pandas = "^2.1.0"
NUMPY = "1.26.3"
NUMPY = "1.24.2"
pydantic = "^2.3.0"
tls-client = "^1.0.1"
markdownify = "^0.13.1"
regex = "^2024.4.28"


[tool.poetry.group.dev.dependencies]
pytest = "^7.4.1"
jupyter = "^1.0.0"
black = "*"
pre-commit = "*"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

@@ -1,64 +1,48 @@
from __future__ import annotations

import pandas as pd
from typing import Tuple
from concurrent.futures import ThreadPoolExecutor, as_completed
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor
from typing import Tuple, Optional

from .jobs import JobType, Location
from .scrapers.utils import set_logger_level, extract_salary, create_logger
from .scrapers.indeed import IndeedScraper
from .scrapers.ziprecruiter import ZipRecruiterScraper
from .scrapers.glassdoor import GlassdoorScraper
from .scrapers.google import GoogleJobsScraper
from .scrapers.linkedin import LinkedInScraper
from .scrapers import SalarySource, ScraperInput, Site, JobResponse, Country
from .scrapers import ScraperInput, Site, JobResponse, Country
from .scrapers.exceptions import (
    LinkedInException,
    IndeedException,
    ZipRecruiterException,
    GlassdoorException,
    GoogleJobsException,
)

SCRAPER_MAPPING = {
    Site.LINKEDIN: LinkedInScraper,
    Site.INDEED: IndeedScraper,
    Site.ZIP_RECRUITER: ZipRecruiterScraper,
}


def _map_str_to_site(site_name: str) -> Site:
    return Site[site_name.upper()]


def scrape_jobs(
    site_name: str | list[str] | Site | list[Site] | None = None,
    search_term: str | None = None,
    google_search_term: str | None = None,
    location: str | None = None,
    distance: int | None = 50,
    site_name: str | list[str] | Site | list[Site],
    search_term: str,
    location: str = "",
    distance: int = None,
    is_remote: bool = False,
    job_type: str | None = None,
    easy_apply: bool | None = None,
    job_type: str = None,
    easy_apply: bool = False,  # linkedin
    results_wanted: int = 15,
    country_indeed: str = "usa",
    hyperlinks: bool = False,
    proxies: list[str] | str | None = None,
    ca_cert: str | None = None,
    description_format: str = "markdown",
    linkedin_fetch_description: bool | None = False,
    linkedin_company_ids: list[int] | None = None,
    offset: int | None = 0,
    hours_old: int = None,
    enforce_annual_salary: bool = False,
    verbose: int = 2,
    **kwargs,
    proxy: Optional[str] = None,
    offset: Optional[int] = 0,
) -> pd.DataFrame:
    """
    Simultaneously scrapes job data from multiple job sites.
    :return: pandas dataframe containing job data
    :return: results_wanted: pandas dataframe containing job data
    """
    SCRAPER_MAPPING = {
        Site.LINKEDIN: LinkedInScraper,
        Site.INDEED: IndeedScraper,
        Site.ZIP_RECRUITER: ZipRecruiterScraper,
        Site.GLASSDOOR: GlassdoorScraper,
        Site.GOOGLE: GoogleJobsScraper,
    }
    set_logger_level(verbose)

    def map_str_to_site(site_name: str) -> Site:
        return Site[site_name.upper()]

    def get_enum_from_value(value_str):
        for job_type in JobType:

@@ -68,46 +52,46 @@ def scrape_jobs(

    job_type = get_enum_from_value(job_type) if job_type else None

    def get_site_type():
        site_types = list(Site)
        if isinstance(site_name, str):
            site_types = [map_str_to_site(site_name)]
        elif isinstance(site_name, Site):
            site_types = [site_name]
        elif isinstance(site_name, list):
            site_types = [
                map_str_to_site(site) if isinstance(site, str) else site
                for site in site_name
            ]
        return site_types
    if type(site_name) == str:
        site_type = [_map_str_to_site(site_name)]
    else:  #: if type(site_name) == list
        site_type = [
            _map_str_to_site(site) if type(site) == str else site_name
            for site in site_name
        ]

    country_enum = Country.from_string(country_indeed)

    scraper_input = ScraperInput(
        site_type=get_site_type(),
        site_type=site_type,
        country=country_enum,
        search_term=search_term,
        google_search_term=google_search_term,
        location=location,
        distance=distance,
        is_remote=is_remote,
        job_type=job_type,
        easy_apply=easy_apply,
        description_format=description_format,
        linkedin_fetch_description=linkedin_fetch_description,
        results_wanted=results_wanted,
        linkedin_company_ids=linkedin_company_ids,
        offset=offset,
        hours_old=hours_old,
    )

    def scrape_site(site: Site) -> Tuple[str, JobResponse]:
        scraper_class = SCRAPER_MAPPING[site]
        scraper = scraper_class(proxies=proxies, ca_cert=ca_cert)
        scraped_data: JobResponse = scraper.scrape(scraper_input)
        cap_name = site.value.capitalize()
        site_name = "ZipRecruiter" if cap_name == "Zip_recruiter" else cap_name
        create_logger(site_name).info(f"finished scraping")
        scraper = scraper_class(proxy=proxy)

        try:
            scraped_data: JobResponse = scraper.scrape(scraper_input)
        except (LinkedInException, IndeedException, ZipRecruiterException) as lie:
            raise lie
        except Exception as e:
            if site == Site.LINKEDIN:
                raise LinkedInException(str(e))
            if site == Site.INDEED:
                raise IndeedException(str(e))
            if site == Site.ZIP_RECRUITER:
                raise ZipRecruiterException(str(e))
            else:
                raise e
        return site.value, scraped_data

    site_to_jobs_dict = {}

@@ -121,32 +105,18 @@ def scrape_jobs(
            executor.submit(worker, site): site for site in scraper_input.site_type
        }

        for future in as_completed(future_to_site):
        for future in concurrent.futures.as_completed(future_to_site):
            site_value, scraped_data = future.result()
            site_to_jobs_dict[site_value] = scraped_data

    def convert_to_annual(job_data: dict):
        if job_data["interval"] == "hourly":
            job_data["min_amount"] *= 2080
            job_data["max_amount"] *= 2080
        if job_data["interval"] == "monthly":
            job_data["min_amount"] *= 12
            job_data["max_amount"] *= 12
        if job_data["interval"] == "weekly":
            job_data["min_amount"] *= 52
            job_data["max_amount"] *= 52
        if job_data["interval"] == "daily":
            job_data["min_amount"] *= 260
            job_data["max_amount"] *= 260
        job_data["interval"] = "yearly"

    jobs_dfs: list[pd.DataFrame] = []

    for site, job_response in site_to_jobs_dict.items():
        for job in job_response.jobs:
            job_data = job.dict()
            job_url = job_data["job_url"]
            job_data["job_url_hyper"] = f'<a href="{job_url}">{job_url}</a>'
            job_data[
                "job_url_hyper"
            ] = f'<a href="{job_data["job_url"]}">{job_data["job_url"]}</a>'
            job_data["site"] = site
            job_data["company"] = job_data["company_name"]
            job_data["job_type"] = (

@@ -157,10 +127,7 @@ def scrape_jobs(
            job_data["emails"] = (
                ", ".join(job_data["emails"]) if job_data["emails"] else None
            )
            if job_data["location"]:
                job_data["location"] = Location(
                    **job_data["location"]
                ).display_location()
            job_data["location"] = Location(**job_data["location"]).display_location()

            compensation_obj = job_data.get("compensation")
            if compensation_obj and isinstance(compensation_obj, dict):

@@ -172,86 +139,37 @@ def scrape_jobs(
                job_data["min_amount"] = compensation_obj.get("min_amount")
                job_data["max_amount"] = compensation_obj.get("max_amount")
                job_data["currency"] = compensation_obj.get("currency", "USD")
                job_data["salary_source"] = SalarySource.DIRECT_DATA.value
                if enforce_annual_salary and (
                    job_data["interval"]
                    and job_data["interval"] != "yearly"
                    and job_data["min_amount"]
                    and job_data["max_amount"]
                ):
                    convert_to_annual(job_data)

            else:
                if country_enum == Country.USA:
                    (
                        job_data["interval"],
                        job_data["min_amount"],
                        job_data["max_amount"],
                        job_data["currency"],
                    ) = extract_salary(
                        job_data["description"],
                        enforce_annual_salary=enforce_annual_salary,
                    )
                    job_data["salary_source"] = SalarySource.DESCRIPTION.value
                job_data["interval"] = None
                job_data["min_amount"] = None
                job_data["max_amount"] = None
                job_data["currency"] = None

            job_data["salary_source"] = (
                job_data["salary_source"]
                if "min_amount" in job_data and job_data["min_amount"]
                else None
            )
            job_df = pd.DataFrame([job_data])
            jobs_dfs.append(job_df)

    if jobs_dfs:
        # Step 1: Filter out all-NA columns from each DataFrame before concatenation
        filtered_dfs = [df.dropna(axis=1, how="all") for df in jobs_dfs]

        # Step 2: Concatenate the filtered DataFrames
        jobs_df = pd.concat(filtered_dfs, ignore_index=True)

        # Desired column order
        desired_order = [
            "id",
            "site",
        jobs_df = pd.concat(jobs_dfs, ignore_index=True)
        desired_order: list[str] = [
            "job_url_hyper" if hyperlinks else "job_url",
            "job_url_direct",
            "site",
            "title",
            "company",
            "location",
            "date_posted",
            "job_type",
            "salary_source",
            "date_posted",
            "interval",
            "min_amount",
            "max_amount",
            "currency",
            "is_remote",
            "job_level",
            "job_function",
            "listing_type",
            "num_urgent_words",
            "benefits",
            "emails",
            "description",
            "company_industry",
            "company_url",
            "company_logo",
            "company_url_direct",
            "company_addresses",
            "company_num_employees",
            "company_revenue",
            "company_description",
        ]

        # Step 3: Ensure all desired columns are present, adding missing ones as empty
        for column in desired_order:
            if column not in jobs_df.columns:
                jobs_df[column] = None  # Add missing columns as empty

        # Reorder the DataFrame according to the desired order
        jobs_df = jobs_df[desired_order]

        # Step 4: Sort the DataFrame as required
        return jobs_df.sort_values(
            by=["site", "date_posted"], ascending=[True, False]
        ).reset_index(drop=True)
        jobs_formatted_df = jobs_df[desired_order]
    else:
        return pd.DataFrame()
        jobs_formatted_df = pd.DataFrame()

    return jobs_formatted_df

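For reference, the multipliers in `convert_to_annual` above assume a standard work year (40 h x 52 wk = 2080 h; 5 d x 52 wk = 260 d); a quick sanity check mirroring the hunk's logic:

```python
# hourly -> yearly: $50/h becomes 50 * 2080 = $104,000/yr
job = {"interval": "hourly", "min_amount": 50, "max_amount": 70}
job["min_amount"] *= 2080  # 104_000
job["max_amount"] *= 2080  # 145_600
job["interval"] = "yearly"
print(job)
```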
@@ -1,9 +1,8 @@
from __future__ import annotations

from typing import Optional
from typing import Union, Optional
from datetime import date
from enum import Enum
from pydantic import BaseModel

from pydantic import BaseModel, validator


class JobType(Enum):

@@ -57,47 +56,40 @@ class JobType(Enum):


class Country(Enum):
    """
    Gets the subdomain for Indeed and Glassdoor.
    The second item in the tuple is the subdomain (and API country code if there's a ':' separator) for Indeed
    The third item in the tuple is the subdomain (and tld if there's a ':' separator) for Glassdoor
    """

    ARGENTINA = ("argentina", "ar", "com.ar")
    AUSTRALIA = ("australia", "au", "com.au")
    AUSTRIA = ("austria", "at", "at")
    ARGENTINA = ("argentina", "ar")
    AUSTRALIA = ("australia", "au")
    AUSTRIA = ("austria", "at")
    BAHRAIN = ("bahrain", "bh")
    BELGIUM = ("belgium", "be", "fr:be")
    BRAZIL = ("brazil", "br", "com.br")
    CANADA = ("canada", "ca", "ca")
    BELGIUM = ("belgium", "be")
    BRAZIL = ("brazil", "br")
    CANADA = ("canada", "ca")
    CHILE = ("chile", "cl")
    CHINA = ("china", "cn")
    COLOMBIA = ("colombia", "co")
    COSTARICA = ("costa rica", "cr")
    CZECHREPUBLIC = ("czech republic,czechia", "cz")
    CZECHREPUBLIC = ("czech republic", "cz")
    DENMARK = ("denmark", "dk")
    ECUADOR = ("ecuador", "ec")
    EGYPT = ("egypt", "eg")
    FINLAND = ("finland", "fi")
    FRANCE = ("france", "fr", "fr")
    GERMANY = ("germany", "de", "de")
    FRANCE = ("france", "fr")
    GERMANY = ("germany", "de")
    GREECE = ("greece", "gr")
    HONGKONG = ("hong kong", "hk", "com.hk")
    HONGKONG = ("hong kong", "hk")
    HUNGARY = ("hungary", "hu")
    INDIA = ("india", "in", "co.in")
    INDIA = ("india", "in")
    INDONESIA = ("indonesia", "id")
    IRELAND = ("ireland", "ie", "ie")
    IRELAND = ("ireland", "ie")
    ISRAEL = ("israel", "il")
    ITALY = ("italy", "it", "it")
    ITALY = ("italy", "it")
    JAPAN = ("japan", "jp")
    KUWAIT = ("kuwait", "kw")
    LUXEMBOURG = ("luxembourg", "lu")
    MALAYSIA = ("malaysia", "malaysia:my", "com")
    MALTA = ("malta", "malta:mt", "mt")
    MEXICO = ("mexico", "mx", "com.mx")
    MALAYSIA = ("malaysia", "malaysia")
    MEXICO = ("mexico", "mx")
    MOROCCO = ("morocco", "ma")
    NETHERLANDS = ("netherlands", "nl", "nl")
    NEWZEALAND = ("new zealand", "nz", "co.nz")
    NETHERLANDS = ("netherlands", "nl")
    NEWZEALAND = ("new zealand", "nz")
    NIGERIA = ("nigeria", "ng")
    NORWAY = ("norway", "no")
    OMAN = ("oman", "om")

@@ -110,66 +102,54 @@ class Country(Enum):
    QATAR = ("qatar", "qa")
    ROMANIA = ("romania", "ro")
    SAUDIARABIA = ("saudi arabia", "sa")
    SINGAPORE = ("singapore", "sg", "sg")
    SINGAPORE = ("singapore", "sg")
    SOUTHAFRICA = ("south africa", "za")
    SOUTHKOREA = ("south korea", "kr")
    SPAIN = ("spain", "es", "es")
    SPAIN = ("spain", "es")
    SWEDEN = ("sweden", "se")
    SWITZERLAND = ("switzerland", "ch", "de:ch")
    SWITZERLAND = ("switzerland", "ch")
    TAIWAN = ("taiwan", "tw")
    THAILAND = ("thailand", "th")
    TURKEY = ("türkiye,turkey", "tr")
    TURKEY = ("turkey", "tr")
    UKRAINE = ("ukraine", "ua")
    UNITEDARABEMIRATES = ("united arab emirates", "ae")
    UK = ("uk,united kingdom", "uk:gb", "co.uk")
    USA = ("usa,us,united states", "www:us", "com")
    UK = ("uk", "uk")
    USA = ("usa", "www")
    URUGUAY = ("uruguay", "uy")
    VENEZUELA = ("venezuela", "ve")
    VIETNAM = ("vietnam", "vn", "com")
    VIETNAM = ("vietnam", "vn")

    # internal for ziprecruiter
    US_CANADA = ("usa/ca", "www")

    # internal for linkedin
    # internal for linkeind
    WORLDWIDE = ("worldwide", "www")

    @property
    def indeed_domain_value(self):
        subdomain, _, api_country_code = self.value[1].partition(":")
        if subdomain and api_country_code:
            return subdomain, api_country_code.upper()
        return self.value[1], self.value[1].upper()
    def __new__(cls, country, domain):
        obj = object.__new__(cls)
        obj._value_ = country
        obj.domain = domain
        return obj

    @property
    def glassdoor_domain_value(self):
        if len(self.value) == 3:
            subdomain, _, domain = self.value[2].partition(":")
            if subdomain and domain:
                return f"{subdomain}.glassdoor.{domain}"
            else:
                return f"www.glassdoor.{self.value[2]}"
        else:
            raise Exception(f"Glassdoor is not available for {self.name}")

    def get_glassdoor_url(self):
        return f"https://{self.glassdoor_domain_value}/"
    def domain_value(self):
        return self.domain

    @classmethod
    def from_string(cls, country_str: str):
        """Convert a string to the corresponding Country enum."""
        country_str = country_str.strip().lower()
        for country in cls:
            country_names = country.value[0].split(",")
            if country_str in country_names:
            if country.value == country_str:
                return country
        valid_countries = [country.value for country in cls]
        raise ValueError(
            f"Invalid country string: '{country_str}'. Valid countries are: {', '.join([country[0] for country in valid_countries])}"
            f"Invalid country string: '{country_str}'. Valid countries (only include this param for Indeed) are: {', '.join(valid_countries)}"
        )


class Location(BaseModel):
    country: Country | str | None = None
    country: Country = None
    city: Optional[str] = None
    state: Optional[str] = None

@@ -179,19 +159,11 @@ class Location(BaseModel):
            location_parts.append(self.city)
        if self.state:
            location_parts.append(self.state)
        if isinstance(self.country, str):
            location_parts.append(self.country)
        elif self.country and self.country not in (
            Country.US_CANADA,
            Country.WORLDWIDE,
        ):
            country_name = self.country.value[0]
            if "," in country_name:
                country_name = country_name.split(",")[0]
            if country_name in ("usa", "uk"):
                location_parts.append(country_name.upper())
        if self.country and self.country not in (Country.US_CANADA, Country.WORLDWIDE):
            if self.country.value in ("usa", "uk"):
                location_parts.append(self.country.value.upper())
            else:
                location_parts.append(country_name.title())
                location_parts.append(self.country.value.title())
        return ", ".join(location_parts)

@@ -202,65 +174,29 @@ class CompensationInterval(Enum):
    DAILY = "daily"
    HOURLY = "hourly"

    @classmethod
    def get_interval(cls, pay_period):
        interval_mapping = {
            "YEAR": cls.YEARLY,
            "HOUR": cls.HOURLY,
        }
        if pay_period in interval_mapping:
            return interval_mapping[pay_period].value
        else:
            return cls[pay_period].value if pay_period in cls.__members__ else None


class Compensation(BaseModel):
    interval: Optional[CompensationInterval] = None
    min_amount: float | None = None
    max_amount: float | None = None
    min_amount: int | None = None
    max_amount: int | None = None
    currency: Optional[str] = "USD"


class DescriptionFormat(Enum):
    MARKDOWN = "markdown"
    HTML = "html"


class JobPost(BaseModel):
    id: str | None = None
    title: str
    company_name: str | None
    company_name: str
    job_url: str
    job_url_direct: str | None = None
    location: Optional[Location]

    description: str | None = None
    company_url: str | None = None
    company_url_direct: str | None = None

    job_type: list[JobType] | None = None
    compensation: Compensation | None = None
    date_posted: date | None = None
    benefits: str | None = None
    emails: list[str] | None = None
    num_urgent_words: int | None = None
    is_remote: bool | None = None
    listing_type: str | None = None

    # linkedin specific
    job_level: str | None = None

    # linkedin and indeed specific
    company_industry: str | None = None

    # indeed specific
    company_addresses: str | None = None
    company_num_employees: str | None = None
    company_revenue: str | None = None
    company_description: str | None = None
    company_logo: str | None = None
    banner_photo_url: str | None = None

    # linkedin only atm
    job_function: str | None = None
    # company_industry: str | None = None


class JobResponse(BaseModel):

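To illustrate the ':'-separated tuple encoding described in the `Country` docstring above, a small sketch against the newer side of this hunk:

```python
# newer side: UK = ("uk,united kingdom", "uk:gb", "co.uk")
country = Country.from_string("united kingdom")
print(country.indeed_domain_value)  # ('uk', 'GB'): Indeed subdomain + API country code
print(country.get_glassdoor_url())  # https://www.glassdoor.co.uk/
```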
@@ -1,57 +1,32 @@
from __future__ import annotations

from abc import ABC, abstractmethod

from ..jobs import (
    Enum,
    BaseModel,
    JobType,
    JobResponse,
    Country,
    DescriptionFormat,
)
from ..jobs import Enum, BaseModel, JobType, JobResponse, Country
from typing import List, Optional, Any


class Site(Enum):
    LINKEDIN = "linkedin"
    INDEED = "indeed"
    ZIP_RECRUITER = "zip_recruiter"
    GLASSDOOR = "glassdoor"
    GOOGLE = "google"


class SalarySource(Enum):
    DIRECT_DATA = "direct_data"
    DESCRIPTION = "description"


class ScraperInput(BaseModel):
    site_type: list[Site]
    search_term: str | None = None
    google_search_term: str | None = None
    site_type: List[Site]
    search_term: str

    location: str | None = None
    country: Country | None = Country.USA
    distance: int | None = None
    location: str = None
    country: Optional[Country] = Country.USA
    distance: Optional[int] = None
    is_remote: bool = False
    job_type: JobType | None = None
    easy_apply: bool | None = None
    job_type: Optional[JobType] = None
    easy_apply: bool = None  # linkedin
    offset: int = 0
    linkedin_fetch_description: bool = False
    linkedin_company_ids: list[int] | None = None
    description_format: DescriptionFormat | None = DescriptionFormat.MARKDOWN

    results_wanted: int = 15
    hours_old: int | None = None


class Scraper(ABC):
    def __init__(
        self, site: Site, proxies: list[str] | None = None, ca_cert: str | None = None
    ):
class Scraper:
    def __init__(self, site: Site, proxy: Optional[List[str]] = None):
        self.site = site
        self.proxies = proxies
        self.ca_cert = ca_cert
        self.proxy = (lambda p: {"http": p, "https": p} if p else None)(proxy)

    @abstractmethod
    def scrape(self, scraper_input: ScraperInput) -> JobResponse: ...
    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
        ...

@@ -19,13 +19,3 @@ class IndeedException(Exception):
class ZipRecruiterException(Exception):
    def __init__(self, message=None):
        super().__init__(message or "An error occurred with ZipRecruiter")


class GlassdoorException(Exception):
    def __init__(self, message=None):
        super().__init__(message or "An error occurred with Glassdoor")


class GoogleJobsException(Exception):
    def __init__(self, message=None):
        super().__init__(message or "An error occurred with Google Jobs")

@@ -1,364 +0,0 @@
|
||||
"""
|
||||
jobspy.scrapers.glassdoor
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This module contains routines to scrape Glassdoor.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import re
|
||||
import json
|
||||
import requests
|
||||
from typing import Optional, Tuple
|
||||
from datetime import datetime, timedelta
|
||||
from concurrent.futures import ThreadPoolExecutor, as_completed
|
||||
|
||||
from .constants import fallback_token, query_template, headers
|
||||
from .. import Scraper, ScraperInput, Site
|
||||
from ..utils import extract_emails_from_text, create_logger
|
||||
from ..exceptions import GlassdoorException
|
||||
from ..utils import (
|
||||
create_session,
|
||||
markdown_converter,
|
||||
)
|
||||
from ...jobs import (
|
||||
JobPost,
|
||||
Compensation,
|
||||
CompensationInterval,
|
||||
Location,
|
||||
JobResponse,
|
||||
JobType,
|
||||
DescriptionFormat,
|
||||
)
|
||||
|
||||
logger = create_logger("Glassdoor")
|
||||
|
||||
|
||||
class GlassdoorScraper(Scraper):
|
||||
def __init__(
|
||||
self, proxies: list[str] | str | None = None, ca_cert: str | None = None
|
||||
):
|
||||
"""
|
||||
Initializes GlassdoorScraper with the Glassdoor job search url
|
||||
"""
|
||||
site = Site(Site.GLASSDOOR)
|
||||
super().__init__(site, proxies=proxies, ca_cert=ca_cert)
|
||||
|
||||
self.base_url = None
|
||||
self.country = None
|
||||
self.session = None
|
||||
self.scraper_input = None
|
||||
self.jobs_per_page = 30
|
||||
self.max_pages = 30
|
||||
self.seen_urls = set()
|
||||
|
||||
def scrape(self, scraper_input: ScraperInput) -> JobResponse:
|
||||
"""
|
||||
Scrapes Glassdoor for jobs with scraper_input criteria.
|
||||
:param scraper_input: Information about job search criteria.
|
||||
:return: JobResponse containing a list of jobs.
|
||||
"""
|
||||
self.scraper_input = scraper_input
|
||||
self.scraper_input.results_wanted = min(900, scraper_input.results_wanted)
|
||||
self.base_url = self.scraper_input.country.get_glassdoor_url()
|
||||
|
||||
self.session = create_session(
|
||||
proxies=self.proxies, ca_cert=self.ca_cert, is_tls=True, has_retry=True
|
||||
)
|
||||
token = self._get_csrf_token()
|
||||
headers["gd-csrf-token"] = token if token else fallback_token
|
||||
self.session.headers.update(headers)
|
||||
|
||||
location_id, location_type = self._get_location(
|
||||
scraper_input.location, scraper_input.is_remote
|
||||
)
|
||||
if location_type is None:
|
||||
logger.error("Glassdoor: location not parsed")
|
||||
return JobResponse(jobs=[])
|
||||
job_list: list[JobPost] = []
|
||||
cursor = None
|
||||
|
||||
range_start = 1 + (scraper_input.offset // self.jobs_per_page)
|
||||
tot_pages = (scraper_input.results_wanted // self.jobs_per_page) + 2
|
||||
range_end = min(tot_pages, self.max_pages + 1)
|
||||
for page in range(range_start, range_end):
|
||||
logger.info(f"search page: {page} / {range_end-1}")
|
||||
try:
|
||||
jobs, cursor = self._fetch_jobs_page(
|
||||
scraper_input, location_id, location_type, page, cursor
|
||||
)
|
||||
job_list.extend(jobs)
|
||||
if not jobs or len(job_list) >= scraper_input.results_wanted:
|
||||
job_list = job_list[: scraper_input.results_wanted]
|
||||
break
|
||||
except Exception as e:
|
||||
logger.error(f"Glassdoor: {str(e)}")
|
||||
break
|
||||
return JobResponse(jobs=job_list)
|
||||
|
||||
def _fetch_jobs_page(
|
||||
self,
|
||||
scraper_input: ScraperInput,
|
||||
location_id: int,
|
||||
location_type: str,
|
||||
page_num: int,
|
||||
cursor: str | None,
|
||||
) -> Tuple[list[JobPost], str | None]:
|
||||
"""
|
||||
Scrapes a page of Glassdoor for jobs with scraper_input criteria
|
||||
"""
|
||||
jobs = []
|
||||
self.scraper_input = scraper_input
|
||||
try:
|
||||
payload = self._add_payload(location_id, location_type, page_num, cursor)
|
||||
response = self.session.post(
|
||||
f"{self.base_url}/graph",
|
||||
timeout_seconds=15,
|
||||
data=payload,
|
||||
)
|
||||
if response.status_code != 200:
|
||||
exc_msg = f"bad response status code: {response.status_code}"
|
||||
raise GlassdoorException(exc_msg)
|
||||
res_json = response.json()[0]
|
||||
if "errors" in res_json:
|
||||
raise ValueError("Error encountered in API response")
|
||||
except (
|
||||
requests.exceptions.ReadTimeout,
|
||||
GlassdoorException,
|
||||
ValueError,
|
||||
Exception,
|
||||
) as e:
|
||||
logger.error(f"Glassdoor: {str(e)}")
|
||||
return jobs, None
|
||||
|
||||
jobs_data = res_json["data"]["jobListings"]["jobListings"]
|
||||
|
||||
with ThreadPoolExecutor(max_workers=self.jobs_per_page) as executor:
|
||||
future_to_job_data = {
|
||||
executor.submit(self._process_job, job): job for job in jobs_data
|
||||
}
|
||||
for future in as_completed(future_to_job_data):
|
||||
try:
|
||||
job_post = future.result()
|
||||
if job_post:
|
||||
jobs.append(job_post)
|
||||
except Exception as exc:
|
||||
raise GlassdoorException(f"Glassdoor generated an exception: {exc}")
|
||||
|
||||
return jobs, self.get_cursor_for_page(
|
||||
res_json["data"]["jobListings"]["paginationCursors"], page_num + 1
|
||||
)
|
||||
|
||||
def _get_csrf_token(self):
|
||||
"""
|
||||
Fetches csrf token needed for API by visiting a generic page
|
||||
"""
|
||||
res = self.session.get(f"{self.base_url}/Job/computer-science-jobs.htm")
|
||||
pattern = r'"token":\s*"([^"]+)"'
|
||||
matches = re.findall(pattern, res.text)
|
||||
token = None
|
||||
if matches:
|
||||
token = matches[0]
|
||||
return token
|
||||
|
||||
def _process_job(self, job_data):
|
||||
"""
|
||||
Processes a single job and fetches its description.
|
||||
"""
|
||||
job_id = job_data["jobview"]["job"]["listingId"]
|
||||
job_url = f"{self.base_url}job-listing/j?jl={job_id}"
|
||||
if job_url in self.seen_urls:
|
||||
return None
|
||||
self.seen_urls.add(job_url)
|
||||
job = job_data["jobview"]
|
||||
title = job["job"]["jobTitleText"]
|
||||
company_name = job["header"]["employerNameFromSearch"]
|
||||
company_id = job_data["jobview"]["header"]["employer"]["id"]
|
||||
location_name = job["header"].get("locationName", "")
|
||||
location_type = job["header"].get("locationType", "")
|
||||
age_in_days = job["header"].get("ageInDays")
|
||||
is_remote, location = False, None
|
||||
date_diff = (datetime.now() - timedelta(days=age_in_days)).date()
|
||||
date_posted = date_diff if age_in_days is not None else None
|
||||
|
||||
if location_type == "S":
|
||||
is_remote = True
|
||||
else:
|
||||
location = self.parse_location(location_name)
|
||||
|
||||
compensation = self.parse_compensation(job["header"])
|
||||
try:
|
||||
description = self._fetch_job_description(job_id)
|
||||
except:
|
||||
description = None
|
||||
company_url = f"{self.base_url}Overview/W-EI_IE{company_id}.htm"
|
||||
company_logo = (
|
||||
job_data["jobview"].get("overview", {}).get("squareLogoUrl", None)
|
||||
)
|
||||
listing_type = (
|
||||
job_data["jobview"]
|
||||
.get("header", {})
|
||||
.get("adOrderSponsorshipLevel", "")
|
||||
.lower()
|
||||
)
|
||||
return JobPost(
|
||||
id=f"gd-{job_id}",
|
||||
title=title,
|
||||
company_url=company_url if company_id else None,
|
||||
company_name=company_name,
|
||||
date_posted=date_posted,
|
||||
job_url=job_url,
|
||||
location=location,
|
||||
compensation=compensation,
|
||||
is_remote=is_remote,
|
||||
description=description,
|
||||
emails=extract_emails_from_text(description) if description else None,
|
||||
company_logo=company_logo,
|
||||
listing_type=listing_type,
|
||||
)
|
||||
|
||||
def _fetch_job_description(self, job_id):
|
||||
"""
|
||||
Fetches the job description for a single job ID.
|
||||
"""
|
||||
url = f"{self.base_url}/graph"
|
||||
body = [
|
||||
{
|
||||
"operationName": "JobDetailQuery",
|
||||
"variables": {
|
||||
"jl": job_id,
|
||||
"queryString": "q",
|
||||
"pageTypeEnum": "SERP",
|
||||
},
|
||||
"query": """
|
||||
query JobDetailQuery($jl: Long!, $queryString: String, $pageTypeEnum: PageTypeEnum) {
|
||||
jobview: jobView(
|
||||
listingId: $jl
|
||||
contextHolder: {queryString: $queryString, pageTypeEnum: $pageTypeEnum}
|
||||
) {
|
||||
job {
|
||||
description
|
||||
__typename
|
||||
}
|
||||
__typename
|
||||
}
|
||||
}
|
||||
""",
|
||||
}
|
||||
]
|
||||
res = requests.post(url, json=body, headers=headers)
|
||||
if res.status_code != 200:
|
||||
return None
|
||||
data = res.json()[0]
|
||||
desc = data["data"]["jobview"]["job"]["description"]
|
||||
if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
|
||||
desc = markdown_converter(desc)
|
||||
return desc
|
||||
|
||||
    def _get_location(self, location: str, is_remote: bool) -> (int, str):
        if not location or is_remote:
            return "11047", "STATE"  # remote options
        url = f"{self.base_url}/findPopularLocationAjax.htm?maxLocationsToReturn=10&term={location}"
        res = self.session.get(url)
        if res.status_code != 200:
            if res.status_code == 429:
                err = "429 Response - Blocked by Glassdoor for too many requests"
                logger.error(err)
                return None, None
            else:
                err = f"Glassdoor response status code {res.status_code}"
                err += f" - {res.text}"
                logger.error(err)
                return None, None
        items = res.json()

        if not items:
            raise ValueError(f"Location '{location}' not found on Glassdoor")
        location_type = items[0]["locationType"]
        if location_type == "C":
            location_type = "CITY"
        elif location_type == "S":
            location_type = "STATE"
        elif location_type == "N":
            location_type = "COUNTRY"
        return int(items[0]["locationId"]), location_type

    def _add_payload(
        self,
        location_id: int,
        location_type: str,
        page_num: int,
        cursor: str | None = None,
    ) -> str:
        fromage = None
        if self.scraper_input.hours_old:
            fromage = max(self.scraper_input.hours_old // 24, 1)
        filter_params = []
        if self.scraper_input.easy_apply:
            filter_params.append({"filterKey": "applicationType", "values": "1"})
        if fromage:
            filter_params.append({"filterKey": "fromAge", "values": str(fromage)})
        payload = {
            "operationName": "JobSearchResultsQuery",
            "variables": {
                "excludeJobListingIds": [],
                "filterParams": filter_params,
                "keyword": self.scraper_input.search_term,
                "numJobsToShow": 30,
                "locationType": location_type,
                "locationId": int(location_id),
                "parameterUrlInput": f"IL.0,12_I{location_type}{location_id}",
                "pageNumber": page_num,
                "pageCursor": cursor,
                "fromage": fromage,
                "sort": "date",
            },
            "query": query_template,
        }
        if self.scraper_input.job_type:
            payload["variables"]["filterParams"].append(
                {"filterKey": "jobType", "values": self.scraper_input.job_type.value[0]}
            )
        return json.dumps([payload])

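A quick illustration of the `fromage` math above (inputs assumed for the example): `hours_old` is floored to whole days with a minimum of one day before it becomes a filter value.

# Sketch only: assumed inputs, mirroring the fromage logic in _add_payload.
hours_old = 72
fromage = max(hours_old // 24, 1)  # 72h -> 3 days
filter_params = [
    {"filterKey": "applicationType", "values": "1"},   # easy_apply
    {"filterKey": "fromAge", "values": str(fromage)},  # "3"
]
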
    @staticmethod
    def parse_compensation(data: dict) -> Optional[Compensation]:
        pay_period = data.get("payPeriod")
        adjusted_pay = data.get("payPeriodAdjustedPay")
        currency = data.get("payCurrency", "USD")
        if not pay_period or not adjusted_pay:
            return None

        interval = None
        if pay_period == "ANNUAL":
            interval = CompensationInterval.YEARLY
        elif pay_period:
            interval = CompensationInterval.get_interval(pay_period)
        min_amount = int(adjusted_pay.get("p10") // 1)
        max_amount = int(adjusted_pay.get("p90") // 1)
        return Compensation(
            interval=interval,
            min_amount=min_amount,
            max_amount=max_amount,
            currency=currency,
        )

    @staticmethod
    def get_job_type_enum(job_type_str: str) -> list[JobType] | None:
        for job_type in JobType:
            if job_type_str in job_type.value:
                return [job_type]

    @staticmethod
    def parse_location(location_name: str) -> Location | None:
        if not location_name or location_name == "Remote":
            return
        city, _, state = location_name.partition(", ")
        return Location(city=city, state=state)

    @staticmethod
    def get_cursor_for_page(pagination_cursors, page_num):
        for cursor_data in pagination_cursors:
            if cursor_data["pageNumber"] == page_num:
                return cursor_data["cursor"]
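A worked example of parse_compensation (field values are assumptions; p10/p90 are the pay-range percentiles Glassdoor reports):

# Illustrative input shaped like a Glassdoor job header (values assumed):
header = {
    "payPeriod": "ANNUAL",
    "payCurrency": "USD",
    "payPeriodAdjustedPay": {"p10": 90000.0, "p90": 140000.0},
}
# parse_compensation(header) -> Compensation(interval=YEARLY,
#     min_amount=90000, max_amount=140000, currency="USD")
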
@@ -1,184 +0,0 @@
headers = {
    "authority": "www.glassdoor.com",
    "accept": "*/*",
    "accept-language": "en-US,en;q=0.9",
    "apollographql-client-name": "job-search-next",
    "apollographql-client-version": "4.65.5",
    "content-type": "application/json",
    "origin": "https://www.glassdoor.com",
    "referer": "https://www.glassdoor.com/",
    "sec-ch-ua": '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-platform": '"macOS"',
    "sec-fetch-dest": "empty",
    "sec-fetch-mode": "cors",
    "sec-fetch-site": "same-origin",
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36",
}
query_template = """
query JobSearchResultsQuery(
    $excludeJobListingIds: [Long!],
    $keyword: String,
    $locationId: Int,
    $locationType: LocationTypeEnum,
    $numJobsToShow: Int!,
    $pageCursor: String,
    $pageNumber: Int,
    $filterParams: [FilterParams],
    $originalPageUrl: String,
    $seoFriendlyUrlInput: String,
    $parameterUrlInput: String,
    $seoUrl: Boolean
) {
    jobListings(
        contextHolder: {
            searchParams: {
                excludeJobListingIds: $excludeJobListingIds,
                keyword: $keyword,
                locationId: $locationId,
                locationType: $locationType,
                numPerPage: $numJobsToShow,
                pageCursor: $pageCursor,
                pageNumber: $pageNumber,
                filterParams: $filterParams,
                originalPageUrl: $originalPageUrl,
                seoFriendlyUrlInput: $seoFriendlyUrlInput,
                parameterUrlInput: $parameterUrlInput,
                seoUrl: $seoUrl,
                searchType: SR
            }
        }
    ) {
        companyFilterOptions {
            id
            shortName
            __typename
        }
        filterOptions
        indeedCtk
        jobListings {
            ...JobView
            __typename
        }
        jobListingSeoLinks {
            linkItems {
                position
                url
                __typename
            }
            __typename
        }
        jobSearchTrackingKey
        jobsPageSeoData {
            pageMetaDescription
            pageTitle
            __typename
        }
        paginationCursors {
            cursor
            pageNumber
            __typename
        }
        indexablePageForSeo
        searchResultsMetadata {
            searchCriteria {
                implicitLocation {
                    id
                    localizedDisplayName
                    type
                    __typename
                }
                keyword
                location {
                    id
                    shortName
                    localizedShortName
                    localizedDisplayName
                    type
                    __typename
                }
                __typename
            }
            helpCenterDomain
            helpCenterLocale
            jobSerpJobOutlook {
                occupation
                paragraph
                __typename
            }
            showMachineReadableJobs
            __typename
        }
        totalJobsCount
        __typename
    }
}

fragment JobView on JobListingSearchResult {
    jobview {
        header {
            adOrderId
            advertiserType
            adOrderSponsorshipLevel
            ageInDays
            divisionEmployerName
            easyApply
            employer {
                id
                name
                shortName
                __typename
            }
            employerNameFromSearch
            goc
            gocConfidence
            gocId
            jobCountryId
            jobLink
            jobResultTrackingKey
            jobTitleText
            locationName
            locationType
            locId
            needsCommission
            payCurrency
            payPeriod
            payPeriodAdjustedPay {
                p10
                p50
                p90
                __typename
            }
            rating
            salarySource
            savedJobId
            sponsored
            __typename
        }
        job {
            description
            importConfigId
            jobTitleId
            jobTitleText
            listingId
            __typename
        }
        jobListingAdminDetails {
            cpcVal
            importConfigId
            jobListingId
            jobSourceId
            userEligibleForAdminJobDetails
            __typename
        }
        overview {
            shortName
            squareLogoUrl
            __typename
        }
        __typename
    }
    __typename
}
"""
fallback_token = "Ft6oHEWlRZrxDww95Cpazw:0pGUrkb2y3TyOpAIqF2vbPmUXoXVkD3oEGDVkvfeCerceQ5-n8mBg3BovySUIjmCPHCaW0H2nQVdqzbtsYqf4Q:wcqRqeegRUa9MVLJGyujVXB7vWFPjdaS1CtrrzJq-ok"
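A minimal sketch (not part of the diff) of how these constants are consumed: the serialized payload from _add_payload is POSTed to Glassdoor's /graph endpoint with these headers plus a CSRF token scraped at runtime, falling back to fallback_token. The "gd-csrf-token" header name is an assumption based on the fallback constant.

import json
import requests

payload = {"operationName": "JobSearchResultsQuery", "variables": {}, "query": query_template}
res = requests.post(
    "https://www.glassdoor.com/graph",
    headers={**headers, "gd-csrf-token": fallback_token},  # header name assumed
    data=json.dumps([payload]),
)
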
@@ -1,250 +0,0 @@
"""
|
||||
jobspy.scrapers.google
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This module contains routines to scrape Google.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import math
|
||||
import re
|
||||
import json
|
||||
from typing import Tuple
|
||||
from datetime import datetime, timedelta
|
||||
|
||||
from .constants import headers_jobs, headers_initial, async_param
|
||||
from .. import Scraper, ScraperInput, Site
|
||||
from ..utils import extract_emails_from_text, create_logger, extract_job_type
|
||||
from ..utils import (
|
||||
create_session,
|
||||
)
|
||||
from ...jobs import (
|
||||
JobPost,
|
||||
JobResponse,
|
||||
Location,
|
||||
JobType,
|
||||
)
|
||||
|
||||
logger = create_logger("Google")
|
||||
|
||||
|
||||
class GoogleJobsScraper(Scraper):
    def __init__(
        self, proxies: list[str] | str | None = None, ca_cert: str | None = None
    ):
        """
        Initializes Google Scraper with the Google jobs search url
        """
        site = Site(Site.GOOGLE)
        super().__init__(site, proxies=proxies, ca_cert=ca_cert)

        self.country = None
        self.session = None
        self.scraper_input = None
        self.jobs_per_page = 10
        self.seen_urls = set()
        self.url = "https://www.google.com/search"
        self.jobs_url = "https://www.google.com/async/callback:550"

    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
        """
        Scrapes Google for jobs with scraper_input criteria.
        :param scraper_input: Information about job search criteria.
        :return: JobResponse containing a list of jobs.
        """
        self.scraper_input = scraper_input
        self.scraper_input.results_wanted = min(900, scraper_input.results_wanted)

        self.session = create_session(
            proxies=self.proxies, ca_cert=self.ca_cert, is_tls=False, has_retry=True
        )
        forward_cursor, job_list = self._get_initial_cursor_and_jobs()
        if forward_cursor is None:
            logger.warning(
                "initial cursor not found; try changing your query, or the search returned at most 10 results"
            )
            return JobResponse(jobs=job_list)

        page = 1

        while (
            len(self.seen_urls) < scraper_input.results_wanted + scraper_input.offset
            and forward_cursor
        ):
            logger.info(
                f"search page: {page} / {math.ceil(scraper_input.results_wanted / self.jobs_per_page)}"
            )
            try:
                jobs, forward_cursor = self._get_jobs_next_page(forward_cursor)
            except Exception as e:
                logger.error(f"failed to get jobs on page: {page}, {e}")
                break
            if not jobs:
                logger.info(f"found no jobs on page: {page}")
                break
            job_list += jobs
            page += 1
        return JobResponse(
            jobs=job_list[
                scraper_input.offset : scraper_input.offset
                + scraper_input.results_wanted
            ]
        )

    def _get_initial_cursor_and_jobs(self) -> Tuple[str, list[JobPost]]:
        """Gets initial cursor and jobs to paginate through job listings"""
        query = f"{self.scraper_input.search_term} jobs"

        def get_time_range(hours_old):
            if hours_old <= 24:
                return "since yesterday"
            elif hours_old <= 72:
                return "in the last 3 days"
            elif hours_old <= 168:
                return "in the last week"
            else:
                return "in the last month"

        job_type_mapping = {
            JobType.FULL_TIME: "Full time",
            JobType.PART_TIME: "Part time",
            JobType.INTERNSHIP: "Internship",
            JobType.CONTRACT: "Contract",
        }

        if self.scraper_input.job_type in job_type_mapping:
            query += f" {job_type_mapping[self.scraper_input.job_type]}"

        if self.scraper_input.location:
            query += f" near {self.scraper_input.location}"

        if self.scraper_input.hours_old:
            time_filter = get_time_range(self.scraper_input.hours_old)
            query += f" {time_filter}"

        if self.scraper_input.is_remote:
            query += " remote"

        if self.scraper_input.google_search_term:
            query = self.scraper_input.google_search_term

        params = {"q": query, "udm": "8"}
        response = self.session.get(self.url, headers=headers_initial, params=params)

        pattern_fc = r'<div jsname="Yust4d"[^>]+data-async-fc="([^"]+)"'
        match_fc = re.search(pattern_fc, response.text)
        data_async_fc = match_fc.group(1) if match_fc else None
        jobs_raw = self._find_job_info_initial_page(response.text)
        jobs = []
        for job_raw in jobs_raw:
            job_post = self._parse_job(job_raw)
            if job_post:
                jobs.append(job_post)
        return data_async_fc, jobs

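For illustration (inputs assumed), the query-building above composes a plain-language search string before sending it:

# Illustrative only: the query composed for assumed inputs.
search_term, location = "software engineer", "Dallas, TX"
query = f"{search_term} jobs Full time near {location} in the last 3 days"
params = {"q": query, "udm": "8"}  # udm=8 appears to target Google's jobs vertical
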
    def _get_jobs_next_page(self, forward_cursor: str) -> Tuple[list[JobPost], str]:
        params = {"fc": [forward_cursor], "fcv": ["3"], "async": [async_param]}
        response = self.session.get(self.jobs_url, headers=headers_jobs, params=params)
        return self._parse_jobs(response.text)

    def _parse_jobs(self, job_data: str) -> Tuple[list[JobPost], str]:
        """
        Parses jobs on a page with next page cursor
        """
        start_idx = job_data.find("[[[")
        end_idx = job_data.rindex("]]]") + 3
        s = job_data[start_idx:end_idx]
        parsed = json.loads(s)[0]

        pattern_fc = r'data-async-fc="([^"]+)"'
        match_fc = re.search(pattern_fc, job_data)
        data_async_fc = match_fc.group(1) if match_fc else None
        jobs_on_page = []
        for array in parsed:
            _, job_data = array
            if not job_data.startswith("[[["):
                continue
            job_d = json.loads(job_data)

            job_info = self._find_job_info(job_d)
            job_post = self._parse_job(job_info)
            if job_post:
                jobs_on_page.append(job_post)
        return jobs_on_page, data_async_fc

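The slicing above relies on Google's async response embedding a JSON payload between the first "[[[" and the last "]]]". A minimal standalone sketch of that extraction, using a made-up response body:

import json

# Hypothetical response body: the callback wraps JSON in surrounding bytes.
raw = 'garbage-prefix [[["job", "[[[1, 2]]]"]]] trailing-bytes'
s = raw[raw.find("[[[") : raw.rindex("]]]") + 3]
parsed = json.loads(s)[0]  # first element holds the per-job entries
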
    def _parse_job(self, job_info: list):
        job_url = job_info[3][0][0] if job_info[3] and job_info[3][0] else None
        if job_url in self.seen_urls:
            return
        self.seen_urls.add(job_url)

        title = job_info[0]
        company_name = job_info[1]
        location = city = job_info[2]
        state = country = date_posted = None
        if location and "," in location:
            city, state, *country = [*map(lambda x: x.strip(), location.split(","))]

        days_ago_str = job_info[12]
        if isinstance(days_ago_str, str):
            match = re.search(r"\d+", days_ago_str)
            days_ago = int(match.group()) if match else None
            # Only compute a date when a day count was actually found.
            if days_ago is not None:
                date_posted = (datetime.now() - timedelta(days=days_ago)).date()

        description = job_info[19]

        job_post = JobPost(
            id=f"go-{job_info[28]}",
            title=title,
            company_name=company_name,
            location=Location(
                city=city, state=state, country=country[0] if country else None
            ),
            job_url=job_url,
            date_posted=date_posted,
            is_remote="remote" in description.lower() or "wfh" in description.lower(),
            description=description,
            emails=extract_emails_from_text(description),
            job_type=extract_job_type(description),
        )
        return job_post

    @staticmethod
    def _find_job_info(jobs_data: list | dict) -> list | None:
        """Iterates through the JSON data to find the job listings"""
        if isinstance(jobs_data, dict):
            for key, value in jobs_data.items():
                if key == "520084652" and isinstance(value, list):
                    return value
                else:
                    result = GoogleJobsScraper._find_job_info(value)
                    if result:
                        return result
        elif isinstance(jobs_data, list):
            for item in jobs_data:
                result = GoogleJobsScraper._find_job_info(item)
                if result:
                    return result
        return None

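A toy illustration of the recursive walk (payload shape assumed): the search descends nested lists and dicts until it hits the "520084652" key.

# Toy payload, values assumed; uses the class defined above.
data = {"wrapper": [{"520084652": [["title", "company", "location"]]}]}
job_info = GoogleJobsScraper._find_job_info(data)
# job_info == [["title", "company", "location"]]
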
    @staticmethod
    def _find_job_info_initial_page(html_text: str):
        pattern = (
            '520084652":('
            + r"\[.*?\]\s*])\s*}\s*]\s*]\s*]\s*]\s*]"
        )
        results = []
        matches = re.finditer(pattern, html_text)

        for match in matches:
            try:
                parsed_data = json.loads(match.group(1))
                results.append(parsed_data)
            except json.JSONDecodeError as e:
                logger.error(f"Failed to parse match: {str(e)}")
                results.append({"raw_match": match.group(0), "error": str(e)})
        return results
@@ -1,52 +0,0 @@
headers_initial = {
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
    "accept-language": "en-US,en;q=0.9",
    "priority": "u=0, i",
    "referer": "https://www.google.com/",
    "sec-ch-prefers-color-scheme": "dark",
    "sec-ch-ua": '"Chromium";v="130", "Google Chrome";v="130", "Not?A_Brand";v="99"',
    "sec-ch-ua-arch": '"arm"',
    "sec-ch-ua-bitness": '"64"',
    "sec-ch-ua-form-factors": '"Desktop"',
    "sec-ch-ua-full-version": '"130.0.6723.58"',
    "sec-ch-ua-full-version-list": '"Chromium";v="130.0.6723.58", "Google Chrome";v="130.0.6723.58", "Not?A_Brand";v="99.0.0.0"',
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-model": '""',
    "sec-ch-ua-platform": '"macOS"',
    "sec-ch-ua-platform-version": '"15.0.1"',
    "sec-ch-ua-wow64": "?0",
    "sec-fetch-dest": "document",
    "sec-fetch-mode": "navigate",
    "sec-fetch-site": "same-origin",
    "sec-fetch-user": "?1",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36",
    "x-browser-channel": "stable",
    "x-browser-copyright": "Copyright 2024 Google LLC. All rights reserved.",
    "x-browser-year": "2024",
}

headers_jobs = {
    "accept": "*/*",
    "accept-language": "en-US,en;q=0.9",
    "priority": "u=1, i",
    "referer": "https://www.google.com/",
    "sec-ch-prefers-color-scheme": "dark",
    "sec-ch-ua": '"Chromium";v="130", "Google Chrome";v="130", "Not?A_Brand";v="99"',
    "sec-ch-ua-arch": '"arm"',
    "sec-ch-ua-bitness": '"64"',
    "sec-ch-ua-form-factors": '"Desktop"',
    "sec-ch-ua-full-version": '"130.0.6723.58"',
    "sec-ch-ua-full-version-list": '"Chromium";v="130.0.6723.58", "Google Chrome";v="130.0.6723.58", "Not?A_Brand";v="99.0.0.0"',
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-model": '""',
    "sec-ch-ua-platform": '"macOS"',
    "sec-ch-ua-platform-version": '"15.0.1"',
    "sec-ch-ua-wow64": "?0",
    "sec-fetch-dest": "empty",
    "sec-fetch-mode": "cors",
    "sec-fetch-site": "same-origin",
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36",
}

async_param = "_basejs:/xjs/_/js/k=xjs.s.en_US.JwveA-JiKmg.2018.O/am=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAACAAAoICAAAAAAAKMAfAAAAIAQAAAAAAAAAAAAACCAAAEJDAAACAAAAAGABAIAAARBAAABAAAAAgAgQAABAASKAfv8JAAABAAAAAAwAQAQACQAAAAAAcAEAQABoCAAAABAAAIABAACAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAACAQADoBwAAAAAAAAAAAAAQBAAAAATQAAoACOAHAAAAAAAAAQAAAIIAAAA_ZAACAAAAAAAAcB8APB4wHFJ4AAAAAAAAAAAAAAAACECCYA5If0EACAAAAAAAAAAAAAAAAAAAUgRNXG4AMAE/dg=0/br=1/rs=ACT90oGxMeaFMCopIHq5tuQM-6_3M_VMjQ,_basecss:/xjs/_/ss/k=xjs.s.IwsGu62EDtU.L.B1.O/am=QOoQIAQAAAQAREADEBAAAAAAAAAAAAAAAAAAAAAgAQAAIAAAgAQAAAIAIAIAoEwCAADIC8AfsgEAawwAPkAAjgoAGAAAAAAAAEADAAAAAAIgAECHAAAAAAAAAAABAQAggAARQAAAQCEAAAAAIAAAABgAAAAAIAQIACCAAfB-AAFIQABoCEA_CgEAAIABAACEgHAEwwAEFQAM4CgAAAAAAAAAAAAACABCAAAAQEAAABAgAMCPAAA4AoE2BAEAggSAAIoAQAAAAAgAAAAACCAQAAAxEwA_ZAACAAAAAAAAAAkAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAQAEAAAAAAAAAAAAAAAAAAAAAQA/br=1/rs=ACT90oGZc36t3uUQkj0srnIvvbHjO2hgyg,_basecomb:/xjs/_/js/k=xjs.s.en_US.JwveA-JiKmg.2018.O/ck=xjs.s.IwsGu62EDtU.L.B1.O/am=QOoQIAQAAAQAREADEBAAAAAAAAAAAAAAAAAAAAAgAQAAIAAAgAQAAAKAIAoIqEwCAADIK8AfsgEAawwAPkAAjgoAGAAACCAAAEJDAAACAAIgAGCHAIAAARBAAABBAQAggAgRQABAQSOAfv8JIAABABgAAAwAYAQICSCAAfB-cAFIQABoCEA_ChEAAIABAACEgHAEwwAEFQAM4CgAAAAAAAAAAAAACABCAACAQEDoBxAgAMCPAAA4AoE2BAEAggTQAIoASOAHAAgAAAAACSAQAIIxEwA_ZAACAAAAAAAAcB8APB4wHFJ4AAAAAAAAAAAAAAAACECCYA5If0EACAAAAAAAAAAAAAAAAAAAUgRNXG4AMAE/d=1/ed=1/dg=0/br=1/ujg=1/rs=ACT90oFNLTjPzD_OAqhhtXwe2pg1T3WpBg,_fmt:prog,_id:fc_5FwaZ86OKsfdwN4P4La3yA4_2"
@@ -4,21 +4,23 @@ jobspy.scrapers.indeed

This module contains routines to scrape Indeed.
"""

from __future__ import annotations

import re
import math
from typing import Tuple
import io
import json
from datetime import datetime

from .constants import job_search_query, api_headers
from .. import Scraper, ScraperInput, Site
import urllib.parse
from bs4 import BeautifulSoup
from bs4.element import Tag
from concurrent.futures import ThreadPoolExecutor, Future

from ..exceptions import IndeedException
from ..utils import (
    count_urgent_words,
    extract_emails_from_text,
    get_enum_from_job_type,
    markdown_converter,
    create_session,
    create_logger,
    get_enum_from_job_type,
)
from ...jobs import (
    JobPost,
@@ -27,32 +29,153 @@ from ...jobs import (
    Location,
    JobResponse,
    JobType,
    DescriptionFormat,
)

logger = create_logger("Indeed")
from .. import Scraper, ScraperInput, Site


class IndeedScraper(Scraper):
    def __init__(
        self, proxies: list[str] | str | None = None, ca_cert: str | None = None
    ):
    def __init__(self, proxy: str | None = None):
        """
        Initializes IndeedScraper with the Indeed API url
        Initializes IndeedScraper with the Indeed job search url
        """
        super().__init__(Site.INDEED, proxies=proxies)
        self.url = None
        self.country = None
        site = Site(Site.INDEED)
        super().__init__(site, proxy=proxy)

        self.session = create_session(
            proxies=self.proxies, ca_cert=ca_cert, is_tls=False
        )
        self.scraper_input = None
        self.jobs_per_page = 100
        self.num_workers = 10
        self.jobs_per_page = 15
        self.seen_urls = set()
        self.headers = None
        self.api_country_code = None
        self.base_url = None
        self.api_url = "https://apis.indeed.com/graphql"

    def scrape_page(
        self, scraper_input: ScraperInput, page: int
    ) -> tuple[list[JobPost], int]:
        """
        Scrapes a page of Indeed for jobs with scraper_input criteria
        :param scraper_input:
        :param page:
        :return: jobs found on page, total number of jobs found for search
        """
        self.country = scraper_input.country
        domain = self.country.domain_value
        self.url = f"https://{domain}.indeed.com"
        session = create_session(self.proxy)

        params = {
            "q": scraper_input.search_term,
            "l": scraper_input.location,
            "filter": 0,
            "start": scraper_input.offset + page * 10,
        }
        if scraper_input.distance:
            params["radius"] = scraper_input.distance

        sc_values = []
        if scraper_input.is_remote:
            sc_values.append("attr(DSQF7)")
        if scraper_input.job_type:
            sc_values.append("jt({})".format(scraper_input.job_type.value))

        if sc_values:
            params["sc"] = "0kf:" + "".join(sc_values) + ";"
        try:
            response = session.get(
                f"{self.url}/jobs",
                headers=self.get_headers(),
                params=params,
                allow_redirects=True,
                timeout_seconds=10,
            )
            if response.status_code not in range(200, 400):
                raise IndeedException(
                    f"bad response with status code: {response.status_code}"
                )
        except Exception as e:
            if "Proxy responded with" in str(e):
                raise IndeedException("bad proxy")
            raise IndeedException(str(e))

        soup = BeautifulSoup(response.content, "html.parser")
        if "did not match any jobs" in response.text:
            raise IndeedException("Parsing exception: Search did not match any jobs")

        jobs = IndeedScraper.parse_jobs(
            soup
        )  #: can raise exception, handled by main scrape function
        total_num_jobs = IndeedScraper.total_jobs(soup)

        if (
            not jobs.get("metaData", {})
            .get("mosaicProviderJobCardsModel", {})
            .get("results")
        ):
            raise IndeedException("No jobs found.")

        def process_job(job) -> JobPost | None:
            job_url = f'{self.url}/jobs/viewjob?jk={job["jobkey"]}'
            job_url_client = f'{self.url}/viewjob?jk={job["jobkey"]}'
            if job_url in self.seen_urls:
                return None

            extracted_salary = job.get("extractedSalary")
            compensation = None
            if extracted_salary:
                salary_snippet = job.get("salarySnippet")
                currency = salary_snippet.get("currency") if salary_snippet else None
                interval = (extracted_salary.get("type"),)
                if isinstance(interval, tuple):
                    interval = interval[0]

                interval = interval.upper()
                if interval in CompensationInterval.__members__:
                    compensation = Compensation(
                        interval=CompensationInterval[interval],
                        min_amount=int(extracted_salary.get("min")),
                        max_amount=int(extracted_salary.get("max")),
                        currency=currency,
                    )

            job_type = IndeedScraper.get_job_type(job)
            timestamp_seconds = job["pubDate"] / 1000
            date_posted = datetime.fromtimestamp(timestamp_seconds)
            date_posted = date_posted.strftime("%Y-%m-%d")

            description = self.get_description(job_url)
            with io.StringIO(job["snippet"]) as f:
                soup_io = BeautifulSoup(f, "html.parser")
                li_elements = soup_io.find_all("li")
                if description is None and li_elements:
                    description = " ".join(li.text for li in li_elements)

            job_post = JobPost(
                title=job["normTitle"],
                description=description,
                company_name=job["company"],
                location=Location(
                    city=job.get("jobLocationCity"),
                    state=job.get("jobLocationState"),
                    country=self.country,
                ),
                job_type=job_type,
                compensation=compensation,
                date_posted=date_posted,
                job_url=job_url_client,
                emails=extract_emails_from_text(description) if description else None,
                num_urgent_words=count_urgent_words(description)
                if description
                else None,
                is_remote=self.is_remote_job(job),
            )
            return job_post

        jobs = jobs["metaData"]["mosaicProviderJobCardsModel"]["results"]
        with ThreadPoolExecutor(max_workers=1) as executor:
            job_results: list[Future] = [
                executor.submit(process_job, job) for job in jobs
            ]

        job_list = [result.result() for result in job_results if result.result()]

        return job_list, total_num_jobs

    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
        """
@@ -60,290 +183,191 @@ class IndeedScraper(Scraper):
        :param scraper_input:
        :return: job_response
        """
        self.scraper_input = scraper_input
        domain, self.api_country_code = self.scraper_input.country.indeed_domain_value
        self.base_url = f"https://{domain}.indeed.com"
        self.headers = api_headers.copy()
        self.headers["indeed-co"] = self.scraper_input.country.indeed_domain_value
        job_list = []
        page = 1
        pages_to_process = (
            math.ceil(scraper_input.results_wanted / self.jobs_per_page) - 1
        )

        cursor = None
        #: get first page to initialize session
        job_list, total_results = self.scrape_page(scraper_input, 0)

        while len(self.seen_urls) < scraper_input.results_wanted + scraper_input.offset:
            logger.info(
                f"search page: {page} / {math.ceil(scraper_input.results_wanted / self.jobs_per_page)}"
            )
            jobs, cursor = self._scrape_page(cursor)
            if not jobs:
                logger.info(f"found no jobs on page: {page}")
                break
            job_list += jobs
            page += 1
        return JobResponse(
            jobs=job_list[
                scraper_input.offset : scraper_input.offset
                + scraper_input.results_wanted
        with ThreadPoolExecutor(max_workers=1) as executor:
            futures: list[Future] = [
                executor.submit(self.scrape_page, scraper_input, page)
                for page in range(1, pages_to_process + 1)
            ]
        )

    def _scrape_page(self, cursor: str | None) -> Tuple[list[JobPost], str | None]:
        for future in futures:
            jobs, _ = future.result()

            job_list += jobs

        if len(job_list) > scraper_input.results_wanted:
            job_list = job_list[: scraper_input.results_wanted]

        job_response = JobResponse(
            jobs=job_list,
            total_results=total_results,
        )
        return job_response

    def get_description(self, job_page_url: str) -> str | None:
        """
        Scrapes a page of Indeed for jobs with scraper_input criteria
        :param cursor:
        :return: jobs found on page, next page cursor
        Retrieves job description by going to the job page url
        :param job_page_url:
        :return: description
        """
        jobs = []
        new_cursor = None
        filters = self._build_filters()
        search_term = (
            self.scraper_input.search_term.replace('"', '\\"')
            if self.scraper_input.search_term
            else ""
        )
        query = job_search_query.format(
            what=(f'what: "{search_term}"' if search_term else ""),
            location=(
                f'location: {{where: "{self.scraper_input.location}", radius: {self.scraper_input.distance}, radiusUnit: MILES}}'
                if self.scraper_input.location
                else ""
            ),
            dateOnIndeed=self.scraper_input.hours_old,
            cursor=f'cursor: "{cursor}"' if cursor else "",
            filters=filters,
        )
        payload = {
            "query": query,
        }
        api_headers_temp = api_headers.copy()
        api_headers_temp["indeed-co"] = self.api_country_code
        response = self.session.post(
            self.api_url,
            headers=api_headers_temp,
            json=payload,
            timeout=10,
        )
        if not response.ok:
            logger.info(
                f"responded with status code: {response.status_code} (submit GitHub issue if this appears to be a bug)"
        parsed_url = urllib.parse.urlparse(job_page_url)
        params = urllib.parse.parse_qs(parsed_url.query)
        jk_value = params.get("jk", [None])[0]
        formatted_url = f"{self.url}/viewjob?jk={jk_value}&spa=1"
        session = create_session(self.proxy)

        try:
            response = session.get(
                formatted_url,
                headers=self.get_headers(),
                allow_redirects=True,
                timeout_seconds=5,
            )
            return jobs, new_cursor
        data = response.json()
        jobs = data["data"]["jobSearch"]["results"]
        new_cursor = data["data"]["jobSearch"]["pageInfo"]["nextCursor"]
        except Exception as e:
            return None

        job_list = []
        for job in jobs:
            processed_job = self._process_job(job["job"])
            if processed_job:
                job_list.append(processed_job)
        if response.status_code not in range(200, 400):
            return None

        return job_list, new_cursor

    def _build_filters(self):
        """
        Builds the filters dict for job type/is_remote. If hours_old is provided, composite filter for job_type/is_remote is not possible.
        IndeedApply: filters: { keyword: { field: "indeedApplyScope", keys: ["DESKTOP"] } }
        """
        filters_str = ""
        if self.scraper_input.hours_old:
            filters_str = """
            filters: {{
                date: {{
                    field: "dateOnIndeed",
                    start: "{start}h"
                }}
            }}
            """.format(
                start=self.scraper_input.hours_old
            )
        elif self.scraper_input.easy_apply:
            filters_str = """
            filters: {
                keyword: {
                    field: "indeedApplyScope",
                    keys: ["DESKTOP"]
                }
            }
            """
        elif self.scraper_input.job_type or self.scraper_input.is_remote:
            job_type_key_mapping = {
                JobType.FULL_TIME: "CF3CP",
                JobType.PART_TIME: "75GKK",
                JobType.CONTRACT: "NJXCK",
                JobType.INTERNSHIP: "VDTG7",
            }

            keys = []
            if self.scraper_input.job_type:
                key = job_type_key_mapping[self.scraper_input.job_type]
                keys.append(key)

            if self.scraper_input.is_remote:
                keys.append("DSQF7")

            if keys:
                keys_str = '", "'.join(keys)
                filters_str = f"""
                filters: {{
                    composite: {{
                        filters: [{{
                            keyword: {{
                                field: "attributes",
                                keys: ["{keys_str}"]
                            }}
                        }}]
                    }}
                }}
                """
        return filters_str

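For illustration (inputs assumed): per the docstring above, hours_old takes precedence over easy_apply, which takes precedence over job_type/is_remote. A standalone sketch of the first branch:

# Sketch of _build_filters with hours_old=72 (assumed input); doubled braces
# are .format() escapes that become literal GraphQL braces.
hours_old = 72
filters_str = """
filters: {{
    date: {{
        field: "dateOnIndeed",
        start: "{start}h"
    }}
}}
""".format(start=hours_old)
# filters_str now contains start: "72h" and is spliced into job_search_query.
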
    def _process_job(self, job: dict) -> JobPost | None:
        """
        Parses the job dict into JobPost model
        :param job: dict to parse
        :return: JobPost if it's a new job
        """
        job_url = f'{self.base_url}/viewjob?jk={job["key"]}'
        if job_url in self.seen_urls:
            return
        self.seen_urls.add(job_url)
        description = job["description"]["html"]
        if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
            description = markdown_converter(description)

        job_type = self._get_job_type(job["attributes"])
        timestamp_seconds = job["datePublished"] / 1000
        date_posted = datetime.fromtimestamp(timestamp_seconds).strftime("%Y-%m-%d")
        employer = job["employer"].get("dossier") if job["employer"] else None
        employer_details = employer.get("employerDetails", {}) if employer else {}
        rel_url = job["employer"]["relativeCompanyPageUrl"] if job["employer"] else None
        return JobPost(
            id=f'in-{job["key"]}',
            title=job["title"],
            description=description,
            company_name=job["employer"].get("name") if job.get("employer") else None,
            company_url=(f"{self.base_url}{rel_url}" if job["employer"] else None),
            company_url_direct=(
                employer["links"]["corporateWebsite"] if employer else None
            ),
            location=Location(
                city=job.get("location", {}).get("city"),
                state=job.get("location", {}).get("admin1Code"),
                country=job.get("location", {}).get("countryCode"),
            ),
            job_type=job_type,
            compensation=self._get_compensation(job["compensation"]),
            date_posted=date_posted,
            job_url=job_url,
            job_url_direct=(
                job["recruit"].get("viewJobUrl") if job.get("recruit") else None
            ),
            emails=extract_emails_from_text(description) if description else None,
            is_remote=self._is_job_remote(job, description),
            company_addresses=(
                employer_details["addresses"][0]
                if employer_details.get("addresses")
                else None
            ),
            company_industry=(
                employer_details["industry"]
                .replace("Iv1", "")
                .replace("_", " ")
                .title()
                .strip()
                if employer_details.get("industry")
                else None
            ),
            company_num_employees=employer_details.get("employeesLocalizedLabel"),
            company_revenue=employer_details.get("revenueLocalizedLabel"),
            company_description=employer_details.get("briefDescription"),
            company_logo=(
                employer["images"].get("squareLogoUrl")
                if employer and employer.get("images")
                else None
            ),
        soup = BeautifulSoup(response.text, "html.parser")
        script_tag = soup.find(
            "script", text=lambda x: x and "window._initialData" in x
        )

        if not script_tag:
            return None

        script_code = script_tag.string
        match = re.search(r"window\._initialData\s*=\s*({.*?})\s*;", script_code, re.S)

        if not match:
            return None

        json_string = match.group(1)
        data = json.loads(json_string)
        try:
            job_description = data["jobInfoWrapperModel"]["jobInfoModel"][
                "sanitizedJobDescription"
            ]
        except (KeyError, TypeError, IndexError):
            return None

        soup = BeautifulSoup(
            job_description, "html.parser"
        )
        text_content = " ".join(
            soup.get_text(separator=" ").split()
        ).strip()

        return text_content

    @staticmethod
    def _get_job_type(attributes: list) -> list[JobType]:
    def get_job_type(job: dict) -> list[JobType] | None:
        """
        Parses the attributes to get list of job types
        :param attributes:
        :return: list of JobType
        Parses the job to get list of job types
        :param job:
        :return:
        """
        job_types: list[JobType] = []
        for attribute in attributes:
            job_type_str = attribute["label"].replace("-", "").replace(" ", "").lower()
            job_type = get_enum_from_job_type(job_type_str)
            if job_type:
                job_types.append(job_type)
        for taxonomy in job["taxonomyAttributes"]:
            if taxonomy["label"] == "job-types":
                for i in range(len(taxonomy["attributes"])):
                    label = taxonomy["attributes"][i].get("label")
                    if label:
                        job_type_str = label.replace("-", "").replace(" ", "").lower()
                        job_type = get_enum_from_job_type(job_type_str)
                        if job_type:
                            job_types.append(job_type)
        return job_types

    @staticmethod
    def _get_compensation(compensation: dict) -> Compensation | None:
    def parse_jobs(soup: BeautifulSoup) -> dict:
        """
        Parses the job to get compensation
        :param job:
        :return: compensation object
        Parses the jobs from the soup object
        :param soup:
        :return: jobs
        """
        if not compensation["baseSalary"] and not compensation["estimated"]:
            return None
        comp = (
            compensation["baseSalary"]
            if compensation["baseSalary"]
            else compensation["estimated"]["baseSalary"]
        )
        if not comp:
            return None
        interval = IndeedScraper._get_compensation_interval(comp["unitOfWork"])
        if not interval:
            return None
        min_range = comp["range"].get("min")
        max_range = comp["range"].get("max")
        return Compensation(
            interval=interval,
            min_amount=int(min_range) if min_range is not None else None,
            max_amount=int(max_range) if max_range is not None else None,
            currency=(
                compensation["estimated"]["currencyCode"]
                if compensation["estimated"]
                else compensation["currencyCode"]
            ),
        )

    @staticmethod
    def _is_job_remote(job: dict, description: str) -> bool:
        """
        Searches the description, location, and attributes to check if job is remote
        """
        remote_keywords = ["remote", "work from home", "wfh"]
        is_remote_in_attributes = any(
            any(keyword in attr["label"].lower() for keyword in remote_keywords)
            for attr in job["attributes"]
        )
        is_remote_in_description = any(
            keyword in description.lower() for keyword in remote_keywords
        )
        is_remote_in_location = any(
            keyword in job["location"]["formatted"]["long"].lower()
            for keyword in remote_keywords
        )
        return (
            is_remote_in_attributes or is_remote_in_description or is_remote_in_location
        )
        def find_mosaic_script() -> Tag | None:
            """
            Finds jobcards script tag
            :return: script_tag
            """
            script_tags = soup.find_all("script")

    @staticmethod
    def _get_compensation_interval(interval: str) -> CompensationInterval:
        interval_mapping = {
            "DAY": "DAILY",
            "YEAR": "YEARLY",
            "HOUR": "HOURLY",
            "WEEK": "WEEKLY",
            "MONTH": "MONTHLY",
        }
        mapped_interval = interval_mapping.get(interval.upper(), None)
        if mapped_interval and mapped_interval in CompensationInterval.__members__:
            return CompensationInterval[mapped_interval]
            for tag in script_tags:
                if (
                    tag.string
                    and "mosaic.providerData" in tag.string
                    and "mosaic-provider-jobcards" in tag.string
                ):
                    return tag
            return None

        script_tag = find_mosaic_script()

        if script_tag:
            script_str = script_tag.string
            pattern = r'window.mosaic.providerData\["mosaic-provider-jobcards"\]\s*=\s*({.*?});'
            p = re.compile(pattern, re.DOTALL)
            m = p.search(script_str)
            if m:
                jobs = json.loads(m.group(1).strip())
                return jobs
            else:
                raise IndeedException("Could not find mosaic provider job cards data")
        else:
            raise ValueError(f"Unsupported interval: {interval}")
            raise IndeedException(
                "Could not find a script tag containing mosaic provider data"
            )

    @staticmethod
    def total_jobs(soup: BeautifulSoup) -> int:
        """
        Parses the total jobs for that search from soup object
        :param soup:
        :return: total_num_jobs
        """
        script = soup.find("script", string=lambda t: t and "window._initialData" in t)

        pattern = re.compile(r"window._initialData\s*=\s*({.*})\s*;", re.DOTALL)
        match = pattern.search(script.string)
        total_num_jobs = 0
        if match:
            json_str = match.group(1)
            data = json.loads(json_str)
            total_num_jobs = int(data["searchTitleBarModel"]["totalNumResults"])
        return total_num_jobs

    @staticmethod
    def get_headers():
        return {
            "authority": "www.indeed.com",
            "accept": "*/*",
            "accept-language": "en-US,en;q=0.9",
            "referer": "https://www.indeed.com/viewjob?jk=fe6182337d72c7b1&tk=1hcbfcmd0k62t802&from=serp&vjs=3&advn=8132938064490989&adid=408692607&ad=-6NYlbfkN0A3Osc99MJFDKjquSk4WOGT28ALb_ad4QMtrHreCb9ICg6MiSVy9oDAp3evvOrI7Q-O9qOtQTg1EPbthP9xWtBN2cOuVeHQijxHjHpJC65TjDtftH3AXeINjBvAyDrE8DrRaAXl8LD3Fs1e_xuDHQIssdZ2Mlzcav8m5jHrA0fA64ZaqJV77myldaNlM7-qyQpy4AsJQfvg9iR2MY7qeC5_FnjIgjKIy_lNi9OPMOjGRWXA94CuvC7zC6WeiJmBQCHISl8IOBxf7EdJZlYdtzgae3593TFxbkd6LUwbijAfjax39aAuuCXy3s9C4YgcEP3TwEFGQoTpYu9Pmle-Ae1tHGPgsjxwXkgMm7Cz5mBBdJioglRCj9pssn-1u1blHZM4uL1nK9p1Y6HoFgPUU9xvKQTHjKGdH8d4y4ETyCMoNF4hAIyUaysCKdJKitC8PXoYaWhDqFtSMR4Jys8UPqUV&xkcb=SoDD-_M3JLQfWnQTDh0LbzkdCdPP&xpse=SoBa6_I3JLW9FlWZlB0PbzkdCdPP&sjdu=i6xVERweJM_pVUvgf-MzuaunBTY7G71J5eEX6t4DrDs5EMPQdODrX7Nn-WIPMezoqr5wA_l7Of-3CtoiUawcHw",
            "sec-ch-ua": '"Google Chrome";v="119", "Chromium";v="119", "Not?A_Brand";v="24"',
            "sec-ch-ua-mobile": "?0",
            "sec-ch-ua-platform": '"Windows"',
            "sec-fetch-dest": "empty",
            "sec-fetch-mode": "cors",
            "sec-fetch-site": "same-origin",
            "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
        }

    @staticmethod
    def is_remote_job(job: dict) -> bool:
        """
        :param job:
        :return: bool
        """
        for taxonomy in job.get("taxonomyAttributes", []):
            if taxonomy["label"] == "remote" and len(taxonomy["attributes"]) > 0:
                return True
        return False

@@ -1,109 +0,0 @@
job_search_query = """
query GetJobData {{
    jobSearch(
        {what}
        {location}
        limit: 100
        {cursor}
        sort: RELEVANCE
        {filters}
    ) {{
        pageInfo {{
            nextCursor
        }}
        results {{
            trackingKey
            job {{
                source {{
                    name
                }}
                key
                title
                datePublished
                dateOnIndeed
                description {{
                    html
                }}
                location {{
                    countryName
                    countryCode
                    admin1Code
                    city
                    postalCode
                    streetAddress
                    formatted {{
                        short
                        long
                    }}
                }}
                compensation {{
                    estimated {{
                        currencyCode
                        baseSalary {{
                            unitOfWork
                            range {{
                                ... on Range {{
                                    min
                                    max
                                }}
                            }}
                        }}
                    }}
                    baseSalary {{
                        unitOfWork
                        range {{
                            ... on Range {{
                                min
                                max
                            }}
                        }}
                    }}
                    currencyCode
                }}
                attributes {{
                    key
                    label
                }}
                employer {{
                    relativeCompanyPageUrl
                    name
                    dossier {{
                        employerDetails {{
                            addresses
                            industry
                            employeesLocalizedLabel
                            revenueLocalizedLabel
                            briefDescription
                            ceoName
                            ceoPhotoUrl
                        }}
                        images {{
                            headerImageUrl
                            squareLogoUrl
                        }}
                        links {{
                            corporateWebsite
                        }}
                    }}
                }}
                recruit {{
                    viewJobUrl
                    detailedSalary
                    workSchedule
                }}
            }}
        }}
    }}
}}
"""

api_headers = {
    "Host": "apis.indeed.com",
    "content-type": "application/json",
    "indeed-api-key": "161092c2017b5bbab13edb12461a62d5a833871e7cad6d9d475304573de67ac8",
    "accept": "application/json",
    "indeed-locale": "en-US",
    "accept-language": "en-US,en;q=0.9",
    "user-agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Indeed App 193.1",
    "indeed-app-info": "appv=193.1; appid=com.indeed.jobsearch; osv=16.6.1; os=ios; dtype=phone",
}
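A hedged illustration (inputs assumed) of how the scraper fills this template; the doubled braces become literal GraphQL braces after .format():

# Illustrative only: placeholder values are assumptions, not real search inputs.
query = job_search_query.format(
    what='what: "software engineer"',
    location='location: {where: "Dallas, TX", radius: 50, radiusUnit: MILES}',
    cursor="",   # empty on the first page; later pages pass cursor: "..."
    filters="",  # output of _build_filters, possibly empty
)
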
@@ -4,68 +4,40 @@ jobspy.scrapers.linkedin

This module contains routines to scrape LinkedIn.
"""

from __future__ import annotations

import math
import time
import random
import regex as re
from typing import Optional
from datetime import datetime

from bs4.element import Tag
import requests
import time
from requests.exceptions import ProxyError
from concurrent.futures import ThreadPoolExecutor, as_completed
from bs4 import BeautifulSoup
from urllib.parse import urlparse, urlunparse, unquote
from bs4.element import Tag
from threading import Lock

from .constants import headers
from .. import Scraper, ScraperInput, Site
from ..utils import count_urgent_words, extract_emails_from_text, get_enum_from_job_type
from ..exceptions import LinkedInException
from ..utils import create_session, remove_attributes, create_logger
from ...jobs import (
    JobPost,
    Location,
    JobResponse,
    JobType,
    Country,
    Compensation,
    DescriptionFormat,
)
from ..utils import (
    extract_emails_from_text,
    get_enum_from_job_type,
    currency_parser,
    markdown_converter,
)

logger = create_logger("LinkedIn")


class LinkedInScraper(Scraper):
    base_url = "https://www.linkedin.com"
    delay = 3
    band_delay = 4
    jobs_per_page = 25
    MAX_RETRIES = 3
    DELAY = 10

    def __init__(
        self, proxies: list[str] | str | None = None, ca_cert: str | None = None
    ):
    def __init__(self, proxy: Optional[str] = None):
        """
        Initializes LinkedInScraper with the LinkedIn job search url
        """
        super().__init__(Site.LINKEDIN, proxies=proxies, ca_cert=ca_cert)
        self.session = create_session(
            proxies=self.proxies,
            ca_cert=ca_cert,
            is_tls=False,
            has_retry=True,
            delay=5,
            clear_cookies=True,
        )
        self.session.headers.update(headers)
        self.scraper_input = None
        site = Site(Site.LINKEDIN)
        self.country = "worldwide"
        self.job_url_direct_regex = re.compile(r'(?<=\?url=)[^"]+')
        self.url = "https://www.linkedin.com"
        super().__init__(site, proxy=proxy)

    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
        """
@@ -73,133 +45,117 @@ class LinkedInScraper(Scraper):
        :param scraper_input:
        :return: job_response
        """
        self.scraper_input = scraper_input
        job_list: list[JobPost] = []
        seen_ids = set()
        start = scraper_input.offset // 10 * 10 if scraper_input.offset else 0
        request_count = 0
        seconds_old = (
            scraper_input.hours_old * 3600 if scraper_input.hours_old else None
        )
        continue_search = (
            lambda: len(job_list) < scraper_input.results_wanted and start < 1000
        )
        while continue_search():
            request_count += 1
            logger.info(
                f"search page: {request_count} / {math.ceil(scraper_input.results_wanted / 10)}"
            )
        seen_urls = set()
        url_lock = Lock()
        page = scraper_input.offset // 25 + 25 if scraper_input.offset else 0

        def job_type_code(job_type_enum):
            mapping = {
                JobType.FULL_TIME: "F",
                JobType.PART_TIME: "P",
                JobType.INTERNSHIP: "I",
                JobType.CONTRACT: "C",
                JobType.TEMPORARY: "T",
            }

            return mapping.get(job_type_enum, "")

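For example (inputs assumed), a remote full-time search within the last day composes guest-API query params along these lines:

# Illustrative only: how the LinkedIn search params compose for assumed inputs.
params = {
    "keywords": "software engineer",
    "f_WT": 2,          # remote filter
    "f_JT": "F",        # job_type_code(JobType.FULL_TIME)
    "f_TPR": "r86400",  # hours_old=24 -> 24*3600 seconds
}
params = {k: v for k, v in params.items() if v is not None}
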
        while len(job_list) < scraper_input.results_wanted and page < 1000:
            params = {
                "keywords": scraper_input.search_term,
                "location": scraper_input.location,
                "distance": scraper_input.distance,
                "f_WT": 2 if scraper_input.is_remote else None,
                "f_JT": (
                    self.job_type_code(scraper_input.job_type)
                    if scraper_input.job_type
                    else None
                ),
                "f_JT": job_type_code(scraper_input.job_type)
                if scraper_input.job_type
                else None,
                "pageNum": 0,
                "start": start,
                page: page + scraper_input.offset,
                "f_AL": "true" if scraper_input.easy_apply else None,
                "f_C": (
                    ",".join(map(str, scraper_input.linkedin_company_ids))
                    if scraper_input.linkedin_company_ids
                    else None
                ),
            }
            if seconds_old is not None:
                params["f_TPR"] = f"r{seconds_old}"

            params = {k: v for k, v in params.items() if v is not None}
            try:
                response = self.session.get(
                    f"{self.base_url}/jobs-guest/jobs/api/seeMoreJobPostings/search?",
                    params=params,
                    timeout=10,
                )
                if response.status_code not in range(200, 400):
                    if response.status_code == 429:
                        err = (
                            f"429 Response - Blocked by LinkedIn for too many requests"
                        )

            params = {k: v for k, v in params.items() if v is not None}
            retries = 0
            while retries < self.MAX_RETRIES:
                try:
                    response = requests.get(
                        f"{self.url}/jobs-guest/jobs/api/seeMoreJobPostings/search?",
                        params=params,
                        allow_redirects=True,
                        proxies=self.proxy,
                        timeout=10,
                    )
                    response.raise_for_status()

                    break
                except requests.HTTPError as e:
                    if hasattr(e, "response") and e.response is not None:
                        if e.response.status_code == 429:
                            time.sleep(self.DELAY)
                            retries += 1
                            continue
                        else:
                            raise LinkedInException(
                                f"bad response status code: {e.response.status_code}"
                            )
                    else:
                        err = f"LinkedIn response status code {response.status_code}"
                        err += f" - {response.text}"
                    logger.error(err)
                    return JobResponse(jobs=job_list)
            except Exception as e:
                if "Proxy responded with" in str(e):
                    logger.error(f"LinkedIn: Bad proxy")
                else:
                    logger.error(f"LinkedIn: {str(e)}")
                return JobResponse(jobs=job_list)
                        raise
                except ProxyError as e:
                    raise LinkedInException("bad proxy")
                except Exception as e:
                    raise LinkedInException(str(e))
            else:
                # Raise an exception if the maximum number of retries is reached
                raise LinkedInException(
                    "Max retries reached, failed to get a valid response"
                )

            soup = BeautifulSoup(response.text, "html.parser")
            job_cards = soup.find_all("div", class_="base-search-card")
            if len(job_cards) == 0:
                return JobResponse(jobs=job_list)

            for job_card in job_cards:
                href_tag = job_card.find("a", class_="base-card__full-link")
                if href_tag and "href" in href_tag.attrs:
                    href = href_tag.attrs["href"].split("?")[0]
                    job_id = href.split("-")[-1]
            with ThreadPoolExecutor(max_workers=5) as executor:
                futures = []
                for job_card in soup.find_all("div", class_="base-search-card"):
                    job_url = None
                    href_tag = job_card.find("a", class_="base-card__full-link")
                    if href_tag and "href" in href_tag.attrs:
                        href = href_tag.attrs["href"].split("?")[0]
                        job_id = href.split("-")[-1]
                        job_url = f"{self.url}/jobs/view/{job_id}"

                    if job_id in seen_ids:
                        continue
                    seen_ids.add(job_id)
                    with url_lock:
                        if job_url in seen_urls:
                            continue
                        seen_urls.add(job_url)

                    futures.append(executor.submit(self.process_job, job_card, job_url))

                for future in as_completed(futures):
                    try:
                        fetch_desc = scraper_input.linkedin_fetch_description
                        job_post = self._process_job(job_card, job_id, fetch_desc)
                        job_post = future.result()
                        if job_post:
                            job_list.append(job_post)
                        if not continue_search():
                            break
                    except Exception as e:
                        raise LinkedInException(str(e))

            if continue_search():
                time.sleep(random.uniform(self.delay, self.delay + self.band_delay))
                start += len(job_list)
                        raise LinkedInException(
                            "Exception occurred while processing jobs"
                        )
            page += 25

        job_list = job_list[: scraper_input.results_wanted]
        return JobResponse(jobs=job_list)

    def _process_job(
        self, job_card: Tag, job_id: str, full_descr: bool
    ) -> Optional[JobPost]:
        salary_tag = job_card.find("span", class_="job-search-card__salary-info")

        compensation = None
        if salary_tag:
            salary_text = salary_tag.get_text(separator=" ").strip()
            salary_values = [currency_parser(value) for value in salary_text.split("-")]
            salary_min = salary_values[0]
            salary_max = salary_values[1]
            currency = salary_text[0] if salary_text[0] != "$" else "USD"

            compensation = Compensation(
                min_amount=int(salary_min),
                max_amount=int(salary_max),
                currency=currency,
            )

    def process_job(self, job_card: Tag, job_url: str) -> Optional[JobPost]:
        title_tag = job_card.find("span", class_="sr-only")
        title = title_tag.get_text(strip=True) if title_tag else "N/A"

        company_tag = job_card.find("h4", class_="base-search-card__subtitle")
        company_a_tag = company_tag.find("a") if company_tag else None
        company_url = (
            urlunparse(urlparse(company_a_tag.get("href"))._replace(query=""))
            if company_a_tag and company_a_tag.has_attr("href")
            else ""
        )
        company = company_a_tag.get_text(strip=True) if company_a_tag else "N/A"

        metadata_card = job_card.find("div", class_="base-search-card__metadata")
        location = self._get_location(metadata_card)
        location = self.get_location(metadata_card)

        datetime_tag = (
            metadata_card.find("time", class_="job-search-card__listdate")
@@ -211,92 +167,86 @@ class LinkedInScraper(Scraper):
datetime_str = datetime_tag["datetime"]
|
||||
try:
|
||||
date_posted = datetime.strptime(datetime_str, "%Y-%m-%d")
|
||||
except:
|
||||
except Exception as e:
|
||||
date_posted = None
|
||||
job_details = {}
|
||||
if full_descr:
|
||||
job_details = self._get_job_details(job_id)
|
||||
benefits_tag = job_card.find("span", class_="result-benefits__text")
|
||||
benefits = " ".join(benefits_tag.get_text().split()) if benefits_tag else None
|
||||
|
||||
description, job_type = self.get_job_description(job_url)
|
||||
|
||||
return JobPost(
|
||||
id=f"li-{job_id}",
|
||||
title=title,
|
||||
description=description,
|
||||
company_name=company,
|
||||
company_url=company_url,
|
||||
location=location,
|
||||
date_posted=date_posted,
|
||||
job_url=f"{self.base_url}/jobs/view/{job_id}",
|
||||
compensation=compensation,
|
||||
job_type=job_details.get("job_type"),
|
||||
job_level=job_details.get("job_level", "").lower(),
|
||||
company_industry=job_details.get("company_industry"),
|
||||
description=job_details.get("description"),
|
||||
job_url_direct=job_details.get("job_url_direct"),
|
||||
emails=extract_emails_from_text(job_details.get("description")),
|
||||
company_logo=job_details.get("company_logo"),
|
||||
job_function=job_details.get("job_function"),
|
||||
job_url=job_url,
|
||||
# job_type=[JobType.FULL_TIME],
|
||||
job_type=job_type,
|
||||
benefits=benefits,
|
||||
emails=extract_emails_from_text(description) if description else None,
|
||||
num_urgent_words=count_urgent_words(description) if description else None,
|
||||
)

    def _get_job_details(self, job_id: str) -> dict:
    def get_job_description(
        self, job_page_url: str
    ) -> tuple[None, None] | tuple[str | None, tuple[str | None, JobType | None]]:
        """
        Retrieves job description and other job details by going to the job page url
        Retrieves job description by going to the job page url
        :param job_page_url:
        :return: dict
        :return: description or None
        """
        try:
            response = self.session.get(
                f"{self.base_url}/jobs/view/{job_id}", timeout=5
            )
            response = requests.get(job_page_url, timeout=5, proxies=self.proxy)
            response.raise_for_status()
        except:
            return {}
        if "linkedin.com/signup" in response.url:
            return {}
        except Exception as e:
            return None, None

        soup = BeautifulSoup(response.text, "html.parser")
        div_content = soup.find(
            "div", class_=lambda x: x and "show-more-less-html__markup" in x
        )

        description = None
        if div_content is not None:
            div_content = remove_attributes(div_content)
            description = div_content.prettify(formatter="html")
            if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
                description = markdown_converter(description)
        if div_content:
            description = " ".join(div_content.get_text().split()).strip()

        h3_tag = soup.find(
            "h3", text=lambda text: text and "Job function" in text.strip()
        )

        job_function = None
        if h3_tag:
            job_function_span = h3_tag.find_next(
                "span", class_="description__job-criteria-text"
        def get_job_type(
            soup_job_type: BeautifulSoup,
        ) -> list[JobType] | None:
            """
            Gets the job type from job page
            :param soup_job_type:
            :return: JobType
            """
            h3_tag = soup_job_type.find(
                "h3",
                class_="description__job-criteria-subheader",
                string=lambda text: "Employment type" in text,
            )
            if job_function_span:
                job_function = job_function_span.text.strip()

        company_logo = (
            logo_image.get("data-delayed-url")
            if (logo_image := soup.find("img", {"class": "artdeco-entity-image"}))
            else None
        )
        return {
            "description": description,
            "job_level": self._parse_job_level(soup),
            "company_industry": self._parse_company_industry(soup),
            "job_type": self._parse_job_type(soup),
            "job_url_direct": self._parse_job_url_direct(soup),
            "company_logo": company_logo,
            "job_function": job_function,
        }
            employment_type = None
            if h3_tag:
                employment_type_span = h3_tag.find_next_sibling(
                    "span",
                    class_="description__job-criteria-text description__job-criteria-text--criteria",
                )
                if employment_type_span:
                    employment_type = employment_type_span.get_text(strip=True)
                    employment_type = employment_type.lower()
                    employment_type = employment_type.replace("-", "")

    def _get_location(self, metadata_card: Optional[Tag]) -> Location:
            return [get_enum_from_job_type(employment_type)]

        return description, get_job_type(soup)

    def get_location(self, metadata_card: Optional[Tag]) -> Location:
        """
        Extracts the location data from the job metadata card.
        :param metadata_card
        :return: location
        """
        location = Location(country=Country.from_string(self.country))
        location = Location(country=self.country)
        if metadata_card is not None:
            location_tag = metadata_card.find(
                "span", class_="job-search-card__location"
@@ -308,108 +258,7 @@ class LinkedInScraper(Scraper):
                location = Location(
                    city=city,
                    state=state,
                    country=Country.from_string(self.country),
                    country=self.country,
                )
            elif len(parts) == 3:
                city, state, country = parts
                country = Country.from_string(country)
                location = Location(city=city, state=state, country=country)

        return location
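To make the branching above concrete: the location string is split on ", " and mapped into a Location. A small hedged sketch of that parsing, standalone, with a plain dataclass standing in for the library's Location model:

    from dataclasses import dataclass

    @dataclass
    class Loc:  # stand-in for jobspy's Location model
        city: str | None = None
        state: str | None = None
        country: str | None = None

    def parse_location(text: str, default_country: str = "usa") -> Loc:
        parts = text.split(", ")
        if len(parts) == 2:
            city, state = parts
            return Loc(city=city, state=state, country=default_country)
        if len(parts) == 3:
            city, state, country = parts
            return Loc(city=city, state=state, country=country)
        return Loc(country=default_country)

    # parse_location("Austin, TX") -> Loc(city="Austin", state="TX", country="usa")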

    @staticmethod
    def _parse_job_type(soup_job_type: BeautifulSoup) -> list[JobType] | None:
        """
        Gets the job type from job page
        :param soup_job_type:
        :return: JobType
        """
        h3_tag = soup_job_type.find(
            "h3",
            class_="description__job-criteria-subheader",
            string=lambda text: "Employment type" in text,
        )
        employment_type = None
        if h3_tag:
            employment_type_span = h3_tag.find_next_sibling(
                "span",
                class_="description__job-criteria-text description__job-criteria-text--criteria",
            )
            if employment_type_span:
                employment_type = employment_type_span.get_text(strip=True)
                employment_type = employment_type.lower()
                employment_type = employment_type.replace("-", "")

        return [get_enum_from_job_type(employment_type)] if employment_type else []

    @staticmethod
    def _parse_job_level(soup_job_level: BeautifulSoup) -> str | None:
        """
        Gets the job level from job page
        :param soup_job_level:
        :return: str
        """
        h3_tag = soup_job_level.find(
            "h3",
            class_="description__job-criteria-subheader",
            string=lambda text: "Seniority level" in text,
        )
        job_level = None
        if h3_tag:
            job_level_span = h3_tag.find_next_sibling(
                "span",
                class_="description__job-criteria-text description__job-criteria-text--criteria",
            )
            if job_level_span:
                job_level = job_level_span.get_text(strip=True)

        return job_level

    @staticmethod
    def _parse_company_industry(soup_industry: BeautifulSoup) -> str | None:
        """
        Gets the company industry from job page
        :param soup_industry:
        :return: str
        """
        h3_tag = soup_industry.find(
            "h3",
            class_="description__job-criteria-subheader",
            string=lambda text: "Industries" in text,
        )
        industry = None
        if h3_tag:
            industry_span = h3_tag.find_next_sibling(
                "span",
                class_="description__job-criteria-text description__job-criteria-text--criteria",
            )
            if industry_span:
                industry = industry_span.get_text(strip=True)

        return industry

    def _parse_job_url_direct(self, soup: BeautifulSoup) -> str | None:
        """
        Gets the job url direct from job page
        :param soup:
        :return: str
        """
        job_url_direct = None
        job_url_direct_content = soup.find("code", id="applyUrl")
        if job_url_direct_content:
            job_url_direct_match = self.job_url_direct_regex.search(
                job_url_direct_content.decode_contents().strip()
            )
            if job_url_direct_match:
                job_url_direct = unquote(job_url_direct_match.group())

        return job_url_direct

    @staticmethod
    def job_type_code(job_type_enum: JobType) -> str:
        return {
            JobType.FULL_TIME: "F",
            JobType.PART_TIME: "P",
            JobType.INTERNSHIP: "I",
            JobType.CONTRACT: "C",
            JobType.TEMPORARY: "T",
        }.get(job_type_enum, "")
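These single-letter codes are LinkedIn's job-type filter values; a hedged sketch of how one might be dropped into a search URL (the f_JT query-parameter name is an assumption for illustration, not something this diff defines):

    from urllib.parse import urlencode

    code = LinkedInScraper.job_type_code(JobType.FULL_TIME)  # -> "F"
    # "f_JT" is assumed here; the diff only defines the letter mapping.
    url = f"https://www.linkedin.com/jobs/search?{urlencode({'keywords': 'engineer', 'f_JT': code})}"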

@@ -1,8 +0,0 @@
headers = {
    "authority": "www.linkedin.com",
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
    "accept-language": "en-US,en;q=0.9",
    "cache-control": "max-age=0",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
}
@@ -1,158 +1,20 @@
from __future__ import annotations

import re
import logging
from itertools import cycle

import requests
import tls_client
import numpy as np
from markdownify import markdownify as md
from requests.adapters import HTTPAdapter, Retry

from ..jobs import CompensationInterval, JobType
from ..jobs import JobType


def create_logger(name: str):
    logger = logging.getLogger(f"JobSpy:{name}")
    logger.propagate = False
    if not logger.handlers:
        logger.setLevel(logging.INFO)
        console_handler = logging.StreamHandler()
        format = "%(asctime)s - %(levelname)s - %(name)s - %(message)s"
        formatter = logging.Formatter(format)
        console_handler.setFormatter(formatter)
        logger.addHandler(console_handler)
    return logger
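A quick usage sketch of the logger factory above; the "JobSpy:" name prefix is what set_logger_level later keys on:

    log = create_logger("ZipRecruiter")  # underlying logger name: "JobSpy:ZipRecruiter"
    log.info("search page: 1 / 3")       # -> <timestamp> - INFO - JobSpy:ZipRecruiter - search page: 1 / 3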


class RotatingProxySession:
    def __init__(self, proxies=None):
        if isinstance(proxies, str):
            self.proxy_cycle = cycle([self.format_proxy(proxies)])
        elif isinstance(proxies, list):
            self.proxy_cycle = (
                cycle([self.format_proxy(proxy) for proxy in proxies])
                if proxies
                else None
            )
        else:
            self.proxy_cycle = None

    @staticmethod
    def format_proxy(proxy):
        """Utility method to format a proxy string into a dictionary."""
        if proxy.startswith("http://") or proxy.startswith("https://"):
            return {"http": proxy, "https": proxy}
        return {"http": f"http://{proxy}", "https": f"http://{proxy}"}


class RequestsRotating(RotatingProxySession, requests.Session):

    def __init__(self, proxies=None, has_retry=False, delay=1, clear_cookies=False):
        RotatingProxySession.__init__(self, proxies=proxies)
        requests.Session.__init__(self)
        self.clear_cookies = clear_cookies
        self.allow_redirects = True
        self.setup_session(has_retry, delay)

    def setup_session(self, has_retry, delay):
        if has_retry:
            retries = Retry(
                total=3,
                connect=3,
                status=3,
                status_forcelist=[500, 502, 503, 504, 429],
                backoff_factor=delay,
            )
            adapter = HTTPAdapter(max_retries=retries)
            self.mount("http://", adapter)
            self.mount("https://", adapter)

    def request(self, method, url, **kwargs):
        if self.clear_cookies:
            self.cookies.clear()

        if self.proxy_cycle:
            next_proxy = next(self.proxy_cycle)
            if next_proxy["http"] != "http://localhost":
                self.proxies = next_proxy
            else:
                self.proxies = {}
        return requests.Session.request(self, method, url, **kwargs)


class TLSRotating(RotatingProxySession, tls_client.Session):

    def __init__(self, proxies=None):
        RotatingProxySession.__init__(self, proxies=proxies)
        tls_client.Session.__init__(self, random_tls_extension_order=True)

    def execute_request(self, *args, **kwargs):
        if self.proxy_cycle:
            next_proxy = next(self.proxy_cycle)
            if next_proxy["http"] != "http://localhost":
                self.proxies = next_proxy
            else:
                self.proxies = {}
        response = tls_client.Session.execute_request(self, *args, **kwargs)
        response.ok = response.status_code in range(200, 400)
        return response


def create_session(
    *,
    proxies: dict | str | None = None,
    ca_cert: str | None = None,
    is_tls: bool = True,
    has_retry: bool = False,
    delay: int = 1,
    clear_cookies: bool = False,
) -> requests.Session:
def count_urgent_words(description: str) -> int:
    """
    Creates a requests session with optional tls, proxy, and retry settings.
    :return: A session object
    Count the number of urgent words or phrases in a job description.
    """
    if is_tls:
        session = TLSRotating(proxies=proxies)
    else:
        session = RequestsRotating(
            proxies=proxies,
            has_retry=has_retry,
            delay=delay,
            clear_cookies=clear_cookies,
        )
    urgent_patterns = re.compile(
        r"\burgen(t|cy)|\bimmediate(ly)?\b|start asap|\bhiring (now|immediate(ly)?)\b",
        re.IGNORECASE,
    )
    matches = re.findall(urgent_patterns, description)
    count = len(matches)

    if ca_cert:
        session.verify = ca_cert

    return session
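A hedged usage sketch of the keyword-only create_session factory defined above:

    # Non-TLS session with retries and a rotating proxy list, per the signature above.
    session = create_session(
        proxies=["user:pass@203.0.113.5:8080", "203.0.113.6:8080"],
        is_tls=False,
        has_retry=True,
        delay=2,
    )
    resp = session.get("https://www.ziprecruiter.com")  # proxies rotate per request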


def set_logger_level(verbose: int = 2):
    """
    Adjusts the logger's level. This function allows the logging level to be changed at runtime.

    Parameters:
    - verbose: int {0, 1, 2} (default=2, all logs)
    """
    if verbose is None:
        return
    level_name = {2: "INFO", 1: "WARNING", 0: "ERROR"}.get(verbose, "INFO")
    level = getattr(logging, level_name.upper(), None)
    if level is not None:
        for logger_name in logging.root.manager.loggerDict:
            if logger_name.startswith("JobSpy:"):
                logging.getLogger(logger_name).setLevel(level)
    else:
        raise ValueError(f"Invalid log level: {level_name}")


def markdown_converter(description_html: str):
    if description_html is None:
        return None
    markdown = md(description_html)
    return markdown.strip()
    return count
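To ground the urgency regex above, a worked call (re.findall counts matches even though the pattern contains groups):

    count_urgent_words("URGENT opening, hiring now, start ASAP")
    # "URGENT" + "hiring now" + "start asap" each match -> returns 3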


def extract_emails_from_text(text: str) -> list[str] | None:
@@ -162,6 +24,27 @@ def extract_emails_from_text(text: str) -> list[str] | None:
    return email_regex.findall(text)
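The regex body of extract_emails_from_text is elided by the hunk header above; going by its name, signature, and return statement, usage looks like this (output shape assumed, since email_regex itself is not shown):

    extract_emails_from_text("Contact hr@example.com or jobs@example.org")
    # -> ["hr@example.com", "jobs@example.org"]  (assumed result shape)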


def create_session(proxy: str | None = None):
    """
    Creates a tls client session

    :return: A session object with or without proxies.
    """
    session = tls_client.Session(
        client_identifier="chrome112",
        random_tls_extension_order=True,
    )
    session.proxies = proxy
    # TODO multiple proxies
    # if self.proxies:
    #     session.proxies = {
    #         "http": random.choice(self.proxies),
    #         "https": random.choice(self.proxies),
    #     }

    return session


def get_enum_from_job_type(job_type_str: str) -> JobType | None:
    """
    Given a string, returns the corresponding JobType enum member if a match is found.
@@ -171,115 +54,3 @@ def get_enum_from_job_type(job_type_str: str) -> JobType | None:
        if job_type_str in job_type.value:
            res = job_type
    return res
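Given the matching rule above (the enclosing loop over JobType is elided by the hunk header), a usage sketch, assuming JobType.FULL_TIME.value contains "fulltime" as the ZipRecruiter code elsewhere in this diff relies on:

    get_enum_from_job_type("fulltime")  # -> JobType.FULL_TIME
    get_enum_from_job_type("gig work")  # -> None (res keeps its elided default)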


def currency_parser(cur_str):
    # Remove any non-numerical characters
    # except for ',' '.' or '-' (e.g. EUR)
    cur_str = re.sub("[^-0-9.,]", "", cur_str)
    # Remove any 000s separators (either , or .)
    cur_str = re.sub("[.,]", "", cur_str[:-3]) + cur_str[-3:]

    if "." in list(cur_str[-3:]):
        num = float(cur_str)
    elif "," in list(cur_str[-3:]):
        num = float(cur_str.replace(",", "."))
    else:
        num = float(cur_str)

    return np.round(num, 2)
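Worked examples of currency_parser as defined above:

    currency_parser("$80,000")   # "," thousands separator stripped -> 80000.0
    currency_parser("1.500,50")  # EU format: "." dropped, trailing "," -> "." -> 1500.5
    currency_parser("€42.75")    # decimal point in the last three chars is kept -> 42.75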


def remove_attributes(tag):
    for attr in list(tag.attrs):
        del tag[attr]
    return tag


def extract_salary(
    salary_str,
    lower_limit=1000,
    upper_limit=700000,
    hourly_threshold=350,
    monthly_threshold=30000,
    enforce_annual_salary=False,
):
    """
    Extracts salary information from a string and returns the salary interval, min and max salary values, and currency.
    (TODO: Needs test cases as the regex is complicated and may not cover all edge cases)
    """
    if not salary_str:
        return None, None, None, None

    annual_max_salary = None
    min_max_pattern = r"\$(\d+(?:,\d+)?(?:\.\d+)?)([kK]?)\s*[-—–]\s*(?:\$)?(\d+(?:,\d+)?(?:\.\d+)?)([kK]?)"

    def to_int(s):
        return int(float(s.replace(",", "")))

    def convert_hourly_to_annual(hourly_wage):
        return hourly_wage * 2080

    def convert_monthly_to_annual(monthly_wage):
        return monthly_wage * 12

    match = re.search(min_max_pattern, salary_str)

    if match:
        min_salary = to_int(match.group(1))
        max_salary = to_int(match.group(3))
        # Handle 'k' suffix for min and max salaries independently
        if "k" in match.group(2).lower() or "k" in match.group(4).lower():
            min_salary *= 1000
            max_salary *= 1000

        # Convert to annual if less than the hourly threshold
        if min_salary < hourly_threshold:
            interval = CompensationInterval.HOURLY.value
            annual_min_salary = convert_hourly_to_annual(min_salary)
            if max_salary < hourly_threshold:
                annual_max_salary = convert_hourly_to_annual(max_salary)

        elif min_salary < monthly_threshold:
            interval = CompensationInterval.MONTHLY.value
            annual_min_salary = convert_monthly_to_annual(min_salary)
            if max_salary < monthly_threshold:
                annual_max_salary = convert_monthly_to_annual(max_salary)

        else:
            interval = CompensationInterval.YEARLY.value
            annual_min_salary = min_salary
            annual_max_salary = max_salary

        # Ensure salary range is within specified limits
        if not annual_max_salary:
            return None, None, None, None
        if (
            lower_limit <= annual_min_salary <= upper_limit
            and lower_limit <= annual_max_salary <= upper_limit
            and annual_min_salary < annual_max_salary
        ):
            if enforce_annual_salary:
                return interval, annual_min_salary, annual_max_salary, "USD"
            else:
                return interval, min_salary, max_salary, "USD"
    return None, None, None, None
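A hedged trace through extract_salary for a typical hourly range (assuming CompensationInterval.HOURLY.value is "hourly"):

    extract_salary("$30 - $40 per hour")
    # regex captures "30" and "40"; 30 < hourly_threshold (350) -> interval "hourly"
    # annualized bounds 30*2080=62400 and 40*2080=83200 sit inside [1000, 700000]
    # -> ("hourly", 30, 40, "USD"); annual figures are returned only if enforce_annual_salary=True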


def extract_job_type(description: str):
    if not description:
        return []

    keywords = {
        JobType.FULL_TIME: r"full\s?time",
        JobType.PART_TIME: r"part\s?time",
        JobType.INTERNSHIP: r"internship",
        JobType.CONTRACT: r"contract",
    }

    listing_types = []
    for key, pattern in keywords.items():
        if re.search(pattern, description, re.IGNORECASE):
            listing_types.append(key)

    return listing_types if listing_types else None
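Usage sketch for extract_job_type above; note the asymmetric empty cases, and that r"full\s?time" matches "full time"/"fulltime" but not the hyphenated "full-time":

    extract_job_type("Full time contract role")  # -> [JobType.FULL_TIME, JobType.CONTRACT]
    extract_job_type("Senior engineer")          # -> None (no keyword matched)
    extract_job_type("")                         # -> []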

@@ -4,244 +4,308 @@ jobspy.scrapers.ziprecruiter

This module contains routines to scrape ZipRecruiter.
"""

from __future__ import annotations

import json
import math
import json
import re
import time
from datetime import datetime
from datetime import datetime, date
from typing import Optional, Tuple, Any
from urllib.parse import urlparse, parse_qs, urlunparse

from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup
from bs4.element import Tag
from concurrent.futures import ThreadPoolExecutor, Future

from .constants import headers
from .. import Scraper, ScraperInput, Site
from ..utils import (
    extract_emails_from_text,
    create_session,
    markdown_converter,
    remove_attributes,
    create_logger,
)
from ..exceptions import ZipRecruiterException
from ..utils import count_urgent_words, extract_emails_from_text, create_session
from ...jobs import (
    JobPost,
    Compensation,
    CompensationInterval,
    Location,
    JobResponse,
    JobType,
    Country,
    DescriptionFormat,
)

logger = create_logger("ZipRecruiter")


class ZipRecruiterScraper(Scraper):
    base_url = "https://www.ziprecruiter.com"
    api_url = "https://api.ziprecruiter.com"

    def __init__(
        self, proxies: list[str] | str | None = None, ca_cert: str | None = None
    ):
    def __init__(self, proxy: Optional[str] = None):
        """
        Initializes ZipRecruiterScraper with the ZipRecruiter job search url
        Initializes LinkedInScraper with the ZipRecruiter job search url
        """
        super().__init__(Site.ZIP_RECRUITER, proxies=proxies)
        site = Site(Site.ZIP_RECRUITER)
        self.url = "https://www.ziprecruiter.com"
        super().__init__(site, proxy=proxy)

        self.scraper_input = None
        self.session = create_session(proxies=proxies, ca_cert=ca_cert)
        self.session.headers.update(headers)
        self._get_cookies()

        self.delay = 5
        self.jobs_per_page = 20
        self.seen_urls = set()

    def find_jobs_in_page(self, scraper_input: ScraperInput, continue_token: Optional[str] = None) -> Tuple[list[JobPost], Optional[str]]:
        """
        Scrapes a page of ZipRecruiter for jobs with scraper_input criteria
        :param scraper_input:
        :return: jobs found on page
        """
        params = self.add_params(scraper_input)
        if continue_token:
            params['continue'] = continue_token
        try:
            response = requests.get(
                f"https://api.ziprecruiter.com/jobs-app/jobs",
                headers=self.headers(),
                params=self.add_params(scraper_input),
                allow_redirects=True,
                timeout=10,
            )
            if response.status_code != 200:
                raise ZipRecruiterException(
                    f"bad response status code: {response.status_code}"
                )
        except Exception as e:
            if "Proxy responded with non 200 code" in str(e):
                raise ZipRecruiterException("bad proxy")
            raise ZipRecruiterException(str(e))

        response_data = response.json()
        jobs_list = response_data.get("jobs", [])
        next_continue_token = response_data.get('continue', None)

        with ThreadPoolExecutor(max_workers=10) as executor:
            job_results = [
                executor.submit(self.process_job, job)
                for job in jobs_list
            ]

        job_list = [result.result() for result in job_results if result.result()]
        return job_list, next_continue_token

    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
        """
        Scrapes ZipRecruiter for jobs with scraper_input criteria.
        :param scraper_input: Information about job search criteria.
        :return: JobResponse containing a list of jobs.
        """
        self.scraper_input = scraper_input
        job_list: list[JobPost] = []
        continue_token = None

        max_pages = math.ceil(scraper_input.results_wanted / self.jobs_per_page)

        for page in range(1, max_pages + 1):
            if len(job_list) >= scraper_input.results_wanted:
                break
            if page > 1:
                time.sleep(self.delay)
            logger.info(f"search page: {page} / {max_pages}")
            jobs_on_page, continue_token = self._find_jobs_in_page(
                scraper_input, continue_token
            )

            jobs_on_page, continue_token = self.find_jobs_in_page(scraper_input, continue_token)
            if jobs_on_page:
                job_list.extend(jobs_on_page)
            else:
                break

            if not continue_token:
                break
        return JobResponse(jobs=job_list[: scraper_input.results_wanted])
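A hedged end-to-end driver for the class above; beyond the attributes these methods actually read (search_term, location, results_wanted), the ScraperInput constructor fields are assumptions:

    # Sketch only: field names mirror the attributes used above, not a verified signature.
    scraper = ZipRecruiterScraper()
    scraper_input = ScraperInput(
        site_type=[Site.ZIP_RECRUITER],  # assumed field
        search_term="software engineer",
        location="Austin, TX",
        results_wanted=20,
    )
    response = scraper.scrape(scraper_input)
    print(len(response.jobs))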

    def _find_jobs_in_page(
        self, scraper_input: ScraperInput, continue_token: str | None = None
    ) -> Tuple[list[JobPost], Optional[str]]:
        """
        Scrapes a page of ZipRecruiter for jobs with scraper_input criteria
        :param scraper_input:
        :param continue_token:
        :return: jobs found on page
        """
        jobs_list = []
        params = self._add_params(scraper_input)
        if continue_token:
            params["continue_from"] = continue_token
        try:
            res = self.session.get(f"{self.api_url}/jobs-app/jobs", params=params)
            if res.status_code not in range(200, 400):
                if res.status_code == 429:
                    err = "429 Response - Blocked by ZipRecruiter for too many requests"
                else:
                    err = f"ZipRecruiter response status code {res.status_code}"
                    err += f" with response: {res.text}"  # ZipRecruiter likely not available in EU
                logger.error(err)
                return jobs_list, ""
        except Exception as e:
            if "Proxy responded with" in str(e):
                logger.error("ZipRecruiter: Bad proxy")
            else:
                logger.error(f"ZipRecruiter: {str(e)}")
            return jobs_list, ""
        if len(job_list) > scraper_input.results_wanted:
            job_list = job_list[:scraper_input.results_wanted]

        res_data = res.json()
        jobs_list = res_data.get("jobs", [])
        next_continue_token = res_data.get("continue", None)
        with ThreadPoolExecutor(max_workers=self.jobs_per_page) as executor:
            job_results = [executor.submit(self._process_job, job) for job in jobs_list]
        return JobResponse(jobs=job_list)

        job_list = list(filter(None, (result.result() for result in job_results)))
        return job_list, next_continue_token

    def _process_job(self, job: dict) -> JobPost | None:
        """
        Processes an individual job dict from the response
        """
    def process_job(self, job: dict) -> JobPost:
        """the most common type of jobs page on ZR"""
        title = job.get("name")
        job_url = f"{self.base_url}/jobs//j?lvk={job['listing_key']}"
        if job_url in self.seen_urls:
            return
        self.seen_urls.add(job_url)
        job_url = job.get("job_url")

        description = job.get("job_description", "").strip()
        listing_type = job.get("buyer_type", "")
        description = (
            markdown_converter(description)
            if self.scraper_input.description_format == DescriptionFormat.MARKDOWN
            else description
        )
        company = job.get("hiring_company", {}).get("name")
        country_value = "usa" if job.get("job_country") == "US" else "canada"
        country_enum = Country.from_string(country_value)
        description = BeautifulSoup(
            job.get("job_description", "").strip(), "html.parser"
        ).get_text()

        company = job['hiring_company'].get("name") if "hiring_company" in job else None
        location = Location(
            city=job.get("job_city"), state=job.get("job_state"), country=country_enum
            city=job.get("job_city"), state=job.get("job_state"), country='usa' if job.get("job_country") == 'US' else 'canada'
        )
        job_type = self._get_job_type_enum(
        job_type = ZipRecruiterScraper.get_job_type_enum(
            job.get("employment_type", "").replace("_", "").lower()
        )
        date_posted = datetime.fromisoformat(job["posted_time"].rstrip("Z")).date()
        comp_interval = job.get("compensation_interval")
        comp_interval = "yearly" if comp_interval == "annual" else comp_interval
        comp_min = int(job["compensation_min"]) if "compensation_min" in job else None
        comp_max = int(job["compensation_max"]) if "compensation_max" in job else None
        comp_currency = job.get("compensation_currency")
        description_full, job_url_direct = self._get_descr(job_url)

        save_job_url = job.get("SaveJobURL", "")
        posted_time_match = re.search(
            r"posted_time=(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)", save_job_url
        )
        if posted_time_match:
            date_time_str = posted_time_match.group(1)
            date_posted_obj = datetime.strptime(date_time_str, "%Y-%m-%dT%H:%M:%SZ")
            date_posted = date_posted_obj.date()
        else:
            date_posted = date.today()


        return JobPost(
            id=f'zr-{job["listing_key"]}',
            title=title,
            company_name=company,
            location=location,
            job_type=job_type,
            compensation=Compensation(
                interval=comp_interval,
                min_amount=comp_min,
                max_amount=comp_max,
                currency=comp_currency,
                interval="yearly" if job.get("compensation_interval") == "annual" else job.get("compensation_interval"),
                min_amount=int(job["compensation_min"]) if "compensation_min" in job else None,
                max_amount=int(job["compensation_max"]) if "compensation_max" in job else None,
                currency=job.get("compensation_currency"),
            ),
            date_posted=date_posted,
            job_url=job_url,
            description=description_full if description_full else description,
            description=description,
            emails=extract_emails_from_text(description) if description else None,
            job_url_direct=job_url_direct,
            listing_type=listing_type,
            num_urgent_words=count_urgent_words(description) if description else None,
        )

    def _get_descr(self, job_url):
        res = self.session.get(job_url, allow_redirects=True)
        description_full = job_url_direct = None
        if res.ok:
            soup = BeautifulSoup(res.text, "html.parser")
            job_descr_div = soup.find("div", class_="job_description")
            company_descr_section = soup.find("section", class_="company_description")
            job_description_clean = (
                remove_attributes(job_descr_div).prettify(formatter="html")
                if job_descr_div
                else ""
            )
            company_description_clean = (
                remove_attributes(company_descr_section).prettify(formatter="html")
                if company_descr_section
                else ""
            )
            description_full = job_description_clean + company_description_clean
            script_tag = soup.find("script", type="application/json")
            if script_tag:
                job_json = json.loads(script_tag.string)
                job_url_val = job_json["model"].get("saveJobURL", "")
                m = re.search(r"job_url=(.+)", job_url_val)
                if m:
                    job_url_direct = m.group(1)

            if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
                description_full = markdown_converter(description_full)

        return description_full, job_url_direct

    def _get_cookies(self):
        data = "event_type=session&logged_in=false&number_of_retry=1&property=model%3AiPhone&property=os%3AiOS&property=locale%3Aen_us&property=app_build_number%3A4734&property=app_version%3A91.0&property=manufacturer%3AApple&property=timestamp%3A2024-01-12T12%3A04%3A42-06%3A00&property=screen_height%3A852&property=os_version%3A16.6.1&property=source%3Ainstall&property=screen_width%3A393&property=device_model%3AiPhone%2014%20Pro&property=brand%3AApple"
        url = f"{self.api_url}/jobs-app/event"
        self.session.post(url, data=data)

    @staticmethod
    def _get_job_type_enum(job_type_str: str) -> list[JobType] | None:
    def get_job_type_enum(job_type_str: str) -> list[JobType] | None:
        for job_type in JobType:
            if job_type_str in job_type.value:
                return [job_type]
        return None

    @staticmethod
    def _add_params(scraper_input) -> dict[str, str | Any]:
    def add_params(scraper_input) -> dict[str, str | Any]:
        params = {
            "search": scraper_input.search_term,
            "location": scraper_input.location,
            "form": "jobs-landing",
        }
        if scraper_input.hours_old:
            params["days"] = max(scraper_input.hours_old // 24, 1)
        job_type_map = {JobType.FULL_TIME: "full_time", JobType.PART_TIME: "part_time"}
        job_type_value = None
        if scraper_input.job_type:
            job_type = scraper_input.job_type
            params["employment_type"] = job_type_map.get(job_type, job_type.value[0])
        if scraper_input.easy_apply:
            params["zipapply"] = 1
            if scraper_input.job_type.value == "fulltime":
                job_type_value = "full_time"
            elif scraper_input.job_type.value == "parttime":
                job_type_value = "part_time"
            else:
                job_type_value = scraper_input.job_type.value

        if job_type_value:
            params[
                "refine_by_employment"
            ] = f"employment_type:employment_type:{job_type_value}"

        if scraper_input.is_remote:
            params["remote"] = 1
            params["refine_by_location_type"] = "only_remote"

        if scraper_input.distance:
            params["radius"] = scraper_input.distance
        return {k: v for k, v in params.items() if v is not None}

        return params
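To make the query construction concrete, here (as comments, since it depends on an assumed ScraperInput) is the params dict the newer _add_params branch of this diff would emit for a sample input:

    # Assuming search_term="python developer", location="Remote", hours_old=48,
    # job_type=JobType.FULL_TIME, easy_apply=False, is_remote=True, distance=None:
    # {
    #     "search": "python developer",
    #     "location": "Remote",
    #     "form": "jobs-landing",
    #     "days": 2,                       # max(48 // 24, 1)
    #     "employment_type": "full_time",  # via job_type_map
    #     "remote": 1,
    # }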

    @staticmethod
    def get_interval(interval_str: str):
        """
        Maps the interval alias to its appropriate CompensationInterval.
        :param interval_str
        :return: CompensationInterval
        """
        interval_alias = {"annually": CompensationInterval.YEARLY}
        interval_str = interval_str.lower()

        if interval_str in interval_alias:
            return interval_alias[interval_str]

        return CompensationInterval(interval_str)

    @staticmethod
    def get_date_posted(job: Tag) -> Optional[datetime.date]:
        """
        Extracts the date a job was posted
        :param job
        :return: date the job was posted or None
        """
        button = job.find(
            "button", {"class": "action_input save_job zrs_btn_secondary_200"}
        )
        if not button:
            return None

        url_time = button.get("data-href", "")
        url_components = urlparse(url_time)
        params = parse_qs(url_components.query)
        posted_time_str = params.get("posted_time", [None])[0]

        if posted_time_str:
            posted_date = datetime.strptime(
                posted_time_str, "%Y-%m-%dT%H:%M:%SZ"
            ).date()
            return posted_date

        return None

    @staticmethod
    def get_compensation(job: Tag) -> Optional[Compensation]:
        """
        Parses the compensation tag from the job BeautifulSoup object
        :param job
        :return: Compensation object or None
        """
        pay_element = job.find("li", {"class": "perk_item perk_pay"})
        if pay_element is None:
            return None
        pay = pay_element.find("div", {"class": "value"}).find("span").text.strip()

        def create_compensation_object(pay_string: str) -> Compensation:
            """
            Creates a Compensation object from a pay_string
            :param pay_string
            :return: compensation
            """
            interval = ZipRecruiterScraper.get_interval(pay_string.split()[-1])

            amounts = []
            for amount in pay_string.split("to"):
                amount = amount.replace(",", "").strip("$ ").split(" ")[0]
                if "K" in amount:
                    amount = amount.replace("K", "")
                    amount = int(float(amount)) * 1000
                else:
                    amount = int(float(amount))
                amounts.append(amount)

            compensation = Compensation(
                interval=interval,
                min_amount=min(amounts),
                max_amount=max(amounts),
                currency="USD/CAD",
            )

            return compensation

        return create_compensation_object(pay)
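A worked trace of create_compensation_object above:

    # pay = "$80K to $100K Annually"
    # interval: get_interval("annually") -> CompensationInterval.YEARLY
    # split("to") -> ["$80K ", " $100K Annually"]; "80K" -> 80000, "100K" -> 100000
    # -> Compensation(interval=YEARLY, min_amount=80000, max_amount=100000, currency="USD/CAD")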

    @staticmethod
    def get_location(job: Tag) -> Location:
        """
        Extracts the job location from BeautifulSoup object
        :param job:
        :return: location
        """
        location_link = job.find("a", {"class": "company_location"})
        if location_link is not None:
            location_string = location_link.text.strip()
            parts = location_string.split(", ")
            if len(parts) == 2:
                city, state = parts
            else:
                city, state = None, None
        else:
            city, state = None, None
        return Location(city=city, state=state, country=Country.US_CANADA)

    @staticmethod
    def headers() -> dict:
        """
        Returns headers needed for requests
        :return: dict - Dictionary containing headers
        """
        return {
            'Host': 'api.ziprecruiter.com',
            'Cookie': 'ziprecruiter_browser=018188e0-045b-4ad7-aa50-627a6c3d43aa; ziprecruiter_session=5259b2219bf95b6d2299a1417424bc2edc9f4b38; SplitSV=2016-10-19%3AU2FsdGVkX19f9%2Bx70knxc%2FeR3xXR8lWoTcYfq5QjmLU%3D%0A; __cf_bm=qXim3DtLPbOL83GIp.ddQEOFVFTc1OBGPckiHYxcz3o-1698521532-0-AfUOCkgCZyVbiW1ziUwyefCfzNrJJTTKPYnif1FZGQkT60dMowmSU/Y/lP+WiygkFPW/KbYJmyc+MQSkkad5YygYaARflaRj51abnD+SyF9V; zglobalid=68d49bd5-0326-428e-aba8-8a04b64bc67c.af2d99ff7c03.653d61bb; ziprecruiter_browser=018188e0-045b-4ad7-aa50-627a6c3d43aa; ziprecruiter_session=5259b2219bf95b6d2299a1417424bc2edc9f4b38',
            'accept': '*/*',
            'x-zr-zva-override': '100000000;vid:ZT1huzm_EQlDTVEc',
            'x-pushnotificationid': '0ff4983d38d7fc5b3370297f2bcffcf4b3321c418f5c22dd152a0264707602a0',
            'x-deviceid': 'D77B3A92-E589-46A4-8A39-6EF6F1D86006',
            'user-agent': 'Job Search/87.0 (iPhone; CPU iOS 16_6_1 like Mac OS X)',
            'authorization': 'Basic YTBlZjMyZDYtN2I0Yy00MWVkLWEyODMtYTI1NDAzMzI0YTcyOg==',
            'accept-language': 'en-US,en;q=0.9'
        }

@@ -1,10 +0,0 @@
headers = {
    "Host": "api.ziprecruiter.com",
    "accept": "*/*",
    "x-zr-zva-override": "100000000;vid:ZT1huzm_EQlDTVEc",
    "x-pushnotificationid": "0ff4983d38d7fc5b3370297f2bcffcf4b3321c418f5c22dd152a0264707602a0",
    "x-deviceid": "D77B3A92-E589-46A4-8A39-6EF6F1D86006",
    "user-agent": "Job Search/87.0 (iPhone; CPU iOS 16_6_1 like Mac OS X)",
    "authorization": "Basic YTBlZjMyZDYtN2I0Yy00MWVkLWEyODMtYTI1NDAzMzI0YTcyOg==",
    "accept-language": "en-US,en;q=0.9",
}
0 src/tests/__init__.py Normal file
14 src/tests/test_all.py Normal file
@@ -0,0 +1,14 @@
from ..jobspy import scrape_jobs
import pandas as pd


def test_all():
    result = scrape_jobs(
        site_name=["linkedin", "indeed", "zip_recruiter"],
        search_term="software engineer",
        results_wanted=5,
    )

    assert (
        isinstance(result, pd.DataFrame) and not result.empty
    ), "Result should be a non-empty DataFrame"
@@ -1,13 +1,12 @@
from jobspy import scrape_jobs
from ..jobspy import scrape_jobs
import pandas as pd


def test_indeed():
    result = scrape_jobs(
        site_name="indeed",
        search_term="engineer",
        results_wanted=5,
        search_term="software engineer",
    )
    assert (
        isinstance(result, pd.DataFrame) and len(result) == 5
        isinstance(result, pd.DataFrame) and not result.empty
    ), "Result should be a non-empty DataFrame"
12 src/tests/test_linkedin.py Normal file
@@ -0,0 +1,12 @@
from ..jobspy import scrape_jobs
import pandas as pd


def test_linkedin():
    result = scrape_jobs(
        site_name="linkedin",
        search_term="software engineer",
    )
    assert (
        isinstance(result, pd.DataFrame) and not result.empty
    ), "Result should be a non-empty DataFrame"
13 src/tests/test_ziprecruiter.py Normal file
@@ -0,0 +1,13 @@
from ..jobspy import scrape_jobs
import pandas as pd


def test_ziprecruiter():
    result = scrape_jobs(
        site_name="zip_recruiter",
        search_term="software engineer",
    )

    assert (
        isinstance(result, pd.DataFrame) and not result.empty
    ), "Result should be a non-empty DataFrame"
@@ -1,18 +0,0 @@
from jobspy import scrape_jobs
import pandas as pd


def test_all():
    sites = [
        "indeed",
        "glassdoor",
    ]  # ziprecruiter/linkedin needs good ip, and temp fix to pass test on ci
    result = scrape_jobs(
        site_name=sites,
        search_term="engineer",
        results_wanted=5,
    )

    assert (
        isinstance(result, pd.DataFrame) and len(result) == len(sites) * 5
    ), "Result should be a non-empty DataFrame"
@@ -1,13 +0,0 @@
from jobspy import scrape_jobs
import pandas as pd


def test_glassdoor():
    result = scrape_jobs(
        site_name="glassdoor",
        search_term="engineer",
        results_wanted=5,
    )
    assert (
        isinstance(result, pd.DataFrame) and len(result) == 5
    ), "Result should be a non-empty DataFrame"
@@ -1,12 +0,0 @@
from jobspy import scrape_jobs
import pandas as pd


def test_google():
    result = scrape_jobs(
        site_name="google", search_term="software engineer", results_wanted=5
    )

    assert (
        isinstance(result, pd.DataFrame) and len(result) == 5
    ), "Result should be a non-empty DataFrame"
@@ -1,9 +0,0 @@
from jobspy import scrape_jobs
import pandas as pd


def test_linkedin():
    result = scrape_jobs(site_name="linkedin", search_term="engineer", results_wanted=5)
    assert (
        isinstance(result, pd.DataFrame) and len(result) == 5
    ), "Result should be a non-empty DataFrame"
@@ -1,12 +0,0 @@
from jobspy import scrape_jobs
import pandas as pd


def test_ziprecruiter():
    result = scrape_jobs(
        site_name="zip_recruiter", search_term="software engineer", results_wanted=5
    )

    assert (
        isinstance(result, pd.DataFrame) and len(result) == 5
    ), "Result should be a non-empty DataFrame"