Compare commits


55 Commits

Author SHA1 Message Date
Cullen Watson
8570c0651e fix: key error (#176) 2024-07-21 13:05:18 -05:00
Cullen Watson
8678b0bbe4 enh: test on pr (#174) 2024-07-19 14:25:25 -05:00
Cullen Watson
60d4d911c9 lock file (#173) 2024-07-17 21:21:22 -05:00
Lluís Salord Quetglas
2a0cba8c7e FEAT: Optional conversion to annual and known salary source (#170) 2024-07-17 21:05:33 -05:00
Mason DePalma
de70189fa2 Update pyproject.toml (#172)
Changed Numpy to the most recent version so the package can properly install
2024-07-17 20:54:08 -05:00
Cullen Watson
b55c0eb86d docs:readme 2024-07-16 19:24:38 -05:00
Cullen Watson
88c95c4ad5 enh: estimated salary (#169) 2024-07-16 19:20:34 -05:00
Cullen Watson
d8d33d602f docs: readme 2024-07-15 21:30:11 -05:00
Cullen Watson
6330c14879 minor fix 2024-07-15 21:19:01 -05:00
Ali Bakhshi Ilani
48631ea271 Add company industry and job level to linkedin scraper (#166) 2024-07-15 21:07:39 -05:00
Cullen Watson
edffe18e65 enh: listing source (#168) 2024-07-15 20:30:04 -05:00
Lluís Salord Quetglas
0988230a24 FEAT: Add Glassdoor logo data if available (#167) 2024-07-15 20:25:18 -05:00
Cullen Watson
d000a81eb3 Salary parse (#163) 2024-06-09 17:45:38 -05:00
Cullen Watson
ccb0c17660 enh: ziprecruiter full description (#162) 2024-06-09 16:21:01 -05:00
Cullen Watson
df339610fa docs: readme 2024-05-29 19:32:32 -05:00
Cullen Watson
c501006bd8 docs: readme 2024-05-28 16:04:26 -05:00
Cullen Watson
89a3ee231c enh(li): job function (#160) 2024-05-28 16:01:29 -05:00
Cullen
6439f71433 chore: version 2024-05-28 15:39:24 -05:00
adamagassi
7f6271b2e0 LinkedIn scraper fixes: (#159)
Correct initial page offset calculation
Separate page variable from request counter
Fix job offset starting value
Increment offset by number of jobs returned instead of expected value
2024-05-28 15:38:13 -05:00
Cullen Watson
5cb7ffe5fd enh: proxies (#157)
* enh: proxies

* enh: proxies
2024-05-25 14:04:09 -05:00
Cullen Watson
cd29f79796 docs: readme 2024-05-25 11:46:23 -05:00
Cullen Watson
65d2e5e707 Update pyproject.toml 2024-05-20 11:46:36 -05:00
fasih hussain
08d63a87a2 chore: id added for JobPost schema (#152) 2024-05-20 11:45:52 -05:00
Cullen
1ffdb1756f fix: dup line 2024-04-30 12:11:48 -05:00
Cullen Watson
1185693422 delete empty file 2024-04-30 12:06:20 -05:00
Lluís Salord Quetglas
dcd7144318 FIX: Allow Indeed search term with complex syntax (#139) 2024-04-30 12:05:43 -05:00
Cullen Watson
bf73c061bd enh: linkedin company logo (#141) 2024-04-30 12:03:10 -05:00
Lluís Salord Quetglas
8dd08ed9fd FEAT: Allow LinkedIn scraper to get external job apply url (#140) 2024-04-30 11:36:01 -05:00
Cullen Watson
5d3df732e6 docs: readme 2024-03-12 20:46:25 -05:00
Kellen Mace
86f858e06d Update scrape_jobs() parameters info in readme (#130) 2024-03-12 20:45:13 -05:00
Cullen
1089d1f0a5 docs: readme 2024-03-11 21:30:57 -05:00
Cullen
3e93454738 fix(indeed): readd param 2024-03-11 21:23:20 -05:00
Cullen Watson
0d150d519f docs: readme 2024-03-11 14:52:20 -05:00
Cullen Watson
cc3497f929 docs: readme 2024-03-11 14:45:17 -05:00
Cullen Watson
5986f75346 docs: readme 2024-03-11 14:41:12 -05:00
VitaminB16
4b7bdb9313 feat: Adjust log verbosity via verbose arg (#128) 2024-03-11 14:38:44 -05:00
Cullen Watson
80213f28d2 chore: version 2024-03-11 09:43:12 -05:00
Cullen Watson
ada38532c3 fix: indeed empty location term 2024-03-11 09:42:43 -05:00
Cullen Watson
3b0017964c fix: indeed empty search term 2024-03-11 09:21:11 -05:00
VitaminB16
94d8f555fd format: Apply Black formatter to the codebase (#127) 2024-03-10 23:36:27 -05:00
Cullen Watson
e8b4b376b8 docs: readme 2024-03-09 13:40:34 -06:00
Cullen Watson
54ac1bad16 docs: readme 2024-03-09 01:49:05 -06:00
Cullen Watson
0a669e9ba8 enh: indeed more fields (#126) 2024-03-09 01:40:01 -06:00
gigaSec
a4f6851c32 Fix GlassDoor Country Vietnam (#122) 2024-03-04 17:35:57 -06:00
troy-conte
db01bc6bbb log search updates, fix glassdoor (#120) 2024-03-04 16:39:38 -06:00
Cullen Watson
f8a4eccc6b Remove pandas warning (#118) 2024-02-29 21:30:56 -06:00
Cullen Watson
ba3a16b228 Description format (#107) 2024-02-14 16:04:23 -06:00
Cullen Watson
aeb1a50d2c fix job type search (#106) 2024-02-12 11:02:48 -06:00
VitaminB16
91b137ef86 feat: Ability to query by time posted for linkedin, indeed, glassdoor, ziprecruiter (#103) 2024-02-09 14:02:03 -06:00
Cullen Watson
2563c5ca08 enh: Indeed company url (#104) 2024-02-09 12:05:10 -06:00
Cullen Watson
32282305c8 docs: readme 2024-02-08 18:13:19 -06:00
Cullen Watson
ccbea51f3c docs: readme 2024-02-04 09:25:10 -06:00
Cullen Watson
6ec7c24f7f enh(linkedin): search by company ids (#99) 2024-02-04 09:21:45 -06:00
Cullen Watson
02caf1b38d fix(zr): date posted (#98) 2024-02-03 07:20:53 -06:00
Cullen Watson
8e2ab277da fix(ziprecruiter): pagination (#97)
* fix(ziprecruiter): pagination

* chore: version
2024-02-02 20:48:28 -06:00
22 changed files with 3175 additions and 2257 deletions

.github/workflows/python-test.yml (vendored, new file, 22 lines)

@@ -0,0 +1,22 @@
name: Python Tests
on:
pull_request:
branches:
- main
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.8'
- name: Install dependencies
run: |
pip install poetry
poetry install
- name: Run tests
run: poetry run pytest src/tests/

.pre-commit-config.yaml (new file, 7 lines)

@@ -0,0 +1,7 @@
repos:
- repo: https://github.com/psf/black
rev: 24.2.0
hooks:
- id: black
language_version: python
args: [--line-length=88, --quiet]

README.md (179 lines changed)

@@ -11,17 +11,14 @@ work with us.*
 - Scrapes job postings from **LinkedIn**, **Indeed**, **Glassdoor**, & **ZipRecruiter** simultaneously
 - Aggregates the job postings in a Pandas DataFrame
-- Proxy support (HTTP/S, SOCKS)
-[Video Guide for JobSpy](https://www.youtube.com/watch?v=RuP1HrAZnxs&pp=ygUgam9icyBzY3JhcGVyIGJvdCBsaW5rZWRpbiBpbmRlZWQ%3D) - Updated for release v1.1.3
+- Proxies support
 ![jobspy](https://github.com/cullenwatson/JobSpy/assets/78247585/ec7ef355-05f6-4fd3-8161-a817e31c5c57)
 ### Installation
 ```
-pip install python-jobspy
+pip install -U python-jobspy
 ```
 _Python version >= [3.10](https://www.python.org/downloads/release/python-3100/) required_
@@ -29,24 +26,30 @@ _Python version >= [3.10](https://www.python.org/downloads/release/python-3100/)
 ### Usage
 ```python
+import csv
 from jobspy import scrape_jobs
 jobs = scrape_jobs(
     site_name=["indeed", "linkedin", "zip_recruiter", "glassdoor"],
     search_term="software engineer",
     location="Dallas, TX",
-    results_wanted=10,
-    country_indeed='USA'  # only needed for indeed / glassdoor
+    results_wanted=20,
+    hours_old=72,  # (only Linkedin/Indeed is hour specific, others round up to days old)
+    country_indeed='USA',  # only needed for indeed / glassdoor
+    # linkedin_fetch_description=True # get full description, direct job url, company industry and job level (seniority level) for linkedin (slower)
+    # proxies=["208.195.175.46:65095", "208.195.175.45:65095", "localhost"],
 )
 print(f"Found {len(jobs)} jobs")
 print(jobs.head())
-jobs.to_csv("jobs.csv", index=False)  # to_xlsx
+jobs.to_csv("jobs.csv", quoting=csv.QUOTE_NONNUMERIC, escapechar="\\", index=False)  # to_excel
 ```
 ### Output
 ```
-SITE TITLE COMPANY_NAME CITY STATE JOB_TYPE INTERVAL MIN_AMOUNT MAX_AMOUNT JOB_URL DESCRIPTION
+SITE TITLE COMPANY CITY STATE JOB_TYPE INTERVAL MIN_AMOUNT MAX_AMOUNT JOB_URL DESCRIPTION
 indeed Software Engineer AMERICAN SYSTEMS Arlington VA None yearly 200000 150000 https://www.indeed.com/viewjob?jk=5e409e577046... THIS POSITION COMES WITH A 10K SIGNING BONUS!...
 indeed Senior Software Engineer TherapyNotes.com Philadelphia PA fulltime yearly 135000 110000 https://www.indeed.com/viewjob?jk=da39574a40cb... About Us TherapyNotes is the national leader i...
 linkedin Software Engineer - Early Career Lockheed Martin Sunnyvale CA fulltime yearly None None https://www.linkedin.com/jobs/view/3693012711 Description:By bringing together people that u...
@@ -58,61 +61,121 @@ zip_recruiter Software Developer TEKsystems Phoenix
 ### Parameters for `scrape_jobs()`
 ```plaintext
-Required
-├── site_type (List[enum]): linkedin, zip_recruiter, indeed, glassdoor
-└── search_term (str)
 Optional
-├── location (int)
-├── distance (int): in miles
-├── job_type (enum): fulltime, parttime, internship, contract
-├── proxy (str): in format 'http://user:pass@host:port' or [https, socks]
+├── site_name (list|str):
+|    linkedin, zip_recruiter, indeed, glassdoor
+|    (default is all four)
+├── search_term (str)
+├── location (str)
+├── distance (int):
+|    in miles, default 50
+├── job_type (str):
+|    fulltime, parttime, internship, contract
+├── proxies (list):
+|    in format ['user:pass@host:port', 'localhost']
+|    each job board scraper will round robin through the proxies
 ├── is_remote (bool)
-├── full_description (bool): fetches full description for Indeed / LinkedIn (much slower)
-├── results_wanted (int): number of job results to retrieve for each site specified in 'site_type'
-├── easy_apply (bool): filters for jobs that are hosted on the job board site
-├── country_indeed (enum): filters the country on Indeed (see below for correct spelling)
-├── offset (num): starts the search from an offset (e.g. 25 will start the search from the 25th result)
+├── results_wanted (int):
+|    number of job results to retrieve for each site specified in 'site_name'
+├── easy_apply (bool):
+|    filters for jobs that are hosted on the job board site
+├── description_format (str):
+|    markdown, html (Format type of the job descriptions. Default is markdown.)
+├── offset (int):
+|    starts the search from an offset (e.g. 25 will start the search from the 25th result)
+├── hours_old (int):
+|    filters jobs by the number of hours since the job was posted
+|    (ZipRecruiter and Glassdoor round up to next day.)
+├── verbose (int) {0, 1, 2}:
+|    Controls the verbosity of the runtime printouts
+|    (0 prints only errors, 1 is errors+warnings, 2 is all logs. Default is 2.)
+├── linkedin_fetch_description (bool):
+|    fetches full description and direct job url for LinkedIn (Increases requests by O(n))
+├── linkedin_company_ids (list[int]):
+|    searches for linkedin jobs with specific company ids
+├── country_indeed (str):
+|    filters the country on Indeed & Glassdoor (see below for correct spelling)
+├── enforce_annual_salary (bool):
+|    converts wages to annual salary
 ```
+```
+├── Indeed limitations:
+|    Only one from this list can be used in a search:
+|    - hours_old
+|    - job_type & is_remote
+|    - easy_apply
+└── LinkedIn limitations:
+|    Only one from this list can be used in a search:
+|    - hours_old
+|    - easy_apply
+```
 ### JobPost Schema
 ```plaintext
 JobPost
-├── title (str)
-├── company (str)
-├── company_url (str)
-├── job_url (str)
-├── location (object)
-│   ├── country (str)
-│   ├── city (str)
-│   ├── state (str)
-├── description (str)
-├── job_type (str): fulltime, parttime, internship, contract
-├── compensation (object)
-│   ├── interval (str): yearly, monthly, weekly, daily, hourly
-│   ├── min_amount (int)
-│   ├── max_amount (int)
-│   └── currency (enum)
-├── date_posted (date)
-├── emails (str)
-├── num_urgent_words (int)
-└── is_remote (bool)
+├── title
+├── company
+├── company_url
+├── job_url
+├── location
+│   ├── country
+│   ├── city
+│   ├── state
+├── description
+├── job_type: fulltime, parttime, internship, contract
+├── job_function
+│   ├── interval: yearly, monthly, weekly, daily, hourly
+│   ├── min_amount
+│   ├── max_amount
+│   ├── currency
+│   └── salary_source: direct_data, description (parsed from posting)
+├── date_posted
+├── emails
+└── is_remote
+Linkedin specific
+└── job_level
+Linkedin & Indeed specific
+└── company_industry
+Indeed specific
+├── company_country
+├── company_addresses
+├── company_employees_label
+├── company_revenue_label
+├── company_description
+├── ceo_name
+├── ceo_photo_url
+├── logo_photo_url
+└── banner_photo_url
 ```
-### Exceptions
-The following exceptions may be raised when using JobSpy:
-* `LinkedInException`
-* `IndeedException`
-* `ZipRecruiterException`
-* `GlassdoorException`
 ## Supported Countries for Job Searching
 ### **LinkedIn**
-LinkedIn searches globally & uses only the `location` parameter. You can only fetch 1000 jobs max from the LinkedIn endpoint we're using
+LinkedIn searches globally & uses only the `location` parameter.
 ### **ZipRecruiter**
@@ -142,10 +205,14 @@ You can specify the following countries when searching on Indeed (use the exact
 | South Korea | Spain* | Sweden | Switzerland* |
 | Taiwan | Thailand | Turkey | Ukraine |
 | United Arab Emirates | UK* | USA* | Uruguay |
-| Venezuela | Vietnam | | |
+| Venezuela | Vietnam* | | |
-Glassdoor can only fetch 900 jobs from the endpoint we're using on a given search.
+## Notes
+* Indeed is the best scraper currently with no rate limiting.
+* All the job board endpoints are capped at around 1000 jobs on a given search.
+* LinkedIn is the most restrictive and usually rate limits around the 10th page with one ip. Proxies are a must basically.
 ## Frequently Asked Questions
 ---
@@ -159,11 +226,7 @@ persist, [submit an issue](https://github.com/Bunsly/JobSpy/issues).
 **Q: Received a response code 429?**
 **A:** This indicates that you have been blocked by the job board site for sending too many requests. All of the job board sites are aggressive with blocking. We recommend:
-- Waiting some time between scrapes (site-dependent).
-- Trying a VPN or proxy to change your IP address.
+- Wait some time between scrapes (site-dependent).
+- Try using the proxies param to change your IP address.
 ---
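The README changes above document several new `scrape_jobs()` parameters. As a quick sketch of how they compose, assuming the v1.1.61 API exactly as described in the diff (the proxy entries are placeholders, not real endpoints):

```python
from jobspy import scrape_jobs

# Sketch only: parameter names come from the README diff above;
# the proxies below are placeholders.
jobs = scrape_jobs(
    site_name=["indeed", "linkedin"],
    search_term="software engineer",
    location="Dallas, TX",
    results_wanted=20,
    hours_old=72,                # hour-specific on LinkedIn/Indeed; others round up to days
    description_format="html",   # default is markdown
    verbose=1,                   # 0 errors only, 1 errors+warnings, 2 all logs
    proxies=["user:pass@host:port", "localhost"],  # round-robined per scraper
    enforce_annual_salary=True,  # normalize hourly/weekly/monthly wages to yearly
    country_indeed="USA",
)
print(jobs[["site", "title", "salary_source", "min_amount", "max_amount"]].head())
```

Note that `hours_old` is used on its own here, per the Indeed/LinkedIn limitations listed above.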

(deleted file: example script)

@@ -1,30 +0,0 @@
from jobspy import scrape_jobs
import pandas as pd
jobs: pd.DataFrame = scrape_jobs(
site_name=["indeed", "linkedin", "zip_recruiter", "glassdoor"],
search_term="software engineer",
location="Dallas, TX",
results_wanted=25,  # be wary: the higher it is, the more likely you'll get blocked (a rotating proxy can help though)
country_indeed="USA",
# proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
)
# formatting for pandas
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
pd.set_option("display.width", None)
pd.set_option("display.max_colwidth", 50) # set to 0 to see full job url / desc
# 1: output to console
print(jobs)
# 2: output to .csv
jobs.to_csv("./jobs.csv", index=False)
print("outputted to jobs.csv")
# 3: output to .xlsx
# jobs.to_excel('jobs.xlsx', index=False)  # pandas uses to_excel, not to_xlsx
# 4: display in Jupyter Notebook (1. pip install jupyter 2. jupyter notebook)
# display(jobs)

(deleted file: example notebook)

@@ -1,167 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "00a94b47-f47b-420f-ba7e-714ef219c006",
"metadata": {},
"outputs": [],
"source": [
"from jobspy import scrape_jobs\n",
"import pandas as pd\n",
"from IPython.display import display, HTML"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f773e6c-d9fc-42cc-b0ef-63b739e78435",
"metadata": {},
"outputs": [],
"source": [
"pd.set_option('display.max_columns', None)\n",
"pd.set_option('display.max_rows', None)\n",
"pd.set_option('display.width', None)\n",
"pd.set_option('display.max_colwidth', 50)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1253c1f8-9437-492e-9dd3-e7fe51099420",
"metadata": {},
"outputs": [],
"source": [
"# example 1 (no hyperlinks, USA)\n",
"jobs = scrape_jobs(\n",
" site_name=[\"linkedin\"],\n",
" location='san francisco',\n",
" search_term=\"engineer\",\n",
" results_wanted=5,\n",
"\n",
" # use if you want to use a proxy\n",
" # proxy=\"socks5://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
" proxy=\"http://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
" #proxy=\"https://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
")\n",
"display(jobs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6a581b2d-f7da-4fac-868d-9efe143ee20a",
"metadata": {},
"outputs": [],
"source": [
"# example 2 - remote USA & hyperlinks\n",
"jobs = scrape_jobs(\n",
" site_name=[\"linkedin\", \"zip_recruiter\", \"indeed\"],\n",
" # location='san francisco',\n",
" search_term=\"software engineer\",\n",
" country_indeed=\"USA\",\n",
" hyperlinks=True,\n",
" is_remote=True,\n",
" results_wanted=5, \n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fe8289bc-5b64-4202-9a64-7c117c83fd9a",
"metadata": {},
"outputs": [],
"source": [
"# use if hyperlinks=True\n",
"html = jobs.to_html(escape=False)\n",
"# change max-width: 200px to show more or less of the content\n",
"truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
"display(HTML(truncate_width))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "951c2fe1-52ff-407d-8bb1-068049b36777",
"metadata": {},
"outputs": [],
"source": [
"# example 3 - with hyperlinks, international - linkedin (no zip_recruiter)\n",
"jobs = scrape_jobs(\n",
" site_name=[\"linkedin\"],\n",
" location='berlin',\n",
" search_term=\"engineer\",\n",
" hyperlinks=True,\n",
" results_wanted=5,\n",
" easy_apply=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e37a521-caef-441c-8fc2-2eb5b2e7da62",
"metadata": {},
"outputs": [],
"source": [
"# use if hyperlinks=True\n",
"html = jobs.to_html(escape=False)\n",
"# change max-width: 200px to show more or less of the content\n",
"truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
"display(HTML(truncate_width))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0650e608-0b58-4bf5-ae86-68348035b16a",
"metadata": {},
"outputs": [],
"source": [
"# example 4 - international indeed (no zip_recruiter)\n",
"jobs = scrape_jobs(\n",
" site_name=[\"indeed\"],\n",
" search_term=\"engineer\",\n",
" country_indeed = \"China\",\n",
" hyperlinks=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40913ac8-3f8a-4d7e-ac47-afb88316432b",
"metadata": {},
"outputs": [],
"source": [
"# use if hyperlinks=True\n",
"html = jobs.to_html(escape=False)\n",
"# change max-width: 200px to show more or less of the content\n",
"truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
"display(HTML(truncate_width))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

(deleted file: example script)

@@ -1,77 +0,0 @@
from jobspy import scrape_jobs
import pandas as pd
import os
import time
# creates a new csv filename if jobs.csv already exists
csv_filename = "jobs.csv"
counter = 1
while os.path.exists(csv_filename):
csv_filename = f"jobs_{counter}.csv"
counter += 1
# results wanted and offset
results_wanted = 1000
offset = 0
all_jobs = []
# max retries
max_retries = 3
# number of results at each iteration
results_in_each_iteration = 30
while len(all_jobs) < results_wanted:
retry_count = 0
while retry_count < max_retries:
print("Doing from", offset, "to", offset + results_in_each_iteration, "jobs")
try:
jobs = scrape_jobs(
site_name=["indeed"],
search_term="software engineer",
# New York, NY
# Dallas, TX
# Los Angeles, CA
location="Los Angeles, CA",
results_wanted=min(results_in_each_iteration, results_wanted - len(all_jobs)),
country_indeed="USA",
offset=offset,
# proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
)
# Add the scraped jobs to the list
all_jobs.extend(jobs.to_dict('records'))
# Increment the offset for the next page of results
offset += results_in_each_iteration
# Add a delay to avoid rate limiting (you can adjust the delay time as needed)
print(f"Scraped {len(all_jobs)} jobs")
print("Sleeping secs", 100 * (retry_count + 1))
time.sleep(100 * (retry_count + 1))  # back off between requests to avoid rate limiting
break # Break out of the retry loop if successful
except Exception as e:
print(f"Error: {e}")
retry_count += 1
print("Sleeping secs before retry", 100 * (retry_count + 1))
time.sleep(100 * (retry_count + 1))
if retry_count >= max_retries:
print("Max retries reached. Exiting.")
break
# DataFrame from the collected job data
jobs_df = pd.DataFrame(all_jobs)
# Formatting
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
pd.set_option("display.width", None)
pd.set_option("display.max_colwidth", 50)
print(jobs_df)
jobs_df.to_csv(csv_filename, index=False)
print(f"Outputted to {csv_filename}")

poetry.lock (generated, 2353 lines changed)

File diff suppressed because it is too large.

poetry.toml (new file, 2 lines)

@@ -0,0 +1,2 @@
[virtualenvs]
in-project = true

pyproject.toml

@@ -1,6 +1,6 @@
[tool.poetry] [tool.poetry]
name = "python-jobspy" name = "python-jobspy"
version = "1.1.39" version = "1.1.61"
description = "Job scraper for LinkedIn, Indeed, Glassdoor & ZipRecruiter" description = "Job scraper for LinkedIn, Indeed, Glassdoor & ZipRecruiter"
authors = ["Zachary Hampton <zachary@bunsly.com>", "Cullen Watson <cullen@bunsly.com>"] authors = ["Zachary Hampton <zachary@bunsly.com>", "Cullen Watson <cullen@bunsly.com>"]
homepage = "https://github.com/Bunsly/JobSpy" homepage = "https://github.com/Bunsly/JobSpy"
@@ -13,17 +13,24 @@ packages = [
[tool.poetry.dependencies] [tool.poetry.dependencies]
python = "^3.10" python = "^3.10"
requests = "^2.31.0" requests = "^2.31.0"
tls-client = "*"
beautifulsoup4 = "^4.12.2" beautifulsoup4 = "^4.12.2"
pandas = "^2.1.0" pandas = "^2.1.0"
NUMPY = "1.24.2" NUMPY = "1.26.3"
pydantic = "^2.3.0" pydantic = "^2.3.0"
tls-client = "^1.0.1"
markdownify = "^0.11.6"
regex = "^2024.4.28"
[tool.poetry.group.dev.dependencies] [tool.poetry.group.dev.dependencies]
pytest = "^7.4.1" pytest = "^7.4.1"
jupyter = "^1.0.0" jupyter = "^1.0.0"
black = "*"
pre-commit = "*"
[build-system] [build-system]
requires = ["poetry-core"] requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api" build-backend = "poetry.core.masonry.api"
[tool.black]
line-length = 88

src/jobspy/__init__.py

@@ -1,14 +1,16 @@
+from __future__ import annotations
 import pandas as pd
-import concurrent.futures
-from concurrent.futures import ThreadPoolExecutor
-from typing import Tuple, Optional
+from typing import Tuple
+from concurrent.futures import ThreadPoolExecutor, as_completed
 from .jobs import JobType, Location
+from .scrapers.utils import logger, set_logger_level, extract_salary
 from .scrapers.indeed import IndeedScraper
 from .scrapers.ziprecruiter import ZipRecruiterScraper
 from .scrapers.glassdoor import GlassdoorScraper
 from .scrapers.linkedin import LinkedInScraper
-from .scrapers import ScraperInput, Site, JobResponse, Country
+from .scrapers import SalarySource, ScraperInput, Site, JobResponse, Country
 from .scrapers.exceptions import (
     LinkedInException,
     IndeedException,
@@ -16,38 +18,43 @@ from .scrapers.exceptions import (
     GlassdoorException,
 )
-def scrape_jobs(
-    site_name: str | list[str] | Site | list[Site],
-    search_term: str,
-    location: str = "",
-    distance: int = None,
-    is_remote: bool = False,
-    job_type: str = None,
-    easy_apply: bool = False,  # linkedin
-    results_wanted: int = 15,
-    country_indeed: str = "usa",
-    hyperlinks: bool = False,
-    proxy: Optional[str] = None,
-    full_description: Optional[bool] = False,
-    offset: Optional[int] = 0,
-) -> pd.DataFrame:
-    """
-    Simultaneously scrapes job data from multiple job sites.
-    :return: results_wanted: pandas dataframe containing job data
-    """
+def scrape_jobs(
+    site_name: str | list[str] | Site | list[Site] | None = None,
+    search_term: str | None = None,
+    location: str | None = None,
+    distance: int | None = 50,
+    is_remote: bool = False,
+    job_type: str | None = None,
+    easy_apply: bool | None = None,
+    results_wanted: int = 15,
+    country_indeed: str = "usa",
+    hyperlinks: bool = False,
+    proxies: list[str] | str | None = None,
+    description_format: str = "markdown",
+    linkedin_fetch_description: bool | None = False,
+    linkedin_company_ids: list[int] | None = None,
+    offset: int | None = 0,
+    hours_old: int = None,
+    enforce_annual_salary: bool = False,
+    verbose: int = 2,
+    **kwargs,
+) -> pd.DataFrame:
+    """
+    Simultaneously scrapes job data from multiple job sites.
+    :return: pandas dataframe containing job data
+    """
     SCRAPER_MAPPING = {
         Site.LINKEDIN: LinkedInScraper,
         Site.INDEED: IndeedScraper,
         Site.ZIP_RECRUITER: ZipRecruiterScraper,
         Site.GLASSDOOR: GlassdoorScraper,
     }
+    set_logger_level(verbose)
-def _map_str_to_site(site_name: str) -> Site:
-    return Site[site_name.upper()]
+    def map_str_to_site(site_name: str) -> Site:
+        return Site[site_name.upper()]
     def get_enum_from_value(value_str):
         for job_type in JobType:
             if value_str in job_type.value:
@@ -56,18 +63,23 @@ def scrape_jobs(
     job_type = get_enum_from_value(job_type) if job_type else None
-    if type(site_name) == str:
-        site_type = [_map_str_to_site(site_name)]
-    else:  #: if type(site_name) == list
-        site_type = [
-            _map_str_to_site(site) if type(site) == str else site_name
-            for site in site_name
-        ]
+    def get_site_type():
+        site_types = list(Site)
+        if isinstance(site_name, str):
+            site_types = [map_str_to_site(site_name)]
+        elif isinstance(site_name, Site):
+            site_types = [site_name]
+        elif isinstance(site_name, list):
+            site_types = [
+                map_str_to_site(site) if isinstance(site, str) else site
+                for site in site_name
+            ]
+        return site_types
     country_enum = Country.from_string(country_indeed)
     scraper_input = ScraperInput(
-        site_type=site_type,
+        site_type=get_site_type(),
         country=country_enum,
         search_term=search_term,
         location=location,
@@ -75,30 +87,21 @@ def scrape_jobs(
         is_remote=is_remote,
         job_type=job_type,
         easy_apply=easy_apply,
-        full_description=full_description,
+        description_format=description_format,
+        linkedin_fetch_description=linkedin_fetch_description,
         results_wanted=results_wanted,
+        linkedin_company_ids=linkedin_company_ids,
         offset=offset,
+        hours_old=hours_old,
     )
     def scrape_site(site: Site) -> Tuple[str, JobResponse]:
         scraper_class = SCRAPER_MAPPING[site]
-        scraper = scraper_class(proxy=proxy)
+        scraper = scraper_class(proxies=proxies)
-        try:
-            scraped_data: JobResponse = scraper.scrape(scraper_input)
-        except (LinkedInException, IndeedException, ZipRecruiterException) as lie:
-            raise lie
-        except Exception as e:
-            if site == Site.LINKEDIN:
-                raise LinkedInException(str(e))
-            if site == Site.INDEED:
-                raise IndeedException(str(e))
-            if site == Site.ZIP_RECRUITER:
-                raise ZipRecruiterException(str(e))
-            if site == Site.GLASSDOOR:
-                raise GlassdoorException(str(e))
-            else:
-                raise e
+        scraped_data: JobResponse = scraper.scrape(scraper_input)
+        cap_name = site.value.capitalize()
+        site_name = "ZipRecruiter" if cap_name == "Zip_recruiter" else cap_name
+        logger.info(f"{site_name} finished scraping")
         return site.value, scraped_data
     site_to_jobs_dict = {}
@@ -112,18 +115,32 @@ def scrape_jobs(
             executor.submit(worker, site): site for site in scraper_input.site_type
         }
-        for future in concurrent.futures.as_completed(future_to_site):
+        for future in as_completed(future_to_site):
             site_value, scraped_data = future.result()
             site_to_jobs_dict[site_value] = scraped_data
+    def convert_to_annual(job_data: dict):
+        if job_data["interval"] == "hourly":
+            job_data["min_amount"] *= 2080
+            job_data["max_amount"] *= 2080
+        if job_data["interval"] == "monthly":
+            job_data["min_amount"] *= 12
+            job_data["max_amount"] *= 12
+        if job_data["interval"] == "weekly":
+            job_data["min_amount"] *= 52
+            job_data["max_amount"] *= 52
+        if job_data["interval"] == "daily":
+            job_data["min_amount"] *= 260
+            job_data["max_amount"] *= 260
+        job_data["interval"] = "yearly"
     jobs_dfs: list[pd.DataFrame] = []
     for site, job_response in site_to_jobs_dict.items():
         for job in job_response.jobs:
             job_data = job.dict()
-            job_data[
-                "job_url_hyper"
-            ] = f'<a href="{job_data["job_url"]}">{job_data["job_url"]}</a>'
+            job_url = job_data["job_url"]
+            job_data["job_url_hyper"] = f'<a href="{job_url}">{job_url}</a>'
             job_data["site"] = site
             job_data["company"] = job_data["company_name"]
             job_data["job_type"] = (
@@ -149,38 +166,87 @@ def scrape_jobs(
                 job_data["min_amount"] = compensation_obj.get("min_amount")
                 job_data["max_amount"] = compensation_obj.get("max_amount")
                 job_data["currency"] = compensation_obj.get("currency", "USD")
-            else:
-                job_data["interval"] = None
-                job_data["min_amount"] = None
-                job_data["max_amount"] = None
-                job_data["currency"] = None
+                job_data["salary_source"] = SalarySource.DIRECT_DATA.value
+                if enforce_annual_salary and (
+                    job_data["interval"]
+                    and job_data["interval"] != "yearly"
+                    and job_data["min_amount"]
+                    and job_data["max_amount"]
+                ):
+                    convert_to_annual(job_data)
+            else:
+                if country_enum == Country.USA:
+                    (
+                        job_data["interval"],
+                        job_data["min_amount"],
+                        job_data["max_amount"],
+                        job_data["currency"],
+                    ) = extract_salary(
+                        job_data["description"],
+                        enforce_annual_salary=enforce_annual_salary,
+                    )
+                    job_data["salary_source"] = SalarySource.DESCRIPTION.value
+            job_data["salary_source"] = (
+                job_data["salary_source"]
+                if "min_amount" in job_data and job_data["min_amount"]
+                else None
+            )
             job_df = pd.DataFrame([job_data])
             jobs_dfs.append(job_df)
     if jobs_dfs:
-        jobs_df = pd.concat(jobs_dfs, ignore_index=True)
-        desired_order: list[str] = [
-            "job_url_hyper" if hyperlinks else "job_url",
-            "site",
+        # Step 1: Filter out all-NA columns from each DataFrame before concatenation
+        filtered_dfs = [df.dropna(axis=1, how="all") for df in jobs_dfs]
+        # Step 2: Concatenate the filtered DataFrames
+        jobs_df = pd.concat(filtered_dfs, ignore_index=True)
+        # Desired column order
+        desired_order = [
+            "id",
+            "site",
+            "job_url_hyper" if hyperlinks else "job_url",
+            "job_url_direct",
             "title",
             "company",
+            "company_url",
             "location",
             "job_type",
             "date_posted",
+            "salary_source",
             "interval",
             "min_amount",
             "max_amount",
             "currency",
             "is_remote",
-            "num_urgent_words",
-            "benefits",
+            "job_level",
+            "job_function",
+            "company_industry",
+            "listing_type",
             "emails",
             "description",
-            "company_url",
+            "company_url_direct",
+            "company_addresses",
+            "company_num_employees",
+            "company_revenue",
+            "company_description",
+            "logo_photo_url",
+            "banner_photo_url",
+            "ceo_name",
+            "ceo_photo_url",
         ]
-        jobs_formatted_df = jobs_df[desired_order]
-    else:
-        jobs_formatted_df = pd.DataFrame()
-    return jobs_formatted_df
+        # Step 3: Ensure all desired columns are present, adding missing ones as empty
+        for column in desired_order:
+            if column not in jobs_df.columns:
+                jobs_df[column] = None  # Add missing columns as empty
+        # Reorder the DataFrame according to the desired order
+        jobs_df = jobs_df[desired_order]
+        # Step 4: Sort the DataFrame as required
+        return jobs_df.sort_values(by=["site", "date_posted"], ascending=[True, False])
+    else:
+        return pd.DataFrame()
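The new `convert_to_annual()` helper above normalizes pay intervals with fixed multipliers: 2080 working hours, 52 weeks, 260 weekdays, or 12 months per year. A standalone restatement of that arithmetic for illustration; the names here are not jobspy APIs:

```python
# Illustrative only: mirrors the conversion factors in convert_to_annual() above.
ANNUAL_FACTOR = {"hourly": 2080, "daily": 260, "weekly": 52, "monthly": 12}

def to_annual(interval: str, amount: float) -> float:
    """Convert a wage quoted at `interval` to a yearly figure."""
    return amount * ANNUAL_FACTOR.get(interval, 1)

assert to_annual("hourly", 50.0) == 104_000.0     # 50/hr x 2080 hrs
assert to_annual("monthly", 10_000.0) == 120_000.0
```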

src/jobspy/jobs.py

@@ -1,3 +1,5 @@
+from __future__ import annotations
 from typing import Optional
 from datetime import date
 from enum import Enum
@@ -57,7 +59,7 @@ class JobType(Enum):
 class Country(Enum):
     """
     Gets the subdomain for Indeed and Glassdoor.
-    The second item in the tuple is the subdomain for Indeed
+    The second item in the tuple is the subdomain (and API country code if there's a ':' separator) for Indeed
     The third item in the tuple is the subdomain (and tld if there's a ':' separator) for Glassdoor
     """
@@ -118,11 +120,11 @@ class Country(Enum):
     TURKEY = ("turkey", "tr")
     UKRAINE = ("ukraine", "ua")
     UNITEDARABEMIRATES = ("united arab emirates", "ae")
-    UK = ("uk,united kingdom", "uk", "co.uk")
+    UK = ("uk,united kingdom", "uk:gb", "co.uk")
-    USA = ("usa,us,united states", "www", "com")
+    USA = ("usa,us,united states", "www:us", "com")
     URUGUAY = ("uruguay", "uy")
     VENEZUELA = ("venezuela", "ve")
-    VIETNAM = ("vietnam", "vn")
+    VIETNAM = ("vietnam", "vn", "com")
     # internal for ziprecruiter
     US_CANADA = ("usa/ca", "www")
@@ -132,7 +134,10 @@ class Country(Enum):
     @property
     def indeed_domain_value(self):
-        return self.value[1]
+        subdomain, _, api_country_code = self.value[1].partition(":")
+        if subdomain and api_country_code:
+            return subdomain, api_country_code.upper()
+        return self.value[1], self.value[1].upper()
     @property
     def glassdoor_domain_value(self):
@@ -145,7 +150,7 @@ class Country(Enum):
         else:
             raise Exception(f"Glassdoor is not available for {self.name}")
-    def get_url(self):
+    def get_glassdoor_url(self):
         return f"https://{self.glassdoor_domain_value}/"
     @classmethod
@@ -153,7 +158,7 @@ class Country(Enum):
         """Convert a string to the corresponding Country enum."""
         country_str = country_str.strip().lower()
         for country in cls:
-            country_names = country.value[0].split(',')
+            country_names = country.value[0].split(",")
             if country_str in country_names:
                 return country
         valid_countries = [country.value for country in cls]
@@ -163,7 +168,7 @@ class Country(Enum):
 class Location(BaseModel):
-    country: Country | None = None
+    country: Country | str | None = None
     city: Optional[str] = None
     state: Optional[str] = None
@@ -173,7 +178,12 @@ class Location(BaseModel):
         location_parts.append(self.city)
         if self.state:
             location_parts.append(self.state)
-        if self.country and self.country not in (Country.US_CANADA, Country.WORLDWIDE):
+        if isinstance(self.country, str):
+            location_parts.append(self.country)
+        elif self.country and self.country not in (
+            Country.US_CANADA,
+            Country.WORLDWIDE,
+        ):
             country_name = self.country.value[0]
             if "," in country_name:
                 country_name = country_name.split(",")[0]
@@ -193,33 +203,65 @@ class CompensationInterval(Enum):
     @classmethod
     def get_interval(cls, pay_period):
+        interval_mapping = {
+            "YEAR": cls.YEARLY,
+            "HOUR": cls.HOURLY,
+        }
+        if pay_period in interval_mapping:
+            return interval_mapping[pay_period].value
+        else:
             return cls[pay_period].value if pay_period in cls.__members__ else None
 class Compensation(BaseModel):
     interval: Optional[CompensationInterval] = None
-    min_amount: int | None = None
+    min_amount: float | None = None
-    max_amount: int | None = None
+    max_amount: float | None = None
     currency: Optional[str] = "USD"
+class DescriptionFormat(Enum):
+    MARKDOWN = "markdown"
+    HTML = "html"
 class JobPost(BaseModel):
+    id: str | None = None
     title: str
-    company_name: str
+    company_name: str | None
     job_url: str
+    job_url_direct: str | None = None
     location: Optional[Location]
     description: str | None = None
     company_url: str | None = None
+    company_url_direct: str | None = None
     job_type: list[JobType] | None = None
     compensation: Compensation | None = None
     date_posted: date | None = None
-    benefits: str | None = None
     emails: list[str] | None = None
-    num_urgent_words: int | None = None
     is_remote: bool | None = None
-    # company_industry: str | None = None
+    listing_type: str | None = None
+    # linkedin specific
+    job_level: str | None = None
+    # linkedin and indeed specific
+    company_industry: str | None = None
+    # indeed specific
+    company_addresses: str | None = None
+    company_num_employees: str | None = None
+    company_revenue: str | None = None
+    company_description: str | None = None
+    ceo_name: str | None = None
+    ceo_photo_url: str | None = None
+    logo_photo_url: str | None = None
+    banner_photo_url: str | None = None
+    # linkedin only atm
+    job_function: str | None = None
 class JobResponse(BaseModel):
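The reworked `indeed_domain_value` property packs an optional Indeed API country code after a `:` in the tuple's second item. A small standalone sketch of the same `str.partition()` logic, using values from the enum diff above:

```python
# Mirrors the partition logic in Country.indeed_domain_value (illustrative).
def indeed_domain_value(field: str) -> tuple[str, str]:
    subdomain, _, api_country_code = field.partition(":")
    if subdomain and api_country_code:
        return subdomain, api_country_code.upper()
    return field, field.upper()

assert indeed_domain_value("uk:gb") == ("uk", "GB")  # UK entry
assert indeed_domain_value("vn") == ("vn", "VN")     # Vietnam entry, no ':' separator
```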

src/jobspy/scrapers/__init__.py

@@ -1,5 +1,15 @@
-from ..jobs import Enum, BaseModel, JobType, JobResponse, Country
-from typing import List, Optional, Any
+from __future__ import annotations
+from abc import ABC, abstractmethod
+from ..jobs import (
+    Enum,
+    BaseModel,
+    JobType,
+    JobResponse,
+    Country,
+    DescriptionFormat,
+)
 class Site(Enum):
@@ -8,27 +18,33 @@ class Site(Enum):
     ZIP_RECRUITER = "zip_recruiter"
     GLASSDOOR = "glassdoor"
+class SalarySource(Enum):
+    DIRECT_DATA = "direct_data"
+    DESCRIPTION = "description"
 class ScraperInput(BaseModel):
-    site_type: List[Site]
+    site_type: list[Site]
-    search_term: str
+    search_term: str | None = None
-    location: str = None
+    location: str | None = None
-    country: Optional[Country] = Country.USA
+    country: Country | None = Country.USA
-    distance: Optional[int] = None
+    distance: int | None = None
     is_remote: bool = False
-    job_type: Optional[JobType] = None
+    job_type: JobType | None = None
-    easy_apply: bool = None  # linkedin
+    easy_apply: bool | None = None
-    full_description: bool = False
     offset: int = 0
+    linkedin_fetch_description: bool = False
+    linkedin_company_ids: list[int] | None = None
+    description_format: DescriptionFormat | None = DescriptionFormat.MARKDOWN
     results_wanted: int = 15
+    hours_old: int | None = None
-class Scraper:
+class Scraper(ABC):
-    def __init__(self, site: Site, proxy: Optional[List[str]] = None):
+    def __init__(self, site: Site, proxies: list[str] | None = None):
+        self.proxies = proxies
         self.site = site
-        self.proxy = (lambda p: {"http": p, "https": p} if p else None)(proxy)
-    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
-        ...
+    @abstractmethod
+    def scrape(self, scraper_input: ScraperInput) -> JobResponse: ...
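With `Scraper` now an abstract base class, every job board scraper must implement `scrape()`. A minimal hypothetical subclass under that contract, assuming `Scraper`, `ScraperInput`, `Site`, and `JobResponse` are importable from `jobspy.scrapers` as laid out in the diff:

```python
# Hypothetical subclass for illustration; not part of jobspy.
from jobspy.scrapers import Scraper, ScraperInput, Site, JobResponse

class DemoScraper(Scraper):
    def __init__(self, proxies: list[str] | None = None):
        super().__init__(Site.LINKEDIN, proxies=proxies)

    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
        # A real scraper would fetch and parse listings here.
        return JobResponse(jobs=[])

# Subclasses that omit scrape() now fail at instantiation with TypeError.
```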

src/jobspy/scrapers/glassdoor/__init__.py

@@ -4,17 +4,24 @@ jobspy.scrapers.glassdoor
 This module contains routines to scrape Glassdoor.
 """
+from __future__ import annotations
+import re
 import json
 import requests
-from bs4 import BeautifulSoup
-from typing import Optional
+from typing import Optional, Tuple
 from datetime import datetime, timedelta
 from concurrent.futures import ThreadPoolExecutor, as_completed
-from ..utils import count_urgent_words, extract_emails_from_text
 from .. import Scraper, ScraperInput, Site
+from ..utils import extract_emails_from_text
 from ..exceptions import GlassdoorException
-from ..utils import create_session, modify_and_get_description
+from ..utils import (
+    create_session,
+    markdown_converter,
+    logger,
+)
 from ...jobs import (
     JobPost,
     Compensation,
@@ -22,85 +29,154 @@ from ...jobs import (
     Location,
     JobResponse,
     JobType,
+    DescriptionFormat,
 )
 class GlassdoorScraper(Scraper):
-    def __init__(self, proxy: Optional[str] = None):
+    def __init__(self, proxies: list[str] | str | None = None):
         """
         Initializes GlassdoorScraper with the Glassdoor job search url
         """
         site = Site(Site.GLASSDOOR)
-        super().__init__(site, proxy=proxy)
+        super().__init__(site, proxies=proxies)
-        self.url = None
+        self.base_url = None
         self.country = None
+        self.session = None
+        self.scraper_input = None
         self.jobs_per_page = 30
+        self.max_pages = 30
         self.seen_urls = set()
-    def fetch_jobs_page(
+    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
+        """
+        Scrapes Glassdoor for jobs with scraper_input criteria.
+        :param scraper_input: Information about job search criteria.
+        :return: JobResponse containing a list of jobs.
+        """
+        self.scraper_input = scraper_input
+        self.scraper_input.results_wanted = min(900, scraper_input.results_wanted)
+        self.base_url = self.scraper_input.country.get_glassdoor_url()
+        self.session = create_session(proxies=self.proxies, is_tls=True, has_retry=True)
+        token = self._get_csrf_token()
+        self.headers["gd-csrf-token"] = token if token else self.fallback_token
+        location_id, location_type = self._get_location(
+            scraper_input.location, scraper_input.is_remote
+        )
+        if location_type is None:
+            logger.error("Glassdoor: location not parsed")
+            return JobResponse(jobs=[])
+        job_list: list[JobPost] = []
+        cursor = None
+        range_start = 1 + (scraper_input.offset // self.jobs_per_page)
+        tot_pages = (scraper_input.results_wanted // self.jobs_per_page) + 2
+        range_end = min(tot_pages, self.max_pages + 1)
+        for page in range(range_start, range_end):
+            logger.info(f"Glassdoor search page: {page}")
+            try:
+                jobs, cursor = self._fetch_jobs_page(
+                    scraper_input, location_id, location_type, page, cursor
+                )
+                job_list.extend(jobs)
+                if not jobs or len(job_list) >= scraper_input.results_wanted:
+                    job_list = job_list[: scraper_input.results_wanted]
+                    break
+            except Exception as e:
+                logger.error(f"Glassdoor: {str(e)}")
+                break
+        return JobResponse(jobs=job_list)
+    def _fetch_jobs_page(
         self,
         scraper_input: ScraperInput,
         location_id: int,
         location_type: str,
         page_num: int,
         cursor: str | None,
-    ) -> (list[JobPost], str | None):
+    ) -> Tuple[list[JobPost], str | None]:
         """
         Scrapes a page of Glassdoor for jobs with scraper_input criteria
         """
+        jobs = []
+        self.scraper_input = scraper_input
         try:
-            payload = self.add_payload(
-                scraper_input, location_id, location_type, page_num, cursor
-            )
-            session = create_session(self.proxy, is_tls=False, has_retry=True)
-            response = session.post(
-                f"{self.url}/graph", headers=self.headers(), timeout=10, data=payload
-            )
+            payload = self._add_payload(location_id, location_type, page_num, cursor)
+            response = self.session.post(
+                f"{self.base_url}/graph",
+                headers=self.headers,
+                timeout_seconds=15,
+                data=payload,
+            )
             if response.status_code != 200:
-                raise GlassdoorException(
-                    f"bad response status code: {response.status_code}"
-                )
+                exc_msg = f"bad response status code: {response.status_code}"
+                raise GlassdoorException(exc_msg)
             res_json = response.json()[0]
             if "errors" in res_json:
                 raise ValueError("Error encountered in API response")
-        except Exception as e:
-            raise GlassdoorException(str(e))
+        except (
+            requests.exceptions.ReadTimeout,
+            GlassdoorException,
+            ValueError,
+            Exception,
+        ) as e:
+            logger.error(f"Glassdoor: {str(e)}")
+            return jobs, None
         jobs_data = res_json["data"]["jobListings"]["jobListings"]
-        jobs = []
         with ThreadPoolExecutor(max_workers=self.jobs_per_page) as executor:
-            future_to_job_data = {executor.submit(self.process_job, job): job for job in jobs_data}
+            future_to_job_data = {
+                executor.submit(self._process_job, job): job for job in jobs_data
+            }
             for future in as_completed(future_to_job_data):
-                job_data = future_to_job_data[future]
                 try:
                     job_post = future.result()
                     if job_post:
                         jobs.append(job_post)
                 except Exception as exc:
-                    raise GlassdoorException(f'Glassdoor generated an exception: {exc}')
+                    raise GlassdoorException(f"Glassdoor generated an exception: {exc}")
         return jobs, self.get_cursor_for_page(
             res_json["data"]["jobListings"]["paginationCursors"], page_num + 1
         )
-    def process_job(self, job_data):
-        """Processes a single job and fetches its description."""
+    def _get_csrf_token(self):
+        """
+        Fetches csrf token needed for API by visiting a generic page
+        """
+        res = self.session.get(
+            f"{self.base_url}/Job/computer-science-jobs.htm", headers=self.headers
+        )
+        pattern = r'"token":\s*"([^"]+)"'
+        matches = re.findall(pattern, res.text)
+        token = None
+        if matches:
+            token = matches[0]
+        return token
+    def _process_job(self, job_data):
+        """
+        Processes a single job and fetches its description.
+        """
         job_id = job_data["jobview"]["job"]["listingId"]
-        job_url = f'{self.url}job-listing/j?jl={job_id}'
+        job_url = f"{self.base_url}job-listing/j?jl={job_id}"
         if job_url in self.seen_urls:
             return None
         self.seen_urls.add(job_url)
         job = job_data["jobview"]
         title = job["job"]["jobTitleText"]
         company_name = job["header"]["employerNameFromSearch"]
-        company_id = job_data['jobview']['header']['employer']['id']
+        company_id = job_data["jobview"]["header"]["employer"]["id"]
         location_name = job["header"].get("locationName", "")
         location_type = job["header"].get("locationType", "")
         age_in_days = job["header"].get("ageInDays")
         is_remote, location = False, None
-        date_posted = (datetime.now() - timedelta(days=age_in_days)).date() if age_in_days else None
+        date_diff = (datetime.now() - timedelta(days=age_in_days)).date()
+        date_posted = date_diff if age_in_days is not None else None
         if location_type == "S":
             is_remote = True
@@ -108,15 +184,24 @@ class GlassdoorScraper(Scraper):
             location = self.parse_location(location_name)
         compensation = self.parse_compensation(job["header"])
         try:
-            description = self.fetch_job_description(job_id)
+            description = self._fetch_job_description(job_id)
-        except Exception as e :
+        except:
             description = None
+        company_url = f"{self.base_url}Overview/W-EI_IE{company_id}.htm"
-        job_post = JobPost(
+        company_logo = (
+            job_data["jobview"].get("overview", {}).get("squareLogoUrl", None)
+        )
+        listing_type = (
+            job_data["jobview"]
+            .get("header", {})
+            .get("adOrderSponsorshipLevel", "")
+            .lower()
+        )
+        return JobPost(
+            id=str(job_id),
             title=title,
-            company_url=f"{self.url}Overview/W-EI_IE{company_id}.htm" if company_id else None,
+            company_url=company_url if company_id else None,
             company_name=company_name,
             date_posted=date_posted,
             job_url=job_url,
@@ -125,60 +210,22 @@ class GlassdoorScraper(Scraper):
             is_remote=is_remote,
             description=description,
             emails=extract_emails_from_text(description) if description else None,
-            num_urgent_words=count_urgent_words(description) if description else None,
+            logo_photo_url=company_logo,
+            listing_type=listing_type,
         )
-        return job_post
-    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
+    def _fetch_job_description(self, job_id):
         """
-        Scrapes Glassdoor for jobs with scraper_input criteria.
-        :param scraper_input: Information about job search criteria.
-        :return: JobResponse containing a list of jobs.
+        Fetches the job description for a single job ID.
         """
-        scraper_input.results_wanted = min(900, scraper_input.results_wanted)
-        self.country = scraper_input.country
-        self.url = self.country.get_url()
-        location_id, location_type = self.get_location(
-            scraper_input.location, scraper_input.is_remote
-        )
-        all_jobs: list[JobPost] = []
-        cursor = None
-        max_pages = 30
-        try:
-            for page in range(
-                1 + (scraper_input.offset // self.jobs_per_page),
-                min(
-                    (scraper_input.results_wanted // self.jobs_per_page) + 2,
-                    max_pages + 1,
-                ),
-            ):
-                try:
-                    jobs, cursor = self.fetch_jobs_page(
-                        scraper_input, location_id, location_type, page, cursor
-                    )
-                    all_jobs.extend(jobs)
-                    if len(all_jobs) >= scraper_input.results_wanted:
-                        all_jobs = all_jobs[: scraper_input.results_wanted]
-                        break
-                except Exception as e:
-                    raise GlassdoorException(str(e))
-        except Exception as e:
-            raise GlassdoorException(str(e))
-        return JobResponse(jobs=all_jobs)
-    def fetch_job_description(self, job_id):
-        """Fetches the job description for a single job ID."""
-        url = f"{self.url}/graph"
+        url = f"{self.base_url}/graph"
         body = [
             {
                 "operationName": "JobDetailQuery",
                 "variables": {
                     "jl": job_id,
                     "queryString": "q",
-                    "pageTypeEnum": "SERP"
+                    "pageTypeEnum": "SERP",
                 },
                 "query": """
                 query JobDetailQuery($jl: Long!, $queryString: String, $pageTypeEnum: PageTypeEnum) {
@@ -193,23 +240,89 @@ class GlassdoorScraper(Scraper):
                     __typename
                 }
             }
-            """
+            """,
             }
         ]
-        response = requests.post(url, json=body, headers=GlassdoorScraper.headers())
+        res = requests.post(url, json=body, headers=self.headers)
-        if response.status_code != 200:
+        if res.status_code != 200:
             return None
-        data = response.json()[0]
+        data = res.json()[0]
-        desc = data['data']['jobview']['job']['description']
+        desc = data["data"]["jobview"]["job"]["description"]
-        soup = BeautifulSoup(desc, 'html.parser')
-        return modify_and_get_description(soup)
+        if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
+            desc = markdown_converter(desc)
+        return desc
+    def _get_location(self, location: str, is_remote: bool) -> (int, str):
+        if not location or is_remote:
+            return "11047", "STATE"  # remote options
+        url = f"{self.base_url}/findPopularLocationAjax.htm?maxLocationsToReturn=10&term={location}"
+        res = self.session.get(url, headers=self.headers)
+        if res.status_code != 200:
+            if res.status_code == 429:
+                err = f"429 Response - Blocked by Glassdoor for too many requests"
+                logger.error(err)
+                return None, None
+            else:
+                err = f"Glassdoor response status code {res.status_code}"
+                err += f" - {res.text}"
+                logger.error(f"Glassdoor response status code {res.status_code}")
+                return None, None
+        items = res.json()
+        if not items:
+            raise ValueError(f"Location '{location}' not found on Glassdoor")
+        location_type = items[0]["locationType"]
+        if location_type == "C":
+            location_type = "CITY"
+        elif location_type == "S":
+            location_type = "STATE"
+        elif location_type == "N":
+            location_type = "COUNTRY"
+        return int(items[0]["locationId"]), location_type
+    def _add_payload(
+        self,
+        location_id: int,
+        location_type: str,
+        page_num: int,
+        cursor: str | None = None,
+    ) -> str:
+        fromage = None
+        if self.scraper_input.hours_old:
+            fromage = max(self.scraper_input.hours_old // 24, 1)
+        filter_params = []
+        if self.scraper_input.easy_apply:
+            filter_params.append({"filterKey": "applicationType", "values": "1"})
+        if fromage:
+            filter_params.append({"filterKey": "fromAge", "values": str(fromage)})
+        payload = {
+            "operationName": "JobSearchResultsQuery",
+            "variables": {
+                "excludeJobListingIds": [],
+                "filterParams": filter_params,
+                "keyword": self.scraper_input.search_term,
+                "numJobsToShow": 30,
+                "locationType": location_type,
+                "locationId": int(location_id),
+                "parameterUrlInput": f"IL.0,12_I{location_type}{location_id}",
+                "pageNumber": page_num,
+                "pageCursor": cursor,
+                "fromage": fromage,
+                "sort": "date",
+            },
+            "query": self.query_template,
+        }
+        if self.scraper_input.job_type:
+            payload["variables"]["filterParams"].append(
+                {"filterKey": "jobType", "values": self.scraper_input.job_type.value[0]}
+            )
+        return json.dumps([payload])
     @staticmethod
     def parse_compensation(data: dict) -> Optional[Compensation]:
         pay_period = data.get("payPeriod")
         adjusted_pay = data.get("payPeriodAdjustedPay")
         currency = data.get("payCurrency", "USD")
         if not pay_period or not adjusted_pay:
             return None
@@ -220,7 +333,6 @@ class GlassdoorScraper(Scraper):
         interval = CompensationInterval.get_interval(pay_period)
         min_amount = int(adjusted_pay.get("p10") // 1)
         max_amount = int(adjusted_pay.get("p90") // 1)
         return Compensation(
             interval=interval,
             min_amount=min_amount,
             max_amount=max_amount,
             currency=currency,
         )
-    def get_location(self, location: str, is_remote: bool) -> (int, str):
-        if not location or is_remote:
-            return "11047", "STATE"  # remote options
-        url = f"{self.url}/findPopularLocationAjax.htm?maxLocationsToReturn=10&term={location}"
-        session = create_session(self.proxy, has_retry=True)
-        response = session.get(url)
-        if response.status_code != 200:
-            raise GlassdoorException(
-                f"bad response status code: {response.status_code}"
-            )
-        items = response.json()
-        if not items:
-            raise ValueError(f"Location '{location}' not found on Glassdoor")
-        location_type = items[0]["locationType"]
-        if location_type == "C":
-            location_type = "CITY"
-        elif location_type == "S":
-            location_type = "STATE"
-        return int(items[0]["locationId"]), location_type
-    @staticmethod
-    def add_payload(
-        scraper_input,
-        location_id: int,
-        location_type: str,
-        page_num: int,
-        cursor: str | None = None,
-    ) -> str:
-        payload = {
-            "operationName": "JobSearchResultsQuery",
-            "variables": {
-                "excludeJobListingIds": [],
-                "filterParams": [{"filterKey": "applicationType", "values": "1"}] if scraper_input.easy_apply else [],
-                "keyword": scraper_input.search_term,
-                "numJobsToShow": 30,
-                "locationType": location_type,
-                "locationId": int(location_id),
-                "parameterUrlInput": f"IL.0,12_I{location_type}{location_id}",
-                "pageNumber": page_num,
-                "pageCursor": cursor,
-            },
"query": "query JobSearchResultsQuery($excludeJobListingIds: [Long!], $keyword: String, $locationId: Int, $locationType: LocationTypeEnum, $numJobsToShow: Int!, $pageCursor: String, $pageNumber: Int, $filterParams: [FilterParams], $originalPageUrl: String, $seoFriendlyUrlInput: String, $parameterUrlInput: String, $seoUrl: Boolean) {\n jobListings(\n contextHolder: {searchParams: {excludeJobListingIds: $excludeJobListingIds, keyword: $keyword, locationId: $locationId, locationType: $locationType, numPerPage: $numJobsToShow, pageCursor: $pageCursor, pageNumber: $pageNumber, filterParams: $filterParams, originalPageUrl: $originalPageUrl, seoFriendlyUrlInput: $seoFriendlyUrlInput, parameterUrlInput: $parameterUrlInput, seoUrl: $seoUrl, searchType: SR}}\n ) {\n companyFilterOptions {\n id\n shortName\n __typename\n }\n filterOptions\n indeedCtk\n jobListings {\n ...JobView\n __typename\n }\n jobListingSeoLinks {\n linkItems {\n position\n url\n __typename\n }\n __typename\n }\n jobSearchTrackingKey\n jobsPageSeoData {\n pageMetaDescription\n pageTitle\n __typename\n }\n paginationCursors {\n cursor\n pageNumber\n __typename\n }\n indexablePageForSeo\n searchResultsMetadata {\n searchCriteria {\n implicitLocation {\n id\n localizedDisplayName\n type\n __typename\n }\n keyword\n location {\n id\n shortName\n localizedShortName\n localizedDisplayName\n type\n __typename\n }\n __typename\n }\n footerVO {\n countryMenu {\n childNavigationLinks {\n id\n link\n textKey\n __typename\n }\n __typename\n }\n __typename\n }\n helpCenterDomain\n helpCenterLocale\n jobAlert {\n jobAlertExists\n __typename\n }\n jobSerpFaq {\n questions {\n answer\n question\n __typename\n }\n __typename\n }\n jobSerpJobOutlook {\n occupation\n paragraph\n __typename\n }\n showMachineReadableJobs\n __typename\n }\n serpSeoLinksVO {\n relatedJobTitlesResults\n searchedJobTitle\n searchedKeyword\n searchedLocationIdAsString\n searchedLocationSeoName\n searchedLocationType\n topCityIdsToNameResults {\n key\n value\n __typename\n }\n topEmployerIdsToNameResults {\n key\n value\n __typename\n }\n topEmployerNameResults\n topOccupationResults\n __typename\n }\n totalJobsCount\n __typename\n }\n}\n\nfragment JobView on JobListingSearchResult {\n jobview {\n header {\n adOrderId\n advertiserType\n adOrderSponsorshipLevel\n ageInDays\n divisionEmployerName\n easyApply\n employer {\n id\n name\n shortName\n __typename\n }\n employerNameFromSearch\n goc\n gocConfidence\n gocId\n jobCountryId\n jobLink\n jobResultTrackingKey\n jobTitleText\n locationName\n locationType\n locId\n needsCommission\n payCurrency\n payPeriod\n payPeriodAdjustedPay {\n p10\n p50\n p90\n __typename\n }\n rating\n salarySource\n savedJobId\n sponsored\n __typename\n }\n job {\n descriptionFragments\n importConfigId\n jobTitleId\n jobTitleText\n listingId\n __typename\n }\n jobListingAdminDetails {\n cpcVal\n importConfigId\n jobListingId\n jobSourceId\n userEligibleForAdminJobDetails\n __typename\n }\n overview {\n shortName\n squareLogoUrl\n __typename\n }\n __typename\n }\n __typename\n}\n",
}
job_type_filters = {
JobType.FULL_TIME: "fulltime",
JobType.PART_TIME: "parttime",
JobType.CONTRACT: "contract",
JobType.INTERNSHIP: "internship",
JobType.TEMPORARY: "temporary",
}
if scraper_input.job_type in job_type_filters:
filter_value = job_type_filters[scraper_input.job_type]
payload["variables"]["filterParams"].append(
{"filterKey": "jobType", "values": filter_value}
)
return json.dumps([payload])
    @staticmethod
    def get_job_type_enum(job_type_str: str) -> list[JobType] | None:
        for job_type in JobType:
@@ -306,21 +359,14 @@ class GlassdoorScraper(Scraper):
            if cursor_data["pageNumber"] == page_num:
                return cursor_data["cursor"]

-    @staticmethod
-    def headers() -> dict:
-        """
-        Returns headers needed for requests
-        :return: dict - Dictionary containing headers
-        """
-        return {
+    fallback_token = "Ft6oHEWlRZrxDww95Cpazw:0pGUrkb2y3TyOpAIqF2vbPmUXoXVkD3oEGDVkvfeCerceQ5-n8mBg3BovySUIjmCPHCaW0H2nQVdqzbtsYqf4Q:wcqRqeegRUa9MVLJGyujVXB7vWFPjdaS1CtrrzJq-ok"
+    headers = {
        "authority": "www.glassdoor.com",
        "accept": "*/*",
        "accept-language": "en-US,en;q=0.9",
        "apollographql-client-name": "job-search-next",
        "apollographql-client-version": "4.65.5",
        "content-type": "application/json",
-        "cookie": 'gdId=91e2dfc4-c8b5-4fa7-83d0-11512b80262c; G_ENABLED_IDPS=google; trs=https%3A%2F%2Fwww.redhat.com%2F:referral:referral:2023-07-05+09%3A50%3A14.862:undefined:undefined; g_state={"i_p":1688587331651,"i_l":1}; _cfuvid=.7llazxhYFZWi6EISSPdVjtqF0NMVwzxr_E.cB1jgLs-1697828392979-0-604800000; GSESSIONID=undefined; JSESSIONID=F03DD1B5EE02DB6D842FE42B142F88F3; cass=1; jobsClicked=true; indeedCtk=1hd77b301k79i801; asst=1697829114.2; G_AUTHUSER_H=0; uc=8013A8318C98C517FE6DD0024636DFDEF978FC33266D93A2FAFEF364EACA608949D8B8FA2DC243D62DE271D733EB189D809ABE5B08D7B1AE865D217BD4EEBB97C282F5DA5FEFE79C937E3F6110B2A3A0ADBBA3B4B6DF5A996FEE00516100A65FCB11DA26817BE8D1C1BF6CFE36B5B68A3FDC2CFEC83AB797F7841FBB157C202332FC7E077B56BD39B167BDF3D9866E3B; AWSALB=zxc/Yk1nbWXXT6HjNyn3H4h4950ckVsFV/zOrq5LSoChYLE1qV+hDI8Axi3fUa9rlskndcO0M+Fw+ZnJ+AQ2afBFpyOd1acouLMYgkbEpqpQaWhY6/Gv4QH1zBcJ; AWSALBCORS=zxc/Yk1nbWXXT6HjNyn3H4h4950ckVsFV/zOrq5LSoChYLE1qV+hDI8Axi3fUa9rlskndcO0M+Fw+ZnJ+AQ2afBFpyOd1acouLMYgkbEpqpQaWhY6/Gv4QH1zBcJ; gdsid=1697828393025:1697830776351:668396EDB9E6A832022D34414128093D; at=HkH8Hnqi9uaMC7eu0okqyIwqp07ht9hBvE1_St7E_hRqPvkO9pUeJ1Jcpds4F3g6LL5ADaCNlxrPn0o6DumGMfog8qI1-zxaV_jpiFs3pugntw6WpVyYWdfioIZ1IDKupyteeLQEM1AO4zhGjY_rPZynpsiZBPO_B1au94sKv64rv23yvP56OiWKKfI-8_9hhLACEwWvM-Az7X-4aE2QdFt93VJbXbbGVf07bdDZfimsIkTtgJCLSRhU1V0kEM1Efyu66vo3m77gFFaMW7lxyYnb36I5PdDtEXBm3aL-zR7-qa5ywd94ISEivgqQOA4FPItNhqIlX4XrfD1lxVz6rfPaoTIDi4DI6UMCUjwyPsuv8mn0rYqDfRnmJpZ97fJ5AnhrknAd_6ZWN5v1OrxJczHzcXd8LO820QPoqxzzG13bmSTXLwGSxMUCtSrVsq05hicimQ3jpRt0c1dA4OkTNqF7_770B9JfcHcM8cr8-C4IL56dnOjr9KBGfN1Q2IvZM2cOBRbV7okiNOzKVZ3qJ24AE34WA2F3U6Whiu6H8nIuGG5hSNkVygY6CtglNZfFF9p8pJAZm79PngrrBv-CXFBZmhYLFo46lmFetDkiJ6mirtez4tKpzTIYjIp4_JAkiZFwbLJ2QGH4mK8kyyW0lZiX1DTuQec50N_5wvRo0Gt7nlKxzLsApMnaNhuQeH5ygh_pa381ORo9mQGi0EYF9zk00pa2--z4PtjfQ8KFq36GgpxKy5-o4qgqygZj8F01L8r-FiX2G4C7PREMIpAyHX2A4-_JxA1IS2j12EyqKTLqE9VcP06qm2Z-YuIW3ctmpMxy5G9_KiEiGv17weizhSFnl6SbpAEY-2VSmQ5V6jm3hoMp2jemkuGCRkZeFstLDEPxlzFN7WM; __cf_bm=zGaVjIJw4irf40_7UVw54B6Ohm271RUX4Tc8KVScrbs-1697830777-0-AYv2GnKTnnCU+cY9xHbJunO0DwlLDO6SIBnC/s/qldpKsGK0rRAjD6y8lbyATT/KlS7g29OZaN4fbd0lrJg0KmWbIybZIzfWVLHSYePVuOhu; asst=1697829114.2; at=dFhXf64wsf2TlnWy41xLs7skJkuxgKToEGcjGtDfUvW4oEAJ4tTIR5dKQ8wbwT75aIaGgdCfvcb-da7vwrCGWscCncmfLFQpJ9l-LLwoRfk-pMsxHhd77wvf-W7I0HSm7-Q5lQJqI9WyNGRxOa-RpzBTf4L8_Et4-3FzjPaAoYY5pY1FhuwXbN5asGOAMW-p8cjpbfn3PumlIYuckguWnjrcY2F31YJ_1noeoHM9tCGpymANbqGXRkG6aXY7yCfVXtdgZU1K5SMeaSPZIuF_iLUxjc_corzpNiH6qq7BIAmh-e5Aa-g7cwpZcln1fmwTVw4uTMZf1eLIMTa9WzgqZNkvG-sGaq_XxKA_Wai6xTTkOHfRgm4632Ba2963wdJvkGmUUa3tb_L4_wTgk3eFnHp5JhghLfT2Pe3KidP-yX__vx8JOsqe3fndCkKXgVz7xQKe1Dur-sMNlGwi4LXfguTT2YUI8C5Miq3pj2IHc7dC97eyyAiAM4HvyGWfaXWZcei6oIGrOwMvYgy0AcwFry6SIP2SxLT5TrxinRRuem1r1IcOTJsMJyUPp1QsZ7bOyq9G_0060B4CPyovw5523hEuqLTM-R5e5yavY6C_1DHUyE15C3mrh7kdvmlGZeflnHqkFTEKwwOftm-Mv-CKD5Db9ABFGNxKB2FH7nDH67hfOvm4tGNMzceBPKYJ3wciTt9jK3wy39_7cOYVywfrZ-oLhw_XtsbGSSeGn3HytrfgSADAh2sT0Gg6eCC9Xy1vh-Za337SVLUDXZ73W2xJxxUHBkFzZs8L_Xndo5DsbpWhVs9IYUGyraJdqB3SLgDbAppIBCJl4fx6_DG8-xOQPBvuFMlTROe1JVdHOzXI1GElwFDTuH1pjkg4I2G0NhAbE06Y-1illQE; gdsid=1697828393025:1697831731408:99C30D94108AC3030D61C736DDCDF11C',
-        "gd-csrf-token": "Ft6oHEWlRZrxDww95Cpazw:0pGUrkb2y3TyOpAIqF2vbPmUXoXVkD3oEGDVkvfeCerceQ5-n8mBg3BovySUIjmCPHCaW0H2nQVdqzbtsYqf4Q:wcqRqeegRUa9MVLJGyujVXB7vWFPjdaS1CtrrzJq-ok",
        "origin": "https://www.glassdoor.com",
        "referer": "https://www.glassdoor.com/",
        "sec-ch-ua": '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
@@ -331,3 +377,169 @@ class GlassdoorScraper(Scraper):
        "sec-fetch-site": "same-origin",
        "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36",
    }
query_template = """
query JobSearchResultsQuery(
$excludeJobListingIds: [Long!],
$keyword: String,
$locationId: Int,
$locationType: LocationTypeEnum,
$numJobsToShow: Int!,
$pageCursor: String,
$pageNumber: Int,
$filterParams: [FilterParams],
$originalPageUrl: String,
$seoFriendlyUrlInput: String,
$parameterUrlInput: String,
$seoUrl: Boolean
) {
jobListings(
contextHolder: {
searchParams: {
excludeJobListingIds: $excludeJobListingIds,
keyword: $keyword,
locationId: $locationId,
locationType: $locationType,
numPerPage: $numJobsToShow,
pageCursor: $pageCursor,
pageNumber: $pageNumber,
filterParams: $filterParams,
originalPageUrl: $originalPageUrl,
seoFriendlyUrlInput: $seoFriendlyUrlInput,
parameterUrlInput: $parameterUrlInput,
seoUrl: $seoUrl,
searchType: SR
}
}
) {
companyFilterOptions {
id
shortName
__typename
}
filterOptions
indeedCtk
jobListings {
...JobView
__typename
}
jobListingSeoLinks {
linkItems {
position
url
__typename
}
__typename
}
jobSearchTrackingKey
jobsPageSeoData {
pageMetaDescription
pageTitle
__typename
}
paginationCursors {
cursor
pageNumber
__typename
}
indexablePageForSeo
searchResultsMetadata {
searchCriteria {
implicitLocation {
id
localizedDisplayName
type
__typename
}
keyword
location {
id
shortName
localizedShortName
localizedDisplayName
type
__typename
}
__typename
}
helpCenterDomain
helpCenterLocale
jobSerpJobOutlook {
occupation
paragraph
__typename
}
showMachineReadableJobs
__typename
}
totalJobsCount
__typename
}
}
fragment JobView on JobListingSearchResult {
jobview {
header {
adOrderId
advertiserType
adOrderSponsorshipLevel
ageInDays
divisionEmployerName
easyApply
employer {
id
name
shortName
__typename
}
employerNameFromSearch
goc
gocConfidence
gocId
jobCountryId
jobLink
jobResultTrackingKey
jobTitleText
locationName
locationType
locId
needsCommission
payCurrency
payPeriod
payPeriodAdjustedPay {
p10
p50
p90
__typename
}
rating
salarySource
savedJobId
sponsored
__typename
}
job {
description
importConfigId
jobTitleId
jobTitleText
listingId
__typename
}
jobListingAdminDetails {
cpcVal
importConfigId
jobListingId
jobSourceId
userEligibleForAdminJobDetails
__typename
}
overview {
shortName
squareLogoUrl
__typename
}
__typename
}
__typename
}
"""


@@ -4,25 +4,21 @@ jobspy.scrapers.indeed
This module contains routines to scrape Indeed.
"""

+from __future__ import annotations
+
-import re
import math
-import io
-import json
-from typing import Any
+from typing import Tuple
from datetime import datetime
-import urllib.parse
-from bs4 import BeautifulSoup
-from bs4.element import Tag
from concurrent.futures import ThreadPoolExecutor, Future

-from ..exceptions import IndeedException
+from .. import Scraper, ScraperInput, Site
from ..utils import (
-    count_urgent_words,
    extract_emails_from_text,
-    create_session,
    get_enum_from_job_type,
-    modify_and_get_description
+    markdown_converter,
+    logger,
+    create_session,
)
from ...jobs import (
    JobPost,
@@ -31,139 +27,26 @@ from ...jobs import (
    Location,
    JobResponse,
    JobType,
+    DescriptionFormat,
)
-from .. import Scraper, ScraperInput, Site


class IndeedScraper(Scraper):
-    def __init__(self, proxy: str | None = None):
+    def __init__(self, proxies: list[str] | str | None = None):
        """
-        Initializes IndeedScraper with the Indeed job search url
+        Initializes IndeedScraper with the Indeed API url
        """
-        self.url = None
-        self.country = None
-        site = Site(Site.INDEED)
-        super().__init__(site, proxy=proxy)
-        self.jobs_per_page = 25
+        super().__init__(Site.INDEED, proxies=proxies)
+        self.session = create_session(proxies=self.proxies, is_tls=False)
+        self.scraper_input = None
+        self.jobs_per_page = 100
+        self.num_workers = 10
        self.seen_urls = set()
+        self.headers = None
+        self.api_country_code = None
+        self.base_url = None
+        self.api_url = "https://apis.indeed.com/graphql"

-    def scrape_page(
-        self, scraper_input: ScraperInput, page: int
-    ) -> tuple[list[JobPost], int]:
-        """
-        Scrapes a page of Indeed for jobs with scraper_input criteria
-        :param scraper_input:
-        :param page:
-        :return: jobs found on page, total number of jobs found for search
-        """
self.country = scraper_input.country
domain = self.country.indeed_domain_value
self.url = f"https://{domain}.indeed.com"
try:
session = create_session(self.proxy)
response = session.get(
f"{self.url}/m/jobs",
headers=self.get_headers(),
params=self.add_params(scraper_input, page),
allow_redirects=True,
timeout_seconds=10,
)
if response.status_code not in range(200, 400):
raise IndeedException(
f"bad response with status code: {response.status_code}"
)
except Exception as e:
if "Proxy responded with" in str(e):
raise IndeedException("bad proxy")
raise IndeedException(str(e))
soup = BeautifulSoup(response.content, "html.parser")
if "did not match any jobs" in response.text:
raise IndeedException("Parsing exception: Search did not match any jobs")
jobs = IndeedScraper.parse_jobs(
soup
) #: can raise exception, handled by main scrape function
total_num_jobs = IndeedScraper.total_jobs(soup)
if (
not jobs.get("metaData", {})
.get("mosaicProviderJobCardsModel", {})
.get("results")
):
raise IndeedException("No jobs found.")
def process_job(job: dict) -> JobPost | None:
job_url = f'{self.url}/m/jobs/viewjob?jk={job["jobkey"]}'
job_url_client = f'{self.url}/viewjob?jk={job["jobkey"]}'
if job_url in self.seen_urls:
return None
extracted_salary = job.get("extractedSalary")
compensation = None
if extracted_salary:
salary_snippet = job.get("salarySnippet")
currency = salary_snippet.get("currency") if salary_snippet else None
interval = (extracted_salary.get("type"),)
if isinstance(interval, tuple):
interval = interval[0]
interval = interval.upper()
if interval in CompensationInterval.__members__:
compensation = Compensation(
interval=CompensationInterval[interval],
min_amount=int(extracted_salary.get("min")),
max_amount=int(extracted_salary.get("max")),
currency=currency,
)
job_type = IndeedScraper.get_job_type(job)
timestamp_seconds = job["pubDate"] / 1000
date_posted = datetime.fromtimestamp(timestamp_seconds)
date_posted = date_posted.strftime("%Y-%m-%d")
description = self.get_description(job_url) if scraper_input.full_description else None
with io.StringIO(job["snippet"]) as f:
soup_io = BeautifulSoup(f, "html.parser")
li_elements = soup_io.find_all("li")
if description is None and li_elements:
description = " ".join(li.text for li in li_elements)
job_post = JobPost(
title=job["normTitle"],
description=description,
company_name=job["company"],
company_url=self.url + job["companyOverviewLink"] if "companyOverviewLink" in job else None,
location=Location(
city=job.get("jobLocationCity"),
state=job.get("jobLocationState"),
country=self.country,
),
job_type=job_type,
compensation=compensation,
date_posted=date_posted,
job_url=job_url_client,
emails=extract_emails_from_text(description) if description else None,
num_urgent_words=count_urgent_words(description)
if description
else None,
is_remote=self.is_remote_job(job),
)
return job_post
workers = 10 if scraper_input.full_description else 10 # possibly lessen 10 when fetching desc based on feedback
jobs = jobs["metaData"]["mosaicProviderJobCardsModel"]["results"]
with ThreadPoolExecutor(max_workers=workers) as executor:
job_results: list[Future] = [
executor.submit(process_job, job) for job in jobs
]
job_list = [result.result() for result in job_results if result.result()]
return job_list, total_num_jobs
    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
        """
@@ -171,201 +54,405 @@ class IndeedScraper(Scraper):
        :param scraper_input:
        :return: job_response
        """
-        pages_to_process = (
-            math.ceil(scraper_input.results_wanted / self.jobs_per_page) - 1
-        )
-
-        #: get first page to initialize session
-        job_list, total_results = self.scrape_page(scraper_input, 0)
-
-        with ThreadPoolExecutor(max_workers=10) as executor:
-            futures: list[Future] = [
-                executor.submit(self.scrape_page, scraper_input, page)
-                for page in range(1, pages_to_process + 1)
-            ]
-            for future in futures:
-                jobs, _ = future.result()
-                job_list += jobs
-
-        if len(job_list) > scraper_input.results_wanted:
-            job_list = job_list[: scraper_input.results_wanted]
-
-        job_response = JobResponse(
-            jobs=job_list,
-            total_results=total_results,
-        )
-        return job_response
-
-    def get_description(self, job_page_url: str) -> str | None:
-        """
-        Retrieves job description by going to the job page url
-        :param job_page_url:
-        :return: description
-        """
-        parsed_url = urllib.parse.urlparse(job_page_url)
-        params = urllib.parse.parse_qs(parsed_url.query)
-        jk_value = params.get("jk", [None])[0]
-        formatted_url = f"{self.url}/m/viewjob?jk={jk_value}&spa=1"
-        session = create_session(self.proxy)
-        try:
-            response = session.get(
-                formatted_url,
-                headers=self.get_headers(),
-                allow_redirects=True,
-                timeout_seconds=5,
-            )
-        except Exception as e:
-            return None
-        if response.status_code not in range(200, 400):
-            return None
-        try:
-            soup = BeautifulSoup(response.text, 'html.parser')
-            script_tags = soup.find_all('script')
-            job_description = ''
-            for tag in script_tags:
-                if 'window._initialData' in tag.text:
-                    json_str = tag.text
-                    json_str = json_str.split('window._initialData=')[1]
-                    json_str = json_str.rsplit(';', 1)[0]
-                    data = json.loads(json_str)
-                    job_description = data["jobInfoWrapperModel"]["jobInfoModel"]["sanitizedJobDescription"]
-                    break
-        except (KeyError, TypeError, IndexError):
-            return None
-        soup = BeautifulSoup(job_description, "html.parser")
-        return modify_and_get_description(soup)
+        self.scraper_input = scraper_input
+        domain, self.api_country_code = self.scraper_input.country.indeed_domain_value
+        self.base_url = f"https://{domain}.indeed.com"
+        self.headers = self.api_headers.copy()
+        self.headers["indeed-co"] = self.scraper_input.country.indeed_domain_value
+        job_list = []
+        page = 1
+
+        cursor = None
+        offset_pages = math.ceil(self.scraper_input.offset / 100)
+        for _ in range(offset_pages):
+            logger.info(f"Indeed skipping search page: {page}")
+            __, cursor = self._scrape_page(cursor)
+            if not __:
+                logger.info(f"Indeed found no jobs on page: {page}")
+                break
+
+        while len(self.seen_urls) < scraper_input.results_wanted:
+            logger.info(f"Indeed search page: {page}")
+            jobs, cursor = self._scrape_page(cursor)
+            if not jobs:
+                logger.info(f"Indeed found no jobs on page: {page}")
+                break
+            job_list += jobs
+            page += 1
+        return JobResponse(jobs=job_list[: scraper_input.results_wanted])
def _scrape_page(self, cursor: str | None) -> Tuple[list[JobPost], str | None]:
"""
Scrapes a page of Indeed for jobs with scraper_input criteria
:param cursor:
:return: jobs found on page, next page cursor
"""
jobs = []
new_cursor = None
filters = self._build_filters()
search_term = (
self.scraper_input.search_term.replace('"', '\\"')
if self.scraper_input.search_term
else ""
)
query = self.job_search_query.format(
what=(f'what: "{search_term}"' if search_term else ""),
location=(
f'location: {{where: "{self.scraper_input.location}", radius: {self.scraper_input.distance}, radiusUnit: MILES}}'
if self.scraper_input.location
else ""
),
dateOnIndeed=self.scraper_input.hours_old,
cursor=f'cursor: "{cursor}"' if cursor else "",
filters=filters,
)
payload = {
"query": query,
}
api_headers = self.api_headers.copy()
api_headers["indeed-co"] = self.api_country_code
response = self.session.post(
self.api_url,
headers=api_headers,
json=payload,
timeout=10,
)
if response.status_code != 200:
logger.info(
f"Indeed responded with status code: {response.status_code} (submit GitHub issue if this appears to be a bug)"
)
return jobs, new_cursor
data = response.json()
jobs = data["data"]["jobSearch"]["results"]
new_cursor = data["data"]["jobSearch"]["pageInfo"]["nextCursor"]
with ThreadPoolExecutor(max_workers=self.num_workers) as executor:
job_results: list[Future] = [
executor.submit(self._process_job, job["job"]) for job in jobs
]
job_list = [result.result() for result in job_results if result.result()]
return job_list, new_cursor
def _build_filters(self):
"""
Builds the filters dict for job type/is_remote. If hours_old is provided, composite filter for job_type/is_remote is not possible.
IndeedApply: filters: { keyword: { field: "indeedApplyScope", keys: ["DESKTOP"] } }
"""
filters_str = ""
if self.scraper_input.hours_old:
filters_str = """
filters: {{
date: {{
field: "dateOnIndeed",
start: "{start}h"
}}
}}
""".format(
start=self.scraper_input.hours_old
)
elif self.scraper_input.easy_apply:
filters_str = """
filters: {
keyword: {
field: "indeedApplyScope",
keys: ["DESKTOP"]
}
}
"""
elif self.scraper_input.job_type or self.scraper_input.is_remote:
job_type_key_mapping = {
JobType.FULL_TIME: "CF3CP",
JobType.PART_TIME: "75GKK",
JobType.CONTRACT: "NJXCK",
JobType.INTERNSHIP: "VDTG7",
}
keys = []
if self.scraper_input.job_type:
key = job_type_key_mapping[self.scraper_input.job_type]
keys.append(key)
if self.scraper_input.is_remote:
keys.append("DSQF7")
if keys:
keys_str = '", "'.join(keys)
filters_str = f"""
filters: {{
composite: {{
filters: [{{
keyword: {{
field: "attributes",
keys: ["{keys_str}"]
}}
}}]
}}
}}
"""
return filters_str
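        # For instance, per the mapping above, a full-time remote search yields
        # the composite attribute filter (standalone rendering of the keys_str
        # step; the keys themselves come from job_type_key_mapping):
        #
        #   keys = ["CF3CP", "DSQF7"]  # full-time key + remote key
        #   keys_str = '", "'.join(keys)
        #   print(f'keys: ["{keys_str}"]')  # -> keys: ["CF3CP", "DSQF7"]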
def _process_job(self, job: dict) -> JobPost | None:
"""
Parses the job dict into JobPost model
:param job: dict to parse
:return: JobPost if it's a new job
"""
job_url = f'{self.base_url}/viewjob?jk={job["key"]}'
if job_url in self.seen_urls:
return
self.seen_urls.add(job_url)
description = job["description"]["html"]
if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
description = markdown_converter(description)
job_type = self._get_job_type(job["attributes"])
timestamp_seconds = job["datePublished"] / 1000
date_posted = datetime.fromtimestamp(timestamp_seconds).strftime("%Y-%m-%d")
employer = job["employer"].get("dossier") if job["employer"] else None
employer_details = employer.get("employerDetails", {}) if employer else {}
rel_url = job["employer"]["relativeCompanyPageUrl"] if job["employer"] else None
return JobPost(
id=str(job["key"]),
title=job["title"],
description=description,
company_name=job["employer"].get("name") if job.get("employer") else None,
company_url=(f"{self.base_url}{rel_url}" if job["employer"] else None),
company_url_direct=(
employer["links"]["corporateWebsite"] if employer else None
),
location=Location(
city=job.get("location", {}).get("city"),
state=job.get("location", {}).get("admin1Code"),
country=job.get("location", {}).get("countryCode"),
),
job_type=job_type,
compensation=self._get_compensation(job["compensation"]),
date_posted=date_posted,
job_url=job_url,
job_url_direct=(
job["recruit"].get("viewJobUrl") if job.get("recruit") else None
),
emails=extract_emails_from_text(description) if description else None,
is_remote=self._is_job_remote(job, description),
company_addresses=(
employer_details["addresses"][0]
if employer_details.get("addresses")
else None
),
company_industry=(
employer_details["industry"]
.replace("Iv1", "")
.replace("_", " ")
.title()
.strip()
if employer_details.get("industry")
else None
),
company_num_employees=employer_details.get("employeesLocalizedLabel"),
company_revenue=employer_details.get("revenueLocalizedLabel"),
company_description=employer_details.get("briefDescription"),
ceo_name=employer_details.get("ceoName"),
ceo_photo_url=employer_details.get("ceoPhotoUrl"),
logo_photo_url=(
employer["images"].get("squareLogoUrl")
if employer and employer.get("images")
else None
),
banner_photo_url=(
employer["images"].get("headerImageUrl")
if employer and employer.get("images")
else None
),
)
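    # A hypothetical, trimmed example of the GraphQL job object _process_job
    # consumes (field names follow the query below; all values are invented):
    #
    #   sample_job = {
    #       "key": "abc123",
    #       "title": "Backend Engineer",
    #       "description": {"html": "<p>Build APIs</p>"},
    #       "datePublished": 1721000000000,  # epoch millis
    #       "attributes": [{"key": "CF3CP", "label": "Full-time"}],
    #       "employer": None,
    #       "compensation": {"baseSalary": None, "estimated": None, "currencyCode": None},
    #       "location": {"city": "Austin", "admin1Code": "TX", "countryCode": "US",
    #                    "formatted": {"long": "Austin, TX"}},
    #       "recruit": None,
    #   }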
    @staticmethod
-    def get_job_type(job: dict) -> list[JobType] | None:
+    def _get_job_type(attributes: list) -> list[JobType]:
        """
-        Parses the job to get list of job types
-        :param job:
-        :return:
+        Parses the attributes to get list of job types
+        :param attributes:
+        :return: list of JobType
        """
        job_types: list[JobType] = []
-        for taxonomy in job["taxonomyAttributes"]:
-            if taxonomy["label"] == "job-types":
-                for i in range(len(taxonomy["attributes"])):
-                    label = taxonomy["attributes"][i].get("label")
-                    if label:
-                        job_type_str = label.replace("-", "").replace(" ", "").lower()
-                        job_type = get_enum_from_job_type(job_type_str)
-                        if job_type:
-                            job_types.append(job_type)
+        for attribute in attributes:
+            job_type_str = attribute["label"].replace("-", "").replace(" ", "").lower()
+            job_type = get_enum_from_job_type(job_type_str)
+            if job_type:
+                job_types.append(job_type)
        return job_types

-    @staticmethod
-    def parse_jobs(soup: BeautifulSoup) -> dict:
-        """
-        Parses the jobs from the soup object
-        :param soup:
-        :return: jobs
-        """
-
-        def find_mosaic_script() -> Tag | None:
-            """
-            Finds jobcards script tag
-            :return: script_tag
-            """
-            script_tags = soup.find_all("script")
-            for tag in script_tags:
-                if (
-                    tag.string
-                    and "mosaic.providerData" in tag.string
-                    and "mosaic-provider-jobcards" in tag.string
-                ):
-                    return tag
-            return None
-
-        script_tag = find_mosaic_script()
-
-        if script_tag:
-            script_str = script_tag.string
-            pattern = r'window.mosaic.providerData\["mosaic-provider-jobcards"\]\s*=\s*({.*?});'
-            p = re.compile(pattern, re.DOTALL)
-            m = p.search(script_str)
-            if m:
-                jobs = json.loads(m.group(1).strip())
-                return jobs
-            else:
-                raise IndeedException("Could not find mosaic provider job cards data")
-        else:
-            raise IndeedException(
-                "Could not find any results for the search"
-            )
-
-    @staticmethod
-    def total_jobs(soup: BeautifulSoup) -> int:
-        """
-        Parses the total jobs for that search from soup object
-        :param soup:
-        :return: total_num_jobs
-        """
-        script = soup.find("script", string=lambda t: t and "window._initialData" in t)
-
-        pattern = re.compile(r"window._initialData\s*=\s*({.*})\s*;", re.DOTALL)
-        match = pattern.search(script.string)
-        total_num_jobs = 0
-        if match:
-            json_str = match.group(1)
-            data = json.loads(json_str)
-            total_num_jobs = int(data["searchTitleBarModel"]["totalNumResults"])
-        return total_num_jobs
-
-    @staticmethod
-    def get_headers():
-        return {
-            'Host': 'www.indeed.com',
-            'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
-            'sec-fetch-site': 'same-origin',
-            'sec-fetch-dest': 'document',
-            'accept-language': 'en-US,en;q=0.9',
-            'sec-fetch-mode': 'navigate',
-            'user-agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Indeed App 192.0',
-            'referer': 'https://www.indeed.com/m/jobs?q=software%20intern&l=Dallas%2C%20TX&from=serpso&rq=1&rsIdx=3',
-        }
-
-    @staticmethod
-    def is_remote_job(job: dict) -> bool:
-        """
-        :param job:
-        :return: bool
-        """
-        for taxonomy in job.get("taxonomyAttributes", []):
-            if taxonomy["label"] == "remote" and len(taxonomy["attributes"]) > 0:
-                return True
-        return False
-
-    @staticmethod
-    def add_params(scraper_input: ScraperInput, page: int) -> dict[str, str | Any]:
-        params = {
-            "q": scraper_input.search_term,
-            "l": scraper_input.location,
-            "filter": 0,
-            "start": scraper_input.offset + page * 10,
-            "sort": "date"
-        }
-        if scraper_input.distance:
-            params["radius"] = scraper_input.distance
-
-        sc_values = []
-        if scraper_input.is_remote:
-            sc_values.append("attr(DSQF7)")
-        if scraper_input.job_type:
-            sc_values.append("jt({})".format(scraper_input.job_type.value))
-
-        if sc_values:
-            params["sc"] = "0kf:" + "".join(sc_values) + ";"
-
-        if scraper_input.easy_apply:
-            params['iafilter'] = 1
-
-        return params
+    @staticmethod
+    def _get_compensation(compensation: dict) -> Compensation | None:
+        """
+        Parses the job to get compensation
+        :param job:
+        :return: compensation object
+        """
+        if not compensation["baseSalary"] and not compensation["estimated"]:
+            return None
+        comp = (
+            compensation["baseSalary"]
+            if compensation["baseSalary"]
+            else compensation["estimated"]["baseSalary"]
+        )
+        if not comp:
+            return None
+        interval = IndeedScraper._get_compensation_interval(comp["unitOfWork"])
+        if not interval:
+            return None
+        min_range = comp["range"].get("min")
+        max_range = comp["range"].get("max")
+        return Compensation(
+            interval=interval,
+            min_amount=int(min_range) if min_range is not None else None,
+            max_amount=int(max_range) if max_range is not None else None,
+            currency=(
+                compensation["estimated"]["currencyCode"]
+                if compensation["estimated"]
+                else compensation["currencyCode"]
+            ),
+        )
+
+    @staticmethod
+    def _is_job_remote(job: dict, description: str) -> bool:
+        """
+        Searches the description, location, and attributes to check if job is remote
+        """
+        remote_keywords = ["remote", "work from home", "wfh"]
+        is_remote_in_attributes = any(
+            any(keyword in attr["label"].lower() for keyword in remote_keywords)
+            for attr in job["attributes"]
+        )
+        is_remote_in_description = any(
+            keyword in description.lower() for keyword in remote_keywords
+        )
+        is_remote_in_location = any(
+            keyword in job["location"]["formatted"]["long"].lower()
+            for keyword in remote_keywords
+        )
+        return (
+            is_remote_in_attributes or is_remote_in_description or is_remote_in_location
+        )
+
+    @staticmethod
+    def _get_compensation_interval(interval: str) -> CompensationInterval:
+        interval_mapping = {
+            "DAY": "DAILY",
+            "YEAR": "YEARLY",
+            "HOUR": "HOURLY",
+            "WEEK": "WEEKLY",
+            "MONTH": "MONTHLY",
+        }
+        mapped_interval = interval_mapping.get(interval.upper(), None)
+        if mapped_interval and mapped_interval in CompensationInterval.__members__:
+            return CompensationInterval[mapped_interval]
+        else:
+            raise ValueError(f"Unsupported interval: {interval}")
+
+    api_headers = {
+        "Host": "apis.indeed.com",
+        "content-type": "application/json",
+        "indeed-api-key": "161092c2017b5bbab13edb12461a62d5a833871e7cad6d9d475304573de67ac8",
+        "accept": "application/json",
+        "indeed-locale": "en-US",
+        "accept-language": "en-US,en;q=0.9",
+        "user-agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Indeed App 193.1",
+        "indeed-app-info": "appv=193.1; appid=com.indeed.jobsearch; osv=16.6.1; os=ios; dtype=phone",
+    }
+
+    job_search_query = """
+        query GetJobData {{
+            jobSearch(
+                {what}
+                {location}
+                limit: 100
+                sort: DATE
+                {cursor}
+                {filters}
+            ) {{
+                pageInfo {{
+                    nextCursor
+                }}
+                results {{
+                    trackingKey
+                    job {{
source {{
name
}}
key
title
datePublished
dateOnIndeed
description {{
html
}}
location {{
countryName
countryCode
admin1Code
city
postalCode
streetAddress
formatted {{
short
long
}}
}}
compensation {{
estimated {{
currencyCode
baseSalary {{
unitOfWork
range {{
... on Range {{
min
max
}}
}}
}}
}}
baseSalary {{
unitOfWork
range {{
... on Range {{
min
max
}}
}}
}}
currencyCode
}}
attributes {{
key
label
}}
employer {{
relativeCompanyPageUrl
name
dossier {{
employerDetails {{
addresses
industry
employeesLocalizedLabel
revenueLocalizedLabel
briefDescription
ceoName
ceoPhotoUrl
}}
images {{
headerImageUrl
squareLogoUrl
}}
links {{
corporateWebsite
}}
}}
}}
recruit {{
viewJobUrl
detailedSalary
workSchedule
}}
}}
}}
}}
}}
"""


@@ -4,49 +4,62 @@ jobspy.scrapers.linkedin
This module contains routines to scrape LinkedIn.
"""

+from __future__ import annotations
+
import time
import random
+import regex as re
from typing import Optional
from datetime import datetime

-import requests
-from requests.exceptions import ProxyError
-from threading import Lock
from bs4.element import Tag
from bs4 import BeautifulSoup
-from urllib.parse import urlparse, urlunparse
+from urllib.parse import urlparse, urlunparse, unquote

from .. import Scraper, ScraperInput, Site
from ..exceptions import LinkedInException
-from ..utils import create_session
+from ..utils import create_session, remove_attributes
from ...jobs import (
    JobPost,
    Location,
    JobResponse,
    JobType,
    Country,
-    Compensation
+    Compensation,
+    DescriptionFormat,
)
from ..utils import (
-    count_urgent_words,
+    logger,
    extract_emails_from_text,
    get_enum_from_job_type,
    currency_parser,
-    modify_and_get_description
+    markdown_converter,
)


class LinkedInScraper(Scraper):
-    DELAY = 3
+    base_url = "https://www.linkedin.com"
+    delay = 3
+    band_delay = 4
+    jobs_per_page = 25

-    def __init__(self, proxy: Optional[str] = None):
+    def __init__(self, proxies: list[str] | str | None = None):
        """
        Initializes LinkedInScraper with the LinkedIn job search url
        """
-        site = Site(Site.LINKEDIN)
+        super().__init__(Site.LINKEDIN, proxies=proxies)
+        self.session = create_session(
+            proxies=self.proxies,
+            is_tls=False,
+            has_retry=True,
+            delay=5,
+            clear_cookies=True,
+        )
+        self.session.headers.update(self.headers)
+        self.scraper_input = None
        self.country = "worldwide"
-        self.url = "https://www.linkedin.com"
-        super().__init__(site, proxy=proxy)
+        self.job_url_direct_regex = re.compile(r'(?<=\?url=)[^"]+')

    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
        """
@@ -54,55 +67,65 @@ class LinkedInScraper(Scraper):
        :param scraper_input:
        :return: job_response
        """
+        self.scraper_input = scraper_input
        job_list: list[JobPost] = []
-        seen_urls = set()
-        url_lock = Lock()
-        page = scraper_input.offset // 25 + 25 if scraper_input.offset else 0
-
-        def job_type_code(job_type_enum):
-            mapping = {
-                JobType.FULL_TIME: "F",
-                JobType.PART_TIME: "P",
-                JobType.INTERNSHIP: "I",
-                JobType.CONTRACT: "C",
-                JobType.TEMPORARY: "T",
-            }
-            return mapping.get(job_type_enum, "")
-
-        while len(job_list) < scraper_input.results_wanted and page < 1000:
-            session = create_session(is_tls=False, has_retry=True, delay=5)
+        seen_ids = set()
+        page = scraper_input.offset // 10 * 10 if scraper_input.offset else 0
+        request_count = 0
+        seconds_old = (
+            scraper_input.hours_old * 3600 if scraper_input.hours_old else None
+        )
+        continue_search = (
+            lambda: len(job_list) < scraper_input.results_wanted and page < 1000
+        )
+        while continue_search():
+            request_count += 1
+            logger.info(f"LinkedIn search page: {request_count}")
            params = {
                "keywords": scraper_input.search_term,
                "location": scraper_input.location,
                "distance": scraper_input.distance,
                "f_WT": 2 if scraper_input.is_remote else None,
-                "f_JT": job_type_code(scraper_input.job_type)
-                if scraper_input.job_type
-                else None,
+                "f_JT": (
+                    self.job_type_code(scraper_input.job_type)
+                    if scraper_input.job_type
+                    else None
+                ),
                "pageNum": 0,
-                "start": page + scraper_input.offset,
+                "start": page,
                "f_AL": "true" if scraper_input.easy_apply else None,
+                "f_C": (
+                    ",".join(map(str, scraper_input.linkedin_company_ids))
+                    if scraper_input.linkedin_company_ids
+                    else None
+                ),
            }
+            if seconds_old is not None:
+                params["f_TPR"] = f"r{seconds_old}"

            params = {k: v for k, v in params.items() if v is not None}
            try:
-                response = session.get(
-                    f"{self.url}/jobs-guest/jobs/api/seeMoreJobPostings/search?",
+                response = self.session.get(
+                    f"{self.base_url}/jobs-guest/jobs/api/seeMoreJobPostings/search?",
                    params=params,
-                    allow_redirects=True,
-                    proxies=self.proxy,
-                    headers=self.headers(),
                    timeout=10,
                )
-                response.raise_for_status()
-            except requests.HTTPError as e:
-                raise LinkedInException(f"bad response status code: {e.response.status_code}")
-            except ProxyError as e:
-                raise LinkedInException("bad proxy")
+                if response.status_code not in range(200, 400):
+                    if response.status_code == 429:
+                        err = (
+                            f"429 Response - Blocked by LinkedIn for too many requests"
+                        )
+                    else:
+                        err = f"LinkedIn response status code {response.status_code}"
+                        err += f" - {response.text}"
+                    logger.error(err)
+                    return JobResponse(jobs=job_list)
            except Exception as e:
-                raise LinkedInException(str(e))
+                if "Proxy responded with" in str(e):
+                    logger.error(f"LinkedIn: Bad proxy")
+                else:
+                    logger.error(f"LinkedIn: {str(e)}")
+                return JobResponse(jobs=job_list)

            soup = BeautifulSoup(response.text, "html.parser")
            job_cards = soup.find_all("div", class_="base-search-card")
@@ -110,42 +133,44 @@ class LinkedInScraper(Scraper):
                return JobResponse(jobs=job_list)

            for job_card in job_cards:
-                job_url = None
                href_tag = job_card.find("a", class_="base-card__full-link")
                if href_tag and "href" in href_tag.attrs:
                    href = href_tag.attrs["href"].split("?")[0]
                    job_id = href.split("-")[-1]
-                    job_url = f"{self.url}/jobs/view/{job_id}"
-
-                with url_lock:
-                    if job_url in seen_urls:
-                        continue
-                    seen_urls.add(job_url)
-
-                # Call process_job directly without threading
-                try:
-                    job_post = self.process_job(job_card, job_url, scraper_input.full_description)
-                    if job_post:
-                        job_list.append(job_post)
-                except Exception as e:
-                    raise LinkedInException("Exception occurred while processing jobs")
-
-            page += 25
-            time.sleep(random.uniform(LinkedInScraper.DELAY, LinkedInScraper.DELAY + 2))
+
+                    if job_id in seen_ids:
+                        continue
+                    seen_ids.add(job_id)
+
+                    try:
+                        fetch_desc = scraper_input.linkedin_fetch_description
+                        job_post = self._process_job(job_card, job_id, fetch_desc)
+                        if job_post:
+                            job_list.append(job_post)
+                        if not continue_search():
+                            break
+                    except Exception as e:
+                        raise LinkedInException(str(e))
+
+            if continue_search():
+                time.sleep(random.uniform(self.delay, self.delay + self.band_delay))
+                page += len(job_list)

        job_list = job_list[: scraper_input.results_wanted]
        return JobResponse(jobs=job_list)
-    def process_job(self, job_card: Tag, job_url: str, full_descr: bool) -> Optional[JobPost]:
-        salary_tag = job_card.find('span', class_='job-search-card__salary-info')
+    def _process_job(
+        self, job_card: Tag, job_id: str, full_descr: bool
+    ) -> Optional[JobPost]:
+        salary_tag = job_card.find("span", class_="job-search-card__salary-info")

        compensation = None
        if salary_tag:
-            salary_text = salary_tag.get_text(separator=' ').strip()
-            salary_values = [currency_parser(value) for value in salary_text.split('-')]
+            salary_text = salary_tag.get_text(separator=" ").strip()
+            salary_values = [currency_parser(value) for value in salary_text.split("-")]
            salary_min = salary_values[0]
            salary_max = salary_values[1]
-            currency = salary_text[0] if salary_text[0] != '$' else 'USD'
+            currency = salary_text[0] if salary_text[0] != "$" else "USD"

            compensation = Compensation(
                min_amount=int(salary_min),
@@ -166,98 +191,94 @@ class LinkedInScraper(Scraper):
        company = company_a_tag.get_text(strip=True) if company_a_tag else "N/A"

        metadata_card = job_card.find("div", class_="base-search-card__metadata")
-        location = self.get_location(metadata_card)
+        location = self._get_location(metadata_card)

        datetime_tag = (
            metadata_card.find("time", class_="job-search-card__listdate")
            if metadata_card
            else None
        )
-        date_posted = description = job_type = None
+        date_posted = None
        if datetime_tag and "datetime" in datetime_tag.attrs:
            datetime_str = datetime_tag["datetime"]
            try:
                date_posted = datetime.strptime(datetime_str, "%Y-%m-%d")
-            except Exception as e:
+            except:
                date_posted = None
-        benefits_tag = job_card.find("span", class_="result-benefits__text")
-        benefits = " ".join(benefits_tag.get_text().split()) if benefits_tag else None
+        job_details = {}
        if full_descr:
-            description, job_type = self.get_job_description(job_url)
+            job_details = self._get_job_details(job_id)

        return JobPost(
+            id=job_id,
            title=title,
            company_name=company,
            company_url=company_url,
            location=location,
            date_posted=date_posted,
-            job_url=job_url,
+            job_url=f"{self.base_url}/jobs/view/{job_id}",
            compensation=compensation,
-            benefits=benefits,
-            job_type=job_type,
-            description=description,
-            emails=extract_emails_from_text(description) if description else None,
-            num_urgent_words=count_urgent_words(description) if description else None,
+            job_type=job_details.get("job_type"),
+            job_level=job_details.get("job_level", "").lower(),
+            company_industry=job_details.get("company_industry"),
+            description=job_details.get("description"),
+            job_url_direct=job_details.get("job_url_direct"),
+            emails=extract_emails_from_text(job_details.get("description")),
+            logo_photo_url=job_details.get("logo_photo_url"),
+            job_function=job_details.get("job_function"),
        )

-    def get_job_description(
-        self, job_page_url: str
-    ) -> tuple[None, None] | tuple[str | None, tuple[str | None, JobType | None]]:
+    def _get_job_details(self, job_id: str) -> dict:
        """
-        Retrieves job description by going to the job page url
+        Retrieves job description and other job details by going to the job page url
        :param job_page_url:
-        :return: description or None
+        :return: dict
        """
        try:
-            session = create_session(is_tls=False, has_retry=True)
-            response = session.get(job_page_url, timeout=5, proxies=self.proxy)
+            response = self.session.get(
+                f"{self.base_url}/jobs-guest/jobs/api/jobPosting/{job_id}", timeout=5
+            )
            response.raise_for_status()
-        except requests.HTTPError as e:
-            return None, None
-        except Exception as e:
-            return None, None
-        if response.url == "https://www.linkedin.com/signup":
-            return None, None
+        except:
+            return {}
+        if "linkedin.com/signup" in response.url:
+            return {}

        soup = BeautifulSoup(response.text, "html.parser")
        div_content = soup.find(
            "div", class_=lambda x: x and "show-more-less-html__markup" in x
        )
        description = None
-        if div_content:
-            description = modify_and_get_description(div_content)
+        if div_content is not None:
+            div_content = remove_attributes(div_content)
+            description = div_content.prettify(formatter="html")
+            if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
+                description = markdown_converter(description)

-        def get_job_type(
-            soup_job_type: BeautifulSoup,
-        ) -> list[JobType] | None:
-            """
-            Gets the job type from job page
-            :param soup_job_type:
-            :return: JobType
-            """
-            h3_tag = soup_job_type.find(
-                "h3",
-                class_="description__job-criteria-subheader",
-                string=lambda text: "Employment type" in text,
-            )
-            employment_type = None
-            if h3_tag:
-                employment_type_span = h3_tag.find_next_sibling(
-                    "span",
-                    class_="description__job-criteria-text description__job-criteria-text--criteria",
-                )
-                if employment_type_span:
-                    employment_type = employment_type_span.get_text(strip=True)
-                    employment_type = employment_type.lower()
-                    employment_type = employment_type.replace("-", "")
-
-            return [get_enum_from_job_type(employment_type)] if employment_type else []
-
-        return description, get_job_type(soup)
+        h3_tag = soup.find(
+            "h3", text=lambda text: text and "Job function" in text.strip()
+        )
+        job_function = None
+        if h3_tag:
+            job_function_span = h3_tag.find_next(
+                "span", class_="description__job-criteria-text"
+            )
+            if job_function_span:
+                job_function = job_function_span.text.strip()
+        return {
+            "description": description,
+            "job_level": self._parse_job_level(soup),
+            "company_industry": self._parse_company_industry(soup),
+            "job_type": self._parse_job_type(soup),
+            "job_url_direct": self._parse_job_url_direct(soup),
+            "logo_photo_url": soup.find("img", {"class": "artdeco-entity-image"}).get(
+                "data-delayed-url"
+            ),
+            "job_function": job_function,
+        }

-    def get_location(self, metadata_card: Optional[Tag]) -> Location:
+    def _get_location(self, metadata_card: Optional[Tag]) -> Location:
        """
        Extracts the location data from the job metadata card.
        :param metadata_card
@@ -279,28 +300,113 @@ class LinkedInScraper(Scraper):
            )
        elif len(parts) == 3:
            city, state, country = parts
-            location = Location(
-                city=city,
-                state=state,
-                country=Country.from_string(country),
-            )
+            country = Country.from_string(country)
+            location = Location(city=city, state=state, country=country)

        return location
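    # Behavior sketch of the three-part branch above (sample text invented):
    #
    #   parts = "Austin, Texas, United States".split(", ")
    #   city, state, country = parts  # -> ("Austin", "Texas", "United States")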
    @staticmethod
-    def headers() -> dict:
+    def _parse_job_type(soup_job_type: BeautifulSoup) -> list[JobType] | None:
"""
Gets the job type from job page
:param soup_job_type:
:return: JobType
"""
h3_tag = soup_job_type.find(
"h3",
class_="description__job-criteria-subheader",
string=lambda text: "Employment type" in text,
)
employment_type = None
if h3_tag:
employment_type_span = h3_tag.find_next_sibling(
"span",
class_="description__job-criteria-text description__job-criteria-text--criteria",
)
if employment_type_span:
employment_type = employment_type_span.get_text(strip=True)
employment_type = employment_type.lower()
employment_type = employment_type.replace("-", "")
return [get_enum_from_job_type(employment_type)] if employment_type else []
@staticmethod
def _parse_job_level(soup_job_level: BeautifulSoup) -> str | None:
"""
Gets the job level from job page
:param soup_job_level:
:return: str
"""
h3_tag = soup_job_level.find(
"h3",
class_="description__job-criteria-subheader",
string=lambda text: "Seniority level" in text,
)
job_level = None
if h3_tag:
job_level_span = h3_tag.find_next_sibling(
"span",
class_="description__job-criteria-text description__job-criteria-text--criteria",
)
if job_level_span:
job_level = job_level_span.get_text(strip=True)
return job_level
@staticmethod
def _parse_company_industry(soup_industry: BeautifulSoup) -> str | None:
"""
Gets the company industry from job page
:param soup_industry:
:return: str
"""
h3_tag = soup_industry.find(
"h3",
class_="description__job-criteria-subheader",
string=lambda text: "Industries" in text,
)
industry = None
if h3_tag:
industry_span = h3_tag.find_next_sibling(
"span",
class_="description__job-criteria-text description__job-criteria-text--criteria",
)
if industry_span:
industry = industry_span.get_text(strip=True)
return industry
def _parse_job_url_direct(self, soup: BeautifulSoup) -> str | None:
"""
Gets the job url direct from job page
:param soup:
:return: str
"""
job_url_direct = None
job_url_direct_content = soup.find("code", id="applyUrl")
if job_url_direct_content:
job_url_direct_match = self.job_url_direct_regex.search(
job_url_direct_content.decode_contents().strip()
)
if job_url_direct_match:
job_url_direct = unquote(job_url_direct_match.group())
return job_url_direct
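    # A standalone demo of the direct-url extraction above (the HTML snippet is
    # invented; the real page embeds the target inside a <code id="applyUrl"> tag):
    #
    #   import re
    #   from urllib.parse import unquote
    #
    #   job_url_direct_regex = re.compile(r'(?<=\?url=)[^"]+')
    #   snippet = '"https://www.linkedin.com/redirect?url=https%3A%2F%2Fjobs.example.com%2F123"'
    #   match = job_url_direct_regex.search(snippet)
    #   print(unquote(match.group()))  # -> https://jobs.example.com/123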
+    @staticmethod
+    def job_type_code(job_type_enum: JobType) -> str:
+        return {
+            JobType.FULL_TIME: "F",
+            JobType.PART_TIME: "P",
+            JobType.INTERNSHIP: "I",
+            JobType.CONTRACT: "C",
+            JobType.TEMPORARY: "T",
+        }.get(job_type_enum, "")

-        return {
-            'authority': 'www.linkedin.com',
-            'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
-            'accept-language': 'en-US,en;q=0.9',
-            'cache-control': 'max-age=0',
-            'sec-ch-ua': '"Not_A Brand";v="8", "Chromium";v="120", "Google Chrome";v="120"',
-            # 'sec-ch-ua-mobile': '?0',
-            # 'sec-ch-ua-platform': '"macOS"',
-            # 'sec-fetch-dest': 'document',
-            # 'sec-fetch-mode': 'navigate',
-            # 'sec-fetch-site': 'none',
-            # 'sec-fetch-user': '?1',
-            'upgrade-insecure-requests': '1',
-            'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
-        }
+    headers = {
+        "authority": "www.linkedin.com",
+        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
+        "accept-language": "en-US,en;q=0.9",
+        "cache-control": "max-age=0",
+        "upgrade-insecure-requests": "1",
+        "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
+    }


@@ -1,34 +1,149 @@
+from __future__ import annotations
+
import re
-import numpy as np
+import logging
+from itertools import cycle

-import tls_client
import requests
+import tls_client
+import numpy as np
+from markdownify import markdownify as md
from requests.adapters import HTTPAdapter, Retry

-from ..jobs import JobType
+from ..jobs import CompensationInterval, JobType

+logger = logging.getLogger("JobSpy")
+logger.propagate = False
+if not logger.handlers:
+    logger.setLevel(logging.INFO)
+    console_handler = logging.StreamHandler()
+    format = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
+    formatter = logging.Formatter(format)
+    console_handler.setFormatter(formatter)
+    logger.addHandler(console_handler)

-def modify_and_get_description(soup):
-    for li in soup.find_all('li'):
-        li.string = "- " + li.get_text()
-
-    description = soup.get_text(separator='\n').strip()
-    description = re.sub(r'\n+', '\n', description)
-    return description
-
-
-def count_urgent_words(description: str) -> int:
-    """
-    Count the number of urgent words or phrases in a job description.
-    """
-    urgent_patterns = re.compile(
-        r"\burgen(t|cy)|\bimmediate(ly)?\b|start asap|\bhiring (now|immediate(ly)?)\b",
-        re.IGNORECASE,
-    )
-    matches = re.findall(urgent_patterns, description)
-    count = len(matches)
-    return count
+class RotatingProxySession:
+    def __init__(self, proxies=None):
+        if isinstance(proxies, str):
+            self.proxy_cycle = cycle([self.format_proxy(proxies)])
+        elif isinstance(proxies, list):
+            self.proxy_cycle = (
+                cycle([self.format_proxy(proxy) for proxy in proxies])
+                if proxies
+                else None
+            )
+        else:
+            self.proxy_cycle = None

+    @staticmethod
def format_proxy(proxy):
"""Utility method to format a proxy string into a dictionary."""
if proxy.startswith("http://") or proxy.startswith("https://"):
return {"http": proxy, "https": proxy}
return {"http": f"http://{proxy}", "https": f"http://{proxy}"}
class RequestsRotating(RotatingProxySession, requests.Session):
def __init__(self, proxies=None, has_retry=False, delay=1, clear_cookies=False):
RotatingProxySession.__init__(self, proxies=proxies)
requests.Session.__init__(self)
self.clear_cookies = clear_cookies
self.allow_redirects = True
self.setup_session(has_retry, delay)
def setup_session(self, has_retry, delay):
if has_retry:
retries = Retry(
total=3,
connect=3,
status=3,
status_forcelist=[500, 502, 503, 504, 429],
backoff_factor=delay,
)
adapter = HTTPAdapter(max_retries=retries)
self.mount("http://", adapter)
self.mount("https://", adapter)
def request(self, method, url, **kwargs):
if self.clear_cookies:
self.cookies.clear()
if self.proxy_cycle:
next_proxy = next(self.proxy_cycle)
if next_proxy["http"] != "http://localhost":
self.proxies = next_proxy
else:
self.proxies = {}
return requests.Session.request(self, method, url, **kwargs)
class TLSRotating(RotatingProxySession, tls_client.Session):
def __init__(self, proxies=None):
RotatingProxySession.__init__(self, proxies=proxies)
tls_client.Session.__init__(self, random_tls_extension_order=True)
def execute_request(self, *args, **kwargs):
if self.proxy_cycle:
next_proxy = next(self.proxy_cycle)
if next_proxy["http"] != "http://localhost":
self.proxies = next_proxy
else:
self.proxies = {}
response = tls_client.Session.execute_request(self, *args, **kwargs)
response.ok = response.status_code in range(200, 400)
return response
def create_session(
*,
proxies: dict | str | None = None,
is_tls: bool = True,
has_retry: bool = False,
delay: int = 1,
clear_cookies: bool = False,
) -> requests.Session:
"""
Creates a requests session with optional tls, proxy, and retry settings.
:return: A session object
"""
if is_tls:
session = TLSRotating(proxies=proxies)
else:
session = RequestsRotating(
proxies=proxies,
has_retry=has_retry,
delay=delay,
clear_cookies=clear_cookies,
)
return session
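# Example usage of create_session (proxy string and URL are illustrative):
#
#   session = create_session(proxies="user:pass@1.2.3.4:8080", is_tls=False, has_retry=True)
#   resp = session.get("https://httpbin.org/ip", timeout=10)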
def set_logger_level(verbose: int = 2):
"""
Adjusts the logger's level. This function allows the logging level to be changed at runtime.
Parameters:
- verbose: int {0, 1, 2} (default=2, all logs)
"""
if verbose is None:
return
level_name = {2: "INFO", 1: "WARNING", 0: "ERROR"}.get(verbose, "INFO")
level = getattr(logging, level_name.upper(), None)
if level is not None:
logger.setLevel(level)
else:
raise ValueError(f"Invalid log level: {level_name}")
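# Example: show warnings and errors only:
#
#   set_logger_level(1)  # 2 = INFO (default), 1 = WARNING, 0 = ERROR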
def markdown_converter(description_html: str):
if description_html is None:
return None
markdown = md(description_html)
return markdown.strip()
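# Example conversion (markdownify performs the HTML -> Markdown mapping):
#
#   print(markdown_converter("<p>We are <b>hiring</b> now</p>"))
#   # -> We are **hiring** now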
def extract_emails_from_text(text: str) -> list[str] | None:
@@ -38,37 +153,6 @@ def extract_emails_from_text(text: str) -> list[str] | None:
    return email_regex.findall(text)
def create_session(proxy: dict | None = None, is_tls: bool = True, has_retry: bool = False, delay: int = 1) -> requests.Session:
"""
Creates a requests session with optional tls, proxy, and retry settings.
:return: A session object
"""
if is_tls:
session = tls_client.Session(
client_identifier="chrome112",
random_tls_extension_order=True,
)
session.proxies = proxy
else:
session = requests.Session()
session.allow_redirects = True
if proxy:
session.proxies.update(proxy)
if has_retry:
retries = Retry(total=3,
connect=3,
status=3,
status_forcelist=[500, 502, 503, 504, 429],
backoff_factor=delay)
adapter = HTTPAdapter(max_retries=retries)
session.mount('http://', adapter)
session.mount('https://', adapter)
return session
def get_enum_from_job_type(job_type_str: str) -> JobType | None: def get_enum_from_job_type(job_type_str: str) -> JobType | None:
""" """
Given a string, returns the corresponding JobType enum member if a match is found. Given a string, returns the corresponding JobType enum member if a match is found.
@@ -79,18 +163,88 @@ def get_enum_from_job_type(job_type_str: str) -> JobType | None:
res = job_type res = job_type
return res return res
def currency_parser(cur_str):
    # Remove any non-numerical characters
    # except for ',' '.' or '-' (e.g. EUR)
    cur_str = re.sub("[^-0-9.,]", "", cur_str)
    # Remove any 000s separators (either , or .)
    cur_str = re.sub("[.,]", "", cur_str[:-3]) + cur_str[-3:]
    if "." in list(cur_str[-3:]):
        num = float(cur_str)
    elif "," in list(cur_str[-3:]):
        num = float(cur_str.replace(",", "."))
    else:
        num = float(cur_str)
    return np.round(num, 2)
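A few illustrative inputs (values invented) showing how the last three characters decide the decimal separator:
currency_parser("€1.250,50")  # -> 1250.5  (comma treated as decimal point)
currency_parser("$1,000")     # -> 1000.0  (thousands separator stripped)
currency_parser("1000.50")    # -> 1000.5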
def remove_attributes(tag):
for attr in list(tag.attrs):
del tag[attr]
return tag
def extract_salary(
salary_str,
lower_limit=1000,
upper_limit=700000,
hourly_threshold=350,
monthly_threshold=30000,
enforce_annual_salary=False,
):
if not salary_str:
return None, None, None, None
min_max_pattern = r"\$(\d+(?:,\d+)?(?:\.\d+)?)([kK]?)\s*[-—–]\s*(?:\$)?(\d+(?:,\d+)?(?:\.\d+)?)([kK]?)"
def to_int(s):
return int(float(s.replace(",", "")))
def convert_hourly_to_annual(hourly_wage):
return hourly_wage * 2080
def convert_monthly_to_annual(monthly_wage):
return monthly_wage * 12
match = re.search(min_max_pattern, salary_str)
if match:
min_salary = to_int(match.group(1))
max_salary = to_int(match.group(3))
# Handle 'k' suffix for min and max salaries independently
if "k" in match.group(2).lower() or "k" in match.group(4).lower():
min_salary *= 1000
max_salary *= 1000
# Convert to annual if less than the hourly threshold
if min_salary < hourly_threshold:
interval = CompensationInterval.HOURLY.value
annual_min_salary = convert_hourly_to_annual(min_salary)
if max_salary < hourly_threshold:
annual_max_salary = convert_hourly_to_annual(max_salary)
elif min_salary < monthly_threshold:
interval = CompensationInterval.MONTHLY.value
annual_min_salary = convert_monthly_to_annual(min_salary)
if max_salary < monthly_threshold:
annual_max_salary = convert_monthly_to_annual(max_salary)
else:
interval = CompensationInterval.YEARLY.value
annual_min_salary = min_salary
annual_max_salary = max_salary
# Ensure salary range is within specified limits
if (
lower_limit <= annual_min_salary <= upper_limit
and lower_limit <= annual_max_salary <= upper_limit
and annual_min_salary < annual_max_salary
):
if enforce_annual_salary:
return interval, annual_min_salary, annual_max_salary, "USD"
else:
return interval, min_salary, max_salary, "USD"
return None, None, None, None
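A few hedged examples of how this parser behaves (inputs invented; the interval strings assume CompensationInterval values of "hourly"/"monthly"/"yearly"):
extract_salary("$60k - $80k")   # -> ('yearly', 60000, 80000, 'USD')
extract_salary("$25 - $30/hr")  # -> ('hourly', 25, 30, 'USD'); with enforce_annual_salary=True
                                #    the annualized pair (52000, 62400) is returned instead
extract_salary("Competitive")   # -> (None, None, None, None), since no dollar range matches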


@@ -4,36 +4,86 @@ jobspy.scrapers.ziprecruiter
This module contains routines to scrape ZipRecruiter.
"""
from __future__ import annotations

import json
import math
import re
import time
from datetime import datetime
from typing import Optional, Tuple, Any

from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor

from .. import Scraper, ScraperInput, Site
from ..utils import (
    logger,
    extract_emails_from_text,
    create_session,
    markdown_converter,
    remove_attributes,
)
from ...jobs import (
    JobPost,
    Compensation,
    Location,
    JobResponse,
    JobType,
    Country,
    DescriptionFormat,
)
class ZipRecruiterScraper(Scraper):
    base_url = "https://www.ziprecruiter.com"
    api_url = "https://api.ziprecruiter.com"

    def __init__(self, proxies: list[str] | str | None = None):
        """
        Initializes ZipRecruiterScraper with the ZipRecruiter job search url
        """
        super().__init__(Site.ZIP_RECRUITER, proxies=proxies)
        self.scraper_input = None
        self.session = create_session(proxies=proxies)
        self._get_cookies()
        self.delay = 5
        self.jobs_per_page = 20
        self.seen_urls = set()
    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
"""
Scrapes ZipRecruiter for jobs with scraper_input criteria.
:param scraper_input: Information about job search criteria.
:return: JobResponse containing a list of jobs.
"""
self.scraper_input = scraper_input
job_list: list[JobPost] = []
continue_token = None
max_pages = math.ceil(scraper_input.results_wanted / self.jobs_per_page)
for page in range(1, max_pages + 1):
if len(job_list) >= scraper_input.results_wanted:
break
if page > 1:
time.sleep(self.delay)
logger.info(f"ZipRecruiter search page: {page}")
jobs_on_page, continue_token = self._find_jobs_in_page(
scraper_input, continue_token
)
if jobs_on_page:
job_list.extend(jobs_on_page)
else:
break
if not continue_token:
break
return JobResponse(jobs=job_list[: scraper_input.results_wanted])
def _find_jobs_in_page(
        self, scraper_input: ScraperInput, continue_token: str | None = None
    ) -> Tuple[list[JobPost], Optional[str]]:
""" """
@@ -42,170 +92,157 @@ class ZipRecruiterScraper(Scraper):
:param continue_token: :param continue_token:
:return: jobs found on page :return: jobs found on page
""" """
        jobs_list = []
        params = self._add_params(scraper_input)
        if continue_token:
            params["continue_from"] = continue_token
        try:
            res = self.session.get(
                f"{self.api_url}/jobs-app/jobs", headers=self.headers, params=params
            )
            if res.status_code not in range(200, 400):
                if res.status_code == 429:
                    err = "429 Response - Blocked by ZipRecruiter for too many requests"
                else:
                    err = f"ZipRecruiter response status code {res.status_code}"
                    err += f" with response: {res.text}"  # ZipRecruiter likely not available in EU
                logger.error(err)
                return jobs_list, ""
        except Exception as e:
            if "Proxy responded with" in str(e):
                logger.error("ZipRecruiter: Bad proxy")
            else:
                logger.error(f"ZipRecruiter: {str(e)}")
            return jobs_list, ""
        res_data = res.json()
        jobs_list = res_data.get("jobs", [])
        next_continue_token = res_data.get("continue", None)
        with ThreadPoolExecutor(max_workers=self.jobs_per_page) as executor:
            job_results = [executor.submit(self._process_job, job) for job in jobs_list]
        job_list = list(filter(None, (result.result() for result in job_results)))
        return job_list, next_continue_token
    def _process_job(self, job: dict) -> JobPost | None:
        """
        Processes an individual job dict from the response
        """
title = job.get("name") title = job.get("name")
job_url = job.get("job_url") job_url = f"{self.base_url}/jobs//j?lvk={job['listing_key']}"
if job_url in self.seen_urls:
return
self.seen_urls.add(job_url)
job_description_html = job.get("job_description", "").strip() description = job.get("job_description", "").strip()
description_soup = BeautifulSoup(job_description_html, "html.parser") listing_type = job.get("buyer_type", "")
description = modify_and_get_description(description_soup) description = (
markdown_converter(description)
company = job["hiring_company"].get("name") if "hiring_company" in job else None if self.scraper_input.description_format == DescriptionFormat.MARKDOWN
else description
)
company = job.get("hiring_company", {}).get("name")
country_value = "usa" if job.get("job_country") == "US" else "canada" country_value = "usa" if job.get("job_country") == "US" else "canada"
country_enum = Country.from_string(country_value) country_enum = Country.from_string(country_value)
location = Location( location = Location(
city=job.get("job_city"), state=job.get("job_state"), country=country_enum city=job.get("job_city"), state=job.get("job_state"), country=country_enum
) )
job_type = ZipRecruiterScraper.get_job_type_enum( job_type = self._get_job_type_enum(
job.get("employment_type", "").replace("_", "").lower() job.get("employment_type", "").replace("_", "").lower()
) )
date_posted = datetime.fromisoformat(job["posted_time"].rstrip("Z")).date()
save_job_url = job.get("SaveJobURL", "") comp_interval = job.get("compensation_interval")
posted_time_match = re.search( comp_interval = "yearly" if comp_interval == "annual" else comp_interval
r"posted_time=(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)", save_job_url comp_min = int(job["compensation_min"]) if "compensation_min" in job else None
) comp_max = int(job["compensation_max"]) if "compensation_max" in job else None
if posted_time_match: comp_currency = job.get("compensation_currency")
date_time_str = posted_time_match.group(1) description_full, job_url_direct = self._get_descr(job_url)
date_posted_obj = datetime.strptime(date_time_str, "%Y-%m-%dT%H:%M:%SZ")
date_posted = date_posted_obj.date()
else:
date_posted = date.today()
return JobPost( return JobPost(
id=str(job["listing_key"]),
title=title, title=title,
company_name=company, company_name=company,
location=location, location=location,
job_type=job_type, job_type=job_type,
compensation=Compensation( compensation=Compensation(
interval="yearly" interval=comp_interval,
if job.get("compensation_interval") == "annual" min_amount=comp_min,
else job.get("compensation_interval"), max_amount=comp_max,
min_amount=int(job["compensation_min"]) currency=comp_currency,
if "compensation_min" in job
else None,
max_amount=int(job["compensation_max"])
if "compensation_max" in job
else None,
currency=job.get("compensation_currency"),
), ),
date_posted=date_posted, date_posted=date_posted,
job_url=job_url, job_url=job_url,
description=description, description=description_full if description_full else description,
emails=extract_emails_from_text(description) if description else None, emails=extract_emails_from_text(description) if description else None,
num_urgent_words=count_urgent_words(description) if description else None, job_url_direct=job_url_direct,
listing_type=listing_type,
) )
    def _get_descr(self, job_url):
        res = self.session.get(job_url, headers=self.headers, allow_redirects=True)
description_full = job_url_direct = None
if res.ok:
soup = BeautifulSoup(res.text, "html.parser")
job_descr_div = soup.find("div", class_="job_description")
company_descr_section = soup.find("section", class_="company_description")
job_description_clean = (
remove_attributes(job_descr_div).prettify(formatter="html")
if job_descr_div
else ""
)
company_description_clean = (
remove_attributes(company_descr_section).prettify(formatter="html")
if company_descr_section
else ""
)
description_full = job_description_clean + company_description_clean
script_tag = soup.find("script", type="application/json")
if script_tag:
job_json = json.loads(script_tag.string)
job_url_val = job_json["model"]["saveJobURL"]
m = re.search(r"job_url=(.+)", job_url_val)
if m:
job_url_direct = m.group(1)
if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
description_full = markdown_converter(description_full)
return description_full, job_url_direct
def _get_cookies(self):
data = "event_type=session&logged_in=false&number_of_retry=1&property=model%3AiPhone&property=os%3AiOS&property=locale%3Aen_us&property=app_build_number%3A4734&property=app_version%3A91.0&property=manufacturer%3AApple&property=timestamp%3A2024-01-12T12%3A04%3A42-06%3A00&property=screen_height%3A852&property=os_version%3A16.6.1&property=source%3Ainstall&property=screen_width%3A393&property=device_model%3AiPhone%2014%20Pro&property=brand%3AApple" data = "event_type=session&logged_in=false&number_of_retry=1&property=model%3AiPhone&property=os%3AiOS&property=locale%3Aen_us&property=app_build_number%3A4734&property=app_version%3A91.0&property=manufacturer%3AApple&property=timestamp%3A2024-01-12T12%3A04%3A42-06%3A00&property=screen_height%3A852&property=os_version%3A16.6.1&property=source%3Ainstall&property=screen_width%3A393&property=device_model%3AiPhone%2014%20Pro&property=brand%3AApple"
self.session.post(url, data=data, headers=ZipRecruiterScraper.headers()) url = f"{self.api_url}/jobs-app/event"
self.session.post(url, data=data, headers=self.headers)
    @staticmethod
    def _get_job_type_enum(job_type_str: str) -> list[JobType] | None:
        for job_type in JobType:
            if job_type_str in job_type.value:
                return [job_type]
        return None
    @staticmethod
    def _add_params(scraper_input) -> dict[str, str | Any]:
        params = {
            "search": scraper_input.search_term,
            "location": scraper_input.location,
        }
        if scraper_input.hours_old:
            params["days"] = max(scraper_input.hours_old // 24, 1)
        job_type_map = {JobType.FULL_TIME: "full_time", JobType.PART_TIME: "part_time"}
        if scraper_input.job_type:
            job_type = scraper_input.job_type
            params["employment_type"] = job_type_map.get(job_type, job_type.value[0])
        if scraper_input.easy_apply:
            params["zipapply"] = 1
        if scraper_input.is_remote:
            params["remote"] = 1
        if scraper_input.distance:
            params["radius"] = scraper_input.distance
        return {k: v for k, v in params.items() if v is not None}

    headers = {
        "Host": "api.ziprecruiter.com",
        "accept": "*/*",
        "x-zr-zva-override": "100000000;vid:ZT1huzm_EQlDTVEc",


@@ -5,10 +5,10 @@ import pandas as pd
def test_all():
    result = scrape_jobs(
        site_name=["linkedin", "indeed", "zip_recruiter", "glassdoor"],
        search_term="engineer",
        results_wanted=5,
    )
    assert (
        isinstance(result, pd.DataFrame) and len(result) == 20
    ), "Result should be a non-empty DataFrame"


@@ -2,10 +2,12 @@ from ..jobspy import scrape_jobs
import pandas as pd
def test_glassdoor():
    result = scrape_jobs(
        site_name="glassdoor",
        search_term="engineer",
        results_wanted=5,
    )
    assert (
        isinstance(result, pd.DataFrame) and len(result) == 5
    ), "Result should be a non-empty DataFrame"


@@ -4,8 +4,10 @@ import pandas as pd
def test_indeed():
    result = scrape_jobs(
        site_name="indeed",
        search_term="engineer",
        results_wanted=5,
    )
    assert (
        isinstance(result, pd.DataFrame) and len(result) == 5
    ), "Result should be a non-empty DataFrame"


@@ -3,10 +3,7 @@ import pandas as pd
def test_linkedin():
    result = scrape_jobs(site_name="linkedin", search_term="engineer", results_wanted=5)
    assert (
        isinstance(result, pd.DataFrame) and len(result) == 5
    ), "Result should be a non-empty DataFrame"


@@ -4,10 +4,9 @@ import pandas as pd
def test_ziprecruiter():
    result = scrape_jobs(
        site_name="zip_recruiter", search_term="software engineer", results_wanted=5
    )
    assert (
        isinstance(result, pd.DataFrame) and len(result) == 5
    ), "Result should be a non-empty DataFrame"