Compare commits

...

44 Commits

Author SHA1 Message Date
Cullen Watson
209e0e65b6 fix:malaysia indeed (#180) 2024-08-03 22:48:53 -05:00
Cullen Watson
8570c0651e fix:key error (#176) 2024-07-21 13:05:18 -05:00
Cullen Watson
8678b0bbe4 enh: test on pr (#174) 2024-07-19 14:25:25 -05:00
Cullen Watson
60d4d911c9 lock file (#173) 2024-07-17 21:21:22 -05:00
Lluís Salord Quetglas
2a0cba8c7e FEAT: Optional conversion to annual and known salary source (#170) 2024-07-17 21:05:33 -05:00
Mason DePalma
de70189fa2 Update pyproject.toml (#172)
Changed Numpy to the most recent version so the package can properly install
2024-07-17 20:54:08 -05:00
Cullen Watson
b55c0eb86d docs:readme 2024-07-16 19:24:38 -05:00
Cullen Watson
88c95c4ad5 enh: estimated salary (#169) 2024-07-16 19:20:34 -05:00
Cullen Watson
d8d33d602f docs: readme 2024-07-15 21:30:11 -05:00
Cullen Watson
6330c14879 minor fix 2024-07-15 21:19:01 -05:00
Ali Bakhshi Ilani
48631ea271 Add company industry and job level to linkedin scraper (#166) 2024-07-15 21:07:39 -05:00
Cullen Watson
edffe18e65 enh: listing source (#168) 2024-07-15 20:30:04 -05:00
Lluís Salord Quetglas
0988230a24 FEAT: Add Glassdoor logo data if available (#167) 2024-07-15 20:25:18 -05:00
Cullen Watson
d000a81eb3 Salary parse (#163) 2024-06-09 17:45:38 -05:00
Cullen Watson
ccb0c17660 enh: ziprecruiter full description (#162) 2024-06-09 16:21:01 -05:00
Cullen Watson
df339610fa docs: readme 2024-05-29 19:32:32 -05:00
Cullen Watson
c501006bd8 docs: readme 2024-05-28 16:04:26 -05:00
Cullen Watson
89a3ee231c enh(li): job function (#160) 2024-05-28 16:01:29 -05:00
Cullen
6439f71433 chore: version 2024-05-28 15:39:24 -05:00
adamagassi
7f6271b2e0 LinkedIn scraper fixes: (#159)
Correct initial page offset calculation
Separate page variable from request counter
Fix job offset starting value
Increment offset by number of jobs returned instead of expected value
2024-05-28 15:38:13 -05:00
Cullen Watson
5cb7ffe5fd enh: proxies (#157)
* enh: proxies

* enh: proxies
2024-05-25 14:04:09 -05:00
Cullen Watson
cd29f79796 docs: readme 2024-05-25 11:46:23 -05:00
Cullen Watson
65d2e5e707 Update pyproject.toml 2024-05-20 11:46:36 -05:00
fasih hussain
08d63a87a2 chore: id added for JobPost schema (#152) 2024-05-20 11:45:52 -05:00
Cullen
1ffdb1756f fix: dup line 2024-04-30 12:11:48 -05:00
Cullen Watson
1185693422 delete empty file 2024-04-30 12:06:20 -05:00
Lluís Salord Quetglas
dcd7144318 FIX: Allow Indeed search term with complex syntax (#139) 2024-04-30 12:05:43 -05:00
Cullen Watson
bf73c061bd enh: linkedin company logo (#141) 2024-04-30 12:03:10 -05:00
Lluís Salord Quetglas
8dd08ed9fd FEAT: Allow LinkedIn scraper to get external job apply url (#140) 2024-04-30 11:36:01 -05:00
Cullen Watson
5d3df732e6 docs: readme 2024-03-12 20:46:25 -05:00
Kellen Mace
86f858e06d Update scrape_jobs() parameters info in readme (#130) 2024-03-12 20:45:13 -05:00
Cullen
1089d1f0a5 docs: readme 2024-03-11 21:30:57 -05:00
Cullen
3e93454738 fix(indeed): readd param 2024-03-11 21:23:20 -05:00
Cullen Watson
0d150d519f docs: readme 2024-03-11 14:52:20 -05:00
Cullen Watson
cc3497f929 docs: readme 2024-03-11 14:45:17 -05:00
Cullen Watson
5986f75346 docs: readme 2024-03-11 14:41:12 -05:00
VitaminB16
4b7bdb9313 feat: Adjust log verbosity via verbose arg (#128) 2024-03-11 14:38:44 -05:00
Cullen Watson
80213f28d2 chore: version 2024-03-11 09:43:12 -05:00
Cullen Watson
ada38532c3 fix: indeed empty location term 2024-03-11 09:42:43 -05:00
Cullen Watson
3b0017964c fix: indeed empty search term 2024-03-11 09:21:11 -05:00
VitaminB16
94d8f555fd format: Apply Black formatter to the codebase (#127) 2024-03-10 23:36:27 -05:00
Cullen Watson
e8b4b376b8 docs: readme 2024-03-09 13:40:34 -06:00
Cullen Watson
54ac1bad16 docs: readme 2024-03-09 01:49:05 -06:00
Cullen Watson
0a669e9ba8 enh: indeed more fields (#126) 2024-03-09 01:40:01 -06:00
22 changed files with 2506 additions and 1854 deletions

.github/workflows/python-test.yml (new file, 22 lines)

@@ -0,0 +1,22 @@
name: Python Tests
on:
pull_request:
branches:
- main
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.8'
- name: Install dependencies
run: |
pip install poetry
poetry install
- name: Run tests
run: poetry run pytest src/tests/test_all.py

.pre-commit-config.yaml (new file, 7 lines)

@@ -0,0 +1,7 @@
repos:
- repo: https://github.com/psf/black
rev: 24.2.0
hooks:
- id: black
language_version: python
args: [--line-length=88, --quiet]
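With this config committed, a contributor would typically enable the hook once per clone; standard pre-commit CLI usage (assumed here, not shown in the diff):

    pip install pre-commit
    pre-commit install          # registers the Black hook in .git/hooks
    pre-commit run --all-files  # optional one-off formatting pass over the repo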

README.md (165 lines changed)

@@ -11,17 +11,14 @@ work with us.*
 - Scrapes job postings from **LinkedIn**, **Indeed**, **Glassdoor**, & **ZipRecruiter** simultaneously
 - Aggregates the job postings in a Pandas DataFrame
-- Proxy support
-[Video Guide for JobSpy](https://www.youtube.com/watch?v=RuP1HrAZnxs&pp=ygUgam9icyBzY3JhcGVyIGJvdCBsaW5rZWRpbiBpbmRlZWQ%3D) - Updated for release v1.1.3
+- Proxies support
 ![jobspy](https://github.com/cullenwatson/JobSpy/assets/78247585/ec7ef355-05f6-4fd3-8161-a817e31c5c57)
 ### Installation
 ```
-pip install python-jobspy
+pip install -U python-jobspy
 ```
 _Python version >= [3.10](https://www.python.org/downloads/release/python-3100/) required_
@@ -37,18 +34,22 @@ jobs = scrape_jobs(
     search_term="software engineer",
     location="Dallas, TX",
     results_wanted=20,
-    hours_old=72, # (only linkedin is hour specific, others round up to days old)
-    country_indeed='USA' # only needed for indeed / glassdoor
+    hours_old=72, # (only Linkedin/Indeed is hour specific, others round up to days old)
+    country_indeed='USA', # only needed for indeed / glassdoor
+    # linkedin_fetch_description=True # get full description , direct job url , company industry and job level (seniority level) for linkedin (slower)
+    # proxies=["208.195.175.46:65095", "208.195.175.45:65095", "localhost"],
 )
 print(f"Found {len(jobs)} jobs")
 print(jobs.head())
-jobs.to_csv("jobs.csv", quoting=csv.QUOTE_NONNUMERIC, escapechar="\\", index=False) # to_xlsx
+jobs.to_csv("jobs.csv", quoting=csv.QUOTE_NONNUMERIC, escapechar="\\", index=False) # to_excel
 ```
 ### Output
 ```
-SITE TITLE COMPANY_NAME CITY STATE JOB_TYPE INTERVAL MIN_AMOUNT MAX_AMOUNT JOB_URL DESCRIPTION
+SITE TITLE COMPANY CITY STATE JOB_TYPE INTERVAL MIN_AMOUNT MAX_AMOUNT JOB_URL DESCRIPTION
 indeed Software Engineer AMERICAN SYSTEMS Arlington VA None yearly 200000 150000 https://www.indeed.com/viewjob?jk=5e409e577046... THIS POSITION COMES WITH A 10K SIGNING BONUS!...
 indeed Senior Software Engineer TherapyNotes.com Philadelphia PA fulltime yearly 135000 110000 https://www.indeed.com/viewjob?jk=da39574a40cb... About Us TherapyNotes is the national leader i...
 linkedin Software Engineer - Early Career Lockheed Martin Sunnyvale CA fulltime yearly None None https://www.linkedin.com/jobs/view/3693012711 Description:By bringing together people that u...
@@ -60,55 +61,121 @@ zip_recruiter Software Developer TEKsystems Phoenix
 ### Parameters for `scrape_jobs()`
 ```plaintext
-Required
-├── site_type (List[enum]): linkedin, zip_recruiter, indeed, glassdoor
-└── search_term (str)
 Optional
-├── location (int)
-├── distance (int): in miles
-├── job_type (enum): fulltime, parttime, internship, contract
-├── proxy (str): in format 'http://user:pass@host:port'
+├── site_name (list|str):
+|    linkedin, zip_recruiter, indeed, glassdoor
+|    (default is all four)
+├── search_term (str)
+├── location (str)
+├── distance (int):
+|    in miles, default 50
+├── job_type (str):
+|    fulltime, parttime, internship, contract
+├── proxies (list):
+|    in format ['user:pass@host:port', 'localhost']
+|    each job board scraper will round robin through the proxies
 ├── is_remote (bool)
-├── linkedin_fetch_description (bool): fetches full description for LinkedIn (slower)
-├── results_wanted (int): number of job results to retrieve for each site specified in 'site_type'
-├── easy_apply (bool): filters for jobs that are hosted on the job board site
-├── linkedin_company_ids (list[int): searches for linkedin jobs with specific company ids
-├── description_format (enum): markdown, html (format type of the job descriptions)
-├── country_indeed (enum): filters the country on Indeed (see below for correct spelling)
-├── offset (num): starts the search from an offset (e.g. 25 will start the search from the 25th result)
-├── hours_old (int): filters jobs by the number of hours since the job was posted (all but LinkedIn rounds up to next day)
+├── results_wanted (int):
+|    number of job results to retrieve for each site specified in 'site_name'
+├── easy_apply (bool):
+|    filters for jobs that are hosted on the job board site
+├── description_format (str):
+|    markdown, html (Format type of the job descriptions. Default is markdown.)
+├── offset (int):
+|    starts the search from an offset (e.g. 25 will start the search from the 25th result)
+├── hours_old (int):
+|    filters jobs by the number of hours since the job was posted
+|    (ZipRecruiter and Glassdoor round up to next day.)
+├── verbose (int) {0, 1, 2}:
+|    Controls the verbosity of the runtime printouts
+|    (0 prints only errors, 1 is errors+warnings, 2 is all logs. Default is 2.)
+├── linkedin_fetch_description (bool):
+|    fetches full description and direct job url for LinkedIn (Increases requests by O(n))
+├── linkedin_company_ids (list[int]):
+|    searches for linkedin jobs with specific company ids
+|
+├── country_indeed (str):
+|    filters the country on Indeed & Glassdoor (see below for correct spelling)
+|
+├── enforce_annual_salary (bool):
+|    converts wages to annual salary
 ```
+```
+├── Indeed limitations:
+|    Only one from this list can be used in a search:
+|    - hours_old
+|    - job_type & is_remote
+|    - easy_apply
+└── LinkedIn limitations:
+|    Only one from this list can be used in a search:
+|    - hours_old
+|    - easy_apply
+```
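Putting the new parameters together, a search that respects the Indeed limitations might look like this sketch (values are illustrative and the proxy entries are placeholders):

```python
from jobspy import scrape_jobs

jobs = scrape_jobs(
    site_name=["indeed", "linkedin"],
    search_term="data engineer",
    location="Austin, TX",
    distance=25,
    results_wanted=40,
    hours_old=72,          # per the limitations above, skip easy_apply/job_type here
    country_indeed="USA",
    proxies=["user:pass@203.0.113.1:8080", "localhost"],  # placeholder proxies
    enforce_annual_salary=True,  # hourly/weekly/monthly wages normalized to yearly
    verbose=1,             # errors + warnings only
)
```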
 ### JobPost Schema
 ```plaintext
 JobPost
-├── title (str)
-├── company (str)
-├── company_url (str)
-├── job_url (str)
-├── location (object)
-│ ├── country (str)
-│ ├── city (str)
-│ ├── state (str)
-├── description (str)
-├── job_type (str): fulltime, parttime, internship, contract
-├── compensation (object)
-│ ├── interval (str): yearly, monthly, weekly, daily, hourly
-│ ├── min_amount (int)
-│ ├── max_amount (int)
-│ └── currency (enum)
-└── date_posted (date)
-├── emails (str)
-├── num_urgent_words (int)
-└── is_remote (bool)
+├── title
+├── company
+├── company_url
+├── job_url
+├── location
+│ ├── country
+│ ├── city
+│ ├── state
+├── description
+├── job_type: fulltime, parttime, internship, contract
+├── job_function
+│ ├── interval: yearly, monthly, weekly, daily, hourly
+│ ├── min_amount
+│ ├── max_amount
+│ ├── currency
+│ └── salary_source: direct_data, description (parsed from posting)
+├── date_posted
+├── emails
+└── is_remote
+Linkedin specific
+└── job_level
+Linkedin & Indeed specific
+└── company_industry
+Indeed specific
+├── company_country
+├── company_addresses
+├── company_employees_label
+├── company_revenue_label
+├── company_description
+├── ceo_name
+├── ceo_photo_url
+├── logo_photo_url
+└── banner_photo_url
 ```
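Since `scrape_jobs()` returns a flat DataFrame, the nested schema above surfaces as columns; a small sketch of reading the salary fields, where `salary_source` distinguishes structured data from values parsed out of the description:

```python
for _, row in jobs.iterrows():
    if row.get("salary_source") == "description":
        # salary was parsed from the posting text, so treat it as approximate
        print(row["title"], row["interval"], row["min_amount"], row["max_amount"])
```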
 ## Supported Countries for Job Searching
 ### **LinkedIn**
-LinkedIn searches globally & uses only the `location` parameter. You can only fetch 1000 jobs max from the LinkedIn endpoint we're using
+LinkedIn searches globally & uses only the `location` parameter.
 ### **ZipRecruiter**
@@ -141,7 +208,11 @@ You can specify the following countries when searching on Indeed (use the exact
 | Venezuela | Vietnam* | | |
-Glassdoor can only fetch 900 jobs from the endpoint we're using on a given search.
+## Notes
+* Indeed is the best scraper currently with no rate limiting.
+* All the job board endpoints are capped at around 1000 jobs on a given search.
+* LinkedIn is the most restrictive and usually rate limits around the 10th page with one ip. Proxies are a must basically.
 ## Frequently Asked Questions
 ---
@@ -155,7 +226,7 @@ persist, [submit an issue](https://github.com/Bunsly/JobSpy/issues).
 **Q: Received a response code 429?**
 **A:** This indicates that you have been blocked by the job board site for sending too many requests. All of the job board sites are aggressive with blocking. We recommend:
-- Waiting some time between scrapes (site-dependent).
-- Trying a VPN or proxy to change your IP address.
+- Wait some time between scrapes (site-dependent).
+- Try using the proxies param to change your IP address.
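For example (a sketch; the proxy addresses are placeholders):

```python
jobs = scrape_jobs(
    site_name=["linkedin"],
    search_term="software engineer",
    proxies=["user:pass@203.0.113.5:8080", "user:pass@203.0.113.6:8080"],
)
```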
---


@@ -1,30 +0,0 @@
from jobspy import scrape_jobs
import pandas as pd
jobs: pd.DataFrame = scrape_jobs(
site_name=["indeed", "linkedin", "zip_recruiter", "glassdoor"],
search_term="software engineer",
location="Dallas, TX",
results_wanted=25, # be wary the higher it is, the more likely you'll get blocked (a rotating proxy can help)
country_indeed="USA",
# proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
)
# formatting for pandas
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
pd.set_option("display.width", None)
pd.set_option("display.max_colwidth", 50) # set to 0 to see full job url / desc
# 1: output to console
print(jobs)
# 2: output to .csv
jobs.to_csv("./jobs.csv", index=False)
print("outputted to jobs.csv")
# 3: output to .xlsx
# jobs.to_xlsx('jobs.xlsx', index=False)
# 4: display in Jupyter Notebook (1. pip install jupyter 2. jupyter notebook)
# display(jobs)


@@ -1,167 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "00a94b47-f47b-420f-ba7e-714ef219c006",
"metadata": {},
"outputs": [],
"source": [
"from jobspy import scrape_jobs\n",
"import pandas as pd\n",
"from IPython.display import display, HTML"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f773e6c-d9fc-42cc-b0ef-63b739e78435",
"metadata": {},
"outputs": [],
"source": [
"pd.set_option('display.max_columns', None)\n",
"pd.set_option('display.max_rows', None)\n",
"pd.set_option('display.width', None)\n",
"pd.set_option('display.max_colwidth', 50)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1253c1f8-9437-492e-9dd3-e7fe51099420",
"metadata": {},
"outputs": [],
"source": [
"# example 1 (no hyperlinks, USA)\n",
"jobs = scrape_jobs(\n",
" site_name=[\"linkedin\"],\n",
" location='san francisco',\n",
" search_term=\"engineer\",\n",
" results_wanted=5,\n",
"\n",
" # use if you want to use a proxy\n",
" # proxy=\"socks5://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
" proxy=\"http://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
" #proxy=\"https://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
")\n",
"display(jobs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6a581b2d-f7da-4fac-868d-9efe143ee20a",
"metadata": {},
"outputs": [],
"source": [
"# example 2 - remote USA & hyperlinks\n",
"jobs = scrape_jobs(\n",
" site_name=[\"linkedin\", \"zip_recruiter\", \"indeed\"],\n",
" # location='san francisco',\n",
" search_term=\"software engineer\",\n",
" country_indeed=\"USA\",\n",
" hyperlinks=True,\n",
" is_remote=True,\n",
" results_wanted=5, \n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fe8289bc-5b64-4202-9a64-7c117c83fd9a",
"metadata": {},
"outputs": [],
"source": [
"# use if hyperlinks=True\n",
"html = jobs.to_html(escape=False)\n",
"# change max-width: 200px to show more or less of the content\n",
"truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
"display(HTML(truncate_width))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "951c2fe1-52ff-407d-8bb1-068049b36777",
"metadata": {},
"outputs": [],
"source": [
"# example 3 - with hyperlinks, international - linkedin (no zip_recruiter)\n",
"jobs = scrape_jobs(\n",
" site_name=[\"linkedin\"],\n",
" location='berlin',\n",
" search_term=\"engineer\",\n",
" hyperlinks=True,\n",
" results_wanted=5,\n",
" easy_apply=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e37a521-caef-441c-8fc2-2eb5b2e7da62",
"metadata": {},
"outputs": [],
"source": [
"# use if hyperlinks=True\n",
"html = jobs.to_html(escape=False)\n",
"# change max-width: 200px to show more or less of the content\n",
"truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
"display(HTML(truncate_width))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0650e608-0b58-4bf5-ae86-68348035b16a",
"metadata": {},
"outputs": [],
"source": [
"# example 4 - international indeed (no zip_recruiter)\n",
"jobs = scrape_jobs(\n",
" site_name=[\"indeed\"],\n",
" search_term=\"engineer\",\n",
" country_indeed = \"China\",\n",
" hyperlinks=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40913ac8-3f8a-4d7e-ac47-afb88316432b",
"metadata": {},
"outputs": [],
"source": [
"# use if hyperlinks=True\n",
"html = jobs.to_html(escape=False)\n",
"# change max-width: 200px to show more or less of the content\n",
"truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
"display(HTML(truncate_width))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,77 +0,0 @@
from jobspy import scrape_jobs
import pandas as pd
import os
import time
# creates a new csv filename if jobs.csv already exists
csv_filename = "jobs.csv"
counter = 1
while os.path.exists(csv_filename):
csv_filename = f"jobs_{counter}.csv"
counter += 1
# results wanted and offset
results_wanted = 1000
offset = 0
all_jobs = []
# max retries
max_retries = 3
# number of results at each iteration
results_in_each_iteration = 30
while len(all_jobs) < results_wanted:
retry_count = 0
while retry_count < max_retries:
print("Doing from", offset, "to", offset + results_in_each_iteration, "jobs")
try:
jobs = scrape_jobs(
site_name=["indeed"],
search_term="software engineer",
# New York, NY
# Dallas, TX
# Los Angeles, CA
location="Los Angeles, CA",
results_wanted=min(results_in_each_iteration, results_wanted - len(all_jobs)),
country_indeed="USA",
offset=offset,
# proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
)
# Add the scraped jobs to the list
all_jobs.extend(jobs.to_dict('records'))
# Increment the offset for the next page of results
offset += results_in_each_iteration
# Add a delay to avoid rate limiting (you can adjust the delay time as needed)
print(f"Scraped {len(all_jobs)} jobs")
print("Sleeping secs", 100 * (retry_count + 1))
time.sleep(100 * (retry_count + 1)) # sleep between requests to avoid rate limiting
break # Break out of the retry loop if successful
except Exception as e:
print(f"Error: {e}")
retry_count += 1
print("Sleeping secs before retry", 100 * (retry_count + 1))
time.sleep(100 * (retry_count + 1))
if retry_count >= max_retries:
print("Max retries reached. Exiting.")
break
# DataFrame from the collected job data
jobs_df = pd.DataFrame(all_jobs)
# Formatting
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
pd.set_option("display.width", None)
pd.set_option("display.max_colwidth", 50)
print(jobs_df)
jobs_df.to_csv(csv_filename, index=False)
print(f"Outputted to {csv_filename}")

poetry.lock (generated, 2334 lines changed; diff suppressed because it is too large)

poetry.toml (new file, 2 lines)

@@ -0,0 +1,2 @@
[virtualenvs]
in-project = true


@@ -1,10 +1,11 @@
[tool.poetry] [tool.poetry]
name = "python-jobspy" name = "python-jobspy"
version = "1.1.47" version = "1.1.62"
description = "Job scraper for LinkedIn, Indeed, Glassdoor & ZipRecruiter" description = "Job scraper for LinkedIn, Indeed, Glassdoor & ZipRecruiter"
authors = ["Zachary Hampton <zachary@bunsly.com>", "Cullen Watson <cullen@bunsly.com>"] authors = ["Zachary Hampton <zachary@bunsly.com>", "Cullen Watson <cullen@bunsly.com>"]
homepage = "https://github.com/Bunsly/JobSpy" homepage = "https://github.com/Bunsly/JobSpy"
readme = "README.md" readme = "README.md"
keywords = ['jobs-scraper', 'linkedin', 'indeed', 'glassdoor', 'ziprecruiter']
packages = [ packages = [
{ include = "jobspy", from = "src" } { include = "jobspy", from = "src" }
@@ -15,16 +16,22 @@ python = "^3.10"
requests = "^2.31.0" requests = "^2.31.0"
beautifulsoup4 = "^4.12.2" beautifulsoup4 = "^4.12.2"
pandas = "^2.1.0" pandas = "^2.1.0"
NUMPY = "1.24.2" NUMPY = "1.26.3"
pydantic = "^2.3.0" pydantic = "^2.3.0"
tls-client = "^1.0.1" tls-client = "^1.0.1"
markdownify = "^0.11.6" markdownify = "^0.11.6"
regex = "^2024.4.28"
[tool.poetry.group.dev.dependencies] [tool.poetry.group.dev.dependencies]
pytest = "^7.4.1" pytest = "^7.4.1"
jupyter = "^1.0.0" jupyter = "^1.0.0"
black = "*"
pre-commit = "*"
[build-system] [build-system]
requires = ["poetry-core"] requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api" build-backend = "poetry.core.masonry.api"
[tool.black]
line-length = 88


@@ -1,13 +1,16 @@
from __future__ import annotations
import pandas as pd import pandas as pd
from typing import Tuple from typing import Tuple
from concurrent.futures import ThreadPoolExecutor, as_completed from concurrent.futures import ThreadPoolExecutor, as_completed
from .jobs import JobType, Location from .jobs import JobType, Location
from .scrapers.utils import logger, set_logger_level, extract_salary
from .scrapers.indeed import IndeedScraper from .scrapers.indeed import IndeedScraper
from .scrapers.ziprecruiter import ZipRecruiterScraper from .scrapers.ziprecruiter import ZipRecruiterScraper
from .scrapers.glassdoor import GlassdoorScraper from .scrapers.glassdoor import GlassdoorScraper
from .scrapers.linkedin import LinkedInScraper from .scrapers.linkedin import LinkedInScraper
from .scrapers import ScraperInput, Site, JobResponse, Country from .scrapers import SalarySource, ScraperInput, Site, JobResponse, Country
from .scrapers.exceptions import ( from .scrapers.exceptions import (
LinkedInException, LinkedInException,
IndeedException, IndeedException,
@@ -20,24 +23,26 @@ def scrape_jobs(
site_name: str | list[str] | Site | list[Site] | None = None, site_name: str | list[str] | Site | list[Site] | None = None,
search_term: str | None = None, search_term: str | None = None,
location: str | None = None, location: str | None = None,
distance: int | None = None, distance: int | None = 50,
is_remote: bool = False, is_remote: bool = False,
job_type: str | None = None, job_type: str | None = None,
easy_apply: bool | None = None, easy_apply: bool | None = None,
results_wanted: int = 15, results_wanted: int = 15,
country_indeed: str = "usa", country_indeed: str = "usa",
hyperlinks: bool = False, hyperlinks: bool = False,
proxy: str | None = None, proxies: list[str] | str | None = None,
description_format: str = "markdown", description_format: str = "markdown",
linkedin_fetch_description: bool | None = False, linkedin_fetch_description: bool | None = False,
linkedin_company_ids: list[int] | None = None, linkedin_company_ids: list[int] | None = None,
offset: int | None = 0, offset: int | None = 0,
hours_old: int = None, hours_old: int = None,
enforce_annual_salary: bool = False,
verbose: int = 2,
**kwargs, **kwargs,
) -> pd.DataFrame: ) -> pd.DataFrame:
""" """
Simultaneously scrapes job data from multiple job sites. Simultaneously scrapes job data from multiple job sites.
:return: results_wanted: pandas dataframe containing job data :return: pandas dataframe containing job data
""" """
SCRAPER_MAPPING = { SCRAPER_MAPPING = {
Site.LINKEDIN: LinkedInScraper, Site.LINKEDIN: LinkedInScraper,
@@ -45,6 +50,7 @@ def scrape_jobs(
Site.ZIP_RECRUITER: ZipRecruiterScraper, Site.ZIP_RECRUITER: ZipRecruiterScraper,
Site.GLASSDOOR: GlassdoorScraper, Site.GLASSDOOR: GlassdoorScraper,
} }
set_logger_level(verbose)
def map_str_to_site(site_name: str) -> Site: def map_str_to_site(site_name: str) -> Site:
return Site[site_name.upper()] return Site[site_name.upper()]
@@ -69,6 +75,7 @@ def scrape_jobs(
for site in site_name for site in site_name
] ]
return site_types return site_types
country_enum = Country.from_string(country_indeed) country_enum = Country.from_string(country_indeed)
scraper_input = ScraperInput( scraper_input = ScraperInput(
@@ -85,13 +92,16 @@ def scrape_jobs(
results_wanted=results_wanted, results_wanted=results_wanted,
linkedin_company_ids=linkedin_company_ids, linkedin_company_ids=linkedin_company_ids,
offset=offset, offset=offset,
hours_old=hours_old hours_old=hours_old,
) )
def scrape_site(site: Site) -> Tuple[str, JobResponse]: def scrape_site(site: Site) -> Tuple[str, JobResponse]:
scraper_class = SCRAPER_MAPPING[site] scraper_class = SCRAPER_MAPPING[site]
scraper = scraper_class(proxy=proxy) scraper = scraper_class(proxies=proxies)
scraped_data: JobResponse = scraper.scrape(scraper_input) scraped_data: JobResponse = scraper.scrape(scraper_input)
cap_name = site.value.capitalize()
site_name = "ZipRecruiter" if cap_name == "Zip_recruiter" else cap_name
logger.info(f"{site_name} finished scraping")
return site.value, scraped_data return site.value, scraped_data
site_to_jobs_dict = {} site_to_jobs_dict = {}
@@ -109,14 +119,28 @@ def scrape_jobs(
site_value, scraped_data = future.result() site_value, scraped_data = future.result()
site_to_jobs_dict[site_value] = scraped_data site_to_jobs_dict[site_value] = scraped_data
def convert_to_annual(job_data: dict):
if job_data["interval"] == "hourly":
job_data["min_amount"] *= 2080
job_data["max_amount"] *= 2080
if job_data["interval"] == "monthly":
job_data["min_amount"] *= 12
job_data["max_amount"] *= 12
if job_data["interval"] == "weekly":
job_data["min_amount"] *= 52
job_data["max_amount"] *= 52
if job_data["interval"] == "daily":
job_data["min_amount"] *= 260
job_data["max_amount"] *= 260
job_data["interval"] = "yearly"
jobs_dfs: list[pd.DataFrame] = [] jobs_dfs: list[pd.DataFrame] = []
for site, job_response in site_to_jobs_dict.items(): for site, job_response in site_to_jobs_dict.items():
for job in job_response.jobs: for job in job_response.jobs:
job_data = job.dict() job_data = job.dict()
-            job_data[
-                "job_url_hyper"
-            ] = f'<a href="{job_data["job_url"]}">{job_data["job_url"]}</a>'
+            job_url = job_data["job_url"]
+            job_data["job_url_hyper"] = f'<a href="{job_url}">{job_url}</a>'
job_data["site"] = site job_data["site"] = site
job_data["company"] = job_data["company_name"] job_data["company"] = job_data["company_name"]
job_data["job_type"] = ( job_data["job_type"] = (
@@ -142,41 +166,76 @@ def scrape_jobs(
job_data["min_amount"] = compensation_obj.get("min_amount") job_data["min_amount"] = compensation_obj.get("min_amount")
job_data["max_amount"] = compensation_obj.get("max_amount") job_data["max_amount"] = compensation_obj.get("max_amount")
job_data["currency"] = compensation_obj.get("currency", "USD") job_data["currency"] = compensation_obj.get("currency", "USD")
-            else:
-                job_data["interval"] = None
-                job_data["min_amount"] = None
-                job_data["max_amount"] = None
-                job_data["currency"] = None
+                job_data["salary_source"] = SalarySource.DIRECT_DATA.value
+                if enforce_annual_salary and (
+                    job_data["interval"]
+                    and job_data["interval"] != "yearly"
+                    and job_data["min_amount"]
+                    and job_data["max_amount"]
+                ):
+                    convert_to_annual(job_data)
+            else:
+                if country_enum == Country.USA:
+                    (
+                        job_data["interval"],
+                        job_data["min_amount"],
+                        job_data["max_amount"],
+                        job_data["currency"],
+                    ) = extract_salary(
+                        job_data["description"],
+                        enforce_annual_salary=enforce_annual_salary,
+                    )
+                    job_data["salary_source"] = SalarySource.DESCRIPTION.value
+            job_data["salary_source"] = (
+                job_data["salary_source"]
+                if "min_amount" in job_data and job_data["min_amount"]
+                else None
+            )
job_df = pd.DataFrame([job_data]) job_df = pd.DataFrame([job_data])
jobs_dfs.append(job_df) jobs_dfs.append(job_df)
if jobs_dfs: if jobs_dfs:
# Step 1: Filter out all-NA columns from each DataFrame before concatenation # Step 1: Filter out all-NA columns from each DataFrame before concatenation
filtered_dfs = [df.dropna(axis=1, how='all') for df in jobs_dfs] filtered_dfs = [df.dropna(axis=1, how="all") for df in jobs_dfs]
# Step 2: Concatenate the filtered DataFrames # Step 2: Concatenate the filtered DataFrames
jobs_df = pd.concat(filtered_dfs, ignore_index=True) jobs_df = pd.concat(filtered_dfs, ignore_index=True)
# Desired column order # Desired column order
desired_order = [ desired_order = [
"job_url_hyper" if hyperlinks else "job_url", "id",
"site", "site",
"job_url_hyper" if hyperlinks else "job_url",
"job_url_direct",
"title", "title",
"company", "company",
"company_url",
"location", "location",
"job_type", "job_type",
"date_posted", "date_posted",
"salary_source",
"interval", "interval",
"min_amount", "min_amount",
"max_amount", "max_amount",
"currency", "currency",
"is_remote", "is_remote",
"num_urgent_words", "job_level",
"benefits", "job_function",
"company_industry",
"listing_type",
"emails", "emails",
"description", "description",
"company_url",
"company_url_direct",
"company_addresses",
"company_num_employees",
"company_revenue",
"company_description",
"logo_photo_url",
"banner_photo_url",
"ceo_name",
"ceo_photo_url",
] ]
# Step 3: Ensure all desired columns are present, adding missing ones as empty # Step 3: Ensure all desired columns are present, adding missing ones as empty
@@ -188,6 +247,6 @@ def scrape_jobs(
jobs_df = jobs_df[desired_order] jobs_df = jobs_df[desired_order]
# Step 4: Sort the DataFrame as required # Step 4: Sort the DataFrame as required
return jobs_df.sort_values(by=['site', 'date_posted'], ascending=[True, False]) return jobs_df.sort_values(by=["site", "date_posted"], ascending=[True, False])
else: else:
return pd.DataFrame() return pd.DataFrame()


@@ -1,3 +1,5 @@
from __future__ import annotations
from typing import Optional from typing import Optional
from datetime import date from datetime import date
from enum import Enum from enum import Enum
@@ -57,7 +59,7 @@ class JobType(Enum):
class Country(Enum): class Country(Enum):
""" """
Gets the subdomain for Indeed and Glassdoor. Gets the subdomain for Indeed and Glassdoor.
The second item in the tuple is the subdomain for Indeed The second item in the tuple is the subdomain (and API country code if there's a ':' separator) for Indeed
The third item in the tuple is the subdomain (and tld if there's a ':' separator) for Glassdoor The third item in the tuple is the subdomain (and tld if there's a ':' separator) for Glassdoor
""" """
@@ -90,7 +92,7 @@ class Country(Enum):
JAPAN = ("japan", "jp") JAPAN = ("japan", "jp")
KUWAIT = ("kuwait", "kw") KUWAIT = ("kuwait", "kw")
LUXEMBOURG = ("luxembourg", "lu") LUXEMBOURG = ("luxembourg", "lu")
MALAYSIA = ("malaysia", "malaysia") MALAYSIA = ("malaysia", "malaysia:my", "com")
MEXICO = ("mexico", "mx", "com.mx") MEXICO = ("mexico", "mx", "com.mx")
MOROCCO = ("morocco", "ma") MOROCCO = ("morocco", "ma")
NETHERLANDS = ("netherlands", "nl", "nl") NETHERLANDS = ("netherlands", "nl", "nl")
@@ -118,8 +120,8 @@ class Country(Enum):
TURKEY = ("turkey", "tr") TURKEY = ("turkey", "tr")
UKRAINE = ("ukraine", "ua") UKRAINE = ("ukraine", "ua")
UNITEDARABEMIRATES = ("united arab emirates", "ae") UNITEDARABEMIRATES = ("united arab emirates", "ae")
UK = ("uk,united kingdom", "uk", "co.uk") UK = ("uk,united kingdom", "uk:gb", "co.uk")
USA = ("usa,us,united states", "www", "com") USA = ("usa,us,united states", "www:us", "com")
URUGUAY = ("uruguay", "uy") URUGUAY = ("uruguay", "uy")
VENEZUELA = ("venezuela", "ve") VENEZUELA = ("venezuela", "ve")
VIETNAM = ("vietnam", "vn", "com") VIETNAM = ("vietnam", "vn", "com")
@@ -132,7 +134,10 @@ class Country(Enum):
@property @property
def indeed_domain_value(self): def indeed_domain_value(self):
return self.value[1] subdomain, _, api_country_code = self.value[1].partition(":")
if subdomain and api_country_code:
return subdomain, api_country_code.upper()
return self.value[1], self.value[1].upper()
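An entry carrying a ':' therefore splits into (subdomain, API country code), while plain entries fall back to themselves uppercased; roughly:

    Country.USA.indeed_domain_value       # ("www", "US")
    Country.UK.indeed_domain_value        # ("uk", "GB")
    Country.MALAYSIA.indeed_domain_value  # ("malaysia", "MY")  (the fix:malaysia change, #180)
    Country.MEXICO.indeed_domain_value    # ("mx", "MX")  (no ':', so the code is the subdomain uppercased)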
@property @property
def glassdoor_domain_value(self): def glassdoor_domain_value(self):
@@ -153,7 +158,7 @@ class Country(Enum):
"""Convert a string to the corresponding Country enum.""" """Convert a string to the corresponding Country enum."""
country_str = country_str.strip().lower() country_str = country_str.strip().lower()
for country in cls: for country in cls:
country_names = country.value[0].split(',') country_names = country.value[0].split(",")
if country_str in country_names: if country_str in country_names:
return country return country
valid_countries = [country.value for country in cls] valid_countries = [country.value for country in cls]
@@ -163,7 +168,7 @@ class Country(Enum):
class Location(BaseModel): class Location(BaseModel):
country: Country | None = None country: Country | str | None = None
city: Optional[str] = None city: Optional[str] = None
state: Optional[str] = None state: Optional[str] = None
@@ -173,7 +178,12 @@ class Location(BaseModel):
location_parts.append(self.city) location_parts.append(self.city)
if self.state: if self.state:
location_parts.append(self.state) location_parts.append(self.state)
if self.country and self.country not in (Country.US_CANADA, Country.WORLDWIDE): if isinstance(self.country, str):
location_parts.append(self.country)
elif self.country and self.country not in (
Country.US_CANADA,
Country.WORLDWIDE,
):
country_name = self.country.value[0] country_name = self.country.value[0]
if "," in country_name: if "," in country_name:
country_name = country_name.split(",")[0] country_name = country_name.split(",")[0]
@@ -216,22 +226,42 @@ class DescriptionFormat(Enum):
class JobPost(BaseModel): class JobPost(BaseModel):
id: str | None = None
title: str title: str
company_name: str company_name: str | None
job_url: str job_url: str
job_url_direct: str | None = None
location: Optional[Location] location: Optional[Location]
description: str | None = None description: str | None = None
company_url: str | None = None company_url: str | None = None
company_url_direct: str | None = None
job_type: list[JobType] | None = None job_type: list[JobType] | None = None
compensation: Compensation | None = None compensation: Compensation | None = None
date_posted: date | None = None date_posted: date | None = None
benefits: str | None = None
emails: list[str] | None = None emails: list[str] | None = None
num_urgent_words: int | None = None
is_remote: bool | None = None is_remote: bool | None = None
# company_industry: str | None = None listing_type: str | None = None
# linkedin specific
job_level: str | None = None
# linkedin and indeed specific
company_industry: str | None = None
# indeed specific
company_addresses: str | None = None
company_num_employees: str | None = None
company_revenue: str | None = None
company_description: str | None = None
ceo_name: str | None = None
ceo_photo_url: str | None = None
logo_photo_url: str | None = None
banner_photo_url: str | None = None
# linkedin only atm
job_function: str | None = None
class JobResponse(BaseModel): class JobResponse(BaseModel):


@@ -1,10 +1,14 @@
from __future__ import annotations
from abc import ABC, abstractmethod
from ..jobs import ( from ..jobs import (
Enum, Enum,
BaseModel, BaseModel,
JobType, JobType,
JobResponse, JobResponse,
Country, Country,
DescriptionFormat DescriptionFormat,
) )
@@ -14,6 +18,9 @@ class Site(Enum):
ZIP_RECRUITER = "zip_recruiter" ZIP_RECRUITER = "zip_recruiter"
GLASSDOOR = "glassdoor" GLASSDOOR = "glassdoor"
class SalarySource(Enum):
DIRECT_DATA = "direct_data"
DESCRIPTION = "description"
class ScraperInput(BaseModel): class ScraperInput(BaseModel):
site_type: list[Site] site_type: list[Site]
@@ -34,9 +41,10 @@ class ScraperInput(BaseModel):
hours_old: int | None = None hours_old: int | None = None
class Scraper: class Scraper(ABC):
def __init__(self, site: Site, proxy: list[str] | None = None): def __init__(self, site: Site, proxies: list[str] | None = None):
self.proxies = proxies
self.site = site self.site = site
self.proxy = (lambda p: {"http": p, "https": p} if p else None)(proxy)
@abstractmethod
def scrape(self, scraper_input: ScraperInput) -> JobResponse: ... def scrape(self, scraper_input: ScraperInput) -> JobResponse: ...


@@ -4,21 +4,23 @@ jobspy.scrapers.glassdoor
This module contains routines to scrape Glassdoor. This module contains routines to scrape Glassdoor.
""" """
import json
import re
from __future__ import annotations
import re
import json
import requests import requests
from typing import Optional from typing import Optional, Tuple
from datetime import datetime, timedelta from datetime import datetime, timedelta
from concurrent.futures import ThreadPoolExecutor, as_completed from concurrent.futures import ThreadPoolExecutor, as_completed
from ..utils import count_urgent_words, extract_emails_from_text
from .. import Scraper, ScraperInput, Site from .. import Scraper, ScraperInput, Site
from ..utils import extract_emails_from_text
from ..exceptions import GlassdoorException from ..exceptions import GlassdoorException
from ..utils import ( from ..utils import (
create_session, create_session,
markdown_converter, markdown_converter,
logger logger,
) )
from ...jobs import ( from ...jobs import (
JobPost, JobPost,
@@ -27,17 +29,17 @@ from ...jobs import (
Location, Location,
JobResponse, JobResponse,
JobType, JobType,
DescriptionFormat DescriptionFormat,
) )
class GlassdoorScraper(Scraper): class GlassdoorScraper(Scraper):
def __init__(self, proxy: Optional[str] = None): def __init__(self, proxies: list[str] | str | None = None):
""" """
Initializes GlassdoorScraper with the Glassdoor job search url Initializes GlassdoorScraper with the Glassdoor job search url
""" """
site = Site(Site.GLASSDOOR) site = Site(Site.GLASSDOOR)
super().__init__(site, proxy=proxy) super().__init__(site, proxies=proxies)
self.base_url = None self.base_url = None
self.country = None self.country = None
@@ -57,39 +59,36 @@ class GlassdoorScraper(Scraper):
self.scraper_input.results_wanted = min(900, scraper_input.results_wanted) self.scraper_input.results_wanted = min(900, scraper_input.results_wanted)
self.base_url = self.scraper_input.country.get_glassdoor_url() self.base_url = self.scraper_input.country.get_glassdoor_url()
self.session = create_session(self.proxy, is_tls=True, has_retry=True) self.session = create_session(proxies=self.proxies, is_tls=True, has_retry=True)
token = self._get_csrf_token() token = self._get_csrf_token()
self.headers['gd-csrf-token'] = token if token else self.fallback_token self.headers["gd-csrf-token"] = token if token else self.fallback_token
location_id, location_type = self._get_location( location_id, location_type = self._get_location(
scraper_input.location, scraper_input.is_remote scraper_input.location, scraper_input.is_remote
) )
if location_type is None: if location_type is None:
logger.error('Glassdoor: location not parsed') logger.error("Glassdoor: location not parsed")
return JobResponse(jobs=[]) return JobResponse(jobs=[])
-        all_jobs: list[JobPost] = []
+        job_list: list[JobPost] = []
         cursor = None
-        for page in range(
-            1 + (scraper_input.offset // self.jobs_per_page),
-            min(
-                (scraper_input.results_wanted // self.jobs_per_page) + 2,
-                self.max_pages + 1,
-            ),
-        ):
-            logger.info(f'Glassdoor search page: {page}')
+        range_start = 1 + (scraper_input.offset // self.jobs_per_page)
+        tot_pages = (scraper_input.results_wanted // self.jobs_per_page) + 2
+        range_end = min(tot_pages, self.max_pages + 1)
+        for page in range(range_start, range_end):
+            logger.info(f"Glassdoor search page: {page}")
             try:
                 jobs, cursor = self._fetch_jobs_page(
                     scraper_input, location_id, location_type, page, cursor
                 )
-                all_jobs.extend(jobs)
-                if not jobs or len(all_jobs) >= scraper_input.results_wanted:
-                    all_jobs = all_jobs[: scraper_input.results_wanted]
+                job_list.extend(jobs)
+                if not jobs or len(job_list) >= scraper_input.results_wanted:
+                    job_list = job_list[: scraper_input.results_wanted]
                     break
             except Exception as e:
-                logger.error(f'Glassdoor: {str(e)}')
+                logger.error(f"Glassdoor: {str(e)}")
                 break
-        return JobResponse(jobs=all_jobs)
+        return JobResponse(jobs=job_list)
def _fetch_jobs_page( def _fetch_jobs_page(
self, self,
@@ -98,39 +97,48 @@ class GlassdoorScraper(Scraper):
location_type: str, location_type: str,
page_num: int, page_num: int,
cursor: str | None, cursor: str | None,
) -> (list[JobPost], str | None): ) -> Tuple[list[JobPost], str | None]:
""" """
Scrapes a page of Glassdoor for jobs with scraper_input criteria Scrapes a page of Glassdoor for jobs with scraper_input criteria
""" """
jobs = [] jobs = []
self.scraper_input = scraper_input self.scraper_input = scraper_input
try: try:
payload = self._add_payload( payload = self._add_payload(location_id, location_type, page_num, cursor)
location_id, location_type, page_num, cursor
)
response = self.session.post( response = self.session.post(
f"{self.base_url}/graph", headers=self.headers, timeout_seconds=15, data=payload f"{self.base_url}/graph",
headers=self.headers,
timeout_seconds=15,
data=payload,
) )
if response.status_code != 200: if response.status_code != 200:
raise GlassdoorException(f"bad response status code: {response.status_code}") exc_msg = f"bad response status code: {response.status_code}"
raise GlassdoorException(exc_msg)
res_json = response.json()[0] res_json = response.json()[0]
if "errors" in res_json: if "errors" in res_json:
raise ValueError("Error encountered in API response") raise ValueError("Error encountered in API response")
except (requests.exceptions.ReadTimeout, GlassdoorException, ValueError, Exception) as e: except (
logger.error(f'Glassdoor: {str(e)}') requests.exceptions.ReadTimeout,
GlassdoorException,
ValueError,
Exception,
) as e:
logger.error(f"Glassdoor: {str(e)}")
return jobs, None return jobs, None
jobs_data = res_json["data"]["jobListings"]["jobListings"] jobs_data = res_json["data"]["jobListings"]["jobListings"]
with ThreadPoolExecutor(max_workers=self.jobs_per_page) as executor: with ThreadPoolExecutor(max_workers=self.jobs_per_page) as executor:
future_to_job_data = {executor.submit(self._process_job, job): job for job in jobs_data} future_to_job_data = {
executor.submit(self._process_job, job): job for job in jobs_data
}
for future in as_completed(future_to_job_data): for future in as_completed(future_to_job_data):
try: try:
job_post = future.result() job_post = future.result()
if job_post: if job_post:
jobs.append(job_post) jobs.append(job_post)
except Exception as exc: except Exception as exc:
raise GlassdoorException(f'Glassdoor generated an exception: {exc}') raise GlassdoorException(f"Glassdoor generated an exception: {exc}")
return jobs, self.get_cursor_for_page( return jobs, self.get_cursor_for_page(
res_json["data"]["jobListings"]["paginationCursors"], page_num + 1 res_json["data"]["jobListings"]["paginationCursors"], page_num + 1
@@ -140,7 +148,9 @@ class GlassdoorScraper(Scraper):
""" """
Fetches csrf token needed for API by visiting a generic page Fetches csrf token needed for API by visiting a generic page
""" """
res = self.session.get(f'{self.base_url}/Job/computer-science-jobs.htm', headers=self.headers) res = self.session.get(
f"{self.base_url}/Job/computer-science-jobs.htm", headers=self.headers
)
pattern = r'"token":\s*"([^"]+)"' pattern = r'"token":\s*"([^"]+)"'
matches = re.findall(pattern, res.text) matches = re.findall(pattern, res.text)
token = None token = None
@@ -153,19 +163,20 @@ class GlassdoorScraper(Scraper):
Processes a single job and fetches its description. Processes a single job and fetches its description.
""" """
job_id = job_data["jobview"]["job"]["listingId"] job_id = job_data["jobview"]["job"]["listingId"]
job_url = f'{self.base_url}job-listing/j?jl={job_id}' job_url = f"{self.base_url}job-listing/j?jl={job_id}"
if job_url in self.seen_urls: if job_url in self.seen_urls:
return None return None
self.seen_urls.add(job_url) self.seen_urls.add(job_url)
job = job_data["jobview"] job = job_data["jobview"]
title = job["job"]["jobTitleText"] title = job["job"]["jobTitleText"]
company_name = job["header"]["employerNameFromSearch"] company_name = job["header"]["employerNameFromSearch"]
company_id = job_data['jobview']['header']['employer']['id'] company_id = job_data["jobview"]["header"]["employer"]["id"]
location_name = job["header"].get("locationName", "") location_name = job["header"].get("locationName", "")
location_type = job["header"].get("locationType", "") location_type = job["header"].get("locationType", "")
age_in_days = job["header"].get("ageInDays") age_in_days = job["header"].get("ageInDays")
is_remote, location = False, None is_remote, location = False, None
date_posted = (datetime.now() - timedelta(days=age_in_days)).date() if age_in_days is not None else None date_diff = (datetime.now() - timedelta(days=age_in_days)).date()
date_posted = date_diff if age_in_days is not None else None
if location_type == "S": if location_type == "S":
is_remote = True is_remote = True
@@ -177,9 +188,20 @@ class GlassdoorScraper(Scraper):
description = self._fetch_job_description(job_id) description = self._fetch_job_description(job_id)
except: except:
description = None description = None
company_url = f"{self.base_url}Overview/W-EI_IE{company_id}.htm"
company_logo = (
job_data["jobview"].get("overview", {}).get("squareLogoUrl", None)
)
listing_type = (
job_data["jobview"]
.get("header", {})
.get("adOrderSponsorshipLevel", "")
.lower()
)
return JobPost( return JobPost(
id=str(job_id),
title=title, title=title,
company_url=f"{self.base_url}Overview/W-EI_IE{company_id}.htm" if company_id else None, company_url=company_url if company_id else None,
company_name=company_name, company_name=company_name,
date_posted=date_posted, date_posted=date_posted,
job_url=job_url, job_url=job_url,
@@ -188,7 +210,8 @@ class GlassdoorScraper(Scraper):
is_remote=is_remote, is_remote=is_remote,
description=description, description=description,
emails=extract_emails_from_text(description) if description else None, emails=extract_emails_from_text(description) if description else None,
num_urgent_words=count_urgent_words(description) if description else None, logo_photo_url=company_logo,
listing_type=listing_type,
) )
def _fetch_job_description(self, job_id): def _fetch_job_description(self, job_id):
@@ -202,7 +225,7 @@ class GlassdoorScraper(Scraper):
"variables": { "variables": {
"jl": job_id, "jl": job_id,
"queryString": "q", "queryString": "q",
"pageTypeEnum": "SERP" "pageTypeEnum": "SERP",
}, },
"query": """ "query": """
query JobDetailQuery($jl: Long!, $queryString: String, $pageTypeEnum: PageTypeEnum) { query JobDetailQuery($jl: Long!, $queryString: String, $pageTypeEnum: PageTypeEnum) {
@@ -217,28 +240,32 @@ class GlassdoorScraper(Scraper):
__typename __typename
} }
} }
""" """,
} }
] ]
res = requests.post(url, json=body, headers=self.headers) res = requests.post(url, json=body, headers=self.headers)
if res.status_code != 200: if res.status_code != 200:
return None return None
data = res.json()[0] data = res.json()[0]
desc = data['data']['jobview']['job']['description'] desc = data["data"]["jobview"]["job"]["description"]
return markdown_converter(desc) if self.scraper_input.description_format == DescriptionFormat.MARKDOWN else desc if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
desc = markdown_converter(desc)
return desc
def _get_location(self, location: str, is_remote: bool) -> (int, str): def _get_location(self, location: str, is_remote: bool) -> (int, str):
if not location or is_remote: if not location or is_remote:
return "11047", "STATE" # remote options return "11047", "STATE" # remote options
url = f"{self.base_url}/findPopularLocationAjax.htm?maxLocationsToReturn=10&term={location}" url = f"{self.base_url}/findPopularLocationAjax.htm?maxLocationsToReturn=10&term={location}"
session = create_session(self.proxy, has_retry=True)
res = self.session.get(url, headers=self.headers) res = self.session.get(url, headers=self.headers)
if res.status_code != 200: if res.status_code != 200:
if res.status_code == 429: if res.status_code == 429:
logger.error(f'429 Response - Blocked by Glassdoor for too many requests') err = f"429 Response - Blocked by Glassdoor for too many requests"
logger.error(err)
return None, None return None, None
else: else:
logger.error(f'Glassdoor response status code {res.status_code}') err = f"Glassdoor response status code {res.status_code}"
err += f" - {res.text}"
logger.error(f"Glassdoor response status code {res.status_code}")
return None, None return None, None
items = res.json() items = res.json()
@@ -249,7 +276,7 @@ class GlassdoorScraper(Scraper):
location_type = "CITY" location_type = "CITY"
elif location_type == "S": elif location_type == "S":
location_type = "STATE" location_type = "STATE"
elif location_type == 'N': elif location_type == "N":
location_type = "COUNTRY" location_type = "COUNTRY"
return int(items[0]["locationId"]), location_type return int(items[0]["locationId"]), location_type
@@ -260,7 +287,9 @@ class GlassdoorScraper(Scraper):
page_num: int, page_num: int,
cursor: str | None = None, cursor: str | None = None,
) -> str: ) -> str:
fromage = max(self.scraper_input.hours_old // 24, 1) if self.scraper_input.hours_old else None fromage = None
if self.scraper_input.hours_old:
fromage = max(self.scraper_input.hours_old // 24, 1)
filter_params = [] filter_params = []
if self.scraper_input.easy_apply: if self.scraper_input.easy_apply:
filter_params.append({"filterKey": "applicationType", "values": "1"}) filter_params.append({"filterKey": "applicationType", "values": "1"})
@@ -279,9 +308,9 @@ class GlassdoorScraper(Scraper):
"pageNumber": page_num, "pageNumber": page_num,
"pageCursor": cursor, "pageCursor": cursor,
"fromage": fromage, "fromage": fromage,
"sort": "date" "sort": "date",
}, },
"query": self.query_template "query": self.query_template,
} }
if self.scraper_input.job_type: if self.scraper_input.job_type:
payload["variables"]["filterParams"].append( payload["variables"]["filterParams"].append(


@@ -4,24 +4,21 @@ jobspy.scrapers.indeed
This module contains routines to scrape Indeed. This module contains routines to scrape Indeed.
""" """
import re
import math
import json
import requests
from typing import Any
from datetime import datetime
from bs4 import BeautifulSoup from __future__ import annotations
from bs4.element import Tag
import math
from typing import Tuple
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor, Future from concurrent.futures import ThreadPoolExecutor, Future
from .. import Scraper, ScraperInput, Site
from ..utils import ( from ..utils import (
count_urgent_words,
extract_emails_from_text, extract_emails_from_text,
create_session,
get_enum_from_job_type, get_enum_from_job_type,
markdown_converter, markdown_converter,
logger logger,
create_session,
) )
from ...jobs import ( from ...jobs import (
JobPost, JobPost,
@@ -30,24 +27,26 @@ from ...jobs import (
Location, Location,
JobResponse, JobResponse,
JobType, JobType,
DescriptionFormat DescriptionFormat,
) )
from .. import Scraper, ScraperInput, Site
class IndeedScraper(Scraper): class IndeedScraper(Scraper):
def __init__(self, proxy: str | None = None): def __init__(self, proxies: list[str] | str | None = None):
""" """
Initializes IndeedScraper with the Indeed job search url Initializes IndeedScraper with the Indeed API url
""" """
super().__init__(Site.INDEED, proxies=proxies)
self.session = create_session(proxies=self.proxies, is_tls=False)
self.scraper_input = None self.scraper_input = None
self.jobs_per_page = 25 self.jobs_per_page = 100
self.num_workers = 10 self.num_workers = 10
         self.seen_urls = set()
+        self.headers = None
+        self.api_country_code = None
         self.base_url = None
         self.api_url = "https://apis.indeed.com/graphql"
-        site = Site(Site.INDEED)
-        super().__init__(site, proxy=proxy)

     def scrape(self, scraper_input: ScraperInput) -> JobResponse:
         """
@@ -56,284 +55,292 @@ class IndeedScraper(Scraper):
         :return: job_response
         """
         self.scraper_input = scraper_input
-        job_list = self._scrape_page()
-        pages_processed = 1
+        domain, self.api_country_code = self.scraper_input.country.indeed_domain_value
+        self.base_url = f"https://{domain}.indeed.com"
+        self.headers = self.api_headers.copy()
+        self.headers["indeed-co"] = self.scraper_input.country.indeed_domain_value
+        job_list = []
+        page = 1
+        cursor = None
+        offset_pages = math.ceil(self.scraper_input.offset / 100)
+        for _ in range(offset_pages):
+            logger.info(f"Indeed skipping search page: {page}")
+            __, cursor = self._scrape_page(cursor)
+            if not __:
+                logger.info(f"Indeed found no jobs on page: {page}")
+                break

         while len(self.seen_urls) < scraper_input.results_wanted:
-            pages_to_process = math.ceil((scraper_input.results_wanted - len(self.seen_urls)) / self.jobs_per_page)
-            new_jobs = False
-            with ThreadPoolExecutor(max_workers=self.num_workers) as executor:
-                futures: list[Future] = [
-                    executor.submit(self._scrape_page, page + pages_processed)
-                    for page in range(pages_to_process)
-                ]
-                for future in futures:
-                    jobs = future.result()
-                    if jobs:
-                        job_list += jobs
-                        new_jobs = True
-                        if len(self.seen_urls) >= scraper_input.results_wanted:
-                            break
-            pages_processed += pages_to_process
-            if not new_jobs:
-                break
-        if len(self.seen_urls) > scraper_input.results_wanted:
-            job_list = job_list[:scraper_input.results_wanted]
-        return JobResponse(jobs=job_list)
+            logger.info(f"Indeed search page: {page}")
+            jobs, cursor = self._scrape_page(cursor)
+            if not jobs:
+                logger.info(f"Indeed found no jobs on page: {page}")
+                break
+            job_list += jobs
+            page += 1
+        return JobResponse(jobs=job_list[: scraper_input.results_wanted])

-    def _scrape_page(self, page: int=0) -> list[JobPost]:
+    def _scrape_page(self, cursor: str | None) -> Tuple[list[JobPost], str | None]:
         """
         Scrapes a page of Indeed for jobs with scraper_input criteria
-        :param page:
-        :return: jobs found on page, total number of jobs found for search
+        :param cursor:
+        :return: jobs found on page, next page cursor
         """
-        logger.info(f'Indeed search page: {page + 1}')
-        job_list = []
-        domain = self.scraper_input.country.indeed_domain_value
-        self.base_url = f"https://{domain}.indeed.com"
-
-        try:
-            session = create_session(self.proxy)
-            response = session.get(
-                f"{self.base_url}/m/jobs",
-                headers=self.headers,
-                params=self._add_params(page),
-            )
-            if response.status_code not in range(200, 400):
-                if response.status_code == 429:
-                    logger.error(f'429 Response - Blocked by Indeed for too many requests')
-                else:
-                    logger.error(f'Indeed response status code {response.status_code}')
-                return job_list
-        except Exception as e:
-            if "Proxy responded with" in str(e):
-                logger.error(f'Indeed: Bad proxy')
-            else:
-                logger.error(f'Indeed: {str(e)}')
-            return job_list
-
-        soup = BeautifulSoup(response.content, "html.parser")
-        if "did not match any jobs" in response.text:
-            return job_list
-
-        jobs = IndeedScraper._parse_jobs(soup)
-        if not jobs:
-            return []
-        if (
-            not jobs.get("metaData", {})
-            .get("mosaicProviderJobCardsModel", {})
-            .get("results")
-        ):
-            logger.error("Indeed - No jobs found.")
-            return []
-
-        jobs = jobs["metaData"]["mosaicProviderJobCardsModel"]["results"]
-        job_keys = [job['jobkey'] for job in jobs]
-        jobs_detailed = self._get_job_details(job_keys)
+        jobs = []
+        new_cursor = None
+        filters = self._build_filters()
+        search_term = (
+            self.scraper_input.search_term.replace('"', '\\"')
+            if self.scraper_input.search_term
+            else ""
+        )
+        query = self.job_search_query.format(
+            what=(f'what: "{search_term}"' if search_term else ""),
+            location=(
+                f'location: {{where: "{self.scraper_input.location}", radius: {self.scraper_input.distance}, radiusUnit: MILES}}'
+                if self.scraper_input.location
+                else ""
+            ),
+            dateOnIndeed=self.scraper_input.hours_old,
+            cursor=f'cursor: "{cursor}"' if cursor else "",
+            filters=filters,
+        )
+        payload = {
+            "query": query,
+        }
+        api_headers = self.api_headers.copy()
+        api_headers["indeed-co"] = self.api_country_code
+        response = self.session.post(
+            self.api_url,
+            headers=api_headers,
+            json=payload,
+            timeout=10,
+        )
+        if response.status_code != 200:
+            logger.info(
+                f"Indeed responded with status code: {response.status_code} (submit GitHub issue if this appears to be a bug)"
+            )
+            return jobs, new_cursor
+        data = response.json()
+        jobs = data["data"]["jobSearch"]["results"]
+        new_cursor = data["data"]["jobSearch"]["pageInfo"]["nextCursor"]

         with ThreadPoolExecutor(max_workers=self.num_workers) as executor:
             job_results: list[Future] = [
-                executor.submit(self._process_job, job, job_detailed['job']) for job, job_detailed in zip(jobs, jobs_detailed)
+                executor.submit(self._process_job, job["job"]) for job in jobs
             ]
-        job_list = [result.result() for result in job_results if result.result()]
-
-        return job_list
+            job_list = [result.result() for result in job_results if result.result()]
+        return job_list, new_cursor

+    def _build_filters(self):
+        """
+        Builds the filters dict for job type/is_remote. If hours_old is provided, composite filter for job_type/is_remote is not possible.
+        IndeedApply: filters: { keyword: { field: "indeedApplyScope", keys: ["DESKTOP"] } }
+        """
+        filters_str = ""
+        if self.scraper_input.hours_old:
+            filters_str = """
+            filters: {{
+              date: {{
+                field: "dateOnIndeed",
+                start: "{start}h"
+              }}
+            }}
+            """.format(
+                start=self.scraper_input.hours_old
+            )
+        elif self.scraper_input.easy_apply:
+            filters_str = """
+            filters: {
+              keyword: {
+                field: "indeedApplyScope",
+                keys: ["DESKTOP"]
+              }
+            }
+            """
+        elif self.scraper_input.job_type or self.scraper_input.is_remote:
+            job_type_key_mapping = {
+                JobType.FULL_TIME: "CF3CP",
+                JobType.PART_TIME: "75GKK",
+                JobType.CONTRACT: "NJXCK",
+                JobType.INTERNSHIP: "VDTG7",
+            }
+            keys = []
+            if self.scraper_input.job_type:
+                key = job_type_key_mapping[self.scraper_input.job_type]
+                keys.append(key)
+            if self.scraper_input.is_remote:
+                keys.append("DSQF7")
+            if keys:
+                keys_str = '", "'.join(keys)
+                filters_str = f"""
+                filters: {{
+                  composite: {{
+                    filters: [{{
+                      keyword: {{
+                        field: "attributes",
+                        keys: ["{keys_str}"]
+                      }}
+                    }}]
+                  }}
+                }}
+                """
+        return filters_str
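
# Illustrative sketch (not part of the diff): for a hypothetical remote,
# full-time search, _build_filters() above would interpolate the hardcoded
# attribute keys into the GraphQL fragment roughly as:
#
#     filters: {
#       composite: {
#         filters: [{
#           keyword: {
#             field: "attributes",
#             keys: ["CF3CP", "DSQF7"]
#           }
#         }]
#       }
#     }
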
-    def _process_job(self, job: dict, job_detailed: dict) -> JobPost | None:
-        job_url = f'{self.base_url}/m/jobs/viewjob?jk={job["jobkey"]}'
-        job_url_client = f'{self.base_url}/viewjob?jk={job["jobkey"]}'
+    def _process_job(self, job: dict) -> JobPost | None:
+        """
+        Parses the job dict into JobPost model
+        :param job: dict to parse
+        :return: JobPost if it's a new job
+        """
+        job_url = f'{self.base_url}/viewjob?jk={job["key"]}'
         if job_url in self.seen_urls:
-            return None
+            return
         self.seen_urls.add(job_url)
-        description = job_detailed['description']['html']
-        description = markdown_converter(description) if self.scraper_input.description_format == DescriptionFormat.MARKDOWN else description
-
-        job_type = self._get_job_type(job)
-        timestamp_seconds = job["pubDate"] / 1000
-        date_posted = datetime.fromtimestamp(timestamp_seconds)
-        date_posted = date_posted.strftime("%Y-%m-%d")
+        description = job["description"]["html"]
+        if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
+            description = markdown_converter(description)
+
+        job_type = self._get_job_type(job["attributes"])
+        timestamp_seconds = job["datePublished"] / 1000
+        date_posted = datetime.fromtimestamp(timestamp_seconds).strftime("%Y-%m-%d")
+        employer = job["employer"].get("dossier") if job["employer"] else None
+        employer_details = employer.get("employerDetails", {}) if employer else {}
+        rel_url = job["employer"]["relativeCompanyPageUrl"] if job["employer"] else None
         return JobPost(
-            title=job["normTitle"],
+            id=str(job["key"]),
+            title=job["title"],
             description=description,
-            company_name=job["company"],
-            company_url=f"{self.base_url}{job_detailed['employer']['relativeCompanyPageUrl']}" if job_detailed[
-                'employer'] else None,
+            company_name=job["employer"].get("name") if job.get("employer") else None,
+            company_url=(f"{self.base_url}{rel_url}" if job["employer"] else None),
+            company_url_direct=(
+                employer["links"]["corporateWebsite"] if employer else None
+            ),
             location=Location(
-                city=job.get("jobLocationCity"),
-                state=job.get("jobLocationState"),
-                country=self.scraper_input.country,
+                city=job.get("location", {}).get("city"),
+                state=job.get("location", {}).get("admin1Code"),
+                country=job.get("location", {}).get("countryCode"),
            ),
            job_type=job_type,
-            compensation=self._get_compensation(job, job_detailed),
+            compensation=self._get_compensation(job["compensation"]),
            date_posted=date_posted,
-            job_url=job_url_client,
+            job_url=job_url,
+            job_url_direct=(
+                job["recruit"].get("viewJobUrl") if job.get("recruit") else None
+            ),
            emails=extract_emails_from_text(description) if description else None,
-            num_urgent_words=count_urgent_words(description) if description else None,
-            is_remote=self._is_job_remote(job, job_detailed, description)
+            is_remote=self._is_job_remote(job, description),
+            company_addresses=(
+                employer_details["addresses"][0]
+                if employer_details.get("addresses")
+                else None
+            ),
+            company_industry=(
+                employer_details["industry"]
+                .replace("Iv1", "")
+                .replace("_", " ")
+                .title()
+                .strip()
+                if employer_details.get("industry")
+                else None
+            ),
+            company_num_employees=employer_details.get("employeesLocalizedLabel"),
+            company_revenue=employer_details.get("revenueLocalizedLabel"),
+            company_description=employer_details.get("briefDescription"),
+            ceo_name=employer_details.get("ceoName"),
+            ceo_photo_url=employer_details.get("ceoPhotoUrl"),
+            logo_photo_url=(
+                employer["images"].get("squareLogoUrl")
+                if employer and employer.get("images")
+                else None
+            ),
+            banner_photo_url=(
+                employer["images"].get("headerImageUrl")
+                if employer and employer.get("images")
+                else None
+            ),
        )
-    def _get_job_details(self, job_keys: list[str]) -> dict:
-        """
-        Queries the GraphQL endpoint for detailed job information for the given job keys.
-        """
-        job_keys_gql = '[' + ', '.join(f'"{key}"' for key in job_keys) + ']'
-        payload = dict(self.api_payload)
-        payload["query"] = self.api_payload["query"].format(job_keys_gql=job_keys_gql)
-        response = requests.post(self.api_url, headers=self.api_headers, json=payload, proxies=self.proxy)
-        if response.status_code == 200:
-            return response.json()['data']['jobData']['results']
-        else:
-            return {}
-
-    def _add_params(self, page: int) -> dict[str, str | Any]:
-        fromage = max(self.scraper_input.hours_old // 24, 1) if self.scraper_input.hours_old else None
-        params = {
-            "q": self.scraper_input.search_term,
-            "l": self.scraper_input.location if self.scraper_input.location else self.scraper_input.country.value[0].split(',')[-1],
-            "filter": 0,
-            "start": self.scraper_input.offset + page * 10,
-            "sort": "date",
-            "fromage": fromage,
-        }
-        if self.scraper_input.distance:
-            params["radius"] = self.scraper_input.distance
-        sc_values = []
-        if self.scraper_input.is_remote:
-            sc_values.append("attr(DSQF7)")
-        if self.scraper_input.job_type:
-            sc_values.append("jt({})".format(self.scraper_input.job_type.value[0]))
-        if sc_values:
-            params["sc"] = "0kf:" + "".join(sc_values) + ";"
-        if self.scraper_input.easy_apply:
-            params['iafilter'] = 1
-        return params
-
     @staticmethod
-    def _get_job_type(job: dict) -> list[JobType] | None:
+    def _get_job_type(attributes: list) -> list[JobType]:
         """
-        Parses the job to get list of job types
-        :param job:
-        :return:
+        Parses the attributes to get list of job types
+        :param attributes:
+        :return: list of JobType
         """
         job_types: list[JobType] = []
-        for taxonomy in job["taxonomyAttributes"]:
-            if taxonomy["label"] == "job-types":
-                for i in range(len(taxonomy["attributes"])):
-                    label = taxonomy["attributes"][i].get("label")
-                    if label:
-                        job_type_str = label.replace("-", "").replace(" ", "").lower()
-                        job_type = get_enum_from_job_type(job_type_str)
-                        if job_type:
-                            job_types.append(job_type)
+        for attribute in attributes:
+            job_type_str = attribute["label"].replace("-", "").replace(" ", "").lower()
+            job_type = get_enum_from_job_type(job_type_str)
+            if job_type:
+                job_types.append(job_type)
         return job_types
     @staticmethod
-    def _get_compensation(job: dict, job_detailed: dict) -> Compensation:
+    def _get_compensation(compensation: dict) -> Compensation | None:
         """
-        Parses the job to get
+        Parses the job to get compensation
         :param job:
-        :param job_detailed:
         :return: compensation object
         """
-        comp = job_detailed['compensation']['baseSalary']
-        if comp:
-            interval = IndeedScraper._get_correct_interval(comp['unitOfWork'])
-            if interval:
-                return Compensation(
-                    interval=interval,
-                    min_amount=round(comp['range'].get('min'), 2) if comp['range'].get('min') is not None else None,
-                    max_amount=round(comp['range'].get('max'), 2) if comp['range'].get('max') is not None else None,
-                    currency=job_detailed['compensation']['currencyCode']
-                )
-
-        extracted_salary = job.get("extractedSalary")
-        compensation = None
-        if extracted_salary:
-            salary_snippet = job.get("salarySnippet")
-            currency = salary_snippet.get("currency") if salary_snippet else None
-            interval = (extracted_salary.get("type"),)
-            if isinstance(interval, tuple):
-                interval = interval[0]
-            interval = interval.upper()
-            if interval in CompensationInterval.__members__:
-                compensation = Compensation(
-                    interval=CompensationInterval[interval],
-                    min_amount=int(extracted_salary.get("min")),
-                    max_amount=int(extracted_salary.get("max")),
-                    currency=currency,
-                )
-        return compensation
+        if not compensation["baseSalary"] and not compensation["estimated"]:
+            return None
+        comp = (
+            compensation["baseSalary"]
+            if compensation["baseSalary"]
+            else compensation["estimated"]["baseSalary"]
+        )
+        if not comp:
+            return None
+        interval = IndeedScraper._get_compensation_interval(comp["unitOfWork"])
+        if not interval:
+            return None
+        min_range = comp["range"].get("min")
+        max_range = comp["range"].get("max")
+        return Compensation(
+            interval=interval,
+            min_amount=int(min_range) if min_range is not None else None,
+            max_amount=int(max_range) if max_range is not None else None,
+            currency=(
+                compensation["estimated"]["currencyCode"]
+                if compensation["estimated"]
+                else compensation["currencyCode"]
+            ),
+        )
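
# Illustrative sketch (not part of the diff): _get_compensation takes the
# "compensation" object from the GraphQL response. A hypothetical payload and
# its parsed result, assuming the fields queried below:
#
#   {"baseSalary": {"unitOfWork": "YEAR", "range": {"min": 90000, "max": 120000}},
#    "estimated": None, "currencyCode": "USD"}
#   -> Compensation(interval=CompensationInterval.YEARLY, min_amount=90000,
#                   max_amount=120000, currency="USD")
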
     @staticmethod
-    def _parse_jobs(soup: BeautifulSoup) -> dict:
-        """
-        Parses the jobs from the soup object
-        :param soup:
-        :return: jobs
-        """
-
-        def find_mosaic_script() -> Tag | None:
-            script_tags = soup.find_all("script")
-            for tag in script_tags:
-                if (
-                    tag.string
-                    and "mosaic.providerData" in tag.string
-                    and "mosaic-provider-jobcards" in tag.string
-                ):
-                    return tag
-            return None
-
-        script_tag = find_mosaic_script()
-        if script_tag:
-            script_str = script_tag.string
-            pattern = r'window.mosaic.providerData\["mosaic-provider-jobcards"\]\s*=\s*({.*?});'
-            p = re.compile(pattern, re.DOTALL)
-            m = p.search(script_str)
-            if m:
-                jobs = json.loads(m.group(1).strip())
-                return jobs
-            else:
-                logger.warning(f'Indeed: Could not find mosaic provider job cards data')
-                return {}
-        else:
-            logger.warning(f"Indeed: Could not parse any jobs on the page")
-            return {}
-
-    @staticmethod
-    def _is_job_remote(job: dict, job_detailed: dict, description: str) -> bool:
-        remote_keywords = ['remote', 'work from home', 'wfh']
+    def _is_job_remote(job: dict, description: str) -> bool:
+        """
+        Searches the description, location, and attributes to check if job is remote
+        """
+        remote_keywords = ["remote", "work from home", "wfh"]
         is_remote_in_attributes = any(
-            any(keyword in attr['label'].lower() for keyword in remote_keywords)
-            for attr in job_detailed['attributes']
+            any(keyword in attr["label"].lower() for keyword in remote_keywords)
+            for attr in job["attributes"]
         )
-        is_remote_in_description = any(keyword in description.lower() for keyword in remote_keywords)
+        is_remote_in_description = any(
+            keyword in description.lower() for keyword in remote_keywords
+        )
         is_remote_in_location = any(
-            keyword in job_detailed['location']['formatted']['long'].lower()
+            keyword in job["location"]["formatted"]["long"].lower()
             for keyword in remote_keywords
         )
-        is_remote_in_taxonomy = any(
-            taxonomy["label"] == "remote" and len(taxonomy["attributes"]) > 0
-            for taxonomy in job.get("taxonomyAttributes", [])
-        )
-        return is_remote_in_attributes or is_remote_in_description or is_remote_in_location or is_remote_in_taxonomy
+        return (
+            is_remote_in_attributes or is_remote_in_description or is_remote_in_location
+        )

     @staticmethod
-    def _get_correct_interval(interval: str) -> CompensationInterval:
+    def _get_compensation_interval(interval: str) -> CompensationInterval:
         interval_mapping = {
             "DAY": "DAILY",
             "YEAR": "YEARLY",
             "HOUR": "HOURLY",
             "WEEK": "WEEKLY",
-            "MONTH": "MONTHLY"
+            "MONTH": "MONTHLY",
         }
         mapped_interval = interval_mapping.get(interval.upper(), None)
         if mapped_interval and mapped_interval in CompensationInterval.__members__:
@@ -341,43 +348,46 @@ class IndeedScraper(Scraper):
         else:
             raise ValueError(f"Unsupported interval: {interval}")
-    headers = {
-        'Host': 'www.indeed.com',
-        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
-        'sec-fetch-site': 'same-origin',
-        'sec-fetch-dest': 'document',
-        'accept-language': 'en-US,en;q=0.9',
-        'sec-fetch-mode': 'navigate',
-        'user-agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Indeed App 192.0',
-        'referer': 'https://www.indeed.com/m/jobs?q=software%20intern&l=Dallas%2C%20TX&from=serpso&rq=1&rsIdx=3',
-    }
     api_headers = {
-        'Host': 'apis.indeed.com',
-        'content-type': 'application/json',
-        'indeed-api-key': '161092c2017b5bbab13edb12461a62d5a833871e7cad6d9d475304573de67ac8',
-        'accept': 'application/json',
-        'indeed-locale': 'en-US',
-        'accept-language': 'en-US,en;q=0.9',
-        'user-agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Indeed App 193.1',
-        'indeed-app-info': 'appv=193.1; appid=com.indeed.jobsearch; osv=16.6.1; os=ios; dtype=phone',
-        'indeed-co': 'US',
+        "Host": "apis.indeed.com",
+        "content-type": "application/json",
+        "indeed-api-key": "161092c2017b5bbab13edb12461a62d5a833871e7cad6d9d475304573de67ac8",
+        "accept": "application/json",
+        "indeed-locale": "en-US",
+        "accept-language": "en-US,en;q=0.9",
+        "user-agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Indeed App 193.1",
+        "indeed-app-info": "appv=193.1; appid=com.indeed.jobsearch; osv=16.6.1; os=ios; dtype=phone",
     }
-    api_payload = {
-        "query": """
+    job_search_query = """
         query GetJobData {{
-            jobData(input: {{
-                jobKeys: {job_keys_gql}
-            }}) {{
+            jobSearch(
+              {what}
+              {location}
+              limit: 100
+              sort: DATE
+              {cursor}
+              {filters}
+            ) {{
+              pageInfo {{
+                nextCursor
+              }}
               results {{
+                trackingKey
                 job {{
+                  source {{
+                    name
+                  }}
                   key
                   title
+                  datePublished
+                  dateOnIndeed
                   description {{
                     html
                   }}
                   location {{
                     countryName
                     countryCode
+                    admin1Code
                     city
                     postalCode
                     streetAddress
@@ -387,6 +397,18 @@ class IndeedScraper(Scraper):
                   }}
                 }}
                 compensation {{
+                  estimated {{
+                    currencyCode
+                    baseSalary {{
+                      unitOfWork
+                      range {{
+                        ... on Range {{
+                          min
+                          max
+                        }}
+                      }}
+                    }}
+                  }}
                   baseSalary {{
                     unitOfWork
                     range {{
@@ -399,10 +421,30 @@ class IndeedScraper(Scraper):
                   currencyCode
                 }}
                 attributes {{
+                  key
                   label
                 }}
                 employer {{
                   relativeCompanyPageUrl
+                  name
+                  dossier {{
+                    employerDetails {{
+                      addresses
+                      industry
+                      employeesLocalizedLabel
+                      revenueLocalizedLabel
+                      briefDescription
+                      ceoName
+                      ceoPhotoUrl
+                    }}
+                    images {{
+                      headerImageUrl
+                      squareLogoUrl
+                    }}
+                    links {{
+                      corporateWebsite
+                    }}
+                  }}
                 }}
                 recruit {{
                   viewJobUrl
@@ -414,4 +456,3 @@ class IndeedScraper(Scraper):
             }}
         }}
        """
-    }
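
# Illustrative sketch (not part of the diff): how _scrape_page fills in and
# posts job_search_query. The search term, location, and variable names here
# (query, session) are hypothetical.
#
# query = IndeedScraper.job_search_query.format(
#     what='what: "software engineer"',
#     location='location: {where: "Dallas, TX", radius: 50, radiusUnit: MILES}',
#     dateOnIndeed=None,
#     cursor="",
#     filters="",
# )
# response = session.post(
#     "https://apis.indeed.com/graphql",
#     headers=api_headers,   # the mobile-app headers defined above
#     json={"query": query},
#     timeout=10,
# )
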

View File

@@ -4,19 +4,22 @@ jobspy.scrapers.linkedin
 This module contains routines to scrape LinkedIn.
 """
+from __future__ import annotations
+
 import time
 import random
+import regex as re
 from typing import Optional
 from datetime import datetime
-from threading import Lock

 from bs4.element import Tag
 from bs4 import BeautifulSoup
-from urllib.parse import urlparse, urlunparse
+from urllib.parse import urlparse, urlunparse, unquote

 from .. import Scraper, ScraperInput, Site
 from ..exceptions import LinkedInException
-from ..utils import create_session
+from ..utils import create_session, remove_attributes
 from ...jobs import (
     JobPost,
     Location,
@@ -24,15 +27,14 @@ from ...jobs import (
     JobType,
     Country,
     Compensation,
-    DescriptionFormat
+    DescriptionFormat,
 )
 from ..utils import (
     logger,
-    count_urgent_words,
     extract_emails_from_text,
     get_enum_from_job_type,
     currency_parser,
-    markdown_converter
+    markdown_converter,
 )
@@ -42,13 +44,22 @@ class LinkedInScraper(Scraper):
     band_delay = 4
     jobs_per_page = 25

-    def __init__(self, proxy: Optional[str] = None):
+    def __init__(self, proxies: list[str] | str | None = None):
         """
         Initializes LinkedInScraper with the LinkedIn job search url
         """
-        super().__init__(Site(Site.LINKEDIN), proxy=proxy)
+        super().__init__(Site.LINKEDIN, proxies=proxies)
+        self.session = create_session(
+            proxies=self.proxies,
+            is_tls=False,
+            has_retry=True,
+            delay=5,
+            clear_cookies=True,
+        )
+        self.session.headers.update(self.headers)
         self.scraper_input = None
         self.country = "worldwide"
+        self.job_url_direct_regex = re.compile(r'(?<=\?url=)[^"]+')

     def scrape(self, scraper_input: ScraperInput) -> JobResponse:
         """
@@ -58,55 +69,62 @@ class LinkedInScraper(Scraper):
         """
         self.scraper_input = scraper_input
         job_list: list[JobPost] = []
-        seen_urls = set()
-        url_lock = Lock()
-        page = scraper_input.offset // 25 + 25 if scraper_input.offset else 0
+        seen_ids = set()
+        page = scraper_input.offset // 10 * 10 if scraper_input.offset else 0
+        request_count = 0
         seconds_old = (
-            scraper_input.hours_old * 3600
-            if scraper_input.hours_old
-            else None
+            scraper_input.hours_old * 3600 if scraper_input.hours_old else None
+        )
+        continue_search = (
+            lambda: len(job_list) < scraper_input.results_wanted and page < 1000
         )
-        continue_search = lambda: len(job_list) < scraper_input.results_wanted and page < 1000

         while continue_search():
-            logger.info(f'LinkedIn search page: {page // 25 + 1}')
-            session = create_session(is_tls=False, has_retry=True, delay=5)
+            request_count += 1
+            logger.info(f"LinkedIn search page: {request_count}")
             params = {
                 "keywords": scraper_input.search_term,
                 "location": scraper_input.location,
                 "distance": scraper_input.distance,
                 "f_WT": 2 if scraper_input.is_remote else None,
-                "f_JT": self.job_type_code(scraper_input.job_type)
-                if scraper_input.job_type
-                else None,
+                "f_JT": (
+                    self.job_type_code(scraper_input.job_type)
+                    if scraper_input.job_type
+                    else None
+                ),
                 "pageNum": 0,
-                "start": page + scraper_input.offset,
+                "start": page,
                 "f_AL": "true" if scraper_input.easy_apply else None,
-                "f_C": ','.join(map(str, scraper_input.linkedin_company_ids)) if scraper_input.linkedin_company_ids else None,
+                "f_C": (
+                    ",".join(map(str, scraper_input.linkedin_company_ids))
+                    if scraper_input.linkedin_company_ids
+                    else None
+                ),
             }
             if seconds_old is not None:
                 params["f_TPR"] = f"r{seconds_old}"

             params = {k: v for k, v in params.items() if v is not None}
             try:
-                response = session.get(
+                response = self.session.get(
                     f"{self.base_url}/jobs-guest/jobs/api/seeMoreJobPostings/search?",
                     params=params,
-                    allow_redirects=True,
-                    proxies=self.proxy,
-                    headers=self.headers,
                     timeout=10,
                 )
                 if response.status_code not in range(200, 400):
                     if response.status_code == 429:
-                        logger.error(f'429 Response - Blocked by LinkedIn for too many requests')
+                        err = (
+                            f"429 Response - Blocked by LinkedIn for too many requests"
+                        )
                     else:
-                        logger.error(f'LinkedIn response status code {response.status_code}')
+                        err = f"LinkedIn response status code {response.status_code}"
+                        err += f" - {response.text}"
+                    logger.error(err)
                     return JobResponse(jobs=job_list)
             except Exception as e:
                 if "Proxy responded with" in str(e):
-                    logger.error(f'LinkedIn: Bad proxy')
+                    logger.error(f"LinkedIn: Bad proxy")
                 else:
-                    logger.error(f'LinkedIn: {str(e)}')
+                    logger.error(f"LinkedIn: {str(e)}")
                 return JobResponse(jobs=job_list)

             soup = BeautifulSoup(response.text, "html.parser")
@@ -115,19 +133,18 @@ class LinkedInScraper(Scraper):
                 return JobResponse(jobs=job_list)

             for job_card in job_cards:
-                job_url = None
                 href_tag = job_card.find("a", class_="base-card__full-link")
                 if href_tag and "href" in href_tag.attrs:
                     href = href_tag.attrs["href"].split("?")[0]
                     job_id = href.split("-")[-1]
-                    job_url = f"{self.base_url}/jobs/view/{job_id}"
-
-                with url_lock:
-                    if job_url in seen_urls:
+                    if job_id in seen_ids:
                         continue
-                    seen_urls.add(job_url)
+                    seen_ids.add(job_id)

                 try:
-                    job_post = self._process_job(job_card, job_url, scraper_input.linkedin_fetch_description)
+                    fetch_desc = scraper_input.linkedin_fetch_description
+                    job_post = self._process_job(job_card, job_id, fetch_desc)
                     if job_post:
                         job_list.append(job_post)
                     if not continue_search():
@@ -137,13 +154,15 @@ class LinkedInScraper(Scraper):

             if continue_search():
                 time.sleep(random.uniform(self.delay, self.delay + self.band_delay))
-                page += self.jobs_per_page
+                page += len(job_list)

         job_list = job_list[: scraper_input.results_wanted]
         return JobResponse(jobs=job_list)

-    def _process_job(self, job_card: Tag, job_url: str, full_descr: bool) -> Optional[JobPost]:
-        salary_tag = job_card.find('span', class_='job-search-card__salary-info')
+    def _process_job(
+        self, job_card: Tag, job_id: str, full_descr: bool
+    ) -> Optional[JobPost]:
+        salary_tag = job_card.find("span", class_="job-search-card__salary-info")

         compensation = None
         if salary_tag:
@@ -179,49 +198,51 @@ class LinkedInScraper(Scraper):
             if metadata_card
             else None
         )
-        date_posted = description = job_type = None
+        date_posted = None
         if datetime_tag and "datetime" in datetime_tag.attrs:
             datetime_str = datetime_tag["datetime"]
             try:
                 date_posted = datetime.strptime(datetime_str, "%Y-%m-%d")
             except:
                 date_posted = None
-        benefits_tag = job_card.find("span", class_="result-benefits__text")
-        benefits = " ".join(benefits_tag.get_text().split()) if benefits_tag else None
+        job_details = {}
         if full_descr:
-            description, job_type = self._get_job_description(job_url)
+            job_details = self._get_job_details(job_id)

         return JobPost(
+            id=job_id,
             title=title,
             company_name=company,
             company_url=company_url,
             location=location,
             date_posted=date_posted,
-            job_url=job_url,
+            job_url=f"{self.base_url}/jobs/view/{job_id}",
             compensation=compensation,
-            benefits=benefits,
-            job_type=job_type,
-            description=description,
-            emails=extract_emails_from_text(description) if description else None,
-            num_urgent_words=count_urgent_words(description) if description else None,
+            job_type=job_details.get("job_type"),
+            job_level=job_details.get("job_level", "").lower(),
+            company_industry=job_details.get("company_industry"),
+            description=job_details.get("description"),
+            job_url_direct=job_details.get("job_url_direct"),
+            emails=extract_emails_from_text(job_details.get("description")),
+            logo_photo_url=job_details.get("logo_photo_url"),
+            job_function=job_details.get("job_function"),
         )
-    def _get_job_description(
-        self, job_page_url: str
-    ) -> tuple[None, None] | tuple[str | None, tuple[str | None, JobType | None]]:
+    def _get_job_details(self, job_id: str) -> dict:
         """
-        Retrieves job description by going to the job page url
+        Retrieves job description and other job details by going to the job page url
         :param job_page_url:
-        :return: description or None
+        :return: dict
         """
         try:
-            session = create_session(is_tls=False, has_retry=True)
-            response = session.get(job_page_url, headers=self.headers, timeout=5, proxies=self.proxy)
+            response = self.session.get(
+                f"{self.base_url}/jobs-guest/jobs/api/jobPosting/{job_id}", timeout=5
+            )
             response.raise_for_status()
         except:
-            return None, None
-        if response.url == "https://www.linkedin.com/signup":
-            return None, None
+            return {}
+        if "linkedin.com/signup" in response.url:
+            return {}

         soup = BeautifulSoup(response.text, "html.parser")
         div_content = soup.find(
@@ -229,15 +250,33 @@ class LinkedInScraper(Scraper):
         )
         description = None
         if div_content is not None:
-
-            def remove_attributes(tag):
-                for attr in list(tag.attrs):
-                    del tag[attr]
-                return tag
-
             div_content = remove_attributes(div_content)
             description = div_content.prettify(formatter="html")
             if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
                 description = markdown_converter(description)
-        return description, self._parse_job_type(soup)
+
+        h3_tag = soup.find(
+            "h3", text=lambda text: text and "Job function" in text.strip()
+        )
+        job_function = None
+        if h3_tag:
+            job_function_span = h3_tag.find_next(
+                "span", class_="description__job-criteria-text"
+            )
+            if job_function_span:
+                job_function = job_function_span.text.strip()
+        return {
+            "description": description,
+            "job_level": self._parse_job_level(soup),
+            "company_industry": self._parse_company_industry(soup),
+            "job_type": self._parse_job_type(soup),
+            "job_url_direct": self._parse_job_url_direct(soup),
+            "logo_photo_url": soup.find("img", {"class": "artdeco-entity-image"}).get(
+                "data-delayed-url"
+            ),
+            "job_function": job_function,
+        }
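
# Illustrative sketch (not part of the diff): the dict returned above feeds
# the JobPost fields in _process_job. All values here are hypothetical.
#
# {
#     "description": "<div>...</div>",
#     "job_level": "Mid-Senior level",
#     "company_industry": "Software Development",
#     "job_type": [JobType.FULL_TIME],
#     "job_url_direct": "https://jobs.example.com/apply/123",  # hypothetical URL
#     "logo_photo_url": "https://media.licdn.com/...",
#     "job_function": "Engineering",
# }
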
     def _get_location(self, metadata_card: Optional[Tag]) -> Location:
         """
@@ -261,11 +300,8 @@ class LinkedInScraper(Scraper):
             )
         elif len(parts) == 3:
             city, state, country = parts
-            location = Location(
-                city=city,
-                state=state,
-                country=Country.from_string(country)
-            )
+            country = Country.from_string(country)
+            location = Location(city=city, state=state, country=country)
         return location

     @staticmethod
@@ -293,6 +329,69 @@ class LinkedInScraper(Scraper):
         return [get_enum_from_job_type(employment_type)] if employment_type else []

+    @staticmethod
+    def _parse_job_level(soup_job_level: BeautifulSoup) -> str | None:
+        """
+        Gets the job level from job page
+        :param soup_job_level:
+        :return: str
+        """
+        h3_tag = soup_job_level.find(
+            "h3",
+            class_="description__job-criteria-subheader",
+            string=lambda text: "Seniority level" in text,
+        )
+        job_level = None
+        if h3_tag:
+            job_level_span = h3_tag.find_next_sibling(
+                "span",
+                class_="description__job-criteria-text description__job-criteria-text--criteria",
+            )
+            if job_level_span:
+                job_level = job_level_span.get_text(strip=True)
+        return job_level
+
+    @staticmethod
+    def _parse_company_industry(soup_industry: BeautifulSoup) -> str | None:
+        """
+        Gets the company industry from job page
+        :param soup_industry:
+        :return: str
+        """
+        h3_tag = soup_industry.find(
+            "h3",
+            class_="description__job-criteria-subheader",
+            string=lambda text: "Industries" in text,
+        )
+        industry = None
+        if h3_tag:
+            industry_span = h3_tag.find_next_sibling(
+                "span",
+                class_="description__job-criteria-text description__job-criteria-text--criteria",
+            )
+            if industry_span:
+                industry = industry_span.get_text(strip=True)
+        return industry
+
+    def _parse_job_url_direct(self, soup: BeautifulSoup) -> str | None:
+        """
+        Gets the job url direct from job page
+        :param soup:
+        :return: str
+        """
+        job_url_direct = None
+        job_url_direct_content = soup.find("code", id="applyUrl")
+        if job_url_direct_content:
+            job_url_direct_match = self.job_url_direct_regex.search(
+                job_url_direct_content.decode_contents().strip()
+            )
+            if job_url_direct_match:
+                job_url_direct = unquote(job_url_direct_match.group())
+
+        return job_url_direct
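
# Illustrative sketch (not part of the diff): the regex (?<=\?url=)[^"]+ pulls
# the percent-encoded external apply link out of the <code id="applyUrl">
# element, and unquote() decodes it. Hypothetical input and output:
#
#   <code id="applyUrl">"https://www.linkedin.com/...?url=https%3A%2F%2Fjobs.example.com%2Fapply%2F123"</code>
#   -> unquote("https%3A%2F%2Fjobs.example.com%2Fapply%2F123")
#   -> "https://jobs.example.com/apply/123"
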
     @staticmethod
     def job_type_code(job_type_enum: JobType) -> str:
         return {

View File

@@ -1,36 +1,142 @@
import logging from __future__ import annotations
import re
import re
import logging
from itertools import cycle
import numpy as np
import requests import requests
import tls_client import tls_client
import numpy as np
from markdownify import markdownify as md from markdownify import markdownify as md
from requests.adapters import HTTPAdapter, Retry from requests.adapters import HTTPAdapter, Retry
from ..jobs import JobType from ..jobs import CompensationInterval, JobType
logger = logging.getLogger("JobSpy") logger = logging.getLogger("JobSpy")
logger.propagate = False logger.propagate = False
if not logger.handlers: if not logger.handlers:
logger.setLevel(logging.INFO) logger.setLevel(logging.INFO)
console_handler = logging.StreamHandler() console_handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') format = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
formatter = logging.Formatter(format)
console_handler.setFormatter(formatter) console_handler.setFormatter(formatter)
logger.addHandler(console_handler) logger.addHandler(console_handler)
def count_urgent_words(description: str) -> int: class RotatingProxySession:
""" def __init__(self, proxies=None):
Count the number of urgent words or phrases in a job description. if isinstance(proxies, str):
""" self.proxy_cycle = cycle([self.format_proxy(proxies)])
urgent_patterns = re.compile( elif isinstance(proxies, list):
r"\burgen(t|cy)|\bimmediate(ly)?\b|start asap|\bhiring (now|immediate(ly)?)\b", self.proxy_cycle = (
re.IGNORECASE, cycle([self.format_proxy(proxy) for proxy in proxies])
if proxies
else None
) )
matches = re.findall(urgent_patterns, description) else:
count = len(matches) self.proxy_cycle = None
return count @staticmethod
def format_proxy(proxy):
"""Utility method to format a proxy string into a dictionary."""
if proxy.startswith("http://") or proxy.startswith("https://"):
return {"http": proxy, "https": proxy}
return {"http": f"http://{proxy}", "https": f"http://{proxy}"}
+
+class RequestsRotating(RotatingProxySession, requests.Session):
+    def __init__(self, proxies=None, has_retry=False, delay=1, clear_cookies=False):
+        RotatingProxySession.__init__(self, proxies=proxies)
+        requests.Session.__init__(self)
+        self.clear_cookies = clear_cookies
+        self.allow_redirects = True
+        self.setup_session(has_retry, delay)
+
+    def setup_session(self, has_retry, delay):
+        if has_retry:
+            retries = Retry(
+                total=3,
+                connect=3,
+                status=3,
+                status_forcelist=[500, 502, 503, 504, 429],
+                backoff_factor=delay,
+            )
+            adapter = HTTPAdapter(max_retries=retries)
+            self.mount("http://", adapter)
+            self.mount("https://", adapter)
+
+    def request(self, method, url, **kwargs):
+        if self.clear_cookies:
+            self.cookies.clear()
+
+        if self.proxy_cycle:
+            next_proxy = next(self.proxy_cycle)
+            if next_proxy["http"] != "http://localhost":
+                self.proxies = next_proxy
+        else:
+            self.proxies = {}
+        return requests.Session.request(self, method, url, **kwargs)
+
+
+class TLSRotating(RotatingProxySession, tls_client.Session):
+    def __init__(self, proxies=None):
+        RotatingProxySession.__init__(self, proxies=proxies)
+        tls_client.Session.__init__(self, random_tls_extension_order=True)
+
+    def execute_request(self, *args, **kwargs):
+        if self.proxy_cycle:
+            next_proxy = next(self.proxy_cycle)
+            if next_proxy["http"] != "http://localhost":
+                self.proxies = next_proxy
+        else:
+            self.proxies = {}
+        response = tls_client.Session.execute_request(self, *args, **kwargs)
+        response.ok = response.status_code in range(200, 400)
+        return response
+
+
+def create_session(
+    *,
+    proxies: dict | str | None = None,
+    is_tls: bool = True,
+    has_retry: bool = False,
+    delay: int = 1,
+    clear_cookies: bool = False,
+) -> requests.Session:
+    """
+    Creates a requests session with optional tls, proxy, and retry settings.
+    :return: A session object
+    """
+    if is_tls:
+        session = TLSRotating(proxies=proxies)
+    else:
+        session = RequestsRotating(
+            proxies=proxies,
+            has_retry=has_retry,
+            delay=delay,
+            clear_cookies=clear_cookies,
+        )
+    return session
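
# Illustrative sketch (not part of the diff): typical call sites, mirroring
# how the scrapers above construct their sessions. Proxy values and URLs are
# hypothetical.
#
# session = create_session(proxies=["user:pass@1.2.3.4:8080"], is_tls=True)
# resp = session.get("https://www.example.com")  # TLS client, proxy rotated per request
#
# session = create_session(proxies=None, is_tls=False, has_retry=True, delay=5)
# resp = session.get("https://www.example.com")  # plain requests.Session with retries
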
+
+def set_logger_level(verbose: int = 2):
+    """
+    Adjusts the logger's level. This function allows the logging level to be changed at runtime.
+    Parameters:
+    - verbose: int {0, 1, 2} (default=2, all logs)
+    """
+    if verbose is None:
+        return
+    level_name = {2: "INFO", 1: "WARNING", 0: "ERROR"}.get(verbose, "INFO")
+    level = getattr(logging, level_name.upper(), None)
+    if level is not None:
+        logger.setLevel(level)
+    else:
+        raise ValueError(f"Invalid log level: {level_name}")

 def markdown_converter(description_html: str):
@@ -47,32 +153,6 @@ def extract_emails_from_text(text: str) -> list[str] | None:
     return email_regex.findall(text)

-def create_session(proxy: dict | None = None, is_tls: bool = True, has_retry: bool = False, delay: int = 1) -> requests.Session:
-    """
-    Creates a requests session with optional tls, proxy, and retry settings.
-    :return: A session object
-    """
-    if is_tls:
-        session = tls_client.Session(random_tls_extension_order=True)
-        session.proxies = proxy
-    else:
-        session = requests.Session()
-        session.allow_redirects = True
-        if proxy:
-            session.proxies.update(proxy)
-        if has_retry:
-            retries = Retry(total=3,
-                            connect=3,
-                            status=3,
-                            status_forcelist=[500, 502, 503, 504, 429],
-                            backoff_factor=delay)
-            adapter = HTTPAdapter(max_retries=retries)
-            session.mount('http://', adapter)
-            session.mount('https://', adapter)
-    return session

 def get_enum_from_job_type(job_type_str: str) -> JobType | None:
     """
     Given a string, returns the corresponding JobType enum member if a match is found.
@@ -87,17 +167,84 @@ def get_enum_from_job_type(job_type_str: str) -> JobType | None:

 def currency_parser(cur_str):
     # Remove any non-numerical characters
     # except for ',' '.' or '-' (e.g. EUR)
-    cur_str = re.sub("[^-0-9.,]", '', cur_str)
+    cur_str = re.sub("[^-0-9.,]", "", cur_str)
     # Remove any 000s separators (either , or .)
-    cur_str = re.sub("[.,]", '', cur_str[:-3]) + cur_str[-3:]
+    cur_str = re.sub("[.,]", "", cur_str[:-3]) + cur_str[-3:]

-    if '.' in list(cur_str[-3:]):
+    if "." in list(cur_str[-3:]):
         num = float(cur_str)
-    elif ',' in list(cur_str[-3:]):
-        num = float(cur_str.replace(',', '.'))
+    elif "," in list(cur_str[-3:]):
+        num = float(cur_str.replace(",", "."))
     else:
         num = float(cur_str)

     return np.round(num, 2)
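
# Illustrative sketch (not part of the diff): currency_parser strips currency
# symbols and thousands separators, treating a trailing ",dd" or ".dd" as the
# decimal part.
#
#   currency_parser("$1,000.50")  -> 1000.5
#   currency_parser("1.000,50")   -> 1000.5
#   currency_parser("12000 EUR")  -> 12000.0
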
+
+def remove_attributes(tag):
+    for attr in list(tag.attrs):
+        del tag[attr]
+    return tag
+
+
+def extract_salary(
+    salary_str,
+    lower_limit=1000,
+    upper_limit=700000,
+    hourly_threshold=350,
+    monthly_threshold=30000,
+    enforce_annual_salary=False,
+):
+    if not salary_str:
+        return None, None, None, None
+
+    min_max_pattern = r"\$(\d+(?:,\d+)?(?:\.\d+)?)([kK]?)\s*[-—–]\s*(?:\$)?(\d+(?:,\d+)?(?:\.\d+)?)([kK]?)"
+
+    def to_int(s):
+        return int(float(s.replace(",", "")))
+
+    def convert_hourly_to_annual(hourly_wage):
+        return hourly_wage * 2080
+
+    def convert_monthly_to_annual(monthly_wage):
+        return monthly_wage * 12
+
+    match = re.search(min_max_pattern, salary_str)
+
+    if match:
+        min_salary = to_int(match.group(1))
+        max_salary = to_int(match.group(3))
+        # Handle 'k' suffix for min and max salaries independently
+        if "k" in match.group(2).lower() or "k" in match.group(4).lower():
+            min_salary *= 1000
+            max_salary *= 1000
+
+        # Convert to annual if less than the hourly threshold
+        if min_salary < hourly_threshold:
+            interval = CompensationInterval.HOURLY.value
+            annual_min_salary = convert_hourly_to_annual(min_salary)
+            if max_salary < hourly_threshold:
+                annual_max_salary = convert_hourly_to_annual(max_salary)
+
+        elif min_salary < monthly_threshold:
+            interval = CompensationInterval.MONTHLY.value
+            annual_min_salary = convert_monthly_to_annual(min_salary)
+            if max_salary < monthly_threshold:
+                annual_max_salary = convert_monthly_to_annual(max_salary)
+
+        else:
+            interval = CompensationInterval.YEARLY.value
+            annual_min_salary = min_salary
+            annual_max_salary = max_salary
+
+        # Ensure salary range is within specified limits
+        if (
+            lower_limit <= annual_min_salary <= upper_limit
+            and lower_limit <= annual_max_salary <= upper_limit
+            and annual_min_salary < annual_max_salary
+        ):
+            if enforce_annual_salary:
+                return interval, annual_min_salary, annual_max_salary, "USD"
+            else:
+                return interval, min_salary, max_salary, "USD"
+    return None, None, None, None
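
# Illustrative sketch (not part of the diff): extract_salary returns
# (interval, min, max, currency) or all Nones when no dollar range is found.
# Assumes CompensationInterval values are the lowercase interval names.
#
#   extract_salary("$30 - $40 an hour")
#   -> ("hourly", 30, 40, "USD")   # annualized only if enforce_annual_salary=True
#   extract_salary("$90k - $120k")
#   -> ("yearly", 90000, 120000, "USD")
#   extract_salary("competitive pay")
#   -> (None, None, None, None)
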

View File

@@ -4,20 +4,27 @@ jobspy.scrapers.ziprecruiter
 This module contains routines to scrape ZipRecruiter.
 """
+from __future__ import annotations
+
+import json
 import math
+import re
 import time
 from datetime import datetime
 from typing import Optional, Tuple, Any
 from concurrent.futures import ThreadPoolExecutor

+from bs4 import BeautifulSoup
+
 from .. import Scraper, ScraperInput, Site
 from ..utils import (
     logger,
-    count_urgent_words,
     extract_emails_from_text,
     create_session,
-    markdown_converter
+    markdown_converter,
+    remove_attributes,
 )
 from ...jobs import (
     JobPost,
@@ -26,7 +33,7 @@ from ...jobs import (
     JobResponse,
     JobType,
     Country,
-    DescriptionFormat
+    DescriptionFormat,
 )
@@ -34,14 +41,15 @@ class ZipRecruiterScraper(Scraper):
     base_url = "https://www.ziprecruiter.com"
     api_url = "https://api.ziprecruiter.com"

-    def __init__(self, proxy: Optional[str] = None):
+    def __init__(self, proxies: list[str] | str | None = None):
         """
         Initializes ZipRecruiterScraper with the ZipRecruiter job search url
         """
+        super().__init__(Site.ZIP_RECRUITER, proxies=proxies)
         self.scraper_input = None
-        self.session = create_session(proxy)
+        self.session = create_session(proxies=proxies)
         self._get_cookies()
-        super().__init__(Site.ZIP_RECRUITER, proxy=proxy)

         self.delay = 5
         self.jobs_per_page = 20
@@ -63,7 +71,7 @@ class ZipRecruiterScraper(Scraper):
                 break
             if page > 1:
                 time.sleep(self.delay)
-            logger.info(f'ZipRecruiter search page: {page}')
+            logger.info(f"ZipRecruiter search page: {page}")
             jobs_on_page, continue_token = self._find_jobs_in_page(
                 scraper_input, continue_token
             )
@@ -89,25 +97,24 @@ class ZipRecruiterScraper(Scraper):
         if continue_token:
             params["continue_from"] = continue_token
         try:
-            res= self.session.get(
-                f"{self.api_url}/jobs-app/jobs",
-                headers=self.headers,
-                params=params
+            res = self.session.get(
+                f"{self.api_url}/jobs-app/jobs", headers=self.headers, params=params
             )
             if res.status_code not in range(200, 400):
                 if res.status_code == 429:
-                    logger.error(f'429 Response - Blocked by ZipRecruiter for too many requests')
+                    err = "429 Response - Blocked by ZipRecruiter for too many requests"
                 else:
-                    logger.error(f'ZipRecruiter response status code {res.status_code}')
+                    err = f"ZipRecruiter response status code {res.status_code}"
+                    err += f" with response: {res.text}"  # ZipRecruiter likely not available in EU
+                logger.error(err)
                 return jobs_list, ""
         except Exception as e:
             if "Proxy responded with" in str(e):
-                logger.error(f'Indeed: Bad proxy')
+                logger.error(f"Indeed: Bad proxy")
             else:
-                logger.error(f'Indeed: {str(e)}')
+                logger.error(f"Indeed: {str(e)}")
             return jobs_list, ""

         res_data = res.json()
         jobs_list = res_data.get("jobs", [])
         next_continue_token = res_data.get("continue", None)
@@ -128,7 +135,12 @@ class ZipRecruiterScraper(Scraper):
         self.seen_urls.add(job_url)

         description = job.get("job_description", "").strip()
-        description = markdown_converter(description) if self.scraper_input.description_format == DescriptionFormat.MARKDOWN else description
+        listing_type = job.get("buyer_type", "")
+        description = (
+            markdown_converter(description)
+            if self.scraper_input.description_format == DescriptionFormat.MARKDOWN
+            else description
+        )
         company = job.get("hiring_company", {}).get("name")
         country_value = "usa" if job.get("job_country") == "US" else "canada"
         country_enum = Country.from_string(country_value)
@@ -139,34 +151,69 @@ class ZipRecruiterScraper(Scraper):
         job_type = self._get_job_type_enum(
             job.get("employment_type", "").replace("_", "").lower()
         )
-        date_posted = datetime.fromisoformat(job['posted_time'].rstrip("Z")).date()
+        date_posted = datetime.fromisoformat(job["posted_time"].rstrip("Z")).date()
+        comp_interval = job.get("compensation_interval")
+        comp_interval = "yearly" if comp_interval == "annual" else comp_interval
+        comp_min = int(job["compensation_min"]) if "compensation_min" in job else None
+        comp_max = int(job["compensation_max"]) if "compensation_max" in job else None
+        comp_currency = job.get("compensation_currency")
+        description_full, job_url_direct = self._get_descr(job_url)

         return JobPost(
+            id=str(job["listing_key"]),
             title=title,
             company_name=company,
             location=location,
             job_type=job_type,
             compensation=Compensation(
-                interval="yearly"
-                if job.get("compensation_interval") == "annual"
-                else job.get("compensation_interval"),
-                min_amount=int(job["compensation_min"])
-                if "compensation_min" in job
-                else None,
-                max_amount=int(job["compensation_max"])
-                if "compensation_max" in job
-                else None,
-                currency=job.get("compensation_currency"),
+                interval=comp_interval,
+                min_amount=comp_min,
+                max_amount=comp_max,
+                currency=comp_currency,
             ),
             date_posted=date_posted,
             job_url=job_url,
-            description=description,
+            description=description_full if description_full else description,
             emails=extract_emails_from_text(description) if description else None,
-            num_urgent_words=count_urgent_words(description) if description else None,
+            job_url_direct=job_url_direct,
+            listing_type=listing_type,
         )
+    def _get_descr(self, job_url):
+        res = self.session.get(job_url, headers=self.headers, allow_redirects=True)
+        description_full = job_url_direct = None
+        if res.ok:
+            soup = BeautifulSoup(res.text, "html.parser")
+            job_descr_div = soup.find("div", class_="job_description")
+            company_descr_section = soup.find("section", class_="company_description")
+            job_description_clean = (
+                remove_attributes(job_descr_div).prettify(formatter="html")
+                if job_descr_div
+                else ""
+            )
+            company_description_clean = (
+                remove_attributes(company_descr_section).prettify(formatter="html")
+                if company_descr_section
+                else ""
+            )
+            description_full = job_description_clean + company_description_clean
+            script_tag = soup.find("script", type="application/json")
+            if script_tag:
+                job_json = json.loads(script_tag.string)
+                job_url_val = job_json["model"]["saveJobURL"]
+                m = re.search(r"job_url=(.+)", job_url_val)
+                if m:
+                    job_url_direct = m.group(1)
+
+            if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
+                description_full = markdown_converter(description_full)
+
+        return description_full, job_url_direct
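
# Illustrative sketch (not part of the diff): the job page embeds a JSON model
# whose saveJobURL carries the outbound link; the regex pulls everything after
# "job_url=". The JSON value below is hypothetical.
#
#   {"model": {"saveJobURL": "/jobs-app/save-job?job_url=https://example.com/apply/123"}}
#   -> job_url_direct == "https://example.com/apply/123"
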
     def _get_cookies(self):
-        data="event_type=session&logged_in=false&number_of_retry=1&property=model%3AiPhone&property=os%3AiOS&property=locale%3Aen_us&property=app_build_number%3A4734&property=app_version%3A91.0&property=manufacturer%3AApple&property=timestamp%3A2024-01-12T12%3A04%3A42-06%3A00&property=screen_height%3A852&property=os_version%3A16.6.1&property=source%3Ainstall&property=screen_width%3A393&property=device_model%3AiPhone%2014%20Pro&property=brand%3AApple"
-        self.session.post(f"{self.api_url}/jobs-app/event", data=data, headers=self.headers)
+        data = "event_type=session&logged_in=false&number_of_retry=1&property=model%3AiPhone&property=os%3AiOS&property=locale%3Aen_us&property=app_build_number%3A4734&property=app_version%3A91.0&property=manufacturer%3AApple&property=timestamp%3A2024-01-12T12%3A04%3A42-06%3A00&property=screen_height%3A852&property=os_version%3A16.6.1&property=source%3Ainstall&property=screen_width%3A393&property=device_model%3AiPhone%2014%20Pro&property=brand%3AApple"
+        url = f"{self.api_url}/jobs-app/event"
+        self.session.post(url, data=data, headers=self.headers)

     @staticmethod
     def _get_job_type_enum(job_type_str: str) -> list[JobType] | None:
@@ -182,16 +229,13 @@ class ZipRecruiterScraper(Scraper):
             "location": scraper_input.location,
         }
         if scraper_input.hours_old:
-            fromage = max(scraper_input.hours_old // 24, 1) if scraper_input.hours_old else None
-            params['days'] = fromage
-        job_type_map = {
-            JobType.FULL_TIME: 'full_time',
-            JobType.PART_TIME: 'part_time'
-        }
+            params["days"] = max(scraper_input.hours_old // 24, 1)
+        job_type_map = {JobType.FULL_TIME: "full_time", JobType.PART_TIME: "part_time"}
         if scraper_input.job_type:
-            params['employment_type'] = job_type_map[scraper_input.job_type] if scraper_input.job_type in job_type_map else scraper_input.job_type.value[0]
+            job_type = scraper_input.job_type
+            params["employment_type"] = job_type_map.get(job_type, job_type.value[0])
         if scraper_input.easy_apply:
-            params['zipapply'] = 1
+            params["zipapply"] = 1
         if scraper_input.is_remote:
             params["remote"] = 1
         if scraper_input.distance:

View File

@@ -4,11 +4,15 @@ import pandas as pd

 def test_all():
     result = scrape_jobs(
-        site_name=["linkedin", "indeed", "zip_recruiter", "glassdoor"],
-        search_term="software engineer",
+        site_name=[
+            "linkedin",
+            "indeed",
+            "glassdoor",
+        ],  # ziprecruiter needs good ip, and temp fix to pass test on ci
+        search_term="engineer",
         results_wanted=5,
     )

     assert (
-        isinstance(result, pd.DataFrame) and not result.empty
+        isinstance(result, pd.DataFrame) and len(result) == 15
     ), "Result should be a non-empty DataFrame"

View File

@@ -2,10 +2,12 @@ from ..jobspy import scrape_jobs
 import pandas as pd

-def test_indeed():
+def test_glassdoor():
     result = scrape_jobs(
-        site_name="glassdoor", search_term="software engineer", country_indeed="USA"
+        site_name="glassdoor",
+        search_term="engineer",
+        results_wanted=5,
     )
     assert (
-        isinstance(result, pd.DataFrame) and not result.empty
+        isinstance(result, pd.DataFrame) and len(result) == 5
     ), "Result should be a non-empty DataFrame"

View File

@@ -4,8 +4,10 @@ import pandas as pd

 def test_indeed():
     result = scrape_jobs(
-        site_name="indeed", search_term="software engineer", country_indeed="usa"
+        site_name="indeed",
+        search_term="engineer",
+        results_wanted=5,
     )
     assert (
-        isinstance(result, pd.DataFrame) and not result.empty
+        isinstance(result, pd.DataFrame) and len(result) == 5
     ), "Result should be a non-empty DataFrame"

View File

@@ -3,10 +3,7 @@ import pandas as pd

 def test_linkedin():
-    result = scrape_jobs(
-        site_name="linkedin",
-        search_term="software engineer",
-    )
+    result = scrape_jobs(site_name="linkedin", search_term="engineer", results_wanted=5)
     assert (
-        isinstance(result, pd.DataFrame) and not result.empty
+        isinstance(result, pd.DataFrame) and len(result) == 5
     ), "Result should be a non-empty DataFrame"

View File

@@ -4,10 +4,9 @@ import pandas as pd

 def test_ziprecruiter():
     result = scrape_jobs(
-        site_name="zip_recruiter",
-        search_term="software engineer",
+        site_name="zip_recruiter", search_term="software engineer", results_wanted=5
     )

     assert (
-        isinstance(result, pd.DataFrame) and not result.empty
+        isinstance(result, pd.DataFrame) and len(result) == 5
     ), "Result should be a non-empty DataFrame"