Compare commits

...

5 Commits

Author          SHA1        Message                                   Date
Cullen Watson   0a669e9ba8  enh: indeed more fields (#126)            2024-03-09 01:40:01 -06:00
gigaSec         a4f6851c32  Fix GlassDoor Country Vietnam (#122)      2024-03-04 17:35:57 -06:00
troy-conte      db01bc6bbb  log search updates, fix glassdoor (#120)  2024-03-04 16:39:38 -06:00
Cullen Watson   f8a4eccc6b  Remove pandas warning (#118)              2024-02-29 21:30:56 -06:00
Cullen Watson   ba3a16b228  Description format (#107)                 2024-02-14 16:04:23 -06:00
11 changed files with 806 additions and 810 deletions

README.md

@@ -11,7 +11,7 @@ work with us.*
 - Scrapes job postings from **LinkedIn**, **Indeed**, **Glassdoor**, & **ZipRecruiter** simultaneously
 - Aggregates the job postings in a Pandas DataFrame
-- Proxy support (HTTP/S, SOCKS)
+- Proxy support

 [Video Guide for JobSpy](https://www.youtube.com/watch?v=RuP1HrAZnxs&pp=ygUgam9icyBzY3JhcGVyIGJvdCBsaW5rZWRpbiBpbmRlZWQ%3D) -
 Updated for release v1.1.3
@@ -21,7 +21,7 @@ Updated for release v1.1.3
 ### Installation
 ```
-pip install python-jobspy
+pip install -U python-jobspy
 ```

 _Python version >= [3.10](https://www.python.org/downloads/release/python-3100/) required_
@@ -64,18 +64,19 @@ Required
 ├── site_type (List[enum]): linkedin, zip_recruiter, indeed, glassdoor
 └── search_term (str)
 Optional
-├── location (int)
+├── location (str)
-├── distance (int): in miles
+├── distance (int): in miles, default 50
 ├── job_type (enum): fulltime, parttime, internship, contract
-├── proxy (str): in format 'http://user:pass@host:port' or [https, socks]
+├── proxy (str): in format 'http://user:pass@host:port'
 ├── is_remote (bool)
-├── full_description (bool): fetches full description for LinkedIn (slower)
+├── linkedin_fetch_description (bool): fetches full description for LinkedIn (slower)
 ├── results_wanted (int): number of job results to retrieve for each site specified in 'site_type'
 ├── easy_apply (bool): filters for jobs that are hosted on the job board site
 ├── linkedin_company_ids (list[int]): searches for linkedin jobs with specific company ids
+├── description_format (enum): markdown, html (format type of the job descriptions)
 ├── country_indeed (enum): filters the country on Indeed (see below for correct spelling)
 ├── offset (num): starts the search from an offset (e.g. 25 will start the search from the 25th result)
-├── hours_old (int): filters jobs by the number of hours since the job was posted (all but LinkedIn rounds up to next day)
+├── hours_old (int): filters jobs by the number of hours since the job was posted (ZipRecruiter and Glassdoor round up to next day. If you use this on Indeed, it will not filter by job_type or is_remote)
 ```
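For orientation, a minimal call exercising the renamed and newly added parameters might look like the sketch below (not the project's official example; output columns follow the JobPost schema that comes next):

```python
from jobspy import scrape_jobs

# Sketch of a search using the parameters documented above.
jobs = scrape_jobs(
    site_name=["indeed", "linkedin"],
    search_term="software engineer",
    location="Dallas, TX",
    results_wanted=20,
    hours_old=72,                     # ZipRecruiter/Glassdoor round this up to full days
    description_format="markdown",    # new in this release; "html" is the other option
    linkedin_fetch_description=True,  # renamed from full_description
    country_indeed="USA",
)
print(jobs.shape)
```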
 ### JobPost Schema
@@ -99,24 +100,26 @@ JobPost
 │   └── currency (enum)
 └── date_posted (date)
 └── emails (str)
-└── num_urgent_words (int)
 └── is_remote (bool)
+Indeed specific
+├── company_country (str)
+└── company_addresses (str)
+└── company_industry (str)
+└── company_employees_label (str)
+└── company_revenue_label (str)
+└── company_description (str)
+└── ceo_name (str)
+└── ceo_photo_url (str)
+└── logo_photo_url (str)
+└── banner_photo_url (str)
 ```
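These Indeed-specific fields also land as columns on the returned DataFrame. A quick sketch of inspecting them, using the column names from the `desired_order` list in `jobspy/__init__.py` (which differ slightly from the tree above, e.g. `company_num_employees` rather than `company_employees_label`); `jobs` is the DataFrame from the earlier sketch:

```python
# Rows scraped from boards other than Indeed are empty in these columns.
indeed_jobs = jobs[jobs["site"] == "indeed"]
print(indeed_jobs[["company_industry", "company_num_employees", "company_revenue", "ceo_name"]].head())
```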
-### Exceptions
-The following exceptions may be raised when using JobSpy:
-* `LinkedInException`
-* `IndeedException`
-* `ZipRecruiterException`
-* `GlassdoorException`

 ## Supported Countries for Job Searching

 ### **LinkedIn**
-LinkedIn searches globally & uses only the `location` parameter. You can only fetch 1000 jobs max from the LinkedIn endpoint we're using
+LinkedIn searches globally & uses only the `location` parameter. You can only fetch 1000 jobs max from the LinkedIn endpoint we are using

 ### **ZipRecruiter**
@@ -146,10 +149,14 @@ You can specify the following countries when searching on Indeed (use the exact
 | South Korea          | Spain*   | Sweden  | Switzerland*  |
 | Taiwan               | Thailand | Turkey  | Ukraine       |
 | United Arab Emirates | UK*      | USA*    | Uruguay       |
-| Venezuela            | Vietnam  |         |               |
+| Venezuela            | Vietnam* |         |               |

-Glassdoor can only fetch 900 jobs from the endpoint we're using on a given search.
+## Notes
+* Indeed is the best scraper currently with no rate limiting.
+* Glassdoor can only fetch 900 jobs from the endpoint we're using on a given search.
+* LinkedIn is the most restrictive and usually rate limits on around the 10th page
+* ZipRecruiter is okay but has a 5 second delay in between each page to avoid rate limiting.

 ## Frequently Asked Questions

 ---
@@ -167,7 +174,3 @@ persist, [submit an issue](https://github.com/Bunsly/JobSpy/issues).
 - Trying a VPN or proxy to change your IP address.

 ---

poetry.lock (generated)

@@ -1026,6 +1026,21 @@ files = [
     {file = "jupyterlab_widgets-3.0.8.tar.gz", hash = "sha256:d428ab97b8d87cc7c54cbf37644d6e0f0e662f23876e05fa460a73ec3257252a"},
 ]

+[[package]]
+name = "markdownify"
+version = "0.11.6"
+description = "Convert HTML to markdown."
+optional = false
+python-versions = "*"
+files = [
+    {file = "markdownify-0.11.6-py3-none-any.whl", hash = "sha256:ba35fe289d5e9073bcd7d2cad629278fe25f1a93741fcdc0bfb4f009076d8324"},
+    {file = "markdownify-0.11.6.tar.gz", hash = "sha256:009b240e0c9f4c8eaf1d085625dcd4011e12f0f8cec55dedf9ea6f7655e49bfe"},
+]
+
+[package.dependencies]
+beautifulsoup4 = ">=4.9,<5"
+six = ">=1.15,<2"
+
 [[package]]
 name = "markupsafe"
 version = "2.1.3"
@@ -2260,13 +2275,13 @@ test = ["flake8", "isort", "pytest"]
 [[package]]
 name = "tls-client"
-version = "1.0"
+version = "1.0.1"
 description = "Advanced Python HTTP Client."
 optional = false
 python-versions = "*"
 files = [
-    {file = "tls_client-1.0-py3-none-any.whl", hash = "sha256:f1183f5e18cb31914bd62d11b350a33ea0293ea80fb91d69a3072821dece3e66"},
-    {file = "tls_client-1.0.tar.gz", hash = "sha256:7f6de48ad4a0ef69b72682c76ce604155971e07b4bfb2148a36276194ae3e7a0"},
+    {file = "tls_client-1.0.1-py3-none-any.whl", hash = "sha256:2f8915c0642c2226c9e33120072a2af082812f6310d32f4ea4da322db7d3bb1c"},
+    {file = "tls_client-1.0.1.tar.gz", hash = "sha256:dad797f3412bb713606e0765d489f547ffb580c5ffdb74aed47a183ce8505ff5"},
 ]

 [[package]]
@@ -2435,4 +2450,4 @@ files = [
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "404a77d78066cbb2ef71015562baf44aa11d12aac29a191c1ccc7758bfda598a"
+content-hash = "ba7f7cc9b6833a4a6271981f90610395639dd8b9b3db1370cbd1149d70cc9632"

pyproject.toml

@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "python-jobspy"
-version = "1.1.44"
+version = "1.1.48"
 description = "Job scraper for LinkedIn, Indeed, Glassdoor & ZipRecruiter"
 authors = ["Zachary Hampton <zachary@bunsly.com>", "Cullen Watson <cullen@bunsly.com>"]
 homepage = "https://github.com/Bunsly/JobSpy"
@@ -13,11 +13,12 @@ packages = [
 [tool.poetry.dependencies]
 python = "^3.10"
 requests = "^2.31.0"
-tls-client = "*"
 beautifulsoup4 = "^4.12.2"
 pandas = "^2.1.0"
 NUMPY = "1.24.2"
 pydantic = "^2.3.0"
+tls-client = "^1.0.1"
+markdownify = "^0.11.6"

 [tool.poetry.group.dev.dependencies]

src/jobspy/__init__.py

@@ -3,6 +3,7 @@ from typing import Tuple
 from concurrent.futures import ThreadPoolExecutor, as_completed

 from .jobs import JobType, Location
+from .scrapers.utils import logger
 from .scrapers.indeed import IndeedScraper
 from .scrapers.ziprecruiter import ZipRecruiterScraper
 from .scrapers.glassdoor import GlassdoorScraper
@@ -15,23 +16,12 @@ from .scrapers.exceptions import (
     GlassdoorException,
 )

-SCRAPER_MAPPING = {
-    Site.LINKEDIN: LinkedInScraper,
-    Site.INDEED: IndeedScraper,
-    Site.ZIP_RECRUITER: ZipRecruiterScraper,
-    Site.GLASSDOOR: GlassdoorScraper,
-}
-
-def _map_str_to_site(site_name: str) -> Site:
-    return Site[site_name.upper()]
-
 def scrape_jobs(
     site_name: str | list[str] | Site | list[Site] | None = None,
     search_term: str | None = None,
     location: str | None = None,
-    distance: int | None = None,
+    distance: int | None = 50,
     is_remote: bool = False,
     job_type: str | None = None,
     easy_apply: bool | None = None,
@@ -39,7 +29,8 @@ def scrape_jobs(
     country_indeed: str = "usa",
     hyperlinks: bool = False,
     proxy: str | None = None,
-    full_description: bool | None = False,
+    description_format: str = "markdown",
+    linkedin_fetch_description: bool | None = False,
     linkedin_company_ids: list[int] | None = None,
     offset: int | None = 0,
     hours_old: int = None,
@@ -49,6 +40,15 @@ def scrape_jobs(
     Simultaneously scrapes job data from multiple job sites.
     :return: results_wanted: pandas dataframe containing job data
     """
+    SCRAPER_MAPPING = {
+        Site.LINKEDIN: LinkedInScraper,
+        Site.INDEED: IndeedScraper,
+        Site.ZIP_RECRUITER: ZipRecruiterScraper,
+        Site.GLASSDOOR: GlassdoorScraper,
+    }
+
+    def map_str_to_site(site_name: str) -> Site:
+        return Site[site_name.upper()]

     def get_enum_from_value(value_str):
         for job_type in JobType:
@@ -61,16 +61,15 @@ def scrape_jobs(
     def get_site_type():
         site_types = list(Site)
         if isinstance(site_name, str):
-            site_types = [_map_str_to_site(site_name)]
+            site_types = [map_str_to_site(site_name)]
         elif isinstance(site_name, Site):
             site_types = [site_name]
         elif isinstance(site_name, list):
             site_types = [
-                _map_str_to_site(site) if isinstance(site, str) else site
+                map_str_to_site(site) if isinstance(site, str) else site
                 for site in site_name
             ]
         return site_types

     country_enum = Country.from_string(country_indeed)

     scraper_input = ScraperInput(
@@ -82,7 +81,8 @@ def scrape_jobs(
         is_remote=is_remote,
         job_type=job_type,
         easy_apply=easy_apply,
-        full_description=full_description,
+        description_format=description_format,
+        linkedin_fetch_description=linkedin_fetch_description,
         results_wanted=results_wanted,
         linkedin_company_ids=linkedin_company_ids,
         offset=offset,
@@ -92,22 +92,9 @@ def scrape_jobs(
     def scrape_site(site: Site) -> Tuple[str, JobResponse]:
         scraper_class = SCRAPER_MAPPING[site]
         scraper = scraper_class(proxy=proxy)
-        try:
-            scraped_data: JobResponse = scraper.scrape(scraper_input)
-        except (LinkedInException, IndeedException, ZipRecruiterException) as lie:
-            raise lie
-        except Exception as e:
-            if site == Site.LINKEDIN:
-                raise LinkedInException(str(e))
-            if site == Site.INDEED:
-                raise IndeedException(str(e))
-            if site == Site.ZIP_RECRUITER:
-                raise ZipRecruiterException(str(e))
-            if site == Site.GLASSDOOR:
-                raise GlassdoorException(str(e))
-            else:
-                raise e
+        scraped_data: JobResponse = scraper.scrape(scraper_input)
+        site_name = 'ZipRecruiter' if site.value.capitalize() == 'Zip_recruiter' else site.value.capitalize()
+        logger.info(f"{site_name} finished scraping")
         return site.value, scraped_data

     site_to_jobs_dict = {}
@@ -168,13 +155,19 @@ def scrape_jobs(
             jobs_dfs.append(job_df)

     if jobs_dfs:
-        jobs_df = pd.concat(jobs_dfs, ignore_index=True)
-        desired_order: list[str] = [
-            "job_url_hyper" if hyperlinks else "job_url",
+        # Step 1: Filter out all-NA columns from each DataFrame before concatenation
+        filtered_dfs = [df.dropna(axis=1, how='all') for df in jobs_dfs]
+
+        # Step 2: Concatenate the filtered DataFrames
+        jobs_df = pd.concat(filtered_dfs, ignore_index=True)
+
+        # Desired column order
+        desired_order = [
             "site",
+            "job_url_hyper" if hyperlinks else "job_url",
+            "job_url_direct",
             "title",
             "company",
-            "company_url",
             "location",
             "job_type",
             "date_posted",
@@ -183,13 +176,31 @@ def scrape_jobs(
             "max_amount",
             "currency",
             "is_remote",
-            "num_urgent_words",
-            "benefits",
             "emails",
             "description",
-        ]
-        jobs_formatted_df = jobs_df[desired_order]
-    else:
-        jobs_formatted_df = pd.DataFrame()
-    return jobs_formatted_df.sort_values(by=['site', 'date_posted'], ascending=[True, False])
+            "company_url",
+            "company_url_direct",
+            "company_addresses",
+            "company_industry",
+            "company_num_employees",
+            "company_revenue",
+            "company_description",
+            "logo_photo_url",
+            "banner_photo_url",
+            "ceo_name",
+            "ceo_photo_url",
+        ]
+        # Step 3: Ensure all desired columns are present, adding missing ones as empty
+        for column in desired_order:
+            if column not in jobs_df.columns:
+                jobs_df[column] = None  # Add missing columns as empty
+
+        # Reorder the DataFrame according to the desired order
+        jobs_df = jobs_df[desired_order]
+
+        # Step 4: Sort the DataFrame as required
+        return jobs_df.sort_values(by=['site', 'date_posted'], ascending=[True, False])
+    else:
+        return pd.DataFrame()
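The rewritten concatenation above is what sidesteps pandas' deprecation warning about concatenating all-NA columns (the subject of #118, "Remove pandas warning"). A self-contained sketch of the same pattern with toy data:

```python
import pandas as pd

# Drop columns that are entirely NA from each frame before concatenating,
# then restore any column that was dropped from every frame.
df_a = pd.DataFrame({"title": ["dev"], "ceo_name": [None]})   # ceo_name is all-NA here
df_b = pd.DataFrame({"title": ["qa"], "ceo_name": ["Jane Doe"]})

filtered = [df.dropna(axis=1, how="all") for df in (df_a, df_b)]
combined = pd.concat(filtered, ignore_index=True)

for column in ("title", "ceo_name"):
    if column not in combined.columns:
        combined[column] = None  # re-add columns missing from every frame
print(combined)
```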

src/jobspy/jobs.py

@@ -57,7 +57,7 @@ class JobType(Enum):
 class Country(Enum):
     """
     Gets the subdomain for Indeed and Glassdoor.
-    The second item in the tuple is the subdomain for Indeed
+    The second item in the tuple is the subdomain (and API country code if there's a ':' separator) for Indeed
     The third item in the tuple is the subdomain (and tld if there's a ':' separator) for Glassdoor
     """
@@ -118,11 +118,11 @@ class Country(Enum):
     TURKEY = ("turkey", "tr")
     UKRAINE = ("ukraine", "ua")
     UNITEDARABEMIRATES = ("united arab emirates", "ae")
-    UK = ("uk,united kingdom", "uk", "co.uk")
-    USA = ("usa,us,united states", "www", "com")
+    UK = ("uk,united kingdom", "uk:gb", "co.uk")
+    USA = ("usa,us,united states", "www:us", "com")
     URUGUAY = ("uruguay", "uy")
     VENEZUELA = ("venezuela", "ve")
-    VIETNAM = ("vietnam", "vn")
+    VIETNAM = ("vietnam", "vn", "com")

     # internal for ziprecruiter
     US_CANADA = ("usa/ca", "www")
@@ -132,7 +132,10 @@ class Country(Enum):
     @property
     def indeed_domain_value(self):
-        return self.value[1]
+        subdomain, _, api_country_code = self.value[1].partition(":")
+        if subdomain and api_country_code:
+            return subdomain, api_country_code.upper()
+        return self.value[1], self.value[1].upper()

     @property
     def glassdoor_domain_value(self):
@@ -145,7 +148,7 @@ class Country(Enum):
         else:
             raise Exception(f"Glassdoor is not available for {self.name}")

-    def get_url(self):
+    def get_glassdoor_url(self):
         return f"https://{self.glassdoor_domain_value}/"
     @classmethod
@@ -163,7 +166,7 @@ class Country(Enum):
 class Location(BaseModel):
-    country: Country | None = None
+    country: Country | str | None = None
     city: Optional[str] = None
     state: Optional[str] = None
@@ -173,7 +176,9 @@ class Location(BaseModel):
             location_parts.append(self.city)
         if self.state:
             location_parts.append(self.state)
-        if self.country and self.country not in (Country.US_CANADA, Country.WORLDWIDE):
+        if isinstance(self.country, str):
+            location_parts.append(self.country)
+        elif self.country and self.country not in (Country.US_CANADA, Country.WORLDWIDE):
             country_name = self.country.value[0]
             if "," in country_name:
                 country_name = country_name.split(",")[0]
@@ -210,23 +215,38 @@ class Compensation(BaseModel):
     currency: Optional[str] = "USD"

+class DescriptionFormat(Enum):
+    MARKDOWN = "markdown"
+    HTML = "html"
+
 class JobPost(BaseModel):
     title: str
-    company_name: str
+    company_name: str | None
     job_url: str
+    job_url_direct: str | None = None
     location: Optional[Location]

     description: str | None = None
     company_url: str | None = None
+    company_url_direct: str | None = None

     job_type: list[JobType] | None = None
     compensation: Compensation | None = None
     date_posted: date | None = None
-    benefits: str | None = None
     emails: list[str] | None = None
-    num_urgent_words: int | None = None
     is_remote: bool | None = None
-    # company_industry: str | None = None
+
+    # indeed specific
+    company_addresses: str | None = None
+    company_industry: str | None = None
+    company_num_employees: str | None = None
+    company_revenue: str | None = None
+    company_description: str | None = None
+    ceo_name: str | None = None
+    ceo_photo_url: str | None = None
+    logo_photo_url: str | None = None
+    banner_photo_url: str | None = None

 class JobResponse(BaseModel):
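To make the `indeed_domain_value` change concrete: entries like `"uk:gb"` now pack the Indeed subdomain and the API country code into a single slot, split by `partition`. A standalone sketch of that parsing (the same logic as the property above, written as a free function):

```python
# Mirrors the property added above: "uk:gb" carries subdomain and API code,
# while a plain value like "vn" is reused for both.
def parse_indeed_value(raw: str) -> tuple[str, str]:
    subdomain, _, api_country_code = raw.partition(":")
    if subdomain and api_country_code:
        return subdomain, api_country_code.upper()
    return raw, raw.upper()

assert parse_indeed_value("uk:gb") == ("uk", "GB")    # UK entry
assert parse_indeed_value("www:us") == ("www", "US")  # USA entry
assert parse_indeed_value("vn") == ("vn", "VN")       # Vietnam entry
```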

src/jobspy/scrapers/__init__.py

@@ -1,4 +1,11 @@
-from ..jobs import Enum, BaseModel, JobType, JobResponse, Country
+from ..jobs import (
+    Enum,
+    BaseModel,
+    JobType,
+    JobResponse,
+    Country,
+    DescriptionFormat
+)

 class Site(Enum):
@@ -18,9 +25,10 @@ class ScraperInput(BaseModel):
     is_remote: bool = False
     job_type: JobType | None = None
     easy_apply: bool | None = None
-    full_description: bool = False
     offset: int = 0
+    linkedin_fetch_description: bool = False
     linkedin_company_ids: list[int] | None = None
+    description_format: DescriptionFormat | None = DescriptionFormat.MARKDOWN
     results_wanted: int = 15
     hours_old: int | None = None
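A hypothetical construction of the updated model, to show the renamed and added fields in use (`site_type` and `search_term` are assumed from the README's "Required" section; the remaining fields and defaults are as listed above):

```python
from jobspy.scrapers import ScraperInput, Site
from jobspy.jobs import DescriptionFormat

# Hypothetical example; field names follow the model above.
scraper_input = ScraperInput(
    site_type=[Site.INDEED],
    search_term="data engineer",
    linkedin_fetch_description=False,           # renamed from full_description
    description_format=DescriptionFormat.HTML,  # overrides the MARKDOWN default
    results_wanted=30,
)
```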

src/jobspy/scrapers/glassdoor/__init__.py

@@ -5,15 +5,21 @@ jobspy.scrapers.glassdoor
 This module contains routines to scrape Glassdoor.
 """
 import json
+import re
 import requests
 from typing import Optional
 from datetime import datetime, timedelta
 from concurrent.futures import ThreadPoolExecutor, as_completed

-from ..utils import count_urgent_words, extract_emails_from_text
+from ..utils import extract_emails_from_text
 from .. import Scraper, ScraperInput, Site
 from ..exceptions import GlassdoorException
-from ..utils import create_session
+from ..utils import (
+    create_session,
+    markdown_converter,
+    logger
+)
 from ...jobs import (
     JobPost,
     Compensation,
@@ -21,6 +27,7 @@ from ...jobs import (
     Location,
     JobResponse,
     JobType,
+    DescriptionFormat
 )
@@ -32,13 +39,59 @@ class GlassdoorScraper(Scraper):
         site = Site(Site.GLASSDOOR)
         super().__init__(site, proxy=proxy)

-        self.url = None
+        self.base_url = None
         self.country = None
         self.session = None
+        self.scraper_input = None
         self.jobs_per_page = 30
+        self.max_pages = 30
         self.seen_urls = set()

-    def fetch_jobs_page(
+    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
+        """
+        Scrapes Glassdoor for jobs with scraper_input criteria.
+        :param scraper_input: Information about job search criteria.
+        :return: JobResponse containing a list of jobs.
+        """
+        self.scraper_input = scraper_input
+        self.scraper_input.results_wanted = min(900, scraper_input.results_wanted)
+        self.base_url = self.scraper_input.country.get_glassdoor_url()
+
+        self.session = create_session(self.proxy, is_tls=True, has_retry=True)
+        token = self._get_csrf_token()
+        self.headers['gd-csrf-token'] = token if token else self.fallback_token
+
+        location_id, location_type = self._get_location(
+            scraper_input.location, scraper_input.is_remote
+        )
+        if location_type is None:
+            logger.error('Glassdoor: location not parsed')
+            return JobResponse(jobs=[])
+        all_jobs: list[JobPost] = []
+        cursor = None
+
+        for page in range(
+            1 + (scraper_input.offset // self.jobs_per_page),
+            min(
+                (scraper_input.results_wanted // self.jobs_per_page) + 2,
+                self.max_pages + 1,
+            ),
+        ):
+            logger.info(f'Glassdoor search page: {page}')
+            try:
+                jobs, cursor = self._fetch_jobs_page(
+                    scraper_input, location_id, location_type, page, cursor
+                )
+                all_jobs.extend(jobs)
+                if not jobs or len(all_jobs) >= scraper_input.results_wanted:
+                    all_jobs = all_jobs[: scraper_input.results_wanted]
+                    break
+            except Exception as e:
+                logger.error(f'Glassdoor: {str(e)}')
+                break
+        return JobResponse(jobs=all_jobs)
+
+    def _fetch_jobs_page(
         self,
         scraper_input: ScraperInput,
         location_id: int,
@@ -49,28 +102,28 @@ class GlassdoorScraper(Scraper):
         """
         Scrapes a page of Glassdoor for jobs with scraper_input criteria
         """
+        jobs = []
+        self.scraper_input = scraper_input
         try:
-            payload = self.add_payload(
-                scraper_input, location_id, location_type, page_num, cursor
+            payload = self._add_payload(
+                location_id, location_type, page_num, cursor
             )
             response = self.session.post(
-                f"{self.url}/graph", headers=self.headers(), timeout=10, data=payload
+                f"{self.base_url}/graph", headers=self.headers, timeout_seconds=15, data=payload
             )
             if response.status_code != 200:
-                raise GlassdoorException(
-                    f"bad response status code: {response.status_code}"
-                )
+                raise GlassdoorException(f"bad response status code: {response.status_code}")
             res_json = response.json()[0]
             if "errors" in res_json:
                 raise ValueError("Error encountered in API response")
-        except Exception as e:
-            raise GlassdoorException(str(e))
+        except (requests.exceptions.ReadTimeout, GlassdoorException, ValueError, Exception) as e:
+            logger.error(f'Glassdoor: {str(e)}')
+            return jobs, None

         jobs_data = res_json["data"]["jobListings"]["jobListings"]

-        jobs = []
         with ThreadPoolExecutor(max_workers=self.jobs_per_page) as executor:
-            future_to_job_data = {executor.submit(self.process_job, job): job for job in jobs_data}
+            future_to_job_data = {executor.submit(self._process_job, job): job for job in jobs_data}
             for future in as_completed(future_to_job_data):
                 try:
                     job_post = future.result()
@@ -83,10 +136,24 @@ class GlassdoorScraper(Scraper):
             res_json["data"]["jobListings"]["paginationCursors"], page_num + 1
         )

-    def process_job(self, job_data):
-        """Processes a single job and fetches its description."""
+    def _get_csrf_token(self):
+        """
+        Fetches csrf token needed for API by visiting a generic page
+        """
+        res = self.session.get(f'{self.base_url}/Job/computer-science-jobs.htm', headers=self.headers)
+        pattern = r'"token":\s*"([^"]+)"'
+        matches = re.findall(pattern, res.text)
+        token = None
+        if matches:
+            token = matches[0]
+        return token
+
+    def _process_job(self, job_data):
+        """
+        Processes a single job and fetches its description.
+        """
         job_id = job_data["jobview"]["job"]["listingId"]
-        job_url = f'{self.url}job-listing/j?jl={job_id}'
+        job_url = f'{self.base_url}job-listing/j?jl={job_id}'
         if job_url in self.seen_urls:
             return None
         self.seen_urls.add(job_url)
@@ -106,15 +173,13 @@ class GlassdoorScraper(Scraper):
         location = self.parse_location(location_name)
         compensation = self.parse_compensation(job["header"])
         try:
-            description = self.fetch_job_description(job_id)
+            description = self._fetch_job_description(job_id)
         except:
             description = None
-
-        job_post = JobPost(
+        return JobPost(
             title=title,
-            company_url=f"{self.url}Overview/W-EI_IE{company_id}.htm" if company_id else None,
+            company_url=f"{self.base_url}Overview/W-EI_IE{company_id}.htm" if company_id else None,
             company_name=company_name,
             date_posted=date_posted,
             job_url=job_url,
@@ -123,55 +188,13 @@ class GlassdoorScraper(Scraper):
             is_remote=is_remote,
             description=description,
             emails=extract_emails_from_text(description) if description else None,
-            num_urgent_words=count_urgent_words(description) if description else None,
         )
-        return job_post

-    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
+    def _fetch_job_description(self, job_id):
         """
-        Scrapes Glassdoor for jobs with scraper_input criteria.
-        :param scraper_input: Information about job search criteria.
-        :return: JobResponse containing a list of jobs.
+        Fetches the job description for a single job ID.
         """
-        scraper_input.results_wanted = min(900, scraper_input.results_wanted)
-        self.country = scraper_input.country
-        self.url = self.country.get_url()
-
-        location_id, location_type = self.get_location(
-            scraper_input.location, scraper_input.is_remote
-        )
-        all_jobs: list[JobPost] = []
-        cursor = None
-        max_pages = 30
-        self.session = create_session(self.proxy, is_tls=False, has_retry=True)
-        self.session.get(self.url)
-
-        try:
-            for page in range(
-                1 + (scraper_input.offset // self.jobs_per_page),
-                min(
-                    (scraper_input.results_wanted // self.jobs_per_page) + 2,
-                    max_pages + 1,
-                ),
-            ):
-                try:
-                    jobs, cursor = self.fetch_jobs_page(
-                        scraper_input, location_id, location_type, page, cursor
-                    )
-                    all_jobs.extend(jobs)
-                    if len(all_jobs) >= scraper_input.results_wanted:
-                        all_jobs = all_jobs[: scraper_input.results_wanted]
-                        break
-                except Exception as e:
-                    raise GlassdoorException(str(e))
-        except Exception as e:
-            raise GlassdoorException(str(e))
-        return JobResponse(jobs=all_jobs)
-
-    def fetch_job_description(self, job_id):
-        """Fetches the job description for a single job ID."""
-        url = f"{self.url}/graph"
+        url = f"{self.base_url}/graph"
         body = [
             {
                 "operationName": "JobDetailQuery",
@@ -196,48 +219,28 @@ class GlassdoorScraper(Scraper):
             """
             }
         ]
-        response = requests.post(url, json=body, headers=GlassdoorScraper.headers())
-        if response.status_code != 200:
+        res = requests.post(url, json=body, headers=self.headers)
+        if res.status_code != 200:
             return None
-        data = response.json()[0]
+        data = res.json()[0]
         desc = data['data']['jobview']['job']['description']
-        return desc
+        return markdown_converter(desc) if self.scraper_input.description_format == DescriptionFormat.MARKDOWN else desc
-    @staticmethod
-    def parse_compensation(data: dict) -> Optional[Compensation]:
-        pay_period = data.get("payPeriod")
-        adjusted_pay = data.get("payPeriodAdjustedPay")
-        currency = data.get("payCurrency", "USD")
-        if not pay_period or not adjusted_pay:
-            return None
-
-        interval = None
-        if pay_period == "ANNUAL":
-            interval = CompensationInterval.YEARLY
-        elif pay_period:
-            interval = CompensationInterval.get_interval(pay_period)
-        min_amount = int(adjusted_pay.get("p10") // 1)
-        max_amount = int(adjusted_pay.get("p90") // 1)
-        return Compensation(
-            interval=interval,
-            min_amount=min_amount,
-            max_amount=max_amount,
-            currency=currency,
-        )
-
-    def get_location(self, location: str, is_remote: bool) -> (int, str):
+    def _get_location(self, location: str, is_remote: bool) -> (int, str):
         if not location or is_remote:
             return "11047", "STATE"  # remote options
-        url = f"{self.url}/findPopularLocationAjax.htm?maxLocationsToReturn=10&term={location}"
+        url = f"{self.base_url}/findPopularLocationAjax.htm?maxLocationsToReturn=10&term={location}"
         session = create_session(self.proxy, has_retry=True)
-        response = session.get(url)
-        if response.status_code != 200:
-            raise GlassdoorException(
-                f"bad response status code: {response.status_code}"
-            )
-        items = response.json()
+        res = self.session.get(url, headers=self.headers)
+        if res.status_code != 200:
+            if res.status_code == 429:
+                logger.error(f'429 Response - Blocked by Glassdoor for too many requests')
+                return None, None
+            else:
+                logger.error(f'Glassdoor response status code {res.status_code}')
+                return None, None
+        items = res.json()

         if not items:
             raise ValueError(f"Location '{location}' not found on Glassdoor")
         location_type = items[0]["locationType"]
@@ -249,18 +252,16 @@ class GlassdoorScraper(Scraper):
             location_type = "COUNTRY"
         return int(items[0]["locationId"]), location_type

-    @staticmethod
-    def add_payload(
-        scraper_input,
+    def _add_payload(
+        self,
         location_id: int,
         location_type: str,
         page_num: int,
         cursor: str | None = None,
     ) -> str:
-        # `fromage` is the posting time filter in days
-        fromage = max(scraper_input.hours_old // 24, 1) if scraper_input.hours_old else None
+        fromage = max(self.scraper_input.hours_old // 24, 1) if self.scraper_input.hours_old else None
         filter_params = []
-        if scraper_input.easy_apply:
+        if self.scraper_input.easy_apply:
             filter_params.append({"filterKey": "applicationType", "values": "1"})
         if fromage:
             filter_params.append({"filterKey": "fromAge", "values": str(fromage)})
@@ -269,7 +270,7 @@ class GlassdoorScraper(Scraper):
             "variables": {
                 "excludeJobListingIds": [],
                 "filterParams": filter_params,
-                "keyword": scraper_input.search_term,
+                "keyword": self.scraper_input.search_term,
                 "numJobsToShow": 30,
                 "locationType": location_type,
                 "locationId": int(location_id),
@@ -279,7 +280,74 @@ class GlassdoorScraper(Scraper):
                 "fromage": fromage,
                 "sort": "date"
             },
-            "query": """
+            "query": self.query_template
+        }
+        if self.scraper_input.job_type:
+            payload["variables"]["filterParams"].append(
+                {"filterKey": "jobType", "values": self.scraper_input.job_type.value[0]}
+            )
+        return json.dumps([payload])
+
+    @staticmethod
+    def parse_compensation(data: dict) -> Optional[Compensation]:
+        pay_period = data.get("payPeriod")
+        adjusted_pay = data.get("payPeriodAdjustedPay")
+        currency = data.get("payCurrency", "USD")
+        if not pay_period or not adjusted_pay:
+            return None
+
+        interval = None
+        if pay_period == "ANNUAL":
+            interval = CompensationInterval.YEARLY
+        elif pay_period:
+            interval = CompensationInterval.get_interval(pay_period)
+        min_amount = int(adjusted_pay.get("p10") // 1)
+        max_amount = int(adjusted_pay.get("p90") // 1)
+        return Compensation(
+            interval=interval,
+            min_amount=min_amount,
+            max_amount=max_amount,
+            currency=currency,
+        )
+
+    @staticmethod
+    def get_job_type_enum(job_type_str: str) -> list[JobType] | None:
+        for job_type in JobType:
+            if job_type_str in job_type.value:
+                return [job_type]
+
+    @staticmethod
+    def parse_location(location_name: str) -> Location | None:
+        if not location_name or location_name == "Remote":
+            return
+        city, _, state = location_name.partition(", ")
+        return Location(city=city, state=state)
+
+    @staticmethod
+    def get_cursor_for_page(pagination_cursors, page_num):
+        for cursor_data in pagination_cursors:
+            if cursor_data["pageNumber"] == page_num:
+                return cursor_data["cursor"]
+
+    fallback_token = "Ft6oHEWlRZrxDww95Cpazw:0pGUrkb2y3TyOpAIqF2vbPmUXoXVkD3oEGDVkvfeCerceQ5-n8mBg3BovySUIjmCPHCaW0H2nQVdqzbtsYqf4Q:wcqRqeegRUa9MVLJGyujVXB7vWFPjdaS1CtrrzJq-ok"
+    headers = {
+        "authority": "www.glassdoor.com",
+        "accept": "*/*",
+        "accept-language": "en-US,en;q=0.9",
+        "apollographql-client-name": "job-search-next",
+        "apollographql-client-version": "4.65.5",
+        "content-type": "application/json",
+        "origin": "https://www.glassdoor.com",
+        "referer": "https://www.glassdoor.com/",
+        "sec-ch-ua": '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
+        "sec-ch-ua-mobile": "?0",
+        "sec-ch-ua-platform": '"macOS"',
+        "sec-fetch-dest": "empty",
+        "sec-fetch-mode": "cors",
+        "sec-fetch-site": "same-origin",
+        "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36",
+    }
+    query_template = """
         query JobSearchResultsQuery(
             $excludeJobListingIds: [Long!],
             $keyword: String,
@@ -445,54 +513,3 @@ class GlassdoorScraper(Scraper):
             __typename
           }
         """
-        }
-        if scraper_input.job_type:
-            payload["variables"]["filterParams"].append(
-                {"filterKey": "jobType", "values": scraper_input.job_type.value[0]}
-            )
-        return json.dumps([payload])
-
-    @staticmethod
-    def get_job_type_enum(job_type_str: str) -> list[JobType] | None:
-        for job_type in JobType:
-            if job_type_str in job_type.value:
-                return [job_type]
-
-    @staticmethod
-    def parse_location(location_name: str) -> Location | None:
-        if not location_name or location_name == "Remote":
-            return
-        city, _, state = location_name.partition(", ")
-        return Location(city=city, state=state)
-
-    @staticmethod
-    def get_cursor_for_page(pagination_cursors, page_num):
-        for cursor_data in pagination_cursors:
-            if cursor_data["pageNumber"] == page_num:
-                return cursor_data["cursor"]
-
-    @staticmethod
-    def headers() -> dict:
-        """
-        Returns headers needed for requests
-        :return: dict - Dictionary containing headers
-        """
-        return {
-            "authority": "www.glassdoor.com",
-            "accept": "*/*",
-            "accept-language": "en-US,en;q=0.9",
-            "apollographql-client-name": "job-search-next",
-            "apollographql-client-version": "4.65.5",
-            "content-type": "application/json",
-            "gd-csrf-token": "Ft6oHEWlRZrxDww95Cpazw:0pGUrkb2y3TyOpAIqF2vbPmUXoXVkD3oEGDVkvfeCerceQ5-n8mBg3BovySUIjmCPHCaW0H2nQVdqzbtsYqf4Q:wcqRqeegRUa9MVLJGyujVXB7vWFPjdaS1CtrrzJq-ok",
-            "origin": "https://www.glassdoor.com",
-            "referer": "https://www.glassdoor.com/",
-            "sec-ch-ua": '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
-            "sec-ch-ua-mobile": "?0",
-            "sec-ch-ua-platform": '"macOS"',
-            "sec-fetch-dest": "empty",
-            "sec-fetch-mode": "cors",
-            "sec-fetch-site": "same-origin",
-            "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36",
-        }
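The new `_get_csrf_token` lifts the value for the `gd-csrf-token` header out of an ordinary Glassdoor page with a regex, falling back to the hardcoded `fallback_token` when nothing matches. A standalone sketch of that extraction, with made-up page text:

```python
import re

# Hypothetical page snippet; the scraper reads this from a generic job page.
page_text = 'window.appData = {"token": "abc123:def456", "locale": "en-US"}'

pattern = r'"token":\s*"([^"]+)"'
matches = re.findall(pattern, page_text)
token = matches[0] if matches else None  # None triggers the fallback token
print(token)  # abc123:def456
```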

src/jobspy/scrapers/indeed/__init__.py

@@ -4,23 +4,17 @@ jobspy.scrapers.indeed
 This module contains routines to scrape Indeed.
 """
-import re
 import math
-import json
-import requests
-from typing import Any
+from concurrent.futures import ThreadPoolExecutor, Future
 from datetime import datetime
-from bs4 import BeautifulSoup
-from bs4.element import Tag
-from concurrent.futures import ThreadPoolExecutor, Future
+import requests

-from ..exceptions import IndeedException
+from .. import Scraper, ScraperInput, Site
 from ..utils import (
-    count_urgent_words,
     extract_emails_from_text,
-    create_session,
     get_enum_from_job_type,
+    markdown_converter,
     logger
 )
 from ...jobs import (
@@ -30,324 +24,261 @@ from ...jobs import (
     Location,
     JobResponse,
     JobType,
+    DescriptionFormat
 )
-from .. import Scraper, ScraperInput, Site

 class IndeedScraper(Scraper):
     def __init__(self, proxy: str | None = None):
         """
-        Initializes IndeedScraper with the Indeed job search url
+        Initializes IndeedScraper with the Indeed API url
         """
-        self.url = None
-        self.country = None
+        self.scraper_input = None
+        self.jobs_per_page = 100
+        self.num_workers = 10
+        self.seen_urls = set()
+        self.headers = None
+        self.api_country_code = None
+        self.base_url = None
+        self.api_url = "https://apis.indeed.com/graphql"
         site = Site(Site.INDEED)
         super().__init__(site, proxy=proxy)
-        self.jobs_per_page = 25
-        self.seen_urls = set()
-    def scrape_page(
-        self, scraper_input: ScraperInput, page: int
-    ) -> list[JobPost]:
-        """
-        Scrapes a page of Indeed for jobs with scraper_input criteria
-        :param scraper_input:
-        :param page:
-        :return: jobs found on page, total number of jobs found for search
-        """
-        job_list = []
-        self.country = scraper_input.country
-        domain = self.country.indeed_domain_value
-        self.url = f"https://{domain}.indeed.com"
-
-        try:
-            session = create_session(self.proxy)
-            response = session.get(
-                f"{self.url}/m/jobs",
-                headers=self.get_headers(),
-                params=self.add_params(scraper_input, page),
-                allow_redirects=True,
-                timeout_seconds=10,
-            )
-            if response.status_code not in range(200, 400):
-                raise IndeedException(
-                    f"bad response with status code: {response.status_code}"
-                )
-        except Exception as e:
-            if "Proxy responded with" in str(e):
-                logger.error(f'Indeed: Bad proxy')
-            else:
-                logger.error(f'Indeed: {str(e)}')
-            return job_list
-
-        soup = BeautifulSoup(response.content, "html.parser")
-        if "did not match any jobs" in response.text:
-            return job_list
-
-        jobs = IndeedScraper.parse_jobs(
-            soup
-        )  #: can raise exception, handled by main scrape function
-        if (
-            not jobs.get("metaData", {})
-            .get("mosaicProviderJobCardsModel", {})
-            .get("results")
-        ):
-            raise IndeedException("No jobs found.")
-
-        def process_job(job: dict, job_detailed: dict) -> JobPost | None:
-            job_url = f'{self.url}/m/jobs/viewjob?jk={job["jobkey"]}'
-            job_url_client = f'{self.url}/viewjob?jk={job["jobkey"]}'
-            if job_url in self.seen_urls:
-                return None
-            self.seen_urls.add(job_url)
-
-            description = job_detailed['description']['html']
-
-            job_type = IndeedScraper.get_job_type(job)
-            timestamp_seconds = job["pubDate"] / 1000
-            date_posted = datetime.fromtimestamp(timestamp_seconds)
-            date_posted = date_posted.strftime("%Y-%m-%d")
-
-            job_post = JobPost(
-                title=job["normTitle"],
-                description=description,
-                company_name=job["company"],
-                company_url=f"{self.url}{job_detailed['employer']['relativeCompanyPageUrl']}" if job_detailed['employer'] else None,
-                location=Location(
-                    city=job.get("jobLocationCity"),
-                    state=job.get("jobLocationState"),
-                    country=self.country,
-                ),
-                job_type=job_type,
-                compensation=self.get_compensation(job, job_detailed),
-                date_posted=date_posted,
-                job_url=job_url_client,
-                emails=extract_emails_from_text(description) if description else None,
-                num_urgent_words=count_urgent_words(description)
-                if description
-                else None,
-                is_remote=IndeedScraper.is_job_remote(job, job_detailed, description)
-            )
-            return job_post
-
-        workers = 10
-        jobs = jobs["metaData"]["mosaicProviderJobCardsModel"]["results"]
-        job_keys = [job['jobkey'] for job in jobs]
-        jobs_detailed = self.get_job_details(job_keys)
-        with ThreadPoolExecutor(max_workers=workers) as executor:
-            job_results: list[Future] = [
-                executor.submit(process_job, job, job_detailed['job']) for job, job_detailed in zip(jobs, jobs_detailed)
-            ]
-        job_list = [result.result() for result in job_results if result.result()]
-
-        return job_list
     def scrape(self, scraper_input: ScraperInput) -> JobResponse:
         """
         Scrapes Indeed for jobs with scraper_input criteria
         :param scraper_input:
         :return: job_response
         """
-        job_list = self.scrape_page(scraper_input, 0)
-        pages_processed = 1
+        self.scraper_input = scraper_input
+        domain, self.api_country_code = self.scraper_input.country.indeed_domain_value
+        self.base_url = f"https://{domain}.indeed.com"
+        self.headers = self.api_headers.copy()
+        self.headers['indeed-co'] = self.scraper_input.country.indeed_domain_value
+        job_list = []
+        page = 1
+
+        cursor = None
+        offset_pages = math.ceil(self.scraper_input.offset / 100)
+        for _ in range(offset_pages):
+            logger.info(f'Indeed skipping search page: {page}')
+            __, cursor = self._scrape_page(cursor)
+            if not __:
+                logger.info(f'Indeed found no jobs on page: {page}')
+                break

         while len(self.seen_urls) < scraper_input.results_wanted:
-            pages_to_process = math.ceil((scraper_input.results_wanted - len(self.seen_urls)) / self.jobs_per_page)
-            new_jobs = False
-
-            with ThreadPoolExecutor(max_workers=10) as executor:
-                futures: list[Future] = [
-                    executor.submit(self.scrape_page, scraper_input, page + pages_processed)
-                    for page in range(pages_to_process)
-                ]
-
-                for future in futures:
-                    jobs = future.result()
-                    if jobs:
-                        job_list += jobs
-                        new_jobs = True
-                    if len(self.seen_urls) >= scraper_input.results_wanted:
-                        break
-
-            pages_processed += pages_to_process
-            if not new_jobs:
-                break
+            logger.info(f'Indeed search page: {page}')
+            jobs, cursor = self._scrape_page(cursor)
+            if not jobs:
+                logger.info(f'Indeed found no jobs on page: {page}')
+                break
+            job_list += jobs
+            page += 1
+        return JobResponse(jobs=job_list[:scraper_input.results_wanted])
+
+    def _scrape_page(self, cursor: str | None) -> (list[JobPost], str | None):
+        """
+        Scrapes a page of Indeed for jobs with scraper_input criteria
+        :param cursor:
+        :return: jobs found on page, next page cursor
+        """
+        jobs = []
+        new_cursor = None
+        filters = self._build_filters()
+        query = self.job_search_query.format(
+            what=self.scraper_input.search_term,
+            location=self.scraper_input.location if self.scraper_input.location else self.scraper_input.country.value[0].split(',')[-1],
+            radius=self.scraper_input.distance,
+            dateOnIndeed=self.scraper_input.hours_old,
+            cursor=f'cursor: "{cursor}"' if cursor else '',
+            filters=filters
+        )
+        payload = {
+            'query': query,
+        }
+        api_headers = self.api_headers.copy()
+        api_headers['indeed-co'] = self.api_country_code
+        response = requests.post(self.api_url, headers=api_headers, json=payload, proxies=self.proxy, timeout=10)
+        if response.status_code != 200:
+            logger.info(f'Indeed responded with status code: {response.status_code} (submit GitHub issue if this appears to be a bug)')
+            return jobs, new_cursor
+        data = response.json()
+        jobs = data['data']['jobSearch']['results']
+        new_cursor = data['data']['jobSearch']['pageInfo']['nextCursor']
+
+        with ThreadPoolExecutor(max_workers=self.num_workers) as executor:
+            job_results: list[Future] = [
+                executor.submit(self._process_job, job['job']) for job in jobs
+            ]
+        job_list = [result.result() for result in job_results if result.result()]
+        return job_list, new_cursor
-        if len(self.seen_urls) > scraper_input.results_wanted:
-            job_list = job_list[:scraper_input.results_wanted]
-
-        return JobResponse(jobs=job_list)
+    def _build_filters(self):
+        """
+        Builds the filters dict for job type/is_remote. If hours_old is provided, composite filter for job_type/is_remote is not possible.
+        IndeedApply: filters: { keyword: { field: "indeedApplyScope", keys: ["DESKTOP"] } }
+        """
+        filters_str = ""
+        if self.scraper_input.hours_old:
+            filters_str = """
+            filters: {{
+                date: {{
+                  field: "dateOnIndeed",
+                  start: "{start}h"
+                }}
+            }}
+            """.format(start=self.scraper_input.hours_old)
+        elif self.scraper_input.job_type or self.scraper_input.is_remote:
+            job_type_key_mapping = {
+                JobType.FULL_TIME: "CF3CP",
+                JobType.PART_TIME: "75GKK",
+                JobType.CONTRACT: "NJXCK",
+                JobType.INTERNSHIP: "VDTG7",
+            }
+
+            keys = []
+            if self.scraper_input.job_type:
+                key = job_type_key_mapping[self.scraper_input.job_type]
+                keys.append(key)
+
+            if self.scraper_input.is_remote:
+                keys.append("DSQF7")
+
+            if keys:
+                keys_str = '", "'.join(keys)  # Prepare your keys string
+                filters_str = f"""
+                filters: {{
+                  composite: {{
+                    filters: [{{
+                      keyword: {{
+                        field: "attributes",
+                        keys: ["{keys_str}"]
+                      }}
+                    }}]
+                  }}
+                }}
+                """
+        return filters_str
+    def _process_job(self, job: dict) -> JobPost | None:
+        """
+        Parses the job dict into JobPost model
+        :param job: dict to parse
+        :return: JobPost if it's a new job
+        """
+        job_url = f'{self.base_url}/viewjob?jk={job["key"]}'
+        if job_url in self.seen_urls:
+            return
+        self.seen_urls.add(job_url)
+        description = job['description']['html']
+        description = markdown_converter(description) if self.scraper_input.description_format == DescriptionFormat.MARKDOWN else description
+
+        job_type = self._get_job_type(job['attributes'])
+        timestamp_seconds = job["datePublished"] / 1000
+        date_posted = datetime.fromtimestamp(timestamp_seconds).strftime("%Y-%m-%d")
+        employer = job['employer'].get('dossier') if job['employer'] else None
+        employer_details = employer.get('employerDetails', {}) if employer else {}
+        return JobPost(
+            title=job["title"],
+            description=description,
+            company_name=job['employer'].get("name") if job.get('employer') else None,
+            company_url=f"{self.base_url}{job['employer']['relativeCompanyPageUrl']}" if job['employer'] else None,
+            company_url_direct=employer['links']['corporateWebsite'] if employer else None,
+            location=Location(
+                city=job.get("location", {}).get("city"),
+                state=job.get("location", {}).get("admin1Code"),
+                country=job.get("location", {}).get("countryCode"),
+            ),
+            job_type=job_type,
+            compensation=self._get_compensation(job),
+            date_posted=date_posted,
+            job_url=job_url,
+            job_url_direct=job['recruit'].get('viewJobUrl') if job.get('recruit') else None,
+            emails=extract_emails_from_text(description) if description else None,
+            is_remote=self._is_job_remote(job, description),
+            company_addresses=employer_details['addresses'][0] if employer_details.get('addresses') else None,
+            company_industry=employer_details['industry'].replace('Iv1', '').replace('_', ' ').title() if employer_details.get('industry') else None,
+            company_num_employees=employer_details.get('employeesLocalizedLabel'),
+            company_revenue=employer_details.get('revenueLocalizedLabel'),
+            company_description=employer_details.get('briefDescription'),
+            ceo_name=employer_details.get('ceoName'),
+            ceo_photo_url=employer_details.get('ceoPhotoUrl'),
+            logo_photo_url=employer['images'].get('squareLogoUrl') if employer and employer.get('images') else None,
+            banner_photo_url=employer['images'].get('headerImageUrl') if employer and employer.get('images') else None,
+        )
     @staticmethod
-    def get_job_type(job: dict) -> list[JobType] | None:
+    def _get_job_type(attributes: list) -> list[JobType]:
         """
-        Parses the job to get list of job types
-        :param job:
-        :return:
+        Parses the attributes to get list of job types
+        :param attributes:
+        :return: list of JobType
         """
         job_types: list[JobType] = []
-        for taxonomy in job["taxonomyAttributes"]:
-            if taxonomy["label"] == "job-types":
-                for i in range(len(taxonomy["attributes"])):
-                    label = taxonomy["attributes"][i].get("label")
-                    if label:
-                        job_type_str = label.replace("-", "").replace(" ", "").lower()
-                        job_type = get_enum_from_job_type(job_type_str)
-                        if job_type:
-                            job_types.append(job_type)
+        for attribute in attributes:
+            job_type_str = attribute['label'].replace("-", "").replace(" ", "").lower()
+            job_type = get_enum_from_job_type(job_type_str)
+            if job_type:
+                job_types.append(job_type)
         return job_types
     @staticmethod
-    def get_compensation(job: dict, job_detailed: dict) -> Compensation:
+    def _get_compensation(job: dict) -> Compensation | None:
         """
-        Parses the job to get
+        Parses the job to get compensation
         :param job:
-        :param job_detailed:
         :return: compensation object
         """
-        comp = job_detailed['compensation']['baseSalary']
+        comp = job['compensation']['baseSalary']
         if comp:
-            interval = IndeedScraper.get_correct_interval(comp['unitOfWork'])
+            interval = IndeedScraper._get_compensation_interval(comp['unitOfWork'])
             if interval:
                 return Compensation(
                     interval=interval,
                     min_amount=round(comp['range'].get('min'), 2) if comp['range'].get('min') is not None else None,
                     max_amount=round(comp['range'].get('max'), 2) if comp['range'].get('max') is not None else None,
-                    currency=job_detailed['compensation']['currencyCode']
+                    currency=job['compensation']['currencyCode']
                 )
-        extracted_salary = job.get("extractedSalary")
-        compensation = None
-        if extracted_salary:
-            salary_snippet = job.get("salarySnippet")
-            currency = salary_snippet.get("currency") if salary_snippet else None
-            interval = (extracted_salary.get("type"),)
-            if isinstance(interval, tuple):
-                interval = interval[0]
-
-            interval = interval.upper()
-            if interval in CompensationInterval.__members__:
-                compensation = Compensation(
-                    interval=CompensationInterval[interval],
-                    min_amount=int(extracted_salary.get("min")),
-                    max_amount=int(extracted_salary.get("max")),
-                    currency=currency,
-                )
-        return compensation
-
-    @staticmethod
-    def parse_jobs(soup: BeautifulSoup) -> dict:
-        """
-        Parses the jobs from the soup object
-        :param soup:
-        :return: jobs
-        """
-
-        def find_mosaic_script() -> Tag | None:
-            """
-            Finds jobcards script tag
-            :return: script_tag
-            """
-            script_tags = soup.find_all("script")
-            for tag in script_tags:
-                if (
-                    tag.string
-                    and "mosaic.providerData" in tag.string
-                    and "mosaic-provider-jobcards" in tag.string
-                ):
-                    return tag
-            return None
-
-        script_tag = find_mosaic_script()
-
-        if script_tag:
-            script_str = script_tag.string
-            pattern = r'window.mosaic.providerData\["mosaic-provider-jobcards"\]\s*=\s*({.*?});'
-            p = re.compile(pattern, re.DOTALL)
-            m = p.search(script_str)
-            if m:
-                jobs = json.loads(m.group(1).strip())
-                return jobs
-            else:
-                raise IndeedException("Could not find mosaic provider job cards data")
-        else:
-            raise IndeedException(
-                "Could not find any results for the search"
-            )
     @staticmethod
-    def get_headers():
-        return {
-            'Host': 'www.indeed.com',
-            'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
-            'sec-fetch-site': 'same-origin',
-            'sec-fetch-dest': 'document',
-            'accept-language': 'en-US,en;q=0.9',
-            'sec-fetch-mode': 'navigate',
-            'user-agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Indeed App 192.0',
-            'referer': 'https://www.indeed.com/m/jobs?q=software%20intern&l=Dallas%2C%20TX&from=serpso&rq=1&rsIdx=3',
-        }
-
-    @staticmethod
-    def add_params(scraper_input: ScraperInput, page: int) -> dict[str, str | Any]:
-        # `fromage` is the posting time filter in days
-        fromage = max(scraper_input.hours_old // 24, 1) if scraper_input.hours_old else None
-        params = {
-            "q": scraper_input.search_term,
-            "l": scraper_input.location if scraper_input.location else scraper_input.country.value[0].split(',')[-1],
-            "filter": 0,
-            "start": scraper_input.offset + page * 10,
-            "sort": "date",
-            "fromage": fromage,
-        }
-        if scraper_input.distance:
-            params["radius"] = scraper_input.distance
-
-        sc_values = []
-        if scraper_input.is_remote:
-            sc_values.append("attr(DSQF7)")
-        if scraper_input.job_type:
-            sc_values.append("jt({})".format(scraper_input.job_type.value[0]))
-
-        if sc_values:
-            params["sc"] = "0kf:" + "".join(sc_values) + ";"
-
-        if scraper_input.easy_apply:
-            params['iafilter'] = 1
-        return params
-
-    @staticmethod
-    def is_job_remote(job: dict, job_detailed: dict, description: str) -> bool:
+    def _is_job_remote(job: dict, description: str) -> bool:
+        """
+        Searches the description, location, and attributes to check if job is remote
+        """
         remote_keywords = ['remote', 'work from home', 'wfh']
         is_remote_in_attributes = any(
             any(keyword in attr['label'].lower() for keyword in remote_keywords)
-            for attr in job_detailed['attributes']
+            for attr in job['attributes']
         )
         is_remote_in_description = any(keyword in description.lower() for keyword in remote_keywords)
         is_remote_in_location = any(
-            keyword in job_detailed['location']['formatted']['long'].lower()
+            keyword in job['location']['formatted']['long'].lower()
             for keyword in remote_keywords
         )
-        is_remote_in_taxonomy = any(
-            taxonomy["label"] == "remote" and len(taxonomy["attributes"]) > 0
-            for taxonomy in job.get("taxonomyAttributes", [])
-        )
-        return is_remote_in_attributes or is_remote_in_description or is_remote_in_location or is_remote_in_taxonomy
+        return is_remote_in_attributes or is_remote_in_description or is_remote_in_location
-    def get_job_details(self, job_keys: list[str]) -> dict:
-        """
-        Queries the GraphQL endpoint for detailed job information for the given job keys.
-        """
-        url = "https://apis.indeed.com/graphql"
-        headers = {
+    @staticmethod
+    def _get_compensation_interval(interval: str) -> CompensationInterval:
+        interval_mapping = {
+            "DAY": "DAILY",
+            "YEAR": "YEARLY",
+            "HOUR": "HOURLY",
+            "WEEK": "WEEKLY",
+            "MONTH": "MONTHLY"
+        }
+        mapped_interval = interval_mapping.get(interval.upper(), None)
+        if mapped_interval and mapped_interval in CompensationInterval.__members__:
+            return CompensationInterval[mapped_interval]
+        else:
+            raise ValueError(f"Unsupported interval: {interval}")
+
+    api_headers = {
         'Host': 'apis.indeed.com',
         'content-type': 'application/json',
         'indeed-api-key': '161092c2017b5bbab13edb12461a62d5a833871e7cad6d9d475304573de67ac8',
@@ -356,27 +287,35 @@ class IndeedScraper(Scraper):
        'accept-language': 'en-US,en;q=0.9',
        'user-agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Indeed App 193.1',
        'indeed-app-info': 'appv=193.1; appid=com.indeed.jobsearch; osv=16.6.1; os=ios; dtype=phone',
        'indeed-co': 'US',
    }
job_search_query = """
job_keys_gql = '[' + ', '.join(f'"{key}"' for key in job_keys) + ']'
payload = {
"query": f"""
query GetJobData {{ query GetJobData {{
jobData(input: {{ jobSearch(
jobKeys: {job_keys_gql} what: "{what}"
}}) {{ location: {{ where: "{location}", radius: {radius}, radiusUnit: MILES }}
includeSponsoredResults: NONE
limit: 100
sort: DATE
{cursor}
{filters}
) {{
pageInfo {{
nextCursor
}}
results {{ results {{
trackingKey
job {{ job {{
key key
title title
datePublished
dateOnIndeed
description {{ description {{
html html
}} }}
location {{ location {{
countryName countryName
countryCode countryCode
admin1Code
city city
postalCode postalCode
streetAddress streetAddress
@@ -398,10 +337,30 @@ class IndeedScraper(Scraper):
                            currencyCode
                        }}
                        attributes {{
+                            key
                            label
                        }}
                        employer {{
                            relativeCompanyPageUrl
+                            name
+                            dossier {{
+                                employerDetails {{
+                                    addresses
+                                    industry
+                                    employeesLocalizedLabel
+                                    revenueLocalizedLabel
+                                    briefDescription
+                                    ceoName
+                                    ceoPhotoUrl
+                                }}
+                                images {{
+                                    headerImageUrl
+                                    squareLogoUrl
+                                }}
+                                links {{
+                                    corporateWebsite
+                                }}
+                            }}
                        }}
                        recruit {{
                            viewJobUrl
@@ -413,24 +372,3 @@ class IndeedScraper(Scraper):
                    }}
                }}
            }}
        """
-        }
-        response = requests.post(url, headers=headers, json=payload, proxies=self.proxy)
-        if response.status_code == 200:
-            return response.json()['data']['jobData']['results']
-        else:
-            return {}
-    @staticmethod
-    def get_correct_interval(interval: str) -> CompensationInterval:
-        interval_mapping = {
-            "DAY": "DAILY",
-            "YEAR": "YEARLY",
-            "HOUR": "HOURLY",
-            "WEEK": "WEEKLY",
-            "MONTH": "MONTHLY"
-        }
-        mapped_interval = interval_mapping.get(interval.upper(), None)
-        if mapped_interval and mapped_interval in CompensationInterval.__members__:
-            return CompensationInterval[mapped_interval]
-        else:
-            raise ValueError(f"Unsupported interval: {interval}")
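
Note: a minimal sketch of how a template like `job_search_query` can be filled and posted, assuming the `api_headers` dict shown above; the actual call site is outside this excerpt, so the values here are illustrative:

```
import requests

# str.format fills {what}, {location}, {radius}, {cursor}, {filters};
# the doubled braces {{ }} in the template render as literal GraphQL braces
query = job_search_query.format(
    what="software intern",
    location="Dallas, TX",
    radius=50,
    cursor="",
    filters="",
)
response = requests.post(
    "https://apis.indeed.com/graphql",
    headers=api_headers,
    json={"query": query},
)
results = response.json()["data"]["jobSearch"]["results"]
```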

View File

@@ -9,8 +9,6 @@ import random
from typing import Optional
from datetime import datetime
-import requests
-from requests.exceptions import ProxyError
from threading import Lock
from bs4.element import Tag
from bs4 import BeautifulSoup
@@ -25,27 +23,31 @@ from ...jobs import (
    JobResponse,
    JobType,
    Country,
-    Compensation
+    Compensation,
+    DescriptionFormat
)
from ..utils import (
-    count_urgent_words,
+    logger,
    extract_emails_from_text,
    get_enum_from_job_type,
-    currency_parser
+    currency_parser,
+    markdown_converter
)
class LinkedInScraper(Scraper):
-    DELAY = 3
+    base_url = "https://www.linkedin.com"
+    delay = 3
+    band_delay = 4
+    jobs_per_page = 25
    def __init__(self, proxy: Optional[str] = None):
        """
        Initializes LinkedInScraper with the LinkedIn job search url
        """
-        site = Site(Site.LINKEDIN)
+        super().__init__(Site(Site.LINKEDIN), proxy=proxy)
+        self.scraper_input = None
        self.country = "worldwide"
-        self.url = "https://www.linkedin.com"
-        super().__init__(site, proxy=proxy)
def scrape(self, scraper_input: ScraperInput) -> JobResponse: def scrape(self, scraper_input: ScraperInput) -> JobResponse:
""" """
@@ -53,67 +55,58 @@ class LinkedInScraper(Scraper):
        :param scraper_input:
        :return: job_response
        """
+        self.scraper_input = scraper_input
        job_list: list[JobPost] = []
        seen_urls = set()
        url_lock = Lock()
        page = scraper_input.offset // 25 + 25 if scraper_input.offset else 0
        seconds_old = (
            scraper_input.hours_old * 3600
            if scraper_input.hours_old
            else None
        )
-        def job_type_code(job_type_enum):
-            mapping = {
-                JobType.FULL_TIME: "F",
-                JobType.PART_TIME: "P",
-                JobType.INTERNSHIP: "I",
-                JobType.CONTRACT: "C",
-                JobType.TEMPORARY: "T",
-            }
-            return mapping.get(job_type_enum, "")
        continue_search = lambda: len(job_list) < scraper_input.results_wanted and page < 1000
        while continue_search():
+            logger.info(f'LinkedIn search page: {page // 25 + 1}')
            session = create_session(is_tls=False, has_retry=True, delay=5)
            params = {
                "keywords": scraper_input.search_term,
                "location": scraper_input.location,
                "distance": scraper_input.distance,
                "f_WT": 2 if scraper_input.is_remote else None,
-                "f_JT": job_type_code(scraper_input.job_type)
+                "f_JT": self.job_type_code(scraper_input.job_type)
                if scraper_input.job_type
                else None,
                "pageNum": 0,
                "start": page + scraper_input.offset,
                "f_AL": "true" if scraper_input.easy_apply else None,
                "f_C": ','.join(map(str, scraper_input.linkedin_company_ids)) if scraper_input.linkedin_company_ids else None,
+                "f_TPR": f"r{seconds_old}",
            }
-            if seconds_old is not None:
-                params["f_TPR"] = f"r{seconds_old}"
            params = {k: v for k, v in params.items() if v is not None}
            try:
                response = session.get(
-                    f"{self.url}/jobs-guest/jobs/api/seeMoreJobPostings/search?",
+                    f"{self.base_url}/jobs-guest/jobs/api/seeMoreJobPostings/search?",
                    params=params,
                    allow_redirects=True,
                    proxies=self.proxy,
-                    headers=self.headers(),
+                    headers=self.headers,
                    timeout=10,
                )
-                response.raise_for_status()
-            except requests.HTTPError as e:
-                raise LinkedInException(
-                    f"bad response status code: {e.response.status_code}"
-                )
-            except ProxyError as e:
-                raise LinkedInException("bad proxy")
+                if response.status_code not in range(200, 400):
+                    if response.status_code == 429:
+                        logger.error(f'429 Response - Blocked by LinkedIn for too many requests')
+                    else:
+                        logger.error(f'LinkedIn response status code {response.status_code}')
+                    return JobResponse(jobs=job_list)
            except Exception as e:
-                raise LinkedInException(str(e))
+                if "Proxy responded with" in str(e):
+                    logger.error(f'LinkedIn: Bad proxy')
+                else:
+                    logger.error(f'LinkedIn: {str(e)}')
+                return JobResponse(jobs=job_list)
            soup = BeautifulSoup(response.text, "html.parser")
            job_cards = soup.find_all("div", class_="base-search-card")
@@ -126,29 +119,29 @@ class LinkedInScraper(Scraper):
            if href_tag and "href" in href_tag.attrs:
                href = href_tag.attrs["href"].split("?")[0]
                job_id = href.split("-")[-1]
-                job_url = f"{self.url}/jobs/view/{job_id}"
+                job_url = f"{self.base_url}/jobs/view/{job_id}"
                with url_lock:
                    if job_url in seen_urls:
                        continue
                    seen_urls.add(job_url)
-                # Call process_job directly without threading
                try:
-                    job_post = self.process_job(job_card, job_url, scraper_input.full_description)
+                    job_post = self._process_job(job_card, job_url, scraper_input.linkedin_fetch_description)
                    if job_post:
                        job_list.append(job_post)
+                    if not continue_search():
+                        break
                except Exception as e:
-                    raise LinkedInException("Exception occurred while processing jobs")
+                    raise LinkedInException(str(e))
            if continue_search():
-                time.sleep(random.uniform(LinkedInScraper.DELAY, LinkedInScraper.DELAY + 2))
-                page += 25
+                time.sleep(random.uniform(self.delay, self.delay + self.band_delay))
+                page += self.jobs_per_page
        job_list = job_list[: scraper_input.results_wanted]
        return JobResponse(jobs=job_list)
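
Note: the paging arithmetic above advances `page` by `jobs_per_page` (25) and adds the user's `offset` to form the guest API's `start` parameter; a quick illustrative check:

```
# illustrative: start values requested for results_wanted=60, offset=0
offset, results_wanted, jobs_per_page = 0, 60, 25
starts = [offset + page for page in range(0, results_wanted, jobs_per_page)]
print(starts)  # [0, 25, 50]
```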
-    def process_job(self, job_card: Tag, job_url: str, full_descr: bool) -> Optional[JobPost]:
+    def _process_job(self, job_card: Tag, job_url: str, full_descr: bool) -> Optional[JobPost]:
        salary_tag = job_card.find('span', class_='job-search-card__salary-info')
        compensation = None
@@ -178,7 +171,7 @@ class LinkedInScraper(Scraper):
        company = company_a_tag.get_text(strip=True) if company_a_tag else "N/A"
        metadata_card = job_card.find("div", class_="base-search-card__metadata")
-        location = self.get_location(metadata_card)
+        location = self._get_location(metadata_card)
        datetime_tag = (
            metadata_card.find("time", class_="job-search-card__listdate")
@@ -190,12 +183,11 @@ class LinkedInScraper(Scraper):
            datetime_str = datetime_tag["datetime"]
            try:
                date_posted = datetime.strptime(datetime_str, "%Y-%m-%d")
-            except Exception as e:
+            except:
                date_posted = None
        benefits_tag = job_card.find("span", class_="result-benefits__text")
-        benefits = " ".join(benefits_tag.get_text().split()) if benefits_tag else None
        if full_descr:
-            description, job_type = self.get_job_description(job_url)
+            description, job_type = self._get_job_description(job_url)
        return JobPost(
            title=title,
@@ -205,14 +197,12 @@ class LinkedInScraper(Scraper):
            date_posted=date_posted,
            job_url=job_url,
            compensation=compensation,
-            benefits=benefits,
            job_type=job_type,
            description=description,
            emails=extract_emails_from_text(description) if description else None,
-            num_urgent_words=count_urgent_words(description) if description else None,
        )
-    def get_job_description(
+    def _get_job_description(
        self, job_page_url: str
    ) -> tuple[None, None] | tuple[str | None, tuple[str | None, JobType | None]]:
        """
@@ -222,11 +212,9 @@ class LinkedInScraper(Scraper):
""" """
try: try:
session = create_session(is_tls=False, has_retry=True) session = create_session(is_tls=False, has_retry=True)
response = session.get(job_page_url, timeout=5, proxies=self.proxy) response = session.get(job_page_url, headers=self.headers, timeout=5, proxies=self.proxy)
response.raise_for_status() response.raise_for_status()
except requests.HTTPError as e: except:
return None, None
except Exception as e:
return None, None return None, None
if response.url == "https://www.linkedin.com/signup": if response.url == "https://www.linkedin.com/signup":
return None, None return None, None
@@ -241,40 +229,13 @@ class LinkedInScraper(Scraper):
            for attr in list(tag.attrs):
                del tag[attr]
            return tag
        div_content = remove_attributes(div_content)
        description = div_content.prettify(formatter="html")
+        if self.scraper_input.description_format == DescriptionFormat.MARKDOWN:
+            description = markdown_converter(description)
+        return description, self._parse_job_type(soup)
-        def get_job_type(
-            soup_job_type: BeautifulSoup,
-        ) -> list[JobType] | None:
-            """
-            Gets the job type from job page
-            :param soup_job_type:
-            :return: JobType
-            """
-            h3_tag = soup_job_type.find(
-                "h3",
-                class_="description__job-criteria-subheader",
-                string=lambda text: "Employment type" in text,
-            )
-            employment_type = None
-            if h3_tag:
-                employment_type_span = h3_tag.find_next_sibling(
-                    "span",
-                    class_="description__job-criteria-text description__job-criteria-text--criteria",
-                )
-                if employment_type_span:
-                    employment_type = employment_type_span.get_text(strip=True)
-                    employment_type = employment_type.lower()
-                    employment_type = employment_type.replace("-", "")
-            return [get_enum_from_job_type(employment_type)] if employment_type else []
-        return description, get_job_type(soup)
-    def get_location(self, metadata_card: Optional[Tag]) -> Location:
+    def _get_location(self, metadata_card: Optional[Tag]) -> Location:
        """
        Extracts the location data from the job metadata card.
        :param metadata_card
@@ -299,25 +260,50 @@ class LinkedInScraper(Scraper):
        location = Location(
            city=city,
            state=state,
-            country=Country.from_string(country),
+            country=Country.from_string(country)
        )
        return location
    @staticmethod
-    def headers() -> dict:
-        return {
+    def _parse_job_type(soup_job_type: BeautifulSoup) -> list[JobType] | None:
+        """
+        Gets the job type from job page
+        :param soup_job_type:
+        :return: JobType
+        """
+        h3_tag = soup_job_type.find(
+            "h3",
+            class_="description__job-criteria-subheader",
+            string=lambda text: "Employment type" in text,
+        )
+        employment_type = None
+        if h3_tag:
+            employment_type_span = h3_tag.find_next_sibling(
+                "span",
+                class_="description__job-criteria-text description__job-criteria-text--criteria",
+            )
+            if employment_type_span:
+                employment_type = employment_type_span.get_text(strip=True)
+                employment_type = employment_type.lower()
+                employment_type = employment_type.replace("-", "")
+        return [get_enum_from_job_type(employment_type)] if employment_type else []
+    @staticmethod
+    def job_type_code(job_type_enum: JobType) -> str:
+        return {
+            JobType.FULL_TIME: "F",
+            JobType.PART_TIME: "P",
+            JobType.INTERNSHIP: "I",
+            JobType.CONTRACT: "C",
+            JobType.TEMPORARY: "T",
+        }.get(job_type_enum, "")
+    headers = {
        "authority": "www.linkedin.com",
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
        "accept-language": "en-US,en;q=0.9",
        "cache-control": "max-age=0",
-        "sec-ch-ua": '"Not_A Brand";v="8", "Chromium";v="120", "Google Chrome";v="120"',
-        # 'sec-ch-ua-mobile': '?0',
-        # 'sec-ch-ua-platform': '"macOS"',
-        # 'sec-fetch-dest': 'document',
-        # 'sec-fetch-mode': 'navigate',
-        # 'sec-fetch-site': 'none',
-        # 'sec-fetch-user': '?1',
        "upgrade-insecure-requests": "1",
        "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    }
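
Note: a minimal usage sketch of the refactored class; hypothetical in that the package's `scrape_jobs()` entry point normally builds the `ScraperInput`, with field names following the parameter list documented in the README:

```
# hypothetical direct use of LinkedInScraper
scraper = LinkedInScraper(proxy="http://user:pass@host:port")
response = scraper.scrape(
    ScraperInput(
        site_type=[Site.LINKEDIN],
        search_term="python developer",
        location="Dallas, TX",
        results_wanted=25,
        linkedin_fetch_description=True,  # slower: fetches each job page
    )
)
print(len(response.jobs))
```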

View File

@@ -1,35 +1,29 @@
-import re
import logging
-import numpy as np
-import tls_client
+import re
+import numpy as np
import requests
+import tls_client
+from markdownify import markdownify as md
from requests.adapters import HTTPAdapter, Retry
from ..jobs import JobType
logger = logging.getLogger("JobSpy")
+logger.propagate = False
if not logger.handlers:
-    logger.setLevel(logging.ERROR)
+    logger.setLevel(logging.INFO)
    console_handler = logging.StreamHandler()
-    console_handler.setLevel(logging.ERROR)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    console_handler.setFormatter(formatter)
    logger.addHandler(console_handler)
-def count_urgent_words(description: str) -> int:
-    """
-    Count the number of urgent words or phrases in a job description.
-    """
-    urgent_patterns = re.compile(
-        r"\burgen(t|cy)|\bimmediate(ly)?\b|start asap|\bhiring (now|immediate(ly)?)\b",
-        re.IGNORECASE,
-    )
-    matches = re.findall(urgent_patterns, description)
-    count = len(matches)
-    return count
+def markdown_converter(description_html: str):
+    if description_html is None:
+        return None
+    markdown = md(description_html)
+    return markdown.strip()
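
Note: `markdown_converter` above is a thin wrapper over `markdownify`; an illustrative round-trip:

```
# illustrative: converting a scraped HTML description to markdown
html = "<p>We are hiring a <b>Python</b> developer.</p>"
print(markdown_converter(html))  # We are hiring a **Python** developer.
```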
def extract_emails_from_text(text: str) -> list[str] | None:
@@ -42,14 +36,10 @@ def extract_emails_from_text(text: str) -> list[str] | None:
def create_session(proxy: dict | None = None, is_tls: bool = True, has_retry: bool = False, delay: int = 1) -> requests.Session:
    """
    Creates a requests session with optional tls, proxy, and retry settings.
    :return: A session object
    """
    if is_tls:
-        session = tls_client.Session(
-            client_identifier="chrome112",
-            random_tls_extension_order=True,
-        )
+        session = tls_client.Session(random_tls_extension_order=True)
        session.proxies = proxy
    else:
        session = requests.Session()
@@ -66,7 +56,6 @@ def create_session(proxy: dict | None = None, is_tls: bool = True, has_retry: bool = False, delay: int = 1) -> requests.Session:
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session
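
Note: an illustrative use of `create_session` in its plain-`requests` flavor with retries, matching how the LinkedIn scraper calls it:

```
# illustrative: retrying requests.Session (is_tls=False)
session = create_session(is_tls=False, has_retry=True, delay=5)
resp = session.get("https://www.linkedin.com", timeout=10)
```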

View File

@@ -6,33 +6,75 @@ This module contains routines to scrape ZipRecruiter.
""" """
import math import math
import time import time
from datetime import datetime, timezone from datetime import datetime
from typing import Optional, Tuple, Any from typing import Optional, Tuple, Any
from concurrent.futures import ThreadPoolExecutor from concurrent.futures import ThreadPoolExecutor
from .. import Scraper, ScraperInput, Site from .. import Scraper, ScraperInput, Site
from ..exceptions import ZipRecruiterException from ..utils import (
from ...jobs import JobPost, Compensation, Location, JobResponse, JobType, Country logger,
from ..utils import count_urgent_words, extract_emails_from_text, create_session extract_emails_from_text,
create_session,
markdown_converter
)
from ...jobs import (
JobPost,
Compensation,
Location,
JobResponse,
JobType,
Country,
DescriptionFormat
)
class ZipRecruiterScraper(Scraper): class ZipRecruiterScraper(Scraper):
base_url = "https://www.ziprecruiter.com"
api_url = "https://api.ziprecruiter.com"
def __init__(self, proxy: Optional[str] = None): def __init__(self, proxy: Optional[str] = None):
""" """
Initializes ZipRecruiterScraper with the ZipRecruiter job search url Initializes ZipRecruiterScraper with the ZipRecruiter job search url
""" """
site = Site(Site.ZIP_RECRUITER) self.scraper_input = None
self.url = "https://www.ziprecruiter.com"
self.session = create_session(proxy) self.session = create_session(proxy)
self.get_cookies() self._get_cookies()
super().__init__(site, proxy=proxy) super().__init__(Site.ZIP_RECRUITER, proxy=proxy)
self.delay = 5
self.jobs_per_page = 20 self.jobs_per_page = 20
self.seen_urls = set() self.seen_urls = set()
self.delay = 5
-    def find_jobs_in_page(
+    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
+        """
+        Scrapes ZipRecruiter for jobs with scraper_input criteria.
+        :param scraper_input: Information about job search criteria.
+        :return: JobResponse containing a list of jobs.
+        """
+        self.scraper_input = scraper_input
+        job_list: list[JobPost] = []
+        continue_token = None
+        max_pages = math.ceil(scraper_input.results_wanted / self.jobs_per_page)
+        for page in range(1, max_pages + 1):
+            if len(job_list) >= scraper_input.results_wanted:
+                break
+            if page > 1:
+                time.sleep(self.delay)
+            logger.info(f'ZipRecruiter search page: {page}')
+            jobs_on_page, continue_token = self._find_jobs_in_page(
+                scraper_input, continue_token
+            )
+            if jobs_on_page:
+                job_list.extend(jobs_on_page)
+            else:
+                break
+            if not continue_token:
+                break
+        return JobResponse(jobs=job_list[: scraper_input.results_wanted])
+    def _find_jobs_in_page(
        self, scraper_input: ScraperInput, continue_token: str | None = None
    ) -> Tuple[list[JobPost], Optional[str]]:
        """
@@ -41,73 +83,51 @@ class ZipRecruiterScraper(Scraper):
        :param continue_token:
        :return: jobs found on page
        """
+        jobs_list = []
-        params = self.add_params(scraper_input)
+        params = self._add_params(scraper_input)
        if continue_token:
            params["continue_from"] = continue_token
        try:
-            response = self.session.get(
-                f"https://api.ziprecruiter.com/jobs-app/jobs",
-                headers=self.headers(),
+            res = self.session.get(
+                f"{self.api_url}/jobs-app/jobs",
+                headers=self.headers,
                params=params
            )
-            if response.status_code != 200:
-                raise ZipRecruiterException(
-                    f"bad response status code: {response.status_code}"
-                )
+            if res.status_code not in range(200, 400):
+                if res.status_code == 429:
+                    logger.error(f'429 Response - Blocked by ZipRecruiter for too many requests')
+                else:
+                    logger.error(f'ZipRecruiter response status code {res.status_code}')
+                return jobs_list, ""
        except Exception as e:
-            if "Proxy responded with non 200 code" in str(e):
-                raise ZipRecruiterException("bad proxy")
-            raise ZipRecruiterException(str(e))
+            if "Proxy responded with" in str(e):
+                logger.error(f'Indeed: Bad proxy')
+            else:
+                logger.error(f'Indeed: {str(e)}')
+            return jobs_list, ""
-        response_data = response.json()
-        jobs_list = response_data.get("jobs", [])
-        next_continue_token = response_data.get("continue", None)
+        res_data = res.json()
+        jobs_list = res_data.get("jobs", [])
+        next_continue_token = res_data.get("continue", None)
        with ThreadPoolExecutor(max_workers=self.jobs_per_page) as executor:
-            job_results = [executor.submit(self.process_job, job) for job in jobs_list]
+            job_results = [executor.submit(self._process_job, job) for job in jobs_list]
            job_list = list(filter(None, (result.result() for result in job_results)))
        return job_list, next_continue_token
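
Note: the cursor-based paging used here reduces to a small sketch (hypothetical helper, not part of the diff): each call returns a batch plus a `continue` token, and paging stops when the batch or the token is empty.

```
# illustrative cursor loop over a paged API shaped like ZipRecruiter's
def fetch_all(fetch_page, results_wanted: int):
    jobs, token = [], None
    while len(jobs) < results_wanted:
        batch, token = fetch_page(token)  # mirrors _find_jobs_in_page
        if not batch:
            break
        jobs.extend(batch)
        if not token:
            break
    return jobs[:results_wanted]
```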
-    def scrape(self, scraper_input: ScraperInput) -> JobResponse:
+    def _process_job(self, job: dict) -> JobPost | None:
        """
-        Scrapes ZipRecruiter for jobs with scraper_input criteria.
-        :param scraper_input: Information about job search criteria.
-        :return: JobResponse containing a list of jobs.
+        Processes an individual job dict from the response
        """
-        job_list: list[JobPost] = []
-        continue_token = None
-        max_pages = math.ceil(scraper_input.results_wanted / self.jobs_per_page)
-        for page in range(1, max_pages + 1):
-            if len(job_list) >= scraper_input.results_wanted:
-                break
-            if page > 1:
-                time.sleep(self.delay)
-            jobs_on_page, continue_token = self.find_jobs_in_page(
-                scraper_input, continue_token
-            )
-            if jobs_on_page:
-                job_list.extend(jobs_on_page)
-            if not continue_token:
-                break
-        return JobResponse(jobs=job_list[: scraper_input.results_wanted])
-    def process_job(self, job: dict) -> JobPost | None:
-        """Processes an individual job dict from the response"""
        title = job.get("name")
-        job_url = f"https://www.ziprecruiter.com/jobs//j?lvk={job['listing_key']}"
+        job_url = f"{self.base_url}/jobs//j?lvk={job['listing_key']}"
        if job_url in self.seen_urls:
            return
        self.seen_urls.add(job_url)
        description = job.get("job_description", "").strip()
+        description = markdown_converter(description) if self.scraper_input.description_format == DescriptionFormat.MARKDOWN else description
        company = job.get("hiring_company", {}).get("name")
        country_value = "usa" if job.get("job_country") == "US" else "canada"
        country_enum = Country.from_string(country_value)
@@ -115,11 +135,10 @@ class ZipRecruiterScraper(Scraper):
        location = Location(
            city=job.get("job_city"), state=job.get("job_state"), country=country_enum
        )
-        job_type = ZipRecruiterScraper.get_job_type_enum(
+        job_type = self._get_job_type_enum(
            job.get("employment_type", "").replace("_", "").lower()
        )
        date_posted = datetime.fromisoformat(job['posted_time'].rstrip("Z")).date()
        return JobPost(
            title=title,
            company_name=company,
@@ -141,23 +160,21 @@ class ZipRecruiterScraper(Scraper):
            job_url=job_url,
            description=description,
            emails=extract_emails_from_text(description) if description else None,
-            num_urgent_words=count_urgent_words(description) if description else None,
        )
-    def get_cookies(self):
-        url="https://api.ziprecruiter.com/jobs-app/event"
+    def _get_cookies(self):
        data="event_type=session&logged_in=false&number_of_retry=1&property=model%3AiPhone&property=os%3AiOS&property=locale%3Aen_us&property=app_build_number%3A4734&property=app_version%3A91.0&property=manufacturer%3AApple&property=timestamp%3A2024-01-12T12%3A04%3A42-06%3A00&property=screen_height%3A852&property=os_version%3A16.6.1&property=source%3Ainstall&property=screen_width%3A393&property=device_model%3AiPhone%2014%20Pro&property=brand%3AApple"
-        self.session.post(url, data=data, headers=ZipRecruiterScraper.headers())
+        self.session.post(f"{self.api_url}/jobs-app/event", data=data, headers=self.headers)
    @staticmethod
-    def get_job_type_enum(job_type_str: str) -> list[JobType] | None:
+    def _get_job_type_enum(job_type_str: str) -> list[JobType] | None:
        for job_type in JobType:
            if job_type_str in job_type.value:
                return [job_type]
        return None
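
Note: `_get_job_type_enum` matches the normalized string against each `JobType`'s value aliases; an illustrative check, assuming "fulltime" is among `JobType.FULL_TIME`'s aliases (it is the spelling the README documents for job_type):

```
# illustrative: ZipRecruiter sends "full_time"; normalizing yields "fulltime"
normalized = "full_time".replace("_", "").lower()
print(ZipRecruiterScraper._get_job_type_enum(normalized))  # [JobType.FULL_TIME]
```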
    @staticmethod
-    def add_params(scraper_input) -> dict[str, str | Any]:
+    def _add_params(scraper_input) -> dict[str, str | Any]:
        params = {
            "search": scraper_input.search_term,
            "location": scraper_input.location,
@@ -177,18 +194,9 @@ class ZipRecruiterScraper(Scraper):
params["remote"] = 1 params["remote"] = 1
if scraper_input.distance: if scraper_input.distance:
params["radius"] = scraper_input.distance params["radius"] = scraper_input.distance
return {k: v for k, v in params.items() if v is not None}
params = {k: v for k, v in params.items() if v is not None} headers = {
return params
@staticmethod
def headers() -> dict:
"""
Returns headers needed for requests
:return: dict - Dictionary containing headers
"""
return {
"Host": "api.ziprecruiter.com", "Host": "api.ziprecruiter.com",
"accept": "*/*", "accept": "*/*",
"x-zr-zva-override": "100000000;vid:ZT1huzm_EQlDTVEc", "x-zr-zva-override": "100000000;vid:ZT1huzm_EQlDTVEc",