Compare commits

...

9 Commits

Author SHA1 Message Date
Cullen
78c1ec8e9f [fix] add compensation 2023-10-28 16:13:10 -05:00
Cullen
a2dd93aca1 [enh] use ziprecruiter api 2023-10-28 15:50:28 -05:00
Cullen Watson
ff1eb0f7b0 [docs] update readme 2023-10-18 14:32:21 -05:00
Cullen Watson
f2cc74b7f2 Fix Indeed exceptions on parsing description 2023-10-18 14:25:53 -05:00
Cullen Watson
5e71866630 [docs] link change 2023-10-18 11:18:03 -05:00
Zachary Hampton
4e67c6e5a3 Update README.md 2023-10-17 20:22:56 -07:00
Cullen Watson
caf655525a docs: update readme 2023-10-10 11:54:14 -05:00
Cullen Watson
90fa4a4c4f feat: utils.py 2023-10-10 11:29:29 -05:00
Cullen Watson
e5353e604d Multiple job types for Indeed, urgent keywords column (#56)
* enh(indeed): mult job types

* feat(jobs):  urgent kws

* fix(indeed): use new session obj per request

* fix: emails as comma separated in output

* fix: put num urgent words in output

* chore: readme
2023-10-10 11:23:04 -05:00
15 changed files with 1248 additions and 1305 deletions

View File

@@ -4,10 +4,10 @@
**Not technical?** Try out the web scraping tool on our site at [usejobspy.com](https://usejobspy.com).
*Looking to build a data-focused software product?* **[Book a call](https://calendly.com/zachary-products/15min)** *to
*Looking to build a data-focused software product?* **[Book a call](https://calendly.com/bunsly/15min)** *to
work with us.*
\
Check out another project we wrote: ***[HomeHarvest](https://github.com/ZacharyHampton/HomeHarvest)** a Python package
Check out another project we wrote: ***[HomeHarvest](https://github.com/Bunsly/HomeHarvest)** a Python package
for real estate scraping*
## Features
@@ -24,7 +24,7 @@ Updated for release v1.1.3
### Installation
```
pip install --upgrade python-jobspy
pip install python-jobspy
```
_Python version >= [3.10](https://www.python.org/downloads/release/python-3100/) required_
@@ -33,39 +33,17 @@ _Python version >= [3.10](https://www.python.org/downloads/release/python-3100/)
```python
from jobspy import scrape_jobs
import pandas as pd
jobs: pd.DataFrame = scrape_jobs(
jobs = scrape_jobs(
site_name=["indeed", "linkedin", "zip_recruiter"],
search_term="software engineer",
location="Dallas, TX",
results_wanted=10,
country_indeed='USA' # only needed for indeed
# use if you want to use a proxy
# proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
# offset=25 # use if you want to start at a specific offset
)
# formatting for pandas
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', 50) # set to 0 to see full job url / desc
# 1 output to console
print(jobs)
# 2 display in Jupyter Notebook (1. pip install jupyter 2. jupyter notebook)
# display(jobs)
# 3 output to .csv
# jobs.to_csv('jobs.csv', index=False)
# 4 output to .xlsx
# jobs.to_xlsx('jobs.xlsx', index=False)
print(f"Found {len(jobs)} jobs")
print(jobs.head())
jobs.to_csv("jobs.csv", index=False) # / to_xlsx
```
### Output
@@ -110,13 +88,16 @@ JobPost
│ ├── city (str)
│ ├── state (str)
├── description (str)
├── job_type (enum): fulltime, parttime, internship, contract
├── job_type (str): fulltime, parttime, internship, contract
├── compensation (object)
│ ├── interval (enum): yearly, monthly, weekly, daily, hourly
│ ├── interval (str): yearly, monthly, weekly, daily, hourly
│ ├── min_amount (int)
│ ├── max_amount (int)
│ └── currency (enum)
├── date_posted (date)
├── emails (str)
├── num_urgent_words (int)
└── is_remote (bool)
```
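A quick sketch of working with this flattened output, assuming the DataFrame columns produced by `scrape_jobs` (they mirror the `desired_order` list in the `scrape_jobs` diff further down): with `job_type` and `interval` now plain strings, ordinary pandas filtering applies.

```python
import pandas as pd
from jobspy import scrape_jobs

jobs: pd.DataFrame = scrape_jobs(
    site_name=["indeed"],
    search_term="software engineer",
    location="Dallas, TX",
    results_wanted=5,
    country_indeed="USA",
)

# is_remote and num_urgent_words are new columns per the schema above
remote = jobs[jobs["is_remote"] == True]  # column may hold None, so compare explicitly
urgent = jobs.sort_values("num_urgent_words", ascending=False)
print(urgent[["title", "company", "job_type", "num_urgent_words"]].head())
```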
### Exceptions
@@ -169,13 +150,12 @@ You can specify the following countries when searching on Indeed (use the exact
**Q: Encountering issues with your queries?**
**A:** Try reducing the number of `results_wanted` and/or broadening the filters. If problems
persist, [submit an issue](https://github.com/cullenwatson/JobSpy/issues).
persist, [submit an issue](https://github.com/Bunsly/JobSpy/issues).
---
**Q: Received a response code 429?**
**A:** This indicates that you have been blocked by the job board site for sending too many requests. Currently, **LinkedIn** is particularly aggressive with blocking. We recommend:
**A:** This indicates that you have been blocked by the job board site for sending too many requests. All of the job board sites are aggressive with blocking. We recommend:
- Waiting a few seconds between requests.
- Trying a VPN or proxy to change your IP address.
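A minimal sketch of that advice, wrapping `scrape_jobs` in a pause-and-retry loop; `scrape_with_retry` is a hypothetical helper, not part of the package:

```python
import time
from jobspy import scrape_jobs

def scrape_with_retry(max_attempts: int = 3, delay: float = 5, **kwargs):
    """Retry scrape_jobs with a pause between attempts to ease 429 blocking."""
    for attempt in range(1, max_attempts + 1):
        try:
            return scrape_jobs(**kwargs)
        except Exception:  # the scrapers raise site-specific exceptions
            if attempt == max_attempts:
                raise
            time.sleep(delay)  # wait before the next attempt

jobs = scrape_with_retry(
    site_name=["zip_recruiter"],
    search_term="software engineer",
    location="Dallas, TX",
    results_wanted=10,
    # proxy="http://user:pass@host:port",  # hypothetical proxy, per the tip above
)
```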

View File

@@ -6,23 +6,23 @@ jobs: pd.DataFrame = scrape_jobs(
search_term="software engineer",
location="Dallas, TX",
results_wanted=50, # be wary: the higher it is, the more likely you'll get blocked (a rotating proxy should work, though)
country_indeed='USA',
country_indeed="USA",
offset=25 # start jobs from an offset (use if search failed and want to continue)
# proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
)
# formatting for pandas
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', 50) # set to 0 to see full job url / desc
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
pd.set_option("display.width", None)
pd.set_option("display.max_colwidth", 50) # set to 0 to see full job url / desc
# 1: output to console
print(jobs)
# 2: output to .csv
jobs.to_csv('./jobs.csv', index=False)
print('outputted to jobs.csv')
jobs.to_csv("./jobs.csv", index=False)
print("outputted to jobs.csv")
# 3: output to .xlsx
# jobs.to_xlsx('jobs.xlsx', index=False)

poetry.lock (generated): 1793 lines changed

File diff suppressed because it is too large.

View File

@@ -1,9 +1,9 @@
[tool.poetry]
name = "python-jobspy"
version = "1.1.12"
version = "1.1.16"
description = "Job scraper for LinkedIn, Indeed & ZipRecruiter"
authors = ["Zachary Hampton <zachary@zacharysproducts.com>", "Cullen Watson <cullen@cullen.ai>"]
homepage = "https://github.com/cullenwatson/JobSpy"
authors = ["Zachary Hampton <zachary@bunsly.com>", "Cullen Watson <cullen@bunsly.com>"]
homepage = "https://github.com/Bunsly/JobSpy"
readme = "README.md"
packages = [
@@ -16,6 +16,7 @@ requests = "^2.31.0"
tls-client = "^0.2.1"
beautifulsoup4 = "^4.12.2"
pandas = "^2.1.0"
NUMPY = "1.24.2"
pydantic = "^2.3.0"

View File

@@ -1,7 +1,7 @@
import pandas as pd
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor
from typing import List, Tuple, Optional
from typing import Tuple, Optional
from .jobs import JobType, Location
from .scrapers.indeed import IndeedScraper
@@ -26,18 +26,18 @@ def _map_str_to_site(site_name: str) -> Site:
def scrape_jobs(
site_name: str | List[str] | Site | List[Site],
search_term: str,
location: str = "",
distance: int = None,
is_remote: bool = False,
job_type: str = None,
easy_apply: bool = False, # linkedin
results_wanted: int = 15,
country_indeed: str = "usa",
hyperlinks: bool = False,
proxy: Optional[str] = None,
offset: Optional[int] = 0
site_name: str | list[str] | Site | list[Site],
search_term: str,
location: str = "",
distance: int = None,
is_remote: bool = False,
job_type: str = None,
easy_apply: bool = False, # linkedin
results_wanted: int = 15,
country_indeed: str = "usa",
hyperlinks: bool = False,
proxy: Optional[str] = None,
offset: Optional[int] = 0,
) -> pd.DataFrame:
"""
Simultaneously scrapes job data from multiple job sites.
@@ -72,7 +72,7 @@ def scrape_jobs(
job_type=job_type,
easy_apply=easy_apply,
results_wanted=results_wanted,
offset=offset
offset=offset,
)
def scrape_site(site: Site) -> Tuple[str, JobResponse]:
@@ -84,13 +84,12 @@ def scrape_jobs(
except (LinkedInException, IndeedException, ZipRecruiterException) as lie:
raise lie
except Exception as e:
# unhandled exceptions
if site == Site.LINKEDIN:
raise LinkedInException()
raise LinkedInException(str(e))
if site == Site.INDEED:
raise IndeedException()
raise IndeedException(str(e))
if site == Site.ZIP_RECRUITER:
raise ZipRecruiterException()
raise ZipRecruiterException(str(e))
else:
raise e
return site.value, scraped_data
@@ -98,8 +97,8 @@ def scrape_jobs(
site_to_jobs_dict = {}
def worker(site):
site_value, scraped_data = scrape_site(site)
return site_value, scraped_data
site_val, scraped_info = scrape_site(site)
return site_val, scraped_info
with ThreadPoolExecutor() as executor:
future_to_site = {
@@ -110,7 +109,7 @@ def scrape_jobs(
site_value, scraped_data = future.result()
site_to_jobs_dict[site_value] = scraped_data
jobs_dfs: List[pd.DataFrame] = []
jobs_dfs: list[pd.DataFrame] = []
for site, job_response in site_to_jobs_dict.items():
for job in job_response.jobs:
@@ -120,12 +119,14 @@ def scrape_jobs(
] = f'<a href="{job_data["job_url"]}">{job_data["job_url"]}</a>'
job_data["site"] = site
job_data["company"] = job_data["company_name"]
if job_data["job_type"]:
# Take the first value from the job type tuple
job_data["job_type"] = job_data["job_type"].value[0]
else:
job_data["job_type"] = None
job_data["job_type"] = (
", ".join(job_type.value[0] for job_type in job_data["job_type"])
if job_data["job_type"]
else None
)
job_data["emails"] = (
", ".join(job_data["emails"]) if job_data["emails"] else None
)
job_data["location"] = Location(**job_data["location"]).display_location()
compensation_obj = job_data.get("compensation")
@@ -149,7 +150,7 @@ def scrape_jobs(
if jobs_dfs:
jobs_df = pd.concat(jobs_dfs, ignore_index=True)
desired_order: List[str] = [
desired_order: list[str] = [
"job_url_hyper" if hyperlinks else "job_url",
"site",
"title",
@@ -158,12 +159,13 @@ def scrape_jobs(
"job_type",
"date_posted",
"interval",
"benefits",
"min_amount",
"max_amount",
"currency",
"is_remote",
"num_urgent_words",
"benefits",
"emails",
"job_url_hyper" if hyperlinks else "job_url",
"description",
]
jobs_formatted_df = jobs_df[desired_order]
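The hunk above flattens list-valued fields before they reach the DataFrame: a multi-valued `job_type` becomes a comma-separated string built from the first alias of each enum member, and `emails` a comma-separated string. A self-contained sketch of that flattening, with a trimmed stand-in for the real `JobType` enum:

```python
from enum import Enum

class JobType(Enum):  # stand-in; the real enum lives in jobspy.jobs
    FULL_TIME = ("fulltime",)
    CONTRACT = ("contract", "contractor")

job_types = [JobType.FULL_TIME, JobType.CONTRACT]
flattened = ", ".join(jt.value[0] for jt in job_types) if job_types else None
print(flattened)  # fulltime, contract

emails = ["a@example.com", "b@example.com"]
print(", ".join(emails) if emails else None)  # a@example.com, b@example.com
```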

View File

@@ -37,10 +37,16 @@ class JobType(Enum):
"повназайнятість",
"toànthờigian",
)
PART_TIME = ("parttime", "teilzeit")
PART_TIME = ("parttime", "teilzeit", "částečnýúvazek", "deltid")
CONTRACT = ("contract", "contractor")
TEMPORARY = ("temporary",)
INTERNSHIP = ("internship", "prácticas", "ojt(onthejobtraining)", "praktikum")
INTERNSHIP = (
"internship",
"prácticas",
"ojt(onthejobtraining)",
"praktikum",
"praktik",
)
PER_DIEM = ("perdiem",)
NIGHTS = ("nights",)
@@ -171,8 +177,8 @@ class CompensationInterval(Enum):
class Compensation(BaseModel):
interval: Optional[CompensationInterval] = None
min_amount: int = None
max_amount: int = None
min_amount: int | None = None
max_amount: int | None = None
currency: Optional[str] = "USD"
@@ -182,12 +188,15 @@ class JobPost(BaseModel):
job_url: str
location: Optional[Location]
description: Optional[str] = None
job_type: Optional[JobType] = None
compensation: Optional[Compensation] = None
date_posted: Optional[date] = None
benefits: Optional[str] = None
emails: Optional[list[str]] = None
description: str | None = None
job_type: list[JobType] | None = None
compensation: Compensation | None = None
date_posted: date | None = None
benefits: str | None = None
emails: list[str] | None = None
num_urgent_words: int | None = None
is_remote: bool | None = None
# company_industry: str | None = None
class JobResponse(BaseModel):
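A sketch of building a `JobPost` with the widened field types above (list-valued `job_type`, plus `num_urgent_words` and `is_remote`); field names follow the diff, while the import path and the `CompensationInterval` member name are assumptions:

```python
from datetime import date
from jobspy.jobs import JobPost, JobType, Compensation, CompensationInterval

post = JobPost(
    title="Software Engineer",
    company_name="Acme",
    job_url="https://example.com/job/123",
    location=None,
    job_type=[JobType.FULL_TIME, JobType.CONTRACT],  # now a list of JobType
    compensation=Compensation(
        interval=CompensationInterval.YEARLY,  # member name assumed from the schema
        min_amount=90_000,
        max_amount=120_000,  # both amounts are now explicitly nullable ints
    ),
    date_posted=date.today(),
    emails=["jobs@example.com"],
    num_urgent_words=2,
    is_remote=True,
)
```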

View File

@@ -7,12 +7,15 @@ This module contains the set of Scrapers' exceptions.
class LinkedInException(Exception):
"""Failed to scrape LinkedIn"""
def __init__(self, message=None):
super().__init__(message or "An error occurred with LinkedIn")
class IndeedException(Exception):
"""Failed to scrape Indeed"""
def __init__(self, message=None):
super().__init__(message or "An error occurred with Indeed")
class ZipRecruiterException(Exception):
"""Failed to scrape ZipRecruiter"""
def __init__(self, message=None):
super().__init__(message or "An error occurred with ZipRecruiter")

View File

@@ -9,15 +9,19 @@ import math
import io
import json
from datetime import datetime
from typing import Optional
import tls_client
import urllib.parse
from bs4 import BeautifulSoup
from bs4.element import Tag
from concurrent.futures import ThreadPoolExecutor, Future
from ..exceptions import IndeedException
from ..utils import (
count_urgent_words,
extract_emails_from_text,
create_session,
get_enum_from_job_type,
)
from ...jobs import (
JobPost,
Compensation,
@@ -28,15 +32,9 @@ from ...jobs import (
)
from .. import Scraper, ScraperInput, Site
def extract_emails_from_text(text: str) -> Optional[list[str]]:
if not text:
return None
email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
return email_regex.findall(text)
class IndeedScraper(Scraper):
def __init__(self, proxy: Optional[str] = None):
def __init__(self, proxy: str | None = None):
"""
Initializes IndeedScraper with the Indeed job search url
"""
@@ -49,20 +47,18 @@ class IndeedScraper(Scraper):
self.seen_urls = set()
def scrape_page(
self, scraper_input: ScraperInput, page: int, session: tls_client.Session
self, scraper_input: ScraperInput, page: int
) -> tuple[list[JobPost], int]:
"""
Scrapes a page of Indeed for jobs with scraper_input criteria
:param scraper_input:
:param page:
:param session:
:return: jobs found on page, total number of jobs found for search
"""
self.country = scraper_input.country
domain = self.country.domain_value
self.url = f"https://{domain}.indeed.com"
job_list: list[JobPost] = []
session = create_session(self.proxy)
params = {
"q": scraper_input.search_term,
@@ -84,9 +80,9 @@ class IndeedScraper(Scraper):
try:
response = session.get(
f"{self.url}/jobs",
headers=self.get_headers(),
params=params,
allow_redirects=True,
proxy=self.proxy,
timeout_seconds=10,
)
if response.status_code not in range(200, 400):
@@ -108,13 +104,13 @@ class IndeedScraper(Scraper):
total_num_jobs = IndeedScraper.total_jobs(soup)
if (
not jobs.get("metaData", {})
.get("mosaicProviderJobCardsModel", {})
.get("results")
not jobs.get("metaData", {})
.get("mosaicProviderJobCardsModel", {})
.get("results")
):
raise IndeedException("No jobs found.")
def process_job(job) -> Optional[JobPost]:
def process_job(job) -> JobPost | None:
job_url = f'{self.url}/jobs/viewjob?jk={job["jobkey"]}'
job_url_client = f'{self.url}/viewjob?jk={job["jobkey"]}'
if job_url in self.seen_urls:
@@ -143,8 +139,7 @@ class IndeedScraper(Scraper):
date_posted = datetime.fromtimestamp(timestamp_seconds)
date_posted = date_posted.strftime("%Y-%m-%d")
description = self.get_description(job_url, session)
emails = extract_emails_from_text(description)
description = self.get_description(job_url)
with io.StringIO(job["snippet"]) as f:
soup_io = BeautifulSoup(f, "html.parser")
li_elements = soup_io.find_all("li")
@@ -160,18 +155,22 @@ class IndeedScraper(Scraper):
state=job.get("jobLocationState"),
country=self.country,
),
emails=extract_emails_from_text(description),
job_type=job_type,
compensation=compensation,
date_posted=date_posted,
job_url=job_url_client,
emails=extract_emails_from_text(description) if description else None,
num_urgent_words=count_urgent_words(description)
if description
else None,
is_remote=self.is_remote_job(job),
)
return job_post
jobs = jobs["metaData"]["mosaicProviderJobCardsModel"]["results"]
with ThreadPoolExecutor(max_workers=1) as executor:
job_results: list[Future] = [
executor.submit(process_job, job)
for job in jobs["metaData"]["mosaicProviderJobCardsModel"]["results"]
executor.submit(process_job, job) for job in jobs
]
job_list = [result.result() for result in job_results if result.result()]
@@ -184,20 +183,16 @@ class IndeedScraper(Scraper):
:param scraper_input:
:return: job_response
"""
session = tls_client.Session(
client_identifier="chrome112", random_tls_extension_order=True
)
pages_to_process = (
math.ceil(scraper_input.results_wanted / self.jobs_per_page) - 1
math.ceil(scraper_input.results_wanted / self.jobs_per_page) - 1
)
#: get first page to initialize session
job_list, total_results = self.scrape_page(scraper_input, 0, session)
job_list, total_results = self.scrape_page(scraper_input, 0)
with ThreadPoolExecutor(max_workers=1) as executor:
futures: list[Future] = [
executor.submit(self.scrape_page, scraper_input, page, session)
executor.submit(self.scrape_page, scraper_input, page)
for page in range(1, pages_to_process + 1)
]
@@ -215,21 +210,24 @@ class IndeedScraper(Scraper):
)
return job_response
def get_description(self, job_page_url: str, session: tls_client.Session) -> Optional[str]:
def get_description(self, job_page_url: str) -> str | None:
"""
Retrieves job description by going to the job page url
:param job_page_url:
:param session:
:return: description
"""
parsed_url = urllib.parse.urlparse(job_page_url)
params = urllib.parse.parse_qs(parsed_url.query)
jk_value = params.get("jk", [None])[0]
formatted_url = f"{self.url}/viewjob?jk={jk_value}&spa=1"
session = create_session(self.proxy)
try:
response = session.get(
formatted_url, allow_redirects=True, timeout_seconds=5, proxy=self.proxy
formatted_url,
headers=self.get_headers(),
allow_redirects=True,
timeout_seconds=5,
)
except Exception as e:
return None
@@ -237,40 +235,56 @@ class IndeedScraper(Scraper):
if response.status_code not in range(200, 400):
return None
raw_description = response.json()["body"]["jobInfoWrapperModel"][
"jobInfoModel"
]["sanitizedJobDescription"]
with io.StringIO(raw_description) as f:
soup = BeautifulSoup(f, "html.parser")
text_content = " ".join(soup.get_text().split()).strip()
return text_content
soup = BeautifulSoup(response.text, "html.parser")
script_tag = soup.find(
"script", text=lambda x: x and "window._initialData" in x
)
if not script_tag:
return None
script_code = script_tag.string
match = re.search(r"window\._initialData\s*=\s*({.*?})\s*;", script_code, re.S)
if not match:
return None
json_string = match.group(1)
data = json.loads(json_string)
try:
job_description = data["jobInfoWrapperModel"]["jobInfoModel"][
"sanitizedJobDescription"
]
except (KeyError, TypeError, IndexError):
return None
soup = BeautifulSoup(
job_description, "html.parser"
)
text_content = " ".join(
soup.get_text(separator=" ").split()
).strip()
return text_content
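The rewritten `get_description` no longer reads a JSON body directly; it locates the inline `window._initialData` script, regex-extracts the JSON object, and only then pulls `sanitizedJobDescription`. A standalone sketch of that extraction, assuming `html` holds the fetched page source:

```python
import json
import re
from bs4 import BeautifulSoup

def extract_initial_data(html: str) -> dict | None:
    """Parse the JSON assigned to window._initialData in an inline script."""
    soup = BeautifulSoup(html, "html.parser")
    script_tag = soup.find("script", string=lambda x: x and "window._initialData" in x)
    if not script_tag:
        return None
    match = re.search(r"window\._initialData\s*=\s*({.*?})\s*;", script_tag.string, re.S)
    return json.loads(match.group(1)) if match else None
```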
@staticmethod
def get_job_type(job: dict) -> Optional[JobType]:
def get_job_type(job: dict) -> list[JobType] | None:
"""
Parses the job to get JobTypeIndeed
Parses the job to get list of job types
:param job:
:return:
"""
job_types: list[JobType] = []
for taxonomy in job["taxonomyAttributes"]:
if taxonomy["label"] == "job-types":
if len(taxonomy["attributes"]) > 0:
label = taxonomy["attributes"][0].get("label")
for i in range(len(taxonomy["attributes"])):
label = taxonomy["attributes"][i].get("label")
if label:
job_type_str = label.replace("-", "").replace(" ", "").lower()
return IndeedScraper.get_enum_from_job_type(job_type_str)
return None
@staticmethod
def get_enum_from_job_type(job_type_str):
"""
Given a string, returns the corresponding JobType enum member if a match is found.
for job_type in JobType:
"""
for job_type in JobType:
if job_type_str in job_type.value:
return job_type
return None
job_type = get_enum_from_job_type(job_type_str)
if job_type:
job_types.append(job_type)
return job_types
@staticmethod
def parse_jobs(soup: BeautifulSoup) -> dict:
@@ -280,7 +294,7 @@ class IndeedScraper(Scraper):
:return: jobs
"""
def find_mosaic_script() -> Optional[Tag]:
def find_mosaic_script() -> Tag | None:
"""
Finds jobcards script tag
:return: script_tag
@@ -289,9 +303,9 @@ class IndeedScraper(Scraper):
for tag in script_tags:
if (
tag.string
and "mosaic.providerData" in tag.string
and "mosaic-provider-jobcards" in tag.string
tag.string
and "mosaic.providerData" in tag.string
and "mosaic-provider-jobcards" in tag.string
):
return tag
return None
@@ -330,3 +344,30 @@ class IndeedScraper(Scraper):
data = json.loads(json_str)
total_num_jobs = int(data["searchTitleBarModel"]["totalNumResults"])
return total_num_jobs
@staticmethod
def get_headers():
return {
"authority": "www.indeed.com",
"accept": "*/*",
"accept-language": "en-US,en;q=0.9",
"referer": "https://www.indeed.com/viewjob?jk=fe6182337d72c7b1&tk=1hcbfcmd0k62t802&from=serp&vjs=3&advn=8132938064490989&adid=408692607&ad=-6NYlbfkN0A3Osc99MJFDKjquSk4WOGT28ALb_ad4QMtrHreCb9ICg6MiSVy9oDAp3evvOrI7Q-O9qOtQTg1EPbthP9xWtBN2cOuVeHQijxHjHpJC65TjDtftH3AXeINjBvAyDrE8DrRaAXl8LD3Fs1e_xuDHQIssdZ2Mlzcav8m5jHrA0fA64ZaqJV77myldaNlM7-qyQpy4AsJQfvg9iR2MY7qeC5_FnjIgjKIy_lNi9OPMOjGRWXA94CuvC7zC6WeiJmBQCHISl8IOBxf7EdJZlYdtzgae3593TFxbkd6LUwbijAfjax39aAuuCXy3s9C4YgcEP3TwEFGQoTpYu9Pmle-Ae1tHGPgsjxwXkgMm7Cz5mBBdJioglRCj9pssn-1u1blHZM4uL1nK9p1Y6HoFgPUU9xvKQTHjKGdH8d4y4ETyCMoNF4hAIyUaysCKdJKitC8PXoYaWhDqFtSMR4Jys8UPqUV&xkcb=SoDD-_M3JLQfWnQTDh0LbzkdCdPP&xpse=SoBa6_I3JLW9FlWZlB0PbzkdCdPP&sjdu=i6xVERweJM_pVUvgf-MzuaunBTY7G71J5eEX6t4DrDs5EMPQdODrX7Nn-WIPMezoqr5wA_l7Of-3CtoiUawcHw",
"sec-ch-ua": '"Google Chrome";v="119", "Chromium";v="119", "Not?A_Brand";v="24"',
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-platform": '"Windows"',
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
}
@staticmethod
def is_remote_job(job: dict) -> bool:
"""
:param job:
:return: bool
"""
for taxonomy in job.get("taxonomyAttributes", []):
if taxonomy["label"] == "remote" and len(taxonomy["attributes"]) > 0:
return True
return False
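Both `get_job_type` and the new `is_remote_job` walk Indeed's `taxonomyAttributes` array. A toy payload showing the shape they expect, with field names taken from the code above:

```python
job = {
    "taxonomyAttributes": [
        {"label": "job-types", "attributes": [{"label": "Full-time"}, {"label": "Contract"}]},
        {"label": "remote", "attributes": [{"label": "Remote"}]},
    ]
}

# mirrors IndeedScraper.is_remote_job
is_remote = any(
    taxonomy["label"] == "remote" and len(taxonomy["attributes"]) > 0
    for taxonomy in job.get("taxonomyAttributes", [])
)
print(is_remote)  # True

# mirrors the job-type normalization: "Full-time" -> "fulltime"
labels = [
    attr["label"].replace("-", "").replace(" ", "").lower()
    for t in job["taxonomyAttributes"] if t["label"] == "job-types"
    for attr in t["attributes"]
]
print(labels)  # ['fulltime', 'contract']
```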

View File

@@ -9,7 +9,6 @@ from datetime import datetime
import requests
import time
import re
from requests.exceptions import ProxyError
from concurrent.futures import ThreadPoolExecutor, as_completed
from bs4 import BeautifulSoup
@@ -17,6 +16,7 @@ from bs4.element import Tag
from threading import Lock
from .. import Scraper, ScraperInput, Site
from ..utils import count_urgent_words, extract_emails_from_text, get_enum_from_job_type
from ..exceptions import LinkedInException
from ...jobs import (
JobPost,
@@ -26,13 +26,6 @@ from ...jobs import (
)
def extract_emails_from_text(text: str) -> Optional[list[str]]:
if not text:
return None
email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
return email_regex.findall(text)
class LinkedInScraper(Scraper):
MAX_RETRIES = 3
DELAY = 10
@@ -99,13 +92,15 @@ class LinkedInScraper(Scraper):
break
except requests.HTTPError as e:
if hasattr(e, 'response') and e.response is not None:
if hasattr(e, "response") and e.response is not None:
if e.response.status_code == 429:
time.sleep(self.DELAY)
retries += 1
continue
else:
raise LinkedInException(f"bad response status code: {e.response.status_code}")
raise LinkedInException(
f"bad response status code: {e.response.status_code}"
)
else:
raise
except ProxyError as e:
@@ -114,7 +109,9 @@ class LinkedInScraper(Scraper):
raise LinkedInException(str(e))
else:
# Raise an exception if the maximum number of retries is reached
raise LinkedInException("Max retries reached, failed to get a valid response")
raise LinkedInException(
"Max retries reached, failed to get a valid response"
)
soup = BeautifulSoup(response.text, "html.parser")
@@ -141,7 +138,9 @@ class LinkedInScraper(Scraper):
if job_post:
job_list.append(job_post)
except Exception as e:
raise LinkedInException("Exception occurred while processing jobs")
raise LinkedInException(
"Exception occurred while processing jobs"
)
page += 25
job_list = job_list[: scraper_input.results_wanted]
@@ -158,7 +157,11 @@ class LinkedInScraper(Scraper):
metadata_card = job_card.find("div", class_="base-search-card__metadata")
location = self.get_location(metadata_card)
datetime_tag = metadata_card.find("time", class_="job-search-card__listdate") if metadata_card else None
datetime_tag = (
metadata_card.find("time", class_="job-search-card__listdate")
if metadata_card
else None
)
date_posted = None
if datetime_tag and "datetime" in datetime_tag.attrs:
datetime_str = datetime_tag["datetime"]
@@ -178,13 +181,16 @@ class LinkedInScraper(Scraper):
location=location,
date_posted=date_posted,
job_url=job_url,
# job_type=[JobType.FULL_TIME],
job_type=job_type,
benefits=benefits,
emails=extract_emails_from_text(description)
emails=extract_emails_from_text(description) if description else None,
num_urgent_words=count_urgent_words(description) if description else None,
)
def get_job_description(self, job_page_url: str) -> tuple[None, None] | tuple[
str | None, tuple[str | None, JobType | None]]:
def get_job_description(
self, job_page_url: str
) -> tuple[None, None] | tuple[str | None, tuple[str | None, JobType | None]]:
"""
Retrieves job description by going to the job page url
:param job_page_url:
@@ -206,8 +212,8 @@ class LinkedInScraper(Scraper):
description = " ".join(div_content.get_text().split()).strip()
def get_job_type(
soup_job_type: BeautifulSoup,
) -> JobType | None:
soup_job_type: BeautifulSoup,
) -> list[JobType] | None:
"""
Gets the job type from job page
:param soup_job_type:
@@ -230,17 +236,10 @@ class LinkedInScraper(Scraper):
employment_type = employment_type.lower()
employment_type = employment_type.replace("-", "")
return LinkedInScraper.get_enum_from_value(employment_type)
return [get_enum_from_job_type(employment_type)]
return description, get_job_type(soup)
@staticmethod
def get_enum_from_value(value_str):
for job_type in JobType:
if value_str in job_type.value:
return job_type
return None
def get_location(self, metadata_card: Optional[Tag]) -> Location:
"""
Extracts the location data from the job metadata card.
@@ -263,9 +262,3 @@ class LinkedInScraper(Scraper):
)
return location
def extract_emails_from_text(text: str) -> Optional[list[str]]:
if not text:
return None
email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
return email_regex.findall(text)
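Earlier in this file, the retry loop sleeps `DELAY` seconds and retries up to `MAX_RETRIES` times when LinkedIn answers 429. A condensed sketch of the same backoff pattern with plain `requests`:

```python
import time
import requests

MAX_RETRIES, DELAY = 3, 10  # same constants as LinkedInScraper

def get_with_backoff(url: str, **kwargs) -> requests.Response:
    """GET a URL, sleeping and retrying when the server answers 429."""
    for _ in range(MAX_RETRIES):
        response = requests.get(url, **kwargs)
        try:
            response.raise_for_status()
            return response
        except requests.HTTPError as e:
            if e.response is not None and e.response.status_code == 429:
                time.sleep(DELAY)  # blocked: wait, then retry
                continue
            raise
    raise RuntimeError("Max retries reached, failed to get a valid response")
```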

View File

@@ -0,0 +1,56 @@
import re
import tls_client
from ..jobs import JobType
def count_urgent_words(description: str) -> int:
"""
Count the number of urgent words or phrases in a job description.
"""
urgent_patterns = re.compile(
r"\burgen(t|cy)|\bimmediate(ly)?\b|start asap|\bhiring (now|immediate(ly)?)\b",
re.IGNORECASE,
)
matches = re.findall(urgent_patterns, description)
count = len(matches)
return count
def extract_emails_from_text(text: str) -> list[str] | None:
if not text:
return None
email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
return email_regex.findall(text)
def create_session(proxy: str | None = None):
"""
Creates a tls client session
:return: A session object with or without proxies.
"""
session = tls_client.Session(
client_identifier="chrome112",
random_tls_extension_order=True,
)
session.proxies = proxy
# TODO multiple proxies
# if self.proxies:
# session.proxies = {
# "http": random.choice(self.proxies),
# "https": random.choice(self.proxies),
# }
return session
def get_enum_from_job_type(job_type_str: str) -> JobType | None:
"""
Given a string, returns the corresponding JobType enum member if a match is found.
"""
res = None
for job_type in JobType:
if job_type_str in job_type.value:
res = job_type
return res
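A quick demonstration of the new shared helpers, assuming they are importable as `jobspy.scrapers.utils` (the path implied by `from ..utils import ...` in the scraper diffs):

```python
from jobspy.scrapers.utils import (
    count_urgent_words,
    extract_emails_from_text,
    get_enum_from_job_type,
)

desc = "Hiring now! Start ASAP. Contact jobs@example.com."
print(count_urgent_words(desc))            # 2 ("hiring now", "start asap")
print(extract_emails_from_text(desc))      # ['jobs@example.com']
print(get_enum_from_job_type("fulltime"))  # JobType.FULL_TIME
```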

View File

@@ -11,7 +11,6 @@ from datetime import datetime, date
from typing import Optional, Tuple, Any
from urllib.parse import urlparse, parse_qs, urlunparse
import tls_client
import requests
from bs4 import BeautifulSoup
from bs4.element import Tag
@@ -19,6 +18,7 @@ from concurrent.futures import ThreadPoolExecutor, Future
from .. import Scraper, ScraperInput, Site
from ..exceptions import ZipRecruiterException
from ..utils import count_urgent_words, extract_emails_from_text, create_session
from ...jobs import (
JobPost,
Compensation,
@@ -29,12 +29,6 @@ from ...jobs import (
Country,
)
def extract_emails_from_text(text: str) -> Optional[list[str]]:
if not text:
return None
email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
return email_regex.findall(text)
class ZipRecruiterScraper(Scraper):
def __init__(self, proxy: Optional[str] = None):
@@ -47,28 +41,23 @@ class ZipRecruiterScraper(Scraper):
self.jobs_per_page = 20
self.seen_urls = set()
self.session = tls_client.Session(
client_identifier="chrome112", random_tls_extension_order=True
)
def find_jobs_in_page(
self, scraper_input: ScraperInput, page: int
) -> list[JobPost]:
def find_jobs_in_page(self, scraper_input: ScraperInput, continue_token: Optional[str] = None) -> Tuple[list[JobPost], Optional[str]]:
"""
Scrapes a page of ZipRecruiter for jobs with scraper_input criteria
:param scraper_input:
:param page:
:return: jobs found on page
"""
job_list: list[JobPost] = []
params = self.add_params(scraper_input)
if continue_token:
params['continue'] = continue_token
try:
response = self.session.get(
f"{self.url}/jobs-search",
headers=ZipRecruiterScraper.headers(),
params=ZipRecruiterScraper.add_params(scraper_input, page),
response = requests.get(
f"https://api.ziprecruiter.com/jobs-app/jobs",
headers=self.headers(),
params=self.add_params(scraper_input),
allow_redirects=True,
proxy=self.proxy,
timeout_seconds=10,
timeout=10,
)
if response.status_code != 200:
raise ZipRecruiterException(
@@ -78,195 +67,64 @@ class ZipRecruiterScraper(Scraper):
if "Proxy responded with non 200 code" in str(e):
raise ZipRecruiterException("bad proxy")
raise ZipRecruiterException(str(e))
else:
soup = BeautifulSoup(response.text, "html.parser")
js_tag = soup.find("script", {"id": "js_variables"})
if js_tag:
page_json = json.loads(js_tag.string)
jobs_list = page_json.get("jobList")
if jobs_list:
page_variant = "javascript"
# print('type javascript', len(jobs_list))
else:
page_variant = "html_2"
jobs_list = soup.find_all("div", {"class": "job_content"})
# print('type 2 html', len(jobs_list))
else:
page_variant = "html_1"
jobs_list = soup.find_all("li", {"class": "job-listing"})
# print('type 1 html', len(jobs_list))
response_data = response.json()
jobs_list = response_data.get("jobs", [])
next_continue_token = response_data.get('continue', None)
with ThreadPoolExecutor(max_workers=10) as executor:
if page_variant == "javascript":
job_results = [
executor.submit(self.process_job_javascript, job)
for job in jobs_list
]
elif page_variant == "html_1":
job_results = [
executor.submit(self.process_job_html_1, job) for job in jobs_list
]
elif page_variant == "html_2":
job_results = [
executor.submit(self.process_job_html_2, job) for job in jobs_list
]
job_results = [
executor.submit(self.process_job, job)
for job in jobs_list
]
job_list = [result.result() for result in job_results if result.result()]
return job_list
return job_list, next_continue_token
def scrape(self, scraper_input: ScraperInput) -> JobResponse:
"""
Scrapes ZipRecruiter for jobs with scraper_input criteria
:param scraper_input:
:return: job_response
Scrapes ZipRecruiter for jobs with scraper_input criteria.
:param scraper_input: Information about job search criteria.
:return: JobResponse containing a list of jobs.
"""
start_page = (scraper_input.offset // self.jobs_per_page) + 1 if scraper_input.offset else 1
#: get first page to initialize session
job_list: list[JobPost] = self.find_jobs_in_page(scraper_input, start_page)
pages_to_process = max(
3, math.ceil(scraper_input.results_wanted / self.jobs_per_page)
)
job_list: list[JobPost] = []
continue_token = None
with ThreadPoolExecutor(max_workers=10) as executor:
futures: list[Future] = [
executor.submit(self.find_jobs_in_page, scraper_input, page)
for page in range(start_page + 1, start_page + pages_to_process + 2)
]
max_pages = math.ceil(scraper_input.results_wanted / self.jobs_per_page)
for future in futures:
jobs = future.result()
for page in range(1, max_pages + 1):
if len(job_list) >= scraper_input.results_wanted:
break
job_list += jobs
jobs_on_page, continue_token = self.find_jobs_in_page(scraper_input, continue_token)
if jobs_on_page:
job_list.extend(jobs_on_page)
if not continue_token:
break
if len(job_list) > scraper_input.results_wanted:
job_list = job_list[:scraper_input.results_wanted]
job_list = job_list[: scraper_input.results_wanted]
return JobResponse(jobs=job_list)
def process_job_html_1(self, job: Tag) -> Optional[JobPost]:
"""
Parses a job from the job content tag
:param job: BeautifulSoup Tag for one job post
:return JobPost
TODO: this method isn't finished due to not encountering this type of HTML often
"""
job_url = self.cleanurl(job.find("a", {"class": "job_link"})["href"])
if job_url in self.seen_urls:
return None
def process_job(self, job: dict) -> JobPost:
"""the most common type of jobs page on ZR"""
title = job.get("name")
job_url = job.get("job_url")
title = job.find("h2", {"class": "title"}).text
company = job.find("a", {"class": "company_name"}).text.strip()
description = BeautifulSoup(
job.get("job_description", "").strip(), "html.parser"
).get_text()
description, updated_job_url = self.get_description(job_url)
# job_url = updated_job_url if updated_job_url else job_url
if description is None:
description = job.find("p", {"class": "job_snippet"}).text.strip()
job_type_element = job.find("li", {"class": "perk_item perk_type"})
job_type = None
if job_type_element:
job_type_text = (
job_type_element.text.strip().lower().replace("_", "").replace(" ", "")
)
job_type = ZipRecruiterScraper.get_job_type_enum(job_type_text)
date_posted = ZipRecruiterScraper.get_date_posted(job)
job_post = JobPost(
title=title,
description=description,
company_name=company,
location=ZipRecruiterScraper.get_location(job),
job_type=job_type,
compensation=ZipRecruiterScraper.get_compensation(job),
date_posted=date_posted,
job_url=job_url,
emails=extract_emails_from_text(description),
)
return job_post
def process_job_html_2(self, job: Tag) -> Optional[JobPost]:
"""
Parses a job from the job content tag for a second variant of HTML that ZR uses
:param job: BeautifulSoup Tag for one job post
:return JobPost
"""
job_url = self.cleanurl(job.find("a", class_="job_link")["href"])
title = job.find("h2", class_="title").text
company = job.find("a", class_="company_name").text.strip()
description, updated_job_url = self.get_description(job_url)
# job_url = updated_job_url if updated_job_url else job_url
if description is None:
description = job.find("p", class_="job_snippet").get_text().strip()
job_type_text = job.find("li", class_="perk_item perk_type")
job_type = None
if job_type_text:
job_type_text = (
job_type_text.get_text()
.strip()
.lower()
.replace("-", "")
.replace(" ", "")
)
job_type = ZipRecruiterScraper.get_job_type_enum(job_type_text)
date_posted = ZipRecruiterScraper.get_date_posted(job)
job_post = JobPost(
title=title,
description=description,
company_name=company,
location=ZipRecruiterScraper.get_location(job),
job_type=job_type,
compensation=ZipRecruiterScraper.get_compensation(job),
date_posted=date_posted,
job_url=job_url,
)
return job_post
def process_job_javascript(self, job: dict) -> JobPost:
title = job.get("Title")
job_url = self.cleanurl(job.get("JobURL"))
description, updated_job_url = self.get_description(job_url)
# job_url = updated_job_url if updated_job_url else job_url
if description is None:
description = BeautifulSoup(
job.get("Snippet", "").strip(), "html.parser"
).get_text()
company = job.get("OrgName")
company = job['hiring_company'].get("name") if "hiring_company" in job else None
location = Location(
city=job.get("City"), state=job.get("State"), country=Country.US_CANADA
city=job.get("job_city"), state=job.get("job_state"), country='usa' if job.get("job_country") == 'US' else 'canada'
)
job_type = ZipRecruiterScraper.get_job_type_enum(
job.get("EmploymentType", "").replace("-", "").lower()
job.get("employment_type", "").replace("_", "").lower()
)
formatted_salary = job.get("FormattedSalaryShort", "")
salary_parts = formatted_salary.split(" ")
min_salary_str = salary_parts[0][1:].replace(",", "")
if "." in min_salary_str:
min_amount = int(float(min_salary_str) * 1000)
else:
min_amount = int(min_salary_str.replace("K", "000"))
if len(salary_parts) >= 3 and salary_parts[2].startswith("$"):
max_salary_str = salary_parts[2][1:].replace(",", "")
if "." in max_salary_str:
max_amount = int(float(max_salary_str) * 1000)
else:
max_amount = int(max_salary_str.replace("K", "000"))
else:
max_amount = 0
compensation = Compensation(
interval=CompensationInterval.YEARLY,
min_amount=min_amount,
max_amount=max_amount,
currency="USD/CAD",
)
save_job_url = job.get("SaveJobURL", "")
posted_time_match = re.search(
r"posted_time=(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)", save_job_url
@@ -278,60 +136,37 @@ class ZipRecruiterScraper(Scraper):
else:
date_posted = date.today()
return JobPost(
title=title,
description=description,
company_name=company,
location=location,
job_type=job_type,
compensation=compensation,
compensation=Compensation(
interval="yearly" if job.get("compensation_interval") == "annual" else job.get("compensation_interval") ,
min_amount=int(job["compensation_min"]) if "compensation_min" in job else None,
max_amount=int(job["compensation_max"]) if "compensation_max" in job else None,
currency=job.get("compensation_currency"),
),
date_posted=date_posted,
job_url=job_url,
description=description,
emails=extract_emails_from_text(description) if description else None,
num_urgent_words=count_urgent_words(description) if description else None,
)
return job_post
@staticmethod
def get_job_type_enum(job_type_str: str) -> Optional[JobType]:
def get_job_type_enum(job_type_str: str) -> list[JobType] | None:
for job_type in JobType:
if job_type_str in job_type.value:
a = True
return job_type
return [job_type]
return None
def get_description(self, job_page_url: str) -> Tuple[Optional[str], Optional[str]]:
"""
Retrieves job description by going to the job page url
:param job_page_url:
:param session:
:return: description or None, response url
"""
try:
response = requests.get(
job_page_url,
headers=ZipRecruiterScraper.headers(),
allow_redirects=True,
timeout=5,
proxies=self.proxy,
)
if response.status_code not in range(200, 400):
return None, None
except Exception as e:
return None, None
html_string = response.content
soup_job = BeautifulSoup(html_string, "html.parser")
job_description_div = soup_job.find("div", {"class": "job_description"})
if job_description_div:
return job_description_div.text.strip(), response.url
return None, response.url
@staticmethod
def add_params(scraper_input, page) -> dict[str, str | Any]:
def add_params(scraper_input) -> dict[str, str | Any]:
params = {
"search": scraper_input.search_term,
"location": scraper_input.location,
"page": page,
"form": "jobs-landing",
}
job_type_value = None
@@ -464,11 +299,13 @@ class ZipRecruiterScraper(Scraper):
:return: dict - Dictionary containing headers
"""
return {
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36"
'Host': 'api.ziprecruiter.com',
'Cookie': 'ziprecruiter_browser=018188e0-045b-4ad7-aa50-627a6c3d43aa; ziprecruiter_session=5259b2219bf95b6d2299a1417424bc2edc9f4b38; SplitSV=2016-10-19%3AU2FsdGVkX19f9%2Bx70knxc%2FeR3xXR8lWoTcYfq5QjmLU%3D%0A; __cf_bm=qXim3DtLPbOL83GIp.ddQEOFVFTc1OBGPckiHYxcz3o-1698521532-0-AfUOCkgCZyVbiW1ziUwyefCfzNrJJTTKPYnif1FZGQkT60dMowmSU/Y/lP+WiygkFPW/KbYJmyc+MQSkkad5YygYaARflaRj51abnD+SyF9V; zglobalid=68d49bd5-0326-428e-aba8-8a04b64bc67c.af2d99ff7c03.653d61bb; ziprecruiter_browser=018188e0-045b-4ad7-aa50-627a6c3d43aa; ziprecruiter_session=5259b2219bf95b6d2299a1417424bc2edc9f4b38',
'accept': '*/*',
'x-zr-zva-override': '100000000;vid:ZT1huzm_EQlDTVEc',
'x-pushnotificationid': '0ff4983d38d7fc5b3370297f2bcffcf4b3321c418f5c22dd152a0264707602a0',
'x-deviceid': 'D77B3A92-E589-46A4-8A39-6EF6F1D86006',
'user-agent': 'Job Search/87.0 (iPhone; CPU iOS 16_6_1 like Mac OS X)',
'authorization': 'Basic YTBlZjMyZDYtN2I0Yy00MWVkLWEyODMtYTI1NDAzMzI0YTcyOg==',
'accept-language': 'en-US,en;q=0.9'
}
@staticmethod
def cleanurl(url):
parsed_url = urlparse(url)
return urlunparse((parsed_url.scheme, parsed_url.netloc, parsed_url.path, parsed_url.params, '', ''))
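The biggest change in this file is pagination: `scrape` no longer fans page numbers out across threads but follows the `continue` token the jobs-app API returns, stopping when the token runs out or enough jobs are collected. A standalone sketch of that cursor loop, with a hypothetical `fetch_page` standing in for `find_jobs_in_page`:

```python
from typing import Callable, Optional, Tuple

def paginate(
    fetch_page: Callable[[Optional[str]], Tuple[list, Optional[str]]],
    results_wanted: int,
) -> list:
    """Follow a cursor-style continuation token until exhausted or satisfied."""
    items: list = []
    token: Optional[str] = None  # first request carries no token
    while len(items) < results_wanted:
        batch, token = fetch_page(token)
        if batch:
            items.extend(batch)
        if not token:  # API signaled the last page
            break
    return items[:results_wanted]
```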

View File

@@ -9,4 +9,6 @@ def test_all():
results_wanted=5,
)
assert isinstance(result, pd.DataFrame) and not result.empty, "Result should be a non-empty DataFrame"
assert (
isinstance(result, pd.DataFrame) and not result.empty
), "Result should be a non-empty DataFrame"

View File

@@ -7,4 +7,6 @@ def test_indeed():
site_name="indeed",
search_term="software engineer",
)
assert isinstance(result, pd.DataFrame) and not result.empty, "Result should be a non-empty DataFrame"
assert (
isinstance(result, pd.DataFrame) and not result.empty
), "Result should be a non-empty DataFrame"

View File

@@ -7,4 +7,6 @@ def test_linkedin():
site_name="linkedin",
search_term="software engineer",
)
assert isinstance(result, pd.DataFrame) and not result.empty, "Result should be a non-empty DataFrame"
assert (
isinstance(result, pd.DataFrame) and not result.empty
), "Result should be a non-empty DataFrame"

View File

@@ -8,4 +8,6 @@ def test_ziprecruiter():
search_term="software engineer",
)
assert isinstance(result, pd.DataFrame) and not result.empty, "Result should be a non-empty DataFrame"
assert (
isinstance(result, pd.DataFrame) and not result.empty
), "Result should be a non-empty DataFrame"