Compare commits


29 Commits

Author  SHA1  Message  Date

Cullen Watson  bff39a2625  [fix] util func  2023-09-28 18:33:14 -05:00
Cullen Watson  c676050dc0  [fix] util func  2023-09-28 18:33:02 -05:00
Cullen Watson  37976f7ec2  [chore] version number  2023-09-28 18:26:55 -05:00
Cullen Watson  9fb2fdd80f  [fix] add utils.py  2023-09-28 18:25:56 -05:00
Cullen Watson  af07c1ecbd  add offset param & email extraction (#51)  2023-09-28 18:11:28 -05:00
    * add offset param
    * [enh]: extract emails
Cullen Watson  286b9e1256  chore: version number  2023-09-21 20:28:57 -05:00
Cullen Watson  162dd40b0f  docs: add usejobspy.com  2023-09-21 20:27:04 -05:00
Cullen Watson  558e352939  fix: job type param bug  2023-09-21 17:42:24 -05:00
Zachary Hampton  efad1a1b7d  Update README.md  2023-09-21 09:52:18 -07:00
Cullen Watson  eaa481c2f4  docs: add macos catalina to faq  2023-09-19 12:50:14 -05:00
Zachary Hampton  b914aa6449  Update README.md  2023-09-16 13:52:30 -07:00
Zachary Hampton  6adbfb8b29  Update README.md  2023-09-16 13:51:45 -07:00
Zachary Hampton  a3b9dd50ff  (docs) homepage  2023-09-15 16:14:26 -07:00
Zachary Hampton  d3ba3a4878  docs: sales call  2023-09-15 11:51:22 -07:00
Cullen Watson  f524789d74  docs: grammar readme  2023-09-15 10:18:24 -05:00
Cullen Watson  f3890d4830  docs: update  2023-09-09 10:55:33 -05:00
Cullen Watson  60c9728691  docs: typo  2023-09-08 12:27:49 -05:00
Cullen Watson  f79d975e5f  docs: clarify - README.md  2023-09-07 13:46:14 -05:00
Cullen Watson  d6368f909b  docs: typo  2023-09-07 13:39:56 -05:00
Cullen Watson  6fcf7f666e  docs: update typo in example  2023-09-07 13:37:53 -05:00
Cullen Watson  4406f9350f  docs: update vid  2023-09-07 13:35:10 -05:00
Cullen Watson  ca5155f234  docs: add feature  2023-09-07 11:36:16 -05:00
Cullen Watson  822a55783e  docs: temp update  2023-09-07 11:35:14 -05:00
Cullen Watson  59f739018a  Proxy support (#44)  2023-09-07 11:28:17 -05:00
    * add proxy support
    * return as data frame
Zachary Hampton  a37e7f235e  Merge pull request #42 from cullenwatson/fix/class-type-error  2023-09-06 16:33:59 -07:00
    - refactor & #41 bug fix
Zachary Hampton  690739e858  - refactor & #41 bug fix  2023-09-06 16:32:51 -07:00
Cullen Watson  43eb2fe0e8  remove gitattr  2023-09-06 11:34:51 -05:00
Cullen Watson  e50227bba6  clear output jupyter  2023-09-06 11:32:32 -05:00
Cullen Watson  45c2d76e15  add yt guide  2023-09-06 11:26:55 -05:00
19 changed files with 1757 additions and 2550 deletions

View File

@@ -7,27 +7,27 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
       - name: Set up Python
         uses: actions/setup-python@v4
         with:
           python-version: "3.10"
       - name: Install poetry
         run: >-
           python3 -m
           pip install
           poetry
           --user
       - name: Build distribution 📦
         run: >-
           python3 -m
           poetry
           build
       - name: Publish distribution 📦 to PyPI
         if: startsWith(github.ref, 'refs/tags')
         uses: pypa/gh-action-pypi-publish@release/v1
         with:
           password: ${{ secrets.PYPI_API_TOKEN }}

.gitignore (vendored, 10 lines changed)
View File

@@ -1,10 +1,10 @@
-/.idea
-**/.DS_Store
 /venv/
-/ven/
+/.idea
 **/__pycache__/
 **/.pytest_cache/
+/.ipynb_checkpoints/
+**/output/
+**/.DS_Store
 *.pyc
 .env
 dist
-/.ipynb_checkpoints/

File diff suppressed because one or more lines are too long

README.md (142 lines changed)
View File

@@ -1,20 +1,33 @@
 <img src="https://github.com/cullenwatson/JobSpy/assets/78247585/ae185b7e-e444-4712-8bb9-fa97f53e896b" width="400">

 **JobSpy** is a simple, yet comprehensive, job scraping library.

-## Features
+**Not technical?** Try out the web scraping tool on our site at [usejobspy.com](https://usejobspy.com).
+
+*Looking to build a data-focused software product?* **[Book a call](https://calendly.com/zachary-products/15min)** *to work with us.*
+\
+Check out another project we wrote: ***[HomeHarvest](https://github.com/ZacharyHampton/HomeHarvest)** a Python package for real estate scraping*
+
+## Features

 - Scrapes job postings from **LinkedIn**, **Indeed** & **ZipRecruiter** simultaneously
 - Aggregates the job postings in a Pandas DataFrame
+- Proxy support (HTTP/S, SOCKS)
+
+[Video Guide for JobSpy](https://www.youtube.com/watch?v=RuP1HrAZnxs&pp=ygUgam9icyBzY3JhcGVyIGJvdCBsaW5rZWRpbiBpbmRlZWQ%3D) - Updated for release v1.1.3

 ![jobspy](https://github.com/cullenwatson/JobSpy/assets/78247585/ec7ef355-05f6-4fd3-8161-a817e31c5c57)

 ### Installation
 ```
-pip install python-jobspy
+pip install --upgrade python-jobspy
 ```
 _Python version >= [3.10](https://www.python.org/downloads/release/python-3100/) required_

 ### Usage
@@ -27,29 +40,36 @@ jobs: pd.DataFrame = scrape_jobs(
     search_term="software engineer",
     location="Dallas, TX",
     results_wanted=10,
-    country='USA'  # only needed for indeed
+    country_indeed='USA'  # only needed for indeed
+
+    # use if you want to use a proxy
+    # proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
+    # offset=25  # use if you want to start at a specific offset
 )

-if jobs.empty:
-    print("No jobs found.")
-else:
-    pd.set_option('display.max_columns', None)
-    pd.set_option('display.max_rows', None)
-    pd.set_option('display.width', None)
-    pd.set_option('display.max_colwidth', 50)  # set to 0 to see full job url / desc
+# formatting for pandas
+pd.set_option('display.max_columns', None)
+pd.set_option('display.max_rows', None)
+pd.set_option('display.width', None)
+pd.set_option('display.max_colwidth', 50)  # set to 0 to see full job url / desc

-    #1 output
-    print(jobs)
+# 1 output to console
+print(jobs)

-    #2 display in Jupyter Notebook
-    #display(jobs)
+# 2 display in Jupyter Notebook (1. pip install jupyter 2. jupyter notebook)
+# display(jobs)

-    #3 output to .csv
-    #jobs.to_csv('jobs.csv', index=False)
+# 3 output to .csv
+# jobs.to_csv('jobs.csv', index=False)
+
+# 4 output to .xlsx
+# jobs.to_xlsx('jobs.xlsx', index=False)
 ```

 ### Output
 ```
 SITE           TITLE                          COMPANY_NAME      CITY          STATE  JOB_TYPE  INTERVAL  MIN_AMOUNT  MAX_AMOUNT  JOB_URL                                            DESCRIPTION
 indeed         Software Engineer              AMERICAN SYSTEMS  Arlington     VA     None      yearly    200000      150000      https://www.indeed.com/viewjob?jk=5e409e577046...  THIS POSITION COMES WITH A 10K SIGNING BONUS!...
@@ -59,7 +79,9 @@ linkedin       Full-Stack Software Engineer   Rain              New York
 zip_recruiter  Software Engineer - New Grad   ZipRecruiter      Santa Monica  CA     fulltime  yearly    130000      150000      https://www.ziprecruiter.com/jobs/ziprecruiter...  We offer a hybrid work environment. Most US-ba...
 zip_recruiter  Software Developer             TEKsystems        Phoenix       AZ     fulltime  hourly    65          75          https://www.ziprecruiter.com/jobs/teksystems-0...  Top Skills' Details• 6 years of Java developme...
 ```
 ### Parameters for `scrape_jobs()`
 ```plaintext
 Required
 ├── site_type (List[enum]): linkedin, zip_recruiter, indeed
@@ -68,14 +90,16 @@ Optional
 ├── location (int)
 ├── distance (int): in miles
 ├── job_type (enum): fulltime, parttime, internship, contract
+├── proxy (str): in format 'http://user:pass@host:port' or [https, socks]
 ├── is_remote (bool)
 ├── results_wanted (int): number of job results to retrieve for each site specified in 'site_type'
 ├── easy_apply (bool): filters for jobs that are hosted on LinkedIn
-├── country (enum): filters the country on Indeed
+├── country_indeed (enum): filters the country on Indeed (see below for correct spelling)
+├── offset (enum): starts the search from an offset (e.g. 25 will start the search from the 25th result)
 ```
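Putting the two new optional parameters together: a sketch of a call that routes traffic through a proxy and resumes from an offset (the proxy URL below is a placeholder, not a working endpoint):

```
from jobspy import scrape_jobs

jobs = scrape_jobs(
    site_name=["indeed", "linkedin", "zip_recruiter"],
    search_term="software engineer",
    location="Dallas, TX",
    results_wanted=20,
    country_indeed="USA",
    proxy="http://user:pass@proxy.example.com:8080",  # placeholder credentials
    offset=25,  # start from the 25th result
)
print(len(jobs))
```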
 ### JobPost Schema
 ```plaintext
 JobPost
 ├── title (str)
@@ -95,60 +119,74 @@ JobPost
 └── date_posted (date)
 ```
+### Exceptions
+
+The following exceptions may be raised when using JobSpy:
+
+* `LinkedInException`
+* `IndeedException`
+* `ZipRecruiterException`
+
 ## Supported Countries for Job Searching

 ### **LinkedIn**

-LinkedIn searches globally & uses only the `location` parameter
+LinkedIn searches globally & uses only the `location` parameter.

 ### **ZipRecruiter**

-ZipRecruiter searches for jobs in US/Canada & uses only the `location` parameter
+ZipRecruiter searches for jobs in **US/Canada** & uses only the `location` parameter.

 ### **Indeed**

-For Indeed, the `country` parameter is required. Additionally, use the `location` parameter and include the city or state if necessary.
+Indeed supports most countries, but the `country_indeed` parameter is required. Additionally, use the `location` parameter to narrow down the location, e.g. city & state if necessary.

 You can specify the following countries when searching on Indeed (use the exact name):

 |                      |              |            |                |
 |----------------------|--------------|------------|----------------|
 | Argentina            | Australia    | Austria    | Bahrain        |
 | Belgium              | Brazil       | Canada     | Chile          |
 | China                | Colombia     | Costa Rica | Czech Republic |
 | Denmark              | Ecuador      | Egypt      | Finland        |
 | France               | Germany      | Greece     | Hong Kong      |
 | Hungary              | India        | Indonesia  | Ireland        |
 | Israel               | Italy        | Japan      | Kuwait         |
 | Luxembourg           | Malaysia     | Mexico     | Morocco        |
 | Netherlands          | New Zealand  | Nigeria    | Norway         |
 | Oman                 | Pakistan     | Panama     | Peru           |
 | Philippines          | Poland       | Portugal   | Qatar          |
 | Romania              | Saudi Arabia | Singapore  | South Africa   |
 | South Korea          | Spain        | Sweden     | Switzerland    |
 | Taiwan               | Thailand     | Turkey     | Ukraine        |
 | United Arab Emirates | UK           | USA        | Uruguay        |
 | Venezuela            | Vietnam      |            |                |
 ## Frequently Asked Questions

 ---

 **Q: Encountering issues with your queries?**

-**A:** Try reducing the number of `results_wanted` and/or broadening the filters. If problems persist, [submit an issue](#).
+**A:** Try reducing the number of `results_wanted` and/or broadening the filters. If problems persist, [submit an issue](https://github.com/cullenwatson/JobSpy/issues).

 ---

 **Q: Received a response code 429?**

-**A:** This indicates that you have been blocked by the job board site for sending too many requests. Currently, **ZipRecruiter** is particularly aggressive with blocking. We recommend:
+**A:** This indicates that you have been blocked by the job board site for sending too many requests. Currently, **LinkedIn** is particularly aggressive with blocking. We recommend:

 - Waiting a few seconds between requests.
-- Trying a VPN to change your IP address.
+- Trying a VPN or proxy to change your IP address.

-**Note:** Proxy support is in development and coming soon!

 ---

+**Q: Experiencing a "Segmentation fault: 11" on macOS Catalina?**
+
+**A:** This is due to the `tls_client` dependency not supporting your architecture. Solutions and workarounds include:
+
+- Upgrade to a newer version of macOS
+- Reach out to the maintainers of [tls_client](https://github.com/bogdanfinn/tls-client) for fixes

examples/JobSpy_Demo.ipynb (new file, 167 lines)
View File

@@ -0,0 +1,167 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "00a94b47-f47b-420f-ba7e-714ef219c006",
"metadata": {},
"outputs": [],
"source": [
"from jobspy import scrape_jobs\n",
"import pandas as pd\n",
"from IPython.display import display, HTML"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f773e6c-d9fc-42cc-b0ef-63b739e78435",
"metadata": {},
"outputs": [],
"source": [
"pd.set_option('display.max_columns', None)\n",
"pd.set_option('display.max_rows', None)\n",
"pd.set_option('display.width', None)\n",
"pd.set_option('display.max_colwidth', 50)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1253c1f8-9437-492e-9dd3-e7fe51099420",
"metadata": {},
"outputs": [],
"source": [
"# example 1 (no hyperlinks, USA)\n",
"jobs = scrape_jobs(\n",
" site_name=[\"linkedin\"],\n",
" location='san francisco',\n",
" search_term=\"engineer\",\n",
" results_wanted=5,\n",
"\n",
" # use if you want to use a proxy\n",
" # proxy=\"socks5://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
" proxy=\"http://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
" #proxy=\"https://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
")\n",
"display(jobs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6a581b2d-f7da-4fac-868d-9efe143ee20a",
"metadata": {},
"outputs": [],
"source": [
"# example 2 - remote USA & hyperlinks\n",
"jobs = scrape_jobs(\n",
" site_name=[\"linkedin\", \"zip_recruiter\", \"indeed\"],\n",
" # location='san francisco',\n",
" search_term=\"software engineer\",\n",
" country_indeed=\"USA\",\n",
" hyperlinks=True,\n",
" is_remote=True,\n",
" results_wanted=5, \n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fe8289bc-5b64-4202-9a64-7c117c83fd9a",
"metadata": {},
"outputs": [],
"source": [
"# use if hyperlinks=True\n",
"html = jobs.to_html(escape=False)\n",
"# change max-width: 200px to show more or less of the content\n",
"truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
"display(HTML(truncate_width))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "951c2fe1-52ff-407d-8bb1-068049b36777",
"metadata": {},
"outputs": [],
"source": [
"# example 3 - with hyperlinks, international - linkedin (no zip_recruiter)\n",
"jobs = scrape_jobs(\n",
" site_name=[\"linkedin\"],\n",
" location='berlin',\n",
" search_term=\"engineer\",\n",
" hyperlinks=True,\n",
" results_wanted=5,\n",
" easy_apply=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e37a521-caef-441c-8fc2-2eb5b2e7da62",
"metadata": {},
"outputs": [],
"source": [
"# use if hyperlinks=True\n",
"html = jobs.to_html(escape=False)\n",
"# change max-width: 200px to show more or less of the content\n",
"truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
"display(HTML(truncate_width))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0650e608-0b58-4bf5-ae86-68348035b16a",
"metadata": {},
"outputs": [],
"source": [
"# example 4 - international indeed (no zip_recruiter)\n",
"jobs = scrape_jobs(\n",
" site_name=[\"indeed\"],\n",
" search_term=\"engineer\",\n",
" country_indeed = \"China\",\n",
" hyperlinks=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40913ac8-3f8a-4d7e-ac47-afb88316432b",
"metadata": {},
"outputs": [],
"source": [
"# use if hyperlinks=True\n",
"html = jobs.to_html(escape=False)\n",
"# change max-width: 200px to show more or less of the content\n",
"truncate_width = f'<style>.dataframe td {{ max-width: 200px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }}</style>{html}'\n",
"display(HTML(truncate_width))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

examples/JobSpy_Demo.py (new file, 31 lines)
View File

@@ -0,0 +1,31 @@
from jobspy import scrape_jobs
import pandas as pd

jobs: pd.DataFrame = scrape_jobs(
    site_name=["indeed", "linkedin", "zip_recruiter"],
    search_term="software engineer",
    location="Dallas, TX",
    results_wanted=50,  # be wary the higher it is, the more likely you'll get blocked (rotating proxy should work tho)
    country_indeed='USA',
    offset=25  # start jobs from an offset (use if search failed and want to continue)
    # proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
)

# formatting for pandas
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', 50)  # set to 0 to see full job url / desc

# 1: output to console
print(jobs)

# 2: output to .csv
jobs.to_csv('./jobs.csv', index=False)
print('outputted to jobs.csv')

# 3: output to .xlsx
# jobs.to_xlsx('jobs.xlsx', index=False)

# 4: display in Jupyter Notebook (1. pip install jupyter 2. jupyter notebook)
# display(jobs)

poetry.lock (generated, 1774 lines changed)

File diff suppressed because it is too large Load Diff

View File

@@ -1,8 +1,9 @@
 [tool.poetry]
 name = "python-jobspy"
-version = "1.1.1"
+version = "1.1.10"
 description = "Job scraper for LinkedIn, Indeed & ZipRecruiter"
 authors = ["Zachary Hampton <zachary@zacharysproducts.com>", "Cullen Watson <cullen@cullen.ai>"]
+homepage = "https://github.com/cullenwatson/JobSpy"
 readme = "README.md"
 packages = [
View File

@@ -1,13 +1,18 @@
 import pandas as pd
 import concurrent.futures
 from concurrent.futures import ThreadPoolExecutor
-from typing import List, Tuple, NamedTuple, Dict
+from typing import List, Tuple, Optional

 from .jobs import JobType, Location
 from .scrapers.indeed import IndeedScraper
 from .scrapers.ziprecruiter import ZipRecruiterScraper
 from .scrapers.linkedin import LinkedInScraper
 from .scrapers import ScraperInput, Site, JobResponse, Country
+from .scrapers.exceptions import (
+    LinkedInException,
+    IndeedException,
+    ZipRecruiterException,
+)

 SCRAPER_MAPPING = {
     Site.LINKEDIN: LinkedInScraper,
@@ -16,38 +21,47 @@ SCRAPER_MAPPING = {
 }

-class ScrapeResults(NamedTuple):
-    jobs: pd.DataFrame
-    errors: pd.DataFrame

 def _map_str_to_site(site_name: str) -> Site:
     return Site[site_name.upper()]

 def scrape_jobs(
-    site_name: str | Site | List[Site],
+    site_name: str | List[str] | Site | List[Site],
     search_term: str,
     location: str = "",
     distance: int = None,
     is_remote: bool = False,
-    job_type: JobType = None,
+    job_type: str = None,
     easy_apply: bool = False,  # linkedin
     results_wanted: int = 15,
     country_indeed: str = "usa",
-    hyperlinks: bool = False
-) -> ScrapeResults:
+    hyperlinks: bool = False,
+    proxy: Optional[str] = None,
+    offset: Optional[int] = 0
+) -> pd.DataFrame:
     """
-    Asynchronously scrapes job data from multiple job sites.
+    Simultaneously scrapes job data from multiple job sites.
     :return: results_wanted: pandas dataframe containing job data
     """

+    def get_enum_from_value(value_str):
+        for job_type in JobType:
+            if value_str in job_type.value:
+                return job_type
+        raise Exception(f"Invalid job type: {value_str}")
+
+    job_type = get_enum_from_value(job_type) if job_type else None
+
     if type(site_name) == str:
-        site_name = _map_str_to_site(site_name)
+        site_type = [_map_str_to_site(site_name)]
+    else:  #: if type(site_name) == list
+        site_type = [
+            _map_str_to_site(site) if type(site) == str else site_name
+            for site in site_name
+        ]

     country_enum = Country.from_string(country_indeed)
-    site_type = [site_name] if type(site_name) == Site else site_name

     scraper_input = ScraperInput(
         site_type=site_type,
         country=country_enum,
@@ -58,103 +72,101 @@ def scrape_jobs(
         job_type=job_type,
         easy_apply=easy_apply,
         results_wanted=results_wanted,
+        offset=offset
     )

     def scrape_site(site: Site) -> Tuple[str, JobResponse]:
+        scraper_class = SCRAPER_MAPPING[site]
+        scraper = scraper_class(proxy=proxy)
         try:
-            scraper_class = SCRAPER_MAPPING[site]
-            scraper = scraper_class()
             scraped_data: JobResponse = scraper.scrape(scraper_input)
+        except (LinkedInException, IndeedException, ZipRecruiterException) as lie:
+            raise lie
         except Exception as e:
-            scraped_data = JobResponse(jobs=[], error=str(e), success=False)
+            # unhandled exceptions
+            if site == Site.LINKEDIN:
+                raise LinkedInException()
+            if site == Site.INDEED:
+                raise IndeedException()
+            if site == Site.ZIP_RECRUITER:
+                raise ZipRecruiterException()
+            else:
+                raise e
         return site.value, scraped_data

-    results, errors = {}, {}
+    site_to_jobs_dict = {}

     def worker(site):
         site_value, scraped_data = scrape_site(site)
         return site_value, scraped_data

     with ThreadPoolExecutor() as executor:
-        future_to_site = {executor.submit(worker, site): site for site in scraper_input.site_type}
+        future_to_site = {
+            executor.submit(worker, site): site for site in scraper_input.site_type
+        }

         for future in concurrent.futures.as_completed(future_to_site):
             site_value, scraped_data = future.result()
-            results[site_value] = scraped_data
-            if scraped_data.error:
-                errors[site_value] = scraped_data.error
+            site_to_jobs_dict[site_value] = scraped_data

-    dfs = []
-    for site, job_response in results.items():
+    jobs_dfs: List[pd.DataFrame] = []
+    for site, job_response in site_to_jobs_dict.items():
         for job in job_response.jobs:
-            data = job.dict()
-            data["job_url_hyper"] = f'<a href="{data["job_url"]}">{data["job_url"]}</a>'
-            data["site"] = site
-            data["company"] = data["company_name"]
-            if data["job_type"]:
+            job_data = job.dict()
+            job_data[
+                "job_url_hyper"
+            ] = f'<a href="{job_data["job_url"]}">{job_data["job_url"]}</a>'
+            job_data["site"] = site
+            job_data["company"] = job_data["company_name"]
+            if job_data["job_type"]:
                 # Take the first value from the job type tuple
-                data["job_type"] = data["job_type"].value[0]
+                job_data["job_type"] = job_data["job_type"].value[0]
             else:
-                data["job_type"] = None
+                job_data["job_type"] = None

-            data["location"] = Location(**data["location"]).display_location()
+            job_data["location"] = Location(**job_data["location"]).display_location()

-            compensation_obj = data.get("compensation")
+            compensation_obj = job_data.get("compensation")
             if compensation_obj and isinstance(compensation_obj, dict):
-                data["interval"] = (
+                job_data["interval"] = (
                     compensation_obj.get("interval").value
                     if compensation_obj.get("interval")
                     else None
                 )
-                data["min_amount"] = compensation_obj.get("min_amount")
-                data["max_amount"] = compensation_obj.get("max_amount")
-                data["currency"] = compensation_obj.get("currency", "USD")
+                job_data["min_amount"] = compensation_obj.get("min_amount")
+                job_data["max_amount"] = compensation_obj.get("max_amount")
+                job_data["currency"] = compensation_obj.get("currency", "USD")
             else:
-                data["interval"] = None
-                data["min_amount"] = None
-                data["max_amount"] = None
-                data["currency"] = None
+                job_data["interval"] = None
+                job_data["min_amount"] = None
+                job_data["max_amount"] = None
+                job_data["currency"] = None

-            job_df = pd.DataFrame([data])
-            dfs.append(job_df)
+            job_df = pd.DataFrame([job_data])
+            jobs_dfs.append(job_df)

-    errors_list = [(key, value) for key, value in errors.items()]
-    errors_df = pd.DataFrame(errors_list, columns=["Site", "Error"])
-
-    if dfs:
-        df = pd.concat(dfs, ignore_index=True)
-        if hyperlinks:
-            desired_order = [
-                "site",
-                "title",
-                "company",
-                "location",
-                "job_type",
-                "interval",
-                "min_amount",
-                "max_amount",
-                "currency",
-                "job_url_hyper",
-                "description",
-            ]
-        else:
-            desired_order = [
-                "site",
-                "title",
-                "company",
-                "location",
-                "job_type",
-                "interval",
-                "min_amount",
-                "max_amount",
-                "currency",
-                "job_url",
-                "description",
-            ]
-        df = df[desired_order]
+    if jobs_dfs:
+        jobs_df = pd.concat(jobs_dfs, ignore_index=True)
+        desired_order: List[str] = [
+            "job_url_hyper" if hyperlinks else "job_url",
+            "site",
+            "title",
+            "company",
+            "location",
+            "job_type",
+            "date_posted",
+            "interval",
+            "benefits",
+            "min_amount",
+            "max_amount",
+            "currency",
+            "emails",
+            "description",
+        ]
+        jobs_formatted_df = jobs_df[desired_order]
     else:
-        df = pd.DataFrame()
+        jobs_formatted_df = pd.DataFrame()

-    return ScrapeResults(jobs=df, errors=errors_df)
+    return jobs_formatted_df
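The refactored scrape_jobs above fans out one thread per site and collects results with as_completed; a stripped-down sketch of that same pattern with a stand-in worker (standard library only, names are illustrative):

```
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor

def fake_scrape(site: str) -> tuple[str, list[str]]:
    # stand-in for Scraper.scrape(): returns (site, jobs)
    return site, [f"{site}-job-{i}" for i in range(3)]

sites = ["linkedin", "indeed", "zip_recruiter"]
results: dict[str, list[str]] = {}

with ThreadPoolExecutor() as executor:
    future_to_site = {executor.submit(fake_scrape, site): site for site in sites}
    for future in concurrent.futures.as_completed(future_to_site):
        site, jobs = future.result()
        results[site] = jobs

print(results)
```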

View File

@@ -170,7 +170,7 @@ class CompensationInterval(Enum):

 class Compensation(BaseModel):
-    interval: CompensationInterval
+    interval: Optional[CompensationInterval] = None
     min_amount: int = None
     max_amount: int = None
     currency: Optional[str] = "USD"

@@ -186,25 +186,9 @@ class JobPost(BaseModel):
     job_type: Optional[JobType] = None
     compensation: Optional[Compensation] = None
     date_posted: Optional[date] = None
+    benefits: Optional[str] = None
+    emails: Optional[list[str]] = None

 class JobResponse(BaseModel):
-    success: bool
-    error: str = None
-
-    total_results: Optional[int] = None
-
     jobs: list[JobPost] = []
-
-    returned_results: int = None
-
-    @validator("returned_results", pre=True, always=True)
-    def set_returned_results(cls, v, values):
-        jobs_list = values.get("jobs")
-        if v is None:
-            if jobs_list is not None:
-                return len(jobs_list)
-            else:
-                return 0
-        return v

View File

@@ -2,11 +2,6 @@ from ..jobs import Enum, BaseModel, JobType, JobResponse, Country
 from typing import List, Optional, Any

-class StatusException(Exception):
-    def __init__(self, status_code: int):
-        self.status_code = status_code

 class Site(Enum):
     LINKEDIN = "linkedin"
     INDEED = "indeed"

@@ -23,13 +18,15 @@ class ScraperInput(BaseModel):
     is_remote: bool = False
     job_type: Optional[JobType] = None
     easy_apply: bool = None  # linkedin
+    offset: int = 0
     results_wanted: int = 15

 class Scraper:
-    def __init__(self, site: Site):
+    def __init__(self, site: Site, proxy: Optional[List[str]] = None):
         self.site = site
+        self.proxy = (lambda p: {"http": p, "https": p} if p else None)(proxy)

     def scrape(self, scraper_input: ScraperInput) -> JobResponse:
         ...
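The one-line lambda in Scraper.__init__ above just normalizes a single proxy URL into the mapping that requests-style clients expect; an equivalent spelled out as a plain function (the helper name is hypothetical):

```
from typing import Optional

def build_proxies(proxy: Optional[str]) -> Optional[dict]:
    # same behavior as the inline lambda: one URL covers both schemes
    if not proxy:
        return None
    return {"http": proxy, "https": proxy}

# build_proxies("http://user:pass@host:port")
# -> {"http": "http://user:pass@host:port", "https": "http://user:pass@host:port"}
```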

View File

@@ -0,0 +1,18 @@
"""
jobspy.scrapers.exceptions
~~~~~~~~~~~~~~~~~~~
This module contains the set of Scrapers' exceptions.
"""
class LinkedInException(Exception):
    """Failed to scrape LinkedIn"""


class IndeedException(Exception):
    """Failed to scrape Indeed"""


class ZipRecruiterException(Exception):
    """Failed to scrape ZipRecruiter"""

View File

@@ -1,8 +1,13 @@
+"""
+jobspy.scrapers.indeed
+~~~~~~~~~~~~~~~~~~~
+
+This module contains routines to scrape Indeed.
+"""
 import re
 import math
 import io
 import json
-import traceback

 from datetime import datetime
 from typing import Optional
@@ -12,6 +17,7 @@ from bs4 import BeautifulSoup
 from bs4.element import Tag
 from concurrent.futures import ThreadPoolExecutor, Future

+from ..exceptions import IndeedException
 from ...jobs import (
     JobPost,
     Compensation,
@@ -20,26 +26,30 @@ from ...jobs import (
     JobResponse,
     JobType,
 )
-from .. import Scraper, ScraperInput, Site, Country, StatusException
+from .. import Scraper, ScraperInput, Site

-class ParsingException(Exception):
-    pass
+def extract_emails_from_text(text: str) -> Optional[list[str]]:
+    if not text:
+        return None
+    email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
+    return email_regex.findall(text)

 class IndeedScraper(Scraper):
-    def __init__(self):
+    def __init__(self, proxy: Optional[str] = None):
         """
         Initializes IndeedScraper with the Indeed job search url
         """
+        self.url = None
+        self.country = None
         site = Site(Site.INDEED)
-        super().__init__(site)
+        super().__init__(site, proxy=proxy)

         self.jobs_per_page = 15
         self.seen_urls = set()

     def scrape_page(
         self, scraper_input: ScraperInput, page: int, session: tls_client.Session
     ) -> tuple[list[JobPost], int]:
         """
         Scrapes a page of Indeed for jobs with scraper_input criteria
@@ -52,13 +62,13 @@ class IndeedScraper(Scraper):
         domain = self.country.domain_value
         self.url = f"https://{domain}.indeed.com"

-        job_list = []
+        job_list: list[JobPost] = []

         params = {
             "q": scraper_input.search_term,
             "l": scraper_input.location,
             "filter": 0,
-            "start": 0 + page * 10,
+            "start": scraper_input.offset + page * 10,
         }
         if scraper_input.distance:
             params["radius"] = scraper_input.distance
@@ -71,17 +81,26 @@ class IndeedScraper(Scraper):
         if sc_values:
             params["sc"] = "0kf:" + "".join(sc_values) + ";"

-        response = session.get(self.url + "/jobs", params=params, allow_redirects=True)
-        # print(response.status_code)
-
-        if response.status_code not in range(200, 400):
-            raise StatusException(response.status_code)
+        try:
+            response = session.get(
+                f"{self.url}/jobs",
+                params=params,
+                allow_redirects=True,
+                proxy=self.proxy,
+                timeout_seconds=10,
+            )
+            if response.status_code not in range(200, 400):
+                raise IndeedException(
+                    f"bad response with status code: {response.status_code}"
+                )
+        except Exception as e:
+            if "Proxy responded with" in str(e):
+                raise IndeedException("bad proxy")
+            raise IndeedException(str(e))

         soup = BeautifulSoup(response.content, "html.parser")
-        with open("text2.html", "w", encoding="utf-8") as f:
-            f.write(str(soup))
-
-        if "did not match any jobs" in str(soup):
-            raise ParsingException("Search did not match any jobs")
+        if "did not match any jobs" in response.text:
+            raise IndeedException("Parsing exception: Search did not match any jobs")

         jobs = IndeedScraper.parse_jobs(
             soup
@@ -89,11 +108,11 @@ class IndeedScraper(Scraper):
         total_num_jobs = IndeedScraper.total_jobs(soup)

         if (
             not jobs.get("metaData", {})
             .get("mosaicProviderJobCardsModel", {})
             .get("results")
         ):
-            raise Exception("No jobs found.")
+            raise IndeedException("No jobs found.")

         def process_job(job) -> Optional[JobPost]:
             job_url = f'{self.url}/jobs/viewjob?jk={job["jobkey"]}'
@@ -125,9 +144,10 @@ class IndeedScraper(Scraper):
                 date_posted = date_posted.strftime("%Y-%m-%d")

             description = self.get_description(job_url, session)
+            emails = extract_emails_from_text(description)
             with io.StringIO(job["snippet"]) as f:
-                soup = BeautifulSoup(f, "html.parser")
-                li_elements = soup.find_all("li")
+                soup_io = BeautifulSoup(f, "html.parser")
+                li_elements = soup_io.find_all("li")
                 if description is None and li_elements:
                     description = " ".join(li.text for li in li_elements)
@@ -140,6 +160,7 @@ class IndeedScraper(Scraper):
                     state=job.get("jobLocationState"),
                     country=self.country,
                 ),
+                emails=extract_emails_from_text(description),
                 job_type=job_type,
                 compensation=compensation,
                 date_posted=date_posted,
@@ -168,51 +189,33 @@ class IndeedScraper(Scraper):
         )

         pages_to_process = (
             math.ceil(scraper_input.results_wanted / self.jobs_per_page) - 1
         )

-        try:
-            #: get first page to initialize session
-            job_list, total_results = self.scrape_page(scraper_input, 0, session)
-
-            with ThreadPoolExecutor(max_workers=1) as executor:
-                futures: list[Future] = [
-                    executor.submit(self.scrape_page, scraper_input, page, session)
-                    for page in range(1, pages_to_process + 1)
-                ]
-
-                for future in futures:
-                    jobs, _ = future.result()
-                    job_list += jobs
-
-        except StatusException as e:
-            return JobResponse(
-                success=False,
-                error=f"Indeed returned status code {e.status_code}",
-            )
-        except ParsingException as e:
-            return JobResponse(
-                success=False,
-                error=f"Indeed failed to parse response: {e}",
-            )
-        except Exception as e:
-            return JobResponse(
-                success=False,
-                error=f"Indeed failed to scrape: {e}",
-            )
+        #: get first page to initialize session
+        job_list, total_results = self.scrape_page(scraper_input, 0, session)
+
+        with ThreadPoolExecutor(max_workers=1) as executor:
+            futures: list[Future] = [
+                executor.submit(self.scrape_page, scraper_input, page, session)
+                for page in range(1, pages_to_process + 1)
+            ]
+
+            for future in futures:
+                jobs, _ = future.result()
+                job_list += jobs

         if len(job_list) > scraper_input.results_wanted:
             job_list = job_list[: scraper_input.results_wanted]

         job_response = JobResponse(
-            success=True,
             jobs=job_list,
             total_results=total_results,
         )
         return job_response

-    def get_description(self, job_page_url: str, session: tls_client.Session) -> str:
+    def get_description(self, job_page_url: str, session: tls_client.Session) -> Optional[str]:
         """
         Retrieves job description by going to the job page url
         :param job_page_url:
@@ -226,9 +229,9 @@ class IndeedScraper(Scraper):
         try:
             response = session.get(
-                formatted_url, allow_redirects=True, timeout_seconds=5
+                formatted_url, allow_redirects=True, timeout_seconds=5, proxy=self.proxy
             )
-        except requests.exceptions.Timeout:
+        except Exception as e:
             return None

         if response.status_code not in range(200, 400):
@@ -255,14 +258,17 @@ class IndeedScraper(Scraper):
                     label = taxonomy["attributes"][0].get("label")
                     if label:
                         job_type_str = label.replace("-", "").replace(" ", "").lower()
-                        # print(f"Debug: job_type_str = {job_type_str}")
-                        return IndeedScraper.get_enum_from_value(job_type_str)
+                        return IndeedScraper.get_enum_from_job_type(job_type_str)
         return None

     @staticmethod
-    def get_enum_from_value(value_str):
-        for job_type in JobType:
-            if value_str in job_type.value:
+    def get_enum_from_job_type(job_type_str):
+        """
+        Given a string, returns the corresponding JobType enum member if a match is found.
+        """
+        for job_type in JobType:
+            if job_type_str in job_type.value:
                 return job_type
         return None
@@ -283,9 +289,9 @@ class IndeedScraper(Scraper):
         for tag in script_tags:
             if (
                 tag.string
                 and "mosaic.providerData" in tag.string
                 and "mosaic-provider-jobcards" in tag.string
             ):
                 return tag
         return None
@@ -301,9 +307,9 @@ class IndeedScraper(Scraper):
                 jobs = json.loads(m.group(1).strip())
                 return jobs
             else:
-                raise ParsingException("Could not find mosaic provider job cards data")
+                raise IndeedException("Could not find mosaic provider job cards data")
         else:
-            raise ParsingException(
+            raise IndeedException(
                 "Could not find a script tag containing mosaic provider data"
             )
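The new `"start": scraper_input.offset + page * 10` line is where the offset parameter takes effect for Indeed; the arithmetic, shown with illustrative values:

```
# illustrative only: how scrape_page maps offset + page onto Indeed's
# "start" query parameter (page numbering starts at 0)
offset = 25  # user-supplied scraper_input.offset
for page in range(3):
    start = offset + page * 10
    print(f"page {page} -> start={start}")
# page 0 -> start=25
# page 1 -> start=35
# page 2 -> start=45
```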

View File

@@ -1,31 +1,50 @@
-from typing import Optional, Tuple
+"""
+jobspy.scrapers.linkedin
+~~~~~~~~~~~~~~~~~~~
+
+This module contains routines to scrape LinkedIn.
+"""
+from typing import Optional
 from datetime import datetime
-import traceback

 import requests
-from requests.exceptions import Timeout
+import time
+import re
+from requests.exceptions import ProxyError
+from concurrent.futures import ThreadPoolExecutor, as_completed
 from bs4 import BeautifulSoup
 from bs4.element import Tag
+from threading import Lock

 from .. import Scraper, ScraperInput, Site
+from ..exceptions import LinkedInException
 from ...jobs import (
     JobPost,
     Location,
     JobResponse,
     JobType,
-    Compensation,
-    CompensationInterval,
 )

+def extract_emails_from_text(text: str) -> Optional[list[str]]:
+    if not text:
+        return None
+    email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
+    return email_regex.findall(text)

 class LinkedInScraper(Scraper):
-    def __init__(self):
+    MAX_RETRIES = 3
+    DELAY = 10
+
+    def __init__(self, proxy: Optional[str] = None):
         """
         Initializes LinkedInScraper with the LinkedIn job search url
         """
         site = Site(Site.LINKEDIN)
+        self.country = "worldwide"
         self.url = "https://www.linkedin.com"
-        super().__init__(site)
+        super().__init__(site, proxy=proxy)

     def scrape(self, scraper_input: ScraperInput) -> JobResponse:
         """
@@ -33,12 +52,12 @@ class LinkedInScraper(Scraper):
         :param scraper_input:
         :return: job_response
         """
-        self.country = "worldwide"
         job_list: list[JobPost] = []
         seen_urls = set()
-        page, processed_jobs, job_count = 0, 0, 0
+        url_lock = Lock()
+        page = scraper_input.offset // 25 + 25 if scraper_input.offset else 0

-        def job_type_code(job_type):
+        def job_type_code(job_type_enum):
             mapping = {
                 JobType.FULL_TIME: "F",
                 JobType.PART_TIME: "P",
@@ -47,129 +66,134 @@ class LinkedInScraper(Scraper):
                 JobType.TEMPORARY: "T",
             }

-            return mapping.get(job_type, "")
+            return mapping.get(job_type_enum, "")

-        with requests.Session() as session:
-            while len(job_list) < scraper_input.results_wanted:
-                params = {
-                    "keywords": scraper_input.search_term,
-                    "location": scraper_input.location,
-                    "distance": scraper_input.distance,
-                    "f_WT": 2 if scraper_input.is_remote else None,
-                    "f_JT": job_type_code(scraper_input.job_type)
-                    if scraper_input.job_type
-                    else None,
-                    "pageNum": page,
-                    "f_AL": "true" if scraper_input.easy_apply else None,
-                }
-                params = {k: v for k, v in params.items() if v is not None}
-
-                response = session.get(
-                    f"{self.url}/jobs/search", params=params, allow_redirects=True
-                )
-
-                if response.status_code != 200:
-                    reason = ' (too many requests)' if response.status_code == 429 else ''
-                    return JobResponse(
-                        success=False,
-                        error=f"LinkedIn returned {response.status_code} {reason}",
-                        jobs=job_list,
-                        total_results=job_count,
-                    )
-
-                soup = BeautifulSoup(response.text, "html.parser")
-
-                if page == 0:
-                    job_count_text = soup.find(
-                        "span", class_="results-context-header__job-count"
-                    ).text
-                    job_count = int("".join(filter(str.isdigit, job_count_text)))
-
-                for job_card in soup.find_all(
-                    "div",
-                    class_="base-card relative w-full hover:no-underline focus:no-underline base-card--link base-search-card base-search-card--link job-search-card",
-                ):
-                    processed_jobs += 1
-                    data_entity_urn = job_card.get("data-entity-urn", "")
-                    job_id = (
-                        data_entity_urn.split(":")[-1] if data_entity_urn else "N/A"
-                    )
-                    job_url = f"{self.url}/jobs/view/{job_id}"
-                    if job_url in seen_urls:
-                        continue
-                    seen_urls.add(job_url)
-                    job_info = job_card.find("div", class_="base-search-card__info")
-                    if job_info is None:
-                        continue
-                    title_tag = job_info.find("h3", class_="base-search-card__title")
-                    title = title_tag.text.strip() if title_tag else "N/A"
-
-                    company_tag = job_info.find("a", class_="hidden-nested-link")
-                    company = company_tag.text.strip() if company_tag else "N/A"
-
-                    metadata_card = job_info.find(
-                        "div", class_="base-search-card__metadata"
-                    )
-                    location: Location = self.get_location(metadata_card)
-
-                    datetime_tag = metadata_card.find(
-                        "time", class_="job-search-card__listdate"
-                    )
-                    description, job_type = LinkedInScraper.get_description(job_url)
-                    if datetime_tag:
-                        datetime_str = datetime_tag["datetime"]
-                        try:
-                            date_posted = datetime.strptime(datetime_str, "%Y-%m-%d")
-                        except Exception as e:
-                            date_posted = None
-                    else:
-                        date_posted = None
-
-                    job_post = JobPost(
-                        title=title,
-                        description=description,
-                        company_name=company,
-                        location=location,
-                        date_posted=date_posted,
-                        job_url=job_url,
-                        job_type=job_type,
-                        compensation=Compensation(
-                            interval=CompensationInterval.YEARLY, currency=None
-                        ),
-                    )
-                    job_list.append(job_post)
-                    if processed_jobs >= job_count:
-                        break
-                    if len(job_list) >= scraper_input.results_wanted:
-                        break
-                if processed_jobs >= job_count:
-                    break
-                if len(job_list) >= scraper_input.results_wanted:
-                    break
-
-                page += 1
+        while len(job_list) < scraper_input.results_wanted and page < 1000:
+            params = {
+                "keywords": scraper_input.search_term,
+                "location": scraper_input.location,
+                "distance": scraper_input.distance,
+                "f_WT": 2 if scraper_input.is_remote else None,
+                "f_JT": job_type_code(scraper_input.job_type)
+                if scraper_input.job_type
+                else None,
+                "pageNum": 0,
+                page: page + scraper_input.offset,
+                "f_AL": "true" if scraper_input.easy_apply else None,
+            }
+
+            params = {k: v for k, v in params.items() if v is not None}
+
+            retries = 0
+            while retries < self.MAX_RETRIES:
+                try:
+                    response = requests.get(
+                        f"{self.url}/jobs-guest/jobs/api/seeMoreJobPostings/search?",
+                        params=params,
+                        allow_redirects=True,
+                        proxies=self.proxy,
+                        timeout=10,
+                    )
+                    response.raise_for_status()
+                    break
+                except requests.HTTPError as e:
+                    if hasattr(e, 'response') and e.response is not None:
+                        if e.response.status_code == 429:
+                            time.sleep(self.DELAY)
+                            retries += 1
+                            continue
+                        else:
+                            raise LinkedInException(f"bad response status code: {e.response.status_code}")
+                    else:
+                        raise
+                except ProxyError as e:
+                    raise LinkedInException("bad proxy")
+                except Exception as e:
+                    raise LinkedInException(str(e))
+            else:
+                # Raise an exception if the maximum number of retries is reached
+                raise LinkedInException("Max retries reached, failed to get a valid response")
+
+            soup = BeautifulSoup(response.text, "html.parser")
+
+            with ThreadPoolExecutor(max_workers=5) as executor:
+                futures = []
+                for job_card in soup.find_all("div", class_="base-search-card"):
+                    job_url = None
+                    href_tag = job_card.find("a", class_="base-card__full-link")
+                    if href_tag and "href" in href_tag.attrs:
+                        href = href_tag.attrs["href"].split("?")[0]
+                        job_id = href.split("-")[-1]
+                        job_url = f"{self.url}/jobs/view/{job_id}"
+
+                    with url_lock:
+                        if job_url in seen_urls:
+                            continue
+                        seen_urls.add(job_url)
+
+                    futures.append(executor.submit(self.process_job, job_card, job_url))
+
+                for future in as_completed(futures):
+                    try:
+                        job_post = future.result()
+                        if job_post:
+                            job_list.append(job_post)
+                    except Exception as e:
+                        raise LinkedInException("Exception occurred while processing jobs")
+
+            page += 25

         job_list = job_list[: scraper_input.results_wanted]
-        job_response = JobResponse(
-            success=True,
-            jobs=job_list,
-            total_results=job_count,
-        )
-        return job_response
+        return JobResponse(jobs=job_list)

-    @staticmethod
-    def get_description(job_page_url: str) -> Optional[str]:
+    def process_job(self, job_card: Tag, job_url: str) -> Optional[JobPost]:
+        title_tag = job_card.find("span", class_="sr-only")
+        title = title_tag.get_text(strip=True) if title_tag else "N/A"
+
+        company_tag = job_card.find("h4", class_="base-search-card__subtitle")
+        company_a_tag = company_tag.find("a") if company_tag else None
+        company = company_a_tag.get_text(strip=True) if company_a_tag else "N/A"
+
+        metadata_card = job_card.find("div", class_="base-search-card__metadata")
+        location = self.get_location(metadata_card)
+
+        datetime_tag = metadata_card.find("time", class_="job-search-card__listdate") if metadata_card else None
+        date_posted = None
+        if datetime_tag and "datetime" in datetime_tag.attrs:
+            datetime_str = datetime_tag["datetime"]
+            try:
+                date_posted = datetime.strptime(datetime_str, "%Y-%m-%d")
+            except Exception as e:
+                date_posted = None
+
+        benefits_tag = job_card.find("span", class_="result-benefits__text")
+        benefits = " ".join(benefits_tag.get_text().split()) if benefits_tag else None
+
+        description, job_type = self.get_job_description(job_url)
+
+        return JobPost(
+            title=title,
+            description=description,
+            company_name=company,
+            location=location,
+            date_posted=date_posted,
+            job_url=job_url,
+            job_type=job_type,
+            benefits=benefits,
+            emails=extract_emails_from_text(description)
+        )
+
+    def get_job_description(self, job_page_url: str) -> tuple[None, None] | tuple[str | None, tuple[str | None, JobType | None]]:
         """
         Retrieves job description by going to the job page url
         :param job_page_url:
         :return: description or None
         """
         try:
-            response = requests.get(job_page_url, timeout=5)
-        except Timeout:
-            return None, None
-
-        if response.status_code not in range(200, 400):
+            response = requests.get(job_page_url, timeout=5, proxies=self.proxy)
+            response.raise_for_status()
+        except Exception as e:
             return None, None

         soup = BeautifulSoup(response.text, "html.parser")
@@ -177,19 +201,19 @@ class LinkedInScraper(Scraper):
             "div", class_=lambda x: x and "show-more-less-html__markup" in x
         )

-        text_content = None
+        description = None
         if div_content:
-            text_content = " ".join(div_content.get_text().split()).strip()
+            description = " ".join(div_content.get_text().split()).strip()

         def get_job_type(
-            soup: BeautifulSoup,
-        ) -> Tuple[Optional[str], Optional[JobType]]:
+            soup_job_type: BeautifulSoup,
+        ) -> JobType | None:
             """
             Gets the job type from job page
-            :param soup:
+            :param soup_job_type:
             :return: JobType
             """
-            h3_tag = soup.find(
+            h3_tag = soup_job_type.find(
                 "h3",
                 class_="description__job-criteria-subheader",
                 string=lambda text: "Employment type" in text,
@@ -208,7 +232,7 @@ class LinkedInScraper(Scraper):
                 return LinkedInScraper.get_enum_from_value(employment_type)

-        return text_content, get_job_type(soup)
+        return description, get_job_type(soup)

     @staticmethod
     def get_enum_from_value(value_str):
@@ -239,3 +263,9 @@ class LinkedInScraper(Scraper):
         )

         return location
+
+
+def extract_emails_from_text(text: str) -> Optional[list[str]]:
+    if not text:
+        return None
+    email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
+    return email_regex.findall(text)
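The MAX_RETRIES/DELAY handling added above retries only on HTTP 429 and fails fast otherwise; the shape of that loop, isolated into a standalone sketch (constants mirror the class attributes, the URL is whatever endpoint you pass in):

```
import time
import requests

MAX_RETRIES = 3
DELAY = 10  # seconds

def get_with_retry(url: str, params: dict) -> requests.Response:
    retries = 0
    while retries < MAX_RETRIES:
        try:
            response = requests.get(url, params=params, timeout=10)
            response.raise_for_status()
            return response
        except requests.HTTPError as e:
            if e.response is not None and e.response.status_code == 429:
                time.sleep(DELAY)  # rate limited: wait and retry
                retries += 1
                continue
            raise
    raise RuntimeError("Max retries reached, failed to get a valid response")
```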

View File

@@ -1,17 +1,24 @@
+"""
+jobspy.scrapers.ziprecruiter
+~~~~~~~~~~~~~~~~~~~
+
+This module contains routines to scrape ZipRecruiter.
+"""
 import math
 import json
 import re
-import traceback
-from datetime import datetime
-from typing import Optional, Tuple
-from urllib.parse import urlparse, parse_qs
+from datetime import datetime, date
+from typing import Optional, Tuple, Any
+from urllib.parse import urlparse, parse_qs, urlunparse

 import tls_client
+import requests
 from bs4 import BeautifulSoup
 from bs4.element import Tag
 from concurrent.futures import ThreadPoolExecutor, Future

-from .. import Scraper, ScraperInput, Site, StatusException
+from .. import Scraper, ScraperInput, Site
+from ..exceptions import ZipRecruiterException
 from ...jobs import (
     JobPost,
     Compensation,
@@ -22,15 +29,21 @@ from ...jobs import (
     Country,
 )

+def extract_emails_from_text(text: str) -> Optional[list[str]]:
+    if not text:
+        return None
+    email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
+    return email_regex.findall(text)
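This extract_emails_from_text helper is now duplicated verbatim in all three scraper modules; a quick check of what the regex matches:

```
import re

email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
text = "Contact hiring@example.com or hr.team@example.co.uk for details."
print(email_regex.findall(text))
# ['hiring@example.com', 'hr.team@example.co.uk']
```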
class ZipRecruiterScraper(Scraper): class ZipRecruiterScraper(Scraper):
def __init__(self): def __init__(self, proxy: Optional[str] = None):
""" """
Initializes LinkedInScraper with the ZipRecruiter job search url Initializes LinkedInScraper with the ZipRecruiter job search url
""" """
site = Site(Site.ZIP_RECRUITER) site = Site(Site.ZIP_RECRUITER)
self.url = "https://www.ziprecruiter.com" self.url = "https://www.ziprecruiter.com"
super().__init__(site) super().__init__(site, proxy=proxy)
self.jobs_per_page = 20 self.jobs_per_page = 20
self.seen_urls = set() self.seen_urls = set()
@@ -38,83 +51,69 @@ class ZipRecruiterScraper(Scraper):
client_identifier="chrome112", random_tls_extension_order=True client_identifier="chrome112", random_tls_extension_order=True
) )
def scrape_page( def find_jobs_in_page(
self, scraper_input: ScraperInput, page: int self, scraper_input: ScraperInput, page: int
) -> tuple[list[JobPost], int | None]: ) -> list[JobPost]:
""" """
Scrapes a page of ZipRecruiter for jobs with scraper_input criteria Scrapes a page of ZipRecruiter for jobs with scraper_input criteria
:param scraper_input: :param scraper_input:
:param page: :param page:
:param session: :return: jobs found on page
:return: jobs found on page, total number of jobs found for search
""" """
job_list: list[JobPost] = []
job_list = [] try:
response = self.session.get(
job_type_value = None f"{self.url}/jobs-search",
if scraper_input.job_type: headers=ZipRecruiterScraper.headers(),
if scraper_input.job_type.value == "fulltime": params=ZipRecruiterScraper.add_params(scraper_input, page),
job_type_value = "full_time" allow_redirects=True,
elif scraper_input.job_type.value == "parttime": proxy=self.proxy,
job_type_value = "part_time" timeout_seconds=10,
else: )
job_type_value = scraper_input.job_type.value if response.status_code != 200:
raise ZipRecruiterException(
params = { f"bad response status code: {response.status_code}"
"search": scraper_input.search_term, )
"location": scraper_input.location, except Exception as e:
"page": page, if "Proxy responded with non 200 code" in str(e):
"form": "jobs-landing", raise ZipRecruiterException("bad proxy")
} raise ZipRecruiterException(str(e))
if scraper_input.is_remote:
params["refine_by_location_type"] = "only_remote"
if scraper_input.distance:
params["radius"] = scraper_input.distance
if job_type_value:
params[
"refine_by_employment"
] = f"employment_type:employment_type:{job_type_value}"
response = self.session.get(
self.url + "/jobs-search",
headers=ZipRecruiterScraper.headers(),
params=params,
allow_redirects=True,
)
# print(response.status_code)
if response.status_code != 200:
raise StatusException(response.status_code)
html_string = response.text
soup = BeautifulSoup(html_string, "html.parser")
script_tag = soup.find("script", {"id": "js_variables"})
data = json.loads(script_tag.string)
if page == 1:
job_count = int(data["totalJobCount"].replace(",", ""))
else: else:
job_count = None soup = BeautifulSoup(response.text, "html.parser")
js_tag = soup.find("script", {"id": "js_variables"})
if js_tag:
page_json = json.loads(js_tag.string)
jobs_list = page_json.get("jobList")
if jobs_list:
page_variant = "javascript"
# print('type javascript', len(jobs_list))
else:
page_variant = "html_2"
jobs_list = soup.find_all("div", {"class": "job_content"})
# print('type 2 html', len(jobs_list))
else:
page_variant = "html_1"
jobs_list = soup.find_all("li", {"class": "job-listing"})
# print('type 1 html', len(jobs_list))
with ThreadPoolExecutor(max_workers=10) as executor: with ThreadPoolExecutor(max_workers=10) as executor:
if "jobList" in data and data["jobList"]: if page_variant == "javascript":
jobs_js = data["jobList"]
job_results = [ job_results = [
executor.submit(self.process_job_js, job) for job in jobs_js executor.submit(self.process_job_javascript, job)
for job in jobs_list
] ]
else: elif page_variant == "html_1":
jobs_html = soup.find_all("div", {"class": "job_content"})
job_results = [ job_results = [
executor.submit(self.process_job_html, job) for job in jobs_html executor.submit(self.process_job_html_1, job) for job in jobs_list
]
elif page_variant == "html_2":
job_results = [
executor.submit(self.process_job_html_2, job) for job in jobs_list
] ]
job_list = [result.result() for result in job_results if result.result()] job_list = [result.result() for result in job_results if result.result()]
return job_list
return job_list, job_count
     def scrape(self, scraper_input: ScraperInput) -> JobResponse:
         """

 @@ -122,56 +121,35 @@ class ZipRecruiterScraper(Scraper):

         :param scraper_input:
         :return: job_response
         """
+        start_page = (scraper_input.offset // self.jobs_per_page) + 1 if scraper_input.offset else 1
+        #: get first page to initialize session
+        job_list: list[JobPost] = self.find_jobs_in_page(scraper_input, start_page)
         pages_to_process = max(
             3, math.ceil(scraper_input.results_wanted / self.jobs_per_page)
         )
-        try:
-            #: get first page to initialize session
-            job_list, total_results = self.scrape_page(scraper_input, 1)
-
-            with ThreadPoolExecutor(max_workers=10) as executor:
-                futures: list[Future] = [
-                    executor.submit(self.scrape_page, scraper_input, page)
-                    for page in range(2, pages_to_process + 1)
-                ]
-
-                for future in futures:
-                    jobs, _ = future.result()
-                    job_list += jobs
-        except StatusException as e:
-            return JobResponse(
-                success=False,
-                error=f"ZipRecruiter returned status code {e.status_code}",
-            )
-        except Exception as e:
-            return JobResponse(
-                success=False,
-                error=f"ZipRecruiter failed to scrape: {e}",
-            )
-
-        #: note: this does not handle if the results are more or less than the results_wanted
-        if len(job_list) > scraper_input.results_wanted:
-            job_list = job_list[: scraper_input.results_wanted]
-
-        job_response = JobResponse(
-            success=True,
-            jobs=job_list,
-            total_results=total_results,
-        )
-        return job_response
+        with ThreadPoolExecutor(max_workers=10) as executor:
+            futures: list[Future] = [
+                executor.submit(self.find_jobs_in_page, scraper_input, page)
+                for page in range(start_page + 1, start_page + pages_to_process + 2)
+            ]
+
+            for future in futures:
+                jobs = future.result()
+                job_list += jobs
+
+        job_list = job_list[: scraper_input.results_wanted]
+        return JobResponse(jobs=job_list)
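The offset parameter introduced in PR #51 is folded into a starting page before any request is made. A worked example of the arithmetic, assuming jobs_per_page is 20 (the page size this scraper uses elsewhere):

jobs_per_page = 20  # assumption; mirrors self.jobs_per_page

def start_page(offset: int) -> int:
    return (offset // jobs_per_page) + 1 if offset else 1

for offset in (0, 19, 20, 45):
    print(offset, "->", start_page(offset))
# 0 -> 1, 19 -> 1, 20 -> 2, 45 -> 3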
-    def process_job_html(self, job: Tag) -> Optional[JobPost]:
+    def process_job_html_1(self, job: Tag) -> Optional[JobPost]:
         """
         Parses a job from the job content tag
         :param job: BeautifulSoup Tag for one job post
         :return JobPost
+        TODO this method isn't finished due to not encountering this type of html often
         """
-        job_url = job.find("a", {"class": "job_link"})["href"]
+        job_url = self.cleanurl(job.find("a", {"class": "job_link"})["href"])
         if job_url in self.seen_urls:
             return None

 @@ -179,8 +157,7 @@ class ZipRecruiterScraper(Scraper):

         company = job.find("a", {"class": "company_name"}).text.strip()

         description, updated_job_url = self.get_description(job_url)
-        if updated_job_url is not None:
-            job_url = updated_job_url
+        # job_url = updated_job_url if updated_job_url else job_url
         if description is None:
             description = job.find("p", {"class": "job_snippet"}).text.strip()

 @@ -188,7 +165,7 @@ class ZipRecruiterScraper(Scraper):

         job_type = None
         if job_type_element:
             job_type_text = (
-                job_type_element.text.strip().lower().replace("-", "").replace(" ", "")
+                job_type_element.text.strip().lower().replace("_", "").replace(" ", "")
             )
             job_type = ZipRecruiterScraper.get_job_type_enum(job_type_text)

 @@ -203,26 +180,68 @@ class ZipRecruiterScraper(Scraper):

             compensation=ZipRecruiterScraper.get_compensation(job),
             date_posted=date_posted,
             job_url=job_url,
+            emails=extract_emails_from_text(description),
         )
         return job_post
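The emails field is filled by extract_emails_from_text, one of the helpers added in the new utils.py. The diff does not show its body, so here is a plausible regex-based sketch of what such a helper could look like (the pattern is an assumption, not the project's actual implementation):

import re
from typing import Optional

EMAIL_REGEX = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails_from_text(text: str) -> Optional[list[str]]:
    # Return None rather than an empty list so the JobPost field stays unset
    if not text:
        return None
    return EMAIL_REGEX.findall(text) or None

print(extract_emails_from_text("Apply at jobs@example.com or hr@example.org"))
# ['jobs@example.com', 'hr@example.org']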
-    def process_job_js(self, job: dict) -> JobPost:
+    def process_job_html_2(self, job: Tag) -> Optional[JobPost]:
+        """
+        Parses a job from the job content tag for a second variant of HTML that ZR uses
+        :param job: BeautifulSoup Tag for one job post
+        :return JobPost
+        """
+        job_url = self.cleanurl(job.find("a", class_="job_link")["href"])
+        title = job.find("h2", class_="title").text
+        company = job.find("a", class_="company_name").text.strip()
+
+        description, updated_job_url = self.get_description(job_url)
+        # job_url = updated_job_url if updated_job_url else job_url
+        if description is None:
+            description = job.find("p", class_="job_snippet").get_text().strip()
+
+        job_type_text = job.find("li", class_="perk_item perk_type")
+        job_type = None
+        if job_type_text:
+            job_type_text = (
+                job_type_text.get_text()
+                .strip()
+                .lower()
+                .replace("-", "")
+                .replace(" ", "")
+            )
+            job_type = ZipRecruiterScraper.get_job_type_enum(job_type_text)
+        date_posted = ZipRecruiterScraper.get_date_posted(job)
+
+        job_post = JobPost(
+            title=title,
+            description=description,
+            company_name=company,
+            location=ZipRecruiterScraper.get_location(job),
+            job_type=job_type,
+            compensation=ZipRecruiterScraper.get_compensation(job),
+            date_posted=date_posted,
+            job_url=job_url,
+        )
+        return job_post
+    def process_job_javascript(self, job: dict) -> JobPost:
         title = job.get("Title")
-        description = BeautifulSoup(
-            job.get("Snippet", "").strip(), "html.parser"
-        ).get_text()
+        job_url = self.cleanurl(job.get("JobURL"))
+
+        description, updated_job_url = self.get_description(job_url)
+        # job_url = updated_job_url if updated_job_url else job_url
+        if description is None:
+            description = BeautifulSoup(
+                job.get("Snippet", "").strip(), "html.parser"
+            ).get_text()

         company = job.get("OrgName")
         location = Location(
             city=job.get("City"), state=job.get("State"), country=Country.US_CANADA
         )
-        try:
-            job_type = ZipRecruiterScraper.get_job_type_enum(
-                job.get("EmploymentType", "").replace("-", "_").lower()
-            )
-        except ValueError:
-            return None
+        job_type = ZipRecruiterScraper.get_job_type_enum(
+            job.get("EmploymentType", "").replace("-", "").lower()
+        )

         formatted_salary = job.get("FormattedSalaryShort", "")
         salary_parts = formatted_salary.split(" ")

 @@ -258,7 +277,6 @@ class ZipRecruiterScraper(Scraper):

             date_posted = date_posted_obj.date()
         else:
             date_posted = date.today()
-        job_url = job.get("JobURL")

         return JobPost(
             title=title,

 @@ -272,17 +290,11 @@ class ZipRecruiterScraper(Scraper):

         )
         return job_post
-    @staticmethod
-    def get_enum_from_value(value_str):
-        for job_type in JobType:
-            if value_str in job_type.value:
-                return job_type
-        return None
-
     @staticmethod
     def get_job_type_enum(job_type_str: str) -> Optional[JobType]:
         for job_type in JobType:
             if job_type_str in job_type.value:
-                a = True
                 return job_type
         return None
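The membership test job_type_str in job_type.value only works because each JobType member's value is a collection of normalized synonyms. A trimmed-down stand-in that shows the idea (the real enum in jobspy carries more synonyms per member):

from enum import Enum
from typing import Optional

class JobType(Enum):  # illustrative stand-in, not the project's full enum
    FULL_TIME = ("fulltime", "full_time")
    PART_TIME = ("parttime", "part_time")

def get_job_type_enum(job_type_str: str) -> Optional[JobType]:
    for job_type in JobType:
        if job_type_str in job_type.value:
            return job_type
    return None

print(get_job_type_enum("fulltime"))  # JobType.FULL_TIME
print(get_job_type_enum("contract"))  # None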
 @@ -294,14 +306,17 @@ class ZipRecruiterScraper(Scraper):

         :return: description or None, response url
         """
         try:
-            response = self.session.get(
+            response = requests.get(
                 job_page_url,
                 headers=ZipRecruiterScraper.headers(),
                 allow_redirects=True,
-                timeout_seconds=5,
+                timeout=5,
+                proxies=self.proxy,
             )
-        except requests.exceptions.Timeout:
-            return None
+            if response.status_code not in range(200, 400):
+                return None, None
+        except Exception as e:
+            return None, None

         html_string = response.content
         soup_job = BeautifulSoup(html_string, "html.parser")

 @@ -311,6 +326,36 @@ class ZipRecruiterScraper(Scraper):

         return job_description_div.text.strip(), response.url
         return None, response.url
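get_description now goes through plain requests with a proxies mapping instead of the shared tls_client session, which is why the keyword changed from timeout_seconds to timeout. The call in isolation, with placeholder URL and proxy address (assuming self.proxy already follows requests' proxies format):

import requests

proxies = {  # placeholder addresses
    "http": "http://user:pass@proxy.example:8080",
    "https": "http://user:pass@proxy.example:8080",
}

response = requests.get(
    "https://www.ziprecruiter.com/jobs/example",  # placeholder job page URL
    allow_redirects=True,
    timeout=5,
    proxies=proxies,
)
print(response.status_code, response.url)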
+    @staticmethod
+    def add_params(scraper_input, page) -> dict[str, str | Any]:
+        params = {
+            "search": scraper_input.search_term,
+            "location": scraper_input.location,
+            "page": page,
+            "form": "jobs-landing",
+        }
+        job_type_value = None
+        if scraper_input.job_type:
+            if scraper_input.job_type.value == "fulltime":
+                job_type_value = "full_time"
+            elif scraper_input.job_type.value == "parttime":
+                job_type_value = "part_time"
+            else:
+                job_type_value = scraper_input.job_type.value
+
+        if job_type_value:
+            params[
+                "refine_by_employment"
+            ] = f"employment_type:employment_type:{job_type_value}"
+
+        if scraper_input.is_remote:
+            params["refine_by_location_type"] = "only_remote"
+
+        if scraper_input.distance:
+            params["radius"] = scraper_input.distance
+
+        return params
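To see the query string add_params produces, here is a standalone mirror of the same logic run against a sample input (SimpleNamespace stands in for the real ScraperInput model):

from types import SimpleNamespace

def build_params(scraper_input, page):
    # Same logic as add_params above, inlined so this snippet runs on its own
    params = {
        "search": scraper_input.search_term,
        "location": scraper_input.location,
        "page": page,
        "form": "jobs-landing",
    }
    if scraper_input.job_type:
        mapping = {"fulltime": "full_time", "parttime": "part_time"}
        value = mapping.get(scraper_input.job_type.value, scraper_input.job_type.value)
        params["refine_by_employment"] = f"employment_type:employment_type:{value}"
    if scraper_input.is_remote:
        params["refine_by_location_type"] = "only_remote"
    if scraper_input.distance:
        params["radius"] = scraper_input.distance
    return params

demo = SimpleNamespace(
    search_term="software engineer",
    location="Austin, TX",
    job_type=SimpleNamespace(value="fulltime"),
    is_remote=False,
    distance=25,
)
print(build_params(demo, page=1))
# {'search': 'software engineer', 'location': 'Austin, TX', 'page': 1,
#  'form': 'jobs-landing',
#  'refine_by_employment': 'employment_type:employment_type:full_time', 'radius': 25}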
     @staticmethod
     def get_interval(interval_str: str):
         """

 @@ -327,7 +372,7 @@ class ZipRecruiterScraper(Scraper):

         return CompensationInterval(interval_str)

     @staticmethod
-    def get_date_posted(job: BeautifulSoup) -> Optional[datetime.date]:
+    def get_date_posted(job: Tag) -> Optional[datetime.date]:
         """
         Extracts the date a job was posted
         :param job

 @@ -353,7 +398,7 @@ class ZipRecruiterScraper(Scraper):

         return None

     @staticmethod
-    def get_compensation(job: BeautifulSoup) -> Optional[Compensation]:
+    def get_compensation(job: Tag) -> Optional[Compensation]:
         """
         Parses the compensation tag from the job BeautifulSoup object
         :param job

 @@ -394,7 +439,7 @@ class ZipRecruiterScraper(Scraper):

         return create_compensation_object(pay)

     @staticmethod
-    def get_location(job: BeautifulSoup) -> Location:
+    def get_location(job: Tag) -> Location:
         """
         Extracts the job location from the BeautifulSoup object
         :param job:

 @@ -421,3 +466,9 @@ class ZipRecruiterScraper(Scraper):

         return {
             "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36"
         }
+    @staticmethod
+    def cleanurl(url):
+        parsed_url = urlparse(url)
+        return urlunparse(
+            (parsed_url.scheme, parsed_url.netloc, parsed_url.path, parsed_url.params, "", "")
+        )
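cleanurl keeps scheme, host, and path but strips the query string and fragment, so the same posting reached through different tracking parameters collapses to a single seen_urls entry. For example (placeholder URL):

from urllib.parse import urlparse, urlunparse

def cleanurl(url):
    p = urlparse(url)
    return urlunparse((p.scheme, p.netloc, p.path, p.params, "", ""))

print(cleanurl("https://www.ziprecruiter.com/jobs/acme-123?source=email&tsid=99#top"))
# https://www.ziprecruiter.com/jobs/acme-123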
src/tests/test_all.py (new file, +12 lines)

 @@ -0,0 +1,12 @@

+from ..jobspy import scrape_jobs
+import pandas as pd
+
+
+def test_all():
+    result = scrape_jobs(
+        site_name=["linkedin", "indeed", "zip_recruiter"],
+        search_term="software engineer",
+        results_wanted=5,
+    )
+
+    assert isinstance(result, pd.DataFrame) and not result.empty, "Result should be a non-empty DataFrame"
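The new offset parameter can be exercised from the public scrape_jobs entry point as well; a hedged usage sketch (network access, an installed jobspy, and the offset keyword from PR #51 assumed):

from jobspy import scrape_jobs

jobs = scrape_jobs(
    site_name="zip_recruiter",
    search_term="software engineer",
    results_wanted=20,
    offset=40,  # skip the first 40 results, i.e. start around page 3
)
print(jobs.head())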
src/tests/test_indeed.py
 @@ -1,4 +1,5 @@

 from ..jobspy import scrape_jobs
+import pandas as pd


 def test_indeed():

 @@ -6,4 +7,4 @@ def test_indeed():

         site_name="indeed",
         search_term="software engineer",
     )
-    assert result is not None
+    assert isinstance(result, pd.DataFrame) and not result.empty, "Result should be a non-empty DataFrame"
src/tests/test_linkedin.py
 @@ -1,4 +1,5 @@

-from jobspy import scrape_jobs
+from ..jobspy import scrape_jobs
+import pandas as pd


 def test_linkedin():

 @@ -6,4 +7,4 @@ def test_linkedin():

         site_name="linkedin",
         search_term="software engineer",
     )
-    assert result is not None
+    assert isinstance(result, pd.DataFrame) and not result.empty, "Result should be a non-empty DataFrame"
src/tests/test_ziprecruiter.py
 @@ -1,4 +1,5 @@

-from jobspy import scrape_jobs
+from ..jobspy import scrape_jobs
+import pandas as pd


 def test_ziprecruiter():

 @@ -7,4 +8,4 @@ def test_ziprecruiter():

         search_term="software engineer",
     )
-    assert result is not None
+    assert isinstance(result, pd.DataFrame) and not result.empty, "Result should be a non-empty DataFrame"
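All four tests can be run programmatically as well as from the command line; pytest.main mirrors running pytest src/tests -v:

import pytest

raise SystemExit(pytest.main(["src/tests", "-v"]))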