Compare commits


52 Commits

Author SHA1 Message Date
Vincent Yan
eed7fca300 Get full indeed description (#70) 2023-11-27 15:00:36 -06:00
Faraz Khan
dfb8c18c51 include location with 3 parts (#69) 2023-11-10 16:59:42 -06:00
Faraz Khan
81f70ff8a5 added salary data for linkedin (#68) 2023-11-09 14:57:15 -06:00
Cullen Watson
cc9e7866b7 fix linkedin bug & add linkedin company url (#67) 2023-11-08 15:51:07 -06:00
Zachary Hampton
a2c8fe046e Update README.md 2023-11-06 22:13:19 -07:00
Cullen Watson
2b7fea40a5 [fix] glassdoor duplicates 2023-10-30 20:29:55 -05:00
Cullen Watson
d37f86e1b9 [fix] glassdoor location 2023-10-30 20:19:56 -05:00
Cullen Watson
0302ab14f5 glassdoor keywords 2023-10-30 20:07:31 -05:00
Cullen Watson
3f2b582445 add glassdoor (#66) 2023-10-30 19:57:36 -05:00
Cullen Watson
93223b6a38 bug fix 2023-10-30 13:57:23 -05:00
Cullen Watson
e3fc222eb5 readd proxy support for zip (#64) 2023-10-29 08:54:56 -05:00
Cullen
b303b3f841 chore: version 2023-10-28 16:58:32 -05:00
Cullen
1a0c75f323 chore: version 2023-10-28 16:54:04 -05:00
Cullen
e2f6885d61 chore: format 2023-10-28 16:52:05 -05:00
Cullen
8d65d1b652 [chore] version 2023-10-28 16:43:44 -05:00
Cullen
216d3fd39f ziprecruiter: 5s delay 2023-10-28 16:41:32 -05:00
Cullen Watson
d3bfdc0a6e ziprecruiter api (#63) 2023-10-28 16:17:28 -05:00
Cullen Watson
ba5ed803ca use ziprecuriter api (#62) 2023-10-28 15:51:29 -05:00
Cullen Watson
ff1eb0f7b0 [docs] update readme 2023-10-18 14:32:21 -05:00
Cullen Watson
f2cc74b7f2 Fix Indeed exceptions on parsing description 2023-10-18 14:25:53 -05:00
Cullen Watson
5e71866630 [docs] link change 2023-10-18 11:18:03 -05:00
Zachary Hampton
4e67c6e5a3 Update README.md 2023-10-17 20:22:56 -07:00
Cullen Watson
caf655525a docs: update readme 2023-10-10 11:54:14 -05:00
Cullen Watson
90fa4a4c4f feat: utils.py 2023-10-10 11:29:29 -05:00
Cullen Watson
e5353e604d Multiple job types for Indeed, urgent keywords column (#56)
* enh(indeed): mult job types

* feat(jobs):  urgent kws

* fix(indeed): use new session obj per request

* fix: emails as comma separated in output

* fix: put num urgent words in output

* chore: readme
2023-10-10 11:23:04 -05:00
Cullen Watson
628f4dee9c [fix] indeed - min & max values swapped (#54) 2023-10-03 09:22:18 -05:00
Cullen Watson
2e59ab03e3 Merge branch 'main' of https://github.com/cullenwatson/JobSpy 2023-09-28 18:53:59 -05:00
Cullen Watson
008ca61e12 [fix] readd hyperlink param 2023-09-28 18:53:21 -05:00
Cullen Watson
8fc4c3bf90 [docs] readme 2023-09-28 18:35:40 -05:00
Cullen Watson
bff39a2625 [fix] util func 2023-09-28 18:33:14 -05:00
Cullen Watson
c676050dc0 [fix] util func 2023-09-28 18:33:02 -05:00
Cullen Watson
37976f7ec2 [chore] version number 2023-09-28 18:26:55 -05:00
Cullen Watson
9fb2fdd80f [fix] add utils.py 2023-09-28 18:25:56 -05:00
Cullen Watson
af07c1ecbd add offset param & email extraction (#51)
* add offset param

* [enh]: extract emails
2023-09-28 18:11:28 -05:00
Cullen Watson
286b9e1256 chore: version number 2023-09-21 20:28:57 -05:00
Cullen Watson
162dd40b0f docs: add usejobspy.com 2023-09-21 20:27:04 -05:00
Cullen Watson
558e352939 fix: job type param bug 2023-09-21 17:42:24 -05:00
Zachary Hampton
efad1a1b7d Update README.md 2023-09-21 09:52:18 -07:00
Cullen Watson
eaa481c2f4 docs: add macos catalina to faq 2023-09-19 12:50:14 -05:00
Zachary Hampton
b914aa6449 Update README.md 2023-09-16 13:52:30 -07:00
Zachary Hampton
6adbfb8b29 Update README.md 2023-09-16 13:51:45 -07:00
Zachary Hampton
a3b9dd50ff (docs) homepage 2023-09-15 16:14:26 -07:00
Zachary Hampton
d3ba3a4878 docs: sales call 2023-09-15 11:51:22 -07:00
Cullen Watson
f524789d74 docs: grammar readme 2023-09-15 10:18:24 -05:00
Cullen Watson
f3890d4830 docs: update 2023-09-09 10:55:33 -05:00
Cullen Watson
60c9728691 docs: typo 2023-09-08 12:27:49 -05:00
Cullen Watson
f79d975e5f docs: clarify - README.md 2023-09-07 13:46:14 -05:00
Cullen Watson
d6368f909b docs: typo 2023-09-07 13:39:56 -05:00
Cullen Watson
6fcf7f666e docs: update typo in example 2023-09-07 13:37:53 -05:00
Cullen Watson
4406f9350f docs: update vid 2023-09-07 13:35:10 -05:00
Cullen Watson
ca5155f234 docs: add feature 2023-09-07 11:36:16 -05:00
Cullen Watson
822a55783e docs: temp update 2023-09-07 11:35:14 -05:00
21 changed files with 1048 additions and 726 deletions

.gitignore (vendored): 8 changed lines

@@ -1,10 +1,10 @@
+/.idea
+**/.DS_Store
 /venv/
-/ven/
-/.idea
 **/__pycache__/
 **/.pytest_cache/
+/.ipynb_checkpoints/
+**/output/
-**/.DS_Store
 *.pyc
 .env
 dist
-/.ipynb_checkpoints/

README.md: 133 changed lines

@@ -1,63 +1,53 @@
 <img src="https://github.com/cullenwatson/JobSpy/assets/78247585/ae185b7e-e444-4712-8bb9-fa97f53e896b" width="400">
 
 **JobSpy** is a simple, yet comprehensive, job scraping library.
 
+**Not technical?** Try out the web scraping tool on our site at [usejobspy.com](https://usejobspy.com).
+
+*Looking to build a data-focused software product?* **[Book a call](https://bunsly.com/)** *to work with us.*
+
+Check out another project we wrote: ***[HomeHarvest](https://github.com/Bunsly/HomeHarvest)** a Python package for real estate scraping*
+
 ## Features
 
-- Scrapes job postings from **LinkedIn**, **Indeed** & **ZipRecruiter** simultaneously
+- Scrapes job postings from **LinkedIn**, **Indeed**, **Glassdoor**, & **ZipRecruiter** simultaneously
 - Aggregates the job postings in a Pandas DataFrame
+- Proxy support (HTTP/S, SOCKS)
 
-[Video Guide for JobSpy](https://www.youtube.com/watch?v=-yS3mgI5H-4)
+[Video Guide for JobSpy](https://www.youtube.com/watch?v=RuP1HrAZnxs&pp=ygUgam9icyBzY3JhcGVyIGJvdCBsaW5rZWRpbiBpbmRlZWQ%3D) - Updated for release v1.1.3
 
 ![jobspy](https://github.com/cullenwatson/JobSpy/assets/78247585/ec7ef355-05f6-4fd3-8161-a817e31c5c57)
 
 ### Installation
 
 ```
 pip install python-jobspy
 ```
 
 _Python version >= [3.10](https://www.python.org/downloads/release/python-3100/) required_
 
 ### Usage
 
 ```python
 from jobspy import scrape_jobs
-import pandas as pd
 
-jobs: pd.DataFrame = scrape_jobs(
-    site_name=["indeed", "linkedin", "zip_recruiter"],
+jobs = scrape_jobs(
+    site_name=["indeed", "linkedin", "zip_recruiter", "glassdoor"],
     search_term="software engineer",
    location="Dallas, TX",
     results_wanted=10,
-    country_indeed='USA' # only needed for indeed
+    country_indeed='USA'  # only needed for indeed / glassdoor
+
+    # use if you want to use a proxy
+    # proxy="socks5://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
+    # proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
+    # proxy="https://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
 )
-
-pd.set_option('display.max_columns', None)
-pd.set_option('display.max_rows', None)
-pd.set_option('display.width', None)
-pd.set_option('display.max_colwidth', 50) # set to 0 to see full job url / desc
-
-#1 output
-print(jobs)
-print(errors)
-
-#2 display in Jupyter Notebook
-#display(jobs)
-#display(errors)
-
-#3 output to .csv
-#result.jobs.to_csv('result.jobs.csv', index=False)
+print(f"Found {len(jobs)} jobs")
+print(jobs.head())
+jobs.to_csv("jobs.csv", index=False) # to_xlsx
 ```
 
 ### Output
 
 ```
 SITE TITLE COMPANY_NAME CITY STATE JOB_TYPE INTERVAL MIN_AMOUNT MAX_AMOUNT JOB_URL DESCRIPTION
 indeed Software Engineer AMERICAN SYSTEMS Arlington VA None yearly 200000 150000 https://www.indeed.com/viewjob?jk=5e409e577046... THIS POSITION COMES WITH A 10K SIGNING BONUS!...
@@ -67,23 +57,27 @@ linkedin Full-Stack Software Engineer Rain New York
 zip_recruiter Software Engineer - New Grad ZipRecruiter Santa Monica CA fulltime yearly 130000 150000 https://www.ziprecruiter.com/jobs/ziprecruiter... We offer a hybrid work environment. Most US-ba...
 zip_recruiter Software Developer TEKsystems Phoenix AZ fulltime hourly 65 75 https://www.ziprecruiter.com/jobs/teksystems-0... Top Skills' Details• 6 years of Java developme...
 ```
 
 ### Parameters for `scrape_jobs()`
 
 ```plaintext
 Required
-├── site_type (List[enum]): linkedin, zip_recruiter, indeed
+├── site_type (List[enum]): linkedin, zip_recruiter, indeed, glassdoor
 └── search_term (str)
 
 Optional
 ├── location (int)
 ├── distance (int): in miles
 ├── job_type (enum): fulltime, parttime, internship, contract
+├── proxy (str): in format 'http://user:pass@host:port' or [https, socks]
 ├── is_remote (bool)
 ├── results_wanted (int): number of job results to retrieve for each site specified in 'site_type'
 ├── easy_apply (bool): filters for jobs that are hosted on LinkedIn
-├── country_indeed (enum): filters the country on Indeed
+├── country_indeed (enum): filters the country on Indeed (see below for correct spelling)
+├── offset (num): starts the search from an offset (e.g. 25 will start the search from the 25th result)
 ```
 
 ### JobPost Schema
 
 ```plaintext
 JobPost
 ├── title (str)
@@ -94,69 +88,88 @@ JobPost
 │ ├── city (str)
 │ ├── state (str)
 ├── description (str)
-├── job_type (enum): fulltime, parttime, internship, contract
+├── job_type (str): fulltime, parttime, internship, contract
 ├── compensation (object)
-│ ├── interval (enum): yearly, monthly, weekly, daily, hourly
+│ ├── interval (str): yearly, monthly, weekly, daily, hourly
 │ ├── min_amount (int)
 │ ├── max_amount (int)
 │ └── currency (enum)
 └── date_posted (date)
+└── emails (str)
+└── num_urgent_words (int)
+└── is_remote (bool)
 ```
 
+### Exceptions
+
+The following exceptions may be raised when using JobSpy:
+* `LinkedInException`
+* `IndeedException`
+* `ZipRecruiterException`
+* `GlassdoorException`
+
 ## Supported Countries for Job Searching
 
 ### **LinkedIn**
 
-LinkedIn searches globally & uses only the `location` parameter
+LinkedIn searches globally & uses only the `location` parameter. You can only fetch 1000 jobs max from the LinkedIn endpoint we're using
 
 ### **ZipRecruiter**
 
-ZipRecruiter searches for jobs in US/Canada & uses only the `location` parameter
+ZipRecruiter searches for jobs in **US/Canada** & uses only the `location` parameter.
 
-### **Indeed**
+### **Indeed / Glassdoor**
 
-For Indeed, the `country_indeed` parameter is required. Additionally, use the `location` parameter and include the city or state if necessary.
+Indeed & Glassdoor supports most countries, but the `country_indeed` parameter is required. Additionally, use the `location`
+parameter to narrow down the location, e.g. city & state if necessary.
 
-You can specify the following countries when searching on Indeed (use the exact name):
+You can specify the following countries when searching on Indeed (use the exact name, * indicates support for Glassdoor):
 
-|      |      |      |      |
-|------|------|------|------|
-| Argentina | Australia | Austria | Bahrain |
-| Belgium | Brazil | Canada | Chile |
-| China | Colombia | Costa Rica | Czech Republic |
-| Denmark | Ecuador | Egypt | Finland |
-| France | Germany | Greece | Hong Kong |
-| Hungary | India | Indonesia | Ireland |
-| Israel | Italy | Japan | Kuwait |
-| Luxembourg | Malaysia | Mexico | Morocco |
-| Netherlands | New Zealand | Nigeria | Norway |
-| Oman | Pakistan | Panama | Peru |
-| Philippines | Poland | Portugal | Qatar |
-| Romania | Saudi Arabia | Singapore | South Africa |
-| South Korea | Spain | Sweden | Switzerland |
-| Taiwan | Thailand | Turkey | Ukraine |
-| United Arab Emirates | UK | USA | Uruguay |
-| Venezuela | Vietnam | | |
+|                      |              |            |                |
+|----------------------|--------------|------------|----------------|
+| Argentina            | Australia*   | Austria*   | Bahrain        |
+| Belgium*             | Brazil*      | Canada*    | Chile          |
+| China                | Colombia     | Costa Rica | Czech Republic |
+| Denmark              | Ecuador      | Egypt      | Finland        |
+| France*              | Germany*     | Greece     | Hong Kong*     |
+| Hungary              | India*       | Indonesia  | Ireland*       |
+| Israel               | Italy*       | Japan      | Kuwait         |
+| Luxembourg           | Malaysia     | Mexico*    | Morocco        |
+| Netherlands*         | New Zealand* | Nigeria    | Norway         |
+| Oman                 | Pakistan     | Panama     | Peru           |
+| Philippines          | Poland       | Portugal   | Qatar          |
+| Romania              | Saudi Arabia | Singapore* | South Africa   |
+| South Korea          | Spain*       | Sweden     | Switzerland*   |
+| Taiwan               | Thailand     | Turkey     | Ukraine        |
+| United Arab Emirates | UK*          | USA*       | Uruguay        |
+| Venezuela            | Vietnam      |            |                |
+
+Glassdoor can only fetch 900 jobs from the endpoint we're using on a given search.
 
 ## Frequently Asked Questions
 
 ---
 
 **Q: Encountering issues with your queries?**
 
-**A:** Try reducing the number of `results_wanted` and/or broadening the filters. If problems persist, [submit an issue](#).
+**A:** Try reducing the number of `results_wanted` and/or broadening the filters. If problems persist, [submit an issue](https://github.com/Bunsly/JobSpy/issues).
 
 ---
 
 **Q: Received a response code 429?**
 
-**A:** This indicates that you have been blocked by the job board site for sending too many requests. Currently, **ZipRecruiter** is particularly aggressive with blocking. We recommend:
+**A:** This indicates that you have been blocked by the job board site for sending too many requests. All of the job board sites are aggressive with blocking. We recommend:
 
 - Waiting a few seconds between requests.
-- Trying a VPN to change your IP address.
-
-**Note:** Proxy support is in development and coming soon!
+- Trying a VPN or proxy to change your IP address.
 
 ---
 
+**Q: Experiencing a "Segmentation fault: 11" on macOS Catalina?**
+
+**A:** This is due to `tls_client` dependency not supporting your architecture. Solutions and workarounds include:
+
+- Upgrade to a newer version of MacOS
+- Reach out to the maintainers of [tls_client](https://github.com/bogdanfinn/tls-client) for fixes

examples/JobSpy_Demo.ipynb

@@ -9,7 +9,7 @@
    "source": [
     "from jobspy import scrape_jobs\n",
     "import pandas as pd\n",
-    "from IPython.display import display, HTML\n"
+    "from IPython.display import display, HTML"
    ]
   },
   {
@@ -34,18 +34,16 @@
    "source": [
     "# example 1 (no hyperlinks, USA)\n",
     "jobs = scrape_jobs(\n",
-    " site_name=[\"linkedin\", \"zip_recruiter\"],\n",
+    " site_name=[\"linkedin\"],\n",
     " location='san francisco',\n",
     " search_term=\"engineer\",\n",
     " results_wanted=5,\n",
     "\n",
     " # use if you want to use a proxy\n",
-    " # proxy=\"socks5://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001\",\n",
-    " # proxy=\"http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001\",\n",
-    " # proxy=\"https://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001\",\n",
-    "\n",
+    " # proxy=\"socks5://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
+    " proxy=\"http://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
+    " #proxy=\"https://jobspy:5a4vpWtj4EeJ2hoYzk@us.smartproxy.com:10001\",\n",
     ")\n",
-    "\n",
     "display(jobs)"
    ]
   },
@@ -97,9 +95,6 @@
     " hyperlinks=True,\n",
     " results_wanted=5,\n",
     " easy_apply=True\n",
-    "\n",
-    "\n",
-    "\n",
     ")"
    ]
   },
@@ -125,11 +120,10 @@
    "outputs": [],
    "source": [
     "# example 4 - international indeed (no zip_recruiter)\n",
-    "result = scrape_jobs(\n",
+    "jobs = scrape_jobs(\n",
     " site_name=[\"indeed\"],\n",
-    " location='berlin',\n",
     " search_term=\"engineer\",\n",
-    " country_indeed = \"Germany\",\n",
+    " country_indeed = \"China\",\n",
     " hyperlinks=True\n",
     ")"
    ]
@@ -165,7 +159,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.11"
+   "version": "3.11.5"
   }
  },
 "nbformat": 4,

examples/JobSpy_Demo.py (new file): 31 lines

@@ -0,0 +1,31 @@
from jobspy import scrape_jobs
import pandas as pd

jobs: pd.DataFrame = scrape_jobs(
    site_name=["indeed", "linkedin", "zip_recruiter"],
    search_term="software engineer",
    location="Dallas, TX",
    results_wanted=50,  # be wary the higher it is, the more likely you'll get blocked (rotating proxy should work tho)
    country_indeed="USA",
    offset=25  # start jobs from an offset (use if search failed and want to continue)
    # proxy="http://jobspy:5a4vpWtj8EeJ2hoYzk@ca.smartproxy.com:20001",
)

# formatting for pandas
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
pd.set_option("display.width", None)
pd.set_option("display.max_colwidth", 50)  # set to 0 to see full job url / desc

# 1: output to console
print(jobs)

# 2: output to .csv
jobs.to_csv("./jobs.csv", index=False)
print("outputted to jobs.csv")

# 3: output to .xlsx
# jobs.to_excel("jobs.xlsx", index=False)  # note: the pandas method is to_excel, not to_xlsx

# 4: display in Jupyter Notebook (1. pip install jupyter 2. jupyter notebook)
# display(jobs)

poetry.lock (generated): 69 changed lines

@@ -1053,6 +1053,16 @@ files = [
     {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
     {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
     {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
+    {file = "MarkupSafe-2.1.3-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:f698de3fd0c4e6972b92290a45bd9b1536bffe8c6759c62471efaa8acb4c37bc"},
+    {file = "MarkupSafe-2.1.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:aa57bd9cf8ae831a362185ee444e15a93ecb2e344c8e52e4d721ea3ab6ef1823"},
+    {file = "MarkupSafe-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ffcc3f7c66b5f5b7931a5aa68fc9cecc51e685ef90282f4a82f0f5e9b704ad11"},
+    {file = "MarkupSafe-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47d4f1c5f80fc62fdd7777d0d40a2e9dda0a05883ab11374334f6c4de38adffd"},
+    {file = "MarkupSafe-2.1.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f67c7038d560d92149c060157d623c542173016c4babc0c1913cca0564b9939"},
+    {file = "MarkupSafe-2.1.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:9aad3c1755095ce347e26488214ef77e0485a3c34a50c5a5e2471dff60b9dd9c"},
+    {file = "MarkupSafe-2.1.3-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:14ff806850827afd6b07a5f32bd917fb7f45b046ba40c57abdb636674a8b559c"},
+    {file = "MarkupSafe-2.1.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8f9293864fe09b8149f0cc42ce56e3f0e54de883a9de90cd427f191c346eb2e1"},
+    {file = "MarkupSafe-2.1.3-cp312-cp312-win32.whl", hash = "sha256:715d3562f79d540f251b99ebd6d8baa547118974341db04f5ad06d5ea3eb8007"},
+    {file = "MarkupSafe-2.1.3-cp312-cp312-win_amd64.whl", hash = "sha256:1b8dd8c3fd14349433c79fa8abeb573a55fc0fdd769133baac1f5e07abf54aeb"},
     {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
     {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
     {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
@@ -1243,36 +1253,39 @@ test = ["pytest", "pytest-console-scripts", "pytest-jupyter", "pytest-tornasync"
 
 [[package]]
 name = "numpy"
-version = "1.25.2"
+version = "1.24.2"
 description = "Fundamental package for array computing in Python"
 optional = false
-python-versions = ">=3.9"
+python-versions = ">=3.8"
 files = [
-    {file = "numpy-1.25.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:db3ccc4e37a6873045580d413fe79b68e47a681af8db2e046f1dacfa11f86eb3"},
-    {file = "numpy-1.25.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:90319e4f002795ccfc9050110bbbaa16c944b1c37c0baeea43c5fb881693ae1f"},
-    {file = "numpy-1.25.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dfe4a913e29b418d096e696ddd422d8a5d13ffba4ea91f9f60440a3b759b0187"},
-    {file = "numpy-1.25.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f08f2e037bba04e707eebf4bc934f1972a315c883a9e0ebfa8a7756eabf9e357"},
-    {file = "numpy-1.25.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:bec1e7213c7cb00d67093247f8c4db156fd03075f49876957dca4711306d39c9"},
-    {file = "numpy-1.25.2-cp310-cp310-win32.whl", hash = "sha256:7dc869c0c75988e1c693d0e2d5b26034644399dd929bc049db55395b1379e044"},
-    {file = "numpy-1.25.2-cp310-cp310-win_amd64.whl", hash = "sha256:834b386f2b8210dca38c71a6e0f4fd6922f7d3fcff935dbe3a570945acb1b545"},
-    {file = "numpy-1.25.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c5462d19336db4560041517dbb7759c21d181a67cb01b36ca109b2ae37d32418"},
-    {file = "numpy-1.25.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c5652ea24d33585ea39eb6a6a15dac87a1206a692719ff45d53c5282e66d4a8f"},
-    {file = "numpy-1.25.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d60fbae8e0019865fc4784745814cff1c421df5afee233db6d88ab4f14655a2"},
-    {file = "numpy-1.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:60e7f0f7f6d0eee8364b9a6304c2845b9c491ac706048c7e8cf47b83123b8dbf"},
-    {file = "numpy-1.25.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:bb33d5a1cf360304754913a350edda36d5b8c5331a8237268c48f91253c3a364"},
-    {file = "numpy-1.25.2-cp311-cp311-win32.whl", hash = "sha256:5883c06bb92f2e6c8181df7b39971a5fb436288db58b5a1c3967702d4278691d"},
-    {file = "numpy-1.25.2-cp311-cp311-win_amd64.whl", hash = "sha256:5c97325a0ba6f9d041feb9390924614b60b99209a71a69c876f71052521d42a4"},
-    {file = "numpy-1.25.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b79e513d7aac42ae918db3ad1341a015488530d0bb2a6abcbdd10a3a829ccfd3"},
-    {file = "numpy-1.25.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:eb942bfb6f84df5ce05dbf4b46673ffed0d3da59f13635ea9b926af3deb76926"},
-    {file = "numpy-1.25.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3e0746410e73384e70d286f93abf2520035250aad8c5714240b0492a7302fdca"},
-    {file = "numpy-1.25.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7806500e4f5bdd04095e849265e55de20d8cc4b661b038957354327f6d9b295"},
-    {file = "numpy-1.25.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8b77775f4b7df768967a7c8b3567e309f617dd5e99aeb886fa14dc1a0791141f"},
-    {file = "numpy-1.25.2-cp39-cp39-win32.whl", hash = "sha256:2792d23d62ec51e50ce4d4b7d73de8f67a2fd3ea710dcbc8563a51a03fb07b01"},
-    {file = "numpy-1.25.2-cp39-cp39-win_amd64.whl", hash = "sha256:76b4115d42a7dfc5d485d358728cdd8719be33cc5ec6ec08632a5d6fca2ed380"},
-    {file = "numpy-1.25.2-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:1a1329e26f46230bf77b02cc19e900db9b52f398d6722ca853349a782d4cff55"},
-    {file = "numpy-1.25.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c3abc71e8b6edba80a01a52e66d83c5d14433cbcd26a40c329ec7ed09f37901"},
-    {file = "numpy-1.25.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:1b9735c27cea5d995496f46a8b1cd7b408b3f34b6d50459d9ac8fe3a20cc17bf"},
-    {file = "numpy-1.25.2.tar.gz", hash = "sha256:fd608e19c8d7c55021dffd43bfe5492fab8cc105cc8986f813f8c3c048b38760"},
+    {file = "numpy-1.24.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:eef70b4fc1e872ebddc38cddacc87c19a3709c0e3e5d20bf3954c147b1dd941d"},
+    {file = "numpy-1.24.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e8d2859428712785e8a8b7d2b3ef0a1d1565892367b32f915c4a4df44d0e64f5"},
+    {file = "numpy-1.24.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6524630f71631be2dabe0c541e7675db82651eb998496bbe16bc4f77f0772253"},
+    {file = "numpy-1.24.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a51725a815a6188c662fb66fb32077709a9ca38053f0274640293a14fdd22978"},
+    {file = "numpy-1.24.2-cp310-cp310-win32.whl", hash = "sha256:2620e8592136e073bd12ee4536149380695fbe9ebeae845b81237f986479ffc9"},
+    {file = "numpy-1.24.2-cp310-cp310-win_amd64.whl", hash = "sha256:97cf27e51fa078078c649a51d7ade3c92d9e709ba2bfb97493007103c741f1d0"},
+    {file = "numpy-1.24.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:7de8fdde0003f4294655aa5d5f0a89c26b9f22c0a58790c38fae1ed392d44a5a"},
+    {file = "numpy-1.24.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4173bde9fa2a005c2c6e2ea8ac1618e2ed2c1c6ec8a7657237854d42094123a0"},
+    {file = "numpy-1.24.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4cecaed30dc14123020f77b03601559fff3e6cd0c048f8b5289f4eeabb0eb281"},
+    {file = "numpy-1.24.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a23f8440561a633204a67fb44617ce2a299beecf3295f0d13c495518908e910"},
+    {file = "numpy-1.24.2-cp311-cp311-win32.whl", hash = "sha256:e428c4fbfa085f947b536706a2fc349245d7baa8334f0c5723c56a10595f9b95"},
+    {file = "numpy-1.24.2-cp311-cp311-win_amd64.whl", hash = "sha256:557d42778a6869c2162deb40ad82612645e21d79e11c1dc62c6e82a2220ffb04"},
+    {file = "numpy-1.24.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d0a2db9d20117bf523dde15858398e7c0858aadca7c0f088ac0d6edd360e9ad2"},
+    {file = "numpy-1.24.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:c72a6b2f4af1adfe193f7beb91ddf708ff867a3f977ef2ec53c0ffb8283ab9f5"},
+    {file = "numpy-1.24.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c29e6bd0ec49a44d7690ecb623a8eac5ab8a923bce0bea6293953992edf3a76a"},
+    {file = "numpy-1.24.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2eabd64ddb96a1239791da78fa5f4e1693ae2dadc82a76bc76a14cbb2b966e96"},
+    {file = "numpy-1.24.2-cp38-cp38-win32.whl", hash = "sha256:e3ab5d32784e843fc0dd3ab6dcafc67ef806e6b6828dc6af2f689be0eb4d781d"},
+    {file = "numpy-1.24.2-cp38-cp38-win_amd64.whl", hash = "sha256:76807b4063f0002c8532cfeac47a3068a69561e9c8715efdad3c642eb27c0756"},
+    {file = "numpy-1.24.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4199e7cfc307a778f72d293372736223e39ec9ac096ff0a2e64853b866a8e18a"},
+    {file = "numpy-1.24.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:adbdce121896fd3a17a77ab0b0b5eedf05a9834a18699db6829a64e1dfccca7f"},
+    {file = "numpy-1.24.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:889b2cc88b837d86eda1b17008ebeb679d82875022200c6e8e4ce6cf549b7acb"},
+    {file = "numpy-1.24.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f64bb98ac59b3ea3bf74b02f13836eb2e24e48e0ab0145bbda646295769bd780"},
+    {file = "numpy-1.24.2-cp39-cp39-win32.whl", hash = "sha256:63e45511ee4d9d976637d11e6c9864eae50e12dc9598f531c035265991910468"},
+    {file = "numpy-1.24.2-cp39-cp39-win_amd64.whl", hash = "sha256:a77d3e1163a7770164404607b7ba3967fb49b24782a6ef85d9b5f54126cc39e5"},
+    {file = "numpy-1.24.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:92011118955724465fb6853def593cf397b4a1367495e0b59a7e69d40c4eb71d"},
+    {file = "numpy-1.24.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f9006288bcf4895917d02583cf3411f98631275bc67cce355a7f39f8c14338fa"},
+    {file = "numpy-1.24.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:150947adbdfeceec4e5926d956a06865c1c690f2fd902efede4ca6fe2e657c3f"},
+    {file = "numpy-1.24.2.tar.gz", hash = "sha256:003a9f530e880cb2cd177cba1af7220b9aa42def9c4afc2a2fc3ee6be7eb2b22"},
 ]
 
 [[package]]
@@ -2432,4 +2445,4 @@ files = [
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "0c50057af9ebbbe5c124c81758b41f05c05636739c3d1747e1bac74e75a046cb"
+content-hash = "f966f3979873eec2c3b13460067f5aa414c69aa8ab5cd3239c1cfa564fcb5deb"

pyproject.toml

@@ -1,8 +1,9 @@
 [tool.poetry]
 name = "python-jobspy"
-version = "1.1.3"
-description = "Job scraper for LinkedIn, Indeed & ZipRecruiter"
-authors = ["Zachary Hampton <zachary@zacharysproducts.com>", "Cullen Watson <cullen@cullen.ai>"]
+version = "1.1.29"
+description = "Job scraper for LinkedIn, Indeed, Glassdoor & ZipRecruiter"
+authors = ["Zachary Hampton <zachary@bunsly.com>", "Cullen Watson <cullen@bunsly.com>"]
+homepage = "https://github.com/Bunsly/JobSpy"
 readme = "README.md"
 
 packages = [
@@ -15,6 +16,7 @@ requests = "^2.31.0"
 tls-client = "^0.2.1"
 beautifulsoup4 = "^4.12.2"
 pandas = "^2.1.0"
+NUMPY = "1.24.2"
 pydantic = "^2.3.0"

jobspy/__init__.py

@@ -1,24 +1,26 @@
 import pandas as pd
 import concurrent.futures
 from concurrent.futures import ThreadPoolExecutor
-from typing import List, Tuple, NamedTuple, Dict, Optional
+from typing import Tuple, Optional
+import traceback
 
 from .jobs import JobType, Location
 from .scrapers.indeed import IndeedScraper
 from .scrapers.ziprecruiter import ZipRecruiterScraper
+from .scrapers.glassdoor import GlassdoorScraper
 from .scrapers.linkedin import LinkedInScraper
 from .scrapers import ScraperInput, Site, JobResponse, Country
 from .scrapers.exceptions import (
     LinkedInException,
     IndeedException,
     ZipRecruiterException,
+    GlassdoorException,
 )
 
 SCRAPER_MAPPING = {
     Site.LINKEDIN: LinkedInScraper,
     Site.INDEED: IndeedScraper,
     Site.ZIP_RECRUITER: ZipRecruiterScraper,
+    Site.GLASSDOOR: GlassdoorScraper,
 }
@@ -27,23 +29,32 @@ def _map_str_to_site(site_name: str) -> Site:
 def scrape_jobs(
-    site_name: str | List[str] | Site | List[Site],
+    site_name: str | list[str] | Site | list[Site],
     search_term: str,
     location: str = "",
     distance: int = None,
     is_remote: bool = False,
-    job_type: JobType = None,
+    job_type: str = None,
     easy_apply: bool = False,  # linkedin
     results_wanted: int = 15,
     country_indeed: str = "usa",
     hyperlinks: bool = False,
     proxy: Optional[str] = None,
+    offset: Optional[int] = 0,
 ) -> pd.DataFrame:
     """
     Simultaneously scrapes job data from multiple job sites.
     :return: results_wanted: pandas dataframe containing job data
     """
 
+    def get_enum_from_value(value_str):
+        for job_type in JobType:
+            if value_str in job_type.value:
+                return job_type
+        raise Exception(f"Invalid job type: {value_str}")
+
+    job_type = get_enum_from_value(job_type) if job_type else None
+
     if type(site_name) == str:
         site_type = [_map_str_to_site(site_name)]
     else:  #: if type(site_name) == list
@@ -64,6 +75,7 @@ def scrape_jobs(
         job_type=job_type,
         easy_apply=easy_apply,
         results_wanted=results_wanted,
+        offset=offset,
     )
 
     def scrape_site(site: Site) -> Tuple[str, JobResponse]:
@@ -75,13 +87,14 @@ def scrape_jobs(
         except (LinkedInException, IndeedException, ZipRecruiterException) as lie:
             raise lie
         except Exception as e:
+            # unhandled exceptions
             if site == Site.LINKEDIN:
-                raise LinkedInException()
+                raise LinkedInException(str(e))
             if site == Site.INDEED:
-                raise IndeedException()
+                raise IndeedException(str(e))
             if site == Site.ZIP_RECRUITER:
-                raise ZipRecruiterException()
+                raise ZipRecruiterException(str(e))
+            if site == Site.GLASSDOOR:
+                raise GlassdoorException(str(e))
             else:
                 raise e
         return site.value, scraped_data
@@ -89,8 +102,8 @@ def scrape_jobs(
     site_to_jobs_dict = {}
 
     def worker(site):
-        site_value, scraped_data = scrape_site(site)
-        return site_value, scraped_data
+        site_val, scraped_info = scrape_site(site)
+        return site_val, scraped_info
 
     with ThreadPoolExecutor() as executor:
         future_to_site = {
@@ -101,7 +114,7 @@ def scrape_jobs(
             site_value, scraped_data = future.result()
             site_to_jobs_dict[site_value] = scraped_data
 
-    jobs_dfs: List[pd.DataFrame] = []
+    jobs_dfs: list[pd.DataFrame] = []
 
     for site, job_response in site_to_jobs_dict.items():
         for job in job_response.jobs:
@@ -111,13 +124,18 @@ def scrape_jobs(
             ] = f'<a href="{job_data["job_url"]}">{job_data["job_url"]}</a>'
             job_data["site"] = site
             job_data["company"] = job_data["company_name"]
-            if job_data["job_type"]:
-                # Take the first value from the job type tuple
-                job_data["job_type"] = job_data["job_type"].value[0]
-            else:
-                job_data["job_type"] = None
-
-            job_data["location"] = Location(**job_data["location"]).display_location()
+            job_data["job_type"] = (
+                ", ".join(job_type.value[0] for job_type in job_data["job_type"])
+                if job_data["job_type"]
+                else None
+            )
+            job_data["emails"] = (
+                ", ".join(job_data["emails"]) if job_data["emails"] else None
+            )
+            if job_data["location"]:
+                job_data["location"] = Location(
+                    **job_data["location"]
+                ).display_location()
 
             compensation_obj = job_data.get("compensation")
             if compensation_obj and isinstance(compensation_obj, dict):
@@ -140,18 +158,23 @@ def scrape_jobs(
     if jobs_dfs:
         jobs_df = pd.concat(jobs_dfs, ignore_index=True)
-        desired_order: List[str] = [
+        desired_order: list[str] = [
+            "job_url_hyper" if hyperlinks else "job_url",
             "site",
             "title",
             "company",
+            "company_url",
             "location",
-            "date_posted",
             "job_type",
+            "date_posted",
             "interval",
             "min_amount",
             "max_amount",
             "currency",
-            "job_url_hyper" if hyperlinks else "job_url",
+            "is_remote",
+            "num_urgent_words",
+            "benefits",
+            "emails",
             "description",
         ]
         jobs_formatted_df = jobs_df[desired_order]
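Worth noting from this hunk: `job_type` is now a plain string that gets resolved against each `JobType` member's tuple of synonyms. A minimal standalone sketch of that lookup, trimmed to two members for illustration (the full enum in `jobs.py` below carries many localized synonyms per member):

```python
from enum import Enum

class JobType(Enum):
    # trimmed for illustration; the real enum's tuples also hold
    # localized synonyms like "teilzeit" or "повназайнятість"
    FULL_TIME = ("fulltime",)
    PART_TIME = ("parttime", "teilzeit")

def get_enum_from_value(value_str):
    # same logic as the diff: a string matches a member if it appears
    # anywhere in that member's value tuple
    for job_type in JobType:
        if value_str in job_type.value:
            return job_type
    raise Exception(f"Invalid job type: {value_str}")

print(get_enum_from_value("teilzeit"))  # JobType.PART_TIME
```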

jobspy/jobs.py

@@ -1,7 +1,6 @@
 from typing import Union, Optional
 from datetime import date
 from enum import Enum
-
 from pydantic import BaseModel, validator
 
@@ -37,10 +36,16 @@ class JobType(Enum):
         "повназайнятість",
         "toànthờigian",
     )
-    PART_TIME = ("parttime", "teilzeit")
+    PART_TIME = ("parttime", "teilzeit", "částečnýúvazek", "deltid")
     CONTRACT = ("contract", "contractor")
     TEMPORARY = ("temporary",)
-    INTERNSHIP = ("internship", "prácticas", "ojt(onthejobtraining)", "praktikum")
+    INTERNSHIP = (
+        "internship",
+        "prácticas",
+        "ojt(onthejobtraining)",
+        "praktikum",
+        "praktik",
+    )
 
     PER_DIEM = ("perdiem",)
     NIGHTS = ("nights",)
@@ -50,13 +55,13 @@ class JobType(Enum):
 class Country(Enum):
-    ARGENTINA = ("argentina", "ar")
-    AUSTRALIA = ("australia", "au")
-    AUSTRIA = ("austria", "at")
+    ARGENTINA = ("argentina", "com.ar")
+    AUSTRALIA = ("australia", "au", "com.au")
+    AUSTRIA = ("austria", "at", "at")
     BAHRAIN = ("bahrain", "bh")
-    BELGIUM = ("belgium", "be")
-    BRAZIL = ("brazil", "br")
-    CANADA = ("canada", "ca")
+    BELGIUM = ("belgium", "be", "nl:be")
+    BRAZIL = ("brazil", "br", "com.br")
+    CANADA = ("canada", "ca", "ca")
     CHILE = ("chile", "cl")
     CHINA = ("china", "cn")
     COLOMBIA = ("colombia", "co")
@@ -66,24 +71,24 @@ class Country(Enum):
     ECUADOR = ("ecuador", "ec")
     EGYPT = ("egypt", "eg")
     FINLAND = ("finland", "fi")
-    FRANCE = ("france", "fr")
-    GERMANY = ("germany", "de")
+    FRANCE = ("france", "fr", "fr")
+    GERMANY = ("germany", "de", "de")
     GREECE = ("greece", "gr")
-    HONGKONG = ("hong kong", "hk")
+    HONGKONG = ("hong kong", "hk", "com.hk")
     HUNGARY = ("hungary", "hu")
-    INDIA = ("india", "in")
+    INDIA = ("india", "in", "co.in")
     INDONESIA = ("indonesia", "id")
-    IRELAND = ("ireland", "ie")
+    IRELAND = ("ireland", "ie", "ie")
     ISRAEL = ("israel", "il")
-    ITALY = ("italy", "it")
+    ITALY = ("italy", "it", "it")
     JAPAN = ("japan", "jp")
     KUWAIT = ("kuwait", "kw")
     LUXEMBOURG = ("luxembourg", "lu")
     MALAYSIA = ("malaysia", "malaysia")
-    MEXICO = ("mexico", "mx")
+    MEXICO = ("mexico", "mx", "com.mx")
     MOROCCO = ("morocco", "ma")
-    NETHERLANDS = ("netherlands", "nl")
-    NEWZEALAND = ("new zealand", "nz")
+    NETHERLANDS = ("netherlands", "nl", "nl")
+    NEWZEALAND = ("new zealand", "nz", "co.nz")
     NIGERIA = ("nigeria", "ng")
     NORWAY = ("norway", "no")
     OMAN = ("oman", "om")
@@ -96,19 +101,19 @@ class Country(Enum):
     QATAR = ("qatar", "qa")
     ROMANIA = ("romania", "ro")
     SAUDIARABIA = ("saudi arabia", "sa")
-    SINGAPORE = ("singapore", "sg")
+    SINGAPORE = ("singapore", "sg", "sg")
     SOUTHAFRICA = ("south africa", "za")
     SOUTHKOREA = ("south korea", "kr")
-    SPAIN = ("spain", "es")
+    SPAIN = ("spain", "es", "es")
     SWEDEN = ("sweden", "se")
-    SWITZERLAND = ("switzerland", "ch")
+    SWITZERLAND = ("switzerland", "ch", "de:ch")
     TAIWAN = ("taiwan", "tw")
     THAILAND = ("thailand", "th")
     TURKEY = ("turkey", "tr")
     UKRAINE = ("ukraine", "ua")
     UNITEDARABEMIRATES = ("united arab emirates", "ae")
-    UK = ("uk", "uk")
-    USA = ("usa", "www")
+    UK = ("uk", "uk", "co.uk")
+    USA = ("usa", "www", "com")
     URUGUAY = ("uruguay", "uy")
     VENEZUELA = ("venezuela", "ve")
     VIETNAM = ("vietnam", "vn")
@@ -116,34 +121,42 @@ class Country(Enum):
     # internal for ziprecruiter
     US_CANADA = ("usa/ca", "www")
 
-    # internal for linkeind
+    # internal for linkedin
     WORLDWIDE = ("worldwide", "www")
 
-    def __new__(cls, country, domain):
-        obj = object.__new__(cls)
-        obj._value_ = country
-        obj.domain = domain
-        return obj
+    @property
+    def indeed_domain_value(self):
+        return self.value[1]
 
     @property
-    def domain_value(self):
-        return self.domain
+    def glassdoor_domain_value(self):
+        if len(self.value) == 3:
+            subdomain, _, domain = self.value[2].partition(":")
+            if subdomain and domain:
+                return f"{subdomain}.glassdoor.{domain}"
+            else:
+                return f"www.glassdoor.{self.value[2]}"
+        else:
+            raise Exception(f"Glassdoor is not available for {self.name}")
+
+    def get_url(self):
+        return f"https://{self.glassdoor_domain_value}/"
 
     @classmethod
     def from_string(cls, country_str: str):
         """Convert a string to the corresponding Country enum."""
         country_str = country_str.strip().lower()
         for country in cls:
-            if country.value == country_str:
+            if country.value[0] == country_str:
                 return country
         valid_countries = [country.value for country in cls]
         raise ValueError(
-            f"Invalid country string: '{country_str}'. Valid countries (only include this param for Indeed) are: {', '.join(valid_countries)}"
+            f"Invalid country string: '{country_str}'. Valid countries are: {', '.join([country[0] for country in valid_countries])}"
         )
 
 
 class Location(BaseModel):
-    country: Country = None
+    country: Country | None = None
     city: Optional[str] = None
     state: Optional[str] = None
@@ -154,10 +167,10 @@ class Location(BaseModel):
         if self.state:
             location_parts.append(self.state)
         if self.country and self.country not in (Country.US_CANADA, Country.WORLDWIDE):
-            if self.country.value in ("usa", "uk"):
-                location_parts.append(self.country.value.upper())
+            if self.country.value[0] in ("usa", "uk"):
+                location_parts.append(self.country.value[0].upper())
             else:
-                location_parts.append(self.country.value.title())
+                location_parts.append(self.country.value[0].title())
         return ", ".join(location_parts)
@@ -170,9 +183,9 @@ class CompensationInterval(Enum):
 
 class Compensation(BaseModel):
-    interval: CompensationInterval
-    min_amount: int = None
-    max_amount: int = None
+    interval: Optional[CompensationInterval] = None
+    min_amount: int | None = None
+    max_amount: int | None = None
     currency: Optional[str] = "USD"
@@ -182,10 +195,17 @@ class JobPost(BaseModel):
     job_url: str
     location: Optional[Location]
 
-    description: Optional[str] = None
-    job_type: Optional[JobType] = None
-    compensation: Optional[Compensation] = None
-    date_posted: Optional[date] = None
+    description: str | None = None
+    company_url: str | None = None
+
+    job_type: list[JobType] | None = None
+    compensation: Compensation | None = None
+    date_posted: date | None = None
+    benefits: str | None = None
+    emails: list[str] | None = None
+    num_urgent_words: int | None = None
+    is_remote: bool | None = None
+    # company_industry: str | None = None
 
 
 class JobResponse(BaseModel):
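The optional third element of each `Country` tuple is the Glassdoor TLD, with an optional subdomain packed in front of a colon ("de:ch" for Switzerland). A quick standalone check of the `partition` logic from `glassdoor_domain_value` above:

```python
def glassdoor_domain(spec: str) -> str:
    # mirrors Country.glassdoor_domain_value in the diff above
    subdomain, _, domain = spec.partition(":")
    if subdomain and domain:
        return f"{subdomain}.glassdoor.{domain}"
    return f"www.glassdoor.{spec}"

print(glassdoor_domain("de:ch"))   # de.glassdoor.ch (SWITZERLAND)
print(glassdoor_domain("com.au"))  # www.glassdoor.com.au (AUSTRALIA)
print(glassdoor_domain("com"))     # www.glassdoor.com (USA)
```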

jobspy/scrapers/__init__.py

@@ -6,6 +6,7 @@ class Site(Enum):
     LINKEDIN = "linkedin"
     INDEED = "indeed"
     ZIP_RECRUITER = "zip_recruiter"
+    GLASSDOOR = "glassdoor"
 
 
 class ScraperInput(BaseModel):
@@ -18,6 +19,7 @@ class ScraperInput(BaseModel):
     is_remote: bool = False
     job_type: Optional[JobType] = None
     easy_apply: bool = None  # linkedin
+    offset: int = 0
     results_wanted: int = 15
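A small sketch of how the new `offset` field rides along on the input model; the fields here are abbreviated, since the real `ScraperInput` also carries `location`, `distance`, `job_type`, and the rest shown above:

```python
from pydantic import BaseModel

class ScraperInput(BaseModel):
    # abbreviated version of the model in the diff
    search_term: str
    offset: int = 0           # new field: start collecting from this result index
    results_wanted: int = 15

scraper_input = ScraperInput(search_term="software engineer", offset=25)
print(scraper_input.offset)  # 25
```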

jobspy/scrapers/exceptions.py

@@ -7,12 +7,20 @@ This module contains the set of Scrapers' exceptions.
 class LinkedInException(Exception):
-    """Failed to scrape LinkedIn"""
+    def __init__(self, message=None):
+        super().__init__(message or "An error occurred with LinkedIn")
 
 
 class IndeedException(Exception):
-    """Failed to scrape Indeed"""
+    def __init__(self, message=None):
+        super().__init__(message or "An error occurred with Indeed")
 
 
 class ZipRecruiterException(Exception):
-    """Failed to scrape ZipRecruiter"""
+    def __init__(self, message=None):
+        super().__init__(message or "An error occurred with ZipRecruiter")
+
+
+class GlassdoorException(Exception):
+    def __init__(self, message=None):
+        super().__init__(message or "An error occurred with Glassdoor")
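Since each exception now accepts an optional message, callers can surface the underlying failure; a minimal usage sketch (the search arguments are illustrative):

```python
from jobspy import scrape_jobs
from jobspy.scrapers.exceptions import GlassdoorException

try:
    jobs = scrape_jobs(site_name=["glassdoor"], search_term="data engineer")
except GlassdoorException as e:
    # str(e) is the wrapped error, or the default
    # "An error occurred with Glassdoor" when no message was passed
    print(f"Glassdoor scrape failed: {e}")
```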

jobspy/scrapers/glassdoor/__init__.py (new file): 286 lines

@@ -0,0 +1,286 @@
"""
jobspy.scrapers.glassdoor
~~~~~~~~~~~~~~~~~~~
This module contains routines to scrape Glassdoor.
"""
import math
import time
import re
import json
from datetime import datetime, date
from typing import Optional, Tuple, Any
from bs4 import BeautifulSoup
from .. import Scraper, ScraperInput, Site
from ..exceptions import GlassdoorException
from ..utils import count_urgent_words, extract_emails_from_text, create_session
from ...jobs import (
JobPost,
Compensation,
CompensationInterval,
Location,
JobResponse,
JobType,
Country,
)
class GlassdoorScraper(Scraper):
def __init__(self, proxy: Optional[str] = None):
"""
Initializes GlassdoorScraper with the Glassdoor job search url
"""
site = Site(Site.GLASSDOOR)
super().__init__(site, proxy=proxy)
self.url = None
self.country = None
self.jobs_per_page = 30
self.seen_urls = set()
def fetch_jobs_page(
self,
scraper_input: ScraperInput,
location_id: int,
location_type: str,
page_num: int,
cursor: str | None,
) -> (list[JobPost], str | None):
"""
Scrapes a page of Glassdoor for jobs with scraper_input criteria
:param scraper_input:
:return: jobs found on page
:return: cursor for next page
"""
try:
payload = self.add_payload(
scraper_input, location_id, location_type, page_num, cursor
)
session = create_session(self.proxy, is_tls=False)
response = session.post(
f"{self.url}/graph", headers=self.headers(), timeout=10, data=payload
)
if response.status_code != 200:
raise GlassdoorException(
f"bad response status code: {response.status_code}"
)
res_json = response.json()[0]
if "errors" in res_json:
raise ValueError("Error encountered in API response")
except Exception as e:
raise GlassdoorException(str(e))
jobs_data = res_json["data"]["jobListings"]["jobListings"]
jobs = []
for i, job in enumerate(jobs_data):
job_url = res_json["data"]["jobListings"]["jobListingSeoLinks"][
"linkItems"
][i]["url"]
if job_url in self.seen_urls:
continue
self.seen_urls.add(job_url)
job = job["jobview"]
title = job["job"]["jobTitleText"]
company_name = job["header"]["employerNameFromSearch"]
location_name = job["header"].get("locationName", "")
location_type = job["header"].get("locationType", "")
is_remote = False
location = None
if location_type == "S":
is_remote = True
else:
location = self.parse_location(location_name)
compensation = self.parse_compensation(job["header"])
job = JobPost(
title=title,
company_name=company_name,
job_url=job_url,
location=location,
compensation=compensation,
is_remote=is_remote,
)
jobs.append(job)
return jobs, self.get_cursor_for_page(
res_json["data"]["jobListings"]["paginationCursors"], page_num + 1
)
def scrape(self, scraper_input: ScraperInput) -> JobResponse:
"""
Scrapes Glassdoor for jobs with scraper_input criteria.
:param scraper_input: Information about job search criteria.
:return: JobResponse containing a list of jobs.
"""
self.country = scraper_input.country
self.url = self.country.get_url()
location_id, location_type = self.get_location(
scraper_input.location, scraper_input.is_remote
)
all_jobs: list[JobPost] = []
cursor = None
max_pages = 30
try:
for page in range(
1 + (scraper_input.offset // self.jobs_per_page),
min(
(scraper_input.results_wanted // self.jobs_per_page) + 2,
max_pages + 1,
),
):
try:
jobs, cursor = self.fetch_jobs_page(
scraper_input, location_id, location_type, page, cursor
)
all_jobs.extend(jobs)
if len(all_jobs) >= scraper_input.results_wanted:
all_jobs = all_jobs[: scraper_input.results_wanted]
break
except Exception as e:
raise GlassdoorException(str(e))
except Exception as e:
raise GlassdoorException(str(e))
return JobResponse(jobs=all_jobs)
@staticmethod
def parse_compensation(data: dict) -> Optional[Compensation]:
pay_period = data.get("payPeriod")
adjusted_pay = data.get("payPeriodAdjustedPay")
currency = data.get("payCurrency", "USD")
if not pay_period or not adjusted_pay:
return None
interval = None
if pay_period == "ANNUAL":
interval = CompensationInterval.YEARLY
elif pay_period == "MONTHLY":
interval = CompensationInterval.MONTHLY
elif pay_period == "WEEKLY":
interval = CompensationInterval.WEEKLY
elif pay_period == "DAILY":
interval = CompensationInterval.DAILY
elif pay_period == "HOURLY":
interval = CompensationInterval.HOURLY
min_amount = int(adjusted_pay.get("p10") // 1)
max_amount = int(adjusted_pay.get("p90") // 1)
return Compensation(
interval=interval,
min_amount=min_amount,
max_amount=max_amount,
currency=currency,
)
def get_job_type_enum(self, job_type_str: str) -> list[JobType] | None:
for job_type in JobType:
if job_type_str in job_type.value:
return [job_type]
return None
def get_location(self, location: str, is_remote: bool) -> (int, str):
if not location or is_remote:
return "11047", "STATE" # remote options
url = f"{self.url}/findPopularLocationAjax.htm?maxLocationsToReturn=10&term={location}"
session = create_session(self.proxy)
response = session.get(url)
if response.status_code != 200:
raise GlassdoorException(
f"bad response status code: {response.status_code}"
)
items = response.json()
if not items:
raise ValueError(f"Location '{location}' not found on Glassdoor")
location_type = items[0]["locationType"]
if location_type == "C":
location_type = "CITY"
elif location_type == "S":
location_type = "STATE"
return int(items[0]["locationId"]), location_type
@staticmethod
def add_payload(
scraper_input,
location_id: int,
location_type: str,
page_num: int,
cursor: str | None = None,
) -> dict[str, str | Any]:
payload = {
"operationName": "JobSearchResultsQuery",
"variables": {
"excludeJobListingIds": [],
"filterParams": [],
"keyword": scraper_input.search_term,
"numJobsToShow": 30,
"locationType": location_type,
"locationId": int(location_id),
"parameterUrlInput": f"IL.0,12_I{location_type}{location_id}",
"pageNumber": page_num,
"pageCursor": cursor,
},
"query": "query JobSearchResultsQuery($excludeJobListingIds: [Long!], $keyword: String, $locationId: Int, $locationType: LocationTypeEnum, $numJobsToShow: Int!, $pageCursor: String, $pageNumber: Int, $filterParams: [FilterParams], $originalPageUrl: String, $seoFriendlyUrlInput: String, $parameterUrlInput: String, $seoUrl: Boolean) {\n jobListings(\n contextHolder: {searchParams: {excludeJobListingIds: $excludeJobListingIds, keyword: $keyword, locationId: $locationId, locationType: $locationType, numPerPage: $numJobsToShow, pageCursor: $pageCursor, pageNumber: $pageNumber, filterParams: $filterParams, originalPageUrl: $originalPageUrl, seoFriendlyUrlInput: $seoFriendlyUrlInput, parameterUrlInput: $parameterUrlInput, seoUrl: $seoUrl, searchType: SR}}\n ) {\n companyFilterOptions {\n id\n shortName\n __typename\n }\n filterOptions\n indeedCtk\n jobListings {\n ...JobView\n __typename\n }\n jobListingSeoLinks {\n linkItems {\n position\n url\n __typename\n }\n __typename\n }\n jobSearchTrackingKey\n jobsPageSeoData {\n pageMetaDescription\n pageTitle\n __typename\n }\n paginationCursors {\n cursor\n pageNumber\n __typename\n }\n indexablePageForSeo\n searchResultsMetadata {\n searchCriteria {\n implicitLocation {\n id\n localizedDisplayName\n type\n __typename\n }\n keyword\n location {\n id\n shortName\n localizedShortName\n localizedDisplayName\n type\n __typename\n }\n __typename\n }\n footerVO {\n countryMenu {\n childNavigationLinks {\n id\n link\n textKey\n __typename\n }\n __typename\n }\n __typename\n }\n helpCenterDomain\n helpCenterLocale\n jobAlert {\n jobAlertExists\n __typename\n }\n jobSerpFaq {\n questions {\n answer\n question\n __typename\n }\n __typename\n }\n jobSerpJobOutlook {\n occupation\n paragraph\n __typename\n }\n showMachineReadableJobs\n __typename\n }\n serpSeoLinksVO {\n relatedJobTitlesResults\n searchedJobTitle\n searchedKeyword\n searchedLocationIdAsString\n searchedLocationSeoName\n searchedLocationType\n topCityIdsToNameResults {\n key\n value\n __typename\n }\n topEmployerIdsToNameResults {\n key\n value\n __typename\n }\n topEmployerNameResults\n topOccupationResults\n __typename\n }\n totalJobsCount\n __typename\n }\n}\n\nfragment JobView on JobListingSearchResult {\n jobview {\n header {\n adOrderId\n advertiserType\n adOrderSponsorshipLevel\n ageInDays\n divisionEmployerName\n easyApply\n employer {\n id\n name\n shortName\n __typename\n }\n employerNameFromSearch\n goc\n gocConfidence\n gocId\n jobCountryId\n jobLink\n jobResultTrackingKey\n jobTitleText\n locationName\n locationType\n locId\n needsCommission\n payCurrency\n payPeriod\n payPeriodAdjustedPay {\n p10\n p50\n p90\n __typename\n }\n rating\n salarySource\n savedJobId\n sponsored\n __typename\n }\n job {\n descriptionFragments\n importConfigId\n jobTitleId\n jobTitleText\n listingId\n __typename\n }\n jobListingAdminDetails {\n cpcVal\n importConfigId\n jobListingId\n jobSourceId\n userEligibleForAdminJobDetails\n __typename\n }\n overview {\n shortName\n squareLogoUrl\n __typename\n }\n __typename\n }\n __typename\n}\n",
}
job_type_filters = {
JobType.FULL_TIME: "fulltime",
JobType.PART_TIME: "parttime",
JobType.CONTRACT: "contract",
JobType.INTERNSHIP: "internship",
JobType.TEMPORARY: "temporary",
}
if scraper_input.job_type in job_type_filters:
filter_value = job_type_filters[scraper_input.job_type]
payload["variables"]["filterParams"].append(
{"filterKey": "jobType", "values": filter_value}
)
return json.dumps([payload])
def parse_location(self, location_name: str) -> Location:
if not location_name or location_name == "Remote":
return None
city, _, state = location_name.partition(", ")
return Location(city=city, state=state)
@staticmethod
def get_cursor_for_page(pagination_cursors, page_num):
for cursor_data in pagination_cursors:
if cursor_data["pageNumber"] == page_num:
return cursor_data["cursor"]
return None
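
The paginationCursors field in the GraphQL response drives paging; get_cursor_for_page just looks up the opaque cursor for a page number. A self-contained sketch (cursor strings are made up):

pagination_cursors = [  # shape mirrors the paginationCursors field above
    {"pageNumber": 1, "cursor": "cursor-page-1"},
    {"pageNumber": 2, "cursor": "cursor-page-2"},
]

def get_cursor_for_page(pagination_cursors, page_num):
    for cursor_data in pagination_cursors:
        if cursor_data["pageNumber"] == page_num:
            return cursor_data["cursor"]
    return None

print(get_cursor_for_page(pagination_cursors, 2))  # cursor-page-2
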
@staticmethod
def headers() -> dict:
"""
Returns headers needed for requests
:return: dict - Dictionary containing headers
"""
return {
"authority": "www.glassdoor.com",
"accept": "*/*",
"accept-language": "en-US,en;q=0.9",
"apollographql-client-name": "job-search-next",
"apollographql-client-version": "4.65.5",
"content-type": "application/json",
"cookie": 'gdId=91e2dfc4-c8b5-4fa7-83d0-11512b80262c; G_ENABLED_IDPS=google; trs=https%3A%2F%2Fwww.redhat.com%2F:referral:referral:2023-07-05+09%3A50%3A14.862:undefined:undefined; g_state={"i_p":1688587331651,"i_l":1}; _cfuvid=.7llazxhYFZWi6EISSPdVjtqF0NMVwzxr_E.cB1jgLs-1697828392979-0-604800000; GSESSIONID=undefined; JSESSIONID=F03DD1B5EE02DB6D842FE42B142F88F3; cass=1; jobsClicked=true; indeedCtk=1hd77b301k79i801; asst=1697829114.2; G_AUTHUSER_H=0; uc=8013A8318C98C517FE6DD0024636DFDEF978FC33266D93A2FAFEF364EACA608949D8B8FA2DC243D62DE271D733EB189D809ABE5B08D7B1AE865D217BD4EEBB97C282F5DA5FEFE79C937E3F6110B2A3A0ADBBA3B4B6DF5A996FEE00516100A65FCB11DA26817BE8D1C1BF6CFE36B5B68A3FDC2CFEC83AB797F7841FBB157C202332FC7E077B56BD39B167BDF3D9866E3B; AWSALB=zxc/Yk1nbWXXT6HjNyn3H4h4950ckVsFV/zOrq5LSoChYLE1qV+hDI8Axi3fUa9rlskndcO0M+Fw+ZnJ+AQ2afBFpyOd1acouLMYgkbEpqpQaWhY6/Gv4QH1zBcJ; AWSALBCORS=zxc/Yk1nbWXXT6HjNyn3H4h4950ckVsFV/zOrq5LSoChYLE1qV+hDI8Axi3fUa9rlskndcO0M+Fw+ZnJ+AQ2afBFpyOd1acouLMYgkbEpqpQaWhY6/Gv4QH1zBcJ; gdsid=1697828393025:1697830776351:668396EDB9E6A832022D34414128093D; at=HkH8Hnqi9uaMC7eu0okqyIwqp07ht9hBvE1_St7E_hRqPvkO9pUeJ1Jcpds4F3g6LL5ADaCNlxrPn0o6DumGMfog8qI1-zxaV_jpiFs3pugntw6WpVyYWdfioIZ1IDKupyteeLQEM1AO4zhGjY_rPZynpsiZBPO_B1au94sKv64rv23yvP56OiWKKfI-8_9hhLACEwWvM-Az7X-4aE2QdFt93VJbXbbGVf07bdDZfimsIkTtgJCLSRhU1V0kEM1Efyu66vo3m77gFFaMW7lxyYnb36I5PdDtEXBm3aL-zR7-qa5ywd94ISEivgqQOA4FPItNhqIlX4XrfD1lxVz6rfPaoTIDi4DI6UMCUjwyPsuv8mn0rYqDfRnmJpZ97fJ5AnhrknAd_6ZWN5v1OrxJczHzcXd8LO820QPoqxzzG13bmSTXLwGSxMUCtSrVsq05hicimQ3jpRt0c1dA4OkTNqF7_770B9JfcHcM8cr8-C4IL56dnOjr9KBGfN1Q2IvZM2cOBRbV7okiNOzKVZ3qJ24AE34WA2F3U6Whiu6H8nIuGG5hSNkVygY6CtglNZfFF9p8pJAZm79PngrrBv-CXFBZmhYLFo46lmFetDkiJ6mirtez4tKpzTIYjIp4_JAkiZFwbLJ2QGH4mK8kyyW0lZiX1DTuQec50N_5wvRo0Gt7nlKxzLsApMnaNhuQeH5ygh_pa381ORo9mQGi0EYF9zk00pa2--z4PtjfQ8KFq36GgpxKy5-o4qgqygZj8F01L8r-FiX2G4C7PREMIpAyHX2A4-_JxA1IS2j12EyqKTLqE9VcP06qm2Z-YuIW3ctmpMxy5G9_KiEiGv17weizhSFnl6SbpAEY-2VSmQ5V6jm3hoMp2jemkuGCRkZeFstLDEPxlzFN7WM; __cf_bm=zGaVjIJw4irf40_7UVw54B6Ohm271RUX4Tc8KVScrbs-1697830777-0-AYv2GnKTnnCU+cY9xHbJunO0DwlLDO6SIBnC/s/qldpKsGK0rRAjD6y8lbyATT/KlS7g29OZaN4fbd0lrJg0KmWbIybZIzfWVLHSYePVuOhu; asst=1697829114.2; at=dFhXf64wsf2TlnWy41xLs7skJkuxgKToEGcjGtDfUvW4oEAJ4tTIR5dKQ8wbwT75aIaGgdCfvcb-da7vwrCGWscCncmfLFQpJ9l-LLwoRfk-pMsxHhd77wvf-W7I0HSm7-Q5lQJqI9WyNGRxOa-RpzBTf4L8_Et4-3FzjPaAoYY5pY1FhuwXbN5asGOAMW-p8cjpbfn3PumlIYuckguWnjrcY2F31YJ_1noeoHM9tCGpymANbqGXRkG6aXY7yCfVXtdgZU1K5SMeaSPZIuF_iLUxjc_corzpNiH6qq7BIAmh-e5Aa-g7cwpZcln1fmwTVw4uTMZf1eLIMTa9WzgqZNkvG-sGaq_XxKA_Wai6xTTkOHfRgm4632Ba2963wdJvkGmUUa3tb_L4_wTgk3eFnHp5JhghLfT2Pe3KidP-yX__vx8JOsqe3fndCkKXgVz7xQKe1Dur-sMNlGwi4LXfguTT2YUI8C5Miq3pj2IHc7dC97eyyAiAM4HvyGWfaXWZcei6oIGrOwMvYgy0AcwFry6SIP2SxLT5TrxinRRuem1r1IcOTJsMJyUPp1QsZ7bOyq9G_0060B4CPyovw5523hEuqLTM-R5e5yavY6C_1DHUyE15C3mrh7kdvmlGZeflnHqkFTEKwwOftm-Mv-CKD5Db9ABFGNxKB2FH7nDH67hfOvm4tGNMzceBPKYJ3wciTt9jK3wy39_7cOYVywfrZ-oLhw_XtsbGSSeGn3HytrfgSADAh2sT0Gg6eCC9Xy1vh-Za337SVLUDXZ73W2xJxxUHBkFzZs8L_Xndo5DsbpWhVs9IYUGyraJdqB3SLgDbAppIBCJl4fx6_DG8-xOQPBvuFMlTROe1JVdHOzXI1GElwFDTuH1pjkg4I2G0NhAbE06Y-1illQE; gdsid=1697828393025:1697831731408:99C30D94108AC3030D61C736DDCDF11C',
"gd-csrf-token": "Ft6oHEWlRZrxDww95Cpazw:0pGUrkb2y3TyOpAIqF2vbPmUXoXVkD3oEGDVkvfeCerceQ5-n8mBg3BovySUIjmCPHCaW0H2nQVdqzbtsYqf4Q:wcqRqeegRUa9MVLJGyujVXB7vWFPjdaS1CtrrzJq-ok",
"origin": "https://www.glassdoor.com",
"referer": "https://www.glassdoor.com/",
"sec-ch-ua": '"Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99"',
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-platform": '"macOS"',
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin",
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36",
}

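These headers pair with the payload above. A hedged sketch of the request itself; the exact endpoint is not shown in this diff, so the URL below is an assumption:

import requests

GRAPH_URL = "https://www.glassdoor.com/graph"  # assumed endpoint; not shown in the diff

def fetch_glassdoor_jobs(payload_json: str, headers: dict, proxies: dict | None = None) -> dict:
    # payload_json is the json.dumps([payload]) string built earlier
    response = requests.post(
        GRAPH_URL, data=payload_json, headers=headers, proxies=proxies, timeout=10
    )
    response.raise_for_status()
    return response.json()
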
View File

@@ -8,17 +8,20 @@ import re
 import math
 import io
 import json
-import traceback
 from datetime import datetime
-from typing import Optional
-import tls_client
 import urllib.parse
 from bs4 import BeautifulSoup
 from bs4.element import Tag
 from concurrent.futures import ThreadPoolExecutor, Future

 from ..exceptions import IndeedException
+from ..utils import (
+    count_urgent_words,
+    extract_emails_from_text,
+    create_session,
+    get_enum_from_job_type,
+)
 from ...jobs import (
     JobPost,
     Compensation,
@@ -27,14 +30,16 @@ from ...jobs import (
     JobResponse,
     JobType,
 )
-from .. import Scraper, ScraperInput, Site, Country
+from .. import Scraper, ScraperInput, Site


 class IndeedScraper(Scraper):
-    def __init__(self, proxy: Optional[str] = None):
+    def __init__(self, proxy: str | None = None):
         """
         Initializes IndeedScraper with the Indeed job search url
         """
+        self.url = None
+        self.country = None
         site = Site(Site.INDEED)
         super().__init__(site, proxy=proxy)
@@ -42,26 +47,23 @@ class IndeedScraper(Scraper):
         self.seen_urls = set()

     def scrape_page(
-        self, scraper_input: ScraperInput, page: int, session: tls_client.Session
+        self, scraper_input: ScraperInput, page: int
     ) -> tuple[list[JobPost], int]:
         """
         Scrapes a page of Indeed for jobs with scraper_input criteria
         :param scraper_input:
         :param page:
-        :param session:
         :return: jobs found on page, total number of jobs found for search
         """
         self.country = scraper_input.country
-        domain = self.country.domain_value
+        domain = self.country.indeed_domain_value
         self.url = f"https://{domain}.indeed.com"
-        job_list: list[JobPost] = []

         params = {
             "q": scraper_input.search_term,
             "l": scraper_input.location,
             "filter": 0,
-            "start": 0 + page * 10,
+            "start": scraper_input.offset + page * 10,
         }
         if scraper_input.distance:
             params["radius"] = scraper_input.distance
@@ -75,11 +77,12 @@ class IndeedScraper(Scraper):
         if sc_values:
             params["sc"] = "0kf:" + "".join(sc_values) + ";"
         try:
+            session = create_session(self.proxy, is_tls=True)
             response = session.get(
-                self.url + "/jobs",
+                f"{self.url}/jobs",
+                headers=self.get_headers(),
                 params=params,
                 allow_redirects=True,
-                proxy=self.proxy,
                 timeout_seconds=10,
             )
         if response.status_code not in range(200, 400):
@@ -107,7 +110,7 @@ class IndeedScraper(Scraper):
         ):
             raise IndeedException("No jobs found.")

-        def process_job(job) -> Optional[JobPost]:
+        def process_job(job) -> JobPost | None:
             job_url = f'{self.url}/jobs/viewjob?jk={job["jobkey"]}'
             job_url_client = f'{self.url}/viewjob?jk={job["jobkey"]}'
             if job_url in self.seen_urls:
@@ -126,8 +129,8 @@ class IndeedScraper(Scraper):
             if interval in CompensationInterval.__members__:
                 compensation = Compensation(
                     interval=CompensationInterval[interval],
-                    min_amount=int(extracted_salary.get("max")),
-                    max_amount=int(extracted_salary.get("min")),
+                    min_amount=int(extracted_salary.get("min")),
+                    max_amount=int(extracted_salary.get("max")),
                     currency=currency,
                 )
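
The hunk above fixes swapped salary bounds: min_amount was read from "max" and vice versa. A minimal before/after illustration with example figures:

extracted_salary = {"min": 90000, "max": 120000}  # example values
# before the fix: min_amount=int(extracted_salary.get("max")) -> 120000 (inverted range)
min_amount = int(extracted_salary.get("min"))
max_amount = int(extracted_salary.get("max"))
print(min_amount, max_amount)  # 90000 120000
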
@@ -136,10 +139,10 @@ class IndeedScraper(Scraper):
             date_posted = datetime.fromtimestamp(timestamp_seconds)
             date_posted = date_posted.strftime("%Y-%m-%d")

-            description = self.get_description(job_url, session)
+            description = self.get_description(job_url)
             with io.StringIO(job["snippet"]) as f:
-                soup = BeautifulSoup(f, "html.parser")
-                li_elements = soup.find_all("li")
+                soup_io = BeautifulSoup(f, "html.parser")
+                li_elements = soup_io.find_all("li")
             if description is None and li_elements:
                 description = " ".join(li.text for li in li_elements)
@@ -156,13 +159,18 @@ class IndeedScraper(Scraper):
                 compensation=compensation,
                 date_posted=date_posted,
                 job_url=job_url_client,
+                emails=extract_emails_from_text(description) if description else None,
+                num_urgent_words=count_urgent_words(description)
+                if description
+                else None,
+                is_remote=self.is_remote_job(job),
             )
             return job_post

+        jobs = jobs["metaData"]["mosaicProviderJobCardsModel"]["results"]
         with ThreadPoolExecutor(max_workers=1) as executor:
             job_results: list[Future] = [
-                executor.submit(process_job, job)
-                for job in jobs["metaData"]["mosaicProviderJobCardsModel"]["results"]
+                executor.submit(process_job, job) for job in jobs
             ]

         job_list = [result.result() for result in job_results if result.result()]
@@ -175,20 +183,16 @@ class IndeedScraper(Scraper):
         :param scraper_input:
         :return: job_response
         """
-        session = tls_client.Session(
-            client_identifier="chrome112", random_tls_extension_order=True
-        )
         pages_to_process = (
             math.ceil(scraper_input.results_wanted / self.jobs_per_page) - 1
         )

         #: get first page to initialize session
-        job_list, total_results = self.scrape_page(scraper_input, 0, session)
+        job_list, total_results = self.scrape_page(scraper_input, 0)

         with ThreadPoolExecutor(max_workers=1) as executor:
             futures: list[Future] = [
-                executor.submit(self.scrape_page, scraper_input, page, session)
+                executor.submit(self.scrape_page, scraper_input, page)
                 for page in range(1, pages_to_process + 1)
             ]
@@ -206,21 +210,24 @@ class IndeedScraper(Scraper):
             )
         return job_response

-    def get_description(self, job_page_url: str, session: tls_client.Session) -> str:
+    def get_description(self, job_page_url: str) -> str | None:
         """
         Retrieves job description by going to the job page url
         :param job_page_url:
-        :param session:
         :return: description
         """
         parsed_url = urllib.parse.urlparse(job_page_url)
         params = urllib.parse.parse_qs(parsed_url.query)
         jk_value = params.get("jk", [None])[0]
         formatted_url = f"{self.url}/viewjob?jk={jk_value}&spa=1"
+        session = create_session(self.proxy)
         try:
             response = session.get(
-                formatted_url, allow_redirects=True, timeout_seconds=5, proxy=self.proxy
+                formatted_url,
+                headers=self.get_headers(),
+                allow_redirects=True,
+                timeout_seconds=5,
             )
         except Exception as e:
             return None
@@ -228,36 +235,37 @@ class IndeedScraper(Scraper):
         if response.status_code not in range(200, 400):
             return None

-        raw_description = response.json()["body"]["jobInfoWrapperModel"][
-            "jobInfoModel"
-        ]["sanitizedJobDescription"]
-        with io.StringIO(raw_description) as f:
-            soup = BeautifulSoup(f, "html.parser")
-            text_content = " ".join(soup.get_text().split()).strip()
+        try:
+            data = json.loads(response.text)
+            job_description = data["body"]["jobInfoWrapperModel"]["jobInfoModel"][
+                "sanitizedJobDescription"
+            ]
+        except (KeyError, TypeError, IndexError):
+            return None
+
+        soup = BeautifulSoup(job_description, "html.parser")
+        text_content = " ".join(soup.get_text(separator=" ").split()).strip()
         return text_content

     @staticmethod
-    def get_job_type(job: dict) -> Optional[JobType]:
+    def get_job_type(job: dict) -> list[JobType] | None:
         """
-        Parses the job to get JobTypeIndeed
+        Parses the job to get list of job types
         :param job:
         :return:
         """
+        job_types: list[JobType] = []
         for taxonomy in job["taxonomyAttributes"]:
             if taxonomy["label"] == "job-types":
-                if len(taxonomy["attributes"]) > 0:
-                    label = taxonomy["attributes"][0].get("label")
+                for i in range(len(taxonomy["attributes"])):
+                    label = taxonomy["attributes"][i].get("label")
                     if label:
                         job_type_str = label.replace("-", "").replace(" ", "").lower()
-                        return IndeedScraper.get_enum_from_value(job_type_str)
-        return None
-
-    @staticmethod
-    def get_enum_from_value(value_str):
-        for job_type in JobType:
-            if value_str in job_type.value:
-                return job_type
-        return None
+                        job_type = get_enum_from_job_type(job_type_str)
+                        if job_type:
+                            job_types.append(job_type)
+        return job_types

     @staticmethod
     def parse_jobs(soup: BeautifulSoup) -> dict:
@@ -267,7 +275,7 @@ class IndeedScraper(Scraper):
         :return: jobs
         """

-        def find_mosaic_script() -> Optional[Tag]:
+        def find_mosaic_script() -> Tag | None:
             """
             Finds jobcards script tag
             :return: script_tag
@@ -317,3 +325,30 @@ class IndeedScraper(Scraper):
         data = json.loads(json_str)
         total_num_jobs = int(data["searchTitleBarModel"]["totalNumResults"])
         return total_num_jobs
+
+    @staticmethod
+    def get_headers():
+        return {
+            "authority": "www.indeed.com",
+            "accept": "*/*",
+            "accept-language": "en-US,en;q=0.9",
+            "referer": "https://www.indeed.com/viewjob?jk=fe6182337d72c7b1&tk=1hcbfcmd0k62t802&from=serp&vjs=3&advn=8132938064490989&adid=408692607&ad=-6NYlbfkN0A3Osc99MJFDKjquSk4WOGT28ALb_ad4QMtrHreCb9ICg6MiSVy9oDAp3evvOrI7Q-O9qOtQTg1EPbthP9xWtBN2cOuVeHQijxHjHpJC65TjDtftH3AXeINjBvAyDrE8DrRaAXl8LD3Fs1e_xuDHQIssdZ2Mlzcav8m5jHrA0fA64ZaqJV77myldaNlM7-qyQpy4AsJQfvg9iR2MY7qeC5_FnjIgjKIy_lNi9OPMOjGRWXA94CuvC7zC6WeiJmBQCHISl8IOBxf7EdJZlYdtzgae3593TFxbkd6LUwbijAfjax39aAuuCXy3s9C4YgcEP3TwEFGQoTpYu9Pmle-Ae1tHGPgsjxwXkgMm7Cz5mBBdJioglRCj9pssn-1u1blHZM4uL1nK9p1Y6HoFgPUU9xvKQTHjKGdH8d4y4ETyCMoNF4hAIyUaysCKdJKitC8PXoYaWhDqFtSMR4Jys8UPqUV&xkcb=SoDD-_M3JLQfWnQTDh0LbzkdCdPP&xpse=SoBa6_I3JLW9FlWZlB0PbzkdCdPP&sjdu=i6xVERweJM_pVUvgf-MzuaunBTY7G71J5eEX6t4DrDs5EMPQdODrX7Nn-WIPMezoqr5wA_l7Of-3CtoiUawcHw",
+            "sec-ch-ua": '"Google Chrome";v="119", "Chromium";v="119", "Not?A_Brand";v="24"',
+            "sec-ch-ua-mobile": "?0",
+            "sec-ch-ua-platform": '"Windows"',
+            "sec-fetch-dest": "empty",
+            "sec-fetch-mode": "cors",
+            "sec-fetch-site": "same-origin",
+            "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
+        }
+
+    @staticmethod
+    def is_remote_job(job: dict) -> bool:
+        """
+        :param job:
+        :return: bool
+        """
+        for taxonomy in job.get("taxonomyAttributes", []):
+            if taxonomy["label"] == "remote" and len(taxonomy["attributes"]) > 0:
+                return True
+        return False

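The new is_remote_job helper only inspects taxonomy attributes; a self-contained check against a made-up job record:

def is_remote_job(job: dict) -> bool:
    for taxonomy in job.get("taxonomyAttributes", []):
        if taxonomy["label"] == "remote" and len(taxonomy["attributes"]) > 0:
            return True
    return False

job = {"taxonomyAttributes": [{"label": "remote", "attributes": [{"label": "Remote"}]}]}  # made-up record
print(is_remote_job(job))                          # True
print(is_remote_job({"taxonomyAttributes": []}))   # False
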
View File

@@ -4,33 +4,33 @@ jobspy.scrapers.linkedin
 This module contains routines to scrape LinkedIn.
 """
-from typing import Optional, Tuple
+from typing import Optional
 from datetime import datetime
-import traceback

 import requests
-from requests.exceptions import Timeout, ProxyError
+import time
+from requests.exceptions import ProxyError
 from bs4 import BeautifulSoup
 from bs4.element import Tag
+from threading import Lock
+from urllib.parse import urlparse, urlunparse

 from .. import Scraper, ScraperInput, Site
+from ..utils import count_urgent_words, extract_emails_from_text, get_enum_from_job_type, currency_parser
 from ..exceptions import LinkedInException
-from ...jobs import (
-    JobPost,
-    Location,
-    JobResponse,
-    JobType,
-    Compensation,
-    CompensationInterval,
-)
+from ...jobs import JobPost, Location, JobResponse, JobType, Country, Compensation


 class LinkedInScraper(Scraper):
+    MAX_RETRIES = 3
+    DELAY = 10
+
     def __init__(self, proxy: Optional[str] = None):
         """
         Initializes LinkedInScraper with the LinkedIn job search url
         """
         site = Site(Site.LINKEDIN)
+        self.country = "worldwide"
         self.url = "https://www.linkedin.com"
         super().__init__(site, proxy=proxy)
@@ -40,12 +40,12 @@ class LinkedInScraper(Scraper):
         :param scraper_input:
         :return: job_response
         """
-        self.country = "worldwide"
         job_list: list[JobPost] = []
         seen_urls = set()
-        page, processed_jobs, job_count = 0, 0, 0
+        url_lock = Lock()
+        page = scraper_input.offset // 25 + 25 if scraper_input.offset else 0

-        def job_type_code(job_type):
+        def job_type_code(job_type_enum):
             mapping = {
                 JobType.FULL_TIME: "F",
                 JobType.PART_TIME: "P",
@@ -54,10 +54,9 @@ class LinkedInScraper(Scraper):
                 JobType.TEMPORARY: "T",
             }

-            return mapping.get(job_type, "")
+            return mapping.get(job_type_enum, "")

-        with requests.Session() as session:
-            while len(job_list) < scraper_input.results_wanted:
+        while len(job_list) < scraper_input.results_wanted and page < 1000:
             params = {
                 "keywords": scraper_input.search_term,
                 "location": scraper_input.location,
@@ -66,105 +65,143 @@ class LinkedInScraper(Scraper):
"f_JT": job_type_code(scraper_input.job_type) "f_JT": job_type_code(scraper_input.job_type)
if scraper_input.job_type if scraper_input.job_type
else None, else None,
"pageNum": page, "pageNum": 0,
"start": page + scraper_input.offset,
"f_AL": "true" if scraper_input.easy_apply else None, "f_AL": "true" if scraper_input.easy_apply else None,
} }
params = {k: v for k, v in params.items() if v is not None} params = {k: v for k, v in params.items() if v is not None}
retries = 0
while retries < self.MAX_RETRIES:
try: try:
response = session.get( response = requests.get(
f"{self.url}/jobs/search", f"{self.url}/jobs-guest/jobs/api/seeMoreJobPostings/search?",
params=params, params=params,
allow_redirects=True, allow_redirects=True,
proxies=self.proxy, proxies=self.proxy,
timeout=10, timeout=10,
) )
response.raise_for_status() response.raise_for_status()
break
except requests.HTTPError as e: except requests.HTTPError as e:
if hasattr(e, "response") and e.response is not None:
if e.response.status_code in (429, 502):
time.sleep(self.DELAY)
retries += 1
continue
else:
raise LinkedInException( raise LinkedInException(
f"bad response status code: {response.status_code}" f"bad response status code: {e.response.status_code}"
) )
else:
raise
except ProxyError as e: except ProxyError as e:
raise LinkedInException("bad proxy") raise LinkedInException("bad proxy")
except (ProxyError, Exception) as e: except Exception as e:
raise LinkedInException(str(e)) raise LinkedInException(str(e))
else:
# Raise an exception if the maximum number of retries is reached
raise LinkedInException(
"Max retries reached, failed to get a valid response"
)
soup = BeautifulSoup(response.text, "html.parser") soup = BeautifulSoup(response.text, "html.parser")
if page == 0: for job_card in soup.find_all("div", class_="base-search-card"):
job_count_text = soup.find( job_url = None
"span", class_="results-context-header__job-count" href_tag = job_card.find("a", class_="base-card__full-link")
).text if href_tag and "href" in href_tag.attrs:
job_count = int("".join(filter(str.isdigit, job_count_text))) href = href_tag.attrs["href"].split("?")[0]
job_id = href.split("-")[-1]
for job_card in soup.find_all(
"div",
class_="base-card relative w-full hover:no-underline focus:no-underline base-card--link base-search-card base-search-card--link job-search-card",
):
processed_jobs += 1
data_entity_urn = job_card.get("data-entity-urn", "")
job_id = (
data_entity_urn.split(":")[-1] if data_entity_urn else "N/A"
)
job_url = f"{self.url}/jobs/view/{job_id}" job_url = f"{self.url}/jobs/view/{job_id}"
with url_lock:
if job_url in seen_urls: if job_url in seen_urls:
continue continue
seen_urls.add(job_url) seen_urls.add(job_url)
job_info = job_card.find("div", class_="base-search-card__info")
if job_info is None:
continue
title_tag = job_info.find("h3", class_="base-search-card__title")
title = title_tag.text.strip() if title_tag else "N/A"
company_tag = job_info.find("a", class_="hidden-nested-link") # Call process_job directly without threading
company = company_tag.text.strip() if company_tag else "N/A" try:
job_post = self.process_job(job_card, job_url)
if job_post:
job_list.append(job_post)
except Exception as e:
raise LinkedInException("Exception occurred while processing jobs")
metadata_card = job_info.find( page += 25
"div", class_="base-search-card__metadata"
job_list = job_list[: scraper_input.results_wanted]
return JobResponse(jobs=job_list)
def process_job(self, job_card: Tag, job_url: str) -> Optional[JobPost]:
salary_tag = job_card.find('span', class_='job-search-card__salary-info')
compensation = None
if salary_tag:
salary_text = salary_tag.get_text(separator=' ').strip()
salary_values = [currency_parser(value) for value in salary_text.split('-')]
salary_min = salary_values[0]
salary_max = salary_values[1]
currency = salary_text[0] if salary_text[0] != '$' else 'USD'
compensation = Compensation(
min_amount=int(salary_min),
max_amount=int(salary_max),
currency=currency,
) )
location: Location = self.get_location(metadata_card)
datetime_tag = metadata_card.find( title_tag = job_card.find("span", class_="sr-only")
"time", class_="job-search-card__listdate" title = title_tag.get_text(strip=True) if title_tag else "N/A"
company_tag = job_card.find("h4", class_="base-search-card__subtitle")
company_a_tag = company_tag.find("a") if company_tag else None
company_url = (
urlunparse(urlparse(company_a_tag.get("href"))._replace(query=""))
if company_a_tag and company_a_tag.has_attr("href")
else ""
) )
description, job_type = self.get_description(job_url) company = company_a_tag.get_text(strip=True) if company_a_tag else "N/A"
if datetime_tag:
metadata_card = job_card.find("div", class_="base-search-card__metadata")
location = self.get_location(metadata_card)
datetime_tag = (
metadata_card.find("time", class_="job-search-card__listdate")
if metadata_card
else None
)
date_posted = None
if datetime_tag and "datetime" in datetime_tag.attrs:
datetime_str = datetime_tag["datetime"] datetime_str = datetime_tag["datetime"]
try: try:
date_posted = datetime.strptime(datetime_str, "%Y-%m-%d") date_posted = datetime.strptime(datetime_str, "%Y-%m-%d")
except Exception as e: except Exception as e:
date_posted = None date_posted = None
else: benefits_tag = job_card.find("span", class_="result-benefits__text")
date_posted = None benefits = " ".join(benefits_tag.get_text().split()) if benefits_tag else None
job_post = JobPost( description, job_type = self.get_job_description(job_url)
# description, job_type = None, []
return JobPost(
title=title, title=title,
description=description, description=description,
company_name=company, company_name=company,
company_url=company_url,
location=location, location=location,
date_posted=date_posted, date_posted=date_posted,
job_url=job_url, job_url=job_url,
job_type=job_type, job_type=job_type,
compensation=Compensation( compensation=compensation,
interval=CompensationInterval.YEARLY, currency=None benefits=benefits,
), emails=extract_emails_from_text(description) if description else None,
num_urgent_words=count_urgent_words(description) if description else None,
) )
job_list.append(job_post)
if processed_jobs >= job_count:
break
if len(job_list) >= scraper_input.results_wanted:
break
if processed_jobs >= job_count:
break
if len(job_list) >= scraper_input.results_wanted:
break
page += 1 def get_job_description(
self, job_page_url: str
job_list = job_list[: scraper_input.results_wanted] ) -> tuple[None, None] | tuple[str | None, tuple[str | None, JobType | None]]:
return JobResponse(jobs=job_list)
def get_description(self, job_page_url: str) -> Optional[str]:
""" """
Retrieves job description by going to the job page url Retrieves job description by going to the job page url
:param job_page_url: :param job_page_url:
@@ -173,27 +210,34 @@ class LinkedInScraper(Scraper):
         try:
             response = requests.get(job_page_url, timeout=5, proxies=self.proxy)
             response.raise_for_status()
+        except requests.HTTPError as e:
+            if hasattr(e, "response") and e.response is not None:
+                if e.response.status_code in (429, 502):
+                    time.sleep(self.DELAY)
+            return None, None
         except Exception as e:
             return None, None
+        if response.url == "https://www.linkedin.com/signup":
+            return None, None

         soup = BeautifulSoup(response.text, "html.parser")
         div_content = soup.find(
             "div", class_=lambda x: x and "show-more-less-html__markup" in x
         )

-        text_content = None
+        description = None
         if div_content:
-            text_content = " ".join(div_content.get_text().split()).strip()
+            description = " ".join(div_content.get_text().split()).strip()

         def get_job_type(
-            soup: BeautifulSoup,
-        ) -> Tuple[Optional[str], Optional[JobType]]:
+            soup_job_type: BeautifulSoup,
+        ) -> list[JobType] | None:
             """
             Gets the job type from job page
-            :param soup:
+            :param soup_job_type:
             :return: JobType
             """
-            h3_tag = soup.find(
+            h3_tag = soup_job_type.find(
                 "h3",
                 class_="description__job-criteria-subheader",
                 string=lambda text: "Employment type" in text,
@@ -210,16 +254,9 @@ class LinkedInScraper(Scraper):
                 employment_type = employment_type.lower()
                 employment_type = employment_type.replace("-", "")

-                return LinkedInScraper.get_enum_from_value(employment_type)
+                return [get_enum_from_job_type(employment_type)] if employment_type else []

-        return text_content, get_job_type(soup)
-
-    @staticmethod
-    def get_enum_from_value(value_str):
-        for job_type in JobType:
-            if value_str in job_type.value:
-                return job_type
-        return None
+        return description, get_job_type(soup)

     def get_location(self, metadata_card: Optional[Tag]) -> Location:
         """
@@ -227,7 +264,7 @@ class LinkedInScraper(Scraper):
         :param metadata_card
         :return: location
         """
-        location = Location(country=self.country)
+        location = Location(country=Country.from_string(self.country))
         if metadata_card is not None:
             location_tag = metadata_card.find(
                 "span", class_="job-search-card__location"
@@ -239,7 +276,14 @@ class LinkedInScraper(Scraper):
                 location = Location(
                     city=city,
                     state=state,
-                    country=self.country,
+                    country=Country.from_string(self.country),
+                )
+            elif len(parts) == 3:
+                city, state, country = parts
+                location = Location(
+                    city=city,
+                    state=state,
+                    country=Country.from_string(country),
                 )

         return location

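The new three-part branch handles LinkedIn locations like "Austin, Texas, United States". A simplified stand-alone version of the split (Country.from_string is JobSpy-internal, so plain strings stand in here):

def split_location(location_string: str) -> tuple[str | None, str | None, str]:
    parts = location_string.split(", ")
    if len(parts) == 2:
        city, state = parts
        return city, state, "worldwide"  # scraper-level default country
    if len(parts) == 3:
        city, state, country = parts
        return city, state, country
    return None, None, "worldwide"

print(split_location("Austin, Texas, United States"))  # ('Austin', 'Texas', 'United States')
print(split_location("Austin, TX"))                    # ('Austin', 'TX', 'worldwide')
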
View File

@@ -0,0 +1,81 @@
import re
import numpy as np
import requests
import tls_client
from ..jobs import JobType
def count_urgent_words(description: str) -> int:
"""
Count the number of urgent words or phrases in a job description.
"""
urgent_patterns = re.compile(
r"\burgen(t|cy)|\bimmediate(ly)?\b|start asap|\bhiring (now|immediate(ly)?)\b",
re.IGNORECASE,
)
matches = re.findall(urgent_patterns, description)
count = len(matches)
return count
def extract_emails_from_text(text: str) -> list[str] | None:
if not text:
return None
email_regex = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")
return email_regex.findall(text)
def create_session(proxy: dict | None = None, is_tls: bool = True):
"""
Creates a tls client session
:return: A session object with or without proxies.
"""
if is_tls:
session = tls_client.Session(
client_identifier="chrome112",
random_tls_extension_order=True,
)
session.proxies = proxy
# TODO multiple proxies
# if self.proxies:
# session.proxies = {
# "http": random.choice(self.proxies),
# "https": random.choice(self.proxies),
# }
else:
session = requests.Session()
session.allow_redirects = True
if proxy:
session.proxies.update(proxy)
return session
def get_enum_from_job_type(job_type_str: str) -> JobType | None:
"""
Given a string, returns the corresponding JobType enum member if a match is found.
"""
res = None
for job_type in JobType:
if job_type_str in job_type.value:
res = job_type
return res
def currency_parser(cur_str):
# Remove any non-numerical characters
# except for ',' '.' or '-' (e.g. EUR)
cur_str = re.sub("[^-0-9.,]", '', cur_str)
# Remove any 000s separators (either , or .)
cur_str = re.sub("[.,]", '', cur_str[:-3]) + cur_str[-3:]
if '.' in list(cur_str[-3:]):
num = float(cur_str)
elif ',' in list(cur_str[-3:]):
num = float(cur_str.replace(',', '.'))
else:
num = float(cur_str)
return np.round(num, 2)

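Given the helpers above, expected behavior on small inputs (outputs checked against the regexes as written):

print(count_urgent_words("Hiring now! Start ASAP."))           # 2
print(extract_emails_from_text("Apply via jobs@example.com"))  # ['jobs@example.com']
print(extract_emails_from_text(""))                            # None
print(currency_parser("$60,000"))                              # 60000.0
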
View File

@@ -5,36 +5,24 @@ jobspy.scrapers.ziprecruiter
 This module contains routines to scrape ZipRecruiter.
 """
 import math
-import json
+import time
 import re
-import traceback
-from datetime import datetime
-from typing import Optional, Tuple
-from urllib.parse import urlparse, parse_qs
-import tls_client
-import requests
+from datetime import datetime, date
+from typing import Optional, Tuple, Any
 from bs4 import BeautifulSoup
-from bs4.element import Tag
-from concurrent.futures import ThreadPoolExecutor, Future
+from concurrent.futures import ThreadPoolExecutor

 from .. import Scraper, ScraperInput, Site
 from ..exceptions import ZipRecruiterException
-from ...jobs import (
-    JobPost,
-    Compensation,
-    CompensationInterval,
-    Location,
-    JobResponse,
-    JobType,
-    Country,
-)
+from ..utils import count_urgent_words, extract_emails_from_text, create_session
+from ...jobs import JobPost, Compensation, Location, JobResponse, JobType, Country


 class ZipRecruiterScraper(Scraper):
     def __init__(self, proxy: Optional[str] = None):
         """
-        Initializes LinkedInScraper with the ZipRecruiter job search url
+        Initializes ZipRecruiterScraper with the ZipRecruiter job search url
         """
         site = Site(Site.ZIP_RECRUITER)
         self.url = "https://www.ziprecruiter.com"
@@ -42,29 +30,26 @@ class ZipRecruiterScraper(Scraper):
         self.jobs_per_page = 20
         self.seen_urls = set()
-        self.session = tls_client.Session(
-            client_identifier="chrome112", random_tls_extension_order=True
-        )

     def find_jobs_in_page(
-        self, scraper_input: ScraperInput, page: int
-    ) -> tuple[list[JobPost], int | None]:
+        self, scraper_input: ScraperInput, continue_token: str | None = None
+    ) -> Tuple[list[JobPost], Optional[str]]:
         """
         Scrapes a page of ZipRecruiter for jobs with scraper_input criteria
         :param scraper_input:
-        :param page:
-        :param session:
-        :return: jobs found on page, total number of jobs found for search
+        :param continue_token:
+        :return: jobs found on page
         """
-        job_list: list[JobPost] = []
+        params = self.add_params(scraper_input)
+        if continue_token:
+            params["continue"] = continue_token
         try:
-            response = self.session.get(
-                self.url + "/jobs-search",
-                headers=ZipRecruiterScraper.headers(),
-                params=ZipRecruiterScraper.add_params(scraper_input, page),
-                allow_redirects=True,
-                proxy=self.proxy,
-                timeout_seconds=10,
-            )
+            session = create_session(self.proxy, is_tls=False)
+            response = session.get(
+                f"https://api.ziprecruiter.com/jobs-app/jobs",
+                headers=self.headers(),
+                params=self.add_params(scraper_input),
+                timeout=10,
+            )
             if response.status_code != 200:
                 raise ZipRecruiterException(
@@ -74,194 +59,68 @@ class ZipRecruiterScraper(Scraper):
if "Proxy responded with non 200 code" in str(e): if "Proxy responded with non 200 code" in str(e):
raise ZipRecruiterException("bad proxy") raise ZipRecruiterException("bad proxy")
raise ZipRecruiterException(str(e)) raise ZipRecruiterException(str(e))
else:
soup = BeautifulSoup(response.text, "html.parser")
js_tag = soup.find("script", {"id": "js_variables"})
if js_tag: time.sleep(5)
page_json = json.loads(js_tag.string) response_data = response.json()
jobs_list = page_json.get("jobList") jobs_list = response_data.get("jobs", [])
if jobs_list: next_continue_token = response_data.get("continue", None)
page_variant = "javascript"
# print('type javascript', len(jobs_list))
else:
page_variant = "html_2"
jobs_list = soup.find_all("div", {"class": "job_content"})
# print('type 2 html', len(jobs_list))
else:
page_variant = "html_1"
jobs_list = soup.find_all("li", {"class": "job-listing"})
# print('type 1 html', len(jobs_list))
# with open("zip_method_8.html", "w") as f:
# f.write(soup.prettify())
with ThreadPoolExecutor(max_workers=10) as executor: with ThreadPoolExecutor(max_workers=self.jobs_per_page) as executor:
if page_variant == "javascript": job_results = [executor.submit(self.process_job, job) for job in jobs_list]
job_results = [
executor.submit(self.process_job_javascript, job)
for job in jobs_list
]
elif page_variant == "html_1":
job_results = [
executor.submit(self.process_job_html_1, job) for job in jobs_list
]
elif page_variant == "html_2":
job_results = [
executor.submit(self.process_job_html_2, job) for job in jobs_list
]
job_list = [result.result() for result in job_results if result.result()] job_list = [result.result() for result in job_results if result.result()]
return job_list return job_list, next_continue_token
def scrape(self, scraper_input: ScraperInput) -> JobResponse: def scrape(self, scraper_input: ScraperInput) -> JobResponse:
""" """
Scrapes ZipRecruiter for jobs with scraper_input criteria Scrapes ZipRecruiter for jobs with scraper_input criteria.
:param scraper_input: :param scraper_input: Information about job search criteria.
:return: job_response :return: JobResponse containing a list of jobs.
""" """
#: get first page to initialize session job_list: list[JobPost] = []
job_list: list[JobPost] = self.find_jobs_in_page(scraper_input, 1) continue_token = None
pages_to_process = max(
3, math.ceil(scraper_input.results_wanted / self.jobs_per_page) max_pages = math.ceil(scraper_input.results_wanted / self.jobs_per_page)
for page in range(1, max_pages + 1):
if len(job_list) >= scraper_input.results_wanted:
break
jobs_on_page, continue_token = self.find_jobs_in_page(
scraper_input, continue_token
) )
if jobs_on_page:
job_list.extend(jobs_on_page)
with ThreadPoolExecutor(max_workers=10) as executor: if not continue_token:
futures: list[Future] = [ break
executor.submit(self.find_jobs_in_page, scraper_input, page)
for page in range(2, pages_to_process + 1)
]
for future in futures:
jobs = future.result()
job_list += jobs
if len(job_list) > scraper_input.results_wanted:
job_list = job_list[: scraper_input.results_wanted] job_list = job_list[: scraper_input.results_wanted]
return JobResponse(jobs=job_list) return JobResponse(jobs=job_list)
def process_job_html_1(self, job: Tag) -> Optional[JobPost]: @staticmethod
""" def process_job(job: dict) -> JobPost:
Parses a job from the job content tag """Processes an individual job dict from the response"""
:param job: BeautifulSoup Tag for one job post title = job.get("name")
:return JobPost job_url = job.get("job_url")
"""
job_url = job.find("a", {"class": "job_link"})["href"]
if job_url in self.seen_urls:
return None
title = job.find("h2", {"class": "title"}).text
company = job.find("a", {"class": "company_name"}).text.strip()
description, updated_job_url = self.get_description(job_url)
job_url = updated_job_url if updated_job_url else job_url
if description is None:
description = job.find("p", {"class": "job_snippet"}).text.strip()
job_type_element = job.find("li", {"class": "perk_item perk_type"})
job_type = None
if job_type_element:
job_type_text = (
job_type_element.text.strip().lower().replace("_", "").replace(" ", "")
)
job_type = ZipRecruiterScraper.get_job_type_enum(job_type_text)
date_posted = ZipRecruiterScraper.get_date_posted(job)
job_post = JobPost(
title=title,
description=description,
company_name=company,
location=ZipRecruiterScraper.get_location(job),
job_type=job_type,
compensation=ZipRecruiterScraper.get_compensation(job),
date_posted=date_posted,
job_url=job_url,
)
return job_post
def process_job_html_2(self, job: Tag) -> Optional[JobPost]:
"""
Parses a job from the job content tag for a second variat of HTML that ZR uses
:param job: BeautifulSoup Tag for one job post
:return JobPost
"""
job_url = job.find("a", class_="job_link")["href"]
title = job.find("h2", class_="title").text
company = job.find("a", class_="company_name").text.strip()
description, updated_job_url = self.get_description(job_url)
job_url = updated_job_url if updated_job_url else job_url
if description is None:
description = job.find("p", class_="job_snippet").get_text().strip()
job_type_text = job.find("li", class_="perk_item perk_type")
job_type = None
if job_type_text:
job_type_text = (
job_type_text.get_text()
.strip()
.lower()
.replace("-", "")
.replace(" ", "")
)
job_type = ZipRecruiterScraper.get_job_type_enum(job_type_text)
date_posted = ZipRecruiterScraper.get_date_posted(job)
job_post = JobPost(
title=title,
description=description,
company_name=company,
location=ZipRecruiterScraper.get_location(job),
job_type=job_type,
compensation=ZipRecruiterScraper.get_compensation(job),
date_posted=date_posted,
job_url=job_url,
)
return job_post
def process_job_javascript(self, job: dict) -> JobPost:
title = job.get("Title")
job_url = job.get("JobURL")
description, updated_job_url = self.get_description(job_url)
job_url = updated_job_url if updated_job_url else job_url
if description is None:
description = BeautifulSoup( description = BeautifulSoup(
job.get("Snippet", "").strip(), "html.parser" job.get("job_description", "").strip(), "html.parser"
).get_text() ).get_text()
company = job.get("OrgName") company = job["hiring_company"].get("name") if "hiring_company" in job else None
country_value = "usa" if job.get("job_country") == "US" else "canada"
country_enum = Country.from_string(country_value)
location = Location( location = Location(
city=job.get("City"), state=job.get("State"), country=Country.US_CANADA city=job.get("job_city"), state=job.get("job_state"), country=country_enum
) )
job_type = ZipRecruiterScraper.get_job_type_enum( job_type = ZipRecruiterScraper.get_job_type_enum(
job.get("EmploymentType", "").replace("-", "").lower() job.get("employment_type", "").replace("_", "").lower()
) )
formatted_salary = job.get("FormattedSalaryShort", "")
salary_parts = formatted_salary.split(" ")
min_salary_str = salary_parts[0][1:].replace(",", "")
if "." in min_salary_str:
min_amount = int(float(min_salary_str) * 1000)
else:
min_amount = int(min_salary_str.replace("K", "000"))
if len(salary_parts) >= 3 and salary_parts[2].startswith("$"):
max_salary_str = salary_parts[2][1:].replace(",", "")
if "." in max_salary_str:
max_amount = int(float(max_salary_str) * 1000)
else:
max_amount = int(max_salary_str.replace("K", "000"))
else:
max_amount = 0
compensation = Compensation(
interval=CompensationInterval.YEARLY,
min_amount=min_amount,
max_amount=max_amount,
currency="USD/CAD",
)
save_job_url = job.get("SaveJobURL", "") save_job_url = job.get("SaveJobURL", "")
posted_time_match = re.search( posted_time_match = re.search(
r"posted_time=(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)", save_job_url r"posted_time=(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)", save_job_url
@@ -272,62 +131,43 @@ class ZipRecruiterScraper(Scraper):
             date_posted = date_posted_obj.date()
         else:
             date_posted = date.today()
-        job_url = job.get("JobURL")

         return JobPost(
             title=title,
-            description=description,
             company_name=company,
             location=location,
             job_type=job_type,
-            compensation=compensation,
+            compensation=Compensation(
+                interval="yearly"
+                if job.get("compensation_interval") == "annual"
+                else job.get("compensation_interval"),
+                min_amount=int(job["compensation_min"])
+                if "compensation_min" in job
+                else None,
+                max_amount=int(job["compensation_max"])
+                if "compensation_max" in job
+                else None,
+                currency=job.get("compensation_currency"),
+            ),
             date_posted=date_posted,
             job_url=job_url,
+            description=description,
+            emails=extract_emails_from_text(description) if description else None,
+            num_urgent_words=count_urgent_words(description) if description else None,
         )
-        return job_post

     @staticmethod
-    def get_job_type_enum(job_type_str: str) -> Optional[JobType]:
+    def get_job_type_enum(job_type_str: str) -> list[JobType] | None:
         for job_type in JobType:
             if job_type_str in job_type.value:
-                a = True
-                return job_type
+                return [job_type]
         return None

-    def get_description(self, job_page_url: str) -> Tuple[Optional[str], Optional[str]]:
-        """
-        Retrieves job description by going to the job page url
-        :param job_page_url:
-        :param session:
-        :return: description or None, response url
-        """
-        try:
-            response = requests.get(
-                job_page_url,
-                headers=ZipRecruiterScraper.headers(),
-                allow_redirects=True,
-                timeout=5,
-                proxies=self.proxy,
-            )
-            if response.status_code not in range(200, 400):
-                return None, None
-        except Exception as e:
-            return None, None
-
-        html_string = response.content
-        soup_job = BeautifulSoup(html_string, "html.parser")
-
-        job_description_div = soup_job.find("div", {"class": "job_description"})
-        if job_description_div:
-            return job_description_div.text.strip(), response.url
-        return None, response.url
-
     @staticmethod
-    def add_params(scraper_input, page) -> Optional[str]:
+    def add_params(scraper_input) -> dict[str, str | Any]:
         params = {
             "search": scraper_input.search_term,
             "location": scraper_input.location,
-            "page": page,
             "form": "jobs-landing",
         }
         job_type_value = None
@@ -352,107 +192,6 @@ class ZipRecruiterScraper(Scraper):
         return params

     @staticmethod
-    def get_interval(interval_str: str):
-        """
-        Maps the interval alias to its appropriate CompensationInterval.
-        :param interval_str
-        :return: CompensationInterval
-        """
-        interval_alias = {"annually": CompensationInterval.YEARLY}
-        interval_str = interval_str.lower()
-
-        if interval_str in interval_alias:
-            return interval_alias[interval_str]
-
-        return CompensationInterval(interval_str)
-
-    @staticmethod
-    def get_date_posted(job: BeautifulSoup) -> Optional[datetime.date]:
-        """
-        Extracts the date a job was posted
-        :param job
-        :return: date the job was posted or None
-        """
-        button = job.find(
-            "button", {"class": "action_input save_job zrs_btn_secondary_200"}
-        )
-        if not button:
-            return None
-
-        url_time = button.get("data-href", "")
-        url_components = urlparse(url_time)
-        params = parse_qs(url_components.query)
-        posted_time_str = params.get("posted_time", [None])[0]
-
-        if posted_time_str:
-            posted_date = datetime.strptime(
-                posted_time_str, "%Y-%m-%dT%H:%M:%SZ"
-            ).date()
-            return posted_date
-
-        return None
-
-    @staticmethod
-    def get_compensation(job: BeautifulSoup) -> Optional[Compensation]:
-        """
-        Parses the compensation tag from the job BeautifulSoup object
-        :param job
-        :return: Compensation object or None
-        """
-        pay_element = job.find("li", {"class": "perk_item perk_pay"})
-        if pay_element is None:
-            return None
-        pay = pay_element.find("div", {"class": "value"}).find("span").text.strip()
-
-        def create_compensation_object(pay_string: str) -> Compensation:
-            """
-            Creates a Compensation object from a pay_string
-            :param pay_string
-            :return: compensation
-            """
-            interval = ZipRecruiterScraper.get_interval(pay_string.split()[-1])
-
-            amounts = []
-            for amount in pay_string.split("to"):
-                amount = amount.replace(",", "").strip("$ ").split(" ")[0]
-                if "K" in amount:
-                    amount = amount.replace("K", "")
-                    amount = int(float(amount)) * 1000
-                else:
-                    amount = int(float(amount))
-                amounts.append(amount)
-
-            compensation = Compensation(
-                interval=interval,
-                min_amount=min(amounts),
-                max_amount=max(amounts),
-                currency="USD/CAD",
-            )
-
-            return compensation
-
-        return create_compensation_object(pay)
-
-    @staticmethod
-    def get_location(job: BeautifulSoup) -> Location:
-        """
-        Extracts the job location from BeatifulSoup object
-        :param job:
-        :return: location
-        """
-        location_link = job.find("a", {"class": "company_location"})
-        if location_link is not None:
-            location_string = location_link.text.strip()
-            parts = location_string.split(", ")
-            if len(parts) == 2:
-                city, state = parts
-            else:
-                city, state = None, None
-        else:
-            city, state = None, None
-        return Location(city=city, state=state, country=Country.US_CANADA)

     @staticmethod
     def headers() -> dict:
         """
@@ -460,5 +199,13 @@ class ZipRecruiterScraper(Scraper):
         :return: dict - Dictionary containing headers
         """
         return {
-            "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36"
+            "Host": "api.ziprecruiter.com",
+            "Cookie": "ziprecruiter_browser=018188e0-045b-4ad7-aa50-627a6c3d43aa; ziprecruiter_session=5259b2219bf95b6d2299a1417424bc2edc9f4b38; SplitSV=2016-10-19%3AU2FsdGVkX19f9%2Bx70knxc%2FeR3xXR8lWoTcYfq5QjmLU%3D%0A; __cf_bm=qXim3DtLPbOL83GIp.ddQEOFVFTc1OBGPckiHYxcz3o-1698521532-0-AfUOCkgCZyVbiW1ziUwyefCfzNrJJTTKPYnif1FZGQkT60dMowmSU/Y/lP+WiygkFPW/KbYJmyc+MQSkkad5YygYaARflaRj51abnD+SyF9V; zglobalid=68d49bd5-0326-428e-aba8-8a04b64bc67c.af2d99ff7c03.653d61bb; ziprecruiter_browser=018188e0-045b-4ad7-aa50-627a6c3d43aa; ziprecruiter_session=5259b2219bf95b6d2299a1417424bc2edc9f4b38",
+            "accept": "*/*",
+            "x-zr-zva-override": "100000000;vid:ZT1huzm_EQlDTVEc",
+            "x-pushnotificationid": "0ff4983d38d7fc5b3370297f2bcffcf4b3321c418f5c22dd152a0264707602a0",
+            "x-deviceid": "D77B3A92-E589-46A4-8A39-6EF6F1D86006",
+            "user-agent": "Job Search/87.0 (iPhone; CPU iOS 16_6_1 like Mac OS X)",
+            "authorization": "Basic YTBlZjMyZDYtN2I0Yy00MWVkLWEyODMtYTI1NDAzMzI0YTcyOg==",
+            "accept-language": "en-US,en;q=0.9",
         }

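The rewritten scrape() walks the jobs-app API with a continue token instead of page numbers. A condensed, self-contained sketch of that loop (fetch_page stands in for find_jobs_in_page, with a toy fetcher for demonstration):

import math

def paginate(fetch_page, results_wanted: int, jobs_per_page: int = 20) -> list:
    # fetch_page(token) -> (jobs, next_token), mirroring find_jobs_in_page above
    job_list, token = [], None
    for _ in range(math.ceil(results_wanted / jobs_per_page)):
        if len(job_list) >= results_wanted:
            break
        jobs, token = fetch_page(token)
        job_list.extend(jobs)
        if not token:
            break
    return job_list[:results_wanted]

# toy fetcher: three fake pages, then no continue token
pages = {None: (["a"] * 20, "t1"), "t1": (["b"] * 20, "t2"), "t2": (["c"] * 20, None)}
print(len(paginate(lambda token: pages[token], results_wanted=45)))  # 45
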
View File

@@ -1,10 +1,14 @@
 from ..jobspy import scrape_jobs
+import pandas as pd


 def test_all():
     result = scrape_jobs(
-        site_name=["linkedin", "indeed", "zip_recruiter"],
+        site_name=["linkedin", "indeed", "zip_recruiter", "glassdoor"],
         search_term="software engineer",
         results_wanted=5,
     )

-    assert result is not None and result.errors.empty is True
+    assert (
+        isinstance(result, pd.DataFrame) and not result.empty
+    ), "Result should be a non-empty DataFrame"

View File

@@ -0,0 +1,11 @@
from ..jobspy import scrape_jobs
import pandas as pd
def test_indeed():
result = scrape_jobs(
site_name="glassdoor", search_term="software engineer", country_indeed="USA"
)
assert (
isinstance(result, pd.DataFrame) and not result.empty
), "Result should be a non-empty DataFrame"

View File

@@ -1,9 +1,11 @@
 from ..jobspy import scrape_jobs
+import pandas as pd


 def test_indeed():
     result = scrape_jobs(
-        site_name="indeed",
-        search_term="software engineer",
+        site_name="indeed", search_term="software engineer", country_indeed="usa"
     )
-    assert result is not None and result.errors.empty is True
+    assert (
+        isinstance(result, pd.DataFrame) and not result.empty
+    ), "Result should be a non-empty DataFrame"

View File

@@ -1,4 +1,5 @@
 from ..jobspy import scrape_jobs
+import pandas as pd


 def test_linkedin():
@@ -6,4 +7,6 @@ def test_linkedin():
site_name="linkedin", site_name="linkedin",
search_term="software engineer", search_term="software engineer",
) )
assert result is not None and result.errors.empty is True assert (
isinstance(result, pd.DataFrame) and not result.empty
), "Result should be a non-empty DataFrame"

View File

@@ -1,4 +1,5 @@
 from ..jobspy import scrape_jobs
+import pandas as pd


 def test_ziprecruiter():
@@ -7,4 +8,6 @@ def test_ziprecruiter():
search_term="software engineer", search_term="software engineer",
) )
assert result is not None and result.errors.empty is True assert (
isinstance(result, pd.DataFrame) and not result.empty
), "Result should be a non-empty DataFrame"