How to Scrape LinkedIn in 2026
LinkedIn uses aggressive anti-bot detection, login walls for profile pages, JavaScript-heavy rendering, and strict rate limiting. This guide focuses on scraping public job listings, which are accessible without authentication. With Browser7, you get fully rendered job search results in a single API call.
What makes LinkedIn hard to scrape
Aggressive anti-bot detection
LinkedIn uses advanced browser fingerprinting, behavioral analysis, and request pattern detection to identify automated traffic. Datacenter IPs are blocked immediately, and even residential proxies can be flagged if request patterns look automated.
Login walls for profiles
Most LinkedIn profile and company pages require authentication to view. However, public job search results are accessible without logging in, making them the most practical target for data extraction.
JavaScript rendering
LinkedIn renders job cards, company details, and search filters using client-side JavaScript. A simple HTTP request returns an incomplete page shell. You need a real browser to get the actual job listing data.
Rate limiting
LinkedIn enforces strict rate limits and will temporarily block IPs that make too many requests in a short period. Even legitimate-looking traffic can trigger "unusual activity" warnings if the volume is too high.
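If you schedule requests yourself, randomized delays between them make the traffic pattern less machine-regular. A minimal pacing sketch; the delay values are illustrative, not published LinkedIn thresholds:

```python
import random
import time

def paced(urls, base_delay=5.0, jitter=3.0):
    """Yield URLs one at a time, sleeping a randomized interval between them."""
    for i, url in enumerate(urls):
        if i > 0:
            # Randomized gap so the request cadence does not look machine-regular
            time.sleep(base_delay + random.uniform(0, jitter))
        yield url

# Usage: iterate the generator instead of the raw URL list
for url in paced(["https://example.com/a", "https://example.com/b"], base_delay=0, jitter=0):
    print(url)
```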
Scrape LinkedIn job search results
Browser7 handles proxy rotation, browser fingerprinting, CAPTCHA solving, and JavaScript rendering automatically. This example scrapes public job listings for "software engineer" in the United States and returns the fully rendered HTML with all job cards.
from browser7 import Browser7

client = Browser7(
    api_key="b7_your_api_key",
    base_url="https://ca-api.browser7.com/v1"
)

result = client.render(
    "https://www.linkedin.com/jobs/search/?keywords=software+engineer&location=United+States",
    country_code="US",
)

print(result.html)

That is the complete code. No proxy configuration, no browser setup, no CAPTCHA handling logic. The response contains the fully rendered HTML of the LinkedIn job search page, including all job cards with titles, companies, locations, and posting dates.
Data you can extract
The rendered HTML contains all the data LinkedIn shows to a real visitor on public job search pages. Common data points to extract:
Job details
- Job title and seniority level
- Full job description (on detail pages)
- Employment type (full-time, contract, etc.)
- Remote/hybrid/on-site designation
- Link to full job posting
Company info
- Company name
- Company logo URL
- Company LinkedIn page link
- Industry and company size
- Hiring activity indicators
Posting metadata
- Date posted
- Number of applicants
- Easy Apply availability
- Salary range (when listed)
- Benefits highlights
Search metadata
- Total results count
- Search filters applied
- Results per page
- Pagination info
- Related search suggestions
Complete example: render and parse job listings
Here is a complete example that renders a LinkedIn job search page and extracts structured data from the HTML. The Python example uses BeautifulSoup, Node.js uses Cheerio, and PHP uses DOMDocument, the standard HTML parser for each language.
from browser7 import Browser7
from bs4 import BeautifulSoup
import json

client = Browser7(
    api_key="b7_your_api_key",
    base_url="https://ca-api.browser7.com/v1"
)

result = client.render(
    "https://www.linkedin.com/jobs/search/?keywords=software+engineer&location=United+States",
    country_code="US",
)

soup = BeautifulSoup(result.html, "html.parser")

jobs = []
for i, card in enumerate(soup.select("div.base-card")):
    job = {
        "position": i + 1,
        "title": None,
        "company": None,
        "location": None,
        "date": None,
    }
    title = card.select_one("h3.base-search-card__title")
    if title:
        job["title"] = title.get_text(strip=True)
    company = card.select_one("h4.base-search-card__subtitle")
    if company:
        job["company"] = company.get_text(strip=True)
    location = card.select_one("span.job-search-card__location")
    if location:
        job["location"] = location.get_text(strip=True)
    date = card.select_one("time[datetime]")
    if date:
        job["date"] = date.get("datetime")
    jobs.append(job)

print(json.dumps(jobs[:5], indent=2))

CSS selectors may change if LinkedIn updates its page structure. Inspect the current page if any fields return null.
Sample output:
[
  {
    "position": 1,
    "title": "Senior Software Engineer",
    "company": "Google",
    "location": "Mountain View, CA",
    "date": "2026-04-09"
  },
  {
    "position": 2,
    "title": "Software Engineer II",
    "company": "Microsoft",
    "location": "Redmond, WA (Remote)",
    "date": "2026-04-08"
  },
  ...
]

Scrape page 2 and beyond
LinkedIn job search uses the &start= parameter for pagination. Each page shows 25 results, so page 2 is &start=25, page 3 is &start=50, and so on.
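That offset arithmetic is easy to get wrong by one page, so it can help to wrap it in a tiny helper. A sketch, assuming 25 results per page as described above; the base URL matches the search used throughout this guide:

```python
BASE = ("https://www.linkedin.com/jobs/search/"
        "?keywords=software+engineer&location=United+States")

def page_url(page, per_page=25):
    """Return the search URL for a 1-indexed results page."""
    start = (page - 1) * per_page
    # The first page has no &start= parameter at all
    return BASE if start == 0 else f"{BASE}&start={start}"

print(page_url(1))  # no &start= on the first page
print(page_url(3))  # ...&start=50
```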
from browser7 import Browser7
from bs4 import BeautifulSoup

client = Browser7(
    api_key="b7_your_api_key",
    base_url="https://ca-api.browser7.com/v1"
)

# Page 2: add &start=25 (25 results per page)
result = client.render(
    "https://www.linkedin.com/jobs/search/?keywords=software+engineer&location=United+States&start=25",
    country_code="US",
)

soup = BeautifulSoup(result.html, "html.parser")
for card in soup.select("div.base-card"):
    title = card.select_one("h3.base-search-card__title")
    company = card.select_one("h4.base-search-card__subtitle")
    if title and company:
        print(f"{title.get_text(strip=True)} at {company.get_text(strip=True)}")

Take a screenshot of job results
Capture LinkedIn job search results as an image for recruitment dashboards, market research reports, or tracking job posting trends over time.
import base64
from browser7 import Browser7

client = Browser7(
    api_key="b7_your_api_key",
    base_url="https://ca-api.browser7.com/v1"
)

result = client.render(
    "https://www.linkedin.com/jobs/search/?keywords=software+engineer&location=United+States",
    country_code="US",
    block_images=False,
    include_screenshot=True,
    screenshot_full_page=True,
    screenshot_format="png"
)

# Save the screenshot
with open("linkedin-jobs.png", "wb") as f:
    f.write(base64.b64decode(result.screenshot))
print("Screenshot saved")

What this costs
Every LinkedIn page render costs $0.01, the same as any other website. Residential proxies, JavaScript rendering, CAPTCHA solving, and screenshots are all included. There are no per-domain surcharges, no credit multipliers, and no bandwidth fees.
10,000 LinkedIn job search pages cost $100. You know this before you start, not after.
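The flat rate makes budgeting a one-line calculation. A sketch of the arithmetic, using the $0.01-per-render price quoted above:

```python
PRICE_PER_RENDER = 0.01  # USD, flat rate per rendered page

def crawl_cost(pages, searches=1):
    """Total cost in USD for rendering `pages` result pages per search query."""
    return pages * searches * PRICE_PER_RENDER

print(f"${crawl_cost(10_000):,.2f}")  # 10,000 pages -> $100.00
```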