Guides / Scrape Google

How to Scrape Google Search Results in 2026

Google actively blocks automated access to search results using reCAPTCHA, IP-based rate limiting, and browser fingerprint detection. Scraping it yourself means managing proxy rotation, solving CAPTCHAs, and parsing constantly changing HTML structures. With Browser7, you get fully rendered search results in a single API call.

What makes Google hard to scrape

reCAPTCHA challenges

Google serves reCAPTCHA challenges aggressively when it detects automated traffic. Even with residential proxies, high request volumes or suspicious patterns will trigger challenges that block your scraping entirely.

IP-based rate limiting

Google tracks request patterns per IP address. Too many searches from the same IP in a short period will result in temporary blocks or CAPTCHA walls. Datacenter IPs are blocked almost immediately.

Geo-targeted results

Search results vary significantly by location. The same query returns different organic results, ads, and featured snippets depending on the searcher's country and city. To see what a user in London sees versus New York, you need proxies in those locations.

Obfuscated HTML structure

Google uses minified CSS class names that are not human-readable and can change without notice. Parsing Google search results requires identifying the current class names for result containers, titles, URLs, and snippets.

Scrape a Google search query

Browser7 handles proxy rotation, CAPTCHA solving, and JavaScript rendering automatically. This example uses the North America API endpoint, geo-targets the US, and enables automatic CAPTCHA handling in case Google serves a challenge.

from browser7 import Browser7

client = Browser7(
    api_key="b7_your_api_key",
    base_url="https://ca-api.browser7.com/v1"
)

result = client.render(
    "https://www.google.com/search?q=refurbished+phones",
    country_code="US",
    captcha="auto"
)

print(result.html)

The response contains the fully rendered Google search results page, including organic results, featured snippets, and any other SERP features Google shows for that query.

Data you can extract

The rendered HTML contains everything Google shows to a real searcher. Common data points to extract from search results:

Organic results

  • Page title and display URL
  • Destination URL
  • Result snippet / description
  • Position on page
  • Sitelinks (if present)

SERP features

  • Featured snippets
  • People Also Ask questions
  • Knowledge panels
  • Image and video carousels
  • Local pack results

Ads and shopping

  • Sponsored results (top and bottom)
  • Shopping carousel products
  • Ad copy and display URLs
  • Ad position tracking

Metadata

  • Total estimated results
  • Related searches
  • Search suggestions
  • Pagination links

Complete example: parse organic results

Here is a complete example that renders a Google search results page and extracts structured data for each organic result, using Python with BeautifulSoup.

from browser7 import Browser7
from bs4 import BeautifulSoup
import json

client = Browser7(
    api_key="b7_your_api_key",
    base_url="https://ca-api.browser7.com/v1"
)

result = client.render(
    "https://www.google.com/search?q=refurbished+phones",
    country_code="US",
    captcha="auto"
)

soup = BeautifulSoup(result.html, "html.parser")

results = []
for i, container in enumerate(soup.select("div.tF2Cxc")):
    item = {
        "position": i + 1,
        "title": None,
        "url": None,
        "snippet": None,
    }

    h3 = container.find("h3")
    if h3:
        item["title"] = h3.get_text(strip=True)

    link = container.select_one("div.yuRUbf a")
    if link:
        item["url"] = link.get("href")

    snippet = container.select_one("div.VwiC3b")
    if snippet:
        item["snippet"] = snippet.get_text(strip=True)

    results.append(item)

print(json.dumps(results, indent=2))

Google uses obfuscated class names that may change periodically. Inspect the current page structure if any fields return null.

Sample output:

[
  {
    "position": 1,
    "title": "Buy Used Cell Phones | Certified Refurbished Phones",
    "url": "https://buy.gazelle.com/collections/cell-phones",
    "snippet": "Gazelle is a leading provider of quality refurbished cell phones..."
  },
  {
    "position": 2,
    "title": "Professionally refurbished Phones - Back Market",
    "url": "https://www.backmarket.com/en-us/l/smartphones/...",
    "snippet": "Find the best deals on professionally refurbished smartphones..."
  },
  ...
]
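If the class-name selectors above stop matching after a markup change, one defensive option is to anchor on structure instead of class names: each organic title sits in an h3 wrapped by its result link. Here is a minimal sketch of that approach, run against a small hypothetical HTML fragment (the class names and layout below are illustrative, not Google's actual markup):

```python
from bs4 import BeautifulSoup

# Hypothetical SERP fragment: the class names are made up, but the
# link-wrapping-an-h3 structure mirrors Google's organic results.
html = """
<div id="search">
  <div class="aBc12"><a href="https://example.com/a"><h3>First result</h3></a>
    <span>Snippet A</span></div>
  <div class="aBc12"><a href="https://example.com/b"><h3>Second result</h3></a>
    <span>Snippet B</span></div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# Select links that contain an <h3>, rather than matching class names.
results = [
    {"title": a.h3.get_text(strip=True), "url": a["href"]}
    for a in soup.select("a:has(h3)")
]

print(results)
```

Structural selectors like this tend to survive class-name rotations, though they can over-match (for example, video or news links that also wrap an h3), so spot-check the output.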

Scrape page 2 and beyond

Google's pagination uses the start query parameter. Page 1 is the default, page 2 is &start=10, page 3 is &start=20, and so on. No special pagination handling is needed - just change the URL.

from browser7 import Browser7
from bs4 import BeautifulSoup

client = Browser7(
    api_key="b7_your_api_key",
    base_url="https://ca-api.browser7.com/v1"
)

# Page 2: add &start=10 to skip the first 10 results
result = client.render(
    "https://www.google.com/search?q=refurbished+phones&start=10",
    country_code="US",
    captcha="auto"
)

soup = BeautifulSoup(result.html, "html.parser")
for i, container in enumerate(soup.select("div.tF2Cxc")):
    h3 = container.find("h3")
    title = h3.get_text(strip=True) if h3 else "No title"
    print(f"{i + 11}. {title}")
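The start arithmetic generalizes to a small helper when you need to walk many pages. This is a plain-Python sketch; google_search_url is a hypothetical helper for this guide, not part of the Browser7 SDK:

```python
from urllib.parse import urlencode

def google_search_url(query: str, page: int) -> str:
    """Build a Google search URL for a 1-indexed results page.

    Page N starts at result offset (N - 1) * 10, so page 1 omits the
    start parameter, page 2 uses start=10, page 3 uses start=20.
    """
    params = {"q": query}
    if page > 1:
        params["start"] = (page - 1) * 10
    return "https://www.google.com/search?" + urlencode(params)

for page in range(1, 4):
    print(google_search_url("refurbished phones", page))
```

Pass each generated URL to client.render exactly as in the page 2 example above.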

Scrape Google from a different country

Google results vary significantly by location - different organic rankings, local results, and featured snippets. Because this example targets the UK, it uses the EU API endpoint for lower latency. Geo-targeting is included in the $0.01 per page price - no extra charge.

from browser7 import Browser7

# Use the EU endpoint for European targets
client = Browser7(
    api_key="b7_your_api_key",
    base_url="https://eu-api.browser7.com/v1"
)

# Get Google UK results from a London IP
result = client.render(
    "https://www.google.co.uk/search?q=refurbished+phones",
    country_code="GB",
    city="london",
    captcha="auto"
)

print(result.html)
print(f"Rendered from: {result.selected_city}")

Take a screenshot of search results

Capture the full SERP as an image for rank tracking dashboards, client reports, or visual diffing. Useful for monitoring how your site appears in search results over time.

import base64
from browser7 import Browser7

client = Browser7(
    api_key="b7_your_api_key",
    base_url="https://ca-api.browser7.com/v1"
)

result = client.render(
    "https://www.google.com/search?q=refurbished+phones",
    country_code="US",
    captcha="auto",
    block_images=False,          # load images so the screenshot matches what a user sees
    include_screenshot=True,
    screenshot_full_page=True,   # capture the entire SERP, not just the first viewport
    screenshot_format="png"
)

# Save the screenshot
with open("google-serp.png", "wb") as f:
    f.write(base64.b64decode(result.screenshot))

print("Screenshot saved")

What this costs

Every Google SERP render costs $0.01 - the same as any other website. Residential proxies, JavaScript rendering, CAPTCHA solving, and geo-targeting are all included. There are no per-domain surcharges (unlike Oxylabs which charges $1.00/1K for Google versus $0.50/1K for other sites), no credit multipliers, and no bandwidth fees.

Scraping 10 pages of results for 100 different keywords costs $10. You know this before you start.
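As a quick sanity check on that arithmetic (a throwaway helper, not part of the SDK):

```python
def estimated_cost(keywords: int, pages_per_keyword: int,
                   price_per_page: float = 0.01) -> float:
    """Flat per-page pricing: total renders times price per render."""
    return keywords * pages_per_keyword * price_per_page

print(f"${estimated_cost(100, 10):.2f}")  # $10.00
```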

Try it yourself

100 free renders - enough to test Google scraping with no payment required.