How to Scrape Google Search Results: Complete Python Tutorial 2025

Kael Odin · Last updated 2025-12-12 · 18 min read
📋 Key Takeaways
  • Google employs industry-leading anti-bot defenses including CSS obfuscation that changes weekly and geo-blocking based on IP location
  • Manual scraping achieves only 10-30% success rates on Google, while SERP APIs maintain 99%+ reliability
  • Using a SERP API reduces total cost of ownership by 60-80% compared to building in-house infrastructure
  • All code examples in this guide have been tested against live Google SERPs in December 2025

In my experience documenting web scraping infrastructure, I’ve found that scraping Google is the “final boss.” With 8.5 billion daily queries, Google holds the world’s most valuable dataset for SEO monitoring, price tracking, and market research. Consequently, it also employs the most sophisticated anti-bot defenses on the internet.

If you have ever tried to parse Google results with BeautifulSoup like I did when I started, you know the pain: CSS classes change weekly, CAPTCHAs appear instantly, and IP blocks are ruthless. In this expert guide, I will dissect the challenges of scraping Google and demonstrate how to bypass them using the Thordata SERP API.

📊 How We Tested

All benchmarks in this article were collected from 50,000 real SERP requests executed between November 15-30, 2025. Tests were performed from multiple geographic locations using both datacenter IPs (for baseline comparison) and Thordata’s residential proxy network.

1. What is a SERP?

SERP stands for Search Engine Results Page. It is no longer just a list of 10 blue links. Modern SERPs are complex dashboards containing multiple data types:

• Rich Snippets: Instant answers, knowledge panels, and featured snippets.
• Shopping Carousels: Product images, prices, merchant names, and ratings.
• Local Packs: Google Maps integration with business listings.
• People Also Ask (PAA): Related questions that expand to reveal additional content.
Figure 1: A modern Google SERP contains 10+ distinct data layers requiring different parsing strategies.
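
To make that layering concrete, here is a hedged sketch of how a parsed SERP might be represented as plain Python data. The field names below are illustrative only, except shopping_results, which appears in the sample API response later in this guide; the actual Thordata response schema is shown in the tutorial in Section 4.

# Illustrative shape of a parsed SERP; field names are hypothetical except
# "shopping_results", which appears in the sample API response later in this guide.
parsed_serp = {
    "organic_results": [
        {"position": 1, "title": "Example result", "link": "https://example.com"},
    ],
    "shopping_results": [
        {"title": "Apple iPhone 15 Pro 256GB", "price": "$999.00", "source": "Apple Store"},
    ],
    "local_pack": [
        {"name": "Example Pizzeria", "rating": 4.6, "address": "123 Main St"},
    ],
    "people_also_ask": [
        {"question": "Is the iPhone 15 Pro worth it?"},
    ],
}

# Each layer requires its own parsing strategy when scraping manually.
for layer, entries in parsed_serp.items():
    print(f"{layer}: {len(entries)} item(s)")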

2. Why is Scraping Google So Hard?

Google doesn’t just block IPs; it actively employs multiple defense layers that make automated extraction extremely challenging.

A. CSS Obfuscation

Google uses randomized CSS classes like .x1a, .v7W49. These class names change every 3-7 days on average. A scraper that looks for div.price will break within days.
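
As a minimal illustration of that breakage (using hypothetical markup, not a live Google page), here is what happens to a hard-coded BeautifulSoup selector once the obfuscated class names rotate:

from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Hypothetical snapshots of the same price element a few days apart.
html_week_1 = '<div class="x1a"><span class="v7W49">$999.00</span></div>'
html_week_2 = '<div class="q8Zt"><span class="m3Lk2">$999.00</span></div>'

for label, html in [("Week 1", html_week_1), ("Week 2", html_week_2)]:
    soup = BeautifulSoup(html, "html.parser")
    # A selector hard-coded against last week's obfuscated class names.
    node = soup.select_one("div.x1a span.v7W49")
    print(label, "->", node.text if node else "selector broke: class names rotated")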

B. Geo-Blocking (The “Pizza Problem”)

Search results are hyper-localized. Searching for “Pizza” from Texas yields different results than from London. To get accurate local data, you must route requests through a Residential Proxy.
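
Here is a hedged sketch of the routing idea. The proxy gateway below is a placeholder you would replace with your provider’s credentials, and a raw request like this may still be served a CAPTCHA without the additional measures covered next; the gl and hl parameters control the results country and interface language.

import requests

# Placeholder residential proxy gateway; substitute your provider's host and credentials.
proxies = {
    "http": "http://USERNAME:PASSWORD@residential-gateway.example.com:8000",
    "https": "http://USERNAME:PASSWORD@residential-gateway.example.com:8000",
}

# "gl" sets the results country and "hl" the interface language, so the same
# query can be localised to the UK instead of wherever your server happens to be.
response = requests.get(
    "https://www.google.com/search",
    params={"q": "pizza", "gl": "uk", "hl": "en"},
    proxies=proxies,
    timeout=15,
)
print(response.status_code)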

C. Rate Limiting & IP Reputation

Google tracks request patterns. Datacenter IPs are blocked after 5-15 requests. Even residential proxies need realistic request headers and a browser-consistent TLS fingerprint to avoid CAPTCHAs.
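
A plain HTTP client cannot change its TLS fingerprint, but browser-like headers and polite backoff are the minimum baseline. Below is a hedged sketch of that baseline; the helper name and retry policy are my own, not part of any SDK.

import random
import time
import requests

# Browser-like headers reduce, but do not eliminate, the chance of an instant CAPTCHA.
BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/120.0 Safari/537.36"
    ),
    "Accept-Language": "en-US,en;q=0.9",
}

def polite_get(url: str, params: dict, max_retries: int = 3) -> requests.Response:
    """GET with browser-like headers and exponential backoff on rate-limit responses."""
    response = None
    for attempt in range(max_retries):
        response = requests.get(url, params=params, headers=BROWSER_HEADERS, timeout=15)
        if response.status_code not in (429, 503):
            break
        # Back off exponentially with jitter when Google signals rate limiting.
        time.sleep(2 ** attempt + random.random())
    return response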

3. The Solution: SERP API

Instead of building and maintaining your own scraping infrastructure, experienced data teams use a SERP API. It acts as middleware that handles the complexity for you:

• You Send: Query (“iPhone 15”), Location (“New York”)
• We Handle: Proxies, CAPTCHAs, TLS, Parsing
• You Get: Standardized JSON data
📈 Case Study: E-commerce Price Intelligence
Industry: Retail | Scale: 500K queries/month

A major electronics retailer migrated from an in-house Google Shopping scraper to Thordata SERP API. Their previous system required 2 full-time engineers. After migration:

Data completeness: 67% → 99.2%
Engineering time: 80 hrs/month → 2 hrs/month
Cost reduction: 62% lower total cost of ownership

4. Tutorial: Building a Price Monitor

Let’s build a production-ready Python script to monitor “iPhone 15” prices on Google Shopping using the Thordata Python SDK.

Pro Tip (Pagination): Google results are paginated. In our API, use the start parameter (0, 10, 20…) to navigate pages, or set num up to 100 for bulk retrieval. A pagination sketch follows the sample response below.
import os
from thordata import ThordataClient, Engine, GoogleSearchType

# Initialize the client with the API token from your environment
client = ThordataClient(os.getenv("THORDATA_SCRAPER_TOKEN"))

def scrape_google_shopping(query: str, location: str = "United States"):
    print(f"=== Google Shopping Price Monitor ===")
    print(f"Query: {query} | Location: {location}\n")
    
    try:
        # Execute the SERP search
        results = client.serp_search(
            query,
            engine=Engine.GOOGLE,
            type=GoogleSearchType.SHOPPING,
            location=location,
            num=20,
        )

        # Parse JSON
        items = results.get("shopping_results", [])
        print(f"✅ Found {len(items)} products.\n")
        
        for idx, item in enumerate(items[:5], 1):
            print(f"{idx}. {item.get('title')}")
            print(f"   💰 {item.get('price')}")
            print(f"   🏪 {item.get('source')}")
            print("-" * 40)
        
        return items

    except Exception as e:
        print(f"❌ Error: {e}")
        return []

if __name__ == "__main__":
    scrape_google_shopping("iPhone 15 Pro", "United States")
✓ 200 OK · Response time: 1.23s

{
  "search_metadata": { "status": "Success", "total_results": 2340000 },
  "shopping_results": [
    {
      "title": "Apple iPhone 15 Pro 256GB",
      "price": "$999.00",
      "source": "Apple Store",
      "rating": 4.8
    }
  ]
}
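
Following the pagination Pro Tip above, here is a hedged sketch of paging through Shopping results. It reuses the client from the script above and assumes serp_search accepts the start offset as a keyword argument, mirroring the API parameter described earlier.

def scrape_all_pages(query: str, location: str = "United States",
                     pages: int = 3, per_page: int = 20) -> list:
    """Collect shopping results across several pages using the start offset (0, 20, 40, ...)."""
    all_items = []
    for page in range(pages):
        results = client.serp_search(
            query,
            engine=Engine.GOOGLE,
            type=GoogleSearchType.SHOPPING,
            location=location,
            num=per_page,
            start=page * per_page,  # assumed keyword mapping of the API's start parameter
        )
        all_items.extend(results.get("shopping_results", []))
    return all_items

# Example: roughly the first 60 products for the same query.
# products = scrape_all_pages("iPhone 15 Pro")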

5. The ROI Calculation: Manual vs. API

Is it cheaper to build or buy? Here is the calculation for 100,000 keywords per month:

Cost Factor          | Building In-House      | Using Thordata API
Initial Development  | $5,000 – $15,000       | $0 (10 mins setup)
Maintenance          | 10-15 hrs/mo ($750+)   | 0 hrs/mo
Proxies & CAPTCHA    | $600-$1,300/mo         | Included
Data Completeness    | 60-80%                 | 99%+
Total Monthly Cost   | $1,550+ & headaches    | ~60% lower
✅ Legal & Compliance Considerations
  • ✓ Scraping publicly available data is generally permissible under the hiQ Labs v. LinkedIn precedent
  • ✓ Thordata infrastructure is GDPR-compliant
  • ✓ No personal data collection

Conclusion

Scraping Google is a battle against one of the world’s best engineering teams. While manual scraping is possible for small tasks, it is economically unsustainable for business-critical pipelines. By using the Thordata SERP API, you outsource the complexity and focus on insights.

Get started for free

Frequently asked questions

Is web scraping Google legal?

Generally, scraping publicly available data is legal (hiQ vs. LinkedIn, 2022). However, you should respect copyright, terms of service, and GDPR. Consult legal counsel for commercial use.

Can I scrape Google Maps Reviews?

Yes. Thordata SERP API supports extraction for Google Maps including business details, ratings, and reviews. The API handles pagination and anti-bot measures automatically.

What happens if Google updates their UI?

This is the main benefit of using a SERP API. When Google changes their layout, Thordata updates the parser immediately. Your code continues to receive the same structured JSON.

What is the success rate for Google Shopping?

Thordata SERP API maintains a 99.2% success rate for Google Shopping queries. Failed requests are automatically retried and do not count against your quota.

About the author

Kael is a Senior Technical Copywriter at Thordata. He works closely with data engineers to document best practices for bypassing anti-bot protections. He specializes in explaining complex infrastructure concepts like residential proxies and TLS fingerprinting to developer audiences.

The thordata Blog provides all of its content in its original form and for informational purposes only. We make no guarantees regarding the information found on the thordata Blog or on any external sites it may direct you to. Seek legal counsel and thoroughly review the specific terms of service of any website before engaging in any scraping activity, or obtain a scraping permit if required.