Web Scraping eCommerce Websites with Python: Step-by-Step

Yulia Taylor · Last updated on 2026-01-29 · 5 min read

eCommerce scrapers enable businesses to extract product data, reviews, and pricing information, allowing them to monitor competitors, track market trends, and optimize strategies in real time. This article provides a detailed, step-by-step guide to web scraping eCommerce websites with Python. By combining Python’s data processing capabilities with Thordata’s infrastructure, developers can bypass CAPTCHAs, IP bans, and dynamic rendering issues to ensure reliable data extraction.

The Value of eCommerce Data

Before diving into the code, it is essential to understand why organizations invest heavily in scraping eCommerce platforms. The insights derived from product pages go far beyond simple price tags.

Dynamic Pricing Optimization

The most common use case is price intelligence. eCommerce giants change prices millions of times a day based on demand, competition, and inventory. By scraping competitor pricing, businesses can implement dynamic pricing algorithms to ensure they remain the most attractive option without sacrificing margins.
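
To make this concrete, here is a toy repricing rule of the kind such an algorithm might apply; the 1% undercut and the 15% margin floor are invented values for illustration, not a recommended strategy:

def reprice(our_cost, competitor_price, min_margin=0.15):
    """Undercut the competitor slightly, but never price below our margin floor."""
    floor = our_cost * (1 + min_margin)   # lowest price we can accept
    target = competitor_price * 0.99      # undercut the competitor by 1%
    return round(max(target, floor), 2)

print(reprice(our_cost=10.00, competitor_price=14.50))  # 14.35
print(reprice(our_cost=10.00, competitor_price=11.00))  # 11.5 (margin floor wins)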

Minimum Advertised Price (MAP) Monitoring

For brands and manufacturers, unauthorized sellers violating MAP agreements can devalue a brand’s reputation. Automated scraping allows manufacturers to scan thousands of retailer pages daily to identify violations and enforce compliance.

Sentiment Analysis and Trend Forecasting

Review sections are goldmines of unstructured data. By scraping customer reviews and ratings, companies can use Natural Language Processing (NLP) to gauge public sentiment toward a product, identify recurring defects, or spot emerging market trends before they become mainstream.
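
As a minimal sketch of that workflow, the snippet below scores a couple of review strings with NLTK's VADER sentiment analyzer; the review texts are invented, and VADER is just one of many possible NLP approaches:

# pip install nltk
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the VADER lexicon

# Invented sample reviews standing in for scraped review text
reviews = [
    "Great battery life, totally worth the price.",
    "Stopped working after two weeks. Very disappointed.",
]

sia = SentimentIntensityAnalyzer()
for text in reviews:
    score = sia.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)
    print(f"{score:+.2f}  {text}")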

The sections that follow provide a step-by-step guide to scraping eCommerce websites with Python and the Thordata Web Scraper API.

Prerequisites and Environment Setup

To follow this tutorial, a development environment needs to be established. This guide assumes a basic understanding of Python syntax.

1. Python Installation

Ensure that Python 3.8 or later is installed on the machine. You can verify this by running `python --version` in the terminal.

2. Thordata Account and Credentials

To use the Web Scraper API, one must have an active Thordata account.

Navigate to the Thordata dashboard.

Sign up for a free trial or log in.

Locate your API Credentials (Username and Password, or an API Token) in the dashboard.

3. Installing Required Libraries

We will use the `requests` library to communicate with Thordata and `pandas` to organize the data (`beautifulsoup4` is included in case you need to parse raw HTML). Open a terminal or command prompt and run:

pip install requests pandas beautifulsoup4

Step 1: Thordata Web Scraper API Logic

The architecture of this scraping solution is straightforward. Instead of sending a request directly to the eCommerce site (e.g., `amazon.com`), the Python script sends a request to the Thordata API endpoint. Inside this request, the target URL is specified as a parameter.

Thordata processes the request using its network of residential IPs and scraping infrastructure, then sends the response back.

Step 2: Writing the Basic Scraper Script

Let’s build a script to scrape a product page. For this example, we will assume we are scraping a generic eCommerce product page to extract the title and price.

import requests
import json

# Configuration
THORDATA_API_URL = "https://scraperapi.thordata.com/builder"
API_TOKEN = "YOUR_BEARER_TOKEN"

def get_product_page(target_url):
    # Each spider parameter entry targets one product URL
    spider_params = [{"url": target_url}]

    payload = {
        'spider_name': 'amazon.com',
        'spider_id': 'amazon_product_by-url',
        'spider_parameters': json.dumps(spider_params),
        'spider_errors': 'true',
        'file_name': 'ecommerce_task'
    }

    headers = {
        'Authorization': f'Bearer {API_TOKEN}',
        'Content-Type': 'application/x-www-form-urlencoded'
    }

    try:
        response = requests.post(
            THORDATA_API_URL,
            headers=headers,
            data=payload,
            timeout=60
        )

        if response.status_code == 200:
            return response.json()
        else:
            print(f"Error: {response.status_code} - {response.text}")
            return None

    except requests.RequestException as e:
        print(f"Request failed: {e}")
        return None

# Example Usage
target_product = "https://www.amazon.com/dp/B0BRXPR726"
json_data = get_product_page(target_product)

if json_data:
    print("Data successfully retrieved via Thordata!")

Step 4: Scaling to Multiple Products (Bulk Scraping)

Scraping one page is useful for testing, but real-world applications require scraping catalogs containing thousands of items. To do this efficiently, we iterate through a list of URLs, parse each response with the `parse_product_data` helper from Step 3, and store the results in a CSV file.

import pandas as pd
import time

urls_to_scrape = [
    "https://www.amazon.com/dp/B0BZYCJK89",
    "https://www.amazon.com/dp/B0BRXPR726",
]

results = []

for url in urls_to_scrape:
    print(f"Scraping {url}...")
    json_response = get_product_page(url)
    
    if json_response:
        product_data = parse_product_data(json_response)
        if product_data:
            product_data['url'] = url
            results.append(product_data)
    
    time.sleep(1) 

if results:
    df = pd.DataFrame(results)
    df.to_csv('ecommerce_data.csv', index=False)
    print("Scraping complete. Data saved to ecommerce_data.csv")

Handling Thordata's "Job" Mode for Large Scale

For truly massive scrapes (e.g., 50,000 pages), waiting for each HTTP request to finish sequentially is too slow. Thordata’s API often supports an asynchronous "Job" mode or "Batch" mode.

In this workflow:

1. The Python script submits a batch of 1,000 URLs to Thordata.

2. Thordata processes them in parallel on its cloud infrastructure.

3. The Python script polls an endpoint to check the status, or receives a webhook when the job is done.

4. The script downloads the compiled JSON result.

Using the asynchronous approach is recommended for enterprise-level data collection to maximize throughput and minimize local resource usage.
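
As a rough sketch of the polling side of that workflow: the `/status` and `/result` endpoints and the `job_id`/`state` fields below are hypothetical placeholders, not documented Thordata routes, so consult the API documentation for the real ones.

import time
import requests

API_TOKEN = "YOUR_BEARER_TOKEN"
HEADERS = {'Authorization': f'Bearer {API_TOKEN}'}

# Hypothetical endpoints for illustration only -- check the Thordata docs
# for the actual job-status and result routes.
STATUS_URL = "https://scraperapi.thordata.com/status"
RESULT_URL = "https://scraperapi.thordata.com/result"

def wait_for_job(job_id, poll_interval=30, max_wait=3600):
    # Poll until the job finishes, then download the compiled result.
    deadline = time.time() + max_wait
    while time.time() < deadline:
        status = requests.get(STATUS_URL, headers=HEADERS,
                              params={'job_id': job_id}, timeout=30).json()
        if status.get('state') == 'finished':  # hypothetical status field
            return requests.get(RESULT_URL, headers=HEADERS,
                                params={'job_id': job_id}, timeout=60).json()
        time.sleep(poll_interval)  # wait before checking again
    raise TimeoutError(f"Job {job_id} did not finish in {max_wait}s")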

Step 5: Advanced Configurations and Error Handling

eCommerce sites are unpredictable. A robust scraper must account for changes in layout, out-of-stock items, or network errors.

Implementing Retries

Even the best proxies fail occasionally. The script should implement a retry mechanism.

def get_product_page_with_retry(target_url, retries=3):
    for attempt in range(retries):
        data = get_product_page(target_url)
        if data:
            return data
        print(f"Attempt {attempt + 1} failed. Retrying...")
        time.sleep(2)
    return None

Dealing with CAPTCHAs

Sometimes, a site forces a CAPTCHA. Thordata’s Web Scraper API typically handles this automatically (the "Unlocker" feature). If the API returns a response indicating a CAPTCHA was encountered but not solved, check the API documentation for the `force_captcha_solve: true` parameter or similar flags to ensure the unlocking engine is engaged.

Custom Headers and Cookies

Some eCommerce sites show different data based on the user's cookies (e.g., a returning user vs. a new user). The Thordata API allows developers to pass custom headers or cookies in the payload. This is useful for scraping pages that require a login session, although scraping behind a login requires careful ethical consideration.
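
A sketch of what that might look like, building on the payload from Step 2; the `headers` and `cookies` parameter names are assumptions for illustration, so verify the supported field names in the Thordata payload reference:

import json

# Hypothetical extra fields -- the 'headers' and 'cookies' keys are assumed
# for illustration; verify the real parameter names in the Thordata docs.
payload = {
    'spider_name': 'amazon.com',
    'spider_id': 'amazon_product_by-url',
    'headers': json.dumps({'Accept-Language': 'de-DE'}),           # e.g., localized content
    'cookies': json.dumps({'session_id': 'YOUR_SESSION_COOKIE'}),  # logged-in session
}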

Legal and Ethical Considerations in Web Scraping

While technology like Thordata makes scraping accessible, it is imperative to operate within legal and ethical boundaries.

Respect Robots.txt

The `robots.txt` file is a standard used by websites to communicate with web crawlers. While not always legally binding depending on the jurisdiction, ignoring it can lead to aggressive blocking. Developers should review the target site's policy.
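
Python's standard library makes this check easy to automate before any scraping starts; for example:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the robots.txt file

url = "https://www.example.com/dp/B0BRXPR726"
if rp.can_fetch("MyScraperBot", url):
    print("Allowed to fetch:", url)
else:
    print("Disallowed by robots.txt:", url)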

Personally Identifiable Information (PII)

When scraping reviews, avoid collecting user names, avatars, or profile links unless necessary and compliant with regulations like GDPR (Europe) or CCPA (California). Focus on the product data and the review text, anonymizing the user identity.

Server Load

Even though Thordata rotates IPs, sending thousands of requests per second to a small eCommerce server is irresponsible and could be considered a Denial of Service (DoS) attack. Always scrape at a rate that does not degrade the performance of the target website for genuine users.

Copyright and Terms of Service

Extracting data for analysis is generally accepted, but republishing that data (e.g., cloning a competitor's catalog to build a copycat site) is often a copyright violation. Always review the Terms of Service of the website being scraped.

Why Choose Thordata Web Scraper API over Building From Scratch?

A common question among developers is: "Why pay for an API when I can build a scraper with Selenium and free proxies?"

The answer lies in Total Cost of Ownership (TCO) and reliability.

1. Maintenance Hell: eCommerce sites update their HTML structure and anti-bot systems weekly. A custom-built scraper requires constant code updates. Thordata manages the unlocking logic on the backend, ensuring the scraper continues to work even when the target site changes its defense mechanisms.

2. Infrastructure Costs: Running headless browsers (Chrome/Firefox) requires significant RAM and CPU. Scaling to scrape millions of pages requires managing a fleet of servers. Thordata offloads this computation to its cloud.

3. Proxy Management: High-quality residential proxies are expensive to procure individually. Thordata bundles these into the API cost, providing access to millions of IPs without the need to manage rotation logic or ban lists.

Conclusion

Leveraging Python for web scraping is key to unlocking eCommerce market intelligence. Yet, scaling this capability requires navigating a complex landscape of anti-bot measures. Thordata’s Web Scraper API eliminates this friction, allowing you to focus on data analysis rather than maintenance.

With our specialized tools, you can dive deep into the markets that matter:

Amazon Scraper API: Retrieve global product details, seller info, and 500+ data fields in real-time.

Walmart Scraper API: Instantly scrape IDs, pricing, images, reviews, and competitive benchmarks.

Don't want to code? We also offer ready-to-use e-commerce datasets to fast-track your success.

 
Get started for free

Frequently asked questions

What is eCommerce Web Scraping?

It is the automated process of extracting public data from online stores for competitor price monitoring and market intelligence. Businesses use it to turn website content into structured data (like JSON/CSV) to analyze dynamic pricing trends.

Who provides eCommerce web scraping services?

Thordata is a premier provider of eCommerce data services. With Thordata's robust Web Scraper API and ready-to-use datasets, you can effortlessly overcome technical hurdles like CAPTCHAs and IP blocks.

What data can I extract from eCommerce websites?

You can capture essential fields such as product specifications, SKUs, real-time pricing, inventory levels, and customer reviews. This data helps analyze best-seller rankings and consumer sentiment.

About the author

Yulia is a dynamic content manager with extensive experience in social media, project management, and SEO content marketing. She is passionate about exploring new trends in technology and cybersecurity, especially in data privacy and encryption. In her free time, she enjoys relaxing with yoga and trying new dishes.

The Thordata Blog offers all of its content in its original form and solely for informational purposes. We make no guarantees regarding the information found on the Thordata Blog or on any external sites it may direct you to. Always seek legal counsel and review the specific terms of service of any website before engaging in any scraping endeavors, or obtain a scraping permit if required.