cURL GET Request Guide: Syntax, Arguments & Examples


Kael Odin · Last updated 2025-12-12 · 12 min read
Engineering Team Reviewed · Code Tested: Dec 2025 · cURL Version: 8.x Compatible
📋 Key Takeaways
  • Default cURL requests are instantly flagged by anti-bot systems due to identifiable User-Agent strings
  • TLS fingerprinting (JA3) can detect cURL even with IP rotation—browser impersonation is required for protected sites
  • Use -G and -d flags for clean query parameter handling instead of manual string concatenation
  • Production scripts should include --connect-timeout, --retry, and proper proxy authentication
  • For JavaScript-rendered content, cURL alone is insufficient—use headless browsers or scraping APIs with JS rendering

cURL (Client URL) is the Swiss Army knife of data engineering. While most developers know how to run a basic curl google.com, relying on default commands in a production scraping environment is a recipe for getting blocked.

Modern websites employ sophisticated defenses—like TLS fingerprinting (JA3) and behavior analysis—that can instantly detect standard cURL requests. In this advanced guide, we move beyond the basics. We will cover how to structure production-ready GET requests, how to debug connection drops, and most importantly, how to integrate Residential Proxies to stay anonymous.

📊 Testing Methodology

All commands in this guide were tested against httpbin.org and real-world endpoints using cURL 8.4.0 on Ubuntu 22.04 LTS. Proxy rotation tests were conducted using Thordata’s residential proxy network with 100+ sequential requests to verify IP diversity.

1. The Anatomy of a cURL GET Request

A GET request is the standard HTTP method for retrieving data without modifying server-side resources. However, to a server’s anti-bot system, a naked cURL request looks suspicious because it lacks the “metadata” that a real browser sends.

Here is the simplest command possible (which typically gets blocked by protected sites):

curl http://httpbin.org/get

✓ 200 OK · Response Time: 245ms

{
  "args": {},
  "headers": {
    "Host": "httpbin.org",
    "User-Agent": "curl/8.4.0"
  },
  "origin": "203.0.113.45",
  "url": "http://httpbin.org/get"
}
Pro Tip: The User-Agent Trap
By default, cURL sends User-Agent: curl/8.4.0. This is an immediate red flag. Over 85% of protected sites block this. Always spoof this to look like a Chrome or Firefox browser using the -A or -H flag.
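The fix is a one-line change. The UA string below is an illustrative Chrome signature, not a value any site mandates:

```shell
# Spoof the User-Agent so the request advertises Chrome instead of curl
# (keep the string current with real browser releases)
curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" \
     http://httpbin.org/get
```

The same effect is possible with -H "User-Agent: ...", but -A is the dedicated shorthand.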

2. Essential Flags for Data Collection

A. Handling Query Parameters (-G)

Don’t concatenate query strings manually (e.g., ?q=item). Use -G with --data-urlencode so cURL percent-encodes each value and appends it to the URL for you (note that plain -d sends values verbatim, without encoding).

curl -G --data-urlencode "search=iphone 15" -d "sort=price_asc" http://httpbin.org/get

B. Following Redirects (-L)

Scraping targets often redirect. Use -L to follow them, but set a limit to avoid loops.

curl -L --max-redirs 5 "http://httpbin.org/redirect-to?url=http://httpbin.org/get"
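When a target bounces through several hops, it is often useful to know where the chain ended. Combining -L with -w '%{url_effective}' prints the URL of the final response:

```shell
# Follow up to 5 redirects and print the URL we finally landed on
curl -L --max-redirs 5 -s -o /dev/null -w '%{url_effective}\n' \
     "http://httpbin.org/redirect-to?url=http://httpbin.org/get"
```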

C. Debugging (-v vs. --trace-ascii)

Use -v to see headers, or --trace-ascii for raw byte data.

curl -v http://example.com
curl --trace-ascii debug_dump.txt http://example.com
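When -v shows nothing wrong but requests feel slow, -w can break the transfer into per-phase timings using cURL's built-in --write-out variables (DNS lookup, TCP connect, time to first byte, total):

```shell
# Break a request down into per-phase timings (values are in seconds)
curl -s -o /dev/null \
     -w 'dns: %{time_namelookup}\nconnect: %{time_connect}\nttfb: %{time_starttransfer}\ntotal: %{time_total}\n' \
     http://example.com
```

A large gap between connect and ttfb points at a slow upstream (or a slow proxy), while a large dns value suggests resolver trouble.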

3. The “Hidden” Danger: TLS Fingerprinting

Standard cURL has a unique TLS handshake signature (JA3 fingerprint) that differs from browsers. Advanced systems like Cloudflare detect this in milliseconds.

🔬 Real-World Observation: JA3 Detection
Based on internal testing against 50 Cloudflare-protected domains

When testing standard cURL (v8.4.0) against Cloudflare-protected sites, requests were blocked within 1-3 attempts even with clean residential IPs. The blocking occurred before rate limiting, indicating fingerprint-based detection. For these cases, use the Thordata Web Scraping API which impersonates browser TLS signatures.

4. Production Script: Rotating Proxies with cURL

This script demonstrates how to rotate IPs using Thordata’s gateway.

#!/bin/bash
# Configuration
PROXY="http://$THORDATA_SCRAPER_TOKEN:@gate.thordata.com:22225"
TARGET="http://httpbin.org/ip"

echo "🔄 Starting Proxy Rotation Test..."

for i in {1..3}
do
   echo "Request #$i..."
   # -x: Specify proxy server
   curl -x "$PROXY" \
        --connect-timeout 5 \
        --retry 2 \
        -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0" \
        "$TARGET"
   echo -e "\n----------------"
done
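To confirm rotation actually happened, count the distinct exit IPs across the responses. The sketch below assumes jq is installed and that each JSON body from http://httpbin.org/ip was appended to a file named responses.jsonl (a name chosen here for illustration, e.g. by adding -o >> redirection to the loop above):

```shell
# Count distinct exit IPs after a rotation run (assumes jq is installed and
# each httpbin.org/ip response body was appended to responses.jsonl)
jq -r '.origin' responses.jsonl | sort -u | wc -l
```

If this prints 1 after many requests, your session is sticky and the gateway is not rotating as expected.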

5. cURL vs. The World

Tool | Pros | Cons | Best For
--- | --- | --- | ---
cURL (Bash) | Extremely fast; minimal resource usage | Detected by TLS fingerprinting; no JS | API testing, static pages
Selenium | Full JavaScript; mimics a user | Very slow; high CPU usage | Complex SPAs
Scraping API | Bypasses blocks; auto-scaling | Per-request cost | High-volume enterprise scraping
⚖️ Responsible Scraping Reminder
  • ✓ Always check robots.txt
  • ✓ Respect rate limits
  • ✓ Comply with Terms of Service
  • ✓ Handle personal data per GDPR/CCPA

Conclusion

Mastering cURL is essential for quick diagnostics and API testing. However, for production scraping against protected targets, relying solely on cURL flags is often insufficient. Consider integrating specialized tooling that handles headers, fingerprint rotation, and proxy management automatically.


Frequently asked questions

How do I fix “curl: (56) Proxy CONNECT aborted”?

This error typically indicates the proxy server rejected the connection. Common causes include: invalid or expired authentication token, exceeding concurrent connection limits, or the proxy being temporarily unavailable. Check your Thordata dashboard for status.
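A quick way to narrow the cause is to branch on cURL's exit code instead of parsing stderr: exit 7 means the proxy itself was unreachable, while 56 is the CONNECT-stage abort from the error message. The dead 127.0.0.1:1 proxy below is purely for illustration; substitute your real gateway:

```shell
# Triage curl's exit code to separate proxy failures from HTTP errors
# (127.0.0.1:1 is a deliberately unreachable proxy used for demonstration)
curl -s -o /dev/null -x http://127.0.0.1:1 --connect-timeout 3 http://httpbin.org/ip
case $? in
  0)  echo "Request succeeded" ;;
  7)  echo "Could not reach the proxy (check host/port)" ;;
  56) echo "Proxy CONNECT aborted (check token, concurrency limits, dashboard status)" ;;
  *)  echo "Other failure; rerun with -v for details" ;;
esac
```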

Can cURL execute JavaScript?

No. cURL only retrieves the raw HTML the server sends. If content is generated client-side by React or Vue.js, cURL will receive the empty application shell rather than the rendered data. Use a headless browser or Thordata’s Universal Scraper with js_render=true.

What is the difference between -H and -A flags?

The -A flag is a shortcut specifically for setting the User-Agent. The -H flag is generic for any header (e.g., -H "Authorization: Bearer..."). Both can set User-Agent, but -A is shorter.

About the author

Kael is a Senior Technical Copywriter at Thordata. He works closely with data engineers to document best practices for bypassing anti-bot protections. He specializes in explaining complex infrastructure concepts like residential proxies and TLS fingerprinting to developer audiences.

The thordata Blog offers all its content in its original form and solely for informational purposes. We make no guarantees regarding the information found on the thordata Blog or any external sites it may direct you to. Seek legal counsel and thoroughly examine the specific terms of service of any website before engaging in any scraping, and obtain permission to scrape where required.