Pull structured fields from any page
Pick a template or write CSS selectors, and get JSON back.
What you get
Structured, exportable, AI-ready.
Pre-built templates
Hacker News, GitHub Trending, Reddit, Product Hunt, generic blog — pick one and run.
Custom CSS selectors
Define your own fields with CSS — get exact JSON, no LLM extraction needed.
Three speed modes
HTTP (sub-2s), browser (renders JS), stealth (defeats Cloudflare).
Repeating items
Set an item_selector once — get an array of rows with all fields filled.
Export to JSON / CSV
Copy or download the result — feed into Sheets, Notion, your DB, or LLM context.
No signup
Public during development. In the future, sign-in quotas will apply only when browser or stealth mode is used.
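The custom-selector idea can be sketched with Python's standard library. This is a toy, not the product's engine: it matches items and fields by bare class name only (the real service supports full CSS selectors), and the class and field names below are invented for illustration.

```python
from html.parser import HTMLParser

class ItemExtractor(HTMLParser):
    """Toy selector engine: item_selector and field selectors are
    plain class names; each matching item becomes one JSON record."""

    def __init__(self, item_class, fields):
        super().__init__()
        self.item_class = item_class
        self.fields = fields          # field name -> class name
        self.records = []
        self.current = None           # record currently being filled
        self.capture = None           # field currently capturing text

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if self.item_class in classes:
            # New repeating item: start a fresh record.
            self.current = {name: None for name in self.fields}
            self.records.append(self.current)
        if self.current is not None:
            for name, cls in self.fields.items():
                if cls in classes:
                    self.capture = name

    def handle_data(self, data):
        if self.current is not None and self.capture:
            self.current[self.capture] = data.strip()
            self.capture = None

html = """
<div class="story"><span class="title">Show HN: Extract</span>
  <span class="points">128</span></div>
<div class="story"><span class="title">Ask HN: Scraping?</span>
  <span class="points">42</span></div>
"""

p = ItemExtractor("story", {"title": "title", "points": "points"})
p.feed(html)
print(p.records)
# One dict per item, all fields filled.
```

The shape of the result is the point: set the item selector once and every matching element becomes a row with the same field names.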
Common questions
How is Extract different from Crawl?
Crawl gives you the whole page as Markdown — a wall of text that you (or an LLM) still have to parse. Extract returns precise JSON: you tell it which CSS selectors map to which field names, and it gives back an array of typed records. Best for product cards, list pages, search results, and API-like reads.
Do I need to know CSS selectors?
No — pick from built-in templates (Hacker News, GitHub Trending, Reddit, Product Hunt, generic blog) and just swap the URL. The advanced editor is only there if you want to customize.
What's the difference between HTTP / browser / stealth mode?
HTTP (default) is the fastest — pure GET with TLS fingerprint spoofing, ~1-2s. Browser uses Playwright when the page needs JS to render content. Stealth adds anti-bot evasion (humanized cursor moves, fingerprint randomization) — needed for sites behind Cloudflare or DataDome.
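A natural way for a client to use the three modes is escalation: try the cheap HTTP mode first and upgrade only on failure. The fetcher names below are hypothetical stand-ins (nothing here reflects the real API); only the escalation logic is the point.

```python
# Hypothetical client-side escalation: cheapest mode first,
# upgrade when a mode fails (bot wall, JS-only page, ...).
# fetch callables stand in for the real transports
# (HTTP GET, Playwright, stealth Playwright).

MODES = ["http", "browser", "stealth"]

def extract(url, fetchers):
    """fetchers: mode -> callable(url) returning HTML or raising."""
    last_error = None
    for mode in MODES:
        try:
            return {"mode": mode, "html": fetchers[mode](url)}
        except Exception as exc:
            last_error = exc          # try the next, heavier mode
    raise RuntimeError(f"all modes failed: {last_error}")

# Stub demo: HTTP gets blocked, browser succeeds.
def blocked(url):
    raise ValueError("403 bot block")

stubs = {
    "http": blocked,
    "browser": lambda url: "<html>rendered</html>",
    "stealth": lambda url: "<html>stealth</html>",
}

result = extract("https://example.com", stubs)
print(result["mode"])  # browser
```

The same pattern works in reverse for known-hard sites: start directly in stealth mode and skip the wasted HTTP attempt.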
Can I export the results?
Yes — copy JSON to clipboard, or download as JSON / CSV. Feeds straight into Sheets, Airtable, Notion, your database, or an LLM context window.
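Once you have the JSON records, the CSV step is plain standard-library work — the field names below are illustrative, not the product's schema:

```python
import csv
import io
import json

# Records as Extract might return them (illustrative field names).
records = json.loads(
    '[{"title": "Show HN: Extract", "points": 128},'
    ' {"title": "Ask HN: Scraping?", "points": 42}]'
)

# Because every record shares the same fields, the first record's
# keys double as the CSV header.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=records[0].keys())
writer.writeheader()
writer.writerows(records)
csv_text = buf.getvalue()
print(csv_text)
```

Uniform field names across rows are what make this trivial — the same property that lets the result paste cleanly into Sheets or Airtable.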