What Supacrawler supports:
- v1/scrape — fetch content (HTML, including JS-rendered pages) from a URL
- v1/crawl — follow links to crawl entire sites or sections
- v1/screenshots — capture visual renderings of pages (full page, element, etc.)
- v1/watch — monitor pages for changes over time
- v1/parse — the new endpoint: submit a URL plus a schema or desired output format (JSON, CSV, YAML, Markdown) and get back structured data, with no custom scraper logic needed (see the sketch after this list)
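
To make v1/parse concrete, here is a minimal sketch of what a request might look like. The base URL, auth header, and body fields (url, format, schema) are assumptions based on the description above rather than the documented API, so check the repo for the actual contract.

```python
import requests

# Assumed base URL and auth header; the real values may differ.
API_BASE = "https://api.supacrawler.com"  # hypothetical
API_KEY = "YOUR_API_KEY"

# Hypothetical request body: the page to parse, the output format,
# and a schema describing the fields we want extracted.
payload = {
    "url": "https://example.com/products/widget-123",
    "format": "json",
    "schema": {
        "name": "string",
        "price": "number",
        "in_stock": "boolean",
    },
}

resp = requests.post(
    f"{API_BASE}/v1/parse",
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # structured data extracted from the page
```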
Repo: https://github.com/supacrawler/supacrawler
Cloud: https://supacrawler.com
Let me know what would make this a tool you’d rely on in production! Thanks for checking this out :)