chadwebscraper•3h ago
1. Paste a URL in, describe what you want
2. Define an interval to monitor
3. Get real time webhooks of any changes in JSON
Lots of customers are using this across different domains to get consistent, repeatable JSON out of sites and monitor changes.
Supports API + HTML extraction, never write a scraper again!
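For concreteness, here is a minimal sketch of what that three-step flow could look like from the client side. The base URL, endpoint path, field names, and payload shape are placeholder assumptions for illustration, not the service's actual API:

```python
# Hypothetical sketch of the "paste a URL, describe what you want, monitor it" flow.
# Base URL, endpoint, and field names are placeholder assumptions, not the real API.
import requests

API = "https://scraper.example/v1"  # hypothetical base URL

# 1. Paste a URL in and describe what you want,
# 2. define an interval to monitor,
# 3. point it at a webhook that receives JSON whenever the extraction changes.
resp = requests.post(
    f"{API}/monitors",
    json={
        "url": "https://example.com/pricing",
        "extract": "plan name, monthly price, feature list",
        "interval": "1h",
        "webhook_url": "https://my-app.example/hooks/pricing-changed",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"monitor_id": "...", "status": "indexing"}
```

On each interval, the webhook would then presumably receive the extracted JSON (or a diff of it) whenever the monitored page changes.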
codingdave•1h ago
Writing a scraper isn't the hard part; that is actually fairly trivial at this point in time. Pulling content into JSON from your scrape is also fairly trivial - libraries exist that handle it well.
The harder parts are things like playing nicely so your bot doesn't get banned by sysadmins, detecting changes downstream from your URL, handling dynamically loading content, and keeping that JSON structure consistent even as your sites change their content, their designs, etc. Also, scalability. One customer I'm talking to could use a product like this, but they have 100K URLs to track, and that is more than I currently want to deal with.
I absolutely can see the use case for consistent change data from a URL, I'm just not seeing enough content in your marketing to know whether you really have something here, or if you vibe coded a scraper and are throwing it against the wall to see if it sticks.
chadwebscraper•1h ago
I appreciate the response! I also agree - happy to add some clarity to this stuff.
Bot protection - this is handled in a few ways: the basic form bypasses most bot protections, and that’s what you can use on the site today. For tougher sites, it solves the bot protections (think DataDome, Akamai, Incapsula).
The consistency part is ongoing, but it’s possible to check the diffs and content extractions and notice if something has changed and “reindex” the site.
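A rough sketch of that kind of diff check, assuming it compares the structure of the last known-good extraction against the latest one; the helper names here are hypothetical, not from the service:

```python
# Hypothetical sketch: detect when the *shape* of the extracted JSON drifts
# (e.g. after a site redesign), as opposed to ordinary value changes,
# and use that as the signal to "reindex" the site.
def schema_of(value):
    """Reduce a JSON value to its structure: keys and types, ignoring values."""
    if isinstance(value, dict):
        return {k: schema_of(v) for k, v in sorted(value.items())}
    if isinstance(value, list):
        return [schema_of(value[0])] if value else []
    return type(value).__name__

def needs_reindex(previous, latest):
    """True when the extracted structure changed, not just the data inside it."""
    return schema_of(previous) != schema_of(latest)

prev = {"plan": "Pro", "price": 29, "features": ["api", "webhooks"]}
curr = {"plan": "Pro", "pricing": {"monthly": 29}}  # field renamed/nested -> drift
print(needs_reindex(prev, curr))  # True -> re-derive the extraction for this site
```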
100k URLs is a lot! It could support that, but the initial indexing would be heavy. It’s fairly resource efficient (no browsers). For scale, it’s doing about 40k scrapes a day right now.
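(Rough back-of-envelope: at ~40k scrapes a day, a single full pass over 100k URLs would take about 100,000 ÷ 40,000 ≈ 2.5 days at current throughput, which is why the initial indexing would be the heavy part.)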
Appreciate the comments, happy to dive deeper into the implementation, and I agree with everything you’ve said. Still iterating and trying to improve it.
tmaly•54m ago
this must wreck their Google Analytics stats
chadwebscraper•53m ago
lol it probably does unless their filtering is great
arm32•1h ago
Residential proxies are sketchy at best. How can you guarantee that your service's infrastructure isn't hinging on an illicit botnet?
chadwebscraper•56m ago
This is a good callout - I’ve tried my best thus far to limit the use of proxies unless absolutely necessary and then focus on reputable providers (even though these are a bit more pricey).
Definitely going to give this more thought though, thank you for the comment
dewey•3m ago
There's a lot of variety in the residential proxy market. Some are sourced from bandwidth sharing SDKs for games with user consent, some are "mislabeled" IPs from ISPs that offer that as a product and then there's a long tail of "hacked" devices. Labeling them generally as sketchy seems wrong.
arjunchint•51m ago
So what happens when the website layout updates, does the monitoring job fail silently?