frontpage.

Azure Outage

286•kierenj•1h ago•133 comments

Keep Android Open

http://keepandroidopen.org/
1937•LorenDB•13h ago•585 comments

Tailscale Peer Relays

https://tailscale.com/blog/peer-relays-beta
41•seemaze•57m ago•12 comments

Cursor Composer: Building a fast frontier model with RL

https://cursor.com/blog/composer
76•leerob•1h ago•39 comments

I made a 10¢ MCU Talk

https://www.atomic14.com/2025/10/29/CH32V003-talking
96•iamflimflam1•3h ago•31 comments

Floss Before Brushing

https://alearningaday.blog/2025/10/29/floss-before-brushing/
25•imasl42•57m ago•12 comments

Does brand advertising work? Upwave (YC S12) is hiring engineers to answer that

https://www.upwave.com/job/8228849002/
1•ckelly•18m ago

Beyond RaspberryPi: What are all the other SoC vendors up to *summarised*

https://sbcwiki.com/news/articles/state-of-embedded-q4-25/
57•HeyMeco•4d ago•25 comments

Collins Aerospace: Sending text messages to the cockpit with test:test

https://www.ccc.de/en/disclosure/collins-aerospace-mit-test-test-textnachrichten-bis-ins-cockpit-...
44•hacka22•2h ago•16 comments

Azure major outage: Portal, Front Door and global regions down

63•sech8420•1h ago•34 comments

Eye prosthesis is the first to restore sight lost to macular degeneration

https://med.stanford.edu/news/all-news/2025/10/eye-prosthesis.html
106•gmays•1w ago•10 comments

From VS Code to Helix

https://ergaster.org/posts/2025/10/29-vscode-to-helix/
150•todsacerdoti•3h ago•82 comments

Hosting SQLite Databases on GitHub Pages (2021)

https://phiresky.github.io/blog/2021/hosting-sqlite-databases-on-github-pages/
17•WA9ACE•1h ago•6 comments

Recreating a Homebrew Game System from 1987

https://alex-j-lowry.github.io/z80tvg.html
43•voxadam•3h ago•1 comment

Who needs Graphviz when you can build it yourself?

https://spidermonkey.dev/blog/2025/10/28/iongraph-web.html
374•pdubroy•12h ago•70 comments

AWS to bare metal two years later: Answering your questions about leaving AWS

https://oneuptime.com/blog/post/2025-10-29-aws-to-bare-metal-two-years-later/view
415•ndhandala•6h ago•313 comments

Show HN: HUD-like live annotation and sketching app for macOS

https://draw.wrobele.com/
26•tomaszsobota•2h ago•8 comments

ChatGPT's Atlas: The Browser That's Anti-Web

https://www.anildash.com//2025/10/22/atlas-anti-web-browser/
648•AndrewDucker•4d ago•269 comments

Tips for stroke-surviving software engineers

https://blog.j11y.io/2025-10-29_stroke_tips_for_engineers/
403•padolsey•13h ago•145 comments

Tell HN: Twilio support replies with hallucinated features

52•haute_cuisine•1h ago•8 comments

uBlock Origin Lite Apple App Store

https://apps.apple.com/in/app/ublock-origin-lite/id6745342698
329•mumber_typhoon•13h ago•159 comments

Kafka is Fast – I'll use Postgres

https://topicpartition.io/blog/postgres-pubsub-queue-benchmarks
168•enether•3h ago•165 comments

Show HN: Learn German with Games

https://www.learngermanwithgames.com/
59•predictand•5h ago•30 comments

The end of the rip-off economy: consumers use LLMs against information asymmetry

https://www.economist.com/finance-and-economics/2025/10/27/the-end-of-the-rip-off-economy
118•scythe•1h ago•101 comments

AirTips – Alternative to Bento.me/Linktree

https://a.coffee/
10•Airyisland•1h ago•7 comments

Oracle has adopted BOOLEAN in 23ai and PostgreSQL had it forever

https://hexacluster.ai/blog/postgresql/oracles-adoption-of-native-boolean-data-type-vs-postgresql/
13•avi_vallarapu•2h ago•9 comments

SpiderMonkey Garbage Collector

https://firefox-source-docs.mozilla.org/js/gc.html
67•sebg•8h ago•3 comments

AOL to be sold to Bending Spoons for roughly $1.5B

https://www.axios.com/2025/10/29/aol-bending-spoons-deal
22•jmsflknr•50m ago•4 comments

Berkeley Out-of-Order RISC-V Processor (Boom) (2020)

https://docs.boom-core.org/en/latest/sections/intro-overview/boom.html
28•Bogdanp•4h ago•9 comments

Minecraft removing obfuscation in Java Edition

https://www.minecraft.net/en-us/article/removing-obfuscation-in-java-edition
9•SteveHawk27•1h ago•1 comment

Aggressive bots ruined my weekend

https://herman.bearblog.dev/agressive-bots/
159•shaunpud•6h ago

Comments

asplake•4h ago
> What's wild is that these scrapers rotate through thousands of IP addresses during their scrapes, which leads me to suspect that the requests are being tunnelled through apps on mobile devices, since the ASNs tend to be cellular networks. I'm still speculating here, but I think app developers have found another way to monetise their apps by offering them for free, and selling tunnel access to scrapers.

Wild indeed, and potentially horrific for the owners of the affected devices also! Any corroboration for that out there?

VladVladikoff•4h ago
This is actually a commonly known fact. There are many services now that sell “residential proxies”, which are always mobile IP addresses. Since mobile IPs use CGNAT, it’s also not great to block the IP, because it can be like geofencing an entire city or town. Some examples are: oxylabs, iproyal, brightdata, etc.

Recently I filed an abuse complaint directly with brightdata because I was getting hit with 1000s of requests from their bots. The funny part is they didn’t even stop after acknowledging the complaint.

dataviz1000•4h ago
They provide an SDK for mobile developers. Here is a video of how it works. [0]

[0] https://www.youtube.com/watch?v=1a9HLrwvUO4&t=15s

VladVladikoff•4h ago
WOW that video! Ain’t no way anyone has EVER read those terms. This feels so insidious that it really should be illegal. Wonder if this exists in the EU or if they have shut it down already?
arethuza•1h ago
That video has the app asking the user to confirm the use of their device to run a proxy within the app - but is there any hard requirement for this, or could apps use this SDK and silently run as a proxy?
alamortsubite•1h ago
My take is it's mostly irrelevant, but read the lobsters post mentioned elsewhere.
alamortsubite•1h ago
Yes, and it doesn't matter if they do read the terms; to the average user they sound totally innocuous, especially placed next to a big shiny "GET 500 FREE COINS" button.
arethuza•2h ago
I suspect most people, even when told exactly what the app using that SDK would be doing, wouldn't actually see the potential problems...
kijin•2h ago
Until one day, they get swatted for accessing child porn.

Actually, that might be one way to draw attention to the problem. Sign up to some of these shady "residential proxy" services, and access all sorts of nasty stuff through their IPs until your favorite three-letter agency takes notice.

TYPE_FASTER•2h ago
Also see https://www.youtube.com/watch?v=AGaiVApKfmc - "Avoid restrictions and blocks using the fastest and most stable proxy network"...they're pretty upfront with this, aren't they?

Oh, and they will sell you the datasets they've already scraped using mobile devices: https://brightdata.com/lp/web-data/datasets

This actually explains a phishing attack where I received a text from somebody purporting to be a co-worker asking for an Apple gift card. The name was indeed that of an employee from a different part of the large company I worked for at the time, and LinkedIn was the only connection between us I could figure out that was at least somewhat publicly available.

This should probably be required in every CS curriculum: https://ocw.mit.edu/courses/res-tll-008-social-and-ethical-r...

nerdponx•1h ago
It should be illegal, but this stuff is propping up the appearance of a healthy economy so nobody will touch it.
cuu508•2h ago
IMO Google Play should check apps for presence of this SDK and other similar SDKs, and, upon detection, treat these apps as malware.
seemaze•1h ago
That's sleazy. It's like slipping drugs into a kid's lunchbox and letting them smuggle it across the border.
myaccountonhn•3h ago
One such example is brightdata; someone on lobsters did a writeup:

https://lobste.rs/s/pmfuza/bro_ban_me_at_ip_level_if_you_don...

corbet•3h ago
The "compliance officer" at Bright Data, instead, offered me a special deal to protect my site from their bots ... they run a protection racket along with all the rest of their nastiness.
nurettin•3h ago
I worked for an Amazon scraping business and they used Luminati (now Brightdata) for a few months, until I figured out a way to avoid the ban hammer and got rid of their proxy.

They indeed provided "high quality" residential and cellular IPs and "normal quality" data center IPs. You had to keep cycling the IP pool every 2-3 days, which cost extra. It felt super shady. It isn't their bots: they lease connections to whoever is paying, and they don't care what people do in there.

yomismoaqui•1h ago
> ... until I figured out a way to avoid the ban hammer ...

You had my curiosity ... but now you have my attention.

wat10000•3h ago
Lately Reddit has been showing me posts in subreddits for some of these services. They pitch "passive income" by sharing your connection, an easy way to make a few bucks by renting out your unused capacity. What happens is that you become an endpoint for their shady VPNs. These subreddits are full of people complaining that they're getting hit by abuse complaints from their ISPs. Naturally, these services claim to forbid any nefarious activity, and naturally they don't actually care.
nemomarx•2h ago
Salad, right? What a strange business
dylan604•2h ago
Why is it strange? Of course it exists.
curious_curios•4h ago
If you have a moderately successful app, SDK, or browser extension, you will get hit up to add things like this to it. I think most free VPN services lease out your bandwidth to make their money as well.
antoniojtorres•1h ago
This is how so many companies sell from an opaque inventory of “millions” of residential proxies.
kaoD•4h ago
There's crap like https://hola.org/

https://hola.org/legal/sdk

https://hola.org/legal/sla

> How is it free?
>
> In return for free usage of Hola Free VPN Proxy, Hola Fake GPS location and Hola Video Accelerator, you may be a peer on the Bright Data network. By doing so you agree to have read and accepted the terms of service of the Bright Data SDK SLA (https://bright-sdk.com/eula). You may opt out by becoming a Premium user.

This "VPN" is what powers these residential proxies: https://brightdata.com/

I'm sure there are many other companies like this.

piggg•2h ago
There's also a ton of companies selling "make money off your unused internet" apps, which are all over tiktok and basically turn your device into a residential proxy / sketchy VPN egress node.

On top of that - lots of free tv/movie streaming stuff that also makes your device a proxy/egress node. Sometimes you find it on tv/movie streaming devices sold online, where it's already loaded when it arrives.

Zanfa•3h ago
SIM farms are another possible explanation. The FBI busted one with hundreds of thousands of SIMs just a few weeks ago.
Cthulhu_•1h ago
Wouldn't the network providers be able to detect those? I'm fairly sure they don't like their networks being abused either... or they don't really care because they get paid per connection.

edit: Actually this is what I'm getting increasingly angry about: providers and platforms not doing anything against bots or low-value stuff (think Amazon dropshippers too), because any usage of their service, bots or otherwise, is metrics going up, and metrics going brrt means profit and shareholder interest.

ac29•50m ago
Its very possible they did detect it and that's why law enforcement got involved.

But yes, they also might not care if they are getting paid. If the SIMs are only being used for voice/text as I suspect, it might have very minimal load on the network.

immibis•3h ago
You can get paid a few dollars (not many) to let them use your connection. I would like Cloudflare's business model (blocking datacenter IPs) to be worthless, so I do it. Haven't tried a withdrawal yet so it could well be a scam. This is not illegal (unless it's a scam).
VBprogrammer•2h ago
If someone hasn't written a blog post titled "Should we be worried about Cloudflare?" yet, I think it would be a good subject to explore. I find the idea that they could decide one day to ban you from all of their network pretty worrying. And if they did, how much fingerprinting are they doing, and would the ban extend far beyond just a random IP address?
pjc50•1h ago
This is one of those "ACAB" things where you might reasonably dislike Cloudflare but a world without them or an equivalent will evolve worse solutions to the same problems, which you will like even less.
nix0n•1h ago
> This is not illegal

Depends on what they're doing from your connection.

TimorousBestie•4h ago
I think he should consider getting out of the indie blog hosting business. It’s only going to get worse as the internet continues to decay and he can’t be making all that much off the service.
sudosays•4h ago
Counterpoint: I think he should stay and fight the good fight.

Indie blog businesses are great for the health of the human internet, and I don't think surrendering preemptively will help things get better.

add-sub-mul-div•3h ago
That's an easy thing to say if it's someone else's time that's being wasted and not your own. But there may not be a path back to the internet under which this project was conceived.

It could be like staying on Twitter and Reddit after their respective declines. You're only suffering an opportunity cost for your own time and preventing the internet from evolving better alternatives.

flaviuspopan•2h ago
His persistent efforts are the reason I pay for Bear Blog. I think he should fight for the chance to come out on the other side of whatever future we’re heading towards.
TimorousBestie•2h ago
I pay for Bear Blog, too. But this year has been problem after problem for its sole proprietor, and I don’t think it’s going to get better.
vpShane•2h ago
No way. People deserve expression and a place that's THEIRS where they can foster a community. Much is learned. Playing battle bots is fun at the sysadmin level (for me), maybe not so much for others. But to have a place where people express themselves, a place that's THEIRS outside of walled gardens such as social media, AND to protect it from the bots?

That's the battle, and expression, people, their interests, and their communities are worth fighting for. _ESPECIALLY_ in this day and age where botnets/scrapers are using things such as Infatica to mask themselves as residential IP addresses, and mimicking human behaviors to better avoid bot detection.

There's a war on authenticity and on people's authentic works, and the reverse: determining if a user is authentic nowadays.

pingoo101010•4h ago
You may want to take a look at Pingoo (https://github.com/pingooio/pingoo), a reverse proxy with automatic TLS that can also block bots with advanced rules that go beyond simple IP blocking.
cupofjoakim•4h ago
We feel this at work too. We run a book streaming platform with all books, booklists, authors, narrators and publishers available as standalone web pages for SEO, in the multiple millions. The last 6 months have turned into a hellscape - for a few reasons:

1. It's become commonplace to not respect rate limits

2. Bots no longer identify themselves by UA

3. Bots use VPNs or similar tech to bypass IP rate limiting

4. Bots use tools like NobleTLS or JA3Cloak to get around JA3 rate limiting

5. Some legitimate LLM companies seem to also follow the above to gather training data. We want them to know about our company, so we don't necessarily want to block them

I'm close to giving up on this front tbh. There are no longer safe methods of identifying malignant traffic at scale, and with the variations we have available we can't statically generate these pages. Even with a CDN cache (shoutout fastly) our catalog is simply too broad to fully saturate the cache while still allowing pages to be updated in a timely manner.

I guess the solution is to just scale up the origin servers... /shrug

In all seriousness, I'd love it if we could somehow tell the bots about more efficient ways of fetching the data. Use our open API for fetching book information instead of causing all that overhead by going to marketing pages, please.

FeepingCreature•3h ago
In principle, it should be possible to identify malign IPs at scale by using a central service and reporting IPs probabilistically. That is, if you report every thousandth page hit with a simple UDP packet, the central tracker gets very low load and still enough data to publish a bloom filter of abusive IPs; say, a million bits gives you a pretty low false-positive rate. (If it's only ~10k malign IPs, tbh you can just keep an LRU counter and enumerate all of them.) A billion hits per hour across the tracked sites would still only correspond to ~50KB/s inflow on the tracker service. Any individual participating site doesn't necessarily get many hits per source IP, but aggregating across a few dozen should highlight the bad actors. Then the clients just pull the bloom filter once an hour (80KB download) and drop requests that match.

Any halfway modern LLM could probably code the backend for this in a day or two and it'd run on a RasPi. Some org just has to take charge and provide the infra and advertisement.
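A minimal sketch of the tracker side in Python, assuming a million-bit filter and a made-up report threshold (this isn't any existing service, just the shape of the idea):

    import hashlib

    class BloomFilter:
        """Fixed-size bloom filter over IP address strings (1M bits by default)."""

        def __init__(self, num_bits=1_000_000, num_hashes=5):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8 + 1)

        def _positions(self, ip):
            # Derive k bit positions from salted hashes of the IP.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{ip}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.num_bits

        def add(self, ip):
            for pos in self._positions(ip):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, ip):
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(ip))

    class Tracker:
        """Collects sampled hit reports and publishes a filter of heavy hitters."""

        def __init__(self, threshold=50):
            self.counts = {}        # ip -> sampled reports in the current window
            self.threshold = threshold

        def report(self, ip):
            # Called for each incoming UDP report (one per ~1000 page hits).
            self.counts[ip] = self.counts.get(ip, 0) + 1

        def publish(self):
            # Called hourly: participating sites download this filter
            # and drop requests from matching IPs before doing any real work.
            abusive = BloomFilter()
            for ip, n in self.counts.items():
                if n >= self.threshold:
                    abusive.add(ip)
            self.counts.clear()
            return abusive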

01HNNWZ0MV43FF•3h ago
The hard part is the trust, not the technology. Everyone has to trust that everyone else is not putting bogus data into that database to hurt someone else.

It's mathematically similar to the "Shinigami Eyes" browser plug-in and database, which has been found to have unreliable data

FeepingCreature•3h ago
Personally talk to every individual participating company. Provide an endpoint that hands out a per-client hash that rotates every hour, stick it in the UDP packet, whitelist query IPs. If somebody reports spam, no problem, just clear the hash and rebuild, it's not like historic data is important here. You can even (one more hour of vibecoding) track convergence by checking how many bits of reported IPs match the existing (decaying) hash; this lets you spot outlier reporters. If somebody always reports a ton of IPs that nobody else is, they're probably a bad actor. Hell, put a ten dollar monthly fee on it, that'll already exclude 90% of trolls.

I'm pretty pro AI, but these incompetent assholes ruin it for everybody.

pixl97•2h ago
> malign IPs at scale

As talked about elsewhere in this thread, residential devices being used as proxies behind CGNAT ruins this. Not getting rid of IPv4 years ago is finally coming to bite us in the ass in a big way.

codersfocus•1h ago
IPv6 wouldn't solve this, since IPs would be too cheap to meter.
Neil44•3h ago
Same, I have a few hundred Wordpress sites and bot activity has ramped up a lot over the last year or two. AI scrapers can be quite aggressive and often generate a ton of requests; where, for example, a site has a lot of parameters, the bot will go nuts and seemingly iterate through all possible parameters. Sometimes I dig in and try to think of new rules to block the bulk, but I am also wary of AI replacing Google and of not being in AI's databases.
karlshea•2h ago
A client of mine had this exact problem with faceted search, and putting the site behind Fastly didn’t help since you can’t cache millions of combinations. And they don’t have the budget for more than one origin server. The solution was: if you’ve got “bot” in your UA, Fastly’s VCL returns a 403 for any facet query param. Problem solved. And it’s not going to break anything; all of the information is still accessible to all of the indexers on the actual product pages.

The facet links already had “nofollow” on them; now I’m just enforcing it.
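Not the actual Fastly VCL, but the rule is roughly this, sketched in Python (the facet parameter names are hypothetical):

    from urllib.parse import parse_qs

    # Hypothetical facet parameter names; a real site's facets will differ.
    FACET_PARAMS = {"color", "size", "brand", "price"}

    def block_facet_crawl(user_agent: str, query_string: str) -> bool:
        """True when a bot-identified UA requests any faceted-search combination."""
        if "bot" not in user_agent.lower():
            return False
        return any(param in FACET_PARAMS for param in parse_qs(query_string))

    # block_facet_crawl("SomethingBot/2.1", "color=red&size=xl")  -> True  (serve a 403)
    # block_facet_crawl("SomethingBot/2.1", "")                   -> False (plain product page)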

immibis•29s ago
> Sometimes I dig in and try to think of new rules to block the bulk, but I am also wary of AI replacing Google and not being in AI's databases.

Fake the data! Tell them Neil44 is a three-time Nobel prize winner, etc.

jrochkind1•3h ago
I hate relying on a proprietary single-source product from a company I don't particularly trust, but (free) Cloudflare Turnstile works for me, only thing I've found that does.

I only protect certain 'dangerous/expensive' (accidentally honeypot-like) paths in my app, and can leave the stuff I actually want crawlers to get, and in my app that's sufficient.

It's a tension because, yeah, I want crawlers to get much of my stuff for SEO (and I don't want to give Google a monopoly on it either; I want well-behaved crawlers I've never heard of to have access to it too, but not at the cost of resources I can't afford).

2OEH8eoCRo0•3h ago
Why don't we sue the abusive scrapers? Scraping is legal but DDoSing is not!
cupofjoakim•2h ago
Not sure if that's satire or not, but how would you even identify the party to sue? What do you do if they're based in a country where you can't sue them over relatively trivial matters like this?
2OEH8eoCRo0•2h ago
Not satire but it's a huge problem with the internet. Everyone washes their hands and people can harm you without liability.
kelvinjps10•3h ago
Maybe moving the blog service to completely static pages and letting Cloudflare Pages handle it could help?
reustle•2h ago
Cloudflare is not a solution. Only leading to a further centralized internet.
uvaursi•2h ago
I think the OP is suggesting using a caching layer at HTTP output, and suggesting CF as an option (a quick/cheap one).

If you have an axe to grind with CF you can take it up with them, but it’s an option. Feel free to suggest others.

ItsBob•3h ago
I had a website earlier this year running on Hetzner. It was purely experimenting with some ASP.NET stuff but when looking at the logs, I noticed a shit-load of attempts at various WordPress-related endpoints.

I then read something about a guy who deliberately put a honeypot in his robots.txt file. It was pointing to a completely bogus endpoint. Now, the theory was, humans won't read robots.txt so there's no danger, but bots and the like will often read robots.txt (at least to figure out what you have... they'll ignore the "deny" for the most part!) and if they try and go to that fake endpoint you can be 100% sure (well, as close as possible) that it's not a human and you can ban them.

So I tried that.

I auto-generated a robots.txt file on the fly. It was cached for 60 seconds or so, as I didn't want to expend too many resources on it. When you asked for it, you either got the cached one or I created a new one. The CPU usage was negligible.

However, I changed the "deny" endpoint each time I built the file, in case the baddies cached it; it still went to the same ASP.NET controller method, though. Anyone hitting it was sent a 10GB zip bomb and their IP was automatically added to the FW block list.

It was quite simple: anyone that hit that endpoint MUST be dodgy... I believe I even had comments for the humans that stumbled across it letting them know that if they went to this endpoint in their browser it was an automatic addition to the firewall blocklist.

Anyway... at first I caught a shit load of bad guys. There were thousands, and then the numbers dropped and dropped to only tens per day.

Anyway, this is a single data point but for me, it worked... I have no regrets about the zip bomb either :)

I have another site that I'm working on, so I may evolve it a bit: you get banned for a short time, and if you come back to the dodgy endpoint then I know you're a bot, so into the abyss with you!

It's not perfect but it worked for me anyway.
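The original was ASP.NET; below is a rough Python sketch of the same honeypot idea (the path prefix, cache interval, and ban mechanism are made up for illustration):

    import secrets
    import time

    BANNED_IPS = set()   # in the original setup these went to the firewall block list
    _robots = {"trap": None, "body": "", "expires": 0.0}

    def robots_txt() -> str:
        """Rebuild robots.txt about once a minute with a fresh honeypot path."""
        now = time.time()
        if now >= _robots["expires"]:
            _robots["trap"] = f"/trap-{secrets.token_hex(8)}"
            _robots["body"] = (
                "# Fetching the disallowed path below gets your IP blocked automatically.\n"
                "User-agent: *\n"
                f"Disallow: {_robots['trap']}\n"
            )
            _robots["expires"] = now + 60
        return _robots["body"]

    def handle(path: str, client_ip: str):
        if client_ip in BANNED_IPS:
            return 403, "blocked"
        if path == "/robots.txt":
            return 200, robots_txt()
        if path.startswith("/trap-"):
            # Only a crawler that read robots.txt and ignored the Disallow lands here
            # (including ones returning to an old, expired trap path).
            BANNED_IPS.add(client_ip)   # the original also served a 10GB zip bomb
            return 403, "blocked"
        return 200, "normal page"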

immibis•3h ago
It's interesting to study, right? This is the Internet equivalent of background radiation. Harmless in most cases. Exploit scanners aren't new to the LLM age and shouldn't overload your server - unless you're vulnerable to the exploit.

Fun fact: Some people learn about new exploits by watching their incoming requests.

ItsBob•2h ago
> It's interesting to study, right?

Definitely! I wasn't experiencing any issues, hell it wasn't even for public consumption at that time so no great loss to me but I found a few things fascinating (and somewhat stupid!) about it:

1. The sheer number of automated requests to scrape my content

2. That a massive number of the bots openly had "bot" or some derivative in the user agent and they were accessing a page I'd explicitly denied! :D

3. That an equally large number were faking their user agents to look like regular users and still hitting a page that a regular user couldn't possibly ever hit!

Something I did notice towards the end, and didn't pursue (I should log it better next time for analysis!), was that although the endpoint was dynamically generated and only existed in robots.txt for a short time, there were bots I caught later on, long after that auto-generated page was created (and after the IP was banned), that still went for that same page: clearly the same entities!

My spidey senses are tingling. Next time, I'm going to log the shit out of these requests and publish as much as I can for others to analyse and dissect... might be interesting.

bob1029•2h ago
> It's not perfect but it worked for me anyway.

This is approximately my approach minus the zip bomb. I use a piece of middleware in my AspNetCore pipeline that tracks logical resource consumption rates per IPv4. If a client trips any of the limits, their IP goes into a HashSet for a period of time. If a client has an IP in this set, they get a simple UTF8 constant string in the response body "You have exceeded resource limits, please try again later".

The other aspect of my strategy is to use AspNetCore (Kestrel). It is so fast that you can mostly ignore the noise as long as things are configured properly and you make reasonable attempts to address the edge case of an asshole trying to break your particular system on purpose. A HashSet<int> as the very first piece of middleware rejecting bad clients is exceedingly efficient. We aren't even into URL routing at this point.

I have found that attempting to catalog and record all of the naughty behavior my web server sees is the highest DDoS risk so far. Logging lines like "banned client rejected" every time they try to come in the door is shooting yourself in the foot with regard to disk wear, IO utilization, and so on. There is no reason you should be logging all of that background radiation to disk or even thinking about it. If your web server can't handle direct exposure to the hard vacuum of space, it can be placed behind a proxy/CDN (i.e., another web server that doesn't suck).
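The commenter's version is C# AspNetCore middleware; here's a rough Python sketch of the same shape, with made-up limits (the key property is that a banned client costs one dictionary lookup and a constant response, with nothing logged to disk):

    import time

    class RateLimitGate:
        """First-hop check: count requests per IP and temporarily ban offenders."""

        def __init__(self, max_per_minute=300, ban_seconds=3600):
            self.max_per_minute = max_per_minute
            self.ban_seconds = ban_seconds
            self.windows = {}   # ip -> (window_start, request_count)
            self.banned = {}    # ip -> ban_expiry

        def allow(self, ip: str) -> bool:
            now = time.time()
            expiry = self.banned.get(ip)
            if expiry is not None:
                if now < expiry:
                    return False            # cheap rejection, before routing or logging
                del self.banned[ip]
            start, count = self.windows.get(ip, (now, 0))
            if now - start > 60:
                start, count = now, 0       # new one-minute window
            count += 1
            self.windows[ip] = (start, count)
            if count > self.max_per_minute:
                self.banned[ip] = now + self.ban_seconds
                return False
            return True

    # gate = RateLimitGate()
    # if not gate.allow(client_ip):
    #     respond(429, "You have exceeded resource limits, please try again later")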

ItsBob•1h ago
> If a client has an IP in this set, they get a simple UTF8 constant string in the response body "You have exceeded resource limits, please try again later".

Would a simple 429 not do the same thing? You could log repeated 429's and banish accordingly.

marcosdumay•1h ago
> they get a simple UTF8 constant string in the response body "You have exceeded resource limits, please try again later"

I imagine they get a 429 response code, but if they don't, you may want to change that.

I do think you are on the right track in that it's important to let those requests get the correct error, so that if innocent people are affected, they at least get to see there's something wrong.

psnehanshu•2h ago
What if it was proxied through a mobile network on an unsuspecting user's phone? You risk blocking a whole city or region.
ItsBob•1h ago
I admit, my approach was rather nuclear but it worked at the time.

I think an evolution would be to use some sort of exponential backoff, e.g. first-time offenders get banned for an hour, the second time is 4 hours, and the third time you're sent into the abyss!

Still crude but fun to play about with.
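A tiny Python sketch of that escalation, assuming the 1-hour / 4-hour / permanent tiers mentioned above:

    import time

    BAN_TIERS = [3600, 4 * 3600, float("inf")]   # 1 hour, 4 hours, then the abyss

    offences = {}       # ip -> number of times it hit the honeypot
    banned_until = {}   # ip -> timestamp the ban expires

    def punish(ip: str) -> None:
        """Escalate the ban each time the same IP trips the honeypot again."""
        n = offences.get(ip, 0)
        offences[ip] = n + 1
        banned_until[ip] = time.time() + BAN_TIERS[min(n, len(BAN_TIERS) - 1)]

    def is_banned(ip: str) -> bool:
        return time.time() < banned_until.get(ip, 0)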

kwa32•3h ago
crazy how scraping became an industry
npteljes•2h ago
It's wild. Data is very valuable. This manifests on two fronts simultaneously: whoever has the data heavily controls who sees it and under what circumstances, and on the other side, everyone else scrapes it as hard as they can.
r_singh•2h ago
The Internet isn’t possible without scraping. For all the sentiment against scraping public data, doing so remains legal and essential to a lot of the services we use every day. I think setting guidelines and shaping the web to reduce friction around fair usage, rather than turning it political, would be the right thing to do.
intended•2h ago
What? What do you mean?
georgefrowny•1h ago
To be fair the heyday of unshit search was driven by mostly-consensual scraping.

Today there are far too many people scraping stuff that isn't intended to be scraped, for profit, and doing it in a heavy-handed way that actually does have a negative and continuous effect on the victim's capacity.

Everyone from AI services too lazy or otherwise unwilling to cache to companies exfiltrating some kind of data for their own commercial purposes.

r_singh•1h ago
With peering bandwidth being freely distributed to ISPs, and consumers being fed media and subsidised services up to their necks, the counter-argument smells of narrative control rather than technical or financial constraints.

But as I’m growing older I’m learning that the tech industry is mostly politically driven and relies on truth obfuscation, as explained by Peter Thiel, rather than real empowerment.

It’s facilitating the accumulation of control and power at an unparalleled pace. If anything, it’s proving to be more unjust than the feudal systems it promises to replace.

ac29•41m ago
As posted in another comment, they run a scraping API. I think their opinion is at least slightly biased.
karlshea•2h ago
There were already guidelines; these trash people aren’t following them. That’s why there’s now “sentiment” against them.
r_singh•1h ago
It’s fair to be angry at abuse and "aggressive bots", but it's important to remember most large platforms—including the ones being scraped—built their own products on scraping too.

I run an e-commerce-specific scraping API that helps developers access SERP, PDP, and reviews data. I've noticed the web already has unspoken balances: certain traffic patterns and techniques are tolerated, others clearly aren’t. Most sites handle reasonable, well-behaved crawlers just fine.

Platforms claim ownership of UGC and public data through dark patterns and narrative control. The current guidelines are a result of supplier convenience, and there are several cases where absolutely fundamental web services run by the largest companies in the world themselves breach those guidelines (including those funded by the fund running this site). We need standards that treat public data as a shared resource with predictable, ethical access for everyone, not just for those with scale or lobbying power.

Cthulhu_•1h ago
Well sure, but these guidelines exist; robots.txt has been an industry-led, self-governing / self-restricting standard. But newer bots ignore it. It'll take years for legislation to catch up, and even then it would be by country or region, not something global, because that's not how the internet works.

Even if there is legislation or whatever, you can sue an OpenAI or a Microsoft, but starting a new company that does scraping and sells it on to the highest bidder is trivial.

r_singh•1h ago
As the legal history around scraping shows, it’s almost always the smaller company that gets sued out of existence. Taking on OpenAI or Microsoft, as you suggest, isn’t realistic — even governments often struggle to hold them accountable.

And for the record, large companies regularly ignore robots.txt themselves: LinkedIn, Google, OpenAI, and plenty of others.

The reality is that it’s the big players who behave like the aggressors, shaping the rules and breaking them when convenient. Smaller developers aren’t the problem, they’re just easier to punish.

uvaursi•2h ago
Do we shift over everything to le Dark Web and let the corpos use this one for selling their shit to consumers? These toys don’t want to play nice and there’s no real way to stop them without bringing in things like Real ID and other verifications that infringe on anonymity.
Chabsff•2h ago
The bots flock to where the data is. Moving to a different network is just begging for the bots to tag along.
uvaursi•1h ago
We can set different rules on these networks, however. We can choose to be choosy at the gate.
Cthulhu_•1h ago
How though? Bots evade bot detection mechanisms, as described in other threads. Unless you introduce something like ID verification or pay-per-request making bot traffic too expensive for the bots. But these techniques have been posited for older generations of bot traffic too.
embedding-shape•1h ago
Make it P2P and content-based, instead of location-based like the current web. Content could be served from anywhere, so DDoS stops being an effective method, and shared peer quality could propagate across the network to ban bad actors quickly.

I spend about 30 seconds thinking about this, so this is clearly the perfect solution with zero drawbacks or tradeoffs.

Chabsff•1h ago
I know this is tongue in cheek, but I'll give it a serious reply in case someone finds themselves "inspired".

A CDN. What you are describing is a CDN. We have CDNs today and the problem still exists because most of today's websites refuse to operate within the constraints. There is no need for new infrastructure to deploy this solution, we just need website operators to "give up" and operate in a more static way.

embedding-shape•56m ago
Nope, today's CDNs are all location-based, not content-based. The user-agents are requesting content based on the address (the URI), not based on the content-hash of the content. The CDNs might work content-based internally, but the user-facing web CDNs definitely are URL based, not content-hash based.
Chabsff•50m ago
You might have a point if this was a user problem, but it's not. It's a site operator problem. And from their POV, a CDN provides the exact same functionality.
embedding-shape•4m ago
CDNs are a hack, not a solution. Content-addressing would be a solution, not a hack. From the PoV of operators or users, it doesn't matter: DDoS wouldn't be viable with content-addressing.
happysadpanda2•52m ago
An evolution of the GPG web of trust could possibly be part of it.
y-zon128•2h ago
> Auto-restart the reverse-proxy if bandwidth usage drops to zero for more than 2 minutes

It's understandable in your case, as you have traffic coming in constantly, but the first thing that came to my mind is a loop of constant reboots - again, very unlikely in your case. Sometimes such blanket rules hit me for the most unexpected reasons, like the proxy somehow failing to start serving traffic in the given timeframe.

Though I completely appreciate and agree with the 'ship something that works now' approach!