frontpage.

AT&T collapse: A single typo that shut down the US long-distance network

https://en.wikipedia.org/wiki/1971_AT%26T_network_switching_system_collapse
1•xthe•1m ago•1 comments

Debian GNOME team plans to kill GTK2

https://ludditus.com/2026/01/15/debian-has-its-retards-too-they-plan-to-kill-gtk2/
1•jandeboevrie•3m ago•0 comments

Install.md: A Standard for LLM-Executable Installation

https://www.mintlify.com/blog/install-md-standard-for-llm-executable-installation
1•npmipg•6m ago•0 comments

Show HN: ExprTk, a High-Performance C++ Math Expression Parser and Evaluation Engine

https://www.partow.net/programming/exprtk/index.html
1•exprtk•7m ago•0 comments

What happens to cities when the jobs leave?

https://deadneurons.substack.com/p/what-happens-to-cities-when-the-jobs
1•nr378•7m ago•0 comments

Drawbot: Let's hack something cute (2025)

https://www.atredis.com/blog/2025/9/30/drawbot-lets-hack-something-cute
6•notmine1337•8m ago•2 comments

Show HN: Polymcp Implements Ollama for Local and Cloud Model Execution

1•justvugg•9m ago•0 comments

Custom firmware for x4 eInk device

https://github.com/crosspoint-reader/crosspoint-reader
1•xteink•9m ago•0 comments

Radithor

https://en.wikipedia.org/wiki/Radithor
1•radeeyate•11m ago•0 comments

Show HN: Paintracker.ca, a PWA pain tracker that keeps data on device by default

https://www.paintracker.ca/
1•crisiscore_sys•13m ago•1 comments

Iran: 200 hours of a nationwide internet and telecommunications blackout

https://www.dw.com/en/iran-protests-internet-communication-blackout-social-media/a-75487555
1•us321•14m ago•0 comments

OpenAI to test ads in ChatGPT as it burns through billions

https://arstechnica.com/information-technology/2026/01/openai-to-test-ads-in-chatgpt-as-it-burns-...
2•coloneltcb•17m ago•1 comments

Agam Space – Self-hosted, zero-knowledge, end-to-end encrypted file storage

https://github.com/agam-space/agam-space
1•rameshl•17m ago•1 comments

Judge orders Anna's Archive to delete scraped data; no one thinks it will comply

https://arstechnica.com/tech-policy/2026/01/judge-orders-annas-archive-to-delete-scraped-data-no-...
2•vo2maxer•17m ago•0 comments

Adobe Photoshop 2025 Installer Now Working on Linux with Patched Wine

https://www.phoronix.com/news/Adobe-Photoshop-2025-Wine-Patch
1•speckx•18m ago•1 comments

Casey Muratori breaks down the AWS outage

https://www.youtube.com/watch?v=gstn9qcNdlc
1•marc_omorain•23m ago•0 comments

Down with the BRT, long live the Bus

https://marcochitti.substack.com/p/down-with-the-brt-long-live-the-bus
1•decimalenough•25m ago•0 comments

The Death of Software 2.0 (A Better Analogy)

https://www.fabricatedknowledge.com/p/the-death-of-software-20-a-better
1•achierius•26m ago•0 comments

How to open a file in Emacs

https://www.murilopereira.com/how-to-open-a-file-in-emacs
2•lr0•29m ago•1 comments

The cardinal sin of software architecture

https://functional.computer/blog/the-cardinal-sin-of-software-architecture
2•speckx•32m ago•1 comments

I set all 376 Vim options and I'm still a fool

https://evanhahn.com/i-set-all-376-vim-options-and-im-still-a-fool/
2•todsacerdoti•32m ago•0 comments

Diosdado Banatao, Chip Designer, Investor and Entrepreneur, Dies at 79

https://www.wsj.com/wsjplus/dashboard/articles/diosdado-bantao-chip-designer-dead-79-f205c32b
1•melling•32m ago•1 comments

Ask HN: Which system would you trust to run a business you can't afford to lose?

1•cutterlayers•34m ago•1 comments

Releasing rainbow tables to accelerate protocol deprecation

https://cloud.google.com/blog/topics/threat-intelligence/net-ntlmv1-deprecation-rainbow-tables
9•linolevan•38m ago•2 comments

Technology and Wealth: The Straw, the Siphon, and the Sieve [video]

https://www.youtube.com/watch?v=OxvRx7sQNxc
1•measurablefunc•41m ago•0 comments

We're more patient with AI than one another

https://www.uxtopian.com/journal/were-more-patient-with-ai-than-one-another
2•lucidplot•41m ago•0 comments

Stop Pulling Yourself Down

https://buanasalf.com/blog/stop-pulling-yourself-down/
1•moh20•41m ago•0 comments

Human code review is a crutch

https://deadneurons.substack.com/p/human-code-review-is-an-outdated
1•nr378•41m ago•1 comments

Kusto Query Language

https://github.com/microsoft/Kusto-Query-Language
1•tosh•42m ago•0 comments

Show HN: I made a phone in between a smartphone and dumbphone

https://bouchardindustries.com
1•bouchardio•43m ago•1 comments

LWN is currently under the heaviest scraper attack seen yet

https://social.kernel.org/notice/B2JlhcxNTfI8oDVoyO
108•luu•1h ago

Comments

zahlman•1h ago
Is it still ongoing? The thread appears to be over 24 hours old, and as a quick test I had no issue loading the main page (which is as snappy and responsive as expected from a low-bandwidth site like LWN).
jzb•27m ago
Not at the moment. It’s subsided for now.
blibble•1h ago
the perverse incentive is that if you DDoS the website until it shuts down, no other "AI" parasites can get the valuable data

big tech is incentivised to DDoS... what a world they've built

ronsor•1h ago
This sounds like a conspiracy theory.
MBCook•1h ago
I don’t think they’re saying that’s actually happening here, just that it could happen and is accidentally incentivized.
pwdisswordfishy•1h ago
If it's a conspiracy, it would be one where the Minimum Viable Conspirator Count is 1 (inclusive of one's own self).

In that case, by that rubric literally anything that you conspire with yourself to accomplish (buying next week's groceries, making a turkey sandwich...) would also be a conspiracy.

amlib•7m ago
The dead internet theory also sounded unhinged and conspiracy theory-ish a decade or so ago... yet here we are.
phkahler•1h ago
It's called pulling up the ladder behind you, or building a moat!
NitpickLawyer•1h ago
Umm... what data? That's a very old newsletter-like site. Everything that's public on it has been long scraped and parsed by whoever needed it. There's 0 valuable data there for "parasites" to parasite off of.

I also don't get the comments on the linked social site. IIUC the users posting there are somehow involved with kernel work, right? So they should know a thing or two about technical stuff? How/why are they so convinced that the big bad AI baddies are scraping them, and not some misconfigured thing that someone or another built? Is this their first time? Again, there's nothing there that hasn't been indexed dozens of times already. And... sorry to say it, but neither newsletters nor the 1-3 comments on each article are exactly "prime data" for any kind of training.

These people have gone full tinfoil hat and spewing hate isn't doing them any favours.

MBCook•1h ago
I don’t think they were talking about LWN specifically but just in general.
homebrewer•1h ago
Because it started in 2022 and hasn't subsided since? This is just the latest iteration of "AI" scrapers destroying the site, and the worst one yet.

https://lwn.net/Articles/1008897

Your nonsense about LWN being a "newsletter" and having "zero valuable data" isn't doing you any favors. It is the prime source of information about Linux kernel development, and Linux development in general.

"AI" cancer scraping the same thing over and over and over again is not news for anybody even with a cursory interest in this subject. They've been doing it for years.

NitpickLawyer•58m ago
> LWN.net is a reader-supported news site

I mean...

Again, the site is so old that anything worthwhile is already in Common Crawl or any number of other crawls. I am not saying they weren't scraped. I'm saying they likely weren't scraped by the bad AI people. And certainly not by AI companies trying to limit others from accessing that data (as the person I replied to stated).

MBCook•50m ago
Why is it each of your comments seems to include a dig attacking LWN?
spinningslate•7m ago
I’m going to presume good faith rather than trolling. Some questions for you:

1. Coding assistants have emerged as one of the primary commercial opportunities for AI models. As GP pointed out, LWN is the primary discussion venue for kernel development. If you were gathering training data for a model, and coding assistance is one of your goals, and you know of one of the primary sources of open source development expertise, would you:

  (a) ignore it because it’s in a quaint old format, or

  (b) slurp up as much as you can?
2. If you’d previously slurped it up, and are now collating data for a new training run, and you know it’s an active mailing list that will have new content since you last crawled it, would you:

  (a) carefully and respectfully leave it be, because you still get benefit from the previous content even though there’s now more and it’s up to date, or

  (b) hoover up every last drop because anything you can do to get an edge over your competitors means you get your brief moment of glory in the benchmarks when you release?
gulugawa•1h ago
I've had luck blocking scrapers by overriding DOM methods in JavaScript:

  // Clear page content when a scraper queries the DOM
  document.getElementsByTagName = function (...args) { document.body.innerHTML = ''; return []; };

One can also hide components inside Shadow DOM to make it harder to scrape.

However, these methods will interfere with automated testing tools such as Playwright and Selenium. Also, search engine indexing is likely to be affected.
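
A minimal sketch of the Shadow DOM approach, assuming a hypothetical host element (a closed shadow root keeps the markup away from scrapers that only read document.body.innerHTML):

  // Move the real content into a closed shadow root so naive
  // light-DOM scraping sees an empty host element
  const host = document.getElementById('content-host'); // hypothetical element ID
  const shadow = host.attachShadow({ mode: 'closed' });
  shadow.innerHTML = '<article>actual page content here</article>';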

bogwog•42m ago
This is a fun idea, especially if you make those functions procedurally generate garbage to get them stuck
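
A rough sketch of that idea, reusing the overridden method from above (the word list and sizing are arbitrary):

  // Feed procedurally generated filler to anything that trips the override
  const words = ['lorem', 'kernel', 'flibbert', 'quux', 'garble'];
  const garbage = (n) => Array.from({ length: n }, () => words[Math.floor(Math.random() * words.length)]).join(' ');
  document.getElementsByTagName = function () {
    document.body.textContent = garbage(500); // replace the page with noise
    return [];
  };
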
TurdF3rguson•34m ago
You think you've had luck. The truth is you have no way of knowing if this ever had any effect at all.
chrisjj•1h ago
So which is it? DDOS attack or "AI" scrapers?
fabian2k•1h ago
Sufficiently aggressive and inconsiderate scraping is indistinguishable from a DDOS attack.
Y-bar•1h ago
A sufficiently stupid and egregious AI scraper is indistinguishable from a DDOS attack.

Edit: Fabian2k was ten seconds ahead. Damn!

TurdF3rguson•31m ago
Scrapers, because DDoS implies that it's malicious rather than accidental, and there's no reason to think that.
jacquesm•1h ago
AI allows companies to resell open source code as if they wrote it themselves, doing an end run around all license terms. This is a major problem.

Of course they're not going to stop at just code. They need all the rest of it as well.

zipy124•1h ago
From the creators of easy money laundering (crypto bros), we now bring you easy money laundering 2: intellectual property laundering, coming to a theatre near you soon!
gruez•54m ago
>From the creators of easy money laundering (crypto bros),

Is there any evidence that "crypto bros" and "AI bros" are even the same set of people, other than being vaguely "tech" and hated by HN? At best you have someone like Altman, who founded OpenAI and had a crypto project (Worldcoin), but the latter was used by approximately nobody. What about everyone else? Did Ilya Sutskever have a shitcoin a few years ago? Maybe Changpeng Zhao has an AI lab?

themafia•46m ago
> and had a crypto project (worldcoin)

That was a biometric surveillance project disguised as a crypto project.

> Is there even any evidence that "crypto bros" and "AI bros" are even the same set of people

No, the "AI" people are far worse. I always had a choice to /not/ use crypto. The "AI" people want to hamfistedly shove their flawed investment into every product under the sun.

palmotea•18m ago
> AI allows companies to resell open source code as if they wrote it themselves doing an end run around all license terms. This is a major problem.

Has it been adjudicated that AI use actually allows that? That's definitely what the AI bros want (and will loudly assert), but that doesn't mean it's true.

blakesterz•1h ago

  "It is a DDOS attack involving tens of thousands of addresses"
It is amazing just how distributed some of these things are. Even on the small sites that I help host we see these types of attacks from very large numbers of diverse IPs. I'd love to know how these are being run.
smitty1e•1h ago
Call it a "Distributed Intelligence Logic Denial Of Service" (DILDOS) attack both to name it distinctly and characterize the source.
random1234user•16m ago
Might as well call it "Artificial Intelligence Distributed Intelligence Logic Denial Of Service" (AIDILDOS). Sounds about right.
PaulDavisThe1st•37m ago
another reference point: we've had well over 1M unique IP addresses hit git.ardour.org as part of a stupid-as-hell git scraping effort. 1M!!!
wongarsu•33m ago
There are plenty of providers selling "residential proxies", distributing your crawler traffic through thousands of residential IPs. BrightData is probably the biggest, but it's a big and growing market.

And if you don't care about the "residential" part, you can get proxies with data center IPs much more cheaply from the same providers. But those are easily blocked.

giantrobot•15m ago
In the most charitable case it's some "AI" companies with an X/Y problem. They want training data so they vibe code some naive scraper (requests is all you need!) and don't ever think to ask if maybe there's some sort of common repository of web crawls, a CommonCrawl if you will.

They don't really need to scrape sites themselves, since Common Crawl or other content archives would be fine as training data. They just don't think, or don't know, to ask for what they actually want (a sketch of that alternative follows below).

In the least charitable interpretation it's anti-social assholes that have no concept or care about negative externalities that write awful naive scrapers.
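
Going back to the charitable case, a minimal sketch of querying the Common Crawl index instead of hammering the origin site (the collection name is an example, the current list is at index.commoncrawl.org; assumes a JavaScript runtime with fetch and top-level await):

  // Ask the Common Crawl index which captures exist for a domain,
  // rather than re-crawling the site itself
  const res = await fetch('https://index.commoncrawl.org/CC-MAIN-2024-51-index?url=lwn.net/*&output=json');
  const lines = (await res.text()).trim().split('\n');
  console.log(lines.length + ' captures of lwn.net in this crawl');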

tedivm•1h ago
I solved this problem for my blog by simply not being interesting.
fancyfredbot•51m ago
If you can bore an LLM that's exciting.
chuckadams•23m ago
Bore-a-Bot, the new service from the Confuse-a-Cat company.
naiv•32m ago
TIL about Git Brag because of your blog. It is interesting.
fancyfredbot•54m ago
Who are these aggressive scrapers run by?

It is difficult to figure out the incentives here. Why would anyone want to pull data from LWN (or any other site) at a rate that causes a DDoS-like attack?

If I ran a big, data-hungry AI lab consuming training data at 100 Gb/s, it would be much, much easier to scrape 10,000 sites at 10 Mb/s each than to DDoS a smaller number of sites with more traffic. Of course the big labs want this data, but why would they risk the reputational damage of overloading popular sites in order to pull it in an hour instead of a day or two?

kylehotchkiss•52m ago
China (Alibaba and Tencent)
fancyfredbot•43m ago
I'm not at all sure alibaba or tencent would actually want to DDOS LWN or any other popular website.

They may face less reputational damage than say Google or OpenAI would but I expect LWN has Chinese readers who would look dimly on this sort of thing. Some of those readers probably work for Alibaba and Tencent.

I'm not necessarily saying they wouldn't do it if there was some incentive to do so but I don't see the upside for them.

philipkglass•51m ago
I don't think that most of them are from big-name companies. I run a personal web site that has been periodically overwhelmed by scrapers, prompting me to update my robots.txt with more disallows.

The only big AI company I recognized by name was OpenAI's GPTBot. Most of them are from small companies that I'm only hearing of for the first time when I look at their user agents in the Apache logs. Probably the shadiest organizations aren't even identifying their requests with a unique user agent.

As for why a lot of dumb bots are interested in my web pages now, when they're already available through Common Crawl, I don't know.
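
For reference, the kind of robots.txt additions this involves (GPTBot is OpenAI's documented crawler token; the catch is that only well-behaved crawlers honor it):

  User-agent: GPTBot
  Disallow: /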

iamnothere•34m ago
Maybe someone is putting out public “scraper lists” that small companies or even individuals can use to find potentially useful targets, perhaps with some common scraper tool they are using? That could explain it? I am also mystified by this.
bjackman•43m ago
LWN includes archives of a bunch of mailing lists, so that might be a factor. There are a LOT of web pages on that domain.
mikkupikku•39m ago
NSA, trying to force everybody onto their Cloudflare reservation.
velox_neb•29m ago
I bet some guy just told Claude Code to archive all of LWN for him on a whim.
maxbond•8m ago
Probably someone with more access to bandwidth than sense, who either rate-limited themselves incorrectly (for instance, they forgot to configure the limit in their production environment and the default value was 0) or simply didn't think to do so.

It would be funny if someone vibe coded scraping infrastructure and tested it at a small scale, didn't see a problem, and so turned the throttle all the way up.
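
A sketch of that failure mode (names are hypothetical): if the delay between requests comes from configuration and the fallback is zero, an unset variable in production turns a polite crawler into a flood.

  // Hypothetical throttle: SCRAPE_DELAY_MS unset => 0 ms between requests
  const delayMs = Number(process.env.SCRAPE_DELAY_MS ?? 0); // dangerous default
  async function politeFetch(url) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    return fetch(url);
  }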

bloppe•54m ago
I'm curious how they concluded this was done to scrape for AI training. If the traffic was easily distinguishable from regular users, they would be able to firewall it. If it was not, then how can they be sure it wasn't just a regular old malicious DDOS? Happens way more often than you might think. Sometimes a poorly-managed botnet can even misfire.
MBCook•51m ago
Why would anyone ever DDOS them? They’ve been around for about three decades now, I don’t know if they’ve ever had a DDOS attack before the AI crawling started.
iamnothere•43m ago
I am starting to think these are not just AI scrapers blindly seeking out data. All kinds of FOSS sites, including low-volume forums and blogs, have been under this kind of persistent pressure for a while now. Given the cost involved in maintaining this kind of widespread, constant scraping, the economics don’t seem to line up. Surely even big-budget projects would adjust their scraping rates based on how many changes they see on a given site. At scale this could save a lot of money and would reduce the chance of blocking.

I haven’t heard of the same attacks facing (for instance) niche hobby communities. Does anyone know if those sites are facing the same scale of attacks?

Is there any chance that this is a deniable attack intended to disrupt the tech industry, or even the FOSS community in particular, with training data gathered as a side benefit? I’m just struggling to understand how the economics can work here.

zomiaen•33m ago
How many of these scrapers are written by AI for data-science folks who don't remotely care how often they're hitting the sites, and for whom rate limiting is a detail they wouldn't even think to give to, or ask, the LLM about?
iamnothere•5m ago
But does that explain all of the various scrapers doing the same thing across the same set of sites? And again, the sheer bandwidth and CPU time involved should eventually bother the bean counters.

I did think of a couple of possibilities:

- Someone has a software package or list of sites out there that people are using instead of building their own scrapers, so everyone hits the same targets with the same pattern.

- There are a bunch of companies chasing a (real or hoped-for) “scraped data” market, perhaps overseas where overhead is lower, and there’s enough excess AI funding sloshing around that they’re able to scrape everything mindlessly for now. If this is the case then the problem should fix itself as funding gets tighter.

2OEH8eoCRo0•11m ago
When are we going to start suing these assholes? Why isn't anybody leveraging the legal system?