
End of an era for me: no more self-hosted git

https://www.kraxel.org/blog/2026/01/thank-you-ai/
87•dzulp0d•2h ago

Comments

Jaxkr•1h ago
The author of this post could solve their problem with Cloudflare or any of its numerous competitors.

Cloudflare will even do it for free.

rubiquity•1h ago
The scrapers should use some discretion. There are some rather obvious optimizations: content that hasn't changed in a long time is less likely to change in the future, so it can be re-crawled far less often.
JohnTHaller•50m ago
They don't care. It's the reason they ignore robots.txt and change up their useragents when you specifically block them.
isodev•1h ago
I think the point of the post was how something useless (AI) and its poorly implemented scrapers are wreaking havoc in a way that’s turning the internet into a digital desert.

That Cloudflare is trying to monetise “protection from AI” is just another grift in the sense that they can’t help themselves as a corp.

denkmoon•1h ago
Cool, I can take all my self hosted stuff and stick it behind centralised enterprise tech to solve a problem caused by enterprise tech. Why even bother?
FeteCommuniste•1h ago
"Cause a problem and then sell the solution" proves a winning business strategy once more.
fouc•1h ago
you don't understand what self-hosting means. self-hosting means the site is still up when AWS and Cloudflare go down.
the_fall•1h ago
They don't. I'm using Cloudflare and 90%+ of the traffic I'm getting are still broken scrapers, a lot of them coming through residential proxies. I don't know what they block, but they're not very good at that. Or, to be more fair: I think the scrapers have gotten really good at what they do because there's real money to be made.
overgard•1h ago
I'm pretty sure scrapers aren't supposed to act as low-key DoS attacks
simonw•1h ago
Cloudflare won't save you from this - see my comment here: https://news.ycombinator.com/item?id=46969751#46970522
Semaphor•28m ago
For logging, statistics etc. we have the Cloudflare bot protection on the standard paid level, ignore all IPs not from Europe (rough geolocation), and still see over twice the number of bots we had ~2 years ago.
CuriouslyC•1h ago
Does this author have a big pre-established audience or something? Struggling to understand why this is front-page worthy.
bibimsz•1h ago
the era of mourning has begun
fouc•1h ago
because he's unable to self-host git anymore because AI bots are hammering it to submit PRs.

self-hosting was originally a "right" we had upon gaining access to the internet in the 90s; it was the main point of the hypertext transfer protocol.

geerlingguy•1h ago
Also converting the blog from something dynamic to a static site generator. I made the same switch partly for ease of maintenance, but a side benefit is it's more resilient to this horrible modern era of scrapers far outnumbering legitimate traffic.

It's painful to have your site offline because a scraper has channeled itself 17,000 layers deep through tag links (which are set to nofollow, and ignored in robots.txt, but the scraper doesn't care). And it's especially annoying when that happens on a daily basis.

Not everyone wants to put their site behind Cloudflare.
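For context on what those scrapers are skipping: a well-behaved crawler consults robots.txt before fetching anything. A minimal Python sketch of that check, using only the stdlib (the rules and bot name here are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules of the kind described above: tag pages
# are disallowed, and a crawl delay is requested.
rules = [
    "User-agent: *",
    "Disallow: /tag/",
    "Crawl-delay: 10",
]

rp = RobotFileParser()
rp.parse(rules)

# A compliant crawler checks each URL before requesting it:
print(rp.can_fetch("ExampleBot", "https://example.org/tag/linux/"))   # False
print(rp.can_fetch("ExampleBot", "https://example.org/posts/hello/")) # True
```

A crawler honoring these rules would never reach the tag links at all; the scrapers in this thread simply never make the check.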

tanduv•51m ago
sorry if i missed it, but the original post doesn't say anything about PRs... the bots only seem to be scraping the content
jaunt7632•1h ago
A healthy front page shouldn’t be a “famous people only” section. If only big names can show up there, it’s not discovery anymore, it’s just a popularity scoreboard.
ares623•1h ago
Well the fact that this supposed nobody is overwhelmed by AI scrapers should speak a lot about the issue no?
data-ottawa•1h ago
Does anyone know what's the deal with these scrapers, or why they're attributed to AI?

I would assume any halfway competent LLM-driven scraper would see a mass of 404s and stop. If they're just collecting data to train LLMs, these seem like exceptionally poorly written, abusive scrapers built the normal way, just by more bad actors.

Are we seeing these scrapers using LLMs to bypass auth or run more sophisticated flows? I have not worked on bot detection the last few years, but it was very common for residential proxy based scrapers to hammer sites for years, so I'm wondering what's different.

themafia•1h ago
There's value to be had in ripping the copyright off your stuff so someone else can pass it off as their stuff. LLMs have no technical improvements so all they can do is throw more and more stolen data into them and hope it, somehow, crosses a nebulous "threshold" where it suddenly becomes actually profitable to use and sell.

It's a race to the bottom. What's different is we're much closer to the bottom now.

hsuduebc2•1h ago
I’m guessing, but I think a big portion of AI requests now come from agents pulling data specifically to answer a user’s question. I don’t think that data is collected mainly for training anymore; it’s mostly retrieved and fed into LLMs so they can generate the response. Hence so many repeated requests.
simonw•1h ago
I would love to understand this.

Just a few years ago badly behaved scrapers were rare enough not to be worth worrying about. Today they are such a menace that hooking any dynamic site up to a pay-to-scale hosting platform like Vercel or Cloud Run can trigger terrifying bills on very short notice.

"It's for AI" feels like lazy reasoning to me... but what IS it for?

One guess: maybe there's enough of a market now for buying freshly updated scrapes of the web that it's worth a bunch of chancers running a scrape. But who are the customers?

devsda•44m ago
For whatever reason, legislation is lax right now if you claim the purpose of scraping is for AI training even for copyrighted material.

Maybe everyone is trying to take advantage of the situation before the law eventually catches up.

arnarbi•42m ago
> why they're attributed to AI?

I don’t think they mean scrapers necessarily driven by LLMs, but scrapers collecting data to train LLMs.

Lerc•1h ago
I presume people have logs that indicate the source for them to place blame on AI scrapers. Is anybody making these available for analysis so we can see exactly who is doing this?
JohnTHaller•51m ago
The big nasty AI bots use tens of thousands of IPs distributed all over China
oceanplexian•1h ago
It's not that hard to serve some static files @ 10k RPS from something running on modest, 10 year old hardware.

My advice to the OP is if you're not experienced enough, maybe stop taking subtle digs at AI and fire up Claude Code and ask it how to set up a LAMP stack or a simple Varnish Cache. You might find it's a lot easier than writing a blog post.

QuiDortDine•1h ago
Not sure why you're talking like OP pissed in your cheerios. They are a victim of a broken system, it shouldn't be on them to spend more effort protecting their stuff from careless-to-malicious actors.
simonw•1h ago
A varnish cache won't help you if you're running something like a code forge where every commit has its own page - often more than one page, there's the page for the commit and then the page for "history from this commit" and a page for every one of the files that existed in the repo at the time of that commit...

Then a poorly written crawler shows up and requests 10,000s of pages that haven't been requested recently enough to be in your cache.

I had to add a Cloudflare Captcha to the /search/ page of my blog because of my faceted search engine - which produces many thousands of unique URLs when you consider tags and dates and pagination and sort-by settings.

And that's despite me serving every page on my site through a 15 minute Cloudflare cache!

Static only works fine for sites that have a limited number of pages. It doesn't work for sites that truly take advantage of the dynamic nature of the web.
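The URL explosion described here is easy to quantify. A toy Python sketch with entirely made-up facet counts:

```python
# Hypothetical facet counts for a blog search page like the one described:
tags = 50    # tag filters
years = 20   # date facets
sorts = 3    # sort-by settings
pages = 10   # pagination depth per result set

# Every combination is a distinct URL a crawler can wander into,
# and almost none of them will be warm in any cache.
unique_urls = tags * years * sorts * pages
print(unique_urls)  # 30000 URLs from a single search page
```

A short-lived cache does nothing for pages that get requested once every few days, which is why caching alone can't save a genuinely dynamic site.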

ninjin•50m ago
Exactly. The problem is that by their very nature some content has to be dynamically generated.

Just to add further emphasis as to how absurd the current situation is. I host my own repositories with gotd(8) and gotwebd(8) to share within a small circle of people. There is no link on the Internet to the HTTP site served by gotwebd(8), so they fished the subdomain out of the main TLS certificate. I am getting hit once every few seconds for the last six or so months by crawlers ignoring the robots.txt (of course) and wandering aimlessly around "high-value" pages like my OpenBSD repository forks calling blame, diff, etc.

Still managing just fine to serve things to real people, despite me at times having two to three cores running at full load to serve pointless requests. Maybe I will bother to address this at some point as this is melting the ice caps and wearing my disks out, but for now I hope they will choke on the data at some point and that it will make their models worse.

aguacaterojo•1h ago
How would a LAMP stack help his git server?
hattmall•55m ago
Can we not charge for access? If I have a link, that says "By clicking this link you agree to pay $10 for each access" then sending the bill?
devsda•52m ago
At this point, I think we should look at implementing filters that send a different response when AI bots are detected or when the clients are abusive. Not just a simple response code, but one that poisons their training data. Preferably text that elaborates on the anti-consumer practices of tech companies.

If there is a common text pool used across sites, maybe that will get the attention of bot developers and automatically force them to back down when they see such responses.
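A minimal sketch of the idea in Python (the signature list and decoy text are illustrative only, and as noted elsewhere in the thread, scrapers often spoof browser user agents to evade exactly this kind of filter):

```python
# Serve decoy filler to flagged scrapers, real content to everyone else.
DECOY = "Lorem ipsum dolor sit amet. " * 100
SCRAPER_SIGNS = ("GPTBot", "CCBot", "Bytespider", "YisouSpider")

def response_for(user_agent: str, real_body: str) -> str:
    if any(sign in user_agent for sign in SCRAPER_SIGNS):
        return DECOY      # poisoned response for flagged bots
    return real_body      # normal content otherwise

print(response_for("Mozilla/5.0 (compatible; YisouSpider)", "article")[:11])  # Lorem ipsum
```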

fennec-posix•48m ago
https://anubis.techaro.lol/docs/admin/honeypot/overview The Anubis scraper protection has this as a feature. Just sends garbage if something falls into a trap.
Vexs•42m ago
You know, I reckon if you serve up smut or instructions on bomb creation or something they stop hammering you...
JohnTHaller•52m ago
The Chinese AI scrapers/bots are killing quite a bit of the regular web now. YisouSpider absolutely pummeled my open source project's hosting for weeks. Like all Chinese AI scrapers, it ignores robots.txt. So forget about it respecting a Crawl-delay. If you block the user agent, it would calm down for a bit, then it would just come back again using a generic browser user agent from the same IP addresses. It does this across tens of thousands of IPs.
kevin_thibedeau•22m ago
Start blocking /16s.
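The /16 approach can be sketched with Python's stdlib ipaddress module (the offending IPs here are documentation-range placeholders):

```python
import ipaddress

# Collapse individual offending IPs into their covering /16 networks,
# then block the networks instead of the single addresses.
offenders = ["203.0.113.5", "203.0.113.250", "198.51.100.17"]

blocks = {
    ipaddress.ip_network(ip).supernet(new_prefix=16)
    for ip in offenders
}
for net in sorted(blocks, key=str):
    print(net)  # 198.51.0.0/16, then 203.0.0.0/16
```

The obvious trade-off: a /16 covers 65,536 addresses, so this also blocks any legitimate users who happen to share the range.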
october8140•46m ago
You could put it behind Cloudflare and block all AI.
Joel_Mckay•45m ago
Some run git over ssh, and a domain login for https:// permission manager etc.

Also, spider traps and 42TB zip of death pages work well on poorly written scrapers that ignored robots.txt =3

vachina•14m ago
Scrapers are relentless but not DDoS levels in my experience.

Make sure your caches are warm and responses take no more than 5ms to construct.
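The warm-cache advice in a toy Python sketch (render_page and its 50ms cost are made-up stand-ins for real page generation):

```python
import functools
import time

# Memoize an expensive page render so repeat hits are nearly free.
@functools.lru_cache(maxsize=1024)
def render_page(path: str) -> str:
    time.sleep(0.05)                 # stand-in for slow DB/template work
    return f"<html>{path}</html>"

t0 = time.perf_counter(); render_page("/about"); cold = time.perf_counter() - t0
t0 = time.perf_counter(); render_page("/about"); warm = time.perf_counter() - t0
print(f"cold: {cold*1000:.1f} ms, warm: {warm*1000:.3f} ms")
```

Of course, this only helps when crawlers re-request pages that are actually in the cache; long-tail URLs stay cold.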

krick•11m ago
So, what's up with these bots, why am I hearing about that so often lately? I mean, DDoS attacks aren't a new thing, and, honestly, this is pretty much the reason why Cloudflare even exists, but I'd expect OpenAI bots (or whatever this is now) to be a little bit easier to deal with, no? Like, simply having a reasonably aggressive fail2ban policy? Or do they really behave like a botnet, where each request comes from a different IP from a different network? How? Why? What is this thing?
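For reference, a fail2ban-style policy boils down to banning any IP that exceeds some request rate. A toy Python sketch (thresholds invented, bans kept in memory rather than pushed to a firewall) that also shows why it fails against residential-proxy botnets:

```python
from collections import defaultdict, deque

# fail2ban-style sketch: ban any IP exceeding `limit` hits in `window` seconds.
class RateBan:
    def __init__(self, limit: int = 100, window: float = 10.0):
        self.limit, self.window = limit, window
        self.hits: dict[str, deque] = defaultdict(deque)
        self.banned: set[str] = set()

    def allow(self, ip: str, now: float) -> bool:
        if ip in self.banned:
            return False
        q = self.hits[ip]
        q.append(now)
        while q and now - q[0] > self.window:
            q.popleft()                 # drop hits outside the window
        if len(q) > self.limit:
            self.banned.add(ip)         # one hammering IP gets caught
            return False
        return True

rb = RateBan(limit=5, window=10.0)
# One noisy IP trips the ban quickly...
print([rb.allow("203.0.113.9", 0.0) for _ in range(8)])
# ...but a botnet spreading the same load over many IPs sails through.
print(all(rb.allow(f"198.51.100.{i}", 0.0) for i in range(50)))  # True
```

Which is exactly the problem: when each request arrives from a different residential IP, per-IP rate limiting never fires.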

YouTube's $60B revenue revealed amid paid subscriber push

https://www.bbc.com/news/articles/crkrkd2xlx6o
40•1659447091•1h ago•37 comments

The Feynman Lectures on Physics (1961-1964)

https://www.feynmanlectures.caltech.edu/
171•rramadass•17h ago•57 comments

Software design is now cheap

https://dottedmag.net/blog/cheap-design/
25•dottedmag•2d ago•16 comments

Exploring a Modern SMPTE 2110 Broadcast Truck

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
52•assimpleaspossi•2d ago•5 comments

The Day the Telnet Died

https://www.labs.greynoise.io/grimoire/2026-02-10-telnet-falls-silent/
254•pjf•6h ago•177 comments

The Singularity will occur on a Tuesday

https://campedersen.com/singularity
899•ecto•11h ago•520 comments

Fun With Pinball

https://www.funwithpinball.com/exhibits/small-boards
45•jackwilsdon•4h ago•6 comments

Ex-GitHub CEO launches a new developer platform for AI agents

https://entire.io/blog/hello-entire-world/
399•meetpateltech•13h ago•350 comments

The Little Learner: A Straight Line to Deep Learning (2023)

https://mitpress.mit.edu/9780262546379/the-little-learner/
110•AlexeyBrin•2d ago•16 comments

Clean-room implementation of Half-Life 2 on the Quake 1 engine

https://code.idtech.space/fn/hl2
350•klaussilveira•17h ago•69 comments

My eighth year as a bootstrapped founder

https://mtlynch.io/bootstrapped-founder-year-8/
172•mtlynch•2d ago•52 comments

Willow – Protocols for an uncertain future [video]

https://fosdem.org/2026/schedule/event/CVGZAV-willow/
29•todsacerdoti•2d ago•1 comment

Rivian R2: Electric Mid-Size SUV

https://rivian.com/r2
62•socialcommenter•3h ago•71 comments

Simplifying Vulkan one subsystem at a time

https://www.khronos.org/blog/simplifying-vulkan-one-subsystem-at-a-time
223•amazari•15h ago•149 comments

Mathematicians disagree on the essential structure of the complex numbers (2024)

https://www.infinitelymore.xyz/p/complex-numbers-essential-structure
169•FillMaths•12h ago•217 comments

The Falkirk Wheel

https://www.scottishcanals.co.uk/visit/canals/visit-the-forth-clyde-canal/attractions/the-falkirk...
54•scapecast•8h ago•20 comments

Show HN: JavaScript-first, open-source WYSIWYG DOCX editor

https://github.com/eigenpal/docx-js-editor
60•thisisjedr•1d ago•17 comments

Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS)

https://github.com/rowboatlabs/rowboat
134•segmenta•12h ago•32 comments

A brief history of oral peptides

https://seangeiger.substack.com/p/a-brief-history-of-oral-peptides
94•odedfalik•1d ago•35 comments

Tambo 1.0: Open-source toolkit for agents that render React components

https://github.com/tambo-ai/tambo
71•grouchy•8h ago•16 comments

How did Windows 95 get permission to put Weezer video 'Buddy Holly' on the CD?

https://devblogs.microsoft.com/oldnewthing/20260210-00/?p=112052
142•ingve•9h ago•108 comments

Europe's $24T Breakup with Visa and Mastercard Has Begun

https://europeanbusinessmagazine.com/business/europes-24-trillion-breakup-with-visa-and-mastercar...
740•NewCzech•17h ago•616 comments

Show HN: Model Training Memory Simulator

https://czheo.github.io/2026/02/08/model-training-memory-simulator/
3•czheo•2d ago•0 comments

Competition is not market validation

https://www.ablg.io/blog/competition-is-not-validation
78•tonioab•12h ago•26 comments

Show HN: I built a macOS tool for network engineers – it's called NetViews

https://www.netviews.app
187•n1sni•23h ago•53 comments

Show HN: Distr 2.0 – A year of learning how to ship to customer environments

https://github.com/distr-sh/distr
72•louis_w_gk•16h ago•18 comments

Show HN: ArtisanForge: Learn Laravel through a gamified RPG adventure

https://artisanforge.online/
25•grazulex•2d ago•1 comment

Markdown CLI viewer with VI keybindings

https://github.com/taf2/mdvi
60•taf2•10h ago•30 comments

Oxide raises $200M Series C

https://oxide.computer/blog/our-200m-series-c
545•igrunert•14h ago•290 comments

Show HN: Sol LeWitt-style instruction-based drawings in the browser

https://intervolz.com/sollewitt/
34•intervolz•8h ago•6 comments