frontpage.

Trying to make an Automated Ecologist: A first pass through the Biotime dataset

https://chillphysicsenjoyer.substack.com/p/trying-to-make-an-automated-ecologist
1•crescit_eundo•2m ago•0 comments

Watch Ukraine's Minigun-Firing, Drone-Hunting Turboprop in Action

https://www.twz.com/air/watch-ukraines-minigun-firing-drone-hunting-turboprop-in-action
1•breve•3m ago•0 comments

Free Trial: AI Interviewer

https://ai-interviewer.nuvoice.ai/
1•sijain2•3m ago•0 comments

FDA Intends to Take Action Against Non-FDA-Approved GLP-1 Drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
2•randycupertino•4m ago•0 comments

Supernote e-ink devices for writing like paper

https://supernote.eu/choose-your-product/
1•janandonly•6m ago•0 comments

We are QA Engineers now

https://serce.me/posts/2026-02-05-we-are-qa-engineers-now
1•SerCe•7m ago•0 comments

Show HN: Measuring how AI agent teams improve issue resolution on SWE-Verified

https://arxiv.org/abs/2602.01465
2•NBenkovich•7m ago•0 comments

Adversarial Reasoning: Multiagent World Models for Closing the Simulation Gap

https://www.latent.space/p/adversarial-reasoning
1•swyx•7m ago•0 comments

Show HN: Poddley.com – Follow people, not podcasts

https://poddley.com/guests/ana-kasparian/episodes
1•onesandofgrain•15m ago•0 comments

Layoffs Surge 118% in January – The Highest Since 2009

https://www.cnbc.com/2026/02/05/layoff-and-hiring-announcements-hit-their-worst-january-levels-si...
7•karakoram•15m ago•0 comments

Papyrus 114: Homer's Iliad

https://p114.homemade.systems/
1•mwenge•16m ago•1 comments

DicePit – Real-time multiplayer Knucklebones in the browser

https://dicepit.pages.dev/
1•r1z4•16m ago•1 comments

Turn-Based Structural Triggers: Prompt-Free Backdoors in Multi-Turn LLMs

https://arxiv.org/abs/2601.14340
2•PaulHoule•17m ago•0 comments

Show HN: AI Agent Tool That Keeps You in the Loop

https://github.com/dshearer/misatay
2•dshearer•19m ago•0 comments

Why Every R Package Wrapping External Tools Needs a Sitrep() Function

https://drmowinckels.io/blog/2026/sitrep-functions/
1•todsacerdoti•19m ago•0 comments

Achieving Ultra-Fast AI Chat Widgets

https://www.cjroth.com/blog/2026-02-06-chat-widgets
1•thoughtfulchris•21m ago•0 comments

Show HN: Runtime Fence – Kill switch for AI agents

https://github.com/RunTimeAdmin/ai-agent-killswitch
1•ccie14019•23m ago•1 comments

Researchers surprised by the brain benefits of cannabis usage in adults over 40

https://nypost.com/2026/02/07/health/cannabis-may-benefit-aging-brains-study-finds/
1•SirLJ•25m ago•0 comments

Peter Thiel warns the Antichrist, apocalypse linked to the 'end of modernity'

https://fortune.com/2026/02/04/peter-thiel-antichrist-greta-thunberg-end-of-modernity-billionaires/
3•randycupertino•26m ago•2 comments

USS Preble Used Helios Laser to Zap Four Drones in Expanding Testing

https://www.twz.com/sea/uss-preble-used-helios-laser-to-zap-four-drones-in-expanding-testing
3•breve•31m ago•0 comments

Show HN: Animated beach scene, made with CSS

https://ahmed-machine.github.io/beach-scene/
1•ahmedoo•32m ago•0 comments

An update on unredacting select Epstein files – DBC12.pdf liberated

https://neosmart.net/blog/efta00400459-has-been-cracked-dbc12-pdf-liberated/
3•ks2048•32m ago•0 comments

Was going to share my work

1•hiddenarchitect•35m ago•0 comments

Pitchfork: A devilishly good process manager for developers

https://pitchfork.jdx.dev/
1•ahamez•35m ago•0 comments

You Are Here

https://brooker.co.za/blog/2026/02/07/you-are-here.html
3•mltvc•40m ago•1 comments

Why social apps need to become proactive, not reactive

https://www.heyflare.app/blog/from-reactive-to-proactive-how-ai-agents-will-reshape-social-apps
1•JoanMDuarte•40m ago•1 comments

How patient are AI scrapers, anyway? – Random Thoughts

https://lars.ingebrigtsen.no/2026/02/07/how-patient-are-ai-scrapers-anyway/
1•samtrack2019•41m ago•0 comments

Vouch: A contributor trust management system

https://github.com/mitchellh/vouch
3•SchwKatze•41m ago•0 comments

I built a terminal monitoring app and custom firmware for a clock with Claude

https://duggan.ie/posts/i-built-a-terminal-monitoring-app-and-custom-firmware-for-a-desktop-clock...
1•duggan•42m ago•0 comments

Tiny C Compiler

https://bellard.org/tcc/
8•guerrilla•43m ago•1 comments

Humanely dealing with humungus crawlers

https://flak.tedunangst.com/post/humanely-dealing-with-humungus-crawlers
83•freediver•4mo ago

Comments

bobbiechen•4mo ago
>We’ve already done the work to render the page, and we’re trying to shed load, so why would I want to increase load by generating challenges and verifying responses? It annoys me when I click a seemingly popular blog post and immediately get challenged, when I’m 99.9% certain that somebody else clicked it two seconds before me. Why isn’t it in cache? We must have different objectives in what we’re trying to accomplish. Or who we’re trying to irritate.

+1000 I feel like so much bot detection (and fraud prevention against human actors, too) is so emotionally-driven. Some people hate these things so much, they're willing to cut off their nose to spite their face.

jitl•4mo ago
Really? If I’m an unsophisticated blog not using a CDN, and I get a $1000 bill for bandwidth overage or something, I’m gonna google a solve and slap it on there because I don’t want to pay another $1000 for Big Basilisk. I don’t think that’s an emotional response, it’s common sense.
phantompeace•4mo ago
Wouldn't it be easier to put the unsophisticated blog behind Cloudflare?
mhuffman•4mo ago
As much as I like to shit on cloudflare at every opportunity, it would obviously be easier to put it behind CF than install bot detection plugins.
marginalia_nu•4mo ago
Seems like you've made profoundly questionable hosting or design choices for that to happen. Flat rate web hosting exists, and blogs (especially unsophisticated ones) do not require much bandwidth or processing power.

Misbehaving crawlers are a huge problem but bloggers are among the least affected by them. Something like a wiki or a forum is a better example, as they're in a category of websites where each page visit is almost unavoidably rendered on the fly using multiple expensive SQL queries due to the rapidly mutating nature of their datasets.

Git forges, like the one TFA is discussing, are also fairly expensive, especially as crawlers traverse historical states. When a crawler is poorly implemented, it'll get stuck doing this basically forever. Detecting and dealing with git hosts is an absolute must for any web crawler because of this.
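
A crawler-side version of that detection might look something like the sketch below (Python, stdlib only). The URL patterns are illustrative guesses at how forges and wikis expose per-revision pages, not an exhaustive or authoritative list.

    import re
    from urllib.parse import urlparse

    # Illustrative patterns for per-revision pages on git forges and wikis
    # (individual commits, blobs/blame/raw at a specific revision, old wiki
    # revisions). Real sites vary; treat these as guesses to be tuned.
    FORGE_HISTORY_PATTERNS = [
        re.compile(r"/commits?/[0-9a-f]{7,40}"),
        re.compile(r"/(blob|blame|raw|tree)/[0-9a-f]{7,40}/"),
        re.compile(r"[?&](rev|oldid|diff)=\w+"),
    ]

    def is_expensive_history_url(url: str) -> bool:
        """True if the URL looks like a per-revision page a polite crawler
        should skip rather than walking the whole project history."""
        parsed = urlparse(url)
        target = parsed.path + ("?" + parsed.query if parsed.query else "")
        return any(p.search(target) for p in FORGE_HISTORY_PATTERNS)

    if __name__ == "__main__":
        for u in [
            "https://forge.example/u/repo/blob/3f2a9c1d/README.md",
            "https://forge.example/u/repo",
            "https://wiki.example/Page?oldid=1234",
        ]:
            print("skip" if is_expensive_history_url(u) else "crawl", u)

Applying a check like this before a discovered link is even queued is where it saves the most work, for both the crawler and the forge.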

mtlynch•4mo ago
>Flat rate web hosting exists, and blogs (especially unsophisticated ones) do not require much bandwidth or processing power.

I actually find this surprisingly difficult to find.

I just want static hosting (like Netlify or Firebase Hosting), but there aren't many hosts that offer that.

There are lots of providers where I can buy a VPS somewhere and be in charge of configuring and patching it, but if I just want to hand someone a set of HTML files and some money in exchange for hosting, not many hosts fit the bill.

marginalia_nu•4mo ago
If you just want to host HTML for personal use github pages is free (and works with a custom domain). There are bandwidth limitations, but they definitely won't pull an AWS on you and send a bill that would cover a new car because a crawler acted up.
mtlynch•4mo ago
GitHub Pages' "soft" bandwidth limit is 100 GB per month. I typically use 300-500 GB per month at Netlify, so I'm over GitHub's limit.

I actually don't even want a free option. I want to pay a vendor that cares about keeping my website online. I'm fine paying $20-50/mo as long as it's bounded and they don't just take my site offline if I see a spike from HN.

diggan•4mo ago
> There are lots of providers where I can buy a VPS somewhere and be in charge of configuring and patching it, but if I just want to hand someone a set of HTML files and some money in exchange for hosting, not many hosts fit the bill.

Yeah, that's true, there aren't a lot of "I give you money and HTML, you host it" services out there, surprisingly. Probably the most mature, cheapest and most reliable one today would be good ol' neocities.org (run by HN user kyledrake), which basically gives you 3TB/month for $5, pretty good deal :)

Sometimes when I miss StumbleUpon I go to https://neocities.org/browse?sort_by=random which gives a fun little glimpse of the hobby/curiosity/creative web.

ctoth•4mo ago
> There are lots of providers where I can buy a VPS somewhere and be in charge of configuring and patching it, but if I just want to hand someone a set of HTML files and some money in exchange for hosting, not many hosts fit the bill.

Dreamhost! They're still around and still lovely after how many years? I even find their custom control panel charming.

hobs•4mo ago
I really like DH (though I am still mad about the cloudatcost shenanigans) and use them, but if you use 200x the resources the other shared sites consume, you're getting the boot just like anyone.
thaumaturgy•4mo ago
Interesting, I was under the impression this was more common than maybe it is. I know the hosting market has gotten pretty bad.

So, I'm currently building pretty much this. After doing it on the side for clients for years, it's now my full-time effort. I have a solid and stable infrastructure, but not yet an API or web frontend. If somebody wants basically ssh, git, and static (or even not static!) hosting that comes with a sysadmin's contact information for a small number of dollars per month, I can be reached at sysop@biphrost.net.

Environment is currently Debian-in-LXC-on-Debian-on-DigitalOcean.

ghssds•4mo ago
You already had a couple of suggestions but I've been happy in the past with OVH.

https://www.ovhcloud.com/en/web-hosting/compare/

tekne•4mo ago
I host my personal static site with Firebase, haven’t paid a cent yet (and I don’t even think I set up billing!). Just compile and firebase deploy.
bayindirh•4mo ago
My view on this is simple:

If you're a bot which will ignore all the licenses I put on that content, then I don't want you to be able to reach that content.

No, any amount of monetary compensation is not welcome either. I use these licenses as a matter of principle, and my principles are not for sale.

That's all, thanks.

warkdarrior•4mo ago
How can you tell a bot will ignore all your content licenses?
bayindirh•4mo ago
Currently all AI companies argue that the content they use falls under fair use, and disregard all licenses. This means any future ones respecting these licenses need to be whitelisted.
diggan•4mo ago
How do you know that that bot is part of those AI companies? Maybe it's my personal bot you're blocking, should I also not have (indirectly) access to the content?
bayindirh•4mo ago
You can use an honest user agent string denoting that it's your bot. Some AI companies label their bots transparently; they show up in the logs I keep.

While I understand that you may need a personal bot to crawl or mirror a site, I can't guarantee that I'll grant you access.

I don't like to be that heavy-handed in the first place, but capitalism is making it harder to trust entities you can't see and talk to face to face.

simianparrot•4mo ago
No. Access to my content is a privilege I grant you. I decide how you get to access it, and via a bot that my setup confuses for an AI crawler belonging to an anti-human AI corporation is not a valid way to access it. Get off my virtual lawn.
diggan•4mo ago
> No. Access to my content is a privilege I grant you.

Right, I thought the conversation was about public websites on the public internet, but I think you're talking about this in the context of a private website now? I understand keeping tighter controls if you're dealing with private content you want accessible via the internet for others but not the public.

bayindirh•4mo ago
This interpretation won't take you that far.

Crawling prevention is not new. Many news outlets and biggish websites have been preventing access by non-human agents in various ways for a very long time.

Now, non-human agents have improved and started to leech everything they can find, so the methods are evolving, too.

News outlets are also public sites on the public internet.

Source-available code repositories are also on the public internet, but said agents crawl and use that code, too, backed by fair-use claims.

privatelypublic•4mo ago
All websites are private (excepting maybe government sites). In most places the internet infrastructure itself is private.

You're conflating this with a legal concept that applies to areas that are shared, government-owned, paid for by taxes, and that the government feels people should be able to access.

The web is closer to a shopping mall. You're on one person's property to access other people's stuff, and those people pay to be there. They set their own rules. If you don't follow those rules you get kicked out, charged with trespassing, and possibly banned from the mall entirely.

AI bots have been asked to leave. But, since they own the mall too, the store owners are more than a little screwed.

diggan•4mo ago
> You're on one persons property to access other people's stuff who pay to be there.

I see it more like I'm knocking on people's doors (issuing GET requests with my web browser) and people open their door for me (the server responds with something) or not. If you don't wanna open the door, fine you do you, but if you do open the door, I'm gonna assume it was on purpose as I'm not trying to be malicious, I'm just a user with a browser.

> AI bots have been asked to leave. But, since they own the mall too, the store owners are more than a little screwed.

I don't understand what you mean by this. What is the mall here? Are you saying that people have websites hosted at OpenAI et al.? I'm not sure how the "mall owner" and the people running the AI bots are the same owners.

privatelypublic•4mo ago
First, the mall is the internet as a whole: you're going to have to pay to be there (entrance is free, getting there is not), and then you use their property to get to private businesses that have leased space at the mall.

And finally: https://www.techspot.com/news/105769-meta-reportedly-plannin...

The internet runs on backhaul. A LOT of backhaul is now owned by FAANG. Along with that, most of those companies can financially ruin any business simply by banning them from the platform. So, the companies use their backhaul fiber and peering agreements to crawl everybody else. And nobody says anything because of "The Implication" that if you sue under the Computer Fraud and Abuse Act (among others) they'll just wholesale ban you.

A "door to door" analogy doesn't work because sidewalks are generally considered "Public." The best I can tweak that analogy is a gated neighborhood and everybody has "no soliciting" signs. (NB: at least in my area, soliciting when theres a no-soliciting sign is an actual crime, on top of being trespassing)

kiitos•4mo ago
making an HTTP GET request to an IP and port over the public internet, and getting a response back, is an interaction defined in a technical context, which has its own definitions for concepts like public/private.

stuff like licenses.txt or robots.txt exists in a totally separate context, which has a totally separate set of definitions for concepts like public/private.

can't really conflate context-specific concepts like public/private, over multiple and incompatible contexts like technical/legal

the claim that "a lot of backhaul is now owned by FAANG" is obviously untrue at a basic technical level. the broader argument is cynical, unfalsifiable, and uninteresting.

simianparrot•4mo ago
You’re literally visiting a service paid for by me. It’s open to the public, but it’s my domain and my server and I get to say “no thank you” to your visit if you don’t behave. You have no innate right to access the content I share.

Blocking misbehaving IP addresses isn’t new, and is another version of the same principle.

diggan•4mo ago
> but it’s my domain and my server and I get to say “no thank you” to your visit if you don’t behave [...] Blocking misbehaving IP addresses isn’t new

Absolutely, I agree that of course people are free to block whatever they want, misbehaving or not. Guess I'm just trying to figure out what sort of "collateral damage" people are OK with when putting up content on the public internet but want it to be selectively available.

> You have no innate right to access the content I share.

No, I guess that's true, I don't have any "rights" to do so. But I am gonna assume that if whatever you host is available without any authentication, protection or similar, you're fine with me viewing that. I'm not saying you should be fine with 1000s of requests per second, but since you made it public in the first place by sharing it, you kind of implicitly agreed for others to view it.

kiitos•4mo ago
doing an HTTP GET to your server is my request to access some content your server serves. that's my right as a client. and it is your server's responsibility to determine whether or not to respond to my request. that's your server's right. said another way, "access" is the responsibility of the server, not the client.
simianparrot•4mo ago
Technical pedantry aside, that's what I mean. And I choose to not respond to your request with my content if I don't think your client is acting in good faith -- i.e. is a bot or crawler that disrespects robots.txt, for example.
kiitos•4mo ago
sorry, yes, i think we are in agreement
beeflet•4mo ago
I think the problem is that despite the effort, you will still end up in the dataset. So it's futile
Vegenoid•4mo ago
I think it’s better viewed through a lens of effort. Implementing systems that try harder to not challenge humans takes more work than just throwing up a catch-all challenge wall.

The author’s goal is admirable: “My primary principle is that I’d rather not annoy real humans more than strictly intended”. However, the primary goal for many people hosting content will be “block bots and allow humans with minimal effort and tuning”.

nektro•4mo ago
it's sad we've gotten to the point where mitigations against this have to be such a consideration when hosting a site
arjie•4mo ago
They don't really have to be. I don't have many mitigations and the AI bots crawl my site and it's fine. The robots.txt is pretty simple too and is really just set up to help the robot not get stuck in loops (I use Mediawiki as the CMS and it has a lot of GET paths that a normal person wouldn't choose). In my case, a machine near my desk hosts everything and it's fine.
jrochkind1•4mo ago
I used to say that, but last year it stopped being true for me.
hyperman1•4mo ago
I've been wondering about how to make a challenge that AI won't do. Some possibilities (a rough sketch of the first one follows below):

* Type this sentence, taken from a famous copyrighted work.

* Type Tiananmen protests.

* Type this list of swear words or sexual organs.
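
For the first idea, a minimal sketch of such a challenge page, using only Python's standard library, could look like the following. The sentence, port, and cookie name are placeholders; a real deployment would sit in front of the actual site and set something the backend can verify (a signed cookie, a session, etc.).

    # Minimal sketch of a "type this sentence" challenge, stdlib only.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    CHALLENGE = "please type this sentence exactly as written"  # placeholder text

    FORM = f"""<!doctype html>
    <p>To continue, type the following sentence:</p>
    <blockquote>{CHALLENGE}</blockquote>
    <form method="post"><input name="answer" size="60" autofocus>
    <button>Submit</button></form>"""

    class ChallengeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self._respond(200, FORM)

        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            fields = parse_qs(self.rfile.read(length).decode())
            answer = fields.get("answer", [""])[0].strip().lower()
            if answer == CHALLENGE:
                # Hypothetical pass marker; a real setup would sign this.
                self._respond(200, "<p>Thanks, human.</p>",
                              extra=("Set-Cookie", "challenge_passed=1; Path=/"))
            else:
                self._respond(403, FORM)

        def _respond(self, status, body, extra=None):
            self.send_response(status)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            if extra:
                self.send_header(*extra)
            self.end_headers()
            self.wfile.write(body.encode())

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), ChallengeHandler).serve_forever()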

dweinus•4mo ago
> Type this list of swear words

1998: I swear at the computer until the page loads

2025: I swear at the computer until the page loads

seabass-labrax•4mo ago
Unfortunately for your proposal, the crawlers that gather LLM training data don't have the same censorship as the AI chatbots do when communicating with the end user. The censorship of chatbots is done either by means of fine-tuning (a technique which is part of the broader category of 'alignment' processes) or by having a separate model (which may or may not be an LLM) filter the output. Both are applied after the base LLM has been trained on the crawled text - fine-tuning as a later training stage, output filtering at runtime - and most of the crawling comes before either, to gather that training data.

All that's to say that you can stop some of your website contents being quoted by the chatbots verbatim, but you can't prevent the crawlers using up all your bandwidth in the way you describe. You also can't stop your website contents being rehashed in a conceptual way by the chatbot later. So if I just write something copyrighted or taboo here in this comment, that won't stop an LLM being trained on the comment as a whole, but it might stop the chatbot based on that LLM from quoting it directly.

Everything is moving so quickly with AI that my comment is probably out of date the moment I type it... take it with a grain of salt :)

userbinator•4mo ago
Ask it how many letters are in certain words.
michaeljx•4mo ago
For some reason I thought this would be about dealing with very large insects
rapsacnz•4mo ago
Me too
felipeerias•4mo ago
There was another headline about “top models” getting in trouble for “history leaks” that would have been very confusing to 15 year old me.
kiitos•4mo ago
what a just totally bizarre perspective

all of the stuff that's being complained-about is absolute 100% table-stakes stuff that every http server on the public internet has needed to deal with since, man, i dunno, minimum 15 years now?

as a result literally nobody self-hosts their own HTTP content any more, unless they enjoy the challenge in like a problem-solving sense

if you are even self-hosting some kind of captcha system you've already made a mistake, but apparently this guy is not just hosting but building a bespoke one? which is like, my dude, respect, but this is light years off of the beaten path

the author whinges about google not doing their own internal rate limiting in some presumed distributed system, before any node in that system makes any http request over the open internet. that's fair and not doing so is maybe bad user behavior but on the open internet it's the responsibility of the server to protect itself as it needs to, it's not the other way around

everything this dude is yelling about is immediately solved by hosting thru a hosting provider, like everyone else does, and has done, since like 2005
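
For what it's worth, the kind of self-protection being described can start as small as a per-client token bucket. The sketch below is a minimal illustration with made-up limits, not a production rate limiter (no eviction of idle clients, no locking, no proxy-header handling).

    import time
    from collections import defaultdict

    # Minimal per-client token bucket: RATE tokens refill per second, up to
    # BURST. The numbers are arbitrary; real limits depend on how much work
    # the backend can actually do per request.
    RATE, BURST = 2.0, 10.0

    _buckets = defaultdict(lambda: (BURST, time.monotonic()))

    def allow(client_ip: str) -> bool:
        """True if this request fits the client's budget; a caller would
        typically answer False with 429 Too Many Requests."""
        tokens, last = _buckets[client_ip]
        now = time.monotonic()
        tokens = min(BURST, tokens + (now - last) * RATE)
        allowed = tokens >= 1.0
        _buckets[client_ip] = (tokens - 1.0 if allowed else tokens, now)
        return allowed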

jrochkind1•4mo ago
Google isn't mentioned in OP, did we read the same article?
kiitos•4mo ago
i was responding to something else, 100% my bad
nickpsecurity•4mo ago
I made my pages static HTML with no images, used a fast server, and BunnyCDN (see profile domain). Ten thousand hits a day from bots costs a penny or something. When I'm using images, I link to image hosting sites. It might get more challenging if I try to squeeze meme images in between every other paragraph to make my sites more beautiful.

As far as Ted's article goes, the first thing that popped into my head is that most AI crawlers hitting my sites are in big datacenter cities: Dallas, Dublin, etc. I wonder if I could easily geo-block those cities or redirect them to pages with more checks built in. I just haven't looked into that on my CDNs or in general in a long time.
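
As a rough sketch of that idea: a proper city-level geo-block needs a geo-IP database, but a stdlib-only approximation is to check clients against known datacenter CIDR ranges (published by cloud providers, or collected from your own logs). The ranges below are reserved documentation ranges, used here purely as placeholders.

    import ipaddress

    # Placeholder datacenter ranges; swap in real provider-published CIDRs
    # or ranges observed in your own logs.
    DATACENTER_RANGES = [
        ipaddress.ip_network("198.51.100.0/24"),
        ipaddress.ip_network("203.0.113.0/24"),
    ]

    def looks_like_datacenter(client_ip: str) -> bool:
        """True if the client falls in a range associated with crawlers,
        so it can be redirected to a page with extra checks."""
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in DATACENTER_RANGES)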

They also usually request files from popular PHP frameworks and other things like that. If you don't use PHP, you could autoban on the first request for a PHP page. Likewise for anything else you don't need.

Of the two, looking for .php is probably lightning quick with low CPU/RAM utilization in comparison.
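
As an illustration of that autoban idea, the sketch below scans an access log for clients that probed *.php URLs on a site that serves no PHP. The log format and the file path argument are assumptions; feeding the result into whatever ban mechanism is already in place (firewall rules, a CDN edge rule, a deny list) is left to taste.

    import re
    import sys

    # Assumes a common/combined-style access log where the client address is
    # the first field and the request line is quoted; adjust for your server.
    LOG_LINE = re.compile(r'^(\S+) .*?"[A-Z]+ ([^" ]+)')

    def php_probers(log_path: str) -> set[str]:
        """Collect client IPs that requested any *.php URL from a site that
        serves no PHP at all - candidates for an automatic ban."""
        offenders = set()
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                m = LOG_LINE.match(line)
                if m and ".php" in m.group(2).lower():
                    offenders.add(m.group(1))
        return offenders

    if __name__ == "__main__":
        for ip in sorted(php_probers(sys.argv[1])):
            print(ip)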

kragen•4mo ago
This is exciting!