frontpage.

Leaving Meta and PyTorch

https://soumith.ch/blog/2025-11-06-leaving-meta-and-pytorch.md.html
236•saikatsg•3h ago•32 comments

A Fond Farewell

https://www.farmersalmanac.com/fond-farewell-from-farmers-almanac
216•erhuve•6h ago•68 comments

Lessons from Growing a Piracy Streaming Site

https://prison.josh.mn/lessons
70•zuhayeer•2h ago•12 comments

You should write an agent

https://fly.io/blog/everyone-write-an-agent/
664•tabletcorry•12h ago•285 comments

Two billion email addresses were exposed

https://www.troyhunt.com/2-billion-email-addresses-were-exposed-and-we-indexed-them-all-in-have-i...
456•esnard•13h ago•315 comments

Kimi K2 Thinking, a SOTA open-source trillion-parameter reasoning model

https://moonshotai.github.io/Kimi-K2/thinking.html
739•nekofneko•18h ago•308 comments

Game design is simple

https://www.raphkoster.com/2025/11/03/game-design-is-simple-actually/
285•vrnvu•11h ago•81 comments

Text case changes the size of QR codes

https://www.johndcook.com/blog/2025/10/31/smaller-qr-codes/
25•ibobev•5d ago•2 comments

Show HN: I scraped 3B Goodreads reviews to train a better recommendation model

https://book.sv
396•costco•1d ago•131 comments

A Note on Fil-C

https://graydon2.dreamwidth.org/320265.html
164•signa11•8h ago•73 comments

Photoroom (YC S20) Is Hiring a Senior AI Front End Engineer in Paris

https://jobs.ashbyhq.com/photoroom/7644fc7d-7840-406d-a1b1-b9d2d7ffa9b8
1•ea016•2h ago

We built a cloud GPU notebook that boots in seconds

https://modal.com/blog/notebooks-internals
32•birdculture•4d ago•5 comments

From web developer to database developer in 10 years

https://notes.eatonphil.com/2025-02-15-from-web-developer-to-database-developer-in-10-years.html
69•pmbanugo•3d ago•18 comments

JermCAD: Browser-Based CAD Software

https://github.com/jeremyaboyd/jerm-cad
19•azhenley•4h ago•9 comments

Cryptography 101 with Alfred Menezes

https://cryptography101.ca
33•nmadden•3d ago•3 comments

Analysis indicates that the universe’s expansion is not accelerating

https://ras.ac.uk/news-and-press/research-highlights/universes-expansion-now-slowing-not-speeding
169•chrka•12h ago•142 comments

Open Source Implementation of Apple's Private Compute Cloud

https://github.com/openpcc/openpcc
401•adam_gyroscope•1d ago•87 comments

HTML Slides with notes

https://nbd.neocities.org/slidepresentation/Slide%20presentation%20about%20slides
43•Curiositry•7h ago•10 comments

The Silent Scientist: When Software Research Fails to Reach Its Audience

https://cacm.acm.org/opinion/the-silent-scientist-when-software-research-fails-to-reach-its-audie...
3•mschnell•5d ago•0 comments

Time Immemorial turns 750: The Medieval law that froze history at 1189

https://www.ianvisits.co.uk/articles/time-immemorial-turns-750-the-medieval-law-that-froze-histor...
27•zeristor•8h ago•5 comments

Swift on FreeBSD Preview

https://forums.swift.org/t/swift-on-freebsd-preview/83064
205•glhaynes•15h ago•129 comments

A startup’s quest to store electricity in the ocean

https://techcrunch.com/2025/10/22/one-startups-quest-to-store-electricity-in-the-ocean/
6•rbanffy•41m ago•1 comment

LLMs encode how difficult problems are

https://arxiv.org/abs/2510.18147
140•stansApprentice•15h ago•28 comments

Eating stinging nettles

https://rachel.blog/2018/04/29/eating-stinging-nettles/
207•rzk•21h ago•191 comments

FBI tries to unmask owner of archive.is

https://www.heise.de/en/news/Archive-today-FBI-Demands-Data-from-Provider-Tucows-11066346.html
858•Projectiboga•17h ago•426 comments

A prvalue is not a temporary

https://blog.knatten.org/2025/10/31/a-prvalue-is-not-a-temporary/
27•ingve•6h ago•53 comments

The Geometry of Schemes [pdf]

https://webhomes.maths.ed.ac.uk/~v1ranick/papers/eisenbudharris.pdf
42•measurablefunc•6d ago•9 comments

I analyzed the lineups at the most popular nightclubs

https://dev.karltryggvason.com/how-i-analyzed-the-lineups-at-the-worlds-most-popular-nightclubs/
154•kalli•19h ago•73 comments

Mathematical exploration and discovery at scale

https://terrytao.wordpress.com/2025/11/05/mathematical-exploration-and-discovery-at-scale/
253•nabla9•1d ago•116 comments

Scientists find ways to boost memory in aging brains

https://news.vt.edu/articles/2025/10/cals-jarome-improving-memory.html
149•stevenjgarner•9h ago•42 comments

Forget IPs: using cryptography to verify bot and agent traffic

https://blog.cloudflare.com/web-bot-auth/
80•todsacerdoti•5mo ago

Comments

PaulHoule•5mo ago
There is a lot of talk about AI training being a driver of bot activity, but I think AI inference is also a driver, in two ways.

(1) It's always been easy to write bots [1] [2]. If you knew BeautifulSoup well, you could often write a scraper in 10 minutes. Now people ask ChatGPT to write the scraper for them and have one ready in 15 minutes, so they're discovering how easy it is, and that you don't have to limit yourself to public APIs, which are usually designed to limit access, not expand it.

(2) Instead of using content to train an AI you can feed it into an AI for inference. For instance, you can tell the AI to summarize pages or to extract specific facts from pages or to classify pages. It's increasingly possible to develop a workflow like: classify 30,000 RSS feed items, select 300 items that the user will probably find interesting, crawl those 300 pages looking for hyperlinks to scientific journal articles or other links that would be better to post, crawl those links to see if the journal articles are open access, weigh various factors to decide what's likely to be the best link, do specialized image extraction so I can make a good social post, etc. It's not too hard to do but it all comes falling down if the bot has to click on fire hydrants endlessly.

[1] Polite crawlers limit how many threads they have running against a single server. If you only have one thread per server, you are unlikely to overload it. If you want a crawler with a large thread count crawling a large number of servers, implementing this can be a hassle, particularly if you want to maximize performance or run a large distributed crawler. However, a lot of the time I do a crawling project that targets one site or five sites, or that crawls maybe 1,000 documents a day, and in those cases a single-threaded crawler is fine.

[2] For some reason, my management has always overestimated the work of building scrapers, I think because they've been burned by UI development, which is always underestimated. The fact that UI development is such a bitch actually helps with crawler development -- you might be afraid that the target site is going to change, but between the high cost of making changes and the fact that Google will trash your SEO if you change anything about your site, the target site won't change.
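To make footnote [1] concrete, here is a minimal sketch of that kind of quick, polite scraper, assuming requests and BeautifulSoup; the URL and User-Agent are made up for illustration:

```python
# A sketch of the "10-minute scraper" from (1) combined with the
# politeness rule from [1]: stay on one host, fetch sequentially
# (one "thread" per server), and sleep between requests.
import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def scrape(start_url: str, max_pages: int = 100, delay: float = 1.0) -> None:
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        resp = requests.get(url, timeout=10,
                            headers={"User-Agent": "example-crawler/0.1"})
        soup = BeautifulSoup(resp.text, "html.parser")
        print(soup.title.string if soup.title else url)
        for a in soup.select("a[href]"):
            link = urljoin(url, a["href"])
            if link.startswith(start_url):   # stay on the target site
                queue.append(link)
        time.sleep(delay)                    # politeness: don't hammer the host

scrape("https://example.com/")
```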

showerst•5mo ago
Agreed on all points except [2]. I run many scrapers, and sites change _all the time_, often changing markup for seemingly random reasons. One government site I scrape flips its IDs and classes between camelCase and snake_case every couple of weeks; it makes me wonder if it's a developer pulling a fast one on the client.
dboreham•5mo ago
Hearing this makes me suspect some tool auto-generates the IDs, and its config is getting changed every couple of weeks by some spaces-vs-tabs battle between devs.
superkuh•5mo ago
I do not think that more in-house, Cloudflare-only "standards", open-washed through their IETF employees, are the way to go; both of them raise the friction of participating in the web even higher for actual humans. Especially setups which again rely on centralized CAs and have tiny expiring lifetimes. It seems like pretty soon there will only be one or two browsers that can even hope to access sites behind Cloudflare's infrastructure. They might as well start releasing their own browser, and the transformation into AOL will be complete.
ecb_penguin•5mo ago
> I do not think that more in-house cloudflare-only "standards" open washed through their IETF employees

As someone with multiple RFCs: this is the way it's always been done. Industry has a problem, there's some collaboration with other industry players or academia, and someone submits a draft RFC. People are free to adopt it or not. Sometimes there are competing proposals that get accepted, and sometimes the topic dies entirely.

> both of which raise the friction to participation in the web even higher for actual humans

Absolutely nothing wrong with this, as it's site owners that make the decision for their own sites. Yep, I do want some friction. The tradeoff saves me a ton of money. Heck, I could block most ASNs and email domains and still keep 99% of my customers.

> Seems like pretty soon there'll only be one or two browsers which can even hope to access sites behind cloudflare's infrastructure

This proposal is about bots identifying themselves through open HTTP headers.
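For a feel of what header-based identification looks like, here is a rough bot-side sketch, loosely modeled on the RFC 9421 HTTP Message Signatures scheme the Cloudflare post builds on. The real signature-base serialization is more involved; treat the format below as illustrative only:

```python
# Sketch: sign one request component with Ed25519 and attach the result
# as headers a verifier could check against a published public key.
import base64
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_request(private_key: Ed25519PrivateKey, authority: str, key_id: str) -> dict:
    created = int(time.time())
    params = f'("@authority");created={created};keyid="{key_id}";alg="ed25519"'
    # Simplified signature base: the covered component plus the parameters.
    base = f'"@authority": {authority}\n"@signature-params": {params}'
    signature = base64.b64encode(private_key.sign(base.encode())).decode()
    return {
        "Signature-Input": f"sig1={params}",
        "Signature": f"sig1=:{signature}:",
    }

headers = sign_request(Ed25519PrivateKey.generate(), "example.com", "my-bot-key")
print(headers)
```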

superkuh•5mo ago
>This proposal is about bots identifying themselves through open HTTP headers.

The problem is that, to CF, everything that isn't Chrome is a bot (only a slight exaggeration). So browsers that aren't made by large corporations wouldn't have this. It's like how CF uses CORS.

CORS isn't only CF, but it's an example of their requiring obscure things no one else really uses, and using them in weird ways that cause most browsers to fail. The HTTP header CA signing is yet another of these things, and the weird modifications of TLS flags fall right in there too. It's basically proof-of-Chrome via a Gish gallop of new "standards" they come up with.

>Absolutely nothing wrong with this, as it's site owners that make the decision for their own sites.

I agree. It's their choice. I am just laying out the consequences of these mostly uninformed choices. Site owners won't be aware at first that they're blocking a large number of their actual human visitors. I've seen it play out again and again with sites and CF. Eventually the sites are doing so much work maintaining their whitelists of UAs and IPs that one wonders why they use CF at all if they're doing the job themselves.

And that's not even getting started on the bad and aggressive defaults for CF free accounts. In the last month or two they have slightly improved this, so there's some hope. They know they are a problem because they're so big:

"It was a decision I could make because I’m the CEO of a major Internet infrastructure company." ... "Literally, I woke up in a bad mood and decided someone shouldn't be allowed on the Internet. No one should have that power." - Cloudflare CEO Matthew Prince

(PS: You made some good and valid points re: the IETF process status quo, personal choice, etc. It's not me doing the downvotes.)

Sophira•5mo ago
There's another problem here that I haven't seen anyone talking about, and that's the futility of trying to distinguish between "good bots" and "bad bots".

The idea of Anubis is to stop bots that gather data for AI purposes. But you know who has a really big AI right now? Google. And you know who has the most bots indexing the web for their search engine? Yup, Google.

All these discussions have been assuming that Googlebot is a "good bot", but what exactly is stopping Google from using the data from Googlebot to feed Gemini? After all, nobody's going to block Googlebot, for obvious reasons.

At most, surely the only thing that blocking AI bots will do is stop locally-running bots, or stop OpenAI (because they don't have any other legitimate reason to be running bots over the web).

nubinetwork•5mo ago
Using IPs requires next to no CPU power... if I have to start picking apart HTTP requests and running algorithms on the traffic, I might as well not even run websites, including personal ones.
ecb_penguin•5mo ago
This already happens with TLS, JWT verification, etc.
molticrystal•5mo ago
You are right that IP checks are lightweight, though you miss that setting up TCP/IP handshakes is algorithm-heavy too; it just feels transparent because hardware and kernel optimizations keep it light on the CPU. TLS, with its certificate checks, key exchanges, and that whole negotiation, is CPU-heavy, especially on servers. Most of that asymmetric crypto, like verifying certificates, isn't helped much by hardware accelerators like AES-NI, which mainly speed up session encryption. TLS is already tons of work, so HTTP Message Signatures and mTLS are like piling more hay on the stack: extra work, but you're already doing a lot at that point.

The real complaint should be about having to adopt yet another standard, and about whether it will discriminate against applications like legacy RSS readers, since they're considered a type of bot.

kbolino•5mo ago
IP bans are usually enforced before the TCP handshake proceeds: server receives SYN packet, checks source address against blocklist, and if blocked then drops it before proceeding any further in the TCP state diagram.
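The lookup itself is trivial either way. Here is a sketch of the range check using Python's ipaddress module, with made-up blocklist entries; as the comment says, real deployments do this per-SYN in the kernel (iptables/nftables/eBPF), not in userspace:

```python
# Sketch: CIDR blocklist membership check. In production this logic
# lives in a firewall rule evaluated before the TCP handshake proceeds.
from ipaddress import ip_address, ip_network

BLOCKLIST = [ip_network("203.0.113.0/24"), ip_network("198.51.100.7/32")]

def is_blocked(src: str) -> bool:
    addr = ip_address(src)
    return any(addr in net for net in BLOCKLIST)

assert is_blocked("203.0.113.42")
assert not is_blocked("192.0.2.1")
```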
probably_wrong•5mo ago
Wasn't that the argument against HTTPS, namely that it was too costly to run [1]? I also run fail2ban [2] on my servers and I rarely even notice it's there.

I'm not saying you should sit down with the iptables manual and start going through the logs, but I can see the idea taking off if all it takes is (say) one apt-get and two config lines.

[1] https://stackoverflow.com/questions/1035283/will-it-ever-be-...

[2] https://github.com/fail2ban/fail2ban
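For a sense of how little machinery the fail2ban idea needs, here is a toy sketch: watch a log for failed logins and ban any IP that fails too often within a window. The regex, window, and threshold are invented; the real tool drives the firewall through per-jail filters and actions:

```python
# Toy fail2ban-style loop: count recent failures per IP, ban on threshold.
import re
import time
from collections import defaultdict

WINDOW_SECONDS = 600          # look-back window
MAX_FAILS = 5                 # failures allowed inside the window
FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

failures = defaultdict(list)  # ip -> recent failure timestamps
banned = set()

def observe(line: str) -> None:
    match = FAILED.search(line)
    if not match:
        return
    ip, now = match.group(1), time.time()
    # keep only failures still inside the window, then record this one
    failures[ip] = [t for t in failures[ip] if now - t < WINDOW_SECONDS] + [now]
    if len(failures[ip]) >= MAX_FAILS and ip not in banned:
        banned.add(ip)        # fail2ban would insert a firewall rule here
        print(f"banning {ip}")
```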

elithrar•5mo ago
IPs as identifiers aren’t great: in a world of both CGNAT (more shared IPs) and a ton of sketchy residential proxies, they’ve become poor proxies for identity of a “thing”.
kbolino•5mo ago
IPs are slowly getting worse as identifiers over time, but IP and IP range bans are like port-shifting SSH: you can often get a lot of defense against low-effort attacks for similarly low amounts of effort.
senectus1•5mo ago
This clever bunny did something very similar (but self-hosted):

https://xeiaso.net/blog/2025/anubis/

I love the approach. If I could be arsed to blog, I'd probably set it up myself.

mshockwave•5mo ago
I might be wrong, but it seems like Anubis asks the _client_ to solve cryptographic challenges, while the approach Cloudflare describes here asks the server to verify a (cryptographic) signature?
senectus1•5mo ago
The client that's scraping, yes. It shifts the load to the offending service. It's not a big issue if you're a human, but if you're an AI scraping bot it's a load of heavy resource use.
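In outline, that asymmetry looks like the hashcash-style sketch below (parameters invented, not Anubis's actual scheme): the client burns CPU searching for a nonce, while the server verifies with a single hash:

```python
# Sketch of a client-side proof-of-work challenge: find a nonce whose
# SHA-256 digest has `difficulty` leading zero bits.
import hashlib
import os

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: bytes, difficulty: int) -> int:
    nonce = 0
    while True:  # expensive for the client: ~2**difficulty hashes on average
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    # cheap for the server: a single hash
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

challenge = os.urandom(16)
nonce = solve(challenge, difficulty=16)
assert verify(challenge, nonce, 16)
```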
ralferoo•5mo ago
Altcha is a similar thing: https://github.com/altcha-org/altcha

I recently implemented something very similar to its proof-of-work obfuscation (https://altcha.org/docs/obfuscation/) in my C++ REST backend and Flutter front end, and I use it for rate-limiting APIs that allow creating a new account or sending sign-up e-mails.

I have an authentication token that's wrapped with AES-GCM using a random IV; the client is given the key, an IV stem, and a maximum count for the IV.
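That description maps onto roughly the following sketch (a reconstruction, assuming a 12-byte GCM nonce split into an 8-byte public stem and a 4-byte secret counter; the actual layout may differ): the client pays by trying counters until the GCM tag verifies, and max_count is the difficulty knob.

```python
# Sketch of AES-GCM-based proof-of-work obfuscation: the server hides the
# IV counter, the client brute-forces it to recover the wrapped token.
import os
import struct

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap(token: bytes, max_count: int):
    """Server side: encrypt the token under an IV the client must search for."""
    key = AESGCM.generate_key(bit_length=128)
    stem = os.urandom(8)                                        # public IV prefix
    counter = int.from_bytes(os.urandom(4), "big") % max_count  # kept secret
    iv = stem + struct.pack(">I", counter)
    ciphertext = AESGCM(key).encrypt(iv, token, None)
    return key, stem, max_count, ciphertext                     # counter NOT sent

def unwrap(key: bytes, stem: bytes, max_count: int, ciphertext: bytes) -> bytes:
    """Client side: try counters until the GCM tag verifies."""
    aead = AESGCM(key)
    for counter in range(max_count):
        try:
            return aead.decrypt(stem + struct.pack(">I", counter), ciphertext, None)
        except InvalidTag:
            continue                                            # wrong IV, keep going
    raise ValueError("exhausted IV space without a valid tag")

args = wrap(b"auth-token", max_count=1 << 16)   # ~32k decrypt attempts on average
print(unwrap(*args))
```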

lockhead•5mo ago
This would help detect legit bots, for sure, but as the origin you would still have the same issue as before: you still need to discern real users from all the malicious traffic. The number of "good" bots is far smaller, and thanks to their good behavior and transparent data they are much easier to identify even without this kind of thing. So to make real use of this, users would also need to do it, and then "privacy hell" would be too kind a name for it.
Sophira•5mo ago
Taking this to its logical extreme: if it ended up being used widely enough, governments could be tempted to mandate its use.
drtgh•5mo ago
It does not sound extreme, unfortunately. Meanwhile, malicious traffic would carry on from the very beginning with spoofed-and-so-on certs.
az09mugen•5mo ago
Totally agree; that's conceptually the same problem as robots.txt. As stated in https://www.robotstxt.org/faq/blockjustbad.html:

> But almost all bad robots ignore /robots.txt, making that pointless.

unsolved73•5mo ago
Interesting proposal.

The current situation is getting worse day after day because everybody wants to ScRaPe 4lL Th3 W38!!

Verifying an Ed25519 signature is almost free on modern CPUs. I just wonder why they went with an obscure RFC for HTTP signatures instead of using plain JSON Web Tokens in a header.

JWTs are universal. Parsing this custom format will certainly lead to a few interesting bugs.
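The "almost free" claim is easy to sanity-check with a micro-benchmark sketch against the pyca/cryptography bindings; exact numbers depend on the machine and OpenSSL build, but tens of thousands of verifications per second per core is the usual ballpark:

```python
# Sketch: time repeated Ed25519 verifications of a fixed message.
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
message = b'"@authority": example.com'      # stand-in for a signature base
signature = private_key.sign(message)

N = 10_000
start = time.perf_counter()
for _ in range(N):
    public_key.verify(signature, message)   # raises InvalidSignature on failure
elapsed = time.perf_counter() - start
print(f"{N / elapsed:,.0f} Ed25519 verifications/second")
```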

dboreham•5mo ago
The subtext surely is: "and we're going to charge for crawler traffic next".
ok123456•5mo ago
What about an SMTP proof-of-work extension? Smaller SMTP relays, which typically have a harder time getting mail delivered, could opt in to solving a problem to increase their chance of delivery. The difficulty of the problem could be inversely related to reputation.
ipdashc•5mo ago
https://en.wikipedia.org/wiki/Hashcash