
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
426•klaussilveira•5h ago•97 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
21•mfiguiere•42m ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
775•xnx•11h ago•472 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
142•isitcontent•6h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
135•dmpetrov•6h ago•57 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
41•quibono•4d ago•3 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
246•vecti•8h ago•117 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
70•jnord•3d ago•4 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
180•eljojo•8h ago•124 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
314•aktau•12h ago•154 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
12•matheusalmeida•1d ago•0 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
311•ostacke•12h ago•85 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
397•todsacerdoti•13h ago•217 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
322•lstoll•12h ago•233 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
12•kmm•4d ago•0 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
48•phreda4•5h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
109•vmatsiiako•11h ago•34 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
186•i5heu•8h ago•129 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
236•surprisetalk•3d ago•31 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
976•cdrnsf•15h ago•415 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
144•limoce•3d ago•79 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
17•gfortaine•3h ago•2 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
49•ray__•2h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
41•rescrv•13h ago•17 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
35•lebovic•1d ago•11 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
52•SerCe•2h ago•42 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
77•antves•1d ago•57 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
18•MarlonPro•3d ago•4 comments

Claude Composer

https://www.josh.ing/blog/claude-composer
108•coloneltcb•2d ago•71 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
39•nwparker•1d ago•10 comments

HTTP/1.1 must die: the desync endgame

https://portswigger.net/research/http1-must-die
42•sprawl_•5mo ago

Comments

superkuh•5mo ago
> If we want a secure web, HTTP/1.1 must die.

Yes, the corporations and institutions and their economic transactions must be the highest and only priority. I hear that a lot from commercial people with commercial blinders on.

They simply cannot see beyond their context and realize that the web over HTTP/1.1 is used by human people who don't have the same use cases or incredibly stringent identity-verification needs. Human use cases don't matter to them because they are not profitable.

Also, this "attack" only works on commercial style complex CDN setups. It wouldn't effect human hosted webservers at all. So yeah, commercial companies, abandon HTTP, go to your HTTP/3 with all it's UDP only and CA TLS only and no self signing and no clear text. And leave the actual web on HTTP/1.1 HTTP+HTTPS alone.

cyberax•5mo ago
> Also, this "attack" only works on commercial style complex CDN setups. It wouldn't effect human hosted webservers at all.

All you need is a faulty caching proxy in front of your PHP server. Or maybe that nice anti-bot protection layer.

It really, really is easy to get bitten by this.
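
For concreteness, here is a minimal sketch of the classic CL.TE desync shape from the request-smuggling literature (the host name is a placeholder, and the payload is illustrative, not any specific exploit): a front-end that trusts Content-Length forwards everything below as one request, while a back-end that honors Transfer-Encoding sees the chunked body end at the zero-length chunk and treats the trailing bytes as the start of the next request on the reused connection.

    # Hypothetical CL.TE payload shape, for illustration only.
    payload = (
        b"POST / HTTP/1.1\r\n"
        b"Host: example.com\r\n"          # placeholder host
        b"Content-Length: 13\r\n"         # CL view: the 13 bytes below are the body
        b"Transfer-Encoding: chunked\r\n" # TE view: the body ends at the 0 chunk
        b"\r\n"
        b"0\r\n"
        b"\r\n"
        b"SMUGGLED"  # a TE back-end reads this as the start of the *next* request
    )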

jsnell•5mo ago
The author is only arguing against HTTP/1.1 for use between proxies and backends. Explicitly so:

> Note that disabling HTTP/1 between the browser and the front-end is not required

layer8•5mo ago
It requires rather careful reading to understand that. Most of the site sounds like they want to eliminate HTTP/1.1 wholesale.
plorkyeran•5mo ago
The fact that this is a footnote at the end of a long article is a rather significant problem with the article.
GuB-42•5mo ago
Yes!

Let's get real, online security is mostly a commercial thing. Why do you think Google pushed so hard for HTTPS? Do you really think it is to protect your political opinions? No one cares about them, but a lot of people care about your credit card.

That's something I disagree about with the people who made Gemini, a "small web" protocol for people who want to escape the modern web with its ads, tracking and bloat. They made TLS a requirement. Personally, I would have banned encryption. There is a cost, but it is a good way to keep commercial activity out.

I am not saying that the commercial web is bad, it may be the best thing that happened in the 21st century so far, but if you want to escape from it for a bit, I'd say plain HTTP is the way to go.

Note: of course, if you need encryption and security in general for non-commercial reasons, use them, and be grateful to the commercial web for helping you with that.

spenczar5•5mo ago
I don't know, arguing that http/2 is safer overall is a... bold claim. It is sufficiently complex that there is no standard implementation in the Python standard library, and even third-party library support is all over the place. requests doesn't support it; httpx has experimental, partial, pre-1.0 support. Python http/2 servers are virtually nonexistent. And it's not just Python - I remember battling memory leaks, catastrophic deadlocks, and more in the grpc-go implementation of http/2, in its early days.

HTTP 1.1 connection reuse is indeed more subtle than it first appears. But http/2 is so hard to get right.
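
For what it's worth, the httpx support mentioned above is usable on the client side today. A minimal sketch, assuming the optional h2 dependency is installed (pip install 'httpx[http2]'); the URL is a placeholder:

    import httpx

    # http2=True enables HTTP/2 negotiation via ALPN; the client falls back
    # to HTTP/1.1 if the server does not offer h2.
    with httpx.Client(http2=True) as client:
        resp = client.get("https://example.org/")
        # http_version reports the protocol actually negotiated,
        # e.g. "HTTP/2" or "HTTP/1.1".
        print(resp.http_version, resp.status_code)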

jcdentonn•5mo ago
Not sure about servers, but we have had http/2 clients in Java for a very long time.
cyberax•5mo ago
An HTTP/2 client is pretty easy to implement. Built-in framing removes a lot of complexity, and if you don't need multiple streams, you can simplify the overall state machine.

Perhaps something like an "HTTP/2-Lite" profile is in order? A minimal profile with just one connection, no compression, and so on.
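
To illustrate the built-in framing point: every HTTP/2 frame starts with a fixed 9-octet header (RFC 7540, section 4.1), so message boundaries are explicit binary fields rather than something inferred from Content-Length, chunked encoding, and header parsing. A minimal sketch (the function name is illustrative):

    def parse_frame_header(buf: bytes):
        """Parse the fixed 9-octet HTTP/2 frame header (RFC 7540, section 4.1)."""
        if len(buf) < 9:
            raise ValueError("need at least 9 bytes")
        length = int.from_bytes(buf[0:3], "big")    # 24-bit payload length
        frame_type = buf[3]                         # 8-bit frame type
        flags = buf[4]                              # 8-bit flags
        stream_id = int.from_bytes(buf[5:9], "big") & 0x7FFFFFFF  # drop reserved bit
        return length, frame_type, flags, stream_id

    # Example: the header of an empty SETTINGS frame (type 0x4) on stream 0.
    print(parse_frame_header(b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"))
    # -> (0, 4, 0, 0)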

spenczar5•5mo ago
Isn't the original post about servers? A minimal client doesn't help with server security.

I would endorse your idea, though, speaking more broadly! That does sound useful.

jiehong•5mo ago
nghttp2 is a C library that can be used to build a server in many cases. Rust has the h2 crate.

Perhaps it isn't that easy, but such a library could be made a common component and used nearly everywhere.

ameliaquining•5mo ago
These sound to me like they are mostly problems with protocol maturity rather than with its fundamental design. If hypothetically the whole world decided to move to HTTP/2, there'd be bumps in the road, but eventually at steady state there'd be a number of battle-tested implementations available with the defect rates you'd expect of mature widely used open-source protocol implementations. And programming language standard libraries, etc., would include bindings to them.
Bender•5mo ago
Speaking of http/2 [1] - August 14, 2025

The underlying vulnerability, tracked as CVE-2025-8671, has been found to impact projects and organizations such as AMPHP, Apache Tomcat, the Eclipse Foundation, F5, Fastly, gRPC, Mozilla, Netty, Suse Linux, Varnish Software, Wind River, and Zephyr Project. Firefox is not affected.

[1] - https://www.securityweek.com/madeyoureset-http2-vulnerabilit...

yencabulator•5mo ago
Protocol smuggling is a lot more severe than DoS.
nayuki•5mo ago
Discussed a few weeks ago: https://http1mustdie.com/ , https://news.ycombinator.com/from?site=http1mustdie.com
nitwit005•5mo ago
It seems they're very familiar with some HTTP/1.1 problems. I suspect they'd feel less confident in HTTP/2 if they spent some time with it.

I'll note that articles about HTTP/2 vulnerabilities have been posted here with some regularity: https://news.ycombinator.com/item?id=44909416

ameliaquining•5mo ago
The section "How secure is HTTP/2 compared to HTTP/1?" (https://portswigger.net/research/http1-must-die#how-secure-i...) responds to this. In short, there's an entire known class of vulnerabilities that affects HTTP/1 but not HTTP/2, and it's not feasible for HTTP/1 to close the entire vulnerability class (rather than playing whack-a-mole with bugs in individual implementations) because of backwards compatibility. The reverse isn't true; most known HTTP/2 vulnerabilities have been the kind of thing that could also have happened to HTTP/1.

Is there a reason you don't find this persuasive?

nitwit005•5mo ago
The new features/behaviors in the new protocol inherently create new classes of vulnerabilities. The link above relates to an issue with RST_STREAM frames. You can't have issues with frames if you lack frames.

It's quite possible the old issues are worse than the new ones, but it's not obvious that's the case.

mittensc•5mo ago
The article is a nice read on request smuggling.

It overreaches with its argument for disallowing HTTP/1.1.

Parsers should be better.

Moving to another protocol won't solve the issue. The new implementations will be written by the same careless engineers, so the same companies will have the same issues or worse...

We just lose readability/debuggability/accessibility.

ameliaquining•5mo ago
It's not correct to attribute all bugs to carelessness, and therefore assume that engineer conscientiousness is the only criterion affecting defect rates. Some software architectures, protocol designs, programming languages, etc., are less prone than others to certain kinds of implementation bugs, by leaving less room in the state space for them to hide undetected. Engineers of any skill level will produce far more defects if they write in assembly, than if they write the same code in a modern language with good static analysis and strong runtime-enforced guarantees. Likewise for other foundational decisions affecting how to write a program.

The post makes the case that HTTP/2 is systematically less vulnerable than HTTP/1 to the kinds of vulnerabilities it's talking about.

mittensc•5mo ago
> It's not correct to attribute all bugs to carelessness

Sure, just the bugs in the link.

Content-Length+Transfer-Encoding should be bad request.

The RFC is also not respected: "Proxies/gateways MUST remove any transfer-coding prior to forwarding a message via a"

Content-Length: \r\n7 is also a bad request.

Just those mean whoever wrote the parser didn't even bother to read the RFC...

No parsing failure checks either...

That kind of person will mess up HTTP/2 as well.

It's not a protocol issue if you can't even be bothered to read the spec.

> The post makes the case that HTTP/2 is systematically less vulnerable than HTTP/1 to the kinds of vulnerabilities it's talking about.

Fair enough, I disagree with that conclusion. I'm really curious what kinds of bugs the engineers above would introduce with HTTP/2; it will be fun.

adgjlsfhk1•5mo ago
I think the main point is that these sorts of parsing mistakes shouldn't be so easily exploitable. The problem is that the length is non-trivial to parse, so if you mess up parsing it, that escalates the severity of a ton of other bugs.
ameliaquining•5mo ago
At least in the Cloudflare case, if you look at the postmortem (https://blog.cloudflare.com/resolving-a-request-smuggling-vu...) and the commit that fixed the bug (https://github.com/cloudflare/pingora/commit/fda3317ec822678...), it's significantly more complicated than "they didn't read the RFC", and a conclusion that a diligent engineer would never ever make this kind of mistake does not seem justified.
jsnell•5mo ago
> Content-Length+Transfer-Encoding should be bad request.

Maybe it should be, but it isn't. In fact, all revisions of the HTTP/1.1 RFC have made it clear that if both headers are present, the receiver must treat it as if the Content-Length header were not present. Not as an error.

RFC 2616 "If the message does include a non-identity transfer-coding, the Content-Length MUST be ignored."

RFC 7230 "If a message is received with both a Transfer-Encoding and a Content-Length header field, the Transfer-Encoding overrides the Content-Length"

RFC 9112 "... Transfer-Encoding is defined as overriding Content-Length, as opposed to them being mutually incompatible."

> The RFC is also not respected: "Proxies/gateways MUST remove any transfer-coding prior to forwarding a message via a"

That's a spec mistake in the original RFC, corrected in later revisions. It would be an absurd requirement: if the input is chunked, the output must be chunked as well. If the sender is sending gzip and the receiver accepts it, what is gained from the proxy decompressing the stream only to immediately compress it?

> Content-Length: \r\n7 is also a bad request.

... I mean, yes, it would be a bad request. But the example in the article is "Content-Length: \r\n 7" which isn't invalid. It's a feature defined in RFC 2616 as folding. It was a bad idea and was deprecated in later revisions, but that just means that a client should not send it. A server or proxy can either reject the message or undo the folding; they're just not allowed to pass it through unmodified.
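
A toy illustration (not any particular server's code; the function names are made up) of why folding is dangerous in a chain: an RFC 2616-style parser unfolds the continuation line and reads a length of 7, while a naive line-at-a-time parser sees an empty Content-Length, and two such components in sequence will disagree about where the body ends.

    import re

    RAW = b"Content-Length:\r\n 7\r\n"  # obs-fold: the continuation line starts with a space

    def parse_unfolding(raw: bytes) -> dict:
        # RFC 2616-style: a line starting with SP/HTAB continues the previous
        # header line, so the value becomes "7".
        text = re.sub(r"\r\n[ \t]+", " ", raw.decode("ascii"))
        name, _, value = text.strip().partition(":")
        return {name.strip(): value.strip()}

    def parse_naive(raw: bytes) -> dict:
        # Line-at-a-time parsing with no obs-fold handling: Content-Length
        # appears empty, and the " 7" line is silently dropped.
        headers = {}
        for line in raw.decode("ascii").split("\r\n"):
            if ":" in line:
                name, _, value = line.partition(":")
                headers[name.strip()] = value.strip()
        return headers

    print(parse_unfolding(RAW))  # {'Content-Length': '7'}
    print(parse_naive(RAW))      # {'Content-Length': ''}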

JdeBP•5mo ago
My WWW site has been served up by publicfile for many years now, and reading through this I kept having the same reaction, over and over: the assumption that "websites often use reverse proxies" is upgraded in the rest of the article to "everyone always uses back-ends and proxies". It's as if there is a monocultural world of HTTP/1.1 WWW servers; and not only does the author discount everything else apart from the monoculture, xe even encourages increasing the monoculture as a survival tactic, only then to state that the monoculture must be killed.

The irony that near the foot of the article it encourages people to "Avoid niche webservers" because "Apache and nginx are lower-risk" is quite strong, given that my publicfile logs show that most of the continual barrage of attacks a public WWW server like mine is subject to are query parameter injection attempts, and attacks quite evidently directed against WordPress, Apache, AWS, and these claimed "lower risk" softwares. (There was another lengthy probe to find out where WordPress was installed a couple of minutes ago, as I write this. Moreover, the attacker who has apparently sorted every potentially vulnerable PHP script into alphabetical order and just runs through them must be unwittingly helping security people, I would have thought. (-:)

Switching from my so-called "niche webserver", which does not have these mechanisms to be exploited, to Apache and nginx would be a major retrograde step. Not least because djbwares publicfile nowadays rejects HTTP/0.9 and HTTP/1.0 by default, and I would be going back to accepting them, were I foolish enough to take this paper's advice.

"Reject requests that have a body" might have been the one bit of applicable good advice that the paper has, back in October 1999. But then publicfile came along, in November, whose manual has from the start pointed out (https://cr.yp.to/publicfile/httpd.html) that publicfile httpd rejects requests that have content lengths or transfer encodings. It's a quarter of a century late to be handing out that advice as if it were a new security idea.

And the whole idea that this is a "niche webserver" is a bit suspect. I publish a consolidated djbwares that incorporates publicfile. But the world has quite a few other cut-down versions (dropping ftpd being a popular choice), homages that are "inspired by publicfile" but not written in C, and outright repackagings of the still-available original. It's perhaps not as niche as one might believe by only looking at a single variant.

I might be in the vanguard in the publicfile universe of making HTTP/0.9 and HTTP/1.0 not available in the default configuration, although there is a very quiet avalanche of that happening elsewhere. I'm certainly not persuaded by this paper, though, based entirely upon a worldview, that publicfile is direct evidence of not being universal truth, to consider that I need do anything at all about HTTP/1.1. I have no back-end servers, no reverse proxies, no CGI, no PHP, no WordPress, no acceptance of requests with bodies, and no vulnerability to these "desync" problems that are purportedly the reason that I should switch over to the monoculture and then switch again because the monoculture "must die".