HTTP/1.1 connection reuse is indeed more subtle than it first appears. But HTTP/2 is so hard to get right.
Perhaps something like an "HTTP/2-Lite" profile is in order? A minimal profile with just one connection, no header compression, and so on.
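To make that concrete: a minimal sketch of what such a profile might look like, using the Python h2 library (the library choice and the specific settings values are my own guesses at such a profile, nothing standardized):

```python
# Hypothetical "HTTP/2-Lite" client setup: one connection, no server push,
# no HPACK dynamic table, one request in flight at a time.
from h2.config import H2Configuration
from h2.connection import H2Connection
from h2.settings import SettingCodes

config = H2Configuration(client_side=True, header_encoding="utf-8")
conn = H2Connection(config=config)
conn.initiate_connection()
conn.update_settings({
    SettingCodes.HEADER_TABLE_SIZE: 0,       # disable the HPACK dynamic table ("no compression")
    SettingCodes.ENABLE_PUSH: 0,             # no server push
    SettingCodes.MAX_CONCURRENT_STREAMS: 1,  # effectively one request at a time
})
# conn.data_to_send() now holds the connection preface plus the SETTINGS frame
# to write over a single TCP/TLS socket.
```

Static Huffman coding of header strings would still apply even with the dynamic table disabled, so a real profile would have to say whether that counts as "compression".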
Speaking more broadly, though, I would endorse your idea! That does sound useful.
Perhaps it isn't that easy, but it could be turned into something common and reused just about everywhere.
The underlying vulnerability, tracked as CVE-2025-8671, has been found to impact projects and organizations such as AMPHP, Apache Tomcat, the Eclipse Foundation, F5, Fastly, gRPC, Mozilla, Netty, Suse Linux, Varnish Software, Wind River, and Zephyr Project. Firefox is not affected.
[1] - https://www.securityweek.com/madeyoureset-http2-vulnerabilit...
I'll note that articles about HTTP/2 vulnerabilities have been posted here with some regularity: https://news.ycombinator.com/item?id=44909416
Is there a reason you don't find this persuasive?
It's quite possible the old issues are worse than the new ones, but it's not obvious that's the case.
It over-reaches with its argument for disallowing HTTP/1.1.
Parsers should be better.
Moving to another protocol won't solve the issue. The new parsers will be written by the same careless engineers, so the same companies will have the same issues or worse...
We just lose readability/debuggability/accessibility.
The post makes the case that HTTP/2 is systematically less vulnerable than HTTP/1 to the kinds of vulnerabilities it's talking about.
Sure, just the bugs in the link.
Content-Length + Transfer-Encoding together should be a bad request.
The RFC is also not respected: "Proxies/gateways MUST remove any transfer-coding prior to forwarding a message via a..."
Content-Lenght: \r\n7 is also a bad request.
Just those mean whoever wrote the parser didn't even bother to read the RFC...
No parsing failure checks either...
That kind of person will mess up HTTP/2 as well.
It's not a protocol issue if you can't even be bothered to read the spec.
> The post makes the case that HTTP/2 is systematically less vulnerable than HTTP/1 to the kinds of vulnerabilities it's talking about.
Fair enough, I disagree with that conclusion. I'm really curious what kind of bugs the engineers above would introduce with HTTP/2; that will be fun.
Maybe it should be, but it isn't. In fact, all revisions of the HTTP/1.1 RFC have made it clear that if both headers are present, the receiver must treat it as if the Content-Length header were not present, not as an error (a sketch of that rule follows the quotes below).
RFC 2616 "If the message does include a non-identity transfer-coding, the Content-Length MUST be ignored."
RFC 7230 "If a message is received with both a Transfer-Encoding and a Content-Length header field, the Transfer-Encoding overrides the Content-Length"
RFC 9112 "... Transfer-Encoding is defined as overriding Content-Length, as opposed to them being mutually incompatible."
> RFC is also not respected: "Proxies/gateways MUST remove any transfer-coding prior to forwarding a message via a"
That's a spec mistake in the original RFC, corrected in later revisions. It would be an absurd requirement: if the input is chunked, the output must be chunked as well. If the sender is sending gzip and the receiver accepts it, what is gained from the proxy decompressing the stream only to immediately recompress it?
> Content-Lenght: \r\n7 is also a bad request.
... I mean, yes, that would be a bad request. But the example in the article is "Content-Lenght: \r\n 7", which isn't invalid. It's a feature RFC 2616 defines as line folding. It was a bad idea and was deprecated in later revisions, but that just means a client should not send it. A server or proxy can either reject the message or undo the folding; it's just not allowed to pass the folded header through unmodified.
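For what it's worth, the two options RFC 7230 gives a recipient for that folding look roughly like this (regex and names are mine, purely illustrative):

```python
import re

# obs-fold: a CRLF followed by space/tab continues the previous header line.
OBS_FOLD = re.compile(rb"\r\n[ \t]+")

def handle_obs_fold(raw_headers: bytes, reject: bool = False) -> bytes:
    if OBS_FOLD.search(raw_headers):
        if reject:
            raise ValueError("400 Bad Request: obsolete line folding")
        # Replace each fold with a single space before interpreting values,
        # so the folded "...: \r\n 7" example above becomes one logical line.
        raw_headers = OBS_FOLD.sub(b" ", raw_headers)
    return raw_headers
```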
The irony that, near the foot of the article, it encourages people to "Avoid niche webservers" because "Apache and nginx are lower-risk" is quite strong. My publicfile logs show that most of the continual barrage of attacks a public WWW server like mine is subject to consists of query-parameter injection attempts and attacks quite evidently directed against WordPress, Apache, AWS, and these claimed "lower risk" softwares. (There was another lengthy probe to find out where WordPress was installed a couple of minutes ago, as I write this. Moreover, the attacker who has apparently sorted every potentially vulnerable PHP script into alphabetical order and just runs through them must be unwittingly helping security people, I would have thought. (-:)
Switching from my so-called "niche webserver", which does not have these mechanisms to be exploited, to Apache and nginx would be a major retrograde step. Not least because djbwares publicfile nowadays rejects HTTP/0.9 and HTTP/1.0 by default, and I would be going back to accepting them, were I foolish enough to take this paper's advice.
"Reject requests that have a body" might have been the one bit of applicable good advice that the paper has, back in October 1999. But then publicfile came along, in November, whose manual has from the start pointed out (https://cr.yp.to/publicfile/httpd.html) that publicfile httpd rejects requests that have content lengths or transfer encodings. It's a quarter of a century late to be handing out that advice as if it were a new security idea.
And the whole idea that this is "niche webservers" is a bit suspect. I publish a consolidated djbwares that incorporates publicfile. But the world has quite a few other cut down versions (dropping ftpd being a popular choice), homages that are "inspired by publicfile" but not written in C, and outright repackagings of the still-available original. It's perhaps not as niche as one might believe by only looking at a single variant.
I might be in the vanguard in the publicfile universe of making HTTP/0.9 and HTTP/1.0 unavailable in the default configuration, although there is a very quiet avalanche of that happening elsewhere. I'm certainly not persuaded by this paper, though, resting as it does entirely upon a worldview that publicfile directly demonstrates is not a universal truth, to consider that I need do anything at all about HTTP/1.1. I have no back-end servers, no reverse proxies, no CGI, no PHP, no WordPress, no acceptance of requests with bodies, and no vulnerability to these "desync" problems that are purportedly the reason that I should switch over to the monoculture and then switch again because the monoculture "must die".
superkuh•5mo ago
Yes, the corporations and institutions and their economic transactions must be the highest and only priority. I hear that a lot from commercial people with commercial blinders on.
They simply cannot see beyond their context and realize that the web over HTTP/1.1 is used by human people who don't have the same use cases or incredibly stringent identity-verification needs. Human use cases don't matter to them because they are not profitable.
Also, this "attack" only works on commercial style complex CDN setups. It wouldn't effect human hosted webservers at all. So yeah, commercial companies, abandon HTTP, go to your HTTP/3 with all it's UDP only and CA TLS only and no self signing and no clear text. And leave the actual web on HTTP/1.1 HTTP+HTTPS alone.
cyberax•5mo ago
All you need is a faulty caching proxy in front of your PHP server. Or maybe that nice anti-bot protection layer.
It really, really is easy to get bitten by this.
jsnell•5mo ago
> Note that disabling HTTP/1 between the browser and the front-end is not required
GuB-42•5mo ago
Let's get real, online security is mostly a commercial thing. Why do you think Google pushed so hard for HTTPS? Do you really think it is to protect your political opinions? No one cares about them, but a lot of people care about your credit card.
That's something I disagree about with the people who made Gemini, a "small web" protocol for people who want to escape the modern web with its ads, tracking, and bloat. They made TLS a requirement. Personally, I would have banned encryption. There is a cost, but it is a good way to keep commercial activity out.
I am not saying that the commercial web is bad (it may be the best thing that has happened in the 21st century so far), but if you want to escape from it for a bit, I'd say plain HTTP is the way to go.
Note: of course, if you need encryption and security in general for non-commercial reasons, use them, and be glad the commercial web helped you with that.