On the one hand, looks like decent cleanup. (IIRC, engines in particular will not be missed).
On the other hand, breaking compatibility is always a tradeoff, and I still remember 3.x being... not universally loved.
From what I remember hearing, the move from 2 to 3 was hard.
But, *thousand-yard stare*, it was the version for the FIPS patches to 1.0.2.
According to this, one should not be using v3 at all.
For those not familiar: until OpenSSL 3.4.1, if you wanted to use OpenSSL and wanted to implement HTTP/3, which uses QUIC as the underlying protocol, you had to use their entire QUIC stack; you couldn't bring your own QUIC implementation and use OpenSSL only for the encryption parts.
QUIC, for those not familiar, is basically "what if we re-implemented TCP's functionality on top of UDP, but could throw out all the old legacy crap". Complicated but interesting, except that if OpenSSL's QUIC implementation didn't do what you wanted, or didn't do it well, you either had to put up with it or go use some other SSL library entirely. That meant that if you were using, e.g., curl built against OpenSSL, then curl also inherently had to use OpenSSL's QUIC implementation even if better ones were available.
Daniel Stenberg of curl wrote a great blog post about how bad and dumb that was, if anyone is interested: https://daniel.haxx.se/blog/2026/01/17/more-http-3-focus-one...
The HAProxy people wrote a very good blog post on the state of SSL stacks: https://www.haproxy.com/blog/state-of-ssl-stacks And the Python cryptography people wrote an even more damning indictment: https://cryptography.io/en/latest/statements/state-of-openss...
Here are some juicy quotes:
> With OpenSSL 3.0, an important goal was apparently to make the library much more dynamic, with a lot of previously constant elements (e.g., algorithm identifiers, etc.) becoming dynamic and having to be looked up in a list instead of being fixed at compile-time. Since the new design allows anyone to update that list at runtime, locks were placed everywhere when accessing the list to ensure consistency.
> After everything imaginable was done, the performance of OpenSSL 3.x remains highly inferior to that of OpenSSL 1.1.1. The ratio is hard to predict, as it depends heavily on the workload, but losses from 10% to 99% were reported.
> OpenSSL 3 started the process of substantially changing its APIs — it introduced OSSL_PARAM and has been using those for all new API surfaces (including those for post-quantum cryptographic algorithms). In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. This reduces performance, reduces compile-time verification, increases verbosity, and makes code less readable.
> In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing.
Ah yes, the ole' "fn(args: Map<String, Any>)" approach. Highly auditable, and Very Safe.

The software quality side of OpenSSL has, paradoxically, probably regressed since Heartbleed: there's a rough consensus that the design of OpenSSL 3.0 was a major step backwards, not least for performance, and more than one large project (most notably pyca/cryptography) is actively considering moving away from OpenSSL entirely as a result. Again: while security concerns might be an ancillary issue in those potential migrations, the core issue is just that OpenSSL sucks to work with now.
:)
capitol_•3h ago
bombcar•2h ago
arcfour•2h ago
bombcar•1h ago
altairprime•1h ago
https://developer.apple.com/documentation/security/sec_proto...
Presumably anyone besides Safari can opt-in to that testing today, but I wouldn’t ship it worldwide and expect nice outcomes until (I suspect) after this fall’s 27 releases. Maybe someone could PR the WebKit team to add that feature flag in the meantime?
kro•2h ago
But on a personal/single-website server, ECH does not really add privacy: adversaries can still observe the IP metadata and compare it with what's hosted there. The real benefits are on huge cloud hosting platforms.
Bender•31m ago
"Nginx 1.30 incorporates all of the changes from the Nginx 1.29.x mainline branch to provide a lot of new functionality like Multipath TCP (MPTCP)."
"Nginx 1.30 also adds HTTP/2 to backend and Encrypted Client Hello (ECH), sticky sessions support for upstreams, and the default proxy HTTP version being set to HTTP/1.1 with Keep-Alive enabled."
[1] - https://news.ycombinator.com/item?id=47770007
tialaramex•21m ago
So, e.g., they'd work for exactly the way TLS 1.0 was used in the Netscape 4 web browser that was popular when the middlebox was first marketed; or maybe they cope with exactly the features used in Safari, but since Safari never sets this one bit flag, they reject all connections with that flag set.
What TLS learned is summarized as "have one joint and keep it well oiled", and they invented a technique to provide that oiling for TLS's one working joint: GREASE, Generate Random Extensions And Sustain Extensibility. The idea of GREASE is that if a popular client (say, the Chrome web browser) just insists on uttering random nonsense extensions, then to survive in a world where that happens you must not freak out when you see extensions you do not understand. If your middlebox firmware freaks out when it sees this happen, your customers say "this middlebox I bought last week is broken, I want my money back", so you have to spend a few cents more to never do that.
But, since random nonsense is now OK, we can ship a new feature and the middleboxes won't freak out, so long as our feature looks similar enough to GREASE.
ECH achieves the same idea: when a participating client connects to a server which, as far as it knows, does not support ECH, it acts exactly the same as it would for ECH, except that since it has neither a "real" name to hide nor a key to encrypt that name with, it fills the space where those would fit with random gibberish. As a server, you get this ECH extension you don't understand, filled with random gibberish you also don't understand, and this seems fine because you didn't understand any of it (or maybe you've switched it off; either way it's not relevant to you).
But for a middlebox this ensures they can't tell whether you're doing ECH. So either they reject every client which could do ECH, which, again, is how you get a bunch of angry customers, or they accept such clients, and so ECH works.
ocdtrekkie•1h ago
quantummagic•1h ago
vman81•55m ago
altairprime•46m ago
In the snooping-mandatory scenario, either you have a mandatory outbound PAC with an SSL-terminating proxy that refuses CONNECT traffic or only allows what it can MITM with its root CA, or you have a self-signed root CA MITM'ing all encrypted connections it recognizes. The former will continue functioning just fine at providing that; the latter will likely already be having issues with certificate-pinned apps and operating system components, not to mention likely being completely unaware of 443/udp, and should be scheduled for replacement by a solution that's actually effective during your next capital budgeting interval.
kccqzy•43m ago
ocdtrekkie•38m ago
hypeatei•1h ago
Eventually these blocks won't be viable when big sites only support ECH. It's a stopgap solution that's delaying the inevitable death of SNI filtering.
ocdtrekkie•36m ago
Big sites care about money more than your privacy, and forcing ECH is bad business.
And sure, kill SNI filtering; most places that block ECH will be happy to require DPI instead while you're busy shooting yourself in the foot. I don't want to see all of the data you transmit to every web provider over my networks, but if you remove SNI, I really don't have another option.
Bender•29m ago
Russia blocked it for Cloudflare because the outer SNI was obviously just for ECH, but that won't stop anyone from using generic or throwaway domains as the outer SNI. As for "reasonable", I don't quite follow; only censorious countries or ISPs would do such a thing.
I can foresee firewall vendors possibly adding a category for known outer-SNI domains used for ECH, but at some point that list would become quite cumbersome.