I still like to occasionally refer to TLS 1.3 as "SSL 3.4" to see whether people are aware of the history.
Plus, who doesn't like to sound like a snake sometimes? Snakes are badass.
And that data stream is the interface that TLS provides; to the higher layers it looks like a transport layer.
> And that data stream is the interface that TLS provides
That’s exactly the problem. You might lose a UDP packet, which would corrupt data encrypted with a stream cipher.
With DTLS, each packet is encrypted individually.
https://en.m.wikipedia.org/wiki/Datagram_Transport_Layer_Sec...
DTLS, by contrast, provides a record number hint (the low order bits of the record number and epoch) to assist in record number reconstruction: https://www.rfc-editor.org/rfc/rfc9147.html#name-reconstruct....
(It isn't quite a layer-3/internetwork-layer -like interface; from the UDP that it sits on, it has a multiplexing component that is "half" of a layer 4 interface.)
I like to imagine an alternate past where ipsec "won" and how that would affect our expectations of secure connections. One difference is that security would be handled at the OS level instead of the application level. On the one hand this is nice: all applications get a secure connection whether they want one or not. On the other hand, the application has no idea it is using a secure transport and has no way of indicating this to the user.
Anyhow, the opportunistic-encryption use of ipsec never really got implemented, and all we use it for anymore is as a dedicated tunnel, one that is fiendishly difficult to employ.
I think the primary problem with ipsec is that it tried to be too flexible. This made setting up an ipsec link a non-trivial exercise in communication, and the process never got streamlined enough to just be invisible and work.
Transport Layer Security is widely documented as beginning in 1999.
I can find references to "Thread Local Storage" going back to at least 1996. That particular term seems more common in the Microsoft (and maybe IBM, does anyone have an OS/2 programming manual?) world at the time; Pthreads (1995) and Unix in general tended to call it "thread-specific data".
It's possible that the highly influential 2001 Itanium ABI document (which directly led to Drepper's TLS paper) brought the term to (widespread) use in the broader Unix world, though Sun (for both Solaris and Java?) was using the term previously. But it's also possible that I'm just missing the reference material.
Look to Windows NT rather than to OS/2 for thread-local storage. TlsAlloc() et al. were in the Win32 API right from NT 3.1, I think.
> Netscape also wanted to address SSL 2 issues, but wasn't going to let Microsoft take leadership/ownership in the standard, so they developed SSL 3.0, which was a more significant departure.
I remember this moment, and this is where I realized that Microsoft wasn't always the bad guy here. They had the better implementation and were willing to share it. But Netscape in this instance acted like kids and wouldn't cooperate at all. Which is why this meeting had to occur, and by that point it was clear Netscape had lost the browser war and it wasn't going to be close.
Hence the quick about-face by Netscape to accept what was pretty much Microsoft's proposed solution.
I can't speak to the rest of Microsoft's browser decisions, and given the court ruling it's clear they weren't the good guys either, but this opened my eyes to the fact that all companies are the bad guys sometimes :)
FSF hated Microsoft because they released binaries without source code; they were THE enemy. Nowadays, you are lucky if you get a binary to study and modify! The standard for any competitive developer is to hide the binary and the source behind a server. Try to study and modify that!
Who needs to add a CORS header to allow Sentry.io or Cloudflare's metrics to work on this 2014 era SaaS that the developer has wandered away from?
I think that's a bit of an oversimplification - FOSS-leaning people had a pretty large set of reasons to dislike and distrust MS back then. "Embrace, Extend, Extinguish" was a big one, calling linux/FOSS a cancer, their money and influence being used to fund the whole SCO debacle amongst other things. They were pretty actively evil, not just "closed source".
There was very good reason not to let MS gain de-facto control of an open protocol, because 90s and 00s microsoft would not have hesitated to find ways to use that dominance to screw the competition.
Two decades later, and it is still common for people to call TLS SSL.
Oh, please.
https://en.wikipedia.org/wiki/Criticism_of_Microsoft
The "velvet sweatshop" one is sufficient, but plenty of others to choose from. Don't have a source at hand but I remember it was known for its "work 3 years there and then you need to retire early from burnout" culture. There's also a really good (and highly depressing) 2001 German documentary around that "feature" called "Leben nach Microsoft" (Life after Microsoft).
And the classic https://en.wikipedia.org/wiki/Microserfs
There was really less than zero reason to trust M$ in the 90s and early 00s.
Some companies make abuse a business model. I don't see how anyone can defend a position where they only look at isolated actions of a company and not their overall strategic positioning. There are boundaries. Ethical boundaries. If you never experience the consequences of your actions, if nobody ever objects to your behavior, you will not stop. Especially not a distributed organism of a company, which has no inherent ethical boundaries; its boundaries are those that affect business, so you need to teach them in business. If your business model is based on treating your own employees like slaves, it is you who is cancer, not the other.
Calling that “kid-like behavior” is misguided on two levels. First, as noted, Netscape’s actions were arguably rational in context - pushing back against a powerful incumbent trying to steer an open standard toward a proprietary implementation.
Second, the phrase itself leans on a dismissive and inaccurate stereotype. Kids aren’t inherently irrational or overly emotional; in fact, there’s substantial research showing that young people behave quite logically given their environment. Framing behavior this way isn’t just lazy; it reinforces the kind of condescension that later gets labeled as “adverse childhood experiences” in therapy, assuming someone even gets the chance to unpack and not replicate it.
On both levels, it is DARVO.
I've found that certain crowds will get angry about the vernacular vs a crowd that always understood something a particular way.
In any event, we have to stick with the times, especially with new entrants that stick with the new terms.
The important bits:
- "SSL" is a set of protocols so ridiculously old, busted and insecure that nobody should ever use them. It's like talking about Sanskrit; ancient and dead.
- "TLS" is way better than "SSL", but still there are insecure versions. Any version before 1.2 is no longer supported due to security holes.
- Technically an "ssl certificate" is neither "SSL" nor "TLS", it's really an "X.509 Certificate with Extended Key Usage: Server Authentication". But that doesn't roll off the tongue. You could use a cert from 1996 in a modern TLS server; the problem would be its expiration date, and the hash/signature functions used back then are deprecated. (some servers still support insecure methods to support older clients, which is bad)
No one should use SSL or AngularJS in 2025 unless they have to maintain some legacy stuff for important reasons.
But yes, it's all a bit irrelevant now that anything below TLS 1.2 is sketchy to use.
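In practice, "nothing below TLS 1.2" is something you can enforce in code. Here is a minimal sketch using Java's standard JSSE API; the class and method names are mine, not a standard API:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Sketch: restrict a Java TLS endpoint to the versions still considered safe.
public class MinVersion {
    // Limit a parameter set to TLS 1.2 and 1.3 only.
    static String[] restrictToModernTls(SSLParameters params) {
        params.setProtocols(new String[] {"TLSv1.2", "TLSv1.3"});
        return params.getProtocols();
    }

    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, null, null);
        String[] enabled = restrictToModernTls(ctx.getDefaultSSLParameters());
        // These parameters would then be applied to an SSLSocket or SSLEngine
        // via setSSLParameters() before the handshake.
        System.out.println(String.join(",", enabled)); // prints "TLSv1.2,TLSv1.3"
    }
}
```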
The nomenclature was complicated in people's minds by SMTP. Because there was SMTP over a largely transparent encrypted connection, and SMTP where it started unencrypted and negotiated a switch, as well as plain old cleartext. It didn't help that RFC 2487 explained that STARTTLS negotiated "TLS more commonly known as SSL". RFC 8314 explains some of the historical mess that SMTP got into with two types of SMTP (relay and submission) and three types of transport.
And the "S" for "submission" could be confused with the "S"s in both "SSL" and "TLS". It's not just TLAs that are ambiguous, indeed. There was confusion over "SMTPS" and "SSMTP", not helped at all by the people who named programs things like "sSMTP".
I'm still calling it SSL in 2025. (-: And so is Erwin Hoffmann.
* https://www.fehcom.de/ipnet/sslserver.html
* https://manpages.debian.org/unstable/ssmtp/ssmtp.8.en.html
> Good catch, it misled me for years!
Randomness and the Netscape Browser January 1996 Dr. Dobb's Journal
https://people.eecs.berkeley.edu/~daw/papers/ddj-netscape.ht...
This was written in 1996. The language used feels already much different from today's publications. God I feel old.
That depends on which publications you're looking at, just as it did in 1996. An article from LWN [1] today, for example, reads in a fairly similar style. Maybe slightly less stuffy, because it's targeted at a slightly more general audience.
[1] https://lwn.net/
Was it the same issue though? The Netscape SSL issue is from 1996.
The linked NYT article is about a vulnerability in public key encryption, dated 2012, by different authors.
I think it’s fair to say they’re very similar, with a few “bug fixes”. It’s been a while since I’ve thought about either though, and might be forgetting a few things. I’ve only ever implemented SSL3 and TLS1.0 together, so there may be some details I’m forgetting.
1. Say SSL or TLS?
2. How old are you (or did you start working before 1999?)
I'll reply with my answer too.
2. Started working before 1999
If I need to specifically say SSL or TLS, it's SSL (as in OpenSSL, LibreSSL, BoringSSL, SSL certificates, Qualys SSL Labs, SSL Server Test). TLS is a made up name for SSL.
I do say e.g. "TLSv1.2" if I need to name the specific protocol, that's about it.
I was working before 1999.
I'm 51, started working in IT in the mid 90's.
2. Graduated and started in 2015.
2. 38 - Started working in 2011, but my first forays into network programming was in something like 2004-2005.
Looked over at my other screen and sure enough, the function I'd added an if statement to literally minutes before went
    public Builder sslCertNotBefore(Instant sslCertNotBefore) {
        if (sslCertNotBefore.isAfter(MAX_UNIX_TIMESTAMP)) {
            sslCertNotBefore = MAX_UNIX_TIMESTAMP;
        }
        this.sslCertNotBefore = sslCertNotBefore;
        return this;
    }
I think possibly part of the problem is that we as programmers typically don't deal with TLS directly. The code above is part of a system I wrote that extracts detailed certificate information from HTTPS connections, and man was it ever a hassle to wrestle all the information I was interested in out of the Java standard library. Sure, on the one hand it's easier to not mess up if it's all automatic and out of sight, but at the same time, it's not exactly beneficial to the spread of deeper awareness of how TLS actually works when it's always such a black box.
2. Started working after 1999
But yeah, I learned about SSL back in the crypto wars days of the 1990s, back when you had to pirate the so-called "US only" version of Netscape if you wanted decent SSL encryption, so I might be just using the old term out of habit.
https://web.archive.org/web/19990911233949/http://www73.nets...
The US had some strange ideas about cryptography:
https://en.wikipedia.org/wiki/Crypto_Wars#PC_era
https://en.wikipedia.org/wiki/Export_of_cryptography_from_th...
https://en.wikipedia.org/wiki/Pretty_Good_Privacy#Criminal_i...
Even today, people and marketing pages promote the "SSL" term. Unless you specifically google "What is the difference between SSL and TLS?", most people would have no idea what TLS is.
(2) 37. I've been an Internet user since ~1995 and been working in tech since 2004.
2. Started my first IT job on a computer networking team in 2012.
Mid 30s, SSL.
I work in cybersecurity and all the tools in the firewall/cert world still say "SSL decryption" and "SSL certificate". TLS is just a "major version" of SSL in my mind.
Libraries with TLS in their names are less frequently used: GnuTLS, mbedTLS, s2n-tls and rustls.
SSL for websites, TLS for email, tunnels, XMPP, etc.
It's the ergonomic choice (;
I guess it follows that Twitter/X might never be able to pull off a rebrand again.
When do I say TLS? When that one annoying guy joins the call who always corrects you. Everyone hates him, and he doesn’t care.
To devs: SSL
Did not start working before 1999. Started using Linux in 2003.
2) before 1999. IIRC, the first SSL certificate I was involved with getting required the use of a fax machine.
2. I started programming professionally in 1998 and I'm in my early 50s.
I think the TLS v1.2 pushed me that way
2. I’m old enough to remember 56-bit SSL encryption in browsers
2. I'm 56 and was active in computer clubs in the late 80s: no network, no hard drive, thousands of floppies.
No
SSLv2 was the first widely deployed version of SSL, but as this post indicates, had a number of issues.
SSLv3 is a more or less completely new protocol
TLS 1.0 is much like SSLv3 but with some small revisions made during the IETF standardization process.
TLS 1.1 is a really minor revision to TLS 1.0 to address some issues with the way block ciphers were used.
TLS 1.2 is a moderately sized revision to TLS 1.1 to adjust to advances in cryptography, specifically adding support for newer hashes in response to weaknesses in MD5 and SHA-1 and adding support for AEAD cipher suites such as AES-GCM.
TLS 1.3 is mostly a new protocol though it reuses some pieces of TLS 1.2 and before.
Each of these protocols has been designed so that you could automatically negotiate versions, thus allowing for clients and servers to independently upgrade without loss of connectivity.
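That last point can be sketched roughly as "settle on the highest version both sides support". This is a simplification using TLS's internal version numbers, not the actual wire format (TLS 1.3, for instance, moved negotiation into the supported_versions extension):

```java
import java.util.List;

// Rough sketch of version negotiation: each side advertises the versions it
// supports, and the handshake settles on the newest one in common.
public class VersionNegotiation {
    // Versions use TLS's internal numbering, e.g. 0x0303 = TLS 1.2,
    // 0x0304 = TLS 1.3. Returns -1 when there is no common version.
    static int negotiate(List<Integer> clientVersions, List<Integer> serverVersions) {
        int best = -1;
        for (int v : clientVersions) {
            if (serverVersions.contains(v) && v > best) {
                best = v;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Client supports TLS 1.2 and 1.3; server only 1.0 through 1.2:
        // they settle on 1.2 without either side needing an upgrade.
        int chosen = negotiate(List.of(0x0303, 0x0304),
                               List.of(0x0301, 0x0302, 0x0303));
        System.out.println(Integer.toHexString(chosen)); // prints "303"
    }
}
```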
And ensuring decades of various downgrade attacks
This was necessary to bypass various broken server side implementations, and broken middleboxes, but wasn’t necessarily a flaw in TLS itself.
But learning from how this issue slowed 1.2 deployment, TLS 1.3 goes out of its way to look very similar to 1.2 on the wire.
It wouldn't have been insane to rename https to httpt or something after TLS 1.2 and screw backwards compatibility (yes I realize the 's' stands for secure, not 'ssl', but httpt would have still worked as "HTTP with TLS")
That would have been at least little bit insane, since then web links would be embedding the protocol version number. As a result, we'd need to keep old versions of TLS around indefinitely to make sure old URLs still work.
I wish we could go the other way - and make http:// implicitly use TLS when TLS is available. Having http://.../x and https://.../x be able to resolve to different resources was a huge mistake.
Wouldn't we be able to just redirect https->httpt like http requests do right now?
Sure it'd be a tiny bit more overhead for servers, but no different than what we already experienced moving away from unencrypted http
But think about it from the perspective of a web browser or curl. You can’t rely on all web servers having such a redirect for their URLs. Web browsers would need to support old versions of TLS to make old URLs work. They’d need to support old versions of tls indefinitely so as to not break old URLs.
Using an old version of tls isn’t like using an old version of the C compiler. Old versions of tls have well documented problems with security implications. That’s why we made new versions. Maintaining lots of versions of TLS multiplies the security surface area for bugs, and makes you vulnerable to downgrade attacks.
No site needs to do this though, and I can't recall seeing a site with sensitive user info that supports http in recent years. And in the current situation, many sites are still supporting old versions of https (SSL2). A protocol name upgrade would give you more certainty that you're connecting over a secure connection, and perhaps a better indication if you've accidentally used a less-secure connection than intended.
I mean actually your exact argument could be made about http vs https, that http+SSL should have become the default (without changing the protocol name of http://), and by changing the protocol name it made it so that some websites still accept http. I guess in practice there's a slight difference since http->https involved a default port change and ssl2 -> tls did not, so in the former case the name change was important to let clients know to use a different default port; but ignoring that, the same argument could be made, and I would have disagreed with it there too.
Specifying the protocol... in the protocol portion of the URL... can be useful for users.
First, recall that links are very often inter-site, so the consequence would be that even when a server upgraded to TLS 1.2, clients would still try to connect with TLS 1.1 because they were using the wrong kind of link. This would delay deployment. By contrast, today when the server upgrades, new clients upgrade as well.
Second, in the Web security model, the Origin of a resource (e.g., the context in which the JS runs) is based on scheme/host/port. So httpt would be a different origin from HTTPS. Consider what happens if the incoming link is https and internal links are httpt: now different pages are different origins for the same site.
These considerations are so important that when QUIC was developed, the IETF decided that QUIC would also be an https URL (it helps that IETF QUIC's cryptographic handshake is TLS 1.3).
The big risk with such an approach is that you could implement something, then the politics could fail and you would end up with nothing.
The big downside of negotiation is that no one ever has to commit to anything so everything is possible. In the case of TLS, that seems to have led to endless bikeshedding which has created a standard with so many options it is hardly a standard anymore. The only part that has to be truly standard is the negotiation scheme.
They learned the lesson of IPv6 here.
Moreover, even in the best case scenario this means that you don't get the benefits of deployment for years if not decades. Even 7 years out, TLS 1.3 is well below 100% deployment. To take a specific example here: we want to deploy PQ ciphers ASAP to prevent harvest-and-decrypt attacks. Why should this wait for 100% deployment?
> The big downside of negotiation is that no one ever has to commit to anything so everything is possible. In the case of TLS, that seems to have led to endless bikeshedding which has created a standard with so many options it is hardly a standard anymore. The only part that has to be truly standard is the negotiation scheme.
I don't think this is really that accurate, especially on the Web. The actual widely in use options are fairly narrow.
TLS is used in a lot of different settings, so it's unsurprising that there are a lot of options to cover those settings. TLS 1.3 did manage to reduce those quite a bit, however.
Case in point: IPv6 adoption. There's no interoperability or negotiation between it and IPv4 (at least, not in any way that matters), which has led to the mess we're in today.
1. You actually get benefit during the transition period because you get to use the new version.
2. You get to test the new version at scale, which often reveals issues, as it did with TLS 1.3. It also makes it much easier to measure deployment because you can see what is actually negotiated.
3. Generally, implementations are very risk averse and so aren't willing to disable older versions until there is basically universal deployment, so it takes the pressure off of this decision.
Part of the motivation of TLS 1.3 was to mitigate that. It removed a lot of options for negotiating the ciphersuite.
Fortunately that’s all behind us now, and transitioning from 1.2 to 1.3 is going much smoother than 1.0 to 1.2 went.
Previously (in earlier protocol versions) nobody stood up to the crap middleboxes even though it's bad for all normal users.
First, the success rate of any new IP-based protocol through most devices is incredibly low, especially now that NAT is so common.
Second, part of why QUIC runs over UDP is because the operating system generally won't let applications send raw IP datagrams.
Even running over UDP, QUIC has nontrivial failure rates and the browsers have to fall back to TLS over TCP.
Could you expand a bit here? Do you just mean how extensions to the protocol are handled, etc., or the overall process and involved parties?
Edit: https://wiki.xmpp.org/web/Securing_XMPP
SSL is appropriately strict. Auth and encryption, both c2s and s2c, go together. They were a bit lax on upgrades in the past, but as another comment said, Google just said you fix your stuff or else Chrome will show a very scary banner on your website. Yes you can skip it or force special things like auth without encryption, but it's impossible to do by accident.
The handshake is unencrypted so you can modify the messages to make it look like the server only supports broken ciphers. Then the man in the middle can read all of the encrypted data because it was badly encrypted.
A surprising number of servers still support broken ciphers due to legacy uses or incompetence.
This is the message that returns a list of supported ciphers and key exchange protocols. There’s no data in this first packet.
Alice: I’d like to connect.
Bob: Sure, here is a list of protocols we could use:
You modify bob’s message so that bob only suggests insecure protocols.
You might be proposing that Alice asks Trent for Bob’s public key … But that’s not how TLS works.
If the "negotiated" cipher suite is weak enough to allow real-time impersonation of Bob, though, pre-1.3 versions are still vulnerable; that's another reason not to keep insecure cipher suites around in a TLS config.
What an attacker can do is block handshakes with parameters they don’t like. Some clients would retry a new handshake with an older TLS version, because they’d take the silence to mean that the server has broken negotiation.
Then you can MITM, force both sides to use the weak crypto, which can be broken, and you're in the middle. Also not really so relevant today.
The basic math of any kind of negotiation is that you need the minimum set of cryptographic parameters supported by both sides to be secure enough to resist downgrade. This is too small a space to support a complete accounting of the situation, but roughly:
- In pre-TLS 1.3 versions of TLS, the Finished message was intended to provide secure negotiation as long as the weakest joint key exchange was secure, even if the weakest joint record protection algorithm was insecure, because the Finished provides integrity for the handshake outside of the record layer.
- In TLS 1.3, the negotiation messages are also signed by the server, which is intended to protect negotiation as long as the weakest joint signature algorithm is secure. This is (I believe) the best you can do with a client and server which have never talked to each other, because if the signature algorithm is insecure, the attacker can just impersonate the server directly.
- TLS 1.3 also includes a mechanism intended to prevent against TLS 1.3 -> TLS 1.2 downgrade as long as the TLS 1.2 cipher suite involves server signing (as a practical matter, this means ECDHE). Briefly, the idea is to use a sentinel value in the random nonces, which are signed even in TLS 1.2 (https://www.rfc-editor.org/rfc/rfc8446#section-4.1.3).
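That sentinel mechanism is concrete enough to sketch. Per RFC 8446, a TLS 1.3-capable server that negotiates TLS 1.2 sets the last 8 bytes of its 32-byte ServerHello random to the ASCII bytes "DOWNGRD" followed by 0x01; a 1.3-capable client that sees this while negotiating 1.2 knows a downgrade happened and aborts. A rough sketch (class and method names are mine):

```java
import java.util.Arrays;

// Sketch of the RFC 8446 section 4.1.3 downgrade-protection check.
public class DowngradeCheck {
    // "DOWNGRD" + 0x01 marks a downgrade to TLS 1.2.
    static final byte[] TLS12_SENTINEL = {
        0x44, 0x4F, 0x57, 0x4E, 0x47, 0x52, 0x44, 0x01
    };

    // Inspect the last 8 bytes of the 32-byte ServerHello random.
    static boolean downgradeDetected(byte[] serverRandom) {
        byte[] tail = Arrays.copyOfRange(serverRandom, 24, 32);
        return Arrays.equals(tail, TLS12_SENTINEL);
    }

    public static void main(String[] args) {
        byte[] random = new byte[32]; // stand-in for a real ServerHello random
        System.arraycopy(TLS12_SENTINEL, 0, random, 24, 8);
        System.out.println(downgradeDetected(random)); // prints "true"
    }
}
```

The sentinel works because the server random is covered by the server's signature even in TLS 1.2, so an attacker cannot strip it out without breaking the handshake.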
[1] https://en.wikipedia.org/wiki/Logjam_(computer_security) [2] https://www.usenix.org/legacy/publications/library/proceedin...
If you fell under one of those exceptions, you could get a special certificate for your website (from, e.g. Verisign) that allowed the webserver to "step up" the encryption negotiation with the browser to stronger algorithms and/or key lengths.
One of the many things it brought is session tickets, enabling server-side session resumption without requiring servers to keep synced-up state. Another is Server Name Indication, enabling servers to use more than one certificate.
Extensions (including SNI) came in a later spec, introduced in RFC 3546 (https://www.rfc-editor.org/rfc/rfc3546). Session tickets are in RFC 4507.
What TLS 1.0 did was to leave the door open for extensions by allowing the ClientHello to be longer than what was specified. See https://www.rfc-editor.org/rfc/rfc2246.html#section-7.4.1.2 (scroll to "Forward Compatibility Note")
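To make the SNI extension mentioned above concrete: Java's JSSE exposes it through SSLParameters, so the client names the host it wants in the ClientHello and a server holding several certificates can pick the right one before the handshake completes. A small sketch ("example.com" is just a placeholder host):

```java
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SSLParameters;
import java.util.List;

public class SniExample {
    public static void main(String[] args) {
        SSLParameters params = new SSLParameters();
        // The server_name extension carries the target hostname in the
        // ClientHello, before any certificate has been sent.
        params.setServerNames(List.of(new SNIHostName("example.com")));
        // These parameters would be applied to an SSLSocket or SSLEngine
        // via setSSLParameters() before connecting.
        System.out.println(params.getServerNames());
    }
}
```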
I'm halfway convinced that they have made subsequent versions v1.1, v1.2, and v1.3 in an outrageously stubborn refusal to admit that they were objectively incorrect to reset the version number.
M$ (an appropriate name for that time) was doing its best to own everything, and they did not let up on trying to hold back open-source internet technologies until the early 2010s, I believe. It's my opinion that they were successful in killing Java applets, which were never able to improve past the first versions, and JavaScript and CSS in general were held back many years.
I still recall my corporate overlords trying to push me to support IE's latest 'technologies', but I resisted and instead started supporting Mozilla 3.0 as soon as they fixed some core JS bugs for our custom-built enterprise JavaScript SPA tools in the early 2000s. It turned out to be a great decision, as the Fortune 500 company started using Mozilla/Firefox in other internal apps in later years, long before it became commonplace.
It’s even more appropriate nowadays, I’d say.
No, Java applets failed because they became the poster child for the "Java is slow" take. Even though it wasn't exactly true in general, it was certainly true of applets, what with waiting for them to download and then waiting for the JVM to spin up.
What killed them was 1) HTML/JS itself getting better at dynamic stuff that previously required something like applets, and 2) Flash taking over the remaining niche for which HTML wasn't good enough.
> Java Applets also froze the entire browser when loading.
More than just "poster child", I believe Java applets are the origin of the "Java is slow" meme. The first time many people heard of Java would be when it locked up their browser for a whole minute while loading an applet, with a status bar message pointing to Java as the culprit.
A representative vulnerability is "trusted method chaining". You (the attacker) construct a chain of standard library objects that call each other in unexpected ways. You can make use of the fact that you can subclass a standard library class and implement a standard library interface, in order to implement the interface methods with the base class's implementations, to construct more unusual pathways. Then you get some standard library entry point to call the first method in the chain. Since your code doesn't appear on the call stack at any point (it's just the standard library calling the standard library) whatever is at the bottom of the call stack, at the end of the chain, infers a trusted context and can access files or whatever. Of course, finding a method chain that's possible to construct and does something malicious is non-trivial.
Bill Gates would've bought OpenAI. Satya shares their mission of developing AI for the good of humanity. He charitably donated billions of dollars in Azure credits in exchange for nothing besides a voice at the table and a license to help enable other organisations use AI through MS services.
In a way it's a PR difference, but I feel that understates the change.
I had high hopes for s2n, but looks like it never really caught on outside of AWS.
https://www.shodan.io/search/report?query=ssl.version%3Asslv...
And a trend line of how it's changed:
https://trends.shodan.io/search?query=ssl.version%3Asslv2#ov...
It has dropped significantly over the years, but it will continue to stick around for a while.
If you look around you'll find services, today, that haven't been upgraded in decades.
It would be much easier if everyone just used TLS to mean modern encrypted network traffic, and mentioned SSL only when actually running a legacy system.
Plus the most popular implementation of TLS remains the OpenSSL implementation.
Private enterprise should be the last people on earth to be allowed to label themselves. I have many marketer friends I love, but I truly think the practice of trying to pimp businesses to rich individuals has been probably the biggest waste of human effort in history (outside of maybe carbon-capture efforts). We're just stuck with shitty brands, broken products, and stupid consumers who think they're getting the best.
Article: (literal quote): “for some reason”.