TLDR: OpenSSL's days seem to be coming to an end, but the Rustls C bindings are not production ready yet.
You can get FIPS by using the third-party back-end integration via aws-lc-rs.
The first is C bindings for the native Rustls API. This should work great for anyone who wants to use Rustls from C, but it means writing to the Rustls API.
The second is C bindings that provide OpenSSL compatibility. This only supports a subset of the OpenSSL API (enough for Nginx but not yet HAProxy, for example), so not everything that uses OpenSSL will work with the Rustls OpenSSL compatibility layer yet. We are actively improving the amount of OpenSSL API surface that we support.
https://github.com/rustls/rustls-openssl-compat
We just work with normal issues/PRs, and there is a Rustls discord channel if you want to chat. We'd love your help!
If I were to optimize it, and the cycling rate is fixed and long, I would put the global storage behind a simple Mutex as something like (expiration, old_val, new_val). On use, check a thread-local copy and use it if it's not expired; otherwise lock the global. If the global is not expired, copy it to the thread-local. If the global is expired, generate a new secret, saving the old value so that tickets from the previous generation are still valid.
You can use a simple Mutex, because contention is limited to the expiration window. You could generate a new ticket secret outside the lock to reduce the time spent while locked, at the expense of generating a ticket secret that's immediately discarded for each thread except the winning thread. Not a huge difference either way, unless you cycle tickets very frequently, or run a very large number of threads.
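A minimal sketch of the scheme described above, in plain std Rust. The names (`Generations`, `current_secrets`) and the stubbed `fresh_secret` are hypothetical; a real implementation would generate random key material and plug into a ticketer API.

```rust
use std::cell::RefCell;
use std::sync::Mutex;
use std::time::{Duration, Instant};

// Hypothetical ticket secret; real code would use random key material.
type Secret = u64;

struct Generations {
    expires: Instant,
    old: Secret, // previous generation, still valid for decryption
    new: Secret, // current generation, used for encryption
}

static GLOBAL: Mutex<Option<Generations>> = Mutex::new(None);
const CYCLE: Duration = Duration::from_secs(3600);

fn fresh_secret() -> Secret {
    42 // placeholder for real key generation
}

thread_local! {
    static LOCAL: RefCell<Option<(Instant, Secret, Secret)>> = RefCell::new(None);
}

fn current_secrets() -> (Secret, Secret) {
    LOCAL.with(|cache| {
        let mut cache = cache.borrow_mut();
        let now = Instant::now();
        // Fast path: thread-local copy still valid, no lock taken.
        if let Some((exp, old, new)) = *cache {
            if now < exp {
                return (old, new);
            }
        }
        // Slow path: at most once per thread per cycle window.
        let mut global = GLOBAL.lock().unwrap();
        match global.as_mut() {
            Some(g) if now < g.expires => {} // another thread already rotated
            Some(g) => {
                // Rotate: previous "new" becomes "old" so outstanding
                // tickets from the last generation still decrypt.
                g.old = g.new;
                g.new = fresh_secret();
                g.expires = now + CYCLE;
            }
            None => {
                *global = Some(Generations {
                    expires: now + CYCLE,
                    old: fresh_secret(),
                    new: fresh_secret(),
                });
            }
        }
        let g = global.as_ref().unwrap();
        *cache = Some((g.expires, g.old, g.new));
        (g.old, g.new)
    })
}
```

Contention on `GLOBAL` is bounded: each thread takes the lock only when its thread-local copy has expired, so with a long cycle the Mutex is essentially uncontended.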
I'd like to take a look and try to understand why there's such a big difference in handshake performance. I wouldn't expect single threaded handshake performance to vary so much between stacks... it should be mostly limited by crypto operations. Last time, they did say something about having a cpu optimization for handshaking that the other stack might not have, but this is on a different platform and they didn't mention that.
I'd also be interested in seeing what it looks like with OpenSSL 1.1.1, given the recent article from HAProxy about difficulties with OpenSSL 3 [2].
[1] https://www.memorysafety.org/blog/rustls-performance-outperf...
On my machine, a dual-socket Intel(R) Xeon(R) CPU L5640 running FreeBSD 14.2-RELEASE-p3, I found similar differences in speed between the system openssl (3.0.16) and rustls from head (795ae1f5d0435dbc80dac04ec147e85d4970563c).
Openssl 3.0.16 (FreeBSD base 14.2-RELEASE-p3)
handshakes server TLSv1.3 TLS_AES_256_GCM_SHA384 1275.38 handshakes/s (512 / 0.401448)
Rustls: (795ae1f5d0435dbc80dac04ec147e85d4970563c)
handshakes TLSv1_3 EcdsaP256 TLS13_AES_256_GCM_SHA384 server server-auth no-resume 1998.39 handshakes/s
I looked at a lot of stuff, but found no real smoking guns. There's a difference in behavior between the two handshakes, but it's not that different. openssl-bench generates 4 application packet wrappers for the 'first flight', whereas rustls generates one that contains the 4 messages (encrypted extensions, server cert, server cert verify, server handshake finished); this seemed like it could be significant, but I couldn't easily undo it to test. Also, openssl-bench generates 2 more application packets after receiving the client handshake finished; I'm pretty sure those are tickets, but turning off ticket generation was a ~1% improvement, so whatever.

However, one of my friends suggested aws-lc might just be super fast, so I ran openssl-bench linked against that and saw a big improvement. So I went ahead and tried all the options from FreeBSD pkg. Here's my list of results:
aws-lc-1.48.4 (freebsd pkg)
handshakes server TLSv1.3 TLS_AES_256_GCM_SHA384 2478.93 handshakes/s (512 / 0.206541)
openssl111-1.1.1w_2 (freebsd pkg)
handshakes server TLSv1.3 TLS_AES_256_GCM_SHA384 1773.9 handshakes/s (512 / 0.28863)
openssl-3.0.16,1
handshakes server TLSv1.3 TLS_AES_256_GCM_SHA384 1333.5 handshakes/s (512 / 0.383951)
openssl31-3.1.8
handshakes server TLSv1.3 TLS_AES_256_GCM_SHA384 1387.69 handshakes/s (512 / 0.368958)
openssl32-3.2.4
handshakes server TLSv1.3 TLS_AES_256_GCM_SHA384 1353.54 handshakes/s (512 / 0.378267)
openssl33-3.3.3
handshakes server TLSv1.3 TLS_AES_256_GCM_SHA384 1406.62 handshakes/s (512 / 0.363994)
openssl34-3.4.1
handshakes server TLSv1.3 TLS_AES_256_GCM_SHA384 1393.34 handshakes/s (512 / 0.367463)
openssl35-3.5.0.b1
handshakes server TLSv1.3 TLS_AES_256_GCM_SHA384 1155 handshakes/s (512 / 0.443289)
boringssl-0.0.0.0.2025.03.27.01_1
did not manage to get a matching cipher
libressl-4.0.0_1
(does not compile, don't care to fix)
So.... in my testing, on my machine, rustls is faster than openssl-bench linked against openssl, and openssl 1.1.1 is faster than openssl 3.x, but openssl-bench linked against aws-lc is faster than rustls.

I'll try to get ahold of the authors tomorrow and suggest they add openssl-bench linked against aws-lc to their test.
I feel bad for other/new system languages, you get so much for the steeper learning curve with Rust (cult membership optional). And I think it’s genuinely difficult to reproduce Rust’s feature set.
I don't like the RiiR cult. I do like smart use of a safer language and think long-term it can get better than C++ with the right work.
For certain types of people, Rust has a way of just feeling better to use. After learning Rust I just can't imagine choosing to use C or C++ for a future project ever again. Rust is just too good.
I’m not arguing that we should rewrite everything in rust. C and C++ are fine languages. But sometimes it really is better to just have your code in a different language rather than deal with FFI. For example, I have some collaborative text editing code in rust and recently I just ported the whole thing to typescript, because it’s just straight out easier to use in a browser that way, compared to dealing with a wasm bundle.
I think the big mistake people make when rewriting into a different language is doing a refactor at the same time. This is the wrong way to go about it. First port directly the code you have. Then port your tests and get them passing in the new language. Then refactor. Obviously there’s always some language differences - but ideally you can confine differences within modules, and keep most of the module boundaries intact through the rewrite. You can also refactor before translating your code. If I were porting something to rust that wouldn’t pass the borrow checker, this is probably what I’d do. First refactor everything to make the borrow checker happy - so for example, make sure your structs / classes are in a strict tree. Then get tests passing. Translate between languages and cleanup.
If you approach it like that, rewriting code is a largely mechanical process. It really takes a lot less time than people think to translate code, since you don’t actually have to understand every single line to do it correctly. So the time taken scales based on the number of lines of code. Not the number of hours it took to write! And then, if you want to refactor your new program at the end, go for it.
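One concrete way to do the "refactor before translating" step mentioned above: if the source code links objects with parent back-pointers, restructure it so all nodes live in one owned collection and link by index. The names here (`Tree`, `add_child`, `depth`) are illustrative, not from any particular codebase; the point is that index links sidestep the borrow checker entirely, so the later mechanical translation goes through.

```rust
// Nodes hold indices into `Tree::nodes` instead of references to each
// other, so there are no lifetimes and no borrow-checker fights.
struct Node {
    value: i32,
    parent: Option<usize>, // index, not a reference
    children: Vec<usize>,
}

struct Tree {
    nodes: Vec<Node>,
}

impl Tree {
    fn new(root_value: i32) -> Self {
        Tree {
            nodes: vec![Node { value: root_value, parent: None, children: Vec::new() }],
        }
    }

    fn add_child(&mut self, parent: usize, value: i32) -> usize {
        let id = self.nodes.len();
        self.nodes.push(Node { value, parent: Some(parent), children: Vec::new() });
        self.nodes[parent].children.push(id);
        id
    }

    // Walking up via parent indices needs no lifetime annotations at all.
    fn depth(&self, mut id: usize) -> usize {
        let mut d = 0;
        while let Some(p) = self.nodes[id].parent {
            d += 1;
            id = p;
        }
        d
    }
}
```

This shape (a strict ownership tree with index links) is also easy to express in most source languages, so you can do the refactor there first, get the tests passing, and then translate.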
Kill it with fire.
If Rust avoids defects due to programmer mistakes, then throwing a shitty programmer at it is functionally safe (or safer than otherwise) because their shitty code won't compile. So the worst case is they don't do any harm, and the best case is cheap labor.
But presumably you want to reduce all of the mistakes.
Rust won't prevent/discourage a lot of the other classes of mistake that a bad programmer is creating.
Rust having more powerful modeling tools than the usual language makes it more viable to use crappier programmers, not less.
Obviously if your goal is perfection, then only hire programmers capable of writing perfection. If you’re making the trade-off of cheaper/junior resources and putting more effort into testing/api design/type design/code review to help defend against the inevitable errors, then your genai code fits neatly into the same equation
Going from Rust to TypeScript will normally be pretty easy—though if things like numeric and bytewise manipulations are involved, it can be tough. Going from TypeScript to Rust will often be easy, but also often be fiendishly difficult to do without refactoring a lot, due to ownership model differences.
Occasionally I’ve chosen to do a refactoring in the source language first, and then port that. That can work decently, though it depends so much on exactly what the changes are and why they are, which is often to do with which two languages are involved.
The best days I ported maybe 1500 lines. On the "worst" days I did 0 lines. In all, 500 lines a day feels like the right ballpark for this sort of work.
But assuming you're experienced with Rust, porting something is actually both easy and enjoyable.
Porting something as your first Rust project is going to end up like a dirty hybrid.
It's an excellent learning opportunity, but it will not leave a good showcase of Rust.
In many (most?) situations I think Rust is effectively as fast as C, but it's not a given. They're close enough that depending on the situation, one can be faster than the other.
If you told me I had to make a larger and more complex piece of code fast though, I'd pick Rust. Because of the rules that the Rust compiler enforces, it's easier to have confidence in the correctness of your code and that really frees you up when you're dealing with multi-threaded or algorithmic complexity. For example, you can be more confident about how little locking you can get away with, what the minimum amount of time is that you need to keep some memory around, or what exact state is possible and thus needs to be handled at any given time.
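A small example of the "confidence about how little locking you can get away with" point above. This is my sketch, not from any particular codebase: the shared counter is only reachable through the Mutex guard, so accessing it outside the lock is a compile error rather than a latent data race, and the critical section is exactly as small as written.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// The data lives inside the Mutex; the compiler rejects any path that
// touches it without holding the guard, so you can shrink the critical
// section aggressively and still be sure it's correct.
fn parallel_count(threads: usize, per_thread: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // Guard is acquired and dropped within this statement:
                    // the lock is held for exactly one increment.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}
```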
There are some things that make rav1d performance particularly challenging. For example - unlike Rustls, which was written from the start in Rust, rav1d is largely the result of C to Rust translation. This means the code is mostly not idiomatic Rust, and the Rust compiler is generally more optimized for idiomatic Rust. How much this particular issue contributes to the performance gap I don't know, but it's a suspect and to the extent that it's worth pursuing, one would probably want to figure out where being idiomatic matters most instead of blanket rewriting everything.
I think it’s not a good indication of the success of the language.
It's not practical right now to write high performance cryptographic code in a secure way (e.g. without side channels) in anything other than assembly.
Regarding crypto operations: as of now, for Rust projects, assembly is a must to have constant-time guarantees.
Maybe there could be a way with intrinsics and a constant-time marker, similar to unsafe, to use pure rust.
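To illustrate why such a marker would be needed: here is a constant-time equality check written in plain Rust (my sketch, the standard accumulate-with-OR pattern). It avoids early returns on mismatch, but nothing in the language *marks* it as constant-time, so the optimizer is free to rewrite it into branchy code; that's why real implementations drop to assembly or use crates like `subtle` to pin the codegen down.

```rust
// Accumulate all byte differences with OR instead of returning at the
// first mismatch, so the running time does not depend on *where* the
// inputs differ. Note: this is only constant-time by convention; the
// compiler makes no such guarantee, which is the parent comment's point.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // lengths are usually public, so branching here is fine
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}
```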
In the meantime I think there still is too much C code.
It’s a great step in the good direction by the way.
From the AWS-LC README: https://github.com/aws/aws-lc
> A portable C implementation of all algorithms is included and optimized assembly implementations of select algorithms is included for some x86 and Arm CPUs.
It also states that it kind of forked BoringSSL and OpenSSL.
You’re right though that most of the memory safety attack surface has been replaced with Rust.
Ideally the C would eventually move to Rust, but I think aws-lc needs to work in many contexts where a Rust toolchain is not available so it might be a while.
Graviola is an interesting option under development, in part because it gets rid of the C:
TLS is the protocol generation and parsing, hopefully (but not always) including certificate parsing.
Crypto tends to have clear, fixed buffer sizes, and OpenSSL tends to have good implementations of it with reasonable interfaces.
Protocol parsing and certificate parsing and validation are where many more problems happen that memory safety can reduce. High-profile crypto problems are generally information leaks from non-constant-time algorithms, although information leaks happen in protocol code too.
Even if unrestricted asm is inherently unsafe, there's got to be a subset of instructions and operand types you can guarantee is safe if called a certain way.
Too bad there are those who think they should use Rust to write GUIs and end-user applications, which is where Rust ergonomics breaks down fast.
Disagree here, happy that those people are experimenting, and Rust is being used all the way up and down the stack. I may not prefer using Rust for web pages, but I'm super glad that some people want to -- projects like Dioxus and Tauri are really fantastic to use, and reflect well on the ecosystem as a whole.
I think Rust gained from this enthusiasm, because it's one of the languages that I think goes almost everywhere, if you pay the high upfront cost of learning. There really aren't that many domains where you absolutely couldn't write Rust. The fact that Swift is also chasing this quality suggests it's valuable.
But that doesn't say anything about ergonomics for big production systems.
With this in mind, I'm curious: What do you feel are good use cases for Rust?
Worst use-cases: GUIs and end-user applications.
However, I tried rustls with redis for my axum application, and for some reason it was not working, even though my self-signed CA certificate was added to my system's local CA store.
After a lot of trying I gave up, then thought about trying native-tls, and it worked on the first go.
Was there no way to provide a custom CA store (that only included your self signed one)?
You're trying to get rid of OpenSSL, but you're actually relying on OpenSSL code. Sounds a bit iffy imo. Can somebody provide a bit more depth here?
Or is it just the OpenSSL TLS API that is hopelessly confusing and bug inducing? I can imagine that the crypto primitives in OpenSSL are very solid.
In any case, OpenSSL does a whole bunch of things, and one of those is providing low-level cryptographic routines. When people talk about issues with OpenSSL, they're usually not (in my experience) talking about issues with its low-level cryptographic routines. They're talking about things like the TLS implementation and API.
Rustls has its own Rust code for the TLS protocol and certificate parsing/validation, which doesn't come, directly or by lineage, from OpenSSL or any OpenSSL derivatives.
Although of course the Rust compiler has no way to inspect this ChaCha20 primitive and check that it is memory safe, we can "vouch" for it. These primitives have been eyeballed by a huge number of people since they're so widely used, so it feels as reasonable as the claim that ChaCha20 itself works, which has been considered by plenty of cryptanalysis experts from government and industry.
Pretty much everything else is Rust, so the bit-twiddling inside a DER implementation to parse certificates is Rust, the TLS handshake implementation is Rust, and so on.
I've not had a need, but I wrote just enough protocol code for TLS 1.3 in Erlang as a prototype, and it wouldn't be awful in Perl with pack/unpack; binary matching is a lot nicer in Erlang though. :P
Used that prototype to inform development of a Java implementation of TLS 1.3 protocol only (crypto and certificate parsing and verification through system libraries) to get consistent TLS 1.3 features on a very popular Android app. I think Google has a thing you can do to get a TLS 1.3 capable stack now, but not then.
With the TLS illustrated series [1] it would be easier than when I did it. The test vectors in the TLS 1.3 rfcs and drafts are very nice to have too.
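To give a feel for what "just enough protocol code" looks like, here is a sketch of parsing a TLS record header in Rust rather than Erlang; per RFC 8446 §5.1 the header is five bytes: content type (1), legacy version (2), payload length (2, big-endian). The struct and function names are my own.

```rust
// A TLS record header (RFC 8446 §5.1): the binary-matching part of a
// TLS implementation is mostly slicing like this.
#[derive(Debug, PartialEq)]
struct RecordHeader {
    content_type: u8,    // 22 = handshake, 23 = application data, ...
    legacy_version: u16, // 0x0303 on the wire, even for TLS 1.3
    length: usize,       // payload bytes that follow the header
}

fn parse_record_header(buf: &[u8]) -> Option<RecordHeader> {
    if buf.len() < 5 {
        return None; // need more bytes
    }
    let length = u16::from_be_bytes([buf[3], buf[4]]) as usize;
    if length > 16384 + 256 {
        return None; // record_overflow: protected records max out at 2^14 + 256
    }
    Some(RecordHeader {
        content_type: buf[0],
        legacy_version: u16::from_be_bytes([buf[1], buf[2]]),
        length,
    })
}
```

The RFC 8446 test vectors mentioned above are exactly what you'd feed a parser like this to check it byte-for-byte.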
Given how aws-lc powers both of these articles, I'm curious how Rustls compares to s2n-tls - AWS's TLS library to go along with aws-lc.