Hence I'd be a lot more enthusiastic about NIST guidance on these points.
I'd like to see more devices able to pair with NFC, but even that's standardised for Bluetooth, just underused.
The initial state of ChaCha20 also has at most 320 unknown bits (512 bits - 128 constant bits - 64 nonce bits). Actually, you normally also know the counter, so there are only 256 unknown bits.
Of course the actual strength of the cipher cannot exceed the size of the state, but the design strength must be much lower for this cipher. It competes with AES-128, which is designed for a 128-bit strength.
320 bits of state is more than enough for a cipher that must have a 128-bit strength, or even for a cipher designed for a 256-bit strength, like AES-256 or ChaCha20.
This cipher is a lot heavier.
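For reference, a quick sketch of the ChaCha20 initial-state layout in Python (the original DJB variant with a 64-bit counter and 64-bit nonce; the IETF variant splits that 32/96 instead), just to make the bit accounting visible:

    # ChaCha20 state: sixteen 32-bit words = 512 bits.
    CONSTANTS = [0x61707865, 0x3320646E, 0x79622D32, 0x6B206574]  # "expand 32-byte k": 128 known bits

    def chacha20_initial_state(key, counter, nonce):
        assert len(key) == 8 and len(counter) == 2 and len(nonce) == 2  # 32-bit words
        # 128 constant bits + 256 secret key bits + 64 counter bits (usually known)
        # + 64 public nonce bits = 512 bits, of which only the 256 key bits are secret.
        return CONSTANTS + list(key) + list(counter) + list(nonce)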
loeg•16h ago
https://x.com/matthew_d_green/status/1948476801824485606
2OEH8eoCRo0•16h ago
cvwright•16h ago
Sanzig•14h ago
This standard targets hardware without AES accelerators like microcontrollers. Now, realistically, ChaCha20-Poly1305 is probably a good fit for most of those use cases, but it's not a NIST standard so from the NIST perspective it doesn't apply. Ascon is also a fair bit faster than ChaCha20 on resource constrained hardware (1.5x-3x based on some benchmarks I found online).
stouset•11h ago
I guess we’ve had quite a few years to improve things.
adrian_b•5h ago
Its slowness in software and quickness in hardware have almost nothing to do with it being sponge-based; they are caused by the Boolean functions executed by the Keccak algorithm, which are easy to implement in hardware but need many instructions on most older CPUs (and far fewer instructions on Armv9 CPUs or AMD/Intel CPUs with AVX-512).
The sponge construction is not inherently slower than the Merkle–Damgård construction. One could reuse the functions iterated by SHA-512 or SHA-256 and reorganize them for use in a sponge-based algorithm, obtaining speeds similar to the standard algorithms.
That is not done because, for a sponge construction, it is better to design a mixing function with a single wide input instead of one with two narrower inputs, as in Merkle–Damgård. It is therefore better to design the function from the beginning to be used inside a sponge construction, rather than to adapt functions intended for other purposes.
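A toy sponge in Python may make the structure clearer: absorb message blocks into the "rate" part of the state with a permutation in between, then squeeze output from the same part. The permutation below is a stand-in (a hash used as a placeholder, not a real permutation), and nothing here is a secure design; only the absorb/permute/squeeze skeleton is the point.

    import hashlib

    RATE, CAPACITY = 16, 16      # bytes; real designs choose these for security
    WIDTH = RATE + CAPACITY

    def permute(state):
        # Stand-in for a real permutation like Keccak-f; a hash truncated
        # to the state width just keeps this toy runnable.
        return hashlib.sha256(state).digest()[:WIDTH]

    def sponge_hash(msg, out_len):
        state = bytes(WIDTH)
        msg += b"\x80" + bytes((-len(msg) - 1) % RATE)   # pad to a whole block
        # Absorb: XOR each RATE-sized block into the outer state, then permute.
        for i in range(0, len(msg), RATE):
            block = msg[i:i + RATE]
            outer = bytes(a ^ b for a, b in zip(state[:RATE], block))
            state = permute(outer + state[RATE:])
        # Squeeze: emit the outer RATE bytes, permuting between outputs.
        out = b""
        while len(out) < out_len:
            out += state[:RATE]
            state = permute(state)
        return out[:out_len]

    print(sponge_hash(b"hello", 32).hex())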
rocqua•16h ago
The point of these primitives is not to trade security for ease of computation. The point is to find alternatives that are just as strong as AES-128 but with less computation. The trade-off is in how long and hard people have tried to break it.
thmsths•16h ago
Taek•16h ago
If you don't have any cycles to spare, you can upgrade to an MCU that does have cycles to spare for less than $0.50 in small batches, and an even smaller price delta for larger batches.
Any device that doesn't use cryptography isn't using it because the manager has specifically de-prioritized it. If you can't afford the $0.50 per device, you probably can't afford the dev that knows his way around cryptography either.
Sanzig•14h ago
Well, no. If you can do 1 AES block per second, that's a throughput of a blazing fast 16 bytes per second.
I know that's a pathological example, but I do understand your point - a typical workload on an MCU won't have to do much more than encrypt a few kilobytes per second for sending some telemetry back to a server. In that case, sure: ChaCha20-Poly1305 and your job is done.
However, what about streaming megabytes per second, such as an uncompressed video stream? In that case, lightweight crypto may start to make sense.
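Rough numbers, with the cycles-per-byte figure being an assumption for illustration rather than a benchmark:

    # Back-of-the-envelope software-crypto throughput on a small MCU.
    clock_hz = 48_000_000        # assumed 48 MHz Cortex-M0-class part
    cycles_per_byte = 40         # assumed cost of a software ChaCha20-Poly1305
    cpu_budget = 0.10            # fraction of the CPU you can spend on crypto

    throughput_mb_s = clock_hz * cpu_budget / cycles_per_byte / 1e6
    print(f"{throughput_mb_s:.2f} MB/s")  # ~0.12 MB/s: plenty for telemetry,
                                          # nowhere near megabytes per second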
tptacek•14h ago
Taek•14h ago
An uncompressed video stream at 240p, 24 frames per second is 60 mbps, not really something an IoT device can handle. And if the video is compressed, decompression is going to be significantly more expensive than AES - adding encryption is not a meaningful computational overhead.
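For anyone checking the arithmetic on that figure:

    # Uncompressed 240p video: 426x240 pixels, 24-bit color, 24 fps.
    mbps = 426 * 240 * 24 * 24 / 1e6
    print(mbps)  # ~58.9 Mbps, i.e. roughly the 60 mbps quoted above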
HeatrayEnjoyer•13h ago
dylan604•12h ago
yardstick•10h ago
Eg a small edge gateway could be doing the VPN, while the end device is decoding the video.
dylan604•49m ago
dylan604•12h ago
Modern encrypted streaming uses pre-existing compressed video, where the packets are encrypted on the way to you by the streaming server. The server has to uniquely encrypt the data being sent to every single user hitting it, so it's not a one-and-done thing: it's every bit of data for every user, which scales to a lot of CPU on the server side. Yes, on the receiving side, where your device only deals with a single stream, more CPU cycles will be spent decompressing the video than decrypting it. But again, that's only half of the encrypt/decrypt cycle.
torgoguys•11h ago
>1 operation per second would refer to cryptographic signatures. If you are doing Chacha, the speeds are more like 1 mbps. AES is probably closer to 400 kbps.
It sounds to me like you, sir or madame, have not worked with truly tiny MCUs. :-)
But yes, there are inexpensive MCUs where you can do quite a bit of crypto in software at decent speeds.
wakawaka28•14h ago
The monetary cost is most likely not the problem. Tacking on significant additional work is bound to consume more power and generate heat. Tiny devices often have thermal and power limits to consider.
adgjlsfhk1•11h ago
tptacek•16h ago
Taek•15h ago
tptacek•14h ago
Hardware capabilities vary widely; there isn't one optimal algorithm that fits (or, in the case of MCUs, is even viable) on every platform.
What's worse, efforts to shoehorn front-line mainstream constructions onto MCUs often result in insecure implementations, because, especially without hardware support (like carryless multiplication instructions), it's very difficult to get viable performance without introducing side channels.
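A toy illustration of the access-pattern problem (not a real cipher, just the shape of the leak): a table lookup indexed by a secret byte touches memory depending on the secret, which is what cache-timing attacks on table-based AES exploit, while a constant-time version scans the whole table regardless.

    SBOX = list(range(256))  # stand-in for a cipher's S-box table

    def leaky_lookup(secret_byte):
        # Which cache line gets touched depends on the secret byte:
        # a classic timing/cache side channel for table-based AES.
        return SBOX[secret_byte]

    def constant_time_lookup(secret_byte):
        # Read the entire table and select with a mask, so the memory
        # access pattern is independent of the secret. (Python only
        # illustrates the idea; real constant-time code needs C/asm.)
        result = 0
        for i, v in enumerate(SBOX):
            mask = -(i == secret_byte) & 0xFF  # 0xFF on match, else 0x00
            result |= v & mask
        return result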
throw0101a•28m ago
Depends on what you mean by "performance". It could be latency: high frequency traders (HFTs) could probably be happy with their order data being protected for "only" an hour if it means dropping latency from (e.g.) 42 nanoseconds down to 24.
For some trading platforms, an hour ago is as stale as a decade ago.
mananaysiempre•2h ago
I’m guessing that the chip area of hardware AES is utterly inconsequential compared to all the peripherals you get on a modern micro, but the manufacturers are going to keep charging specialized-applications money for that until we’re all on 32-bit ARMs with multipliers and ChaCha becomes viable.
ebiederm•15h ago
When dealing with cryptography it is always necessary to remember cryptography is developed and operates in an adversarial environment.
Sanzig•14h ago
tptacek•14h ago
Sanzig•14h ago
tptacek•14h ago
adgjlsfhk1•11h ago
anfilt•11h ago
I think the biggest problem is how they went about trying to standardize it back in the day.
tptacek•16h ago
(I'm not dissing Green here; I'm saying, I don't think he meant that statement to be proxied to a generalist audience as an effective summation of lightweight cryptography).
adastra22•15h ago
If someone is telling you that we need a new, faster standard for cryptography, and the selling point is "faster," you'd better wonder why that wasn't the standard already in use. If there is not some novel, brand new algorithm being employed, the answer is that it is insecure. Or at least that it doesn't meet the level of security for general use, which to a cryptographer is approximately the same thing.
tptacek•14h ago
No, this is not at all true.
wmf•14h ago
Because the field of cryptography advances? You could have made the same argument about Salsa/ChaCha but those are great ciphers. And now we have Ascon which has the same security level but I guess is even faster.
adgjlsfhk1•11h ago
tptacek•11h ago
stouset•11h ago
tptacek•11h ago
stouset•9h ago
throw0101c•11h ago
Not everything needs to be as strong as AES, just "strong enough" for the purpose.
Heck, the IETF has published TLS cipher suites with zero encryption, "TLS 1.3 Authentication and Integrity-Only Cipher Suites":
* https://datatracker.ietf.org/doc/html/rfc9150
Lightweight cryptography could be a step between the above zero and the 'heavyweight' ciphers like AES.
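In miniature, integrity-only protection is just a MAC over plaintext; a sketch with the Python standard library (conceptual only, not the RFC 9150 wire format):

    import hmac, hashlib

    key = b"shared-secret-key"   # hypothetical key, established out of band
    msg = b"valve_3:open"        # sent in the clear; anyone on the path can read it

    tag = hmac.new(key, msg, hashlib.sha256).digest()
    # The receiver recomputes the tag and compares in constant time:
    ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
    print(ok)  # tampering would be detected, but there is zero confidentiality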
thyristan•7h ago
Same thing with weaker ciphers. They are a target to downgrade to, if an attacker wishes to break into your connection.
Dylan16807•4h ago
Intended... Do any experts think that? Can you cite a couple? Or direct evidence of course.
Unless I'm missing a joke.
thyristan•3h ago
https://www.nist.gov/news-events/news/2024/01/new-nccoe-guid... with associated HN discussion https://news.ycombinator.com/item?id=39849754
https://www.rfc-editor.org/rfc/rfc9150.html was the one reintroducing NULL ciphers into TLS 1.3. RFC 9150 is written by Cisco and ODVA, who previously made a fortune with TLS interception/decryption/MitM gear, selling to enterprises as well as (most probably, Cisco has been a long-time bedmate of the US gov) spying governments. The RFC weakly claims "IoT" as the intended audience due to cipher overhead; however, that is extremely hard to believe. They still do SHA-256 for integrity, they still do the whole very complicated and expensive TLS dance, but then skip encryption and break half the protocol on the way (since stuff like TLS 1.3 0-RTT needs confidentiality). So why do all the expensive TLS dance at all, when you could just slap a cheaper HMAC on each message and be done? The only sensible reason is that you want to have something in TLS to downgrade to.
Dylan16807•2h ago
thyristan•2h ago
And all the earlier weaker ciphers were explicit device configuration as well. You could configure your webserver or client not to use them. But the problem is that there are easy accidental misconfigurations like "cipher-suite: ALL", well-intended misconfigurations like "we want to claim IoT support in marketing, so we need to enable IoT-'ciphers' by default!", and the sneaky underhanded versions of the aforementioned accidents. Proper design would just not create a product that can be mishandled, and early TLS 1.3 had that property (at least with regard to cipher selection). Now it's back to "hope your config is sane" and "hope your vendor didn't screw up". Which is exactly what malicious people need to hide their intent and get their decryption backdoors in.
Dylan16807•1h ago
> well-intended misconfigurations like "we want to claim IoT support in marketing, so we need to enable IoT-'ciphers' by default!" and the sneaky underhanded versions of the aforementioned accidents
Maybe... This still feels like a thing that's only going to show up on local networks and you don't need attacks for local monitoring. Removing encryption across the Internet requires very special circumstances and also lets too many people in.
tptacek•1h ago
yardstick•10h ago
Tech politics comes into it.
TJSomething•9h ago
ifwinterco•7h ago
ignoramous•12h ago
Google needed cryptography faster than standard AES for File-Based Encryption on Android Go (low-cost) devices: https://tosc.iacr.org/index.php/ToSC/article/view/7360 / https://security.googleblog.com/2019/02/introducing-adiantum...
tptacek•12h ago
aseipp•11h ago
> If there is not some novel, brand new algorithm being employed, the answer is that it is insecure.
Lol, that is just not true at all. A major point of discussion when NIST announced Keccak as the SHA-3 winner back in ~2012 was that BLAKE1 at the time offered significantly better software performance, which was considered an important practical reality, and was faster than SHA-2 at a higher (if only insignificantly so) security margin; their own report admitted as much. The BLAKE1 family is still considered secure today; its internal HAIFA design is very close to existing known designs like Merkle–Damgård, and it isn't some radically new thing.
So why did they pick Keccak? Because they figured that SHA-2 was plenty good and already deployed widely, so "SHA-2 but a little faster" was not as compelling as a standard that complemented it in hardware; they also liked Keccak's sponge design, which was new and novel at the time and allowed AEAD, domain separation, etc. They admit ultimately any finalist, including BLAKE, would have been a good pick. You can go read all of this yourself. The Keccak team even has newer work on more performant sponge-inspired designs, such as Farfalle and deck functions.
The reality is that some standards are chosen for a bunch of reasons and performance is only one of them, though very important. But there's plenty of room for non-standard things, too.
AyyEye•9h ago
That is not even remotely significant. Facebook spends 25% (1 out of every 4) of my CPU cycles on tracking. Pretty much anything else they optimize (are they still using Python and PHP?) will have a bigger impact.
ifwinterco•7h ago
fsflover•6h ago
Telemakhos•2h ago
That's their core business.
zarzavat•8h ago
IIRC, Keccak had a smaller chip area than BLAKE. Hardware performance is more important than software performance if the algorithm is likely to be implemented in hardware, which is a good assumption for a NIST standard. Of course, SHA3 hasn't taken off yet, but that's more to do with how good SHA2 is.
> The BLAKE1 family is still considered secure today; its internal HAIFA design is very close to existing known designs like Merkle–Damgård, and it isn't some radically new thing.
Given that the purpose of the competition was to replace SHA2 if/when it is weakened, choosing a similar construction would not have been the right choice.
adrian_b•5h ago
jona-f•9h ago
adrian_b•5h ago
It is intended only for the microcontrollers embedded in various systems, e.g. those in a car or a robot that automate various low-level functions (not the overall system control), or those in sensors or appliances.
It is expected that the data exchanged by such microcontrollers is valuable only if it can be deciphered in real time.
Even if an attacker could decipher the recorded encrypted data by brute force after a month, or even after a week, the data is expected to be useless by then. Otherwise, standard cryptography must be used.
conradev•8h ago
My desktop CPU has AES in hardware, so it’s fast enough to just run AES.
My phone’s ARM CPU doesn’t have AES in hardware, so it’s not fast enough. ChaCha20 is fast enough, though, and especially with the SIMD support on most ARM processors.
All this paper is saying is that ChaCha20 is not fast enough for some devices, and so folks had to put in intellectual effort to make a new thing that is.
But even further: everyone’s definition for “fast enough” is different. Cycles per byte matters more if you encrypt a lot of bytes.
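If you want to see what "fast enough" means on your own hardware, a quick micro-benchmark sketch (assumes the third-party `cryptography` package; with hardware AES the AES-GCM number usually wins, without it ChaCha20-Poly1305 usually does):

    # Requires: pip install cryptography
    import os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    data = os.urandom(1 << 20)  # 1 MiB of input
    for name, aead in (("AES-256-GCM", AESGCM(os.urandom(32))),
                       ("ChaCha20-Poly1305", ChaCha20Poly1305(os.urandom(32)))):
        nonce = os.urandom(12)  # reused across iterations: OK only for a benchmark
        start = time.perf_counter()
        for _ in range(50):
            aead.encrypt(nonce, data, None)
        elapsed = time.perf_counter() - start
        print(f"{name}: {50 / elapsed:.0f} MiB/s")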
adrian_b•5h ago
"Lightweight" cryptography is not intended for something as powerful as a smartphone, but only for microcontrollers that are embedded in small appliances, e.g. sensors that transmit wirelessly the acquired data.
throw0101c•3h ago
I remember when Sun announced the UltraSPARC T2 in 2007 which had on-die hardware for (3)DES, AES, SHA, RSA, etc:
* https://en.wikipedia.org/wiki/UltraSPARC_T2
(It also had two 10 GigE modules right on the chip.)
throw0101a•44m ago
Or we've had enough "spare" transistors and die space to devote some to crypto, hashing, and checksumming instructions. I remember the splash Sun made when they announced on-die crypto hardware in 2007 (as well as on-die 10 GigE):
* https://en.wikipedia.org/wiki/UltraSPARC_T2
* PDF: https://www.oracle.com/technetwork/server-storage/solaris/do...
glitchc•12h ago
If "every little bit helps" is true for the environment, it's also true for cryptography, and vice versa.
theteapot•11h ago
stouset•11h ago
No, not really.
Algorithms tend to fall pretty squarely in either the “prevent your sibling from reading your diary” or the “prevent the NSA and Mossad from reading your Internet traffic” camps.
Computers get faster every year, so a cipher with a very narrow safety margin will tend to become completely broken rapidly.
adrian_b•4h ago
Some things must be encrypted well enough that even if the NSA records them now, it will not be able to decipher them 10 or 20 years later.
Other things must be encrypted only well enough that nobody will be able to decipher them close to real time. If the adversaries decipher them by brute force after a week, the data will have become useless by then.
Lightweight cryptography is for this latter use case.
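The arithmetic behind that trade-off, with the attacker's key-search rate as a deliberately generous assumption:

    # Expected brute-force time = half the keyspace / attacker's keyrate.
    keys_per_second = 2 ** 60   # assumed, absurdly powerful adversary
    for bits in (80, 128):
        days = 2 ** (bits - 1) / keys_per_second / 86400
        print(f"{bits}-bit key: {days:.1e} days")
    # 80-bit key:  ~6 days      -- gone within the week
    # 128-bit key: ~1.7e15 days -- far beyond any data lifetime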
throw0101c•11h ago
* https://datatracker.ietf.org/doc/html/rfc9150
Lightweight cryptography could be a step between the above zero and the 'heavyweight' ciphers like AES.
corranh•9h ago
The question is whether we should switch from some ridiculously insecure crappy crypto on a $3 part to this better lightweight crypto implementation.
Yeah, we probably should; it's better than what we had, right?
npteljes•54m ago
"Elliptic Curve Cryptography"
"Post-quantum cryptography"
"Public-key cryptography"
https://en.wikipedia.org/wiki/Cryptography
yalogin•33m ago
I haven't looked at the lightweight crypto proposals, but if NIST is proposing it I am optimistic. If the proposals from various cryptographers around the world made it through the whole process, it's pretty good.