Hence I'd be a lot more enthusiastic about NIST guidance on these points.
I'd like to see more devices able to pair with NFC, but even that's standardised for Bluetooth, just underused.
I have a few of these ESP-8266 remotes, and battery life is fine. https://github.com/mcer12/Hugo-ESP8266
The initial state of ChaCha20 also has at most 320 unknown bits (512 bits - 128 constant bits - 64 nonce bits). Actually, you normally also know the counter, so there are only 256 unknown bits.
Of course the actual strength of a cipher cannot exceed the size of its state, but the design strength of this cipher is much lower than that anyway: it competes with AES-128, which is designed for 128-bit strength.
320 bits of state is more than enough for a cipher that must have a 128-bit strength, or even for a cipher designed for a 256-bit strength, like AES-256 or ChaCha20.
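To restate that arithmetic, here is a minimal sketch (assuming the original ChaCha20 layout with a 64-bit counter and a 64-bit nonce; purely illustrative):

    # Accounting for the 512-bit ChaCha20 initial state.
    constant_bits = 4 * 32   # the "expand 32-byte k" constants
    key_bits      = 8 * 32   # 256-bit key
    counter_bits  = 2 * 32   # block counter, normally known (starts at 0 or 1)
    nonce_bits    = 2 * 32   # nonce, sent in the clear

    total_state   = constant_bits + key_bits + counter_bits + nonce_bits  # 512
    unknown_max   = total_state - constant_bits - nonce_bits              # 320
    unknown_usual = unknown_max - counter_bits                            # 256
    print(total_state, unknown_max, unknown_usual)                        # 512 320 256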
For a secure hash function, the capacity should be at least twice the security target; that is, for 128-bit security you need 256 bits of capacity. ASCON hash uses a 256-bit capacity and a 320 - 256 = 64-bit rate, so to get a 32-byte hash of an 8-byte string (without padding), you'll need to do at least 4 permutations.
If you can design a secure permutation that permutes 257 bits, you can make a secure, but impractical hash function from it by setting the rate to 1 bit.
For the duplex mode that's used for authenticated encryption, capacity can be lower, because it's keyed -- it's 192 bits in ASCON.
This assumes the permutation of the 320-bit state itself is secure, of course.
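Here is a rough sketch of that permutation count, treating the sponge generically (rate and output size as parameters, padding ignored as above; the helper name is made up):

    import math

    def sponge_permutation_count(rate_bits, msg_bits, out_bits):
        # One permutation per absorbed rate-sized block, plus one per
        # additional squeezed block after the first (padding ignored).
        absorb_calls  = max(1, math.ceil(msg_bits / rate_bits))
        squeeze_calls = math.ceil(out_bits / rate_bits) - 1
        return absorb_calls + squeeze_calls

    # Ascon-Hash-like parameters: 320-bit state = 64-bit rate + 256-bit capacity.
    # 8-byte message, 32-byte digest -> 1 + 3 = 4 permutations, as stated above.
    print(sponge_permutation_count(rate_bits=64, msg_bits=64, out_bits=256))  # 4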
This cipher is a lot heavier.
This is a bad analogy, even if it's apt. It only holds in the sense that one encryption scheme can be technically weaker than another yet still be plenty secure, as long as it does the job and hasn't been broken.
For example, part of the spec is simply hashing and packet signing... not necessarily making the packets secret, but authenticating the origin/source. It isn't necessarily about creating the most secure vault in the world, but about securing what might otherwise be completely insecure channels of communication/attack. It's also not about a $1000 smartphone, but about a microcontroller that costs a fraction of a dollar, in devices whose total BoM is very small or low-cost.
loeg•5mo ago
https://x.com/matthew_d_green/status/1948476801824485606
2OEH8eoCRo0•5mo ago
cvwright•5mo ago
Sanzig•5mo ago
This standard targets hardware without AES accelerators like microcontrollers. Now, realistically, ChaCha20-Poly1305 is probably a good fit for most of those use cases, but it's not a NIST standard so from the NIST perspective it doesn't apply. Ascon is also a fair bit faster than ChaCha20 on resource constrained hardware (1.5x-3x based on some benchmarks I found online).
stouset•5mo ago
I guess we’ve had quite a few years to improve things.
adrian_b•5mo ago
Its slowness in software and quickness in hardware have almost nothing to do with it being sponge-based; they are caused by the Boolean functions executed by the Keccak algorithm, which are easy to implement in hardware but need many instructions on most older CPUs (far fewer instructions on Armv9 CPUs or on AMD/Intel CPUs with AVX-512).
The sponge construction is not inherently slower than the Merkle–Damgård construction. One could reuse the functions iterated by SHA-512 or by SHA-256 and reorganize them for use in a sponge-based algorithm, obtaining speeds similar to the standard algorithms.
That is not done because for the sponge construction it is better to design a mixing function with a single wider input instead of a mixing function with 2 narrower inputs, like for Merkle–Damgård. Therefore it is better to design the function from the beginning for being used inside a sponge construction, instead of trying to adapt functions intended for other purposes.
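A minimal structural sketch of the difference, with placeholder mixing functions (neither is real SHA-2 or Keccak code; SHA-256 just stands in so the snippet runs):

    import hashlib  # only to provide stand-in mixing functions

    def compress(chaining_value: bytes, block: bytes) -> bytes:
        # Placeholder two-input compression function (Merkle-Damgard interface).
        return hashlib.sha256(chaining_value + block).digest()

    def permute(state: bytes) -> bytes:
        # Placeholder single-wide-input "permutation" (sponge interface);
        # not length-preserving, so the state below is kept at 32 bytes.
        return hashlib.sha256(state).digest()

    def md_hash(blocks, iv):
        cv = iv
        for b in blocks:            # mix the chaining value with each new block
            cv = compress(cv, b)
        return cv

    def sponge_hash(blocks, state, rate_bytes):
        for b in blocks:            # XOR the block into the rate part, permute the whole state
            mixed = bytes(x ^ y for x, y in zip(state[:rate_bytes], b)) + state[rate_bytes:]
            state = permute(mixed)
        return state[:rate_bytes]   # squeeze a single output block

    md_hash([b"A" * 64, b"B" * 64], iv=bytes(32))
    sponge_hash([b"A" * 8, b"B" * 8], state=bytes(32), rate_bytes=8)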
rocqua•5mo ago
The point of these primitives is not to trade security for ease of computation. The point is to find alternatives that are just as strong as AES-128 but with less computation. The trade-off is in how long and hard people have tried to break it.
thmsths•5mo ago
Taek•5mo ago
If you don't have any cycles to spare, you can upgrade to an MCU that does have cycles to spare for less than $0.50 in small batches, and an even smaller price delta for larger batches.
Any device that doesn't use cryptography isn't using it because the manager has specifically de-prioritized it. If you can't afford the $0.50 per device, you probably can't afford the dev that knows his way around cryptography either.
Sanzig•5mo ago
Well, no. If you can do 1 AES block per second, that's a throughput of a blazing fast 16 bytes per second.
I know that's a pathological example, but I do understand your point - a typical workload on an MCU won't have to do much more than encrypt a few kilobytes per second for sending some telemetry back to a server. In that case, sure: ChaCha20-Poly1305 and your job is done.
However, what about streaming megabytes per second, such as an uncompressed video stream? In that case, lightweight crypto may start to make sense.
tptacek•5mo ago
Taek•5mo ago
An uncompressed video stream at 240p, 24 frames per second is 60 Mbps, not really something an IoT device can handle. And if the video is compressed, decompression is going to be significantly more expensive than AES - adding encryption is not a meaningful computational overhead.
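The arithmetic behind that figure, as a quick sanity check (assuming 24-bit colour and a 426x240 frame):

    # Uncompressed 240p video, 24-bit colour, 24 frames per second.
    width, height  = 426, 240
    bits_per_pixel = 24
    frames_per_sec = 24
    bitrate = width * height * bits_per_pixel * frames_per_sec
    print(bitrate / 1e6, "Mbps")  # ~58.9 Mbps, i.e. roughly 60 Mbps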
HeatrayEnjoyer•5mo ago
dylan604•5mo ago
yardstick•5mo ago
Eg a small edge gateway could be doing the VPN, while the end device is decoding the video.
dylan604•5mo ago
yardstick•5mo ago
dylan604•5mo ago
Modern encrypted streaming uses pre-existing compressed video, where the packets are encrypted on the way to you by the streaming server. The server has to uniquely encrypt the data being sent to every single user hitting it, so it's not just a one-and-done operation: it's every bit of data for every user, which scales to a lot of CPU on the server side. Yes, on the receiving side, where your device is only dealing with the one single stream, more CPU cycles will be spent decompressing the video than decrypting it. But again, that's only half of the encrypt/decrypt cycle.
torgoguys•5mo ago
>1 operation per second would refer to cryptographic signatures. If you are doing Chacha, the speeds are more like 1 mbps. AES is probably closer to 400 kbps.
It sounds to me like you, sir or madame, have not worked with truly tiny MCUs. :-)
But yes, there are inexpensive MCUs where you can do quite a bit of crypto in software at decent speeds.
wakawaka28•5mo ago
The monetary cost is most likely not the problem. Tacking on significant additional work is bound to consume more power and generate heat. Tiny devices often have thermal and power limits to consider.
adgjlsfhk1•5mo ago
wakawaka28•5mo ago
tptacek•5mo ago
Taek•5mo ago
tptacek•5mo ago
Hardware capabilities vary widely; there isn't one optimal algorithm that fits (or, in the case of MCUs, is even viable) on every platform.
What's worse, efforts to shoehorn front-line mainstream constructions onto MCUs often result in insecure implementations, because, especially without hardware support (like carryless multiplication instructions), it's very difficult to get viable performance without introducing side channels.
throw0101a•5mo ago
Depends on what you mean by "performance". It could be latency: high frequency traders (HFTs) could probably be happy if their order data is protected for "only" an hour if it means dropping latency from (e.g.) 42 nanoseconds down to 24.
For some trading platforms, data from an hour ago is as stale as data from a decade ago.
mananaysiempre•5mo ago
I’m guessing that the chip area of hardware AES is utterly inconsequential compared to all the peripherals you get on a modern micro, but the manufacturers are going to keep charging specialized-applications money for that until we’re all on 32-bit ARMs with multipliers and ChaCha becomes viable.
ebiederm•5mo ago
When dealing with cryptography it is always necessary to remember cryptography is developed and operates in an adversarial environment.
Sanzig•5mo ago
tptacek•5mo ago
Sanzig•5mo ago
tptacek•5mo ago
adgjlsfhk1•5mo ago
torgoguys•5mo ago
I quite liked the remarkable simplicity of Speck. Performance was better than Ascon in my limited testing. It seems like it should be smaller on-die or in bytes of code, and with possibly lower power consumption. And round key generation was possible to compute on-the-fly (reusing the round code!) for truly tiny processors.
anfilt•5mo ago
I think the biggest problem is how they went about trying to standardize it back in the day.
tptacek•5mo ago
(I'm not dissing Green here; I'm saying, I don't think he meant that statement to be proxied to a generalist audience as an effective summation of lightweight cryptography).
adastra22•5mo ago
If someone is telling you that we need a new, faster standard for cryptography, and the selling point is "faster," you'd better wonder why that wasn't the standard already in use. If no novel, brand-new algorithm is being employed, the answer is that it is insecure. Or at least that it doesn't meet the level of security for general use, which to a cryptographer is approximately the same thing.
tptacek•5mo ago
No, this is not at all true.
wmf•5mo ago
Because the field of cryptography advances? You could have made the same argument about Salsa/ChaCha but those are great ciphers. And now we have Ascon which has the same security level but I guess is even faster.
adgjlsfhk1•5mo ago
tptacek•5mo ago
stouset•5mo ago
tptacek•5mo ago
stouset•5mo ago
throw0101c•5mo ago
Not everything needs to be as strong as AES, just "strong enough" for the purpose.
Heck, the IETF has published TLS cipher suites with zero encryption, "TLS 1.3 Authentication and Integrity-Only Cipher Suites":
* https://datatracker.ietf.org/doc/html/rfc9150
Lightweight cryptography could be a step between the above zero and the 'heavyweight' ciphers like AES.
thyristan•5mo ago
Same thing with weaker ciphers. They are a target to downgrade to, if an attacker wishes to break into your connection.
Dylan16807•5mo ago
Intended... Do any experts think that? Can you cite a couple? Or direct evidence of course.
Unless I'm missing a joke.
thyristan•5mo ago
https://www.nist.gov/news-events/news/2024/01/new-nccoe-guid... with associated HN discussion https://news.ycombinator.com/item?id=39849754
https://www.rfc-editor.org/rfc/rfc9150.html was the one reintroducing NULL ciphers into TLS 1.3. RFC 9150 is written by Cisco and ODVA, who previously made a fortune with TLS interception/decryption/MitM gear, selling to enterprises as well as (most probably, Cisco has been a long-time bedmate of the US gov) spying governments. The RFC weakly claims "IoT" as the intended audience due to cipher overhead; however, that is extremely hard to believe. They still do SHA-256 for integrity, they still do the whole very complicated and expensive TLS dance, but then skip encryption and break half the protocol on the way (since stuff like TLS 1.3 0-RTT needs confidentiality). So why do the expensive TLS dance at all when you could just slap a cheaper HMAC on each message and be done? The only sensible reason is that you want to have something in TLS to downgrade to.
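For what it's worth, the "cheaper HMAC on each message" alternative is about this much code (a hypothetical pre-shared-key scheme, not anything defined by RFC 9150; the key and message are made up):

    import hashlib
    import hmac

    psk = b"hypothetical pre-shared key, provisioned out of band"
    message = b'{"sensor": 7, "temp_c": 21.5}'

    # Sender: append an HMAC-SHA256 tag -- integrity and origin authentication,
    # no confidentiality, no handshake.
    tag = hmac.new(psk, message, hashlib.sha256).digest()
    packet = message + tag

    # Receiver: recompute the tag and compare in constant time.
    msg, recv_tag = packet[:-32], packet[-32:]
    ok = hmac.compare_digest(hmac.new(psk, msg, hashlib.sha256).digest(), recv_tag)
    print(ok)  # True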
Dylan16807•5mo ago
thyristan•5mo ago
And all the earlier weaker ciphers were explicit device configuration as well. You could configure your webserver or client not to use them. But the problem is that there are easy accidental misconfigurations like "cipher-suite: ALL", well-intended misconfigurations like "we want to claim IoT support in marketing, so we need to enable IoT 'ciphers' by default!", and the sneaky underhanded versions of the aforementioned accidents. Proper design would simply not create a product that can be mishandled, and early TLS 1.3 had that property (at least with regard to cipher selection). Now it's back to "hope your config is sane" and "hope your vendor didn't screw up", which is exactly what malicious people need to hide their intent and get their decryption backdoors in.
Dylan16807•5mo ago
> well-intended misconfigurations like "we want to claim IoT support in marketing, so we need to enable IoT 'ciphers' by default!" and the sneaky underhanded versions of the aforementioned accidents
Maybe... This still feels like a thing that's only going to show up on local networks and you don't need attacks for local monitoring. Removing encryption across the Internet requires very special circumstances and also lets too many people in.
tptacek•5mo ago
thyristan•5mo ago
I don't understand how this isn't obvious. Unencrypted means it is monitorable.
tptacek•5mo ago
yardstick•5mo ago
Tech Politics comes into it.
TJSomething•5mo ago
ifwinterco•5mo ago
ignoramous•5mo ago
Google needed faster than standard AES cryptography for File-based Encryption on Android Go (low cost) devices: https://tosc.iacr.org/index.php/ToSC/article/view/7360 / https://security.googleblog.com/2019/02/introducing-adiantum...
tptacek•5mo ago
aseipp•5mo ago
> If there is not some novel, brand new algorithm that is being employed, the answer is because it is insecure.
Lol, that is just not true at all. A major point of discussion when NIST announced Keccak as the SHA-3 winner back in ~2012 was that BLAKE1 at the time offered significantly better software performance, which was considered an important practical reality, and was faster than SHA-2 at a higher (though not significantly so) security margin; their own report admitted as much. The BLAKE1 family is still considered secure today, its internal HAIFA design is very close to existing known designs like Merkle–Damgård, and it isn't some radically new thing.
So why did they pick Keccak? Because they figured that SHA-2 was plenty good and already deployed widely, so "SHA-2 but a little faster" was not as compelling as a standard that complemented it in hardware; they also liked Keccak's unique sponge design, which was new and novel at the time and allowed AEAD, domain separation, etc. They admit that ultimately any finalist, including BLAKE, would have been a good pick. You can go read all of this yourself. The Keccak team even has newer work on more performant sponge-inspired designs, such as Farfalle and deck functions.
The reality is that some standards are chosen for a bunch of reasons and performance is only one of them, though very important. But there's plenty of room for non-standard things, too.
AyyEye•5mo ago
That is not even remotely significant. Facebook spends 25% (1 out of every 4) of my CPU cycles on tracking. Pretty much anything else they optimize (are they still using Python and PHP?) will have a bigger impact.
ifwinterco•5mo ago
fsflover•5mo ago
Telemakhos•5mo ago
That's their core business.
zarzavat•5mo ago
IIRC, Keccak had a smaller chip area than BLAKE. Hardware performance is more important than software performance if the algorithm is likely to be implemented in hardware, which is a good assumption for a NIST standard. Of course, SHA3 hasn't taken off yet, but that's more to do with how good SHA2 is.
> BLAKE1 family is still considered secure today, its internal HAIFA design is very close to existing known designs like Merkle–Damgård, it isn't some radically new thing.
Given that the purpose of the competition was to replace SHA2 if/when it is weakened, choosing a similar construction would not have been the right choice.
aseipp•5mo ago
I don't think that's necessarily a given at all, but I grant that's mostly a matter of opinion I guess.
> Given that the purpose of the competition was to replace SHA2 if/when it is weakened
I think the dirty secret hiding there is that there is very little actual expectation that SHA-2 will ever be broken. Assuming it can be and picking a different secure construction, of course, is a good idea. But even the designers of BLAKE have admitted as much, and so did NIST.
adrian_b•5mo ago
jona-f•5mo ago
adrian_b•5mo ago
It is intended only for microcontrollers embedded in various systems, e.g. the microcontrollers from a car or from a robot that automate various low-level functions (not the general system control), or from various sensors or appliances.
It is expected that the data exchanged by such microcontrollers is valuable only if it can be deciphered in real time.
If an attacker were able to decipher the recorded encrypted data by brute force after a month, or even after a week, it is expected that the data would be useless by then. Otherwise, standard cryptography must be used.
tptacek•5mo ago
conradev•5mo ago
My desktop CPU has AES in hardware, so it’s fast enough to just run AES.
My phone’s ARM CPU doesn’t have AES in hardware, so it’s not fast enough. ChaCha20 is fast enough, though, especially with the SIMD support on most ARM processors.
All this paper is saying is that ChaCha20 is not fast enough for some devices, and so folks had to put in intellectual effort to make a new thing that is.
But even further: everyone’s definition for “fast enough” is different. Cycles per byte matters more if you encrypt a lot of bytes.
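A rough way to put numbers on "fast enough" in those terms (the cycles-per-byte and clock figures below are made up for illustration, not benchmarks):

    # Time spent on crypto for a payload, given cycles/byte and clock speed.
    def seconds_to_encrypt(payload_bytes, cycles_per_byte, clock_hz):
        return payload_bytes * cycles_per_byte / clock_hz

    # A 48 MHz microcontroller running a software cipher at ~30 cycles/byte:
    print(seconds_to_encrypt(1_000_000, 30, 48_000_000))    # ~0.625 s per MB
    # A 3 GHz desktop core with hardware AES at ~1 cycle/byte:
    print(seconds_to_encrypt(1_000_000, 1, 3_000_000_000))  # ~0.0003 s per MB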
adrian_b•5mo ago
"Lightweight" cryptography is not intended for something as powerful as a smartphone, but only for microcontrollers that are embedded in small appliances, e.g. sensors that transmit wirelessly the acquired data.
throw0101c•5mo ago
I remember when Sun announced the UltraSPARC T2 in 2007 which had on-die hardware for (3)DES, AES, SHA, RSA, etc:
* https://en.wikipedia.org/wiki/UltraSPARC_T2
(It also had two 10 GigE modules right on the chip.)
ack_complete•5mo ago
adrian_b•5mo ago
However the poster above was talking about smartphone Arm-based CPUs. I doubt that there has ever existed a 64-bit ARM-based CPU for smartphones that did not implement the cryptography extension. Even the CPUs having only Cortex-A53 cores that I am aware of, made by some Chinese companies for extremely cheap mobile phones, had this extension.
conradev•5mo ago
Newer ones have the Qualcomm 215, which, yes, is 64-bit 4x A53
From that perspective, LWC is only useful on old (existing?) microcontrollers: the Cortex-A320 that came out this year is 64-bit.
Hardware cycles take time, though, and it will be some time before everything is 64-bit!
throw0101a•5mo ago
Or we've had enough "spare" transistors and die space to devote some to crypto, hashing, and checksumming instructions. I remember the splash Sun made when they announced on-die crypto hardware in 2007 (as well as on-die 10 GigE):
* https://en.wikipedia.org/wiki/UltraSPARC_T2
* PDF: https://www.oracle.com/technetwork/server-storage/solaris/do...
glitchc•5mo ago
If "every little bit helps" is true for the environment, it's also true for cryptography, and vice versa.
theteapot•5mo ago
stouset•5mo ago
No, not really.
Algorithms tend to fall pretty squarely in either the “prevent your sibling from reading your diary” or the “prevent the NSA and Mossad from reading your Internet traffic” camps.
Computers get faster every year, so a cipher with a very narrow safety margin will tend to become completely broken rapidly.
adrian_b•5mo ago
Some things must be encrypted well enough so that even if NSA records them now, even 10 years or 20 years later they will not be able to decipher them.
Other things must be encrypted only well enough so that nobody will be able to decipher them close to real time. If the adversaries decipher them by brute force after a week, the data will become useless by that time.
Lightweight cryptography is for this latter use case.
adgjlsfhk1•5mo ago
staplefire•5mo ago
Playing close to the margin is super dangerous.
throw0101c•5mo ago
* https://datatracker.ietf.org/doc/html/rfc9150
Lightweight cryptography could be a step between the above zero and the 'heavyweight' ciphers like AES.
corranh•5mo ago
The question is whether we should switch from some ridiculously insecure, crappy crypto on a $3 part to this better lightweight crypto implementation.
Yeah, we probably should; it's better than what we had, right?
riedel•5mo ago
npteljes•5mo ago
"Elliptic Curve Cryptography"
"Post-quantum cryptography"
"Public-key cryptography"
https://en.wikipedia.org/wiki/Cryptography
yalogin•5mo ago
I haven’t looked at the lightweight crypto proposals, but if NIST is proposing it I am optimistic. If proposals from various cryptographers around the world made it through the whole process, that's a pretty good sign.
npteljes•5mo ago
However, if we shift the focus to marketing, "lightweight" works nothing like that. There it's used to imply that the product is fast, lean, to the point, contains nothing unnecessary, has fewer but solid options, and is energetic and aspiring - implying that the other products are heavier, bulkier, cumbersome, bloated, slow, encumbered, overly complex.
It's already used in for example LDAP, which I don't think anyone scoffs at (well, with regards to the lightweight qualifier), and lightweight software has its own wiki page, implying that the usage of the term wrt/ software is widespread enough to be notable. They write that this means "a computer program that is designed to have a small memory footprint, low CPU usage, overall a low usage of system resources".
All this to write that lightweight conveys a good meaning indeed. About the only widespread negative usage I can think of is when people call others "lightweights", implying that they performed below expectations.
yalogin•5mo ago
For me, if it comes from NIST, I trust it (even with their past shenanigans), because I know the process they use and also because they only produce these standards to be used for federal government software. So they will do a good job.