Yesterday, Daniel J. Bernstein published a paper alleging that Kyber-512, an encryption algorithm selected for standardization in the NIST post-quantum process, isn't nearly as secure as its stewards say.
Does 2^40 plus 2^40 equal 2^80? No, obviously not. 40 plus 40 is 80, and 2^40 times 2^40 is 2^80, but 2^40 plus 2^40 is only 2^41.
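The distinction is easy to check directly: adding two equal powers of two raises the exponent by one, while multiplying them doubles it. A minimal sketch:

```python
# Adding two equal costs only gains one bit; multiplying them doubles the exponent.
a = 2**40

print(a + a == 2**41)   # adding: 2^40 + 2^40 = 2^41
print(a * a == 2**80)   # multiplying: 2^40 * 2^40 = 2^80
```

So treating a sum of costs as if it were a product inflates the apparent exponent from 41 to 80.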
Take a deep breath and relax. When cryptographers are analyzing the security of cryptographic systems, of course they don't make stupid mistakes such as multiplying numbers that should have been added.
If such an error somehow managed to appear, of course it would immediately be caught by the robust procedures that cryptographers follow to thoroughly review security analyses.
Furthermore, in the context of standardization processes such as the NIST Post-Quantum Cryptography Standardization Project (NISTPQC), of course the review procedures are even more stringent.
The only way for the security claims for modern cryptographic standards to fail would be some unpredictable new discovery revolutionizing the field.
Oops, wait, maybe not. In 2022, NIST announced plans to standardize a particular cryptosystem, Kyber-512. As justification, NIST issued claims regarding the security level of Kyber-512. In 2023, NIST issued a draft standard for Kyber-512.
NIST's underlying calculation of the security level was a severe and indefensible miscalculation. NIST's primary error is exposed in this blog post, and boils down to nonsensically multiplying two costs that should have been added.
How did such a serious error slip past NIST's review process? Do we dismiss this as an isolated incident? Or do we conclude that something is fundamentally broken in the procedures that NIST is following?
Given that I'd never heard of it (and I routinely work with security-related things like OpenSSH, TLS, certs, etc.), I'll assume the impact of this finding is relatively low.