history suggests otherwise
> The fact that 12 previously unknown vulnerabilities could still be found there, including issues dating back to 1998, suggests that manual review faces significant limits, even in mature, heavily audited codebases.
no, the code is simply beyond horrible to read, not to mention diabolically bad
if you've never tried it, have a go, but bring plenty of eyebleach
If someone meant to engineer a codebase to hide subtle bugs which might be remotely exploitable, leak state, behave unexpectedly at runtime, or all of the above, the code would look like this.
What’s eye bleachy about this beyond regular C/C++?
For context, I’m fluent in C#/JavaScript/Ruby and generally understand structs and pointers, although I'm not confident writing performant code with them.
Part of OpenSSL's incomprehensibility is that it is not C++ and therefore lacks automatic memory management. Because it doesn't have built-in allocation and initialization, it is filled with BLAH_grunk_new and QVQ_hurrr_init. "new" and "init" semantics vary between modules because it's all ad hoc. Sometimes callees deallocate their arguments.
The only reason it needs module prefixes like BLAH and QVQ and DERP is, again, that it is not C++ and lacks namespaces. To readers, this is just visual noise. Sometimes a function in a different module has the same name and a compatible signature, so it's possible to accidentally call the wrong one.
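To make that concrete, here's a hypothetical sketch of the pattern (FOO/BAR and the function names are made up, not real OpenSSL symbols): two modules expose similar-looking lifecycle functions with different contracts, and one of them frees its argument.

    /* Hypothetical illustration of ad hoc "new"/"init" conventions;
       these are not real OpenSSL APIs. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct { unsigned char key[32]; } FOO_CTX;
    typedef struct { unsigned char key[32]; } BAR_CTX;

    /* FOO's "new" allocates and zeroes the context for you. */
    FOO_CTX *FOO_CTX_new(void) {
        return calloc(1, sizeof(FOO_CTX));
    }

    /* BAR's "init" expects caller-provided storage and takes ownership
       of `key`, freeing it when done -- a different contract hiding
       behind a similar-looking name. */
    int BAR_CTX_init(BAR_CTX *ctx, unsigned char *key) {
        memcpy(ctx->key, key, sizeof(ctx->key));
        free(key); /* callee deallocates its argument */
        return 1;
    }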
As for all the slop the Curl team has been putting up with, I suppose a fool with a tool is still a fool.
I suspect this year we are going to see a _lot_ more of this.
While it's good these bugs are being found and closed, the problem is twofold:
1) It takes time to get patches through distribution. 2) The vast majority of projects are not well equipped to handle complex security bugs in a "reasonable" time frame.
2 is a killer. There's so much abandonware out there, either as full apps/servers or as libraries. These can't ever really be patched. Previously they weren't really worth spending effort on - any one of them might have a few thousand targets of questionable value.
Now you can spin up potentially thousands of exploits against thousands of long-tail services. In aggregate that's millions of targets.
And even if this case didn't exist, it's going to be difficult to patch systems quickly enough. Imagine an adversary that can drip-feed zero-days against targets.
Not really sure how this can be solved. I guess you'd hope the good guys can do some sort of mega-patch of software quicker than the bad actors can exploit it.
But really, as the npm debacle showed, the industry is not in a good place when it comes to timely, secure software delivery, even without millions of potential new zero-days flying around.
Without humans, AI does nothing. Currently, at least.
I checked the stack overflow that was marked High, and Fil-C prevents that one.
One of the out-of-bounds writes is also definitely prevented.
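For illustration, the class of bug Fil-C catches at runtime looks roughly like this (a made-up snippet, not the actual OpenSSL code); under Fil-C the store past the end of the buffer traps instead of silently corrupting adjacent memory:

    /* Hypothetical out-of-bounds write, not the actual OpenSSL bug.
       With a normal C compiler this corrupts the stack;
       with Fil-C the write past buf[15] traps at runtime. */
    #include <string.h>

    void copy_record(const unsigned char *src, size_t len) {
        unsigned char buf[16];
        /* No bounds check: len > 16 writes past the end of buf. */
        memcpy(buf, src, len);
    }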
It's not clear if Fil-C protects you against all of the others (Fil-C won't prevent denial of service, and that's what some of these are; Fil-C also won't help you if you accidentally didn't encrypt something, which is what another one of these bugs is about).
The one about forgetting to encrypt some bytes is marked Low Severity because it's an API that they say you're unlikely to use. Seems kinda believable but also ....... terrifying? What if someone is calling the AES-NI code path directly for reasons?
Here's the data about that one:
"Issue summary: When using the low-level OCB API directly with AES-NI or other hardware-accelerated code paths, inputs whose length is not a multiple of 16 bytes can leave the final partial block unencrypted and unauthenticated.
Impact summary: The trailing 1-15 bytes of a message may be exposed in cleartext on encryption and are not covered by the authentication tag, allowing an attacker to read or tamper with those bytes without detection."
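A quick worked example of the exposure, assuming the 16-byte AES block size from the advisory: the size of the unprotected tail is just the message length modulo 16 (the 37-byte length below is an arbitrary example).

    /* Worked example of the exposed tail described in the advisory:
       with a 16-byte block size, any trailing partial block is at risk. */
    #include <stdio.h>

    int main(void) {
        size_t msg_len = 37;            /* example message length */
        size_t exposed = msg_len % 16;  /* bytes in the final partial block */
        /* 37 = 2 full blocks (32 bytes) + 5 trailing bytes */
        printf("%zu trailing bytes left unencrypted and unauthenticated\n",
               exposed);                /* prints: 5 */
        return 0;
    }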
Better software is out there.
This sounds like a great approach. Kudos!