After that, any CVE opened by a member of the public, and subsequently confirmed by a neutral third-party auditor/pentester, would result in 1) fines for Cloudflare, 2) an award to the CVE opener, and 3) grounds for Cloudflare to sue its initial auditor.
But that's just a thought experiment.
For that you need regulation that enforces it. On a global scale that's pretty difficult, since it's a country-by-country thing... If you mean e.g. for customers in the US, then the US Congress needs to pass legislation on that. The trend, however, is to install backdoors everywhere, so good luck with that.
It's a very different system from software's "NO WARRANTY OF ANY KIND".
When you purchase a car, you pay actual money, and that adds liability, so if it implodes I feel like I can at least get money back, or sue the vendor for negligence. OSS is not like that. You get something for free and there is a big sign saying "lol have fun", and it's also incredibly well known that software is all buggy and bad with like maybe 3 exceptions.
> If you bought a car and your dealer had you sign an EULA with that sentence in it (pertaining specifically to the security features of your car)
If the security features are implemented in software, like "iOS app unlock", no I would not expect it to actually be secure.
It is well known that while the pure engineering disciplines, those that make cars and planes and boats, mostly know what they're doing... the software engineering industry knows how to produce code that constantly needs updates and still manages to segfault at so much as a strong breeze, even though memory safety has been a well understood problem for longer than most developers have been alive.
Congrats, the brakes failed, you caused bodily damage to an innocent bystander. Do you take full responsibility for that? I guess you do.
Now build a security solution that you sell to millions of users. Have their private data exposed to attackers because you used a third party library that was not properly audited. Do you take any responsibility, beyond the barebones "well I installed their security patches"?
> It is well known that while the pure engineering disciplines, those that make cars and planes and boats, mostly know what they're doing... the software engineering industry knows how to produce code that constantly needs updates and still manages to segfault at so much as a strong breeze, even though memory safety has been a well understood problem for longer than most developers have been alive.
We're aligned there. In a parallel universe, somehow we find a way to converge. Judging by the replies and downvotes, not in this universe.
Otherwise, nobody would ever write non-commercial cryptographic libraries any longer. Why take the risk? (And good luck with finding bugs in commercial, closed source cryptographic libraries and getting them fixed...)
Build an open-source security solution as an individual? Well done you, and maybe here's a grant to be able to spend more of your free time on it, if you choose to do so.
Use an open-source security solution to sell stuff to the public and make a profit? Make sure you can vouch for the security, otherwise no profit for you.
Code is speech. Speech is protected (at least in the US).
Reading the actual CIRCL source and README on GitHub (https://github.com/cloudflare/circl) makes it look fundamentally unserious to me, though: there's a big "lol don't use this!" disclaimer, no elaboration on the considerations applied to each implementation to avoid common pitfalls, no mention of third- or first-party audit reports, nor really anything else I'd expect to see from a cryptography library.
> Your implementation leaks secret data when the input isn't a curve point.
Some of the issues, like input validation, seem like they should have been noticed. But of course, one would need to understand how it works to notice them. And certainly, in a company like CF, someone would know how this is supposed to work…
Surely the devs would have at least opened Wikipedia to read:
https://en.wikipedia.org/wiki/FourQ
> In order to avoid small subgroup attacks,[6] all points are verified to lie in an N-torsion subgroup of the elliptic curve, where N is specified as a 246-bit prime dividing the order of the group.
They put their name behind it (https://blog.cloudflare.com/introducing-circl/), and it looks like whoever they hired to do the work couldn't even read the Wikipedia page for the algorithm.
That sort of makes it look worse then, doesn't it? The main issue isn't that subtle. Even the Wikipedia article mentions it:
> points should always be validated before being relied upon for any computation.
Moreover, the paper (https://eprint.iacr.org/2015/565.pdf) also mentions it a few times:
> Algorithm 2 assumes that the input point P is in E(Fp2)[N], i.e., has been validated according to Appendix A
Appendix A:
> The main scalar multiplication routine (in Algorithm 2) assumes that the input point lies in E(Fp2)[N]. However, since we have #E(Fp2) = 392 · N, and in light of small subgroup attacks [39] that can be carried out in certain scenarios, here we briefly mention how our software enables the assertion...
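To make the check concrete: here's a toy sketch in Python of what that validation buys you. This is my illustration, not CIRCL or FourQ code, and all the numbers are made up. Instead of FourQ's curve (whose group has order 392 · N), it uses the simpler setting of a multiplicative group mod p whose order factors as cofactor × N; the principle is the same: only accept elements that actually lie in the prime-order-N subgroup, by checking that raising to the N gives the identity.

```python
# Toy small-subgroup validation sketch (illustration only, not CIRCL code).
# (Z/pZ)* has order p - 1 = 1018 = 2 * 509, i.e. cofactor 2 times the
# prime subgroup order N = 509. A hostile peer can hand you an element of
# the order-2 subgroup; if you use it unchecked, your secret exponent
# leaks modulo 2. The fix is the assertion the FourQ paper describes:
# verify membership in the order-N subgroup before computing with it.

p = 1019   # prime, chosen so that p - 1 = 2 * 509 with 509 prime
N = 509    # order of the "safe" prime-order subgroup

def validate(x: int) -> bool:
    """Accept x only if it lies in the order-N subgroup: x^N == 1 (mod p)."""
    return 1 <= x < p and pow(x, N, p) == 1

# Any square lies in the order-N subgroup (its order divides 509):
assert validate(4)           # 4 = 2^2, so 4^509 = 2^1018 = 1 (mod p) by Fermat
# p - 1 = -1 has order 2: exactly the kind of input a small-subgroup
# attack feeds you, and validation rejects it:
assert not validate(p - 1)   # (-1)^509 = -1 != 1 (mod p)
```

The elliptic-curve version of the same check is "verify [N]P is the identity" (or use the cofactor-clearing trick the paper's Appendix A describes); skipping it is what the GitHub issue is about.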
Case in point: https://www.daemonology.net/blog/2011-01-18-tarsnap-critical...
Not saying it's the same situation, either; obviously Colin made a silly mistake while refactoring.
We don't actually know what went wrong for these implementors, but again I ask: given that they had actual professionals in the field, what should they have done instead of rolling their own for a primitive that doesn't exist in the language?
Funny if that's true.
The only use listed at https://en.wikipedia.org/wiki/FourQ is "FourQ is implemented in the cryptographic library CIRCL, published by Cloudflare."
mmsc•1d ago
A slight digression but lol, this is my experience with all of the bug bounty platforms. Reports of issues that are actually complicated or require an in-depth understanding of the technology get brickwalled, because reports of difficult problems are written for... people who understand difficult problems and difficult technology. The runaround isn't worth the time for people who try to solve difficult problems, because they have better things to do.
At least Cloudflare has a competent security team that can step in and say "yeah, we can look into this because we actually understand our whole technology". It's sad that to get through to a human on these platforms you effectively have to write two reports: one for the triagers who don't understand the technology at all, and one for the competent people who actually know what they're doing.
cedws•1d ago
I don’t like bounty programs. We need Good Samaritan laws that legally protect and reward white hats. Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
bri3d•1d ago
What does this even mean? How is a government going to do a better job valuing and scoring exploits than the existing market?
I'm genuinely curious about how you suggest we achieve
> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there is too much low-value noise, so the market converges to mediocrity, and I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.
And the industry and governments have tried punitive regulation: "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works, as it increases pay for in-house security and makes work for consulting firms. This notion might be worth expanding in some areas, but just like financial regulation, it is a double-edged sword: it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."
bri3d•1d ago
What proposed regulation could address a current failure to value bugs in the existing market?
The parent post suggested regulation as a solution for:
> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
I don't know how this would work and am interested in learning.
cedws•1d ago
For the rewards part: again, the companies that don't give a shit won't incentivise white-hat pentesting. If a company has a security hole that leads to disclosure of sensitive information, it should be fined, and such fines can be used for rewards.
This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate. It also puts companies legally on the hook for issues before a security disaster occurs, not after it's already happened.
bri3d•1d ago
> If a company has a security hole that leads to disclosure of sensitive information, it should be fined
What's a "security hole"? How do you determine the fines? Where do you draw the line for burden of responsibility? If someone discovers a giant global issue in a common industry standard library, like Heartbleed, or the Log4J vulnerability, and uses it against you first, were you responsible for not discovering that vulnerability and mitigating it ahead of time? Why?
> such fines can be used for rewards.
So we're back to the award allocation problem.
> This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate.
Yes, if you can figure out how to determine the value of a vulnerability, the value of a breach, and the value of a reward.
cedws•1d ago
It's pretty clear whatever security 'strategy' we're using right now doesn't work. I'm subscribed to Troy Hunt's breach feed, and it's basically weekly now that another 10M or 100M records are leaked. It seems foolish to continue like this. If governments want to take threats seriously, a new strategy is needed that mobilises security experts and dishes out proper penalties.
bri3d•1d ago
My goal was to learn whether there was an insight beyond "we should take the thing that doesn't work and move it into the government where it can continue to not work," because I'd find that interesting.
akerl_•1d ago
There's a reason Good Samaritan laws are built around rendering aid to injured humans: there is no equivalent if you go down the street popping people's car hoods to refill their windshield wiper fluid.
lenerdenator•1d ago
Show me the incentives, and I'll show you the outcomes.
We really need to make security liabilities just that: liabilities. If you are running 20+ year-old code and you get hacked, you need to be fined in a way that makes you reconsider security as a priority.
Also, you need to be liable for all of the disruption that the security breach caused for customers. No, free credit monitoring does not count as recompense.
akerl_•1d ago
Why is it inherently desirable that society penalize companies that get hacked above and beyond people choosing not to use their services, or selling off their shares, etc?
lenerdenator•1d ago
It'd be one thing if these were isolated incidents, but they're not.
Furthermore, the methods you mention simply aren't effective. Our economy is now so consolidated that many markets have only a handful of participants offering goods or services, and these players often all have data and computer security issues. As for divestiture, most people don't own shares, and those who do typically don't know they own shares of a specific company. Most shares in the US are held by retirement or pension funds, run by people who would rather shield their holdings from any real consequences for data breaches than have the companies spend money fixing the issues that allow the breaches in the first place. After all, it's "cheaper".
akerl_•1d ago
It's never made sense to me.
I can see that being true in specific instances: many people in the US don't have great mobility for residential ISPs, or utility companies. And there's large network effects for social media platforms. But if any significant plurality of users cared about the impact of service breaches, or bad privacy policies, surely we'd see the impact somewhere in the market? We do in some related areas: Apple puts a ton of money into marketing about keeping people's data and messages private. WhatsApp does the same. But there are so many companies out there, lots of them have garbage security practices, lots of them get compromised, and I'm struggling to remember any example of a consumer company that had a breach and saw any significant impact.
To pick an example: in 2014 Home Depot had a breach of payment data. Basically everywhere that has Home Depots also has Lowes and other options that sell the same stuff. In most places, if you're pissed at Home Depot for losing your card information, you can literally drive across the street to Lowes. But it doesn't seem like that happened.
Is it possible that outside of tech circles where we care about The Principle Of The Thing, the market is actually correct in its assessment of the value for the average consumer business of putting more money into security?
lan321•1d ago
Plenty of my normie friends don't want new cars, for example, due to all the tracking and subscription garbage. But realistically, what can you do when the old ones slowly get outlawed or become impossible to maintain due to parts shortages?
lenerdenator•19h ago
> To pick an example: in 2014 Home Depot had a breach of payment data. Basically everywhere that has Home Depots also has Lowes and other options that sell the same stuff. In most places, if you're pissed at Home Depot for losing your card information, you can literally drive across the street to Lowes. But it doesn't seem like that happened.
No one considers these things when they're buying plumbing tape. Really, you shouldn't have to consider that. You should be able to do commerce without having to wonder if some guy on the other side of the transaction is going to get his yearly bonus by cutting the necessary resources to keep you from having to deal with identity theft.
> Is it possible that outside of tech circles where we care about The Principle Of The Thing, the market is actually correct in its assessment of the value for the average consumer business of putting more money into security?
Let's try with a company that has your data and see how correct "the market" is. Principles are the things you build a functioning society upon, not quarterly returns.
akerl_•17h ago
What do you mean? Tons of companies with my data have been breached.
CaptainOfCoit•1d ago
Just for fun, do you happen to have any links to public reports like that? Seems entertaining if nothing else.
CiPHPerCoder•1d ago
https://hackerone.com/paragonie/hacktivity?type=team
The policy was immediate full disclosure, until people decided to flood us with racist memes. Those didn't get published.
Some notable stinkers:
https://hackerone.com/reports/149369
https://hackerone.com/reports/244836
https://hackerone.com/reports/115271
https://hackerone.com/reports/180074
CaptainOfCoit•1d ago
> Please read it and let me know and I'm very sorry for the last report :) also please don't close it as N/A and please don't publish it without my confirm to do not harm my Reputation on hacker on community
I was 90% sure it was a troll too, but based on this second report I'm not so sure anymore.
poorman•1d ago
If you want the top-tier "hackers" on the platforms to see your bug bounty program, you have to pay an up-charge for that too; so again, a misalignment of incentives.
The best thing you can do is have an extremely clear bug-bounty program detailing what is in scope and out of scope.
Lastly, I know it's difficult to manage, but open-source projects should also have a private vulnerability reporting mechanism set up. If you are using GitHub, you can set up your repo with: https://docs.github.com/en/code-security/security-advisories...
andersa•1d ago
This was about an issue in a C++ RPC framework that didn't validate that object references were of the correct type during deserialization of network messages, so the actual impact is kind of unbounded.
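For anyone unfamiliar with that bug class, here's a hedged sketch of the pattern (in Python with invented names; the real framework is C++). An RPC layer keeps a table of live objects and lets network messages refer to them by handle; the vulnerable pattern is trusting that a wire-supplied handle points at an object of the type the method expects, which in C++ amounts to an unchecked cast and hence type confusion.

```python
# Sketch of the deserialization bug class (invented names, not the actual
# framework's API). The key line is the isinstance check in resolve():
# omitting it is the vulnerability being described.

class ObjectTable:
    def __init__(self):
        self._objects = {}      # handle (int) -> live local object
        self._next_handle = 0

    def export(self, obj) -> int:
        """Hand a remote peer an opaque handle for a local object."""
        self._next_handle += 1
        self._objects[self._next_handle] = obj
        return self._next_handle

    def resolve(self, handle: int, expected_type: type):
        """Look up a wire-supplied handle, validating the referent's type.

        Without this check, a hostile message can make the server treat an
        object of one type as another (in C++, an unchecked static_cast),
        which is why the impact is "kind of unbounded".
        """
        obj = self._objects.get(handle)
        if not isinstance(obj, expected_type):
            raise TypeError(f"handle {handle} does not refer to a "
                            f"{expected_type.__name__}")
        return obj
```

Usage: if a server exports a string and a message claims the handle refers to a list, `resolve(h, list)` raises instead of handing attacker-chosen memory to list operations.
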