After that, any CVE opened by a member of the public, and subsequently confirmed by a neutral third-party auditor/pentester, would result in 1) fines for Cloudflare, 2) an award for the CVE opener, and 3) grounds for Cloudflare to sue its initial auditor.
But that's just a thought experiment.
For that you need regulation that enforces it. On a global scale that's pretty difficult, since it's a country-by-country thing... If you mean, say, customers in the US, then the US Congress needs to pass legislation on that. The trend, however, is to install backdoors everywhere, so good luck with that.
Otherwise, nobody would ever write non-commercial cryptographic libraries any longer. Why take the risk? (And good luck with finding bugs in commercial, closed source cryptographic libraries and getting them fixed...)
Build an open-source security solution as an individual? Well done you, and maybe here's a grant to be able to spend more of your free time on it, if you choose to do so.
Use an open-source security solution to sell stuff to the public and make a profit? Make sure you can vouch for the security, otherwise no profit for you.
Reading the actual CIRCL source and README on GitHub (https://github.com/cloudflare/circl), though, makes me see it as fundamentally unserious: there's a big "lol don't use this!" disclaimer, no elaboration on the considerations applied to each implementation to avoid common pitfalls, no mention of third- or first-party audit reports, nor really anything else I'd expect to see from a cryptography library.
mmsc•2h ago
A slight digression but lol, this is my experience with all of the bug bounty platforms. Reports of issues that are actually complicated or require an in-depth understanding of the technology get brickwalled, because reports of difficult problems are written for... people who understand difficult problems and difficult technology. The runaround isn't worth the time for people who try to solve difficult problems, because they have better things to do.
At least cloudflare has a competent security team that can step in and say "yeah, we can look into this because we actually understand our whole technology". It's sad that to get through to a human on these platforms you have to effectively write two reports: one for the triagers who don't understand the technology at all, and one for the competent people who actually know what they're doing.
cedws•2h ago
I don’t like bounty programs. We need Good Samaritan laws that legally protect and reward white hats. Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
bri3d•1h ago
What does this even mean? How is a government going to do a better job of valuing and scoring exploits than the existing market?
I'm genuinely curious about how you suggest we achieve
> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there is too much low-value noise, so the market converges to mediocrity, and I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.
And the industry and governments have tried punitive regulation: "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works, as it increases pay for in-house security and makes work for consulting firms. This notion might be worth expanding in some areas, but just like financial regulation, it is a double-edged sword: it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."
bri3d•1h ago
What proposed regulation could address a current failure to value bugs in the existing market?
The parent post suggested regulation as a solution for:
> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
I don't know how this would work and am interested in learning.
cedws•35m ago
For the rewards part: again, the companies that don't give a shit won't incentivise white hat pentesting. If a company has a security hole that leads to disclosure of sensitive information, it should be fined, and such fines can be used to fund rewards.
This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate. It also puts companies legally on the hook for issues before a security disaster occurs, not after it's already happened.
bri3d•11m ago
> If a company has a security hole that leads to disclosure of sensitive information, it should be fined
What's a "security hole"? How do you determine the fines? Where do you draw the line for burden of responsibility? If someone discovers a giant global issue in a common industry standard library, like Heartbleed, or the Log4J vulnerability, and uses it against you first, were you responsible for not discovering that vulnerability and mitigating it ahead of time? Why?
> such fines can be used for rewards.
So we're back to the award allocation problem.
> This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate.
Yes, if you can figure out how to determine the value of a vulnerability, the value of a breach, and the value of a reward.
lenerdenator•1h ago
Show me the incentives, and I'll show you the outcomes.
We really need to make security liabilities just that: liabilities. If you are running 20+ year-old code and you get hacked, you need to be fined in a way that makes you reconsider security as a priority.
Also, you need to be liable for all of the disruption that the security breach caused for customers. No, free credit monitoring does not count as recompense.
tptacek•2h ago
CaptainOfCoit•1h ago
Just for fun, do you happen to have any links to public reports like that? Seems entertaining if nothing else.
CiPHPerCoder•1h ago
https://hackerone.com/paragonie/hacktivity?type=team
The policy was immediate full disclosure, until people decided to flood us with racist memes. Those didn't get published.
Some notable stinkers:
https://hackerone.com/reports/149369
https://hackerone.com/reports/244836
https://hackerone.com/reports/115271
https://hackerone.com/reports/180074
CaptainOfCoit•34m ago
> Please read it and let me know and I'm very sorry for the last report :) also please don't close it as N/A and please don't publish it without my confirm to do not harm my Reputation on hacker on community
I was 90% sure it was a troll too, but based on this second report I'm not so sure anymore.
poorman•1h ago
If you want the top-tier "hackers" on the platforms to see your bug bounty program, you have to pay an up-charge for that too, so again a misalignment of incentives.
The best thing you can do is have an extremely clear bug bounty program detailing what is in scope and out of scope.
Lastly, I know it's difficult to manage, but open source projects should also have a private vulnerability reporting mechanism set up. If you are using GitHub, you can set up your repo with: https://docs.github.com/en/code-security/security-advisories...
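As a sketch: once private vulnerability reporting is enabled for a repo (under the repository's Settings, in the code security section), you can also add a SECURITY.md so GitHub surfaces a policy on the Security tab. Something along these lines works; the wording and the response window here are just illustrative assumptions, not anything from a specific project:

```markdown
# Security Policy

## Reporting a Vulnerability

Please do not report security issues in public GitHub issues.

Instead, use GitHub's private vulnerability reporting: open the
repository's "Security" tab and choose "Report a vulnerability".
This keeps the report visible only to maintainers until a fix is
released.

We will try to acknowledge reports within a few business days and
coordinate disclosure with you before publishing an advisory.
```

The point is to give reporters an obvious private channel, so the only path isn't a public issue tracker or a third-party bounty platform.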