
Cryptographic Issues in Cloudflare's Circl FourQ Implementation (CVE-2025-8556)

https://www.botanica.software/blog/cryptographic-issues-in-cloudflares-circl-fourq-implementation
166•botanica_labs•3mo ago

Comments

mmsc•3mo ago
>after having received a lukewarm and laconic response from the HackerOne triage team.

A slight digression but lol, this is my experience with all of the bug bounty platforms. Reports of issues that are actually complicated or require an in-depth understanding of the technology get brickwalled, because reports of difficult problems are written for... people who understand difficult problems and difficult technology. The runarounds are not worth the time for people who try to solve difficult problems, because they have better things to do.

At least Cloudflare has a competent security team that can step in and say "yeah, we can look into this because we actually understand our whole technology". It's sad that to get through to a human on these platforms you effectively have to write two reports: one for the triagers who don't understand the technology at all, and one for the competent people who actually know what they're doing.

cedws•3mo ago
IMO it’s no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre. And that’s assuming that the company even has a responsible disclosure program or you risk putting your ass on the line.

I don’t like bounty programs. We need Good Samaritan laws that legally protect and reward white hats. Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

bri3d•3mo ago
> We need Good Samaritan laws that legally protect and reward white hats.

What does this even mean? How is a government going to do a better job valuing and scoring exploits than the existing market?

I'm genuinely curious how you suggest we achieve:

> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there is too much low-value noise, so the market converges to mediocrity, and I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.

And, the industry and governments have tried punitive regulation - "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works as it increases pay for in-house security and makes work for consulting firms. This notion might be worth expanding in some areas, but just like financial regulation, it is a double edged sword - it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."

jacquesm•3mo ago
Legal protections have absolutely nothing to do with 'the existing market'.
bri3d•3mo ago
Yes, and my question is both genuine and concrete:

What proposed regulation could address a current failure to value bugs in the existing market?

The parent post suggested regulation as a solution for:

> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

I don't know how this would work and am interested in learning.

cedws•3mo ago
For the protections part: it means creating a legal framework in which white hats can ethically test systems without companies having a responsible disclosure program. The problem with responsible disclosure programs is that the companies with the worst security don't give a shit and won't have such a program. They may even threaten such Good Samaritans for reporting issues in good faith; there have been many such cases.

For the rewards part: again, the companies who don't give a shit won't incentivise white hat pentesting. If a company has a security hole that leads to disclosure of sensitive information, it should be fined, and such fines can be used to fund rewards.

This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate. It also puts companies legally on the hook for issues before a security disaster occurs, not after it's already happened.

bri3d•3mo ago
Sure, I'm all for protection for white hats, although I don't think it's at all relevant here, and I don't see it as a particularly prominent practical problem in the modern day.

> If a company has a security hole that leads to disclosure of sensitive information, it should be fined

What's a "security hole"? How do you determine the fines? Where do you draw the line for burden of responsibility? If someone discovers a giant global issue in a common industry standard library, like Heartbleed, or the Log4J vulnerability, and uses it against you first, were you responsible for not discovering that vulnerability and mitigating it ahead of time? Why?

> such fines can be used for rewards.

So we're back to the award allocation problem.

> This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate.

Yes, if you can figure out how to determine the value of a vulnerability, the value of a breach, and the value of a reward.

cedws•3mo ago
You have correctly identified there is more complexity to this than is addressable in a HN comment. Are you asking me to write the laws and design a government-operated pentesting platform right here?

It's pretty clear whatever security 'strategy' we're using right now doesn't work. I'm subscribed to Troy Hunt's breach feed and it's basically weekly now that another 10M, 100M records are leaked. It seems foolish to continue like this. If governments want to take threats seriously a new strategy is needed that mobilises security experts and dishes out proper penalties.

bri3d•3mo ago
> You have correctly identified there is more complexity to this than is addressable in a HN comment. Are you asking me to write the laws and design a government-operated pentesting platform right here?

My goal was to learn whether there was an insight beyond "we should take the thing that doesn't work and move it into the government where it can continue to not work," because I'd find that interesting.

tptacek•3mo ago
None of this has anything to do with the story we're commenting on; this kind of vulnerability research has never been legally risky.
akerl_•3mo ago
You're (thankfully) never going to get a legal framework that allows "white hats" to test another person's computer without their permission.

There's a reason Good Samaritan laws are built around rendering aid to injured humans: there is no equivalent if you go down the street popping peoples' car hoods to refill their windshield wiper fluid.

lenerdenator•3mo ago
> IMO it’s no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre.

Show me the incentives, and I'll show you the outcomes.

We really need to make security liabilities just that: liabilities. If you are running 20+ year-old code and you get hacked, you need to be fined in a way that will make you reconsider security as a priority.

Also, you need to be liable for all of the disruption that the security breach caused for customers. No, free credit monitoring does not count as recompense.

dpoloncsak•3mo ago
I love this idea, but I feel like it just devolves into arguments over whether a specific exploit technically is or isn't a 0-day, so the company can or can't be held liable.
akerl_•3mo ago
Why?

Why is it inherently desirable that society penalize companies that get hacked above and beyond people choosing not to use their services, or selling off their shares, etc?

lenerdenator•3mo ago
Because they were placed in a position of trust and failed. Typically, the failure stems from a lack of willingness to expend the resources necessary to prevent the failure.

It'd be one thing if these were isolated incidents, but they're not.

Furthermore, the methods you mention simply aren't effective. Our economy is now so consolidated that many markets only have a handful of participants offering goods or services, and these players often all have data and computer security issues. As for divestiture, most people don't own shares, and those who do typically don't know they own shares of a specific company. Most shareholders in the US are retirement or pension funds, and they are run by people who would rather make it impossible for the average person to bring real consequences to their holdings for data breaches, than cause the company to spend money on fixing the issues that allow for the breaches to begin with. After all, it's "cheaper".

akerl_•3mo ago
I feel like this kind of justification comes up every time this topic is on HN: that the reason companies aren't being organically penalized for bad IT/infosec/privacy behavior is because the average person doesn't have leverage or alternatives.

It's never made sense to me.

I can see that being true in specific instances: many people in the US don't have great mobility for residential ISPs, or utility companies. And there's large network effects for social media platforms. But if any significant plurality of users cared about the impact of service breaches, or bad privacy policies, surely we'd see the impact somewhere in the market? We do in some related areas: Apple puts a ton of money into marketing about keeping people's data and messages private. WhatsApp does the same. But there are so many companies out there, lots of them have garbage security practices, lots of them get compromised, and I'm struggling to remember any example of a consumer company that had a breach and saw any significant impact.

To pick an example: in 2014 Home Depot had a breach of payment data. Basically everywhere that has Home Depots also has Lowes and other options that sell the same stuff. In most places, if you're pissed at Home Depot for losing your card information, you can literally drive across the street to Lowes. But it doesn't seem like that happened.

Is it possible that outside of tech circles where we care about The Principle Of The Thing, the market is actually correct in its assessment of the value for the average consumer business of putting more money into security?

lan321•3mo ago
I think it's simpler in the Home Depot example. Even if you care about the breach, what are you gonna do? Home Depot got hacked, so they'll now probably get some more security staff; funding for the quarter is secured. Lowes has not been hacked. Does that mean they won't be hacked? Not really... For cheap smart home shit it doesn't even matter, since the company will go bankrupt and change hands 3 times in the next 5 years, and again, they are all garbage. Either they'll get hacked or they'll sell your data anyway.

Plenty of my normie friends don't want new cars for example due to all the tracking and subscription garbage, but realistically, what can you do when the old ones slowly get outlawed/impossible to maintain due to part shortages.

lenerdenator•3mo ago
People give up on getting companies to be good actors because ultimately they're just a single person with a job and maybe a small savings account, looking at suing a company with absolutely no guarantee of ever recovering a cent on all of the trouble that their lax security policies cost them. Oh, and litigation is a rich man's sport.

> To pick an example: in 2014 Home Depot had a breach of payment data. Basically everywhere that has Home Depots also has Lowes and other options that sell the same stuff. In most places, if you're pissed at Home Depot for losing your card information, you can literally drive across the street to Lowes. But it doesn't seem like that happened.

No one considers these things when they're buying plumbing tape. Really, you shouldn't have to consider that. You should be able to do commerce without having to wonder if some guy on the other side of the transaction is going to get his yearly bonus by cutting the necessary resources to keep you from having to deal with identity theft.

> Is it possible that outside of tech circles where we care about The Principle Of The Thing, the market is actually correct in its assessment of the value for the average consumer business of putting more money into security?

Let's try with a company that has your data and see how correct "the market" is. Principles are the things you build a functioning society upon, not quarterly returns.

akerl_•3mo ago
> Let's try with a company that has your data and see how correct "the market" is.

What do you mean? Tons of companies with my data have been breached.

bongodongobob•3mo ago
Companies get hacked because Bob in finance doesn't have MFA and got a phishing email. In my experience working for MSPs, it's always been phishing and social engineering. I have never seen a company compromised via some obscure bug in software. This may be different for super large organizations that are international targets, but for the average person or business, you're better off spending time just MFAing everything you can and using common sense.
akerl_•3mo ago
Just to clarify: if Bob in Finance doesn't have phishing-resistant MFA, that's an organizational failure that's squarely homed in the IT and Infosec world.
bongodongobob•3mo ago
Absolutely. It's extremely common with small and midsize businesses that don't have any IT on staff.
quicksilver03•3mo ago
Having seen some of those cases, I'd say it's rather because Bob in Finance doesn't want to be bothered with MFA and has raised so much stink with the CFO that IT has been ordered to disable MFA for him.
tptacek•3mo ago
The backstory here, of course, is that the overwhelming majority of reports on any HackerOne program are garbage, and that garbage definitely includes 1990s sci.crypt style amateur cryptanalyses.
CaptainOfCoit•3mo ago
> 1990s sci.crypt style amateur cryptanalyses

Just for fun, do you happen to have any links to public reports like that? Seems entertaining if nothing else.

CiPHPerCoder•3mo ago
Most people don't make their spam public, but I did when I ran this bounty program:

https://hackerone.com/paragonie/hacktivity?type=team

The policy was immediate full disclosure, until people decided to flood us with racist memes. Those didn't get published.

Some notable stinkers:

https://hackerone.com/reports/149369

https://hackerone.com/reports/244836

https://hackerone.com/reports/115271

https://hackerone.com/reports/180074

lvncelot•3mo ago
That last one has to be a troll, holy shit.
CaptainOfCoit•3mo ago
From another bogus report from the same actor: https://hackerone.com/reports/180393

> Please read it and let me know and I'm very sorry for the last report :) also please don't close it as N/A and please don't publish it without my confirm to do not harm my Reputation on hacker on community

I was 90% sure it was a troll too, but based on this second report I'm not so sure anymore.

nightpool•3mo ago
I like the bit where he tried to get paid by HackerOne for the bug you reported:

     i think there a bug here on your last comment. can i report it to hackerone ? they will reward me ?
joatmon-snoo•3mo ago
This is great to see, much appreciated for the disclosure!
poorman•3mo ago
There is definitely a misalignment of incentives with the bug bounty platforms. You get a very large number of useless reports, which creates a lot of noise, and you have to sift through that noise to occasionally get a serious report. So the platforms up-sell you on using their people to sift through the reports for you. Only these people do not have the domain expertise to understand your software and dig into the vulnerabilities.

If you want the top-tier "hackers" on the platforms to see your bug bounty program, then you have to pay an up-charge for that too, so again a misalignment of incentives.

The best thing you can do is have an extremely clear bug-bounty program detailing what is in scope and out of scope.

Lastly, I know it's difficult to manage but open source projects should also have a private vulnerability reporting mechanism set up. If you are using Github you can set up your repo with: https://docs.github.com/en/code-security/security-advisories...

wslh•3mo ago
The best thing you can do is to include an exploit when possible, so the report can be validated automatically, clearing out the noise.
miohtama•3mo ago
The useless reports are because there are a lot of useless people
davidczech•3mo ago
AI generated bounty report spam is a huge problem now.
saurik•3mo ago
One way to correct this misalignment is to give the bounty platform a cut of the bounty. This is how Immunefi works, and I've so far not heard anyone unhappy with communicating with them (though, I of course will not be at all shocked or surprised if a billion people reply to me saying I simply haven't talked to the right people and in fact everyone hates them ;P).
andersa•3mo ago
Had the same experience the last time I attempted to report an issue on HackerOne. Triage did not seem to actually understand the issue and insisted, for some reason, on a PoC they could run themselves that demonstrated the maximum impact, even though any developer familiar with the code at hand could see the problem in about ten seconds. I ended up writing to some old security email I found for the company, and they took care of the report one day later, so a good ending I guess.

This was about an issue in a C++ RPC framework not validating object references are of the correct type during deserialization from network messages, so the actual impact is kind of unbounded.
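
The class of bug described above can be sketched in a few lines. This is a hypothetical illustration with invented names, not the actual framework's API; in the real C++ setting the impact is memory corruption rather than merely receiving the wrong Python object:

```python
# Hypothetical RPC object table: the server hands out integer reference ids,
# and incoming messages name objects by id. All names here are invented.

class FileHandle:
    def read(self):
        return "file contents"

class AdminSession:
    def run(self, cmd):
        return f"ran {cmd!r} with admin rights"

# Objects the server has handed out references to, keyed by reference id.
objects = {1: FileHandle(), 2: AdminSession()}

def resolve_unsafe(ref_id):
    # Trusts the wire message: whatever object lives at ref_id is returned,
    # even if the caller then treats it as a completely different type.
    return objects[ref_id]

def resolve_checked(ref_id, expected_type):
    # The fix: validate the reference's type before handing it back.
    obj = objects[ref_id]
    if not isinstance(obj, expected_type):
        raise TypeError(f"reference {ref_id} is not a {expected_type.__name__}")
    return obj

# An attacker who was only ever given a FileHandle reference sends id 2:
confused = resolve_unsafe(2)   # type confusion: an AdminSession leaks out
```

In a memory-unsafe language, the equivalent of `resolve_unsafe` reinterprets the bytes at the referenced slot as the claimed type, which is why the commenter calls the impact "kind of unbounded".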

baby•3mo ago
From what I understand, these days the triagers are AI, but the bug reports are AI as well :o)
Rygian•3mo ago
Here's an idea, from a parallel universe: Cloudflare should have been forced, by law, to engage a third party neutral auditor/pentester, and fix or mitigate each finding, before being authorised to expose the CIRCL lib in public.

After that, any CVE opened by a member of the public, and subsequently confirmed by a third party neutral auditor/pentester, would result in 1) fines to Cloudflare, 2) award to the CVE opener, and 3) give grounds to Cloudflare to sue their initial auditor.

But that's just a thought experiment.

trklausss•3mo ago
What do you mean, practices from safety-critical industries applied to security? Unpossible! (end /s)

For that you need regulation that enforces it. On a global scale it is pretty difficult, since it's a country-by-country thing... If you say e.g. for customers in the US, then US Congress needs to pass legislation on that. Trend is however to install backdoors everywhere, so good luck with that.

jjk7•3mo ago
The license reads: 'THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"'.
Rygian•3mo ago
If you bought a car and your dealer had you sign an EULA with that sentence in it (pertaining specifically to the security features of your car), would you feel safe driving it at highway speeds?
stonemetal12•3mo ago
Every used car sold outside of the major brand's certified used car programs is "As Is". So yeah, I would.
AlotOfReading•3mo ago
Speaking to US laws, auto manufacturers are required to fix design bugs that cause safety issues regardless of warranty or used status, at no cost to the owner. You may be familiar with the standard name for those fixes, "recalls". It's illegal to sell a vehicle with unresolved recalls, though the government deliberately avoids enforcing that as aggressively as they could.

It's a very different system from software's "NO WARRANTY OF ANY KIND".

TheDong•3mo ago
If I went to a lot that had a sign at the entrance saying "Open Source Cars, feel free to open the hood and look to learn stuff. No warranty implied. Some may not function. All free to duplicate, free to take parts from, and free to take home", and then took a car from the lot and drove it home, no I would not be surprised if it fell apart before getting out of the lot.

When you purchase a car, you pay actual money, and that adds liability, so if it implodes I feel like I can at least get money back, or sue the vendor for negligence. OSS is not like that. You get something for free and there is a big sign saying "lol have fun", and it's also incredibly well known that software is all buggy and bad with like maybe 3 exceptions.

> If you bought a car and your dealer had you sign an EULA with that sentence in it (pertaining specifically to the security features of your car)

If the security features are implemented in software, like "iOS app unlock", no I would not expect it to actually be secure.

It is well known that while the pure engineering disciplines, those that make cars and planes and boats, mostly know what they're doing... the software engineering industry knows how to produce code that constantly needs updates and still manages to segfault in so much as a strong breeze, even though memory safety has been a well understood problem for longer than most developers have been alive.

Rygian•3mo ago
> then took a car from the lot and drove it home, no I would not be surprised if it fell apart before getting out of the lot.

Congrats, the brakes failed, you caused bodily damage to an innocent bystander. Do you take full responsibility for that? I guess you do.

Now build a security solution that you sell to millions of users. Have their private data exposed to attackers because you used a third party library that was not properly audited. Do you take any responsibility, beyond the barebones "well I installed their security patches"?

> It is well known that while the pure engineering disciplines, those that make cars and planes and boats, mostly know what they're doing... the software engineering industry knows how to produce code that constantly needs updates and still manages to segfault in so much as a strong breeze, even though memory safety has been a well understood problem for longer than most developers have been alive.

We're aligned there. In a parallel universe, somehow we find a way to converge. Judging by the replies and downvotes, not in this universe.

jonathanstrange•3mo ago
What? We're talking about a free open source library (that I happen to use). Nobody who writes and publishes software for free should be subject to any such regulations. That's why the licenses all contain some "provided as is, no warranty" clause.

Otherwise, nobody would ever write non-commercial cryptographic libraries any longer. Why take the risk? (And good luck with finding bugs in commercial, closed source cryptographic libraries and getting them fixed...)

Rygian•3mo ago
Taking the parallel-universe idea a bit further: for-profit actors must accept financial accountability for the open source software they engage with, whereas not-for-profit actors are exempt or even incentivised.

Build an open-source security solution as an individual? Well done you, and maybe here's a grant to be able to spend more of your free time on it, if you choose to do so.

Use an open-source security solution to sell stuff to the public and make a profit? Make sure you can vouch for the security, otherwise no profit for you.

jonathanstrange•3mo ago
No thanks, that would kill my one-man software business before I've even started selling a single product, and I'd also have to withdraw every open source repository I have on GitHub. If you want to pay 10 times more for software and make sure only large corporations sell it to you, your plan is fantastic. Otherwise, not so great.
Rygian•3mo ago
Not sure why you choose an interpretation that goes against your interest, instead of the more advantageous one, namely that your one-man software business would be able to charge a sizeable premium if the buyer is planning to use your software in a security-sensitive operation.
jonathanstrange•3mo ago
You're talking about a forced price increase, and in the B2C market consumers do not pay sizeable premiums. Apropos "security-sensitive operation": any software that connects to a network is security-sensitive. There is no alternative reality where your proposal wouldn't just drastically raise the price of software and make it essentially impossible for small companies to use open source software, because of the increased legal risk and auditing costs.

There are already plenty of certifications required for software in certain business areas, such as HIPAA and FIPS certifications. However, these apply only to companies that want to sell into the sectors that require them. Assuring compliance is very costly in development, auditing, and bureaucratic overhead, and for this reason this kind of software is very expensive. If Cloudflare were forced to get expensive certifications for the Circl library, they wouldn't publish it as open source. They'd perhaps sell it at a high price point. That wouldn't be an advantage for anyone. Without such libraries, communication would be very insecure by default. The whole internet is running on open source security libraries that individual developers cannot implement on their own (and it would be a bad idea if they tried). Not just the internet, by the way; the same holds for nearly every cryptographic and otherwise security-relevant library of programming languages.

semiquaver•3mo ago
Seems like you want open source software to die.
Rygian•3mo ago
A more charitable interpretation could be "seems like you want large corporations, which have the financial means, to take security seriously and build a respectable process before publishing security solutions whatever the license".
semiquaver•3mo ago
All software is a security solution in one way or another. If open sourcing something risked massive liability no one would do it.
ramon156•3mo ago
Lol based on what law? They're doing nothing illegal. Insane take
qeternity•3mo ago
People really just go on the internet and say stuff.

Code is speech. Speech is protected (at least in the US).

csmantle•3mo ago
User-supplied EC point validation is one of the most basic yet crucial steps in a sound implementation. I wonder why no one (and no test) at Cloudflare caught these oversights before signoff and release.
bri3d•3mo ago
The article's deep dive into the math does it a disservice IMO, by making this seem like an arcane and complex issue. This is an EC Cryptography 101 level mistake.

Reading the actual CIRCL library source and README on GitHub (https://github.com/cloudflare/circl) makes me see it as just fundamentally unserious, though; there's a big "lol don't use this!" disclaimer, no elaboration on the considerations applied to each implementation to avoid common pitfalls, no mention of third- or first-party audit reports, nor really anything else I'd expect to see from a cryptography library.
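
For readers outside crypto circles, the "101-level" check at issue can be sketched in a couple of lines: verify that a received point actually satisfies the curve equation before using it. The parameters below are a tiny toy curve chosen for illustration, nothing to do with FourQ:

```python
# Toy short-Weierstrass curve y^2 = x^3 + a*x + b over F_p (illustrative only).
p, a, b = 11, 0, 4

def is_on_curve(point):
    # A point an attacker hands you is just a pair of numbers; check it
    # actually solves the curve equation before doing any secret-dependent math.
    x, y = point
    return (y * y - (x * x * x + a * x + b)) % p == 0

honest = (0, 2)   # 2^2 = 4 = 0^3 + 4 (mod 11): on the curve
forged = (5, 7)   # attacker-chosen coordinates that solve nothing
```

Without this check, the scalar-multiplication formulas run just as happily on the forged point, but effectively on a different curve of the attacker's choosing, which is the root of the classic invalid-curve attack.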

tptacek•3mo ago
It's more subtle than that and is not actually that simple (though the attack is). The "modern" curve constructions pioneered by Bernstein are supposed to be misuse-resistant in this regard; Bernstein popularized both Montgomery and Edwards curves. His two major curve implementations are Curve25519 and Ed25519, which are different mathematical representations of the same underlying curve. Curve25519 famously isn't vulnerable to this attack!
edelbitter•3mo ago
Bernstein also published a simple checklist [1] of what people are likely to do wrong if not ruled out by design. Bullet point 2 on that list was:

> Your implementation leaks secret data when the input isn't a curve point.

[1]: https://safecurves.cr.yp.to/

tptacek•3mo ago
Oh, my God, I'm just now remembering why this curve was called FourQ.
rdtsc•3mo ago
Does the "don't implement your own cryptography" advice apply to multi-billion-dollar companies, or is it just for regular, garden-variety developers?

Some of the issues, like validating input, seem like they should have been noticed. But of course one would need to understand how it works to notice them. And certainly, in a company like CF, someone would know how this is supposed to work…

Surely the devs would have at least opened Wikipedia to read

https://en.wikipedia.org/wiki/FourQ

> In order to avoid small subgroup attacks,[6] all points are verified to lie in an N-torsion subgroup of the elliptic curve, where N is specified as a 246-bit prime dividing the order of the group.
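The quoted check is simple to express in code. Below is a toy sketch (this is NOT the real FourQ curve, which is a twisted Edwards curve over F_{p^2} with a 246-bit N and cofactor 392): a short-Weierstrass curve over F_23 whose group has order 28 = 4·7, so the "cryptographic" subgroup has prime order N = 7 and there are genuine small-subgroup points to reject.

```python
# Toy curve y^2 = x^3 + x + 1 over F_23; group order 28 = 4 * 7.
# Illustrates the subgroup check a FourQ-style implementation must
# perform on untrusted input points.
P_MOD, A, B = 23, 1, 1
N = 7            # prime order of the "cryptographic" subgroup
O = None         # point at infinity

def inv(x): return pow(x, -1, P_MOD)

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                                   # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def mul(k, P):                                     # double-and-add
    R = O
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

def validate(P):
    """Reject points outside the prime-order subgroup."""
    if P is O: return False
    x, y = P
    if (y * y - (x**3 + A * x + B)) % P_MOD != 0:
        return False            # not on the curve at all
    return mul(N, P) is O       # N*P == O  <=>  P is in the N-torsion subgroup

G = (13, 16)   # a point of order 7: validate(G) -> True
T = (4, 0)     # a point of order 2: validate(T) -> False
```

Skipping the `mul(N, P) is O` check is exactly what enables the small-subgroup attack: for the order-2 point `T` above, `mul(k, T)` is `T` when the secret `k` is odd and `O` when it is even, so each such query leaks information about `k`.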

commandersaki•3mo ago
So should they have opted for a nonexistent implementation of FourQ in Go so they didn't have to roll their own (keeping in mind this is a library for experimental deployment of PQ and ECC)?
rdtsc•3mo ago
They should have found someone who knows what they are doing or not implement it at all. We're talking about a company with a $1B+ yearly revenue here.

They put their name behind it https://blog.cloudflare.com/introducing-circl/ and it looks like whoever they hired to do the work couldn't even read the Wikipedia page for the algorithm.

wbl•3mo ago
Both Kris and Armando have PhDs in cryptography. The issues here are a lot more subtle than that wiki article makes it seem.
rdtsc•3mo ago
> Both Kris and Armando have PhDs in cryptography. The issues here are a lot more subtle than that wiki article makes it seem.

That sort of makes it look worse, then, doesn't it? The main issue isn't that subtle. Even the Wikipedia article mentions it:

> points should always be validated before being relied upon for any computation.

Moreover, the paper https://eprint.iacr.org/2015/565.pdf also mentions it a few times:

> Algorithm 2 assumes that the input point P is in E(Fp2)[N], i.e., has been validated according to Appendix A

Appendix A:

> The main scalar multiplication routine (in Algorithm 2) assumes that the input point lies in E(Fp2)[N]. However, since we have #E(Fp2) = 392 · N, and in light of small subgroup attacks [39] that can be carried out in certain scenarios, here we briefly mention how our software enables the assertion...

commandersaki•3mo ago
You realise experts at cryptography, even at implementation, are fallible, right?

Case in point: https://www.daemonology.net/blog/2011-01-18-tarsnap-critical...

Not saying it's the same situation, either; obviously Colin made a silly mistake while refactoring.

We don't actually know what went wrong for these implementors, but again I ask: given actual professionals in the field, what should they have done instead of rolling their own implementation of a primitive that doesn't exist in the language?

tptacek•3mo ago
CloudFlare gets to roll cryptography; they employ a bunch of serious cryptographers. This is a good attack, and it's subtler than it looks.
donavanm•3mo ago
To wit: even then, the old maxim still applies to _most developers inside Cloudflare_. Yes, some global/specialist corps can have actual applied-crypto and security expertise. But the vast, vast majority of users should still be using tools developed and tested by actual SMEs.
commandersaki•3mo ago
This is a pretty good write-up, but it took more than the suggested 2 minutes to read.
rodolphoarruda•3mo ago
Side note: what a nice background gradient those guys put into that website! It goes from dark sky blue to dry desert soil at the bottom. Nice artistic touch.
neilv•3mo ago
> FourQ [...] Its name is derived from the four dimensional Gallant–Lambert–Vanstone scalar multiplication,

Funny if that's true.

tptacek•3mo ago
The backstory on the name is --- I think --- a lot funnier. Read it out loud fast.
neilv•3mo ago
Do you know the full joke behind it, like was someone being told off by someone else?
tptacek•3mo ago
Yep.
tveita•3mo ago
Is FourQ used enough for anyone to be affected by this?

The only use listed at https://en.wikipedia.org/wiki/FourQ is "FourQ is implemented in the cryptographic library CIRCL, published by Cloudflare."

moktonar•3mo ago
This smells like a bugdoor to me...
tptacek•3mo ago
Yes. A bugdoor. In an algorithm virtually nothing on the Internet uses. Makes perfect sense.
moktonar•3mo ago
And I hope it stays like this... imagine if Cloudflare started using it to "secure" all its communications...