frontpage.

Google demonstrates 'verifiable quantum advantage' with their Willow processor

https://blog.google/technology/research/quantum-echoes-willow-verifiable-quantum-advantage/
156•AbhishekParmar•1h ago•95 comments

Cryptographic Issues in Cloudflare's Circl FourQ Implementation (CVE-2025-8556)

https://www.botanica.software/blog/cryptographic-issues-in-cloudflares-circl-fourq-implementation
88•botanica_labs•2h ago•33 comments

Linux Capabilities Revisited

https://dfir.ch/posts/linux_capabilities/
87•Harvesterify•3h ago•15 comments

MinIO stops distributing free Docker images

https://github.com/minio/minio/issues/21647#issuecomment-3418675115
473•LexSiga•10h ago•283 comments

Designing software for things that rot

https://drobinin.com/posts/designing-software-for-things-that-rot/
83•valzevul•18h ago•12 comments

Bild AI (YC W25) Is Hiring a Founding AI Engineer

https://www.ycombinator.com/companies/bild-ai/jobs/m2ilR5L-founding-engineer-applied-ai
1•rooppal•3m ago

AI assistants misrepresent news content 45% of the time

https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
235•sohkamyung•3h ago•178 comments

SourceFS: A 2h+ Android build becomes a 15m task with a virtual filesystem

https://www.source.dev/journal/sourcefs
58•cdesai•4h ago•23 comments

The security paradox of local LLMs

https://quesma.com/blog/local-llms-security-paradox/
67•jakozaur•4h ago•43 comments

Internet's biggest annoyance: Cookie laws should target browsers, not websites

https://nednex.com/en/the-internets-biggest-annoyance-why-cookie-laws-should-target-browsers-not-...
373•SweetSoftPillow•4h ago•408 comments

Die shots of as many CPUs and other interesting chips as possible

https://commons.wikimedia.org/wiki/User:Birdman86
143•uticus•4d ago•29 comments

The Logarithmic Time Perception Hypothesis

http://www.kafalas.com/Logtime.html
9•rzk•1h ago•4 comments

French ex-president Sarkozy begins jail sentence

https://www.bbc.com/news/articles/cvgkm2j0xelo
287•begueradj•11h ago•360 comments

Patina: a Rust implementation of UEFI firmware

https://github.com/OpenDevicePartnership/patina
79•hasheddan•1w ago•13 comments

Go subtleties

https://harrisoncramer.me/15-go-sublteties-you-may-not-already-know/
157•darccio•1w ago•113 comments

Evaluating the Infinity Cache in AMD Strix Halo

https://chipsandcheese.com/p/evaluating-the-infinity-cache-in
128•zdw•12h ago•52 comments

Farming Hard Drives (2012)

https://www.backblaze.com/blog/backblaze_drive_farming/
16•floriangosse•6d ago•5 comments

Show HN: Cadence – A Guitar Theory App

https://cadenceguitar.com/
143•apizon•1w ago•35 comments

Knocker, a knock based access control system for your homelab

https://github.com/FarisZR/knocker
54•xlmnxp•8h ago•86 comments

Meta is axing 600 roles across its AI division

https://www.theverge.com/news/804253/meta-ai-research-layoffs-fair-superintelligence
7•Lionga•21m ago•1 comment

A non-diagonal SSM RNN computed in parallel without requiring stabilization

https://github.com/glassroom/goom_ssm_rnn
4•fheinsen•6d ago•0 comments

Greg Newby, CEO of Project Gutenberg Literary Archive Foundation, has died

https://www.pgdp.net/wiki/In_Memoriam/gbnewby
376•ron_k•8h ago•61 comments

Tesla Recalls Almost 13,000 EVs over Risk of Battery Power Loss

https://www.bloomberg.com/news/articles/2025-10-22/tesla-recalls-almost-13-000-evs-over-risk-of-b...
151•zerosizedweasle•4h ago•138 comments

The Dragon Hatchling: The missing link between the transformer and brain models

https://arxiv.org/abs/2509.26507
115•thatxliner•4h ago•66 comments

LLMs can get "brain rot"

https://llm-brain-rot.github.io/
451•tamnd•1d ago•277 comments

Cigarette-smuggling balloons force closure of Lithuanian airport

https://www.theguardian.com/world/2025/oct/22/cigarette-smuggling-balloons-force-closure-vilnius-...
52•n1b0m•3h ago•20 comments

Ghostly swamp will-o'-the-wisps may be explained by science

https://www.snexplores.org/article/swamp-gas-methane-will-o-wisp-chemistry
25•WaitWaitWha•1w ago•11 comments

Starcloud

https://blogs.nvidia.com/blog/starcloud/
137•jonbaer•5h ago•185 comments

Power over Ethernet (PoE) basics and beyond

https://www.edn.com/poe-basics-and-beyond-what-every-engineer-should-know/
219•voxadam•6d ago•175 comments

Ask HN: Our AWS account got compromised after their outage

370•kinj28•1d ago•90 comments

Cryptographic Issues in Cloudflare's Circl FourQ Implementation (CVE-2025-8556)

https://www.botanica.software/blog/cryptographic-issues-in-cloudflares-circl-fourq-implementation
87•botanica_labs•2h ago

Comments

mmsc•2h ago
>after having received a lukewarm and laconic response from the HackerOne triage team.

A slight digression, but lol, this is my experience with all of the bug bounty platforms. Reports of issues that are actually complicated or require an in-depth understanding of the technology get brickwalled, because reports of difficult problems are written for .. people who understand difficult problems and difficult technology. The runarounds are not worth the time for people who try to solve difficult problems, because they have better things to do.

At least Cloudflare has a competent security team that can step in and say "yeah, we can look into this because we actually understand our whole technology". It's sad that to get through to a human on these platforms you effectively have to write two reports: one for the triagers who don't understand the technology at all, and one for the competent people who actually know what they're doing.

cedws•2h ago
IMO it's no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre. And that's assuming the company even has a responsible disclosure program at all; otherwise you risk putting your ass on the line.

I don’t like bounty programs. We need Good Samaritan laws that legally protect and reward white hats. Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

bri3d•1h ago
> We need Good Samaritan laws that legally protect and reward white hats.

What does this even mean? How is a government going to do a better job of valuing and scoring exploits than the existing market?

I'm genuinely curious how you suggest we achieve:

> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there is too much low-value noise, so the market converges to mediocrity, and I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.

And, the industry and governments have tried punitive regulation - "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works as it increases pay for in-house security and makes work for consulting firms. This notion might be worth expanding in some areas, but just like financial regulation, it is a double edged sword - it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."

jacquesm•1h ago
Legal protections have absolutely nothing to do with 'the existing market'.
bri3d•1h ago
Yes, and my question is both genuine and concrete:

What proposed regulation could address a current failure to value bugs in the existing market?

The parent post suggested regulation as a solution for:

> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

I don't know how this would work and am interested in learning.

cedws•35m ago
For the protections part: it means creating a legal framework in which white hats can ethically test systems even when companies don't have a responsible disclosure program. The problem with responsible disclosure programs is that the companies with the worst security don't give a shit and won't have such a program. They may even threaten such Good Samaritans for reporting issues in good faith; there have been many such cases.

For the rewards part: again, the companies that don't give a shit won't incentivise white-hat pentesting. If a company has a security hole that leads to disclosure of sensitive information, it should be fined, and such fines can be used to fund rewards.

This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate. It also puts companies legally on the hook for issues before a security disaster occurs, not after it's already happened.

bri3d•11m ago
Sure, I'm all for protection for white hats, although I don't think that is at all relevant here and don't see it as a particularly prominent practical problem in the modern day.

> If a company has a security hole that leads to disclosure of sensitive information, it should be fined

What's a "security hole"? How do you determine the fines? Where do you draw the line for burden of responsibility? If someone discovers a giant global issue in a common industry standard library, like Heartbleed, or the Log4J vulnerability, and uses it against you first, were you responsible for not discovering that vulnerability and mitigating it ahead of time? Why?

> such fines can be used for rewards.

So we're back to the award allocation problem.

> This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate.

Yes, if you can figure out how to determine the value of a vulnerability, the value of a breach, and the value of a reward.

tptacek•9m ago
None of this has anything to do with the story we're commenting on; this kind of vulnerability research has never been legally risky.
lenerdenator•1h ago
> IMO it’s no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre.

Show me the incentives, and I'll show you the outcomes.

We really need to make security liabilities just that: liabilities. If you are running 20+ year-old code and you get hacked, you need to be fined in a way that will make you reconsider security as a priority.

Also, you need to be liable for all of the disruption that the security breach caused for customers. No, free credit monitoring does not count as recompense.

dpoloncsak•26m ago
I love this idea, but I feel like it just devolves into arguments over how to classify whether a specific exploit is or isn't technically a 0-day, so companies can or can't be held liable.
bongodongobob•11m ago
Companies get hacked because Bob in finance doesn't have MFA and got a phishing email. In my experience working for MSPs it's always been phishing and social engineering. I have never seen a company compromised by some obscure bug in software. This may be different for super large organizations that are international targets, but for the average person or business, you're better off spending time just putting MFA on everything you can and using common sense.
tptacek•2h ago
The backstory here, of course, is that the overwhelming majority of reports on any HackerOne program are garbage, and that garbage definitely includes 1990s sci.crypt style amateur cryptanalyses.
CaptainOfCoit•1h ago
> 1990s sci.crypt style amateur cryptanalyses

Just for fun, do you happen to have any links to public reports like that? Seems entertaining if nothing else.

CiPHPerCoder•1h ago
Most people don't make their spam public, but I did when I ran this bounty program:

https://hackerone.com/paragonie/hacktivity?type=team

The policy was immediate full disclosure, until people decided to flood us with racist memes. Those didn't get published.

Some notable stinkers:

https://hackerone.com/reports/149369

https://hackerone.com/reports/244836

https://hackerone.com/reports/115271

https://hackerone.com/reports/180074

lvncelot•56m ago
That last one has to be a troll, holy shit.
CaptainOfCoit•34m ago
From another bogus report from the same actor: https://hackerone.com/reports/180393

> Please read it and let me know and I'm very sorry for the last report :) also please don't close it as N/A and please don't publish it without my confirm to do not harm my Reputation on hacker on community

I was 90% sure it was a troll too, but based on this second report I'm not so sure anymore.

poorman•1h ago
There is definitely a misalignment of incentives with the bug bounty platforms. You get a very large number of useless reports, which creates a lot of noise, and you have to sift through that noise to occasionally find a serious report. So the platforms upsell you on using their people to triage the reports for you. Only these people do not have the domain expertise to understand your software and dig into the vulnerabilities.

If you want the top-tier "hackers" on the platforms to see your bug bounty program, you have to pay an upcharge for that too; again, a misalignment of incentives.

The best thing you can do is have an extremely clear bug-bounty program detailing what is in scope and out of scope.

Lastly, I know it's difficult to manage, but open source projects should also have a private vulnerability reporting mechanism set up. If you are using GitHub, you can set that up for your repo with: https://docs.github.com/en/code-security/security-advisories...

wslh•1h ago
The best thing you can do is to include an exploit when possible, so the report can be validated automatically and cut through the noise.
miohtama•1h ago
The useless reports are because there are a lot of useless people
Rygian•1h ago
Here's an idea, from a parallel universe: Cloudflare should have been forced, by law, to engage a neutral third-party auditor/pentester, and to fix or mitigate each finding, before being authorised to expose the CIRCL lib in public.

After that, any CVE opened by a member of the public, and subsequently confirmed by a third party neutral auditor/pentester, would result in 1) fines to Cloudflare, 2) award to the CVE opener, and 3) give grounds to Cloudflare to sue their initial auditor.

But that's just a thought experiment.

trklausss•1h ago
What do you mean, practices from safety-critical industries applied to security? Unpossible! (end /s)

For that you need regulation that enforces it. On a global scale it is pretty difficult, since it's a country-by-country thing... If you mean, e.g., customers in the US, then the US Congress needs to pass legislation on that. The trend, however, is to install backdoors everywhere, so good luck with that.

jjk7•1h ago
The license reads: 'THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"'.
Rygian•23m ago
If you bought a car and your dealer had you sign an EULA with that sentence in it (pertaining specifically to the security features of your car), would you feel safe driving it at highway speeds?
stonemetal12•4m ago
Every used car sold outside of the major brands' certified used car programs is "As Is". So yeah, I would.
jonathanstrange•53m ago
What? We're talking about a free open source library (that I happen to use). Nobody who writes and publishes software for free should be subject to any such regulations. That's why the licenses all contain some "provided as is, no warranty" clause.

Otherwise, nobody would ever write non-commercial cryptographic libraries any longer. Why take the risk? (And good luck with finding bugs in commercial, closed source cryptographic libraries and getting them fixed...)

Rygian•19m ago
Taking the parallel-universe idea a bit further: for-profit actors must accept financial accountability for the open source software they engage with, whereas not-for-profit actors are exempt or even incentivised.

Build an open-source security solution as an individual? Well done you, and maybe here's a grant to be able to spend more of your free time on it, if you choose to do so.

Use an open-source security solution to sell stuff to the public and make a profit? Make sure you can vouch for the security, otherwise no profit for you.

semiquaver•35m ago
Seems like you want open source software to die.
Rygian•25m ago
A more charitable interpretation could be "seems like you want large corporations, which have the financial means, to take security seriously and build a respectable process before publishing security solutions whatever the license".
csmantle•1h ago
Validating user-supplied EC points is one of the most basic yet crucial steps in a sound implementation. I wonder why no one (and no test) at Cloudflare caught these oversights pre-signoff and pre-release.
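
A minimal sketch (mine, not from the post) of the kind of check being described, assuming a generic short Weierstrass curve y^2 = x^3 + Ax + B over GF(P); FourQ itself is a twisted Edwards curve over GF(p^2) with a cofactor, so the real validation differs, but the principle is the same:

    package main

    import (
        "fmt"
        "math/big"
    )

    // Curve holds the parameters of y^2 = x^3 + A*x + B over GF(P).
    type Curve struct {
        P, A, B *big.Int
    }

    // ValidatePoint rejects coordinates that are out of range or not on the curve.
    // A real implementation would also reject the identity element and, for curves
    // with cofactor > 1, check membership in the prime-order subgroup.
    func (c *Curve) ValidatePoint(x, y *big.Int) bool {
        // Coordinates must be canonical field elements in [0, P-1].
        if x.Sign() < 0 || x.Cmp(c.P) >= 0 || y.Sign() < 0 || y.Cmp(c.P) >= 0 {
            return false
        }
        // Check y^2 == x^3 + A*x + B (mod P).
        lhs := new(big.Int).Mul(y, y)
        lhs.Mod(lhs, c.P)

        rhs := new(big.Int).Mul(x, x)
        rhs.Mul(rhs, x)
        rhs.Add(rhs, new(big.Int).Mul(c.A, x))
        rhs.Add(rhs, c.B)
        rhs.Mod(rhs, c.P)

        return lhs.Cmp(rhs) == 0
    }

    func main() {
        // Toy curve y^2 = x^3 + 2x + 3 over GF(97), for illustration only.
        c := &Curve{P: big.NewInt(97), A: big.NewInt(2), B: big.NewInt(3)}
        fmt.Println(c.ValidatePoint(big.NewInt(3), big.NewInt(6))) // true: on the curve
        fmt.Println(c.ValidatePoint(big.NewInt(3), big.NewInt(7))) // false: attacker-chosen junk
    }

Skipping checks like these is what lets an attacker feed in a point on a different, weaker curve or in a small subgroup and learn information about the secret scalar.
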
bri3d•1h ago
The article's deep dive into the math does it a disservice, IMO, by making this seem like an arcane and complex issue. This is an EC Cryptography 101-level mistake.

Reading the actual CIRCL library source and README on GitHub (https://github.com/cloudflare/circl) makes me see it as fundamentally unserious, though: there's a big "lol don't use this!" disclaimer, and no elaboration on the considerations applied to each implementation to avoid common pitfalls, no mention of third- or first-party audit reports, nor really anything else I'd expect to see from a cryptography library.

tptacek•15m ago
It's more subtle than that and is not actually that simple (though the attack is). The "modern" curve constructions pioneered by Bernstein are supposed to be misuse-resistant in this regard; Bernstein popularized both Montgomery and Edwards curves. His two major curve implementations are Curve25519 and Ed25519, which are different mathematical representations of the same underlying curve. Curve25519 famously isn't vulnerable to this attack!
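
As a rough illustration of what that misuse resistance looks like in practice (my sketch, not part of the point above): Go's golang.org/x/crypto/curve25519 exposes X25519 over opaque 32-byte strings, clamps the scalar internally, and returns an error when the peer's point is low-order (the shared secret would be all zeros), so callers never validate curve points by hand:

    package main

    import (
        "crypto/rand"
        "fmt"

        "golang.org/x/crypto/curve25519"
    )

    func main() {
        // Our private scalar: 32 random bytes; X25519 clamps it internally.
        priv := make([]byte, curve25519.ScalarSize)
        if _, err := rand.Read(priv); err != nil {
            panic(err)
        }

        // Our public key is the scalar multiplied by the fixed base point.
        pub, err := curve25519.X25519(priv, curve25519.Basepoint)
        if err != nil {
            panic(err)
        }

        // Shared-secret computation with a peer's public key (pub stands in for
        // the peer here). If the peer sends a low-order point, X25519 returns an
        // error instead of a degenerate all-zero secret.
        shared, err := curve25519.X25519(priv, pub)
        if err != nil {
            fmt.Println("rejected low-order peer point:", err)
            return
        }
        fmt.Printf("shared secret: %x\n", shared)
    }

Curves with larger cofactors don't get this property for free; they need explicit input validation or cofactor clearing at the API boundary.
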
tptacek•28m ago
Oh, my God, I'm just now remembering why this curve was called FourQ.