
Filing the corners off my MacBooks

https://kentwalters.com/posts/corners/
318•normanvalentine•3h ago•194 comments

Artemis II safely splashes down

https://www.cbsnews.com/live-updates/artemis-ii-splashdown-return/
298•areoform•1h ago•94 comments

Chimpanzees in Uganda locked in eight-year 'civil war', say researchers

https://www.bbc.com/news/articles/cr71lkzv49po
225•neversaydie•6h ago•116 comments

1D Chess

https://rowan441.github.io/1dchess/chess.html
649•burnt-resistor•9h ago•124 comments

Installing Every* Firefox Extension

https://jack.cab/blog/every-firefox-extension
118•RohanAdwankar•3h ago•20 comments

WireGuard makes new Windows release following Microsoft signing resolution

https://lists.zx2c4.com/pipermail/wireguard/2026-April/009561.html
395•zx2c4•9h ago•108 comments

Industrial design files for Keychron keyboards and mice

https://github.com/Keychron/Keychron-Keyboards-Hardware-Design
301•stingraycharles•9h ago•92 comments

AI assistance when contributing to the Linux kernel

https://github.com/torvalds/linux/blob/master/Documentation/process/coding-assistants.rst
156•hmokiguess•6h ago•123 comments

Italo Calvino: A Traveller in a World of Uncertainty

https://www.historytoday.com/archive/portrait-author-historian/italo-calvino-traveller-world-unce...
17•lermontov•1h ago•4 comments

JSON formatter Chrome plugin now closed and injecting adware

https://github.com/callumlocke/json-formatter
142•jkl5xx•6h ago•75 comments

Sam Altman's response to Molotov cocktail incident

https://blog.samaltman.com/2279512
155•jack_hanford•2h ago•280 comments

Helium is hard to replace

https://www.construction-physics.com/p/helium-is-hard-to-replace
254•JumpCrisscross•10h ago•170 comments

CPU-Z and HWMonitor compromised

https://www.theregister.com/2026/04/10/cpuid_site_hijacked/
258•pashadee•12h ago•83 comments

Watgo – A WebAssembly Toolkit for Go

https://eli.thegreenplace.net/2026/watgo-a-webassembly-toolkit-for-go/
77•ibobev•6h ago•5 comments

What is RISC-V and why it matters to Canonical

https://ubuntu.com/blog/risc-v-101-what-is-it-and-what-does-it-mean-for-canonical
92•fork-bomber•2d ago•55 comments

Launch HN: Twill.ai (YC S25) – Delegate to cloud agents, get back PRs

https://twill.ai
51•danoandco•9h ago•46 comments

Nowhere is safe

https://steveblank.com/2026/04/09/nowhere-is-safe/
119•sblank•6h ago•158 comments

Show HN: FluidCAD – Parametric CAD with JavaScript

https://fluidcad.io/
105•maouida•6h ago•20 comments

PGLite Evangelism

https://substack.com/home/post/p-193415720
19•surprisetalk•1d ago•1 comments

The Bra-and-Girdle Maker That Fashioned the Impossible for NASA

https://thereader.mitpress.mit.edu/the-bra-and-girdle-maker-that-fashioned-the-impossible-for-nasa/
27•sohkamyung•1d ago•2 comments

Vinyl Cache and Varnish Cache

https://vinyl-cache.org/organization/on_vinyl_cache_and_varnish_cache.html
25•Foxboron•2d ago•2 comments

Intel 486 CPU announced April 10, 1989

https://dfarq.homeip.net/intel-486-cpu-announced-april-10-1989/
138•jnord•13h ago•138 comments

Bild AI (YC W25) Is Hiring a Founding Product Engineer

https://www.ycombinator.com/companies/bild-ai/jobs/dDMaxVN-founding-product-engineer
1•rooppal•8h ago

A compelling title that is cryptic enough to get you to take action on it

https://ericwbailey.website/published/a-compelling-title-that-is-cryptic-enough-to-get-you-to-tak...
161•mooreds•8h ago•86 comments

Clojure on Fennel Part One: Persistent Data Structures

https://andreyor.st/posts/2026-04-07-clojure-on-fennel-part-one-persistent-data-structures/
129•roxolotl•3d ago•10 comments

OpenClaw’s memory is unreliable, and you don’t know when it will break

https://blog.nishantsoni.com/p/ive-seen-a-thousand-openclaw-deploys
57•sonink•6h ago•79 comments

You can't trust macOS Privacy and Security settings

https://eclecticlight.co/2026/04/10/why-you-cant-trust-privacy-security/
430•zdw•10h ago•149 comments

Show HN: Eve – Managed OpenClaw for work

https://eve.new/login
28•zachdive•8h ago•25 comments

Show HN: A WYSIWYG word processor in Python

https://codeberg.org/chrisecker/miniword
63•chrisecker•6h ago•26 comments

Simulating a 2D Quadcopter from Scratch

https://mrandri19.github.io/2026/04/03/2d-quadcopter-simulation.html
19•daww•2d ago•8 comments

Has Mythos just broken the deal that kept the internet safe?

https://martinalderson.com/posts/has-mythos-just-broken-the-deal-that-kept-the-internet-safe/
35•jnord•2h ago

Comments

theamk•2h ago
> According to Anthropic, Mythos Preview successfully generates a working exploit for Firefox's JS shell in 72.4% of trials

Why are AI people so dramatic? Ok, there is yet another JS sandbox escape - not the first one, not the last one. It will be patched, and the bar will be raised for a bit... at least until the next exploit is found.

If anything, AI will make _weaponized_ exploits less likely. Before, one had to find a talented person, and get pretty lucky too. If this AI is as good as promised, you can have dependabot-style exploit finder running 24/7 for the 1/10th cost of a single FTE. If it's really that good, I'd expect that all browser authors adopt those into their development process.

PunchyHamster•1h ago
> Before, one had to find a talented person, and get pretty lucky too. If this AI is as good as promised, you can have dependabot-style exploit finder running 24/7 for the 1/10th cost of a single FTE

Not just you. EVERYONE doing ANY kind of software will have to, because otherwise attackers can simply pick and choose targets to point their exploit bots at.

rcxdude•1h ago
Which has always been the case. Attackers only have to find one exploit in the weakest part of the system, and usually that's more a function of grunt work than it is being particularly sophisticated.
fleebee•1h ago
Well, you can only do that if you have access to the model. We're setting a precedent for the AI labs getting to pick and choose.
themafia•1h ago
> doing ANY kind of software

That's not at all clear. JS escape exploits have high value on the current Internet, so there's a lot of prior art. It's not surprising at all that this is what their model found, and it's not a statistic that immediately suggests any broader implications.

theamk•32m ago
Not "ANY" kind of software, only the software that handles untrusted data in a non-trivial way. A lot of software, like local tools, does not.
mingus88•1h ago
All software has bugs. What this tells me is that the actors with the best models (and Anthropic apparently has one so good and expensive that it is outstripping compute supply) will find the exploits first, and probably the ones that are hardest to find.

So yeah, dependabot, but the richest actors will have the best bits, and they probably won't share the ones that nobody else's models can find.

Shank•1h ago
> What this tells me is that the actors with the best models (and Anthropic apparently has one so good and expensive that it is outstripping compute supply) will find the exploits first, and probably the ones that are hardest to find

Presumably we would not give the AI models to the "good guys" because then they would also find and patch these vulnerabilities?

c0balt•14m ago
Someone's "good guys" are just someone else's "bad guys". Access to a valuable resource/tool that provides some sort of power and utility will be just another contended item.
p-e-w•1h ago
You’re asking why people are being “dramatic” about an automated system that can do what highly specialized experts get paid hundreds of thousands of dollars to do?

It’s just fascinating to see how AI’s accomplishments are being systematically downplayed. I guess when an AI proves that P!=NP, I’m going to read on this forum “so what, mathematicians prove conjectures all the time, and also, we pretty much always knew this was true anyway”.

Shank•1h ago
> I guess when an AI proves that P!=NP,

What would be the practical impacts of this discovery?

nine_k•1h ago
Likely all existing cryptography would become crackable, some of it possibly very readily.
fwip•1h ago
I think you read it backwards - that's a possible consequence of P==NP, not P!=NP.
nine_k•1h ago
Yes, I meant the equality.

We already operate on the assumption that P ≠ NP, so little would change if that were proved.

jannyfer•1h ago
Isn’t it the opposite?
rogerrogerr•1h ago
(Assuming you mean P==NP)

Would it become crackable, or just theoretically crackable?

E.g. it's one thing to show it's possible to fly to Mars, it's another thing to actually do it.

localuser13•1h ago
Not really:

* It's possible, very likely even, that even if somehow P=NP, the fastest algorithm for an NP-complete problem turns out to be something like n^1000, which is technically polynomial, but not practical in any way.

* The proof may not be constructive, so we may just know that P=NP without being able to actually exhibit a polynomial-time algorithm (nitpick: technically if P=NP there is a known construction, Levin's universal search, that solves any NP problem in polynomial time, but it's extremely slow in practice, since for example it involves iterating over all possible programs).
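The first caveat is easy to make concrete with toy numbers (purely illustrative, not tied to any real algorithm): a hypothetical n^1000 "polynomial" algorithm does more work than a 2^n brute force at every input size anyone would ever actually run.

```python
# Toy comparison: "polynomial" does not mean "fast".
# poly_steps models a hypothetical O(n^1000) algorithm's step count;
# expo_steps models a 2^n brute-force search's step count.
def poly_steps(n: int, k: int = 1000) -> int:
    return n ** k

def expo_steps(n: int) -> int:
    return 2 ** n

# For every input size up to a few thousand, the exponential
# algorithm does strictly fewer steps; n^1000 only pulls ahead
# past n in the ten-thousands, where both counts are
# astronomically infeasible anyway.
for n in range(2, 2000):
    assert poly_steps(n) > expo_steps(n)
```

The crossover where 2^n finally overtakes n^1000 (around n in the low five figures, solving n = 1000·log2(n)) is academic: both step counts are far beyond anything computable by then.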

layer8•1h ago
It would be warranted if Mythos could jailbreak an up-to-date iPhone. (Maybe it can?) That would actually also be nice, “please rewrite without Liquid Glass”.
localuser13•1h ago
I am sceptical because AI companies, and Anthropic in particular, like to overplay their achievements and build undeserved hype. I also don't understand all the caveats (maybe the official announcement is clearer about what this really means).

But yeah, if their model can reliably write an exploit for novel bugs (starting from a crash, not a vulnerable line of code) then it's very significant. I guess we'll see, right?

edit: Actually the original post IS dramatic: "Has Mythos just broken the deal that kept the internet safe? For nearly 20 years the deal has been simple: you click a link, arbitrary code runs on your device, and a stack of sandboxes keeps that code from doing anything nasty". Browser exploits have existed before, and this capability helps defenders as much as it helps attackers. It's not like JS is going anywhere.

SkyPuncher•1h ago
Further, Opus identified most of the vulnerabilities itself already. It just couldn’t exploit them.

Mythos seems much, much more creative and self-directed, but I'm not yet convinced the core capabilities are significantly higher than what's possible today.

The full price of finding the vulnerabilities was also something like $20k. That's a price point at which you could bring in a skilled professional to accomplish the same task.

ryeights•1h ago
Remember, that's the most expensive this capability will ever be.
paulryanrogers•1h ago
If its model is opened up and can run on commodity hardware. Otherwise the price could go up as RAM and silicon prices climb.
svnt•1h ago
Ding ding ding, and this is why you are hearing about it. It is marketing for enterprise to pay a premium for the next model, with maybe a wakeup call to enforcement agencies as well (which is also marketing).

Codegen for many companies is much less continuous. Security is always on, and always a motivator.

imperio59•1h ago
This whole thing has just been a huge PR stunt the whole time. Even the original leak of the blog post was just more fuel to the hype.
SpicyLemonZest•1h ago
Anthropic is saying exactly what you're saying. They don't believe that software security is permanently ruined. They just want to ensure that good defensive techniques like the ones you describe are developed before large numbers of attackers get their hands on the technology.
ece•1h ago
Standard disclosure rules should apply, give security stake holders 90-days of advance access, then release the model.
riknos314•1h ago
By that logic, though, the model would be released 90 days after the last vulnerability it finds, i.e. never.
ece•5m ago
I am talking about red teams being able to use the model for 90 days before everyone has access, since it's the model that's finding vulnerabilities.
heliumtera•1h ago
>Anthropic just launched a model so good it escapes every known sandbox.

No, they launched a card with that capability written on it.

signatoremo•49m ago
And companies such as Google and Nvidia are just happy to trust them and lend their names to Anthropic because? Maybe a big conspiracy?
heliumtera•24m ago
token usage goes up, said companies' $TICKER prices go up, shareholder value delivered.

when shareholders are basically the same, and these companies have a legal obligation to fulfill their interests... is it a conspiracy? shareholders certainly conspire to achieve their goals, smarty

0xbadcafebee•1h ago
No, you have not been safe all this time. Every security person I know has known for ages that you need to run NoScript to block all JavaScript if you want to be remotely secure on the web. We also know about all the 0-days found in all browsers every year. Same for mobile devices. You have always been insecure. AI just makes it slightly faster to do what hackers have been doing for ages.

BTW: Mythos is not new. OpenAI literally released a press release 1 month ago talking about GPT 5.4's red-teaming features being so powerful that they require ID verification to use, and will use heuristics to downgrade you if you look like you're doing something shady. I guess everyone's got a short memory, or Anthropic's PR is so good that people genuinely don't understand that OpenAI's models are superior to Anthropic's.

Rekindle8090•1h ago
NoScript breaks 99% of websites these days, entirely. Netsec freaks are so disingenuous.

"BTW: Mythos is not new. OpenAI literally released a press release 1 month ago " these two sentences make no semantic sense together

p-e-w•1h ago
The claim that you need to disable JavaScript to be secure is bullshit anyway, otherwise disabling JavaScript would be standard practice in any security-critical environment such as high-level government offices, which it most certainly isn’t.
rootusrootus•1h ago
> Anthropic's PR is so good that people genuinely don't understand that OpenAI's models are superior to Anthropic's

That is a provocative statement that would be especially interesting if you were to add some supporting evidence.

CuriouslyC•1h ago
I've been mystified by how sticky Anthropic's marketing is for a while. It's really surprising given how poorly they run community relations compared with OAI, and how they're just now starting to be transparent.
rcxdude•1h ago
I dunno, it's not obvious to me that it shifts the balance that way. It's always kind of been the case that a sufficiently determined attacker can spend way more effort breaking into a system than you put into securing it. If anyone can find the holes, that includes the people defending the system. This might actually make state-level threats less scary than they were before.
sharts•1h ago
Meh the cybersecurity risk isn’t LLMs. It’s the already fundamentally broken systems that it easily can exploit.

Are folks actually going to go back and fix things that were only secure because they were buried in layers of obfuscation and obscurity?

Probably not. And that’s the real cyber security risk. Short term profit always wins.

bloppe•1h ago
If only there were some way to patch vulnerabilities once they are discovered.
readthenotes1•1h ago
I tried to read the article, and what I got out of it was that the author believes the deal that keeps the internet safe is that we just don't try to break it hard enough. Ignoring all the state actors who do exactly that, all the time.

Seems something of an unusual take on the state of the world.

Morromist•1h ago
This is how a lot of the world works. Certain things aren't done very much because it takes a lot of human effort to do them, and that creates a status quo.

For example, a lot more people would sue each other over petty things if it suddenly became very easy and cost-efficient. It's not, so they don't.

Another example of AI doing this exact kind of thing in another realm: in the past, convincing someone you were somebody they should give money to for a scam was very possible, but also difficult and not very cost-efficient. You could try to impersonate someone's daughter or a police officer, but it took a lot of effort to get it right.

Now, with voice-mimicking AI, deepfakes, social media to mine for personal info, etc., it's not as difficult, and so, very likely, it's becoming a bigger problem than it was.

SpicyLemonZest•57m ago
Really? I think that's pretty much accurate. If you've ever visited a website whose authors you don't know and trust, you've exposed yourself to potential attacks and trusted in sandboxing to keep your computer safe.
rvz•1h ago
This is instead another great advertisement for Rust. Anthropic really got the Mythos marketing scarecrows out once again.

Dario is trying to scare you into buying into his IPO, and you're over-estimating the capability of Mythos... because he said so? With no independent reviews of the research, and with many security researchers and experts accusing them of blatant scaremongering.

This is Anthropic's latest attempt to frame local models and get them banned, as they stand to be a threat to Anthropic's business model.

jMyles•1h ago
> the deal has been simple: you click a link, arbitrary code runs on your device, and a stack of sandboxes keeps that code from doing anything nasty.

At most, Mythos has reminded us that this "deal" is subject to frequent cycles of being compromised-and-patched.

From time to time I have run browsers configured for opt-in JavaScript (e.g., uMatrix), but man, it's a lot of work to live that way.
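For the curious, uMatrix's text rules (in its "My rules" pane) take the form `source destination type action`, so an opt-in JavaScript policy looks roughly like this (a sketch from memory; the hostnames are placeholders, so check the syntax against your own install):

```text
* * script block
* * frame block
example.com example.com script allow
example.com cdn.example.com script allow
```

Every site you actually use then needs its own allow rules, which is where the "lot of work" comes in.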

ajross•1h ago
I honestly think this argument (that cheap vulnerabilities mean more zero-days) is backwards. Making vulnerability detection cheaper shifts the balance in favor of the good guys, because it dilutes the size of the black market that discoverers might otherwise be tempted to sell into.

Stated differently: right now black hat hacking is a valuable skill that can be turned into money easily. Once everyone can do it the incentives shift and the black hats will disappear. And that leaves the next most incentivized group in control of the market, who are presumably the software vendors.

Basically Microsoft and Google and company used to have to pay bug bounties and pray. Now it's practical just to throw a few million dollars at Anthropic instead.

jauntywundrkind•57m ago
I see this as a Brandolini's Law 2.0, a software supplement really. Whereas before it was:

> The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

Now the energy needed to secure software against exploits is orders of magnitude bigger than that needed to find them.

The combination of the LLM's deep expertise and infinite patience meeting the vastly increasing surface of software has a certain apocalyptic, chaos-gods ruin to it, just as the well-known bias for mistruth to unfairly propagate itself bedevils this good planet.

firer•3m ago
Security efforts are not evenly distributed, even within a single project. This includes both the thinking that the developers put in, and the scrutiny given to a piece of code by researchers.

The initial batch of vulnerabilities publicly disclosed by Mythos demonstrates that perfectly. None of the bugs themselves are especially interesting or complex, in my opinion. They were found by applying effort to a very large amount of code that included under-scrutinized areas, where bugs hid. Yes, even in projects like Linux and OpenBSD there are many pieces of code that aren't properly vetted, because of the finite amount of developer/researcher time allotted.

The fact that this effort is now much cheaper does indeed change things. But really strong sandboxing solutions, such as gVisor or Firecracker, do a good job of keeping the attack surface very small, with all of it heavily scrutinized.

Until we see more of the bugs that were found, it remains to be seen whether or not the post's premise about sandboxes is correct.