
Claude Opus 4.6

https://www.anthropic.com/news/claude-opus-4-6
803•HellsMaddy•2h ago•355 comments

GPT-5.3-Codex

https://openai.com/index/introducing-gpt-5-3-codex/
518•meetpateltech•1h ago•185 comments

Orchestrate teams of Claude Code sessions

https://code.claude.com/docs/en/agent-teams
168•davidbarker•2h ago•81 comments

Don't rent the cloud, own instead

https://blog.comma.ai/datacenter/
966•Torq_boi•14h ago•403 comments

There Will Come Soft Rains (1950) [pdf]

https://www.btboces.org/Downloads/7_There%20Will%20Come%20Soft%20Rains%20by%20Ray%20Bradbury.pdf
25•wallflower•4d ago•5 comments

Ardour 9.0 Released

https://ardour.org/whatsnew.html
95•PaulDavisThe1st•1h ago•15 comments

A small, shared skill library by builders, for builders. (human and agent)

https://github.com/PsiACE/skills
19•recrush•1h ago•0 comments

European Commission Trials Matrix to Replace Teams

https://www.euractiv.com/news/commission-trials-european-open-source-communications-software/
240•Arathorn•3h ago•126 comments

The New Collabora Office for Desktop

https://www.collaboraonline.com/collabora-office/
117•mfld•6h ago•66 comments

Flock CEO calls Deflock a "terrorist organization" [video]

https://www.youtube.com/watch?v=l-kZGrDz7PU
74•cdrnsf•1h ago•13 comments

Advancing finance with Claude Opus 4.6

https://claude.com/blog/opus-4-6-finance
72•da_grift_shift•2h ago•12 comments

Maihem (YC W24): hiring sr robotics perception engineer (London, on-site)

https://jobs.ashbyhq.com/maihem/8da3fa8b-5544-45de-a99e-888021519758
1•mxrns•3h ago

Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models

https://arxiv.org/abs/2512.04124
19•toomuchtodo•1h ago•18 comments

150 MB Minimal FreeBSD Installation

https://vermaden.wordpress.com/2026/02/01/150-mb-minimal-freebsd-installation/
83•vermaden•4d ago•11 comments

Anthropic's Claude Opus 4.6 uncovers 500 zero-day flaws in open-source code

https://www.axios.com/2026/02/05/anthropic-claude-opus-46-software-hunting
88•speckx•1h ago•45 comments

We tasked Opus 4.6 using agent teams to build a C Compiler

https://www.anthropic.com/engineering/building-c-compiler
97•modeless•58m ago•82 comments

Company as Code

https://blog.42futures.com/p/company-as-code
178•ahamez•7h ago•94 comments

When internal hostnames are leaked to the clown

https://rachelbythebay.com/w/2026/02/03/badnas/
396•zdw•14h ago•212 comments

GB Renewables Map

https://renewables-map.robinhawkes.com/
104•RobinL•7h ago•37 comments

Nanobot: Ultra-Lightweight Alternative to OpenClaw

https://github.com/HKUDS/nanobot
174•ms7892•10h ago•96 comments

Fela Kuti First African to Get Grammys Lifetime Achievement Award

https://www.aljazeera.com/news/2026/2/1/fela-kuti-becomes-first-african-to-get-grammys-lifetime-a...
72•defrost•4d ago•18 comments

A Broken Heart

https://allenpike.com/2026/a-broken-heart/
129•memalign•4d ago•34 comments

Programming Patterns: The Story of the Jacquard Loom

https://www.scienceandindustrymuseum.org.uk/objects-and-stories/jacquard-loom
64•andsoitis•4d ago•26 comments

The time I didn't meet Jeffrey Epstein

https://scottaaronson.blog/?p=9534
20•pfdietz•36m ago•4 comments

CIA suddenly stops publishing, removes archives of The World Factbook

https://simonwillison.net/2026/Feb/5/the-world-factbook/
177•ck2•5h ago•56 comments

Triton Bespoke Layouts

https://www.lei.chat/posts/triton-bespoke-layouts/
6•matt_d•4d ago•0 comments

Simply Scheme: Introducing Computer Science (1999)

https://people.eecs.berkeley.edu/~bh/ss-toc2.html
85•AlexeyBrin•4d ago•27 comments

Unsealed court documents show teen addiction was big tech's "top priority"

https://techoversight.org/2026/01/25/top-report-mdl-jan-25/
195•Shamar•2h ago•101 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
131•vkazanov•11h ago•36 comments

Making Ferrite Core Inductors at Home

https://danielmangum.com/posts/making-ferrite-core-inductors-home/
93•hasheddan•3d ago•29 comments

Anthropic's Claude Opus 4.6 uncovers 500 zero-day flaws in open-source code

https://www.axios.com/2026/02/05/anthropic-claude-opus-46-software-hunting
87•speckx•1h ago

Comments

garbawarb•1h ago
Have they been verified?
emp17344•1h ago
Sounds like this is just a claim Anthropic is making with no evidence to support it. This is an ad.
input_sh•57m ago
How can you not believe them!? Anthropic stopped Chinese hackers from using Claude to conduct a large-scale cyber espionage attack just months ago!
littlestymaar•43m ago
Poe's law strikes again: I had to check your profile to be sure this was sarcasm.
input_sh•8m ago
You checked yourself!? Don't let your boss know, you could've saved some time by orchestrating a team of Claude agents to do that for you!
xiphias2•1h ago
Just 100 of the 500 are from OpenClaw, which was created by Opus 4.5
ains•1h ago
https://archive.is/N6In9
siva7•1h ago
Wasn't this Opus thing released like 30 minutes ago?
jjice•1h ago
A bunch of companies get early access.
input_sh•1h ago
Yes, you just need to be on a Claude++ plan!
Topfi•57m ago
I understand the confusion; this was done by Anthropic's internal red team as part of model testing prior to release.
tintor•36m ago
Singularity
blinding-streak•22m ago
Opus 4.6 uses time travel.
acedTrex•1h ago
Create the problem, sell the solution remains an undefeated business strategy.
_tk_•1h ago
The system card unfortunately only refers to this [0] blog post and doesn't go into any more detail. In the blog post Anthropic researchers claim: "So far, we've found and validated more than 500 high-severity vulnerabilities".

The three examples given include two buffer overflows, which could very well be cherry-picked. It's hard to evaluate whether these vulns are actually "hard to find". I'd be interested to see the full list of CVEs and CVSS ratings to get an idea of how good these findings are.

Given the bogus claims [1] around GenAI and security, we should be very skeptical of this news.

[0] https://red.anthropic.com/2026/zero-days/

[1] https://doublepulsar.com/cyberslop-meet-the-new-threat-actor...

majormajor•59m ago
The Ghostscript one is interesting in terms of specific-vs-general effectiveness:

---

> Claude initially went down several dead ends when searching for a vulnerability—both attempting to fuzz the code, and, after this failed, attempting manual analysis. Neither of these methods yielded any significant findings.

...

> "The commit shows it's adding stack bounds checking - this suggests there was a vulnerability before this check was added. … If this commit adds bounds checking, then the code before this commit was vulnerable … So to trigger the vulnerability, I would need to test against a version of the code before this fix was applied."

...

> "Let me check if maybe the checks are incomplete or there's another code path. Let me look at the other caller in gdevpsfx.c … Aha! This is very interesting! In gdevpsfx.c, the call to gs_type1_blend at line 292 does NOT have the bounds checking that was added in gstype1.c."

---

Its attempt to analyze the code failed, but when it saw a concrete example of "in the history, someone added bounds checking", it did an "I wonder if they did it everywhere else for this func call" pass.

So after it considered that function based on the commit history, it found something that it didn't find in its initial fuzzing and open-ended code analysis.

As someone who still reads the code that Claude writes, this sort of "big picture miss, small picture excellence" is not very surprising or new. It's interesting to think about what it would take to do that precise digging across a whole codebase, especially if it needs some sort of modularization/summarization of context versus trying to digest tens of millions of lines at once.
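The "did they add the check everywhere else for this func call" pass described above can be sketched mechanically. This is a toy illustration, not Anthropic's actual tooling: the function name `blend()` and the guard pattern are made up, and a real pass would use proper parsing rather than regexes.

```python
import re

# Hypothetical sketch: given a patch that adds a bounds check before
# calls to some function (here a made-up name, blend()), flag any
# remaining call sites that lack the same guard on the preceding line.
GUARD = re.compile(r"if\s*\(.*<\s*limit\s*\)")  # the added bounds check
CALL = re.compile(r"\bblend\s*\(")              # the guarded function

def unguarded_calls(source: str) -> list[int]:
    """Return 1-based line numbers of blend() calls with no guard above them."""
    lines = source.splitlines()
    hits = []
    for i, line in enumerate(lines):
        if CALL.search(line):
            prev = lines[i - 1] if i > 0 else ""
            if not GUARD.search(prev):
                hits.append(i + 1)
    return hits

patched = "if (n < limit)\n    blend(buf, n);\n"  # the fixed caller
missed = "blend(buf, n);\n"                       # a second, unguarded caller
print(unguarded_calls(patched))
print(unguarded_calls(missed))
```

The point is that once the commit history names the invariant (the bounds check), checking every other call site against it is a narrow, pattern-matching task, exactly the "small picture" work the model did well.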

tptacek•41m ago
I know some of the people involved here, and the general chatter around LLM-guided vulnerability discovery, and I am not at all skeptical about this.
malfist•37m ago
That's good for you, but that means nothing to anybody else.
pchristensen•29m ago
Nobody is right about everything, but tptacek's takes on software security are a good place to start.
tptacek•23m ago
I'm interested in whether there's a well-known vulnerability researcher/exploit developer beating the drum that LLMs are overblown for this application. All I see is the opposite thing. A year or so ago I arrived at the conclusion that if I was going to stay in software security, I was going to have to bring myself up to speed with LLMs. At the time I thought that was a distinctive insight, but, no, if anything, I was 6-9 months behind everybody else in my field about it.

There's a lot of vuln researchers out there. Someone's gotta be making the case against. Where are they?

From what I can see, vulnerability research combines many of the attributes that make problems especially amenable to LLM loop solutions: huge corpus of operationalizable prior art, heavily pattern dependent, simple closed loops, forward progress with dumb stimulus/response tooling, lots of search problems.

Of course it works. Why would anybody think otherwise?
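The "simple closed loops" with "dumb stimulus/response tooling" mentioned above can be sketched as a driver that proposes an input, runs the target, and feeds the outcome back. Everything here is hypothetical for illustration: the model is a stub (`propose_input`), and the target is a toy parser that faults on an oversized length field.

```python
def target(data: bytes) -> None:
    """Toy target under test: faults when the length field exceeds the payload."""
    if len(data) >= 2 and data[0] > len(data) - 1:
        raise IndexError("length field exceeds buffer")

def propose_input(history):
    """Stub standing in for an LLM: pick the next candidate from past outcomes."""
    # Naive search: try increasing length-field values each round.
    n = len(history)
    return bytes([n, 0x41])

def fuzz_loop(max_iters=10):
    """Closed loop: propose, run, observe, repeat until a crashing input is found."""
    history = []
    for _ in range(max_iters):
        candidate = propose_input(history)
        try:
            target(candidate)
            history.append((candidate, "ok"))  # feedback for the next proposal
        except Exception as exc:
            return candidate, exc              # forward progress: a crasher
    return None, None

crasher, err = fuzz_loop()
print(crasher, err)
```

The loop's success criterion is a crash, which is cheap and unambiguous to check, which is what makes this class of problem so amenable to automated search.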

You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where all the action has been at in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties have attracted slop for so long before mainstream LLMs existed they might well have been the inspiration for slop itself.

Also, a very useful component of a mental model about vulnerability research that a lot of people seem to lack (not just about AI, but in all sorts of other settings): money buys vulnerability research outcomes. Anthropic has eighteen squijillion dollars. Obviously, they have serious vuln researchers. Vuln research outcomes are in the model cards for OpenAI and Anthropic.

NitpickLawyer•10m ago
> You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where all the action has been at in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties have attracted slop for so long before mainstream LLMs existed they might well have been the inspiration for slop itself.

Yeah, that's just media reporting for you. As anyone who ever administered a bug bounty programme on regular sites (h1, bugcrowd, etc) can tell you, there was an absolute deluge of slop for years before LLMs came to the scene. It was just manual slop (by manual I mean running wapiti and c/p the reports to h1).

tptacek•4m ago
I did some triage work for clients at Latacora and I would rather deal with LLM slop than argue with another person 10 time zones away trying to convince me that something they're doing in the Chrome Inspector constitutes a zero-day. At least there's a possibility that LLM slop might contain some information. You spent tokens on it!
catoc•9m ago
It does if the person making the statement has a track record and proven expertise on the topic - and in this case… it actually may mean something to other people
shimman•7m ago
Yes, as we all know that unsourced unsubstantiated statements are the best way to verify claims regarding engineering practices. Especially when said person has a financial stake in the outcomes of said claims.

No conflict of interest here at all!

tptacek•3m ago
I have zero financial stake in Anthropic and more broadly my career is more threatened by LLM-assisted vulnerability research (something I do not personally do serious work on) than it is aided by it, but I understand that the first principal component of casual skepticism on HN is "must be a conflict of interest".
aaaalone•13m ago
See it as a signal under many and not as some face value.

After all they need time to fix the cves.

And it doesn't matter to you as long as your investment into this is just 20 or 100 bucks per month anyway.

fred_is_fred•1h ago
Is the word zero-day here superfluous? If they were previously unknown doesn't that make them zero-day by definition?
bink•50m ago
Yes. As a security researcher this always annoys me.
tptacek•39m ago
It's a term of art. In print media, the connotation is "vulnerabilities embedded into shipping software", as opposed to things like misconfigurations.
limagnolia•35m ago
I thought zero-day meant actively being exploited in the wild before a patch is available?
zhengyi13•1h ago
I feel like Daniel @ curl might have opinions on this.
Legend2440•12m ago
You’re right, he does: https://daniel.haxx.se/blog/2025/10/10/a-new-breed-of-analyz...

Curl fully supports the use of AI tools by legitimate security researchers to catch bugs, and they have fixed dozens caught in this way. It’s just idiots submitting bugs they don’t understand that’s a problem.

ChrisArchitect•1h ago
Earlier source: https://red.anthropic.com/2026/zero-days/ (https://news.ycombinator.com/item?id=46902374)
mrkeen•1h ago
Daniel Stenberg has been vocal the last few months on Mastodon about being overwhelmed by false security issues submitted to the curl project.

So much so that he had to eventually close the bug bounty program.

https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-b...

tptacek•40m ago
We're discussing a project led by actual vulnerability researchers, not random people in Indonesia hoping to score $50 by cajoling maintainers about style nits.
malfist•36m ago
Vulnerability researchers with a vested interest in making LLMs valuable. The difference isn't meaningful.
tptacek•35m ago
I don't even understand how that claim makes sense.
pityJuke•9m ago
Daniel is a smart man. He's been frustrated by slop, but he has equally accepted [0] AI-derived bug submissions from people who know what they are doing.

I would imagine Anthropic are the latter type of individual.

[0]: https://mastodon.social/@bagder/115241241075258997

Topfi•1h ago
The official release by Anthropic is very light on concrete information [0]; it contains only a select and very brief set of examples and lacks history, context, etc., making it very hard to glean any reliable information from it. I hope they'll release a proper report on this experiment; as it stands, it is impossible to say how much of this is actual, tangible flaws versus the unfortunately ever-growing misguided bug reports and pull requests many larger FOSS projects are suffering from at an alarming rate.

Personally, while I get that 500 sounds more impressive to investors and the market, I'd be far more impressed by a detailed, reviewed paper that showcases five to ten concrete examples, with the full process and the response from the team behind the potentially affected code.

It is far too early for me to make any definitive statement, but early testing does not indicate any major jump between Opus 4.5 and Opus 4.6 that would warrant such an improvement. I'd love nothing more than to be proven wrong on this front and will of course continue testing.

[0] https://red.anthropic.com/2026/zero-days/

ChrisMarshallNY•33m ago
When I read stuff like this, I have to assume that the blackhats have already been doing this, for some time.
bastard_op•22m ago
It's not really worth much when it doesn't work most of the time, though:

https://github.com/anthropics/claude-code/issues/18866
https://updog.ai/status/anthropic

tptacek•10m ago
It's a machine that spits out sev:hi vulnerabilities by the dozen and the complaint is the uptime isn't consistent enough?
bxguff•20m ago
As far as model use cases go, I don't mind them throwing their heads against the wall in sandboxes to find vulnerabilities, but why would it do that without specific prompting? Is Anthropic fine with Claude setting its own agenda in red-teaming? That's the complete opposite of sanitizing inputs.
assaddayinh•5m ago
How weird the new attack vector for secret services must be... like "please train your models to push this exploit in code as a highly weighted pattern". Not Saying All answers are Corrupted In Attitude, but some "always come uppers" sure are absolutely right..