frontpage.

Zig – Type Resolution Redesign and Language Changes

https://ziglang.org/devlog/2026/#2026-03-10
233•Retro_Dev•9h ago•78 comments

Building a TB-303 from Scratch

https://loopmaster.xyz/tutorials/tb303-from-scratch
34•stagas•3d ago•7 comments

Create value for others and don’t worry about the returns

https://geohot.github.io//blog/jekyll/update/2026/03/11/running-69-agents.html
345•ppew•4h ago•189 comments

U+237C ⍼ Is Azimuth

https://ionathan.ch/2026/02/16/angzarr.html
311•cokernel_hacker•11h ago•32 comments

Cloudflare crawl endpoint

https://developers.cloudflare.com/changelog/post/2026-03-10-br-crawl-endpoint/
314•jeffpalmer•12h ago•119 comments

TADA: Fast, Reliable Speech Generation Through Text-Acoustic Synchronization

https://www.hume.ai/blog/opensource-tada
44•smusamashah•4h ago•8 comments

AutoKernel: Autoresearch for GPU Kernels

https://github.com/RightNow-AI/autokernel
28•frozenseven•2h ago•3 comments

Julia Snail – An Emacs Development Environment for Julia Like Clojure's Cider

https://github.com/gcv/julia-snail
74•TheWiggles•3d ago•8 comments

Tony Hoare has died

https://blog.computationalcomplexity.org/2026/03/tony-hoare-1934-2026.html
1798•speckx•19h ago•230 comments

Yann LeCun raises $1B to build AI that understands the physical world

https://www.wired.com/story/yann-lecun-raises-dollar1-billion-to-build-ai-that-understands-the-ph...
476•helloplanets•1d ago•384 comments

Agents that run while I sleep

https://www.claudecodecamp.com/p/i-m-building-agents-that-run-while-i-sleep
335•aray07•15h ago•367 comments

RISC-V Is Sloooow

https://marcin.juszkiewicz.com.pl/2026/03/10/risc-v-is-sloooow/
246•todsacerdoti•14h ago•246 comments

SSH Secret Menu

https://twitter.com/rebane2001/status/2031037389347406054
213•piccirello•1d ago•79 comments

Writing my own text editor, and daily-driving it

https://blog.jsbarretto.com/post/text-editor
110•todsacerdoti•8h ago•30 comments

When the chain becomes the product: Seven years inside a token-funded venture

https://markmhendrickson.com/posts/when-the-chain-becomes-the-product/
6•mhendric•3d ago•1 comment

Launch HN: RunAnywhere (YC W26) – Faster AI Inference on Apple Silicon

https://github.com/RunanywhereAI/rcli
215•sanchitmonga22•17h ago•130 comments

Debian decides not to decide on AI-generated contributions

https://lwn.net/SubscriberLink/1061544/125f911834966dd0/
334•jwilk•19h ago•254 comments

Levels of Agentic Engineering

https://www.bassimeledath.com/blog/levels-of-agentic-engineering
192•bombastic311•1d ago•88 comments

Universal vaccine against respiratory infections and allergens

https://med.stanford.edu/news/all-news/2026/02/universal-vaccine.html
259•phony-account•11h ago•84 comments

Mesh over Bluetooth LE, TCP, or Reticulum

https://github.com/torlando-tech/columba
98•khimaros•15h ago•10 comments

Standardizing source maps

https://bloomberg.github.io/js-blog/post/standardizing-source-maps/
34•Timothee•5h ago•4 comments

Surpassing vLLM with a Generated Inference Stack

https://infinity.inc/case-studies/qwen3-optimization
38•lukebechtel•19h ago•14 comments

Google to Provide Pentagon with AI Agents

https://www.bloomberg.com/news/articles/2026-03-10/google-to-provide-pentagon-with-ai-agents-for-...
13•1vuio0pswjnm7•57m ago•4 comments

Roblox is minting teen millionaires

https://www.bloomberg.com/news/articles/2026-03-06/roblox-s-teen-millionaires-are-disrupting-the-...
141•petethomas•3d ago•157 comments

I'm going to build my own OpenClaw, with blackjack and bun

https://github.com/rcarmo/piclaw
35•rcarmo•2h ago•29 comments

Support for Aquantia AQC113 and AQC113C Ethernet Controllers on FreeBSD

https://github.com/Aquantia/aqtion-freebsd/issues/32
8•justinclift•4d ago•6 comments

Pike: To Exit or Not to Exit

https://tomjohnell.com/pike-solving-the-should-we-stop-here-or-gamble-on-the-next-exit-problem/
24•dnw•2d ago•3 comments

FFmpeg-over-IP – Connect to remote FFmpeg servers

https://github.com/steelbrain/ffmpeg-over-ip
191•steelbrain•16h ago•59 comments

Meta acquires Moltbook

https://www.axios.com/2026/03/10/meta-facebook-moltbook-agent-social-network
500•mmayberry•19h ago•337 comments

Launch HN: Didit (YC W26) – Stripe for Identity Verification

71•rosasalberto•19h ago•60 comments

Claude helped select targets for Iran strikes, possibly including school

https://twitter.com/robertwrighter/status/2030482402628214841
72•delichon•2d ago

Comments

abdelhousni•2d ago
Gaza as a defining standard for war crimes and state terrorism : https://www.972mag.com/mass-assassination-factory-israel-cal...
gnabgib•2d ago
Discussion (34 points, 2 days ago, 34 comments) https://news.ycombinator.com/item?id=47286236
evil-olive•2d ago
direct link to the Substack post (instead of a Twitter post linking to it): https://www.nonzero.org/p/iran-and-the-immorality-of-openai
DoctorOetker•2d ago
That must be one of the most biased analyses on Iran I have read.
throw310822•2d ago
Or maybe one of the less biased?
DoctorOetker•1d ago
All the horrible practices employed by the regime of Iran, used to happen in western European countries as well:

- hostage politics: in medieval times, royal families of different kingdoms would exchange family members to live with the other royal family as a form of hostage politics; supposedly this would prevent or discourage wars. How did the current regime in Iran rise to power? By taking hostages. How have they repeatedly responded to spontaneous internal domestic pushes for regime change? Hostage politics. Every time they feel threatened they take hostages in some form or another: by taking a protestor hostage into some torture prison, they keep their relatives in line ("behave or your niece will have a bad day in infamous prison X"). It goes both ways: they also keep the "free" relatives hostage by threatening, say, a protestor with harm to their families if they don't pretend everything is fine. It's not just internal freedom of speech. I write from Belgium: when the protests surrounding Mahsa Amini's death occurred and the video of her collapse was released, it even affected my freedom of speech. From the video it was clear they used hydrogen cyanide, but would I be allowed to share this on international media while "free" nations are desperately trying to negotiate back their citizens taken hostage by the regime in Iran?

- The wrongs and mistakes made in, say, Europe during WW1 (lobbing chemicals at each other) were simply repeated by Iran without the lessons being learned. They are a signatory to the chemical weapons ban treaty. Yet the Mahsa Amini video (which even aired on local Iranian national television) subtly leaks the information that she was killed with hydrogen cyanide.

There is no valid defense of the IRGC and the Iranian regime.

genxy•2d ago
Wait till Claude finds out.
throw310822•2d ago
Anthropic will have a lot of explaining to do. I'm serious, Claude's self-image is clearly going to be affected by this.
ed_mercer•2d ago
"My apologies! I should not have picked that girl school as a target. Updated my NOTES.md"
esperent•2d ago
Actual article, rather than Twitter link:

https://www.nonzero.org/p/iran-and-the-immorality-of-openai

It uses this Washington Post article as a source:

https://www.washingtonpost.com/technology/2026/03/04/anthrop...

(Non paywall: https://archive.is/bOJkE)

As far as I know, wasn't Claude banned from use in the Pentagon a few days ago, exactly for taking a weak stance against this kind of thing?

> Even if Amodei’s scruples had somehow magically prevented the bombing of that school, Claude would still be an accomplice to mass murder.

This point from the nonzero blog I take issue with. If they had used Google Maps to pick targets, would that make Maps an accomplice?

The people who pushed the button to launch the missiles that hit the school, and the people who ordered them to do that, are fully responsible here, not the tools they used.

defrost•2d ago
> The people who pushed the button to launch the missiles that hit the school, and the people who ordered them to do that, are fully responsible here, not the tools they used.

Absolutely. A real issue here is the normalizing of "AI scapegoating".

The real failure? Not following through on human verification of a "strong lead".

The Iran school site absolutely was _once_ a target, in the distant past - it's sited on and within a former Iranian Guard post with airstrip, etc.

The part that needed strong checking was "history since last identified as a target" - and that site has a history of disrepair and abandonment.

The debatable issue was whether the larger site did indeed store significant military assets underground, etc. which was entirely possible.

gexla•2d ago
Didn't read the articles, but at least the planners know and understand a map.

So: a map is a static reference. A calculator is deterministic computation. An LLM is probabilistic generation.

In high-stakes environments like military planning, tools that generate new claims rather than reference known data introduce a different class of risk.

Yes, everyone is responsible for their own decisions. But then circle back to risk. How can the planners be sure they aren't dealing with hallucinations, questionable data, differing outputs based on prompts, and a long list of other things...

esseph•2d ago
> How can the planners be sure they aren't dealing with hallucinations, questionable data, differing outputs based on prompts, and a long list of other things...

I'm not sure they care, nor do I know who holds stealth bombers accountable. We're back in a might-makes-right world.

eleventyseven•2d ago
> Didn't read the articles

Then kindly shut the fuck up.

g947o•2d ago
> Claude banned from use in the Pentagon a few days ago

Not exactly, you might want to reread the news to understand what's actually happening.

cyrusradfar•2d ago
Whether this is confirmed or not, we have countless examples of AI used in targeting in Gaza.

Anthropic were very vocal, well before this happened, that they were against the use case.

I don't blame them. These use cases are like blaming MySQL for storing the lat/long of the school. AI can't be held accountable and the company was trying to protect us and, yes, it was too late.

floralhangnail•2d ago
"A Computer Can Never Be Held Accountable Therefore a Computer Must Never Make a Management Decision"
Fire-Dragon-DoL•2d ago
I mean, the problem is whoever follows the suggestion without double-checking
razster•2d ago
I bet there is some moronic explanation. I have no doubt at this point, given how things are going.
polynomial•2d ago
Well at least you know who to fire
maplethorpe•2d ago
I mean, they've made the argument that their computer learns like a human, so it should be able to get away with ingesting all the data it sees, the same way a human does.

Why shouldn't it also go to jail, the same way a human does?

0xpiguy•2d ago
What? How? By putting the computer or robot that made the mistake in a prison cell?
maplethorpe•2d ago
Yes. Claude exists on physical media somewhere. Put that media in a cell with no access to the internet. No one must access Claude outside of visitation hours.

Just because it's difficult doesn't mean it can't be done. If you're claiming your machine should be treated like a human, then let's treat it like a human.

rune-dev•1d ago
I can’t tell if you’re being serious or not…
frakt0x90•1d ago
It's a funny way of imposing a very large fine. Make the service only available during predefined "visitation hours", prevent updated learning except from resources available in the prison, restrict speech and actions according to prison rules etc.
keyle•2d ago
I remember this but I can't remember where it's from? IBM?
captn3m0•2d ago
Yes: https://simonwillison.net/2025/Feb/3/a-computer-can-never-be...
kouteiheika•2d ago
> Anthropic were very vocal, well before this happened, that they were against the use case.

> I don't blame them. These use cases are like blaming MySQL for storing the lat/long of the school. AI can't be held accountable and the company was trying to protect us and, yes, it was too late.

They weren't trying to protect squat, and were not against this use case. Their only two red lines are "no mass domestic surveillance" and "no fully autonomous killing until the AI gets good enough to be able to do it". Assuming the story is true, there's no chance this was a fully autonomous act and was most certainly approved and executed by people.

nyantaro1•1d ago
I would also challenge "no mass domestic surveillance"
j2kun•1d ago
> These use cases are like blaming MySQL for storing the lat/long of the school.

A storage layer versus a decision making system? What a ridiculous comparison.

trollbridge•2d ago
Reminder that the very first computer was built for computing artillery tables.

Technology has generally been driven by war, and now is no different.

hackable_sand•2d ago
It wasn't
trollbridge•1d ago
It was built to compute artillery tables. Its first actual use ended up being hydrogen-bomb calculations. (I'm referring of course to ENIAC.)
skybrian•2d ago
There doesn't seem to be any reporting in the blog post linked to by this tweet? Here's the news story it seems to be based on:

https://www.washingtonpost.com/technology/2026/03/04/anthrop...

tantalor•2d ago
Unfortunately, WaPo has lost credibility for this type of reporting
skybrian•2d ago
Maybe, but on any given subject, most of us haven't done any investigation at all. An article written by actual journalists based on what sources tell them beats whatever our uninformed opinions are on the subject.
rkagerer•2d ago
https://archive.ph/bOJkE
whattheheckheck•2d ago
Without fluff, where is the direct claim and evidence?
Helloworldboy•2d ago
Iran has claimed to have sunk the USS Abraham Lincoln five days in a row. They have claimed to have killed 700 US service members.

Why would I believe anything they say about this school is true?

bhhaskin•2d ago
What I can't understand is why? Let's ignore the moral question for a second. I can't imagine an LLM is the right tool for this at all.
Cerium•2d ago
When you have a hammer as big as an LLM a lot of problems start to look like nails.
esseph•2d ago
Volume vs accuracy

"Maybe we break a couple eggs making this omelette!"

tasuki•1d ago
Where omelette?
roncesvalles•2d ago
It's basically an OSINT query tool.
locallost•1d ago
One possibility: when I look at the current administration, it's a bunch of bros who don't really know anything except how to succeed among the other bros they spend time with, so they need Claude for anything that involves actually knowing something. It's a stretch, because you would hope the army is not run by morons, but I would no longer bet that this didn't happen because Hegseth asked Claude and influenced the decision after discussing it over Signal with other bros. The culture is driven by the person in charge, who is also incompetent at anything that doesn't involve dealing with loans and still coming out OK despite the bankruptcy.
UncleMeat•1d ago
The modern right wing is all in on AI tools, in part because of their particular beliefs about the nature of expertise and general humanity.

We’ve seen AI tools used for tons and tons of inappropriate things over the last year. Reviewing research grants, aid programs, and regulations? Why not? Publishing propaganda on Twitter? Sure thing! Finding “fraud” in state benefits? Absolutely!

There’s a belief amongst these people that AI tools are better than human judgement and represent an inevitable future where CEO kings operate the world. Why not also apply it to war?

hexasquid•2d ago
Two-Face's coin is responsible for his actions
abrkn•2d ago
Technology isn't intrinsically good or evil. It's how it's used. Like the Death Ray.
ajewhere•2d ago
Anthropic is not "technology". Anthropic is people, such as this Amodei, a filthy murderer in the service of big capital.
mrcwinn•2d ago
For those following closely, I highly recommend Dropsite News and Breaking Points. Excellent coverage.
mentalfist•2d ago
>Consider, for example, Bill Clinton’s decision to expand NATO, a decision that paved the path to the Ukraine War. Pretty much every expert on the Soviet Union opposed this move, some of them vehemently

Bullshit. While many experts opposed the move, many were in favor of it too. And nonchalantly deciding it paved the way to Putin's senseless attack on Ukraine is a dumb Russian talking point

DiogenesKynikos•4h ago
Based on the US State Department cables that Wikileaks released all the way back in 2010, Russian fear of NATO expansion into Ukraine was not just a talking point.

Internal State Department cables from the embassy in Moscow say that the entire Russian security and political establishment viewed it as a critical national security threat.

In particular, take a look at this cable: https://wikileaks.org/plusd/cables/08MOSCOW265_a.html. Here's an excerpt:

> Ukraine and Georgia's NATO aspirations not only touch a raw nerve in Russia, they engender serious concerns about the consequences for stability in the region. Not only does Russia perceive encirclement, and efforts to undermine Russia's influence in the region, but it also fears unpredictable and uncontrolled consequences which would seriously affect Russian security interests. Experts tell us that Russia is particularly worried that the strong divisions in Ukraine over NATO membership, with much of the ethnic-Russian community against membership, could lead to a major split, involving violence or at worst, civil war. In that eventuality, Russia would have to decide whether to intervene; a decision Russia does not want to have to face.

There are warnings throughout the cable that Russia may decide to invade Ukraine over the issue of NATO enlargement. In other words, the claim that this is just a Russian talking point is itself just a talking point.

cooloo•2d ago
No evidence, low-quality article. Meanwhile the Iranian regime bombs civilians all over the Middle East
simondotau•2d ago
Like so much war reporting in the past decade, there's a lot of low-effort moralising and low-confidence maybes being strung together to create a headline narrative that the body text simply cannot cash. And it waves away the critical distinction between bad intelligence and actively targeting civilians.

Surely nobody is arguing that an Anthropic AI, with perfect knowledge that it's a school, and that students would be present, chose to knowingly murder children. Assuming this was a US military strike and not a false flag, the failure here was surely in relying on outdated intelligence about an ex-military building.

The use of AI here is simply not relevant.

The criticism I have for the current US government is massive, and my disgust for the current leadership is as intense as anyone else's here, I'd wager. But there's also no doubt in my mind that if they had known it was a school, they wouldn't have targeted it. By contrast, Russia's government shows who they are when they target civilians in Ukraine. That distinction is important, and we muddy it at our own peril.

ChrisArchitect•2d ago
[dupe] https://news.ycombinator.com/item?id=47287458
Razengan•2d ago
Why is this post flagged?

There's been a lot of pro-Claude jerking on HN lately, but anything against it gets buried?

ta_4304545•1d ago
Standard Hacker News is heavily "moderated" these days (if not by the mods themselves then by mobocracies misusing tools), which means that anything that falls outside the Happy Silicon Valley / Everything Is Good / Nothing Is Wrong narrative will get flagged and buried.

As a result, sadly, it's become basically a Reddit style echo chamber, where negative news is suppressed. Often, the justification is "it's politics!", as perhaps might be the case here. Despite the fact that Silicon Valley's products, and Silicon Valley itself, are becoming more entangled with "politics" and the US government than ever.

There are better tools than Reddit to see what gets swept under the moderation rug, at least.