
AI got the blame for the Iran school bombing. The truth is more worrying

https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying
108•cptroot•1h ago

Comments

jameskilton•1h ago
Something that a lot of tech people, especially in Silicon Valley, seem to want to forget is that at every level you still have people making decisions. AI is suggesting, but someone, somewhere, still has to make the decision to act on that suggestion.

It's still people doing people things.

idle_zealot•59m ago
The immediate concern isn't really fully autonomous systems, it's that the nature and design of recommender/suggestion systems prompt humans to sleepwalk through their responsibilities.
oceansky•15m ago
Which is already happening
pixl97•56m ago
https://en.wikipedia.org/wiki/Computer_says_no
ognav•58m ago
The Guardian carrying water for the AI industry. The distinction between Maven and Claude is futile. We get that Maven is Palantir, but it integrates Claude:

https://www.reuters.com/technology/palantir-faces-challenge-...

Launching into a generic rant about anti-AI people, after missing sources and taking the Department of War at its word, is just extremely poor journalism from the newspaper that destroyed evidence on orders from GCHQ.

I hope this is a single "journalist" and that the Guardian has not been bought.

CamperBob2•51m ago
Better than carrying water for people who blame inanimate tools for their own personal and professional failures.
phillipcarter•48m ago
I assume you actually read the article and didn't just post this after a quick skim, yes? Because saying this:

> The distinction between Maven and Claude is futile

Doesn't make any sense at all when you read the article and understand what Claude actually does in this equation. From the article:

> Neither Claude nor any other LLMs detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.

The whole point here is that whether an LLM is involved or not is immaterial to the system as a whole, and it's a disservice to the public to focus on LLMs here.

niam•40m ago
The article you're responding to is making specific operational claims about Claude's (basically non-) relevance. I'd be interested to hear if you're directionally correct, but forgive me if I need more details from your counterargument than "but it integrates Claude".
sailfast•35m ago
This is not a correct take at all given the contents of the article.
tunesmith•53m ago
Really fascinating article. Bits of bias here and there, like "The US military has been trying to close the gap between seeing something and destroying it for as long as that gap has existed" -- you can respond to seeing and understanding something without destroying it -- but it underscores, to me at least, how much denser the "fog of war" has become. The fog of media reporting in general. Those first few paragraphs felt like a breath of fresh air.
phillipcarter•46m ago
Worth mentioning that the author wrote about this first on his substack: https://artificialbureaucracy.substack.com/p/kill-chain
machinecontrol•46m ago
Interesting article. Seems like AI-washing isn't just for layoffs anymore.
glouwbug•26m ago
What AI does best is remove accountability and ownership
Lerc•42m ago
"the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target."

This article is the first I have seen mention of Claude in relation to this specific incident. There's been plenty of talk about AI use in warfare in general but in the case of this school most of the coverage I have seen suggested outdated information and procedures not properly followed.

FartyMcFarter•28m ago
It's definitely been reported before that Claude was used for Iran attacks, at the beginning of March or earlier:

https://www.theguardian.com/technology/2026/mar/01/claude-an...

Edit: Also, https://www.washingtonpost.com/technology/2026/03/04/anthrop...

gowld•25m ago
"The U.S. used Anthropic's Claude to support Operation Epic Fury against Iran yesterday, sources familiar with the Pentagon's operations tell Axios."

OK. The US probably also used telephones and Diet Coke.

Nothing cited said that Claude was selecting targets or informing target selection.

FartyMcFarter•22m ago
https://www.washingtonpost.com/technology/2026/03/04/anthrop...
doctorpangloss•20m ago
there is a lot of confusion about all this stuff

You can, today, use Claude in Amazon Bedrock, and if you want it to be this way, it works like this: the code, the model weights, and whatever other artifacts are involved run on Bedrock itself. Bedrock is not a facade over Claude's token-billed RESTful API, where Anthropic runs its own infrastructure. In the strictest sense, Bedrock can sit over lower-level Amazon services that obey non-engineering, real-world constraints: geographic and physical boundaries (which physical data center hardware is connected by what, and where) and jurisdictional boundaries. It's multi-tenancy in the sense that Amazon has multiple customers, but if you are willing to pay for these requirements, Amazon has sorted out how to run the Claude model weights for you much as if it were an open-weights model you downloaded off Hugging Face, without actually giving you the weights, while satisfying all the IP, jurisdictional, and other non-technical requirements, in a way that Anthropic has also agreed to.

This is what the dispute with the Pentagon is about, and what people mean when they say Claude is used in government (it is used in Elsa at the FDA, for example). Under this arrangement Anthropic doesn't get telemetry, like the prompts, so it has a contract that says what you can and cannot use the model for, but it cannot prove how you use the model, which of course it can when you use its RESTful API service. It can't "just" paraphrase your user data and train on it, as it does on the RESTful API service. There are reasons people want this arrangement ($$$).

The vendor (Palantir) can use whatever model it wants, right? It chose Claude via "Bedrock." I don't know whether they actually use Claude via Bedrock; ask them. But that's essentially what they are saying, and that's what this is about. Palantir could use Qwen3 and run it on its own datacenter hardware. Do you understand? It matters, but it also doesn't matter.

It's a bunch of red herrings in my opinion, and this sort of stuff being a red herring is what the article is mostly about.
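For concreteness, here is a minimal sketch of what "using Claude via Bedrock" looks like at the API level: the caller sends a JSON payload to AWS and never touches Anthropic's own API service. The request shape follows Bedrock's documented Anthropic messages format; the model id is illustrative, and nothing here reflects Palantir's or any government deployment.

```python
import json

# Illustrative model id; real deployments pick whatever Bedrock exposes to them.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"


def build_bedrock_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build the JSON body Bedrock expects for an Anthropic model.

    This is the whole interface from the caller's side: a payload handed
    to AWS infrastructure, with no traffic to Anthropic's own service.
    """
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


def invoke(prompt: str) -> str:
    """Send the request via the bedrock-runtime client (requires AWS credentials)."""
    import boto3  # assumed available; not exercised below

    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps(build_bedrock_request(prompt)),
    )
    return json.loads(resp["body"].read())["content"][0]["text"]


if __name__ == "__main__":
    print(json.dumps(build_bedrock_request("Summarise this report."), indent=2))
```

The point of the design, as described above, is that everything on Amazon's side can be pinned to physical and jurisdictional boundaries the customer pays for.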

kace91•18m ago
I have heard the claim everywhere.
rnab147•41m ago
WaPO writes that Claude selected targets:

https://www.washingtonpost.com/technology/2026/03/04/anthrop...

This unknown Guardian contributor writes a missive against "Luddites" while deploying the typical AI-booster arguments, which simply invert the anti-AI ones.

Just like two five-year-olds: "You have a big nose." "No, you have a big nose."

We learn from this clown that anti-AI people suffer from AI psychosis because they are reading WaPo and Reuters.

simonw•36m ago
Both the Washington Post and the Guardian articles agree that the system used here was Maven.

The key sentence in that Washington Post article appears to be:

> The Pentagon began to integrate Anthropic’s Claude chatbot into Maven in late 2024, according to public announcements.

As far as I can tell this is the public announcement - a press release from November 2024: https://www.businesswire.com/news/home/20241107699415/en/Ant...

> Anthropic and Palantir Technologies Inc. (NYSE: PLTR) today announced a partnership with Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to the Claude 3 and 3.5 family of models on AWS. This partnership allows for an integrated suite of technology to operationalize the use of Claude within Palantir’s AI Platform (AIP) while leveraging the security, agility, flexibility, and sustainability benefits provided by AWS.

491827-17182•26m ago
We know that Palantir used AI for target selection in Gaza:

https://www.972mag.com/lavender-ai-israeli-army-gaza/

We know that it integrated Claude and Claude was deemed to be a supply chain risk just before the Iran war. So it is not a huge mental leap to assume what it is being used for.

You won't get an answer from Hegseth. This Guardian "article" is by a Substack blogger who also does not have answers.

simonw•5m ago
That article you are quoting there is from April 2024. The Claude + Palantir deal was announced in November 2024.

The "supply chain risk" claims came from a deeply non-serious executive team who don't like "woke AI". They're not credible.

ck2•40m ago
You know how that was done with a Tomahawk

They've now burnt through almost ONE THOUSAND of those

They cost $4 million each, so that's another $4 BILLION that has to be replaced too

Imagine several more months of that or even through 2029

ceejayoz•35m ago
We'll run out long before 2029. The 850 fired so far is about a quarter of the entire supply.

https://www.reuters.com/business/aerospace-defense/us-uses-h...

sailfast•34m ago
So far, they are not funded to do this for that long. They have floated a $200B bill to congress, which made national news coverage. It would start a huge, prolonged fight over the war and actually force them to ask permission from congress to fight it (barring totally disregarding the constitution which is still a possibility).

Unfortunately I can very well imagine several more months and years of this. We are still fighting a forever war that started in 2001. This is all a generation of Americans will know, and that is sad.

tomasphan•27m ago
It’s a tale as old as time: start a war to support the military industrial complex. Imagine a $4 billion investment into public transportation or parks. Every 10 years we can invest into a new city instead of bombing some kids overseas (whose siblings, fueled by hatred, then commit terror attacks on the west).
O3marchnative•24m ago
The Royal United Services Institute (RUSI) has an updated tally on defensive and offensive munition expenditures. It's likely not 100% accurate due to the sensitive nature of those figures.

> 11,294 munitions in the first 16 days of the conflict, at a cost of approximately $26 billion.

Several detailed tables are in the link below.

https://www.rusi.org/explore-our-research/publications/comme...

shykes•33m ago
You can't have a serious discussion of this bombing without addressing the information warfare component. To this day we don't know what actually happened. Between the general public and the facts, there are many middlemen, all with their own distorting factor: the IRGC; the US government; western press outlets such as the Guardian; and the people quoted by the press.

IRGC is making claims that no other party can verify first-hand. Everything from the number of explosions, the extent of the physical damage, the number of wounded and dead, the number of civilians wounded and dead - these are all unverified claims and should be treated as such. Not only is the IRGC obviously biased and incentivized to maximize media pressure on the US and Israel: they are known for information warfare of exactly this nature. To take their statements at face value, and present them as established facts in the opening paragraph, as this article does, is journalistic malpractice.

Again, the basic facts on the ground are not known, yet all parties are projecting narratives with a certainty that we should all be suspicious of.

Without this stable foundation of knowing what actually happened, and why, the very premise of this article collapses on itself.

EDIT: the flurry of responses to this post illustrates the problem. It's difficult to even have a respectful, fact-driven discussion on this topic, because everyone is tempted (and encouraged) to rush to their political battle stations. Nobody wants to discuss information warfare, because they're too busy engaging in it. I think that's worrying and problematic. No matter which "side" you're on, it should be possible to distinguish what is known from what is not and to practice basic information hygiene. Or do you think you are uniquely immune to disinformation?

20k•31m ago
Everyone acknowledges that the US killed a whole bunch of kids, including the US
shykes•13m ago
This is incorrect. The US government (via Secretary Hegseth) has only confirmed that they are investigating the incident.

What the US has NOT confirmed:

- that they are responsible for the bombing
- who hit the school
- whether the school was an intended target of US strikes
- whether it was struck intentionally
- that it was mistaken for a military site
- any casualty count
- whether there were civilians or children in the casualty count

The US has explicitly DENIED:

- That they deliberately target civilian targets

These are the facts about what the US has actually confirmed. We are all entitled to our opinion of what happened. But we should be able to acknowledge that they are just that: opinions. We don't actually know what happened. And I find it scary and dangerous that so many people, on hacker news and elsewhere, are acting like they do.

Sources:

- https://www.war.gov/News/Transcripts/Transcript/Article/4421...

- https://www.war.gov/News/Transcripts/Transcript/Article/4434...

dede2026•29m ago
Holy gaslighting bootlicking
ElevenLathe•28m ago
I think it's fair to treat things that the Trump administration and the Iranian military agree on as facts. If there were distortions that favored one side, we would see pushback from the other. Maybe there are distortions that somehow benefit both of these parties, but it seems unlikely. At minimum, then, this was a school, the Americans bombed it, and children died as a result.
shykes•9m ago
No. The only thing that the US government and IRGC agree on, at the moment, is that there was an explosion at the site of the school.

The US did NOT confirm that they are responsible for the bombing, or that children (or anyone) died as a result. This is a verifiable fact.

So, applying your own principle: the only thing you should treat as fact, is that there was an explosion at a school.

embedding-shape•27m ago
> To this day we don't know what actually happened.

I feel like we know enough already. A school was bombed; the ones who did it suck big time and should be held responsible. Currently, the US and Israel are waging a war against Iran, and one of them dropped the bomb(s). Unless Iran suddenly got its hands on American weapons, in which case that needs to be investigated too, because someone surely dropped the ball at that point.

The basics remain the same: investigations have to be launched to figure out where exactly in the chain of command someone made a mistake, and then the person(s) responsible have to be held accountable for their fuck-up.

Have those investigations been launched?

shykes•5m ago
I think it's likely that the explosion was caused by a US strike. But we don't actually know for sure that that's what happened - the US government has not confirmed it.

We also don't know anything about casualties - we only have the IRGC statements, and they are not reliable.

> Have those investigations been launched?

Yes, according to the US government, an investigation is underway. But its starting point is determining what caused the explosion.

applfanboysbgon•26m ago
You are the one engaging in "information warfare", intentionally trying to spread doubt about an event that was confirmed by both Iran and US. What does it feel like to deny the murder of 150+ children out of nationalistic pride? Do you simply have no conscience? No sense of guilt, no concept of morality?
shykes•11m ago
The US government (via Secretary Hegseth) has only confirmed that they are investigating the incident.

What the US has NOT confirmed:

- that they are responsible for the bombing
- who hit the school
- whether the school was an intended target of US strikes
- whether it was struck intentionally
- that it was mistaken for a military site
- any casualty count
- whether there were civilians or children in the casualty count

The US has explicitly DENIED:

- That they deliberately target civilian targets

These are the facts about what the US has actually confirmed. We are all entitled to our opinion of what happened. But we should be able to acknowledge that they are just that: opinions. We don't actually know what happened. And I find it scary and dangerous that so many people, on hacker news and elsewhere, are acting like they do.

Sources:

- https://www.war.gov/News/Transcripts/Transcript/Article/4421...

- https://www.war.gov/News/Transcripts/Transcript/Article/4434...

WarmWash•9m ago
I feel like an intellectual god to have been gifted the brain power to recognize that 150 kids being killed is an awful tragedy, and that converting a building on a military base to a school is recklessly stupid and borderline purposely done as a trap. It's like letting your child play in the road at night, and then being upset when a drunk driver hits them.

Anyone can look at the satellite images from the bombing and see how ridiculous whatever Iran was doing was.[1]

[1]https://npr.brightspotcdn.com/dims3/default/strip/false/crop...

nahuel0x•31m ago
Israel and the US are bombing lots of schools and hospitals and civilian infrastructure, this is not the only case. This is intentional genocide, not a software/organizational/human error.
lukifer•28m ago
Sufficiently advanced negligence is indistinguishable from malice.

This is not to say that this administration is definitely not targeting civilians or infrastructure on purpose; just that the end result, and the moral culpability, are the same in either case.

amarant•31m ago
>The targeting for Operation Epic Fury ran on a system called Maven. Nobody was arguing about Maven.

Would it be in poor taste to make a joke about Gradle being superior here? The dad in me really wants to make that joke...

20after4•15m ago
Replacing one Java tool with another doesn't solve anyone's problems. If they'd only used Rust then lives would have been saved.
amarant•13m ago
Meh, that sounds like a cargo-cult to me ;)
beloch•30m ago
"Three clicks convert a data point on the map into a formal detection and move it into a targeting pipeline. These targets then move through columns representing different decision-making processes and rules of engagement. The system recommends how to strike each target – which aircraft, drone or missile to use, which weapon to pair with it – what the military calls a “course of action”. The officer selects from the ranked options, and the system, depending on who is using it, either sends the target package to an officer for approval or moves it to execution."

----------------

Maven is a tool for use in the middle of a war. When both sides are firing, minutes saved can mean lives saved for your side. Those lives, at least partly, balance the risks of hitting a bad target.

This was not a strike made in the middle of a war. If Maven was used in the strike that took out a school, it was being used as part of a sneak attack. Nobody was shooting back while this was being planned. Minutes saved were not lives saved. There should have been a priority placed on getting the targets right. Humans should have been double- and triple-checking every target by other means. This clearly didn't happen. The target was obviously a school; it even had its own website. Humans would have spotted this if they had done more than make their three clicks and move on to the next target.

Whoever made the choice to use Maven to plan a sneak attack without careful checking made an unforced error when they had all the time in the world to prevent it. Whether it was overconfidence in their tools or a complete disregard for the lives of civilians that caused this lapse, they are directly responsible for the deaths of those little girls. I sincerely hope there are (although I doubt there will be) consequences for this person beyond taking that guilt to their grave.
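Abstractly, the three-click flow the article describes is a ranked-recommendation pipeline with a single human approval gate. A toy sketch (all names hypothetical; this models only the interaction pattern, not any real system) shows how easily that gate degrades into a rubber stamp:

```python
from dataclasses import dataclass, field


@dataclass
class Target:
    """Toy stand-in for a 'target package' moving through review columns."""
    name: str
    confidence: float              # system's confidence it is a valid target
    history: list = field(default_factory=list)


def rank_courses_of_action(target: Target) -> list[str]:
    """The system proposes ranked options (placeholder ranking)."""
    target.history.append("ranked")
    return ["option-A", "option-B", "option-C"]


def advance(target: Target, approver) -> str:
    """A human must explicitly approve or reject each package.

    Defaulting to the top-ranked option without real review is exactly
    the 'sleepwalking through responsibilities' failure mode discussed
    elsewhere in this thread.
    """
    options = rank_courses_of_action(target)
    choice = approver(target, options)   # the human decision point
    if choice is None:
        target.history.append("rejected")
        return "rejected"
    target.history.append(f"approved:{choice}")
    return choice


# A careless approver that rubber-stamps the first suggestion:
rubber_stamp = lambda target, options: options[0]
```

The structural point: the system's accuracy is irrelevant if every `approver` behaves like `rubber_stamp`, because the human gate then adds latency without adding judgment.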

jvanderbot•24m ago
I recommend looking closely at the New York Times analysis. There were factors that might have mitigated this as a strike target, but it also really did look like a part of the compound (and it originally was!). Yes, with hindsight, we can definitively know, and with sufficient time each target could probably have been positively ID'd, but there was precisely one mis-strike in 1000s of sorties, so this already is a low error rate. TFA discusses 50 specific strikes all of which missed via automated analysis. That doesn't seem the same.

I don't disagree there. But this is not a case of hallucination, and an existing website is a signal, not a determinant, of the real situation on the ground. However, you have made a very, very strong assumption that these targets were not carefully evaluated. One that does not seem to be present in TFA or any analysis that I've read. In fact, the article itself quotes those in the know who believe this should have been eliminated as a target.

SlinkyOnStairs•13m ago
> Yes, with hindsight, we can definitively know, and with sufficient time each target could probably have been positively ID'd, but there was precisely one mis-strike in 1000s of sorties, so this already is a low error rate.

This is giving them too much credit.

Hegseth has already shown himself to entirely disregard the notion of a war crime, even by the US military's own already controversial standards. The double strike on the boats in the Caribbean is literally the textbook example, in US military textbooks, of what not to do, and it is a war crime.

This was no mistake. It was the obvious outcome of a pattern of reckless action.

dwa3592•3m ago
>>I recommend looking closely at the New York Times analysis. There were factors that might have mitigated this as a strike target, but it also really did look like a part of the compound (and it originally was!).

What a ridiculous take. What does "originally was" mean? Maybe you wanna say "previously was"? That building was converted to a school 10 years ago! The intelligence they relied on is 10 years old!!!!! It's recklessness and stupidity dressed as bravery and courage.

embedding-shape•23m ago
I agree with your overall sentiment, but how realistic is it? Israel/US says they've been hitting thousands of targets (so reality might mean ~hundreds, still a lot), how are they supposed to verify this at all?

> Humans should have been double and triple checking every target by other means.

How practically would this happen? The US/Israel don't want people on the ground, and people on the ground are exactly the only way you can actually verify stuff like this. Not every place in the world is on Google Maps or has a web presence at all, so the only realistic way to verify this would be to visually inspect it in person, something neither of the parties who started this war wants to do.

Even better, don't make attacks against other sovereign nations that don't pose an immediate and critical threat to you, and this whole conflict could have been avoided in the first place.

But no, the president has to be involved in some sort of child-trafficking scheme, so pulling the country into a war seemed preferable to being held responsible, and now we're here, arguing about fucking details that don't matter.

ok_dad•19m ago
In this case, they would have discovered it was a school with a Google search, basically. There’s no excuse.
Tostino•17m ago
Or the vast satellite network we run. Pretty easy to see it's school children going in and out of the area.
jdross•16m ago
I'm pretty sure this is the school that was on the corner of a military base, and the school building hit was previously part of the military base.
jmye•6m ago
Does that make it not a school, somehow? Or are we cool with killing kids just because their parents might be in the military? I'm not clear what the excuse being made actually is.
free_bip•13m ago
The school literally had its own website. If the AI involved was as smart as the media hype machine makes them out to be, it would have found the website and marked it as a non-target. It never even would have made it to human review.
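The kind of check being imagined here, cross-referencing a candidate against open web sources, is trivial to sketch. The keyword list is hypothetical, and as other comments in this thread note, absence of a web presence proves nothing, so such a lookup can only ever raise a flag for extra human review, never clear a target:

```python
# Toy illustration of an open-source cross-check: given text snippets
# returned by some (unspecified) web search, flag likely protected
# civilian sites. The keyword list is hypothetical and far from exhaustive.
PROTECTED_KEYWORDS = {"school", "hospital", "clinic", "kindergarten", "orphanage"}


def flag_protected_site(search_snippets: list[str]) -> bool:
    """Return True if any snippet suggests a protected civilian site.

    A positive hit should force extra human review; a negative result
    proves nothing, since many sites have no web presence at all.
    """
    text = " ".join(search_snippets).lower()
    return any(keyword in text for keyword in PROTECTED_KEYWORDS)
```

This is a signal, not a determinant, which is also why "the school had a website" cuts both ways in this thread: it makes the miss look careless, but it could never have been the sole basis for clearing or striking anything.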
btown•11m ago
I agree with everything you said - but it's also the case that a set of parameters were created that, instead of requiring multi-person validation of target validity and provenance, prioritized speed to provide decision makers with options.

This certainly doesn't absolve the person implementing those parameters, but it is equally the responsibility of the very top of the decision-making structure.

YZF•6m ago
I couldn't find a web site for the school when I searched for one and I also noticed that while schools are generally marked on Google Maps in Iran this school was not. Both are IMO not really relevant or reliable sources of targeting data anyways. I found very little evidence searching online for the school but I did find something that looked like a blog about a school trip. Again though the Internet is not a reliable source of data for targeting - should be obvious.

The main way targets should/would be selected is by direct intelligence, e.g. targets identified through satellite or other observations. It's hard to imagine that a building that has operated for some length of time as a school would not show satellite-visible patterns distinct from those of military facilities. You also don't just randomly attack structures in this sort of surprise attack: you're presumably aiming for specific people or equipment with some priority or military goal in mind, so you really want to have observed the targets and their patterns and to have up-to-date information on their usage.

I think what likely happened here is that the entire base was the "unit" of targeting and the mistake was in identifying which buildings were part of the base. In the satellite view the military buildings and the school look very similar (since the building as I understand it used to be part of the base but was repurposed as a school).

It's not true that whoever made the error had all the time in the world. Presumably once the order was given there was time pressure given that the strike was to be timed with the other intelligence.

In theory the US military should/is supposed to have good processes around this stuff. So we are told. Obviously failed in this case. It is a tragedy.

gowld•28m ago
A portion of a military base was converted to a school.

This was a tragic disaster waiting to happen from the very start.

This isn't an "AI or not" issue at all.

This was a choice to use children as human shields, and a choice to make war on a foreign sovereign nation.

Let's suppose the US accurately bombed the center of the military base, and the explosion destroyed the adjacent school and killed the children inside. Would that change anything of import? I don't think so.

Ylpertnodi•23m ago
American bases in Europe have schools on them. Fair targets?
netsharc•17m ago
The WTC complex had defense department offices: https://www.govexec.com/federal-news/2001/09/federal-agencie...

By your logic it's the federal government's fault those 3000 people died on 9/11, they were being used as human shields.

paganel•17m ago
They (the Americans) should have also marked the schools on said military maps of theirs, and hence could have made a value judgment of "is it worth killing some IRGC men in the middle of nowhere vs. the international backlash of killing school-going children?" It looks like they most probably didn't do that, probably because their "advanced" AI systems didn't bother marking schools on their military maps.
burnte•23m ago
When AI gets something wrong, it's the operator's fault, IMO.
keiferski•8m ago
Before it was the gods, then God, then Nature, and now AI. Human beings really have a fundamental issue with accepting responsibility for their actions.

From a certain angle, the entire industrial and computer age looks like a massive effort to remove all responsibility for our actions, permanently.
