https://www.reuters.com/technology/palantir-faces-challenge-...
Launching into a generic rant about anti-AI people while missing sources and taking the Department of War at its word is just extremely poor journalism from the newspaper that destroyed evidence on orders from GCHQ.
I hope this is a single "journalist" and that the Guardian has not been bought.
> The distinction between Maven and Claude is futile
Doesn't make any sense at all when you read the article and understand what Claude actually does in this equation. From the article:
> Neither Claude nor any other LLMs detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.
The whole point here is that whether an LLM is involved or not is immaterial to the system as a whole, and it's a disservice to the public to focus on LLMs here.
This article is the first I have seen that mentions Claude in relation to this specific incident. There has been plenty of talk about AI use in warfare in general, but in the case of this school most of the coverage I have seen pointed to outdated intelligence and to procedures not being properly followed.
https://www.theguardian.com/technology/2026/mar/01/claude-an...
Edit: Also, https://www.washingtonpost.com/technology/2026/03/04/anthrop...
OK. The US probably also used telephones and Diet Coke.
Nothing cited said that Claude was selecting targets or informing target selection.
You, today, can use Claude in Amazon Bedrock, and the way that works, if you want it to be this way, is that the piece of code, the model weights, and whatever other artifacts are involved are run on Bedrock. Bedrock is not a facade in front of Claude's token-billed RESTful API, where Anthropic runs its own stuff. In the strictest sense, Bedrock can be used as a facade over lower-level Amazon services that obey non-engineering, real-world concerns: geographic and physical boundaries, which physical data center hardware is connected by what and where, jurisdictional boundaries, whatever. It's multi-tenancy in the sense that Amazon has multiple customers, but not in the sense of shared serving: because you are willing to pay for these requirements, Amazon has sorted out how to run the Claude model weights, as though it were an open-weights model you downloaded off Hugging Face, without giving you the weights, while satisfying all the IP, jurisdictional, and other non-technical requirements, in a way Anthropic has also agreed to.
This is what the dispute with the Pentagon is about, and what people mean when they say Claude is used in government (it is used in Elsa at the FDA, for example). Anthropic doesn't get telemetry, like the prompts, under this agreement, so they have a contract that says what you can and cannot use the model for, but they cannot prove how you use the model, which of course they can when you use their RESTful API service. They also can't "just" paraphrase your user data and train on it, like they do on the RESTful API service. There are reasons people want this arrangement ($$$).
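To make that concrete, here is a minimal sketch of what calling Claude through Bedrock looks like, assuming boto3 and a Bedrock-enabled AWS account; the region and model ID are illustrative, not taken from anything in the article. The point is that the request goes to an AWS endpoint, not to Anthropic:

    import boto3

    # A minimal sketch, assuming boto3 and a Bedrock-enabled account.
    # The request is signed with AWS credentials and served entirely by
    # AWS infrastructure in the region you pick; Anthropic's own API is
    # not in the request path and never sees the prompt or the response.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",  # illustrative
        messages=[{"role": "user",
                   "content": [{"text": "Summarise this report."}]}],
    )
    print(response["output"]["message"]["content"][0]["text"])

As I understand it, swap the region for a GovCloud partition and the same call runs on hardware inside that jurisdictional boundary, which is exactly the kind of non-engineering requirement described above.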
The vendor (Palantir) can use whatever model it wants, right? It chose Claude via "Bedrock." I don't know if they use Claude via Bedrock. Ask them. But that's what they are essentially saying, that's what this is about. Palantir could use Qwen3 and run it on datacenter hardware. Do you understand? It matters, but it also doesn't matter.
It's a bunch of red herrings in my opinion, and this sort of stuff being a red herring is what the article is mostly about.
https://www.washingtonpost.com/technology/2026/03/04/anthrop...
This unknown Guardian contributor writes a missive against "Luddites" while using the typical AI-booster arguments, which simply invert the anti-AI ones.
Just like two five year olds: "You have a big nose." "No, you have a big nose."
We learn from this clown that anti-AI people suffer from AI psychosis because they are reading WaPo and Reuters.
The key sentence in that Washington Post article appears to be:
> The Pentagon began to integrate Anthropic’s Claude chatbot into Maven in late 2024, according to public announcements.
As far as I can tell this is the public announcement - a press release from November 2024: https://www.businesswire.com/news/home/20241107699415/en/Ant...
> Anthropic and Palantir Technologies Inc. (NYSE: PLTR) today announced a partnership with Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to the Claude 3 and 3.5 family of models on AWS. This partnership allows for an integrated suite of technology to operationalize the use of Claude within Palantir’s AI Platform (AIP) while leveraging the security, agility, flexibility, and sustainability benefits provided by AWS.
https://www.972mag.com/lavender-ai-israeli-army-gaza/
We know that Maven integrated Claude and that Claude was deemed a supply chain risk just before the Iran war. So it is not a huge mental leap to assume what it is being used for.
You won't get an answer from Hegseth. This Guardian "article" is by a Substack blogger who also does not have answers.
The "supply chain risk" claims came from a deeply non-serious executive team who don't like "woke AI". They're not credible.
They've now burnt through almost ONE THOUSAND of those
They cost $4 million each, so that's another $4 BILLION that has to be replaced too
Imagine several more months of that or even through 2029
https://www.reuters.com/business/aerospace-defense/us-uses-h...
Unfortunately I can very well imagine several more months and years of this. We are still fighting a forever war that started in 2001. This is all a generation of Americans will know, and that is sad.
> 11,294 munitions in the first 16 days of the conflict, at a cost of approximately $26 billion.
Several detailed tables are in the link below.
https://www.rusi.org/explore-our-research/publications/comme...
IRGC is making claims that no other party can verify first-hand. Everything from the number of explosions, the extent of the physical damage, the number of wounded and dead, the number of civilians wounded and dead - these are all unverified claims and should be treated as such. Not only is the IRGC obviously biased and incentivized to maximize media pressure on the US and Israel: they are known for information warfare of exactly this nature. To take their statements at face value, and present them as established facts in the opening paragraph, as this article does, is journalistic malpractice.
Again, the basic facts on the ground are not known; yes, all parties are projecting narratives with a certainty that we should all be suspicious of.
Without this stable foundation of knowing what actually happened, and why, the very premise of this article collapses on itself.
EDIT: the flurry of responses to this post illustrates the problem. It's difficult to even have a respectful, fact-driven discussion on this topic, because everyone is tempted (and encouraged) to rush to their political battle stations. Nobody wants to discuss information warfare, because they're too busy engaging in it. I think that's worrying and problematic. No matter which "side" you're on, it should be possible to distinguish what is known from what is not, and to practice basic information hygiene. Or do you think you are uniquely immune to disinformation?
What the US has NOT confirmed:
- that they are responsible for the bombing
- who hit the school
- whether the school was an intended target of US strikes
- whether it was struck intentionally
- that it was mistaken for a military site
- any casualty count
- whether there were civilians or children in the casualty count
The US has explicitly DENIED:
- That they deliberately target civilian targets
These are the facts about what the US has actually confirmed. We are all entitled to our opinion of what happened. But we should be able to acknowledge that they are just that: opinions. We don't actually know what happened. And I find it scary and dangerous that so many people, on hacker news and elsewhere, are acting like they do.
Sources:
- https://www.war.gov/News/Transcripts/Transcript/Article/4421...
- https://www.war.gov/News/Transcripts/Transcript/Article/4434...
The US did NOT confirm that they are responsible for the bombing, or that children (or anyone) died as a result. This is a verifiable fact.
So, applying your own principle: the only thing you should treat as fact, is that there was an explosion at a school.
I feel like we know enough already. A school was bombed, the ones who did it suck big time and should be held responsible. Currently, the US and Israel are waging a war against Iran, and one of them dropped the bomb(s), unless Iran suddenly got its hands on American weapons, in which case that needs to be investigated too, because someone surely dropped the ball at that point.
The basics remain the same: investigations have to be launched to figure out where exactly in the chain of command someone made a mistake, and then hold that person (or persons) responsible for their fuck-up.
Have those investigations been launched?
We also don't know anything about casualties - we only have the IRGC statements, and they are not reliable.
> Have those investigations been launched?
Yes, according to the US government, an investigation is underway. But its starting point is determining what caused the explosion.
Anyone can look at the satellite images from the bombing and see how ridiculous whatever Iran was doing was.[1]
[1]https://npr.brightspotcdn.com/dims3/default/strip/false/crop...
This is not to say that this administration is definitely not targeting civilians or infrastructure on purpose; just that the end result, and the moral culpability, are the same in either case.
Would it be in poor taste to make a joke about Gradle being superior here? The dad in me really wants to make that joke...
----------------
Maven is a tool for use in the middle of a war. When both sides are firing, minutes saved can mean lives saved for your side. Those lives, at least partly, balance the risks of hitting a bad target.
This was not a strike made in the middle of a war. If Maven was used in the strike that took out a school, it was being used as part of a sneak attack. Nobody was shooting back while this was being planned. Minutes saved were not lives saved. There should have been a priority placed on getting the targets right. Humans should have been double and triple checking every target by other means. This clearly didn't happen. The school was obviously a school that even had its own website. Humans would have spotted this if they had done more than make their three clicks and move on to the next target.
Whoever made the choice to use Maven to plan a sneak attack without careful checking made an unforced error when they had all the time in the world to prevent it. Whether it was overconfidence in their tools or a complete disregard for the lives of civilians that caused this lapse, they are directly responsible for the deaths of those little girls. I sincerely hope there are (although I doubt there will be) consequences for this person beyond taking that guilt to their grave.
I don't disagree there. But this is not a case of hallucination, and an existing website is a signal, not a determinant, of the real situation on the ground. However, you have made a very, very strong assumption that these targets were not carefully evaluated. One that does not seem to be present in TFA or any analysis that I've read. In fact, the article itself quotes those in the know who believe this should have been eliminated as a target.
This is giving them too much credit.
Hegseth has already shown himself to entirely disregard the notion of a war crime, even by the US military's own already controversial standards. The double strike on the boats in the Caribbean is literally the textbook example in US military textbooks of what not to do, and of what constitutes a war crime.
This was no mistake. It was the obvious outcome of a pattern of reckless action.
What a ridiculous take. What does "originally was" mean? Maybe you wanna say "previously was"? That building was converted to a school 10 years ago! The intelligence they relied on is 10 years old!!!!! It's recklessness and stupidity dressed as bravery and courage.
> Humans should have been double and triple checking every target by other means.
How, practically, would this happen? The US/Israel don't want people on the ground, and people on the ground are exactly the only way you can actually verify stuff like this. Not every place in the world is on Google Maps or has a web presence at all, so the only realistic way to verify this would be to visually inspect it in person, something neither of the parties who started this war wants to do.
Even better, don't attack other sovereign nations that don't pose an immediate and critical threat to you, and this whole conflict could have been avoided in the first place.
But no, the president has to be involved in some sort of child-trafficking scheme, so pulling the country into a war seemed preferable to being held responsible, and now we're here, arguing about fucking details that don't matter.
This certainly doesn't absolve the person implementing those parameters, but it is equally the responsibility of the very top of the decision-making structure.
The main way targets should/would be selected is by direct intelligence, e.g. identifying targets through satellite or other observation. It's hard to imagine that a building that has operated for some length of time as a school would not show patterns from satellite that differ from those of military facilities. You also don't just randomly attack structures in this sort of surprise attack; you're presumably aiming for specific people or equipment with some priority/military goal in mind, so you really want to have observed the targets and their patterns and to have up-to-date information on their usage.
I think what likely happened here is that the entire base was the "unit" of targeting and the mistake was in identifying which buildings were part of the base. In the satellite view the military buildings and the school look very similar (since the building as I understand it used to be part of the base but was repurposed as a school).
It's not true that whoever made the error had all the time in the world. Presumably once the order was given there was time pressure given that the strike was to be timed with the other intelligence.
In theory the US military should have, and is supposed to have, good processes around this stuff. So we are told. Those processes obviously failed in this case. It is a tragedy.
This was a tragic disaster waiting to happen from the very start.
This isn't an "AI or not" issue at all.
This was a choice to use children as human shields, and a choice to make war on a foreign sovereign nation.
Let's suppose the US accurately bombed the center of the military base, and the explosion destroyed the adjacent school and killed the children inside. Would that change anything of import? I don't think so.
By your logic it's the federal government's fault those 3000 people died on 9/11, they were being used as human shields.
From a certain angle, the entire industrial and computer age looks like a massive effort to remove all responsibility for our actions, permanently.
It's still people doing people things.