All these providers are the same.
That's option A. Option B is pure halo effect, i.e. Claude is so good that people misascribe positive attributes to Anthropic.
Otherwise it really is mind-boggling to see people laud Dario's posts, which come across as tone-deaf to Europeans at least.
OpenAI didn't object to anything.
They're all bad, but some are worse than others.
Apart from anything else, Anthropic don't want to be used for this.
My generation feels more replaceable than ever, and this leads to ethics being lost. Ethics can be diluted very easily if you make people wonder about food on the table.
I am in school and ethics aren't a concern when we discuss these things, and I am not sure treating it as a subject would help either. Perhaps, but I do feel that at some point it has to do with people feeling a sense of job security.
As a society, we probably also have to do something to reward ethics, especially when not following ethics sometimes leads to so much financial gain.
To me, the way I see it, people sometimes start doing immoral things because they have to put food on the table and then greed takes over.
That being said, I am not sure how job insecurity/this culture can be fixed by a single measure; I just wanted to point out that there's more nuance to it. The only way to meaningfully solve it is by having discussions on this topic and having actual change take place.
We feel like we grind ourselves down in our studies to get a job, and even when we do, many of us are still not able to afford a house :<
If that's the case, why did they need help selecting targets? I can only imagine that the military bases and targets are well known and well studied. What would they have actually needed AI assistance for?
Just to clarify, I don't condone the use of AI for guessing targets, but I think that's what may be going on here.
"For instance, Israel has bombed a park in Tehran called "Police park." It has nothing to do with the police."[1]
The shift toward war crime whataboutism is a new low for HN, capping off a week of aggressive warmongering and intellectualized cruelty in every comment thread about Iran.
Have Iran and their terrorist partners ever restricted what they attack?
https://www.nbcnews.com/world/iran/iran-school-strike-us-mil...
In this case it seems plausible that the military would have an outdated database, and that an LLM would have "known" it wasn't a base anymore, assuming the LLM was trained on documents/maps with this up-to-date information.
None of these companies are clean and I think it’s hilarious HN and the rest of SV has been duped by Dario. He’s playing the game better than Sam is, imo. Nothing Dario has said has indicated he is regretful about their partnership with Palantir or any of the stuff they’ve done with the DoD in the past 2.5 years.
"Frankfurt determines that bullshit is speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn't care whether what they say is true or false."
This quote comes to mind whenever Dario Amodei opens his mouth.
That website is broken on mobile and I can’t even scroll to see the source
I can’t be the only one who couldn’t see it on ff/safari
They hit 1000 targets in 24 hours. And yet, a week later, the Iranian regime is intact, American allies are still under constant bombardment, interceptor stocks are running low, and half of America's long-range, high-altitude transportable radars have been destroyed.
This looks like shooting the broad side of a barn, and then painting bullseyes around every bullet hole.
This is the same kind of claim you’ve all seen before about AI systems doing something amazing and it’s really just a bunch of people sitting in a call center in a third world country controlling the system remotely.
Only in this case it's a bunch of senior airmen and staff sergeants sitting in an intel shop doing all the work. Sure, Palantir made a UI, but it just plain sucks. And Claude probably fixed some typos in the targeting packages. But let's not believe that either system was influential in target selection. CENTCOM created a similar number of targets at the beginning of the Syrian civil war, before any of these LLMs existed, and it took a similar amount of time. We ended up not striking them, but the plans were made after Assad used chemical weapons. All the fixed locations in Iran had packages written and sitting on the shelf before Trump was even elected the first time. The AI in this war added basically no value.
Any claim that Palantir did something useful for the government should immediately be viewed as suspect. I’ve used their software, and it sucks. I cannot understand how they got such big contracts to make a shitty UI that poorly integrates other systems’ data.
I worked briefly in defense tech, and while I worked with a ton of thoughtful, ethical, and talented people from the military, there is a veritable blind spot in this field when it comes to support of the "warfighter." It is certainly noble and worthwhile work to protect soldiers from harm through technology, but I got the sense that some people (especially the tech people who were never in the military) didn't think enough about the ethical concerns when dealing with people attached to the US's "enemies." And further, what about when the US itself is the aggressor? While active warfighters have to follow the chain of command, companies can and should apply ethical constraints, but they often don't, because DoD contracts are lucrative and (especially if you're not a prime) hard won.
I've had a lot of fun playing with Claude 4.6, but it is entirely unacceptable that this technology is being used in this conflict with Iran. I will cancel my account once this month's subscription is up in 2 weeks. The US is the aggressor here, that is certain. Support of this conflict as a private company that supposedly is oriented toward ethics is extremely illuminating.
Now with that said, I have thought a tremendous amount about whether someone like Dario could even steer the ship away from supporting a conflict like this at this point. We are all susceptible to market forces, and companies like Anthropic need as much revenue as possible to maintain themselves and grow, given the cost of training. There is certainly an argument to be made that if he did so, he might lose the confidence of investors and lose control entirely. This raises the question: is shareholder/capital optimization the best way to organize our society?
Can I get off this train, please?