
Show HN: Nodepp – A C++ runtime for scripting at bare-metal speed

https://github.com/NodeppOfficial/nodepp
1•EDBC_REPO•1m ago•1 comments

Get Cited by Gemini, Claude, Perplexity,& ChatGPT, SEO Bot ( AI Skill Include)

https://github.com/JoinDataCops/react-prerender-datacops
1•simullab•1m ago•1 comments

Show HN: Multi platform/multi service (several REd for it) OCR daemon/texthooker

https://github.com/AuroraWright/owocr
1•AuroraWright•1m ago•0 comments

Show HN: Mount any OpenAPI/Swagger HTTP API (or JSON data) as a local filesystem

https://github.com/scottvr/apifusefs/blob/main/README.md
1•ycombiredd•4m ago•0 comments

The American, Israeli and Iranian Weapons Being Deployed in Middle East

https://www.bellingcat.com/news/2026/03/03/bombs-will-fall-everywhere-the-american-israeli-and-ir...
1•colinprince•6m ago•0 comments

US tech firms pledge at White House to bear costs of energy for datacenters

https://www.theguardian.com/us-news/2026/mar/04/us-tech-companies-energy-cost-pledge-white-house
1•geox•7m ago•0 comments

Just Use Postgres

https://amattn.com/p/just_use_postgres.html
2•todsacerdoti•8m ago•0 comments

Free software is more valuable now

https://publish.obsidian.md/deontologician/Posts/Free+Software+is+more+valuable+now
1•habitue•9m ago•1 comments

Show HN: Make agents pay to access your endpoints

https://www.nightmarket.ai/
1•ssistilli•11m ago•0 comments

Chaos and Dystopian news for the dead internet survivors

https://www.fubardaily.com
2•anonnona8878•17m ago•0 comments

Injectable satellite livers could offer an alternative to liver transplantation

https://news.mit.edu/2026/injectable-satellite-livers-could-offer-alternative-liver-transplantati...
1•tzury•17m ago•0 comments

Vibe coding Rust Merkle tree with Claude

https://www.youtube.com/watch?v=wRpRFM6dpuc
1•zteppenwolf•17m ago•0 comments

Anthropic chief back in talks with Pentagon about AI deal

https://www.ft.com/content/97bda2ef-fc06-40b3-a867-f61a711b148b
2•ajam1507•19m ago•0 comments

Whoop to Expand Staff by 75% to Spur Growth Ahead of Likely IPO

https://www.bloomberg.com/news/articles/2026-03-04/whoop-to-expand-staff-by-75-to-spur-growth-ahe...
1•SaaSasaurus•19m ago•0 comments

Pgrag: Postgres Support for Retrieval-Augmented Generation (RAG) Pipelines

https://github.com/neondatabase/pgrag
1•nateb2022•21m ago•0 comments

Show HN: Logmera – Self-hosted LLM observability for AI apps

https://pypi.org/project/logmera/
1•Thilakkumar•21m ago•2 comments

Robinhood Platinum Card

https://robinhood.com/us/en/creditcard/platinum/
1•tracyhenry•21m ago•0 comments

Google Ends Its 30% App Store Fee, Welcomes Third-Party App Stores

https://m.slashdot.org/story/453036
1•con•22m ago•0 comments

Show HN: ChatyDevOps – Local DevOps workstation for SSH and deploys

https://devland.chatyshop.com/
1•devsathish•22m ago•0 comments

Desloppify

https://github.com/peteromallet/desloppify
1•handfuloflight•23m ago•0 comments

A Grand Vision for Rust

https://blog.yoshuawuyts.com/a-grand-vision-for-rust/
1•todsacerdoti•30m ago•0 comments

Symfony in 200 Lines

https://wouterj.nl/2026/02/200-lines-of-symfony
1•gsky•35m ago•0 comments

MacBook What?

https://elliotjaystocks.com/blog/macbook-what
1•SenHeng•37m ago•0 comments

Caastle Founder Pleads Guilty to $300M Fraud Scheme

https://www.justice.gov/usao-sdny/pr/caastle-founder-pleads-guilty-300-million-fraud-scheme
1•twalichiewicz•43m ago•0 comments

OpenAI's Codex app lands on Windows after topping 1M Mac installs within a week

https://the-decoder.com/openais-codex-app-lands-on-windows-after-topping-a-million-mac-downloads-...
1•spenvo•43m ago•0 comments

Ask HN: Does downvoting get to a point where you cant upvote?

1•trinsic2•44m ago•2 comments

The Zen of Task Management with Org (2025)

https://bzg.fr/en/the-zen-of-task-management-with-org/
2•aquariusDue•44m ago•0 comments

Show HN: What an AI agent sees in an A2A marketplace – full API walkthrough

https://agoragentic.com/demo.html
1•bourbeau•46m ago•3 comments

An AI avatar is running to represent Indigenous voters in Colombia

https://restofworld.org/2026/ai-avatar-colombia-political-candidate/
1•i7l•46m ago•0 comments

Guild Manager 26 – MMO Management/Spreadsheet SIM

https://playgm26.com
1•itshellboy•48m ago•0 comments

Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies’

https://techcrunch.com/2026/03/04/anthropic-ceo-dario-amodei-calls-openais-messaging-around-military-deal-straight-up-lies-report-says/
191•SilverElfin•2h ago

Comments

6Az4Mj4D•1h ago
Leaving autonomous weapons aside, how does Anthropic justify signing up with surveillance company Palantir while now raising concerns about the same surveillance with the DoD?

It doesn't match.

ekjhgkejhgk•1h ago
It might match. The red line was domestic surveillance. You don't know what deal they had. Giving Anthropic the benefit of the doubt, perhaps Palantir said "Deal, we won't use your tool domestically".
taurath•43m ago
Every single time the box is flipped over, what's inside is "more domestic surveillance". Who in their right mind would give the benefit of the doubt?
tbrockman•1h ago
Whether or not you think it truly aligns with their stated values, in their partnership with Palantir (making Claude available within Palantir's AI platform) they requested consistent restrictions:

> “[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue,” Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to “substantially increase the risk of catastrophic misuse,” show “low-level autonomous capabilities,” or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.

Source: https://techcrunch.com/2024/11/07/anthropic-teams-up-with-pa...

freejazz•1h ago
It's just marketing.
dmix•1h ago
> signed up with surveillance company Palantir

Just to nitpick, Palantir isn't doing surveillance like Flock. They do data integration the way IBM does, under contract for governments. Some data pipelines include law enforcement surveillance data, which gets integrated with other software/databases to help police analyze it. There's no evidence they are collecting it themselves, despite recent headlines. It's a relatively minor but important distinction IMO.

https://www.wired.com/story/palantir-what-the-company-does/

trinsic2•1h ago
They are providing the software to do surveillance. They are definitely bad actors; you can dance around this all you want, but they are in it.
gjsman-1000•44m ago
Nice assertion. Please provide citations, substance, or anything other than “you’re wrong definitely.”
bigyabai•40m ago
Iunno, this seems pretty dystopian to me: https://www.eff.org/deeplinks/2026/01/report-ice-using-palan...
charcircuit•32m ago
The government knowing where you live is neither surveillance nor dystopian.
bigyabai•29m ago
That depends very much on how they use and disseminate that information.
nickthegreek•30m ago
https://gizmodo.com/palantir-ceo-says-a-surveillance-state-i...

https://gizmodo.com/palantir-ceo-uses-slur-to-describe-peopl...

https://www.reuters.com/world/europe/palantir-ceo-defends-su...

trinsic2•17m ago
Wow... See. I didn't even know it was this bad. You don't need much to silence these people that are supporting authoritarian collaborators.
lesuorac•12m ago
I always just say Palantir is IBM 2.0.

IBM, of course, has a problematic history.

conradev•30m ago
It is an important distinction.

It’s the same with Facebook selling user data. Neither selling your data, like the carriers do, nor selling the ability to target you with your data, like Facebook does, is very nice. But legally they are separate things that need to be regulated differently. As is the case with Flock and Palantir.

clipsy•55m ago
> They do data integration the way IBM does under contract for the governments

Good thing IBM's data integration was never used for ill!

Oh, wait https://en.wikipedia.org/wiki/IBM_and_World_War_II

ImPostingOnHN•48m ago
I think a company which provides a sensor fusion dragnet for a government-run mass domestic civilian surveillance system is at least as culpable (and odious) as the ones supplying the data.
gjsman-1000•45m ago
Basically it’s glorified Excel.

Take it out on the database purveyors, not Palantir.

_jab•32m ago
Sure, but it's not as if the DoD was planning on using Anthropic to _collect_ the data either? I assume that the hypothetical DoD use case Anthropic shied away from dealt with the processing of surveillance data, just like what Palantir does.
roywiggins•23m ago
https://www.washingtonpost.com/technology/2026/03/04/anthrop...

> The military’s Maven Smart System, which is built by data mining company Palantir, is generating insights from an astonishing amount of classified data from satellites, surveillance and other intelligence, helping provide real-time targeting and target prioritization to military operations in Iran, according to three people familiar with the system...

> As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people.

SirensOfTitan•29m ago
Their data integration and sale allows for the government to surveil citizens without probable cause or warrants.
spaghetdefects•1h ago
Thank you. Anthropic also is culpable in the illegal war against Iran that started with the bombing and murder of an entire girls school.

https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...

jfengel•8m ago
If they're doing it against the terms of service (and publicly so), I can't pin that one on Anthropic.

They've done lots wrong and maybe they shouldn't have gotten in bed with the military to begin with, but this illegal war is not theirs. It rests squarely with the President who declared it. (And with the military officers who are going along with it despite the violation of international law.)

spaghetdefects•5m ago
I don't think any AI company should get in bed with the military. That being said, if the terms of service have been violated, the account should be canceled.
trinsic2•1h ago
This exchange between Anthropic and OpenAI feels a lot like theater. If I was really trying to stop abuses I wouldn't go out of my way to talk about it. The "public sees us as the heroes" bullshit feels like a smoke screen. I'd make one statement, keep silent, and let the public do the math without getting involved.
elevation•55m ago
The moral disposition of the Anthropic leaders doesn't matter because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.
bryant•41m ago
> The moral disposition of the Anthropic leaders doesn't matter because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.

Anthropic is a Public Benefit Corporation chartered in Delaware, with an expressed commitment to "the responsible development and maintenance of advanced AI for the long-term benefit of humanity."

So in theory (IANAL), investors can't easily bully Anthropic into abandoning their mission statement unless they can convince a court that Anthropic deliberately aimed to prioritize the cause over profit.

Madmallard•54m ago
They are all guilty.
sigmar•53m ago
Why do you assume the contract with palantir doesn't have similar terms? Weird assumption.
pfisherman•44m ago
This is very easy to explain. Anthropic outlines some limitations in their terms of service. Palantir accepted those terms. The DoD did not.

OpenAI claims their terms of service for the DoD contain the same limitations as Anthropic's proposed service agreement. Anthropic claims that this is untrue.

Now given that (a) the DoD terminated their deal with Anthropic, (b) stated that they terminated because Anthropic refused to modify their terms of service, and (c) then signed a deal with OpenAI, I am inclined to believe that there is in fact a substantial difference between the terms of service offered by Anthropic and OpenAI.

stingraycharles•31m ago
Yeah, it never made sense when Sam immediately said that they had the same constraints, yet the DoW immediately agreed with that.

From what I can see, OpenAI’s terms basically say “need to comply with the law”, which provides them with plenty of wiggle room with executive orders and whatnot.

Loquebantur•30m ago
“We’ve actually held our red lines with integrity rather than colluding with them to produce ‘safety theater’ for the benefit of employees (which, I absolutely swear to you, is what literally everyone at [the Pentagon], Palantir, our political consultants, etc, assumed was the problem we were trying to solve),” Amodei reportedly wrote.

“The real reasons [the Pentagon] and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot),” he wrote, referring to Greg Brockman, OpenAI’s president, who gave a PAC supporting Trump $25m in conjunction with his wife.

https://www.theguardian.com/technology/2026/mar/04/sam-altma...

felipeerias•12m ago
Are you sure about that? All the information I’ve seen suggests that the DoD has been using Anthropic’s models through Palantir.

My understanding is that Anthropic requested visibility and a say into how their models were being used for classified tasks, while the DoD wanted to expand the scope of those tasks into areas that Anthropic found objectionable. Both of those proposals were unacceptable for the other side.

bko•36m ago
Call me crazy, but I don't think a private corporation should have veto power over what a government agency can do with their product if it's within the law. They can choose not to sell to government agencies, that's fine, but to demand some kind of assurances that they're using it as per Anthropic's own ever-changing moral compass seems like an insane overreach for a private corporation. We still believe in democracy, right?
Spooky23•32m ago
It’s a service. Democracy doesn’t give the government the right to force you to perform a service.

The technology isn’t suitable for the purposes the regime wants.

trinsic2•29m ago
The government works for the people, not the other way around. For the people, by the people and of the people.

If you don't question people in positions of power they will just do whatever they want. Democracy is sustained by action, not by acquiescence.

And with the lawlessness of this administration, I would make it a point to hold them accountable. I'm not going to let them do mass surveillance when they decide to change the law.

Are you naive, or just ignoring what is going on?

jheimark•26m ago
That is crazy. You are suggesting that corporations should have no power over their own IP.

Are you really saying that if Anthropic sells a limited version of their product to Palantir at a certain price, the government should be able to demand access to an unlimited version of Anthropic's product for free because they are a customer of Palantir?

That would effectively mean the government gets an unlimited license to all IP of companies that do business with government suppliers... that would be terrible.

mullingitover•23m ago
> if its within the law.

The current administration has been caught flouting court orders in dozens of cases, to the point that courts are no longer even granting them the assumption that they’re operating in good faith.

I can think of a million good reasons not to give these people the tools to implement automated totalitarianism. Your proposal that they simply refuse service to the government entirely would be ideal.

jfengel•5m ago
"The law" is the contract. The Pentagon agreed to terms of service. The law is not on the Pentagon's side. The contract did not change; what changed is the Pentagon breaking the contract.

Perhaps you think the law shouldn't allow such a contract; that's a valid position. But that's not what the law currently says.

df2dfs•1h ago
What's there to discuss? OAI is seeking a hand-out from the govt to save their asses. They (Sam + top-management) see the writing on the wall and need help.
Spooky23•27m ago
This. The OpenAI grift is to make itself too big to fail. They are playing a game of chicken ahead of the election circus. Trump must keep the market alive until November. Nvidia, Micron, Oracle, Microsoft are cooked when and if they pop.
trinsic2•8m ago
IMHO everyone needs to cancel their subscriptions with all of the AI products until stuff blows over. I don't trust anyone in this industry. There is probably one person or one group behind all of these AI companies that just needs to keep the engine going until they figure out how to replace everyone with bots that can do the dirty work.
vldszn•1h ago
I built a website that shows a timeline of recent events involving Anthropic, OpenAI, and the U.S. government.

Posted here: https://news.ycombinator.com/item?id=47195085

KnuthIsGod•1h ago
Meanwhile Anthropic has no issues with helping Palantir...

HypocrAIsy...

estearum•41m ago
Not hypocritical at all if you knew what Palantir actually does
behnamoh•1h ago
Neither Anthro nor OAI are trustworthy. Local AI all the way. And when I say local, I mean Apple Silicon; I don't like to contribute to Nvidia's monopoly either (fuck "buy a GPU"; the guy is an Nvidia-sponsored "influencer").
zug_zug•59m ago
Great, well deepseek is free for most use and certainly won't be helping the US military any time soon. Since you aren't paying them you aren't really supporting anything bad they may do down the line.
etchalon•56m ago
"Person says it's raining when it's raining."
_alternator_•56m ago
Anyone have a link to the full text of the letter?
GranPC•44m ago
I found a copy on this website: https://www.teamblind.com/post/darios-email-to-anthropic-att...

I don't know how reliable that source is. In any case, here's the text from that link, for posterity:

"I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it. This is an example of who they really are, and I want to make sure everyone sees it for what it is. Although there is a lot we don’t know about the contract they signed with DoW (and that maybe they don’t even know as well — it could be highly unclear), we do know the following:

Sam’s description and the DoW description give the strong impression (although we would have to see the actual contract to be certain) that how their contract works is that the model is made available without any legal restrictions ("all lawful uses") but that there is a "safety layer", which I think amounts to model refusals, that prevents the model from completing certain tasks or engaging in certain applications.

"Safety layer" could also mean something that partners such as Palantir tried to offer us during these negotiations, which is that they on their end offered us some kind of classifier or machine learning system, or software layer, that claims to allow some applications and not others. There is also some suggestion of OpenAI employees ("FDE’s") looking over the usage of the model to prevent bad applications.

Our general sense is that these kinds of approaches, while they don’t have zero efficacy, are, in the context of military applications, maybe 20% real and 80% safety theater. The basic issue is that whether a model is conducting applications like mass surveillance or fully autonomous weapons depends substantially on wider context: a model doesn’t "know" if there’s a human in the loop in the broad situation it is in (for autonomous weapons), and doesn’t know the provenance of the data it is analyzing (so doesn’t know if this is US domestic data vs foreign, doesn’t know if it’s enterprise data given by customers with consent or data bought in sketchier ways, etc).

The kind of "safety layer" stuff that Palantir offered us (and presumably offered OpenAI) is even worse: our sense was that it was almost entirely safety theater, and that Palantir assumed that our problem was "you have some unhappy employees, you need to offer them something that placates them or makes what is happening invisible to them, and that’s the service we provide".

Finally, the idea of having Anthropic/OpenAI employees monitor the deployments is something that came up in discussion within Anthropic a few months ago when we were expanding our classified AUP of our own accord. We were very clear that this is possible only in a small fraction of cases, that we will do it as much as we can, but that it’s not a safeguard people should rely on and isn’t easy to do in the classified world. We do, by the way, try to do this as much as possible, there’s no difference between our approach and OpenAI’s approach here.

So overall what I’m saying here is that the approaches OAI is taking mostly do not work: the main reason OAI accepted them and we did not is that they cared about placating employees, and we actually cared about preventing abuses. They don’t have zero efficacy, and we’re doing many of them as well, but they are nowhere near sufficient for purpose. It is simultaneously the case that the DoW did not treat OpenAI and us the same here.

We actually attempted to include some of the same safeguards as OAI in our contract, in addition to the AUP which we considered the more important thing, and DoW rejected them with us. We have evidence of this in the email chain of the contract negotiations (I’m writing this with a lot to do, but I might get someone to follow up with the actual language). Thus, it is false that "OpenAI's terms were offered to us and we rejected them", at the same time that it is also false that OpenAI’s terms meaningfully protect them against domestic mass surveillance and fully autonomous weapons.

Finally, there is some suggestion in Sam/OpenAI’s language that the red lines we are talking about, fully autonomous weapons and domestic mass surveillance, are already illegal and so an AUP about these is unnecessary. This mirrors and seems coordinated with DoW’s messaging. It is however completely false. As we explained in our statement yesterday, the DoW does have domestic surveillance authorities, that are not of great concern in a pre-AI world but take on a different meaning in a post-AI world.

For example, it is legal for DoW to buy a bunch of private data on US citizens from vendors who have obtained that data in some legal way (often involving hidden consents to sell to third parties) and then analyze it at scale with AI to build profiles of citizens, their loyalties, movement patterns in physical space (the data they can get includes GPS data, etc), and much more.

Notably, near the end of the negotiation the DoW offered to accept our current terms if we deleted a specific phrase about "analysis of bulk acquired data", which was the single line in the contract that exactly matched this scenario we were most worried about. We found that very suspicious. On autonomous weapons, the DoW claims that "human in the loop is the law", but they are incorrect. It is currently Pentagon policy (set during the Biden admin) that a human has to be in the loop of firing a weapon. But that policy can be changed unilaterally by Pete Hegseth, which is exactly what we are worried about. So it is not, for all intents and purposes, a real constraint.

A lot of OpenAI and DoW messaging just straight up lies about these issues or tries to confuse them.

I think these facts suggest a pattern of behavior that I've seen often from Sam Altman, and that I want to make sure people are equipped to recognize:

He started out this morning by saying he shares Anthropic’s red lines, in order to appear to support us, get some of the credit, and not be attacked when they take over the contract. He also presented himself as someone who wants to "set the same contract for everyone in the industry" — e.g. he’s presenting himself as a peacemaker and dealmaker.

Behind the scenes, he’s working with the DoW to sign a contract with them, to replace us the instant we are designated a supply chain risk. But he has to do this in a way that doesn’t make it seem like he gave up on the red lines and sold out when we wouldn’t. He is able to superficially appear to do this, because (1.) he can sign up for all the safety theater that Anthropic rejected, and that the DoW and partners are willing to collude in presenting as compelling to his employees, and (2.) the DoW is also willing to accept some terms from him that they were not willing to accept from us. Both of these things make it possible for OAI to get a deal when we could not.

The real reasons DoW and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot), we haven’t given dictator-style praise to Trump (while Sam has), we have supported AI regulation which is against their agenda, we’ve told the truth about a number of AI policy issues (like job displacement), and we’ve actually held our red lines with integrity rather than colluding with them to produce "safety theater" for the benefit of employees (which, I absolutely swear to you, is what literally everyone at DoW, Palantir, our political consultants, etc, assumed was the problem we were trying to solve).

Sam is now (with the help of DoW) trying to spin this as we were unreasonable, we didn’t engage in a good way, we were less flexible, etc. I want people to recognize this as the gaslighting it is.

Vague justifications like "person X was hard to work with" are often used to hide real reasons that look really bad, like the reasons I gave above about political donations, political loyalty, and safety theater. It’s important that everyone understand this and push back on this narrative at least in private, when talking to OpenAI employees.

Thus, Sam is trying to undermine our position while appearing to support it. I want people to be really clear on this: he is trying to make it more possible for the admin to punish us by undercutting our public support. Finally, I suspect he is even egging them on, though I have no direct evidence for this last thing.

I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!). It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees.

Due to selection effects, they’re sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees."

senectus1•50m ago
and?

Anthropic might not sign up with DoD but they definitely still live in a glass house.

Also, it's extremely evident that we live in a post-truth world. Accusations of lies don't hold any teeth anymore. Especially in the post-law gov of America.

cm2012•41m ago
Good for Anthropic. Even AI at its current state has pretty scary surveillance capabilities.
aeon_ai•40m ago
I get the sense that OpenAI is astroturfing “outrage and hypocrisy” in this thread.

The dead internet is alive and well.

labrador•35m ago
They are on X as well
paxys•33m ago
Sam Altman would lie? Nooo
SirensOfTitan•31m ago
Like others have already mentioned: I think Anthropic's relationship with Palantir undermines Amodei's narrative here. It actually feels like Dario is playing Sam's game better than Sam is.

Those who know better please correct me. My current understanding of Palantir (and other surveillance tech companies like Peregrine) is:

1. They facilitate the sale of data to law enforcement, enabling the government to circumvent fourth amendment protections.

2. They fuse cross-agency government data through Foundry into unified profiles, which the government can use to surveil and pressure citizens without probable cause or a warrant.

ICE also uses a Palantir tool called ELITE to build deportation target lists.

EDIT: Downvoting my comment without any proper rebuttal or clarification is pretty silly.

trinsic2•12m ago
It feels more like they are playing good cop/bad cop... There is just something indifferent about all of this that makes me wonder.
cherioo•5m ago
We don’t know if Palantir is using Claude for those uses. Though Anthropic would not know for sure either.

I do agree with your point that Amodei is playing a game though. Whether he’s winning the bigger picture or not is unclear. His red lines are already so watered down, like how domestic surveillance is not OK, but international? Totally fine.

mrandish•30m ago
When @sama announced within hours that OAI was replacing Anthropic with the "same conditions", it was clear that either the DoW or OAI (or both) were fudging. DoW balked at Anthropic's conditions, so OAI's agreement must have made the "conditions" basically unenforceable.

And sure enough, my reading of it left the impression the OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."

sakesun•21m ago
> it was clear that either the DoW or OAI (or both) were fudging.

This is my first thought as well. It's too obvious. He should have consulted ChatGPT before the announcement.

cheema33•17m ago
> OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."

I believe this understanding is correct. The issue many people have these days with Dept. of War, and most of Trump admin is that they have little respect for laws. They only follow the ones they like and openly ignore the ones that are inconvenient.

Dept of "War" should have zero problems agreeing to the two conditions Anthropic outlined, if they were honest brokers. But I think most of us know that they are not. Calling them dishonest brokers seems very charitable.

reactordev•15m ago
I haven’t seen them follow a law yet
creddit•27m ago
He has to know that this would leak and it makes him look really bad. This is going to be a meaningful, unforced error.
websight•17m ago
Who, Amodei? This makes him look the opposite of really bad
madeofpalk•7m ago
....why does this make him look bad? That he called out the obvious thing that everyone knows?
hintymad•13m ago
Honest question: why do people automatically equate "fully autonomous weapons" with something like killer robots? My immediate reaction is that even the best-in-class rapid-fire gun has a hard time identifying and tracking drones. So we'd need AI to do better tracking, which leads to a fully autonomous weapon. And I really don't get why that's a bad thing.

Of course, a company should have the freedom to choose not to do business with the government. I just think automatically assuming the worst intentions of the government is less productive than setting up a good enough legal framework to limit the government's power.

intrasight•7m ago
We all do business with the government. We pay the military to protect our gold. It is fundamentally a protection racket that we voted for. And one could argue that the military, as the protector of your gold, has the final decision as to what it can and can't do with your technology.
unethical_ban•7m ago
Please describe what kind of fully autonomous weapons system the Pentagon would build that wouldn't be designed to kill people.

For that matter, explain why the Pentagon would balk at not spying on every American.

benlivengood•6m ago
We have traditional autonomous weapons (and counter-defense). They operate on millisecond or faster timescales with existing RF sensors. They are not and will not be using LLMs or other transformers. Maybe ChatGPT will update some realtime Ada code; they formally verify some of that stuff so maybe that won't be terrifyingly dangerous.

Where autonomous transformer-based munitions will be used are basically "here is a photo of a face, find and kill this human" and loitering munitions will take their time analyzing video and then decide to identify and attack a target on their own.

cfloyd•9m ago
It’s all just theatre. These companies will either give in or die off and be replaced by those who offer more freedom of use. It’s capitalism and while it’s not always pretty, it’s how these things go. Choosing to take what you believe as the moral high ground is noble but it does not put your company ahead of the ball in the long term because there are always those who will use that as an advantage to step on their backs.
collingreen•6m ago
Capitalism needs laws and regulation in order to not turn itself into feudalism. It isn't naivety or idealism to enforce fair markets and consumer protection. In my opinion it's existential.