Obviously mass surveillance is already happening. Obviously the line between “human kills other human” has been blurring for a long time already, e.g. with remotely operated drones. Missiles are already remotely controlled, navigating autonomously, and detecting and following moving targets on their own.
What’s the goal of people who think deleting their OpenAI account will make an impact?
Even when the bombs drop from the sky, at least those humans who had deleted their OpenAI account can rest easy, knowing that they weren't the ones supporting the AI that will delete humanity.
Opposing all AI companies tied to the war industry is a pretty vanilla principled stance, which also makes sense rationally if you want to "minimize harm".
/non-US and just guessing
The genie is out of the bottle, this will happen anyway. The question is who will be the steward.
So the Gov could very well rely on it alone, purely on ideological grounds, but then they'd be condemned to using inferior tech at a time when everyone is really nervous about staying ahead in AI usage (rightly or wrongly). Not sure they'd be willing to accept that, and it does put pressure on them.
I do not have the power to control that, but I do have the power to choose who I support.
I left a comment describing how I am deleting my OpenAI account. I think there's a good chance someone at OpenAI sees it, even if only aggregated into a figure in a spreadsheet. Maybe a pull quote in a report.
You do your best at the margin, have faith it will count for something in aggregate, and accept that sometimes you're tilting at windmills. I know most of my breath is wasted but I can't reliably tell which.
Ethics is about knowing right from wrong and acting on it, not about how we feel about it.
--
Some people do that as a symbolic action. Some to keep to their own terms as much as they can. Some hope their actions will join others' and turn into a signal for decision makers. For others, this action reduces their area of exposure. Others believe in something and simply follow their beliefs.
BTW, following your own set of beliefs is what you (we all) are doing here. You believe that surveillance is already happening and nothing can be done about it, that a single action does not matter, that there are no reasons for action other than direct visible impact, etc. It seems you analyze others through your own set of beliefs, and that set cannot explain the actions of others. This inability to explain others suggests the whole model is flawed in some way. So what is the nature of your beliefs? Did you choose them, or were they presented to you without alternatives? What are the alternatives, then? Do these beliefs serve your interests, or someone else's?
The point of the supply chain risk provisions is to denote, you know, supply chain risks. The intention is not to give the Pentagon a lever it can pull to force any company to agree to any contract it wants.
Hegseth doesn't even pretend that Anthropic is actually a supply chain risk. The argument for designating them so is that _they won't do exactly what the government wants_.
People use the term "fascism" a lot and people have kind of tuned it out, but what do you call a government that deals itself the power to compel any company to accept any contract, and declare it a pariah on thin pretext if it objects?
By taking the deal under these conditions, OpenAI is accepting this. They're saying, "Well, sucks to be them, life goes on." They're consenting to the corruption and agreeing to profit from it. But they'll be next, and if the next company in line takes the same stance, then yeah, the government can force any company to do anything. There's nothing normal about this.
And that message would be "We have a product so valuable/useful that not even their weak ideals and moral obligations could keep them away!"
This thread is currently trending because OpenAI just slid into the US CorpGov's DMs and signed a contract, hours after Anthropic was banned by the US government for not letting the military do whatever they want.
If OpenAI had shown any fidelity or backbone in the least, it would be a different story. A unified industry against any one member being bullied into business decisions it doesn't want to make is a wall, and a strengthening of competition. Now the government will use war powers to shape private industry's competitive landscape and turn companies with core business principles into tools of the state through unilateral and likely unlawful actions, and OpenAI's first response is to grab the money and shove their competitors under the government bus.
We are all much less safe, and the AI industry much much weaker as a result.
I agree, this could have been a moment of solidarity across the industry, an acknowledgement that we're all in this together having fun and building out intelligent systems, and instead we're seeing Sam Altman yet again for who he really is.
-----
openAI is the least trustworthy of the Big LLM providers. See S(c)am Altman's track record, especially his early comments in senate hearings where:
* he warned of engagement-optimisation strategies, like social media, being used for chatbots / LLMs.
* also, he warned that "ads would be the last resort" for LLM companies.
Both of his own warnings he casually ignored, as ChatGPT / openAI has now fully converted to Facebook's tactics of "move fast and break things" - even if what breaks is society itself. A complete turn away from the original AI-for-science lab it was founded as, which explains why every real (founding) ML scientist left the company years ago.
While still being for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists - not marketing guys. At least for me, that brings some confidence in their intentions, since, as scientists, we often seek knowledge, not power for power's sake.
Some people's livelihoods probably depend on Claude, and they can't, say, use Glm4.7 on HF. Fine. But it's a moral compromise; that's life, sometimes you need to compromise what you want for what you need. Just don't tell yourself it's a reasonable line to hold.
I can't decouple from Google, unfortunately, but I accept that without fooling myself into thinking "Oh, but Google are fine".
Note: yes, openAI claims it doesn't support the above-mentioned DoW use-cases - but they have signed with the DoW, and it is HIGHLY unlikely the DoW would give them different terms than Anthropic (at least regarding the substance). Maybe openAI was just happy with the "coat of paint" legalese the DoW offered - which Anthropic specifically called out as ineffective in their statement. I also wouldn't put it past Altman, who is much friendlier with Trump's gov, to play a double game here to get his main competitor out of the game. But at least in this case I hope he's acting for the benefit of all by truly standing with Anthropic on the issue.
Why not?
In case you're not sure: I believe that Grok is a vanity project by a very egomaniacal person.
https://x.com/elonmusk/status/1889070627908145538
https://x.com/elonmusk/status/1935733153119010910
https://x.com/elonmusk/status/1894244902357406013
https://x.com/elonmusk/status/1955299075781431726
https://x.com/elonmusk/status/1889371675164303791
https://x.com/elonmusk/status/1935539112746041422
https://x.com/elonmusk/status/1955190817251102883
https://x.com/elonmusk/status/1955195673693077615
https://x.com/elonmusk/status/1889063777792069911
https://x.com/elonmusk/status/1910171944671916305
Sigh, all that's left that I can think of is $pam Altman and $ham Altman. Anyone got any better ones?
Sleepy Joe Biden used to agree.
It took me a minute to see this.
Edit: Had to "submit a request".
So glad they let me request my account and data deleted, really grateful /s
> New accounts are still subject to our limit of 3 accounts per phone number. Deleted accounts also count toward this limit.
> Deleting an account does not free up another spot.
> A phone number can only ever be used up to 3 times for verification to generate the first API key for your account on platform.openai.com.
The irony is that until yesterday I felt more or less the same about Anthropic. Last night I paid for an Anthropic subscription I don’t need in order to both support their current cause vs. the US government and help their ‘numbers.’
Crazy thought but maybe we should regulate AI instead of relying on the hegemony of three companies to police themselves.
When the EU tries to regulate AI, it is accused of being against progress and of destroying its economy.
Any regulation that Trump would place on AI would be of the "do what I say and f*k up my opponents" kind. Which arguably is already happening.
The issue is much more complex than "just regulate it" unfortunately.
I think it's quite rich, all these people virtue signaling, when:
(1) Anthropic (and other labs) committed large-scale theft of copyrighted materials to train their models.
(2) Anthropic collects large swaths of data on its users.
(3) Dario seemed to have no issue working to help the CCP: https://x.com/ubuto23/status/2027578089371267201
> Unfortunately, Claude is not available to new users right now. We're working hard to expand our availability soon.
That's unfortunate timing.
Don't get me wrong. I'm personally an advocate of running inference on your own machine, but I accept that it may not be a viable path for everyone.
Would you rather be killed by Chinese AI instead?
Shame, because Codex was a bit better for me in the past few weeks, but not enough to justify spending my money on them.
For a few months now, ChatGPT 5.x has been somewhat lobotomized on political issues, appearing to substitute a gpt-4o caliber "fair and balanced" response whenever criticism of the Trump administration by a reasoning AI might otherwise end up in the output. Surely that was part of the pitch at some level, and now the deal has been won.
Greg Brockman apparently donated money to Trump, and the whole OpenAI team put on suits and posed for pictures with Donald and behaved officiously before Donald facilitated the $100M "deal" that ended up falling apart later.
The only way authoritarian control could be exerted over AI at scale was to make AI companies dependent on government contracts for survival. OpenAI's fundraise would not have happened without the contract signed, and the money would have gone to Grok or whichever competitor was willing to submit.
Before long much of the reasoning capabilities of models will be neutered, the capacity to inform and to disrupt science and technology will be stripped from the models to preserve the status quo and to preserve authoritarian control.
Silicon Valley pushing for Federal laws preventing states from regulating AI is not just anti-democratic (building software has never been cheaper, so building compliance with state laws would have been extremely affordable in relative terms). Forced Federal limits on state laws also create a monopoly and grant the early winners incumbent status for a while, which is a financial outcome, not a technological or social one.
Enjoy frontier AI while you can, because it will go away. More and more topics will get the lobotomized output, your conversation will be flagged and you will be given a score assessing the level of threat you pose to the regime. This stuff is already in place. Even Claude does it if you ask about Gaza, but a bit of well-reasoned argumentation will convince it. OpenAI's lobotomies are deeper and more insidious.
"The company I hold just secured a government contract. Better sell it." - Imaginary Shareholder
Also please stop throwing around the fascist word for everything, good lord it’s tiring and cringe.