The arms race is just to keep the investors coming, because they still believe that there is a market to corner.
Imagine if Ford had a monopoly on cars: they could unilaterally set an 85 mph speed limit on all vehicles to improve safety, or even a 56 mph limit for environmental-ethical reasons.
Ford can’t do this in real life because customers would revolt at the company sacrificing their individual happiness for collective good.
Similarly, OpenAI in the GPT-3.5 era could set whatever ethical rules it wanted, because users didn't have other options.
Maybe "we", but certainly not "I". Gemini Web is a huge piece of turd and shouldn't even be used in the same sentence as ChatGPT and Claude.
"For the Benefit of Humanity®"
[1] https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-h...
(Apologies if this archive link isn't helpful; the unlocked_article_code in the URL still resulted in a paywall on my side...)
https://meta.stackexchange.com/questions/417269/archive-toda...
https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment...
https://gyrovague.com/2026/02/01/archive-today-is-directing-...
They need to just hug it out and stop doxing each other lol
My own ethical calculus is that they shouldn't be DDoS-attacking, but on the other hand, it's the internet equivalent of egging a house, and not that big a deal in the grand scheme of things. It probably got gyrovague far more attention than they'd have gotten otherwise, so maybe they can cash in on that and thumb their nose at the archive.is people.
Regardless, maybe "we" shouldn't be telling people what sites to use or not use. If you want to talk morals and ethics, then you'd better stop using Gmail, Amazon, eBay, Apple, Microsoft, any frontier AI, and hell, your ISP has probably done more evil things since last Tuesday than the average person gets up to in a lifetime, so no internet either. And totally forget about cellular service. What about the state you live in, or the country? Are they appropriately pure and ethical, or are you going to start telling people they need to defect to some bastion of ethics and nobility?
Real life is messy. Purity tests are stupid. Use archive.is for what it is, and the value it provides which you can't get elsewhere, for as long as you can, because once they're unmasked, that sort of thing is gone from the internet, and that'd be a damn shame.
But otherwise, without an alternative, the entire thread becomes useless. We’d have even more RTFA, degrading the site even for people who pay for the articles. I much prefer keeping archive.today to that.
What I'd like to know is how many people "Harry" has saved from going over the edge. It's like self-driving. We should expect many horrible accidents along the way, but in the end, far more lives saved.
I don't empathize with any of these companies, but I don't trust them to solve mental health either.
In California it is a felony:
> Any person who deliberately aids, advises, or encourages another to commit suicide is guilty of a felony.
You won't know, because it doesn't sell. You need outrage, you need hate:
"Look what the technology does to us! My beautiful daughter is dead because she was using your fancy autocomplete that predicts next word tokens!"
I'm all for responsible use of technology, but this is ridiculous.
The parent's jokey tone is unwarranted, but their overall point is sound. The more blame we assign to inanimate systems like ChatGPT, the more consent we furnish for inhumane surveillance.
> We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome
I am more concerned about the amount of rubbish making it to the HN front page recently.
If you start to look through the lens of business == money-making machine, you can start to think about rational regulations to curb this and protect regular people. The regulations should keep businesses in check while allowing them to make reasonable profits.
This is more Altman-speak. Before it was about how AI was going to end the world. That started backfiring, so now we're talking about political power. That power, however, ultimately flows from the wealth AI generates.
It's about the money. They're for-profit corporations.
What would they do with money? Pay people to work?
Really wish the board had held the line on firing sama.
"Safety" was just a mechanism for complete control of the best LLM available.
When every AI provider said it didn't trust its competitors to deliver "AGI" safely, what they really meant was that they didn't want a competitor to own the definition of "AGI", which means IPOing first.
Using local models from China that are on par with the US ones takes away that control, which is why Anthropic has no open-weight models at all and their CEO continues to spread fear about open-weight models.
When idealists (and AI scientists) say "safe", it means something completely different from how tech oligarchs use it. And the intersection between true idealists and tech oligarchs is near zero, almost by definition, because idealists value their ideals over profits.
On the one hand the new mission statement seems more honest. On the other hand I feel bad for the people that were swindled by the promise of safe open AI meaning what they thought it meant.
That’s not to paint them as wise beyond their years or anything like that, but just that historically Apple has wanted strict control over its products and what they do, and LLMs throw that out the window. Unfortunately, that’s also what people find incredibly useful about LLMs; their uncertainty is one of the most “magical” aspects IMHO.
I turned them into a Gist with fake author dates so you can see the diffs here: https://gist.github.com/simonw/e36f0e5ef4a86881d145083f759bc...
Wrote this up on my blog too: https://simonwillison.net/2026/Feb/13/openai-mission-stateme...
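For anyone who wants to reproduce that kind of diff view for another document: here's a minimal sketch in Python of the backdated-commit trick, assuming you have plain-text snapshots of each version. The dates, filename, and snippet texts below are illustrative placeholders, not the actual mission-statement versions.

```python
import os
import subprocess

# Illustrative (date, text) snapshots; placeholders, not the real versions.
versions = [
    ("2021-06-01T12:00:00", "...advanced AI that benefits humanity..."),
    ("2022-06-01T12:00:00", "...advanced AI that safely benefits humanity..."),
    ("2026-02-13T12:00:00", "...the shorter current wording..."),
]

repo = "mission-history"
subprocess.run(["git", "init", repo], check=True)

for date, text in versions:
    # Overwrite the tracked file with this version's text.
    with open(os.path.join(repo, "mission.txt"), "w") as f:
        f.write(text + "\n")
    # Git reads author/committer timestamps from these environment variables,
    # so each commit appears to have been made when that version was current.
    env = {**os.environ, "GIT_AUTHOR_DATE": date, "GIT_COMMITTER_DATE": date}
    subprocess.run(["git", "add", "mission.txt"], cwd=repo, check=True)
    subprocess.run(
        ["git", "-c", "user.name=diffbot", "-c", "user.email=diffbot@example.com",
         "commit", "-m", f"Snapshot as of {date}"],
        cwd=repo, check=True, env=env,
    )
```

Running `git log -p` in the resulting repo (or pushing it as a Gist) then shows the wording changes as ordinary diffs with the faked timestamps.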
I asked Claude and it ran a search and dug up a copy of their certificate of incorporation in a random Google Drive: https://drive.google.com/file/d/17szwAHptolxaQcmrSZL_uuYn5p-...
It says "The specific public benefit that the Corporation will promote is to responsibly develop and maintain advanced AI for the long term benefit of humanity."
Worth noting that in 2021 their statement just included '...that benefits humanity', and in 2022 'safely' was first added so it became '...that safely benefits humanity'. Then in the most recent version it was changed to be much shorter, and no longer includes the word 'safely'.
All inventions have downsides. The printing press, cars, the written word, computers, the internet. It's all a mixed bag. But part of what makes life interesting is changes like this. We don't know the outcome but we should run the experiment, and let's hope the results surprise all of us.
https://openai.com/index/updating-our-preparedness-framework...
https://fortune.com/2025/04/16/openai-safety-framework-manip...
> OpenAI said it will stop assessing its AI models prior to releasing them for the risk that they could persuade or manipulate people, possibly helping to swing elections or create highly effective propaganda campaigns.
> The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people are using the models once they are released for signs of violations.
To see persuasion/manipulation as simply a multiplier on other invention capabilities, and something that can be patched onto a model already in use, is a very specific statement about what AI safety means.
Certainly, an AI that can design weapons of mass destruction could be an existential threat to humanity. But so, too, is a system that subtly manipulates an entire world to lose its ability to perceive reality.
A step in the positive direction, at least they don't have to pretend any longer.
It's like Google and "don't be evil". People didn't get upset with Google because they were more evil than others, heck, there's Oracle, defense contractors and the prison industrial system. People were upset with them because they were hypocrites. They pretended to be something they were not.
However, nitpicking a mission statement is complete nonsense.
And any ethic, and I do mean ANY, that gets in the way of profit will be sacrificed on the altar of Moloch for an extra dollar.
And 'safely' is today's sacrificed word.
This should surprise nobody.
Edit (link for context): https://www.bloomberg.com/news/articles/2026-01-17/musk-seek...
In the US, they would be sued for securities fraud every time their stock went down because of a bad news article about unsafe behavior.
They can now say in their S-1 that “our mission is not changing”, which is much better than “we’re changing our mission to remove safety as a priority.”