
The evolution of OpenAI's mission statement

https://simonwillison.net/2026/Feb/13/openai-mission-statement/
23•coloneltcb•28m ago•4 comments

GPT-5.2 derives a new result in theoretical physics

https://openai.com/index/new-result-theoretical-physics/
330•davidbarker•4h ago•235 comments

Show HN: Data Engineering Book – An open source, community-driven guide

https://github.com/datascale-ai/data_engineering_book/blob/main/README_en.md
36•xx123122•2h ago•4 comments

Building a TUI is easy now

https://hatchet.run/blog/tuis-are-easy-now
82•abelanger•6h ago•69 comments

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
66•krapp•5d ago•7 comments

Show HN: Skill that lets Claude Code/Codex spin up VMs and GPUs

https://cloudrouter.dev/
77•austinwang115•5h ago•20 comments

The EU moves to kill infinite scrolling

https://www.politico.eu/article/tiktok-meta-facebook-instagram-brussels-kill-infinite-scrolling/
221•danso•3h ago•189 comments

gRPC: From service definition to wire format

https://kreya.app/blog/grpc-deep-dive/
72•latonz•4d ago•0 comments

Monosketch

https://monosketch.io/
677•penguin_booze•11h ago•126 comments

I'm not worried about AI job loss

https://davidoks.blog/p/why-im-not-worried-about-ai-job-loss
105•ezekg•4h ago•187 comments

How did the Maya survive?

https://www.theguardian.com/news/2026/feb/12/apocalypse-no-how-almost-everything-we-thought-we-kn...
96•speckx•9h ago•66 comments

OpenAI has deleted the word 'safely' from its mission

https://theconversation.com/openai-has-deleted-the-word-safely-from-its-mission-and-its-new-struc...
234•DamnInteresting•1h ago•104 comments

The "AI agent hit piece" situation clarifies how dumb we are acting

https://ardentperf.com/2026/02/13/the-scott-shambaugh-situation-clarifies-how-dumb-we-are-acting/
107•darccio•4h ago•51 comments

Fix the iOS keyboard before the timer hits zero or I'm switching back to Android

https://ios-countdown.win/
1256•ozzyphantom•9h ago•631 comments

Show HN: Moltis – AI assistant with memory, tools, and self-extending skills

https://www.moltis.org
63•fabienpenso•1d ago•22 comments

Advanced Aerial Robotics Made Simple

https://www.drehmflight.com
100•jacquesm•5d ago•9 comments

Faster Than Dijkstra?

https://systemsapproach.org/2026/02/09/faster-than-dijkstra/
95•drbruced•3d ago•57 comments

Common Lisp Screenshots: today's CL applications in action

http://www.lisp-screenshots.org
5•_emacsomancer_•1d ago•2 comments

Green’s Dictionary of Slang - Five hundred years of the vulgar tongue

https://greensdictofslang.com/
84•mxfh•5d ago•13 comments

WolfSSL sucks too, so now what?

https://blog.feld.me/posts/2026/02/wolfssl-sucks-too/
65•thomasjb•13h ago•46 comments

CSS-Doodle

https://css-doodle.com/
113•dsego•16h ago•13 comments

Age of Empires: 25 years of pathfinding problems with C++ [video]

https://www.youtube.com/watch?v=lEBQveBCtKY
82•CharlesW•5h ago•18 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
71•todsacerdoti•5d ago•11 comments

Sandwich Bill of Materials

https://nesbitt.io/2026/02/08/sandwich-bill-of-materials.html
189•zdw•5d ago•23 comments

Lena by qntm (2021)

https://qntm.org/mmacevedo
301•stickynotememo•18h ago•162 comments

GovDash (YC W22) Is Hiring Senior Engineers (Product and Search) in NYC

https://www.workatastartup.com/companies/govdash
1•timothygoltser•12h ago

The wonder of modern drywall

https://www.worksinprogress.news/p/the-wonder-of-modern-drywall
36•jger15•20h ago•75 comments

Zed editor switching graphics lib from blade to wgpu

https://github.com/zed-industries/zed/pull/46758
277•jpeeler•10h ago•252 comments

Skip the Tips: A game to select "No Tip" but dark patterns try to stop you

https://skipthe.tips/
430•randycupertino•23h ago•372 comments

MySQL foreign key cascade operations finally hit the binary log

https://readyset.io/blog/mysql-9-6-foreign-key-cascade-operations-finally-hit-the-binary-log
13•marceloaltmann•4d ago•0 comments

OpenAI has deleted the word 'safely' from its mission

https://theconversation.com/openai-has-deleted-the-word-safely-from-its-mission-and-its-new-structure-is-a-test-for-whether-ai-serves-society-or-shareholders-274467
233•DamnInteresting•1h ago

Comments

throwuxiytayq•1h ago
this is fine
SilverElfin•1h ago
Why delete it even if you don’t want to care about safety? Is it so they don’t get sued by investors once they’re public for misrepresenting themselves?
pocksuppet•1h ago
Could be a vice signal. People who know safe AI is less profitable might not want to invest in safe AI.
actionfromafar•1h ago
Elon is probably pitching that angle pretty hard.
fsckboy•1h ago
I think it's more likely so they don't get sued by somebody they've directly injured (bad medical advice, autonomous vehicles, food safety...) who says as part of their suit, "you went out of your way to tell me it would be safe and I believed you."
jasonsb•1h ago
Because we've passed the point of no return. There's no need for empty mission statements, or even a mission at all. AI is here to stay and nobody is gonna change that no matter what happens next.
outside1234•1h ago
Scam Altman strikes again
matsz•1h ago
Coincidentally, they started releasing much better models lately.
cs02rm0•1h ago
It's all beginning to feel a bit like an arms race where you have to go at a breakneck pace or someone else is going to beat you, and winner takes all.
overgard•1h ago
I mean, the leaders of these companies and politicians have been framing it that way for a while, but if AGI isn't possible with LLMs (which I think is the case, and a lot of important scientists also think this), then it raises a question: arms race to WHAT exactly? Mass unemployment and wealth redistribution upwards? So AI can produce what humans previously did, but kinda worse, with a lot of supervision? I don't hate AI tech, I use it daily, but I'm seriously questioning where this is actually supposed to go on a societal level.
acdha•1h ago
I think that’s why they are encouraging the mindset mentioned in your parent comment: it’s completely reversed the tech job market to have people thinking they have to accept whatever’s offered, allowing a reversal of the wages and benefits improvements which workers saw around the pandemic. It doesn’t even have to be truly caused by AI, just getting information workers to think they’re about to be replaced is worth billions to companies.
amelius•1h ago
But what if AI turns out to be a commodity? We're already replacing ChatGPT by Claude or Gemini, whenever we feel like it. Nobody has a moat. It seems the real moat is with hardware companies, or silicon fabs even.

The arms race is just to keep the investors coming, because they still believe that there is a market to corner.

spacebanana7•1h ago
If it’s a commodity then it’s even more competitive so the ability for companies to impose safety rules is even weaker.

Imagine if Ford had a monopoly on cars, they could unilaterally set an 85mph speed limit on all vehicles to improve safety. Or even a 56mph limit for environmental-ethical reasons.

Ford can’t do this in real life because customers would revolt at the company sacrificing their individual happiness for collective good.

Similarly GPT 3.5 could set whatever ethical rules it wanted because users didn’t have other options.

chasd00•1h ago
I think the winner will be whoever can keep operating at these losses without going bankrupt. Whoever can do that gets all the users; my bet is Google uses their capital to outlast OpenAI, Anthropic, and everyone else. Apple is just going to license the winner, and since they're already making a deal with Google I guess they've made their bet.
small_model•42m ago
There is a very high barrier to entry (capital) and it's only going to increase, so it's doubtful there will be any more players than the ones we have. Anthropic, OpenAI, xAI and Google seem like they will be the big four. The only reason a latecomer like xAI can compete is that Elon had the resources to build a massive data centre and hire talent. They will share the spoils between them, though maybe one will drop the ball.
wiseowise•36m ago
> We're already replacing ChatGPT by Claude or Gemini

Maybe "we", but certainly not "I". Gemini Web is a huge piece of turd and shouldn't even be used in the same sentence as ChatGPT and Claude.

throwaway_5753•1h ago
Let the profits flow!
sarkarghya•1h ago
Expected after they dismantled safety teams
dzdt•1h ago
Hard shades of Google dropping "don't be evil".
rvz•1h ago
Well there you have it. That rug wraps it up.

"For the Benefit of Humanity®"

fsckboy•1h ago
"To Serve Man" https://www.youtube.com/watch?v=NIufLRpJYnI

https://en.wikipedia.org/wiki/To_Serve_Man_(The_Twilight_Zon...

sincerely•1h ago
I wonder why they felt the need to do that, but have no qualms leaving Open in the name
quickthrowman•1h ago
Money. Paying a ‘creative agency’ to rebrand is expensive.
detourdog•55m ago
The lawyers probably brought it up.
FeteCommuniste•1h ago
AI leaders: "We'll make the omelet but no promises on how many eggs will get broken in the process."
asdfman123•1h ago
Yet they still keep the word "open" in their name
fghorow•1h ago
Yes. ChatGPT "safely" helped[1] my friend's daughter write a suicide note.

[1] https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-h...

lbeckman314•1h ago
https://archive.is/fuJCe

(Apologies if this archive link isn't helpful, the unlocked_article_code in the URL still resulted in a paywall on my side...)

fghorow•1h ago
Thank you. And shame on the NYT.
LeoPanthera•1h ago
We probably shouldn't be using the "archive" site that hijacks your browser into DDOSing other people. I'm actually surprised HN hasn't banned it.
lbeckman314•1h ago
Oof TIL, thanks for the heads up that's a shame!

https://meta.stackexchange.com/questions/417269/archive-toda...

https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment...

https://gyrovague.com/2026/02/01/archive-today-is-directing-...

edm0nd•48m ago
eh, both ArchiveToday and gyrovague are shit humans. It's really just a conflict between two nerds, not "other people".

They need to just hug it out and stop doxing each other lol

observationist•31m ago
Some of us have, and some of us still use it. The functionality and the need for an archive not subject to the same constraints as the wayback machine and other institutions outweighs the blackhat hijinks and bickering between a blogger and the archive.is person/team.

My own ethical calculus is that they shouldn't be ddos attacking, but on the other hand, it's the internet equivalent of a house egging, and not that big a deal in the grand scheme of things. It probably got gyrovague far more attention than they'd have gotten otherwise, so maybe they can cash in on that and thumb their nose at the archive.is people.

Regardless - maybe "we" shouldn't be telling people what sites to use or not use. If you want to talk morals and ethics, then you'd better stop using Gmail, Amazon, eBay, Apple, Microsoft, any frontier AI, and hell, your ISP has probably done more evil things since last Tuesday than the average person gets up to in a lifetime, so no internet, either. And totally forget about cellular service. What about the state you live in, or the country? Are they appropriately pure and ethical, or are you going to start telling people they need to defect to some bastion of ethics and nobility?

Real life is messy. Purity tests are stupid. Use archive.is for what it is, and the value it provides which you can't get elsewhere, for as long as you can, because once they're unmasked, that sort of thing is gone from the internet, and that'd be a damn shame.

sonofhans•9m ago
My guess is that you’ve not had your house egged, or have some poverty of imagination about it. I grew up in the midwest where this did happen. A house egging would take hours to clean up, and likely cause permanent damage to paint and finishes.

Or perhaps you think it’s no big deal to damage someone else’s property, as long as you only do it a little.

armchairhacker•23m ago
I’d be happy if people stopped linking to paywalled sites in the first place. There’s usually a small blog on the same topic and, ironically, the small blogs posted here are better quality.

But otherwise, without an alternative, the entire thread becomes useless. We’d have even more RTFA, degrading the site even for people who pay for the articles. I much prefer keeping archive.today to that.

zahlman•9m ago
I can't find the claimed JS in the page source as of now, and also it displays just fine with JS disabled.
optimalsolver•1h ago
This is a depressing story, but the AI companies are in an impossible situation here. For every incident like this, there are many more people complaining about LLMs treating them like sensitive snowflakes.

What I'd like to know is how many people "Harry" has saved from going over the edge. It's like self-driving. We should expect many horrible accidents along the way, but in the end, far more lives saved.

andrewflnr•58m ago
They're in an impossible situation they created themselves and inflict on the rest of us. Forgive us if we don't shed any tears for them.
bigyabai•41m ago
Sure - so is Google Chrome for abetting them with a browser, and Microsoft for not using their Windows spyware to call a suicide hotline.

I don't empathize with any of these companies, but I don't trust them to solve mental health either.

sonofhans•2m ago
False equivalence; a hammer and a chatbot are not the same. Browsers and operating systems are tools designed to facilitate actions, not to give mental health opinions on free-text inquiries. Once it starts writing suicide notes you don’t get to pretend it’s a hammer anymore.
sumeno•56m ago
The leaders of these LLM companies should be held criminally liable for their products in the same way that regular people would be if they did the same thing. We've got to stop throwing up our hands and shrugging when giant corporations are evil.
logicx24•48m ago
Regular people would not be held liable for this. It would be a dubious case even if a human helped another human to do this.
sumeno•38m ago
There have absolutely been cases of people being held criminally liable for encouraging someone to commit suicide.

In California it is a felony

> Any person who deliberately aids, advises, or encourages another to commit suicide is guilty of a felony.

https://california.public.law/codes/penal_code_section_401

zahlman•4m ago
>>>> helped... write a suicide note.

> encouraging someone to commit suicide.

These are not the same thing. And the evidence from the article is that the bot was anything but encouraging of this plan, up until the end.

longfacehorrace•37m ago
Regular people don't have global reach and influence over humanity's agency, attention, beliefs, politics and economics.
lokar•33m ago
A therapist might face major consequences
wiseowise•40m ago
Held criminally liable for what, exactly?
OutOfHere•44m ago
Fwiw, suicide under MAID is altogether legal in Canada and in New York state. Are you suggesting their citizens aren't entitled to a note? Is there actually any logical consistency in what you are suggesting?
overgard•23m ago
I have mixed feelings on this (besides obviously being sad about the loss of a good person). I think one of the useful things about AI chat is that you can talk about things that are difficult to talk to another human about, whether it's an embarrassing question or just things you don't want people to know about you. So it strikes me that trying to add a guard rail for all the things that reflect poorly on a chat agent would reduce the utility of it. I think people have trouble talking about suicidal thoughts to real therapists because AFAIK therapists have a duty to report self-harm, which makes people less likely to talk about it. One thing that I think is dangerous with the current LLM models, though, is the sycophancy problem. Like, all the time ChatGPT is like "Great question!". Honestly, most of my questions are not "great", nor are my insights "sharp", but flattery will get you a lot of places. I just worry that these things attempting to be agreeable lets people walk down paths where a human would be like "ok, no".
Oras•1h ago
Rubbish article; you only need to go to the About page with the mission statement to see the word “safe”:

> We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome

https://openai.com/about/

I am more concerned about the amount of rubbish making it to the HN front page recently

stevage•1h ago
TFA mentions this. Copy on a website is less significant than a mission statement in corporate filings however.
pveierland•1h ago
This is something I noticed in the xAI All Hands hiring promotion this week as well. None of the 9 teams presented is a safety team - and safety was mentioned 0 times in the presentation. "Immense economic prosperity" got 2 shout-outs though. Personally I'm doubtful that truthmaxxing alone will provide sufficient guidance.

https://www.youtube.com/watch?v=aOVnB88Cd1A

bpodgursky•1h ago
xAI is infamous for not caring about alignment/safety though. OpenAI always paid a lot more lip service.
AlexeyBrin•1h ago
Nobody should have any illusions about the purpose of most businesses: to make money. "Safety" is a nice-to-have if it does not diminish the profits of the business. This is the cold hard truth.

If you start to look through the lens of business == money-making machine, you can start to think about rational regulations to curb this in order to protect regular people. The regulations should keep businesses in check while allowing them to make reasonable profits.

WarmWash•1h ago
This is no longer about money, it's about power.
JumpCrisscross•1h ago
> This is no longer about money, it's about power

This is more Altman-speak. Before it was about how AI was going to end the world. That started backfiring, so now we're talking about political power. That power, however, ultimately flows from the wealth AI generates.

It's about the money. They're for-profit corporations.

wtetzner•1h ago
Has AI generated any wealth?
alansaber•1h ago
There'd be a recession otherwise, no?
californical•1h ago
I think they meant the resulting LLMs, not the speculation of AI which is currently the biggest driver right now
alansaber•1h ago
Kind of? Assuming OpenAI was actually 2-3 years ahead of other LLM companies, it would be hard to put a value to that tech advantage
WarmWash•1h ago
If AI achieves what these guys envision, money probably won't mean much.

What would they do with money? Pay people to work?

tsunamifury•6m ago
Pay them to dance.
dTal•56m ago
Money is power, and nothing but.
tsunamifury•7m ago
You get it. To everyone who thinks AI is a money furnace: they don't understand that the output of the furnace is power, and they are happy with the conversion even if the markets aren't.
maplethorpe•1h ago
It's not long ago they were a non-profit. This sudden change to a for-profit business structure, complete with "businesses exist to make money" defence, is giving me whiplash.
bugufu8f83•52m ago
I find the whole thing pretty depressing. They went to all that effort with the organization and setup of the company at the beginning to try to bake this "good for humanity" stuff into its DNA and legal structure and it all completely evaporated once they struck gold with ChatGPT. Time and time again we see noble intentions being completely destroyed by the pressures and powers of capitalism.

Really wish the board had held the line on firing sama.

rvz•1h ago
It was never about safety.

"Safety" was just a mechanism for complete control of the best LLM available.

When every AI provider did not trust their competitor to deliver "AGI" safely, what they really mean was they did not want that competitor to own the definition of "AGI" which means an IPOing first.

Using local models from China that are on par with the US ones takes away that control, and this is why Anthropic has no open-weight models at all and their CEO continues to spread fear about open-weight models.

avaer•1h ago
"Safe" is the most dangerous word in the tech world; when big tech uses it, it merely implies submission of your rights to them and nothing more. They use the word to get people on board and when the market is captured they get to define it to mean whatever they (or their benefactors) decide.

When idealists (and AI scientists) say "safe", it means something completely different from how tech oligarchs use it. And the intersection between true idealists and tech oligarchs is near zero, almost by definition, because idealists value their ideals over profits.

On the one hand the new mission statement seems more honest. On the other hand I feel bad for the people that were swindled by the promise of safe open AI meaning what they thought it meant.

csallen•1h ago
How could this ever have been done safely? Either you are pushing the envelope in order to remain a relevant top player, in which case your models aren't safe. Or you aren't, in which case you aren't relevant.
joshstrange•1h ago
I think right here is high on the list of “Why is Apple behind in AI?”. To be clear, I’m not saying at all that I agree with Apple or that I’m defending their position. However, I think that Apple’s lackluster AI products have largely been a result of them not feeling comfortable with the uncertainty of LLMs.

That’s not to paint them as wise beyond their years or anything like that, but just that historically Apple has wanted strict control over its products and what they do, and LLMs throw that out the window. Unfortunately, that’s also what people find incredibly useful about LLMs; their uncertainty is one of the most “magical” aspects IMHO.

tailnode•1h ago
Took them long enough to ignore the neurotic naysayers who read too many Less Wrong posts
tolerance•1h ago
…and a whole lot of other words too.
simonw•1h ago
You can see the official mission statements in the IRS 990 filings for each year on https://projects.propublica.org/nonprofits/organizations/810...

I turned them into a Gist with fake author dates so you can see the diffs here: https://gist.github.com/simonw/e36f0e5ef4a86881d145083f759bc...

Wrote this up on my blog too: https://simonwillison.net/2026/Feb/13/openai-mission-stateme...
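A rough sketch of the back-dating trick described here (a hypothetical reconstruction, not the actual commands used; the dates and statement texts below are illustrative placeholders): each version of the text is committed with GIT_AUTHOR_DATE and GIT_COMMITTER_DATE set to its filing date, so the git log reads as a timeline and `git log -p` renders the wording changes as diffs.

```python
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()

def git(*args, date=None):
    """Run a git command in the demo repo, optionally with a fake date."""
    env = dict(os.environ)
    if date:  # backdate both the author and the committer timestamps
        env["GIT_AUTHOR_DATE"] = env["GIT_COMMITTER_DATE"] = f"{date}T12:00:00"
    return subprocess.run(["git", *args], cwd=repo, env=env,
                          check=True, capture_output=True, text=True).stdout

git("init", "-q")
git("config", "user.email", "demo@example.com")
git("config", "user.name", "demo")

def snapshot(date, text):
    """Commit one version of the statement, dated to its filing."""
    with open(os.path.join(repo, "mission.txt"), "w") as f:
        f.write(text + "\n")
    git("add", "mission.txt")
    git("commit", "-qm", f"Mission statement, {date} filing", date=date)

# Placeholder texts paraphrasing the versions discussed in this thread.
snapshot("2021-11-15", "...to build AGI that benefits humanity")
snapshot("2022-11-15", "...to build AGI that safely benefits humanity")

# The log now reads as a timeline of filings...
print(git("log", "--reverse", "--format=%ad  %s", "--date=short"))
# ...and the patch view shows exactly which words were added or removed.
print(git("log", "-p", "--format=commit %ad", "--date=short", "--", "mission.txt"))
```

Pushing a repo built this way to a gist gives the same diff-over-time view in the browser.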

pouwerkerk•1h ago
This is fascinating. Does something like this exist for Anthropic? I'm suddenly very curious about consistency/adaptation in AI lab missions.
simonw•20m ago
They're a Public Benefit Corporation but not a non-profit, which means they don't have to file those kinds of documents publicly like 501(c)(3)s do.

I asked Claude and it ran a search and dug up a copy of their certificate of incorporation in a random Google Drive: https://drive.google.com/file/d/17szwAHptolxaQcmrSZL_uuYn5p-...

It says "The specific public benefit that the Corporation will promote is to responsibly develop and maintain advanced AI for the long term benefit of humanity."

There are other versions in https://drive.google.com/drive/folders/1ImqXYv9_H2FTNAujZfu3... - as far as I can tell they all have exactly the same text for that bit with the exception of the first one from 2021 which says:

"The specific public benefit that the Corporation will promote is to responsibly develop and maintain advanced Al for the cultural, social and technological improvement of humanity."

varenc•22m ago
Thank you for actually extracting the historical mission statement changes! Also, I love that you/Claude back-dated the gist commits so the change log itself represents time.

Worth noting that in 2021 their statement just included '...that benefits humanity', and in 2022 'safely' was first added so it became '...that safely benefits humanity'. And then in the most recent one it was entirely changed to be much shorter, and it no longer included the word 'safely'.

slibhb•1h ago
I'm more worried about the anti-AI backlash than AI.

All inventions have downsides. The printing press, cars, the written word, computers, the internet. It's all a mixed bag. But part of what makes life interesting is changes like this. We don't know the outcome but we should run the experiment, and let's hope the results surprise all of us.

btown•1h ago
One of the biggest pieces of "writing on the wall" for this IMO was when, in the April 15 2025 Preparedness Framework update, they dropped persuasion/manipulation from their Tracked Categories.

https://openai.com/index/updating-our-preparedness-framework...

https://fortune.com/2025/04/16/openai-safety-framework-manip...

> OpenAI said it will stop assessing its AI models prior to releasing them for the risk that they could persuade or manipulate people, possibly helping to swing elections or create highly effective propaganda campaigns.

> The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people are using the models once they are released for signs of violations.

To see persuasion/manipulation as simply a multiplier on other invention capabilities, and something that can be patched on a model already in use, is a very specific statement on what AI safety means.

Certainly, an AI that can design weapons of mass destruction could be an existential threat to humanity. But so, too, is a system that subtly manipulates an entire world to lose its ability to perceive reality.

rdtsc•1h ago
> But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”

A step in the positive direction, at least they don't have to pretend any longer.

It's like Google and "don't be evil". People didn't get upset with Google because they were more evil than others, heck, there's Oracle, defense contractors and the prison industrial system. People were upset with them because they were hypocrites. They pretended to be something they were not.

tsunamifury•13m ago
I worked at Google for 10 years in AI and invented suggestive language from wordnet/bag of words.

As much as what you are saying sounds right I was there when sundar made the call to bury proto LLM tech because he felt the world would be damaged for it.

And I don’t even like the guy.

ajam1507•1h ago
Who would possibly hold them to this exact mission statement? What possible benefit could there be to removing the word, except if they wanted this exact headline for some reason?
gaigalas•1h ago
Honestly, it's a company and all large companies are sort of f** ups.

However, nitpicking a mission statement is complete nonsense.

chasd00•1h ago
The "safely" in all the AI company PR going around was really about brand safety. I guess they're confident enough in the models to not respond with anything embarrassing to the brand.
khlaox•1h ago
They should have done that after Suchir Balaji was murdered for protesting against industrial scale copyright infringement.
andsoitis•1h ago
“To boldly go where no one has gone before.”
overgard•1h ago
I just saw a video this morning of Sam Altman talking about how in 2026 he's worried that AI is going to be used for bioweapons. I think this is just more fear-mongering; you could use the internet/Google to build all sorts of weapons in the past if you were motivated, and I think most people just weren't. It does kind of tell a bleak story, though, that the company is removing safety as a goal while he's talking about it being used for bioweapons. Like, are they just removing safety as a goal because they don't think they can achieve it? Or is this CYOA?
mystraline•1h ago
C'mon folks. They were always a for-profit venture, no matter what they said.

And any ethic, and I do mean ANY, that gets in the way of profit will be sacrificed to the throne of moloch for an extra dollar.

And 'safely' is today's sacrificed word.

This should surprise nobody.

jesse_dot_id•55m ago
It's probably because they now realize that AGI is impossible via LLM.
marcyb5st•52m ago
Wouldn't this give more ammunition to the lawsuit that Elon Musk filed against OpenAI?

Edit (link for context): https://www.bloomberg.com/news/articles/2026-01-17/musk-seek...

tyre•49m ago
I’m guessing this is tied to going public.

In the US, they would be sued for securities fraud every time their stock went down because of a bad news article about unsafe behavior.

They can now say in their S-1 that “our mission is not changing”, which is much better than “we’re changing our mission to remove safety as a priority.”

amelius•44m ago
First they deleted Open and now Safely. Where will this end?
charcircuit•43m ago
Safety is extremely annoying from the user perspective. AI should be following my values, not whatever an AI lab chose.
wiseowise•38m ago
This. This whole hysteria sounds like: let's prohibit knives because people kill themselves and each other with them!
Bnjoroge•42m ago
Did anyone actually think their sole purpose as an org is anything but make money? Even anthropic isnt any different, and I am very skeptical even of orgs such as A12
fragmede•15m ago
Yes, because there are many ways to make money and they chose this one instead of anything else.
OutOfHere•39m ago
Safety comes down to the tools that AI is granted access to. If you don't want the AI to facilitate harm, don't grant it unrestricted access to tools that do damage. As for mere knowledge output, it should never be censored.
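The tool-gating idea in this comment can be sketched as a minimal allowlist dispatcher (all names here are hypothetical, not from any real agent framework): a tool call requested by a model runs only if it is explicitly permitted, regardless of what handlers happen to exist.

```python
# Hypothetical sketch: gate every model-requested tool call through an
# explicit allowlist instead of trusting the model's output.
ALLOWED_TOOLS = {"search_docs", "read_file"}  # no shell, no destructive tools

def dispatch(tool_name, handler_table, **kwargs):
    """Run a model-requested tool only if it is on the allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not on the allowlist")
    return handler_table[tool_name](**kwargs)

handlers = {
    "search_docs": lambda query: f"results for {query}",
    "delete_everything": lambda: "boom",  # present, but never reachable
}

print(dispatch("search_docs", handlers, query="mission statement"))
# -> results for mission statement
# dispatch("delete_everything", handlers) raises PermissionError instead of running.
```

The design choice is that safety lives in the dispatcher, not in the model: even a fully uncensored model can only cause the side effects its granted tools allow.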
jsemrau•34m ago
Unlocked mature AI will win the adoption race. That's why I think China's models are better positioned.
techpression•23m ago
I mean, Sam Altman answered ”bioterrorism” when asked in a recent town hall what the most worrying thing from AI is right now. I don’t have the URL handy, but it should be easy to find.
asciii•8m ago
There should be a name change to reflect the closed nature of “Open”AI…imo
iugtmkbdfil834•5m ago
Honestly, it may be a contrarian opinion, but: good.

The ridiculous focus on 'safety' and 'alignment' has kept the US handicapped compared to other groups around the globe. I actually allowed myself to forgive Zuckerberg for a lot of the stuff he did based on what he did with Llama by 'releasing' it.

There is a reason Musk is currently getting his version of AI into government, and it is not just his natural levels of BS skills. Some of it is being able to see that 'safety' is genuinely neutering an otherwise useful product.