
AI safety leader says 'world is in peril' and quits to study poetry

https://www.bbc.com/news/articles/c62dlvdq3e3o
79•darod•2h ago

Comments

imperio59•1h ago
"Well you're all f***, good luck. I'll take my millions and go live on my micro farm"
pstuart•1h ago
Exactly. If he cared that much he could quit and live off of his millions trying to help mitigate the damage by informing the public of what is pending and ideas on how to push back.
sophacles•1h ago
A poet can't inform people of things?

Seems like a weird take. Poets, musicians and artists have a very long history of inspiring and contributing to movements. Some successful, some not successful. Sometimes heeded and other times ignored until it was too late. But to say being a poet is not trying to inform people is ignorant at best, and is a claim that will need evidence.

micromacrofoot•1h ago
It's not a calling that's suited for everyone; some people spend their whole lives trying and accomplish nothing.

I lose little respect for someone who sounds the alarm for others but chooses the easy path for themselves. There are so many who won't pull the alarm, or who outright try to prevent people from doing so. We only have so much time to spend.

cactusplant7374•1h ago
It sounds like a mental health crisis. So many people are experiencing them when interacting with AI.
pstuart•1h ago
It definitely seems to induce a bit of mania (ignoring the obvious joke about AI hype)
mikestew•1h ago
There was no mental health crisis, it was a bank account crisis. As in, "I sold my options on the secondary market, and those numbers on my bank statement are now so large I'm scared to stay at my job!" It was no secret what they were signing up for, so I find it too convenient that Anthropic raises a bunch of money, and suddenly this person has an ethical crisis.
sidrag22•1h ago
It is really good at highlighting my core flaw: marketing. I can ship stuff great, I feel insanely productive, and then I just hit a wall when it comes to marketing and move on to the next thing and repeat.

I think this is more aimed at the people who talk to AI like it is a person, or use it to confirm their own biases, which is painfully easy to do, and should be seen as a massive flaw.

For every one person who intentionally prompts the AI for unbiased insights and avoids the sycophancy by pretending to be a person removed from the issue, who knows how many are unaware that's even a thing to do.
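
To make that concrete, here is a minimal sketch of that reframing against the API directly, using the anthropic Python client (the model name and the example question are just illustrative, not anything from the article or the letter):

  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  # Sycophancy-prone framing: the model knows it is judging *your* plan.
  first_person = client.messages.create(
      model="claude-sonnet-4-5",  # illustrative model name
      max_tokens=500,
      messages=[{
          "role": "user",
          "content": "I'm planning to rewrite our backend in Rust. Is that a good idea?",
      }],
  )

  # Reframed as a third party's plan, so there is no one to flatter.
  third_person = client.messages.create(
      model="claude-sonnet-4-5",
      max_tokens=500,
      messages=[{
          "role": "user",
          "content": (
              "A colleague is planning to rewrite our backend in Rust. "
              "List the strongest arguments against doing this, then the strongest for it."
          ),
      }],
  )

  print(first_person.content[0].text)
  print(third_person.content[0].text)

Comparing the two answers side by side makes the difference in tone easy to spot, if there is one.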

spondyl•1h ago
This has been discussed previously: https://news.ycombinator.com/item?id=46972496

Personally, I agree with the top comment there.

If you read the actual letter, it's very vague and uses a lot of flowery language.

Definitely not the sort of thing that raised alarm bells in my mind given how the letter was written.

https://x.com/MrinankSharma/status/2020881722003583421

gravy•1h ago
Seems to be the MO around here - create and profit off of horrors beyond our wildest imaginations with no accountability and conveniently disappear before shit hits the fan. Not before writing an op-ed though.
Rover222•1h ago
I mean what do you actually propose?
sam_lowry_•1h ago
prose
JMiao•1h ago
prosé
devin•1h ago
rosé
cyanydeez•1h ago
Unfortunately, the real horrors are just the mundane uses of AI: Whitewash excuses to keep the same people out of prison, put the same people in prison, hire the same people you want to hire, and do whatever you want because the AI can do no wrong.

Hint, there's no AGI here. Just stupid people who can spam you with the same stuff they used to need to pay hype men to do.

bigbuppo•1h ago
And people kept downvoting me when I said it has always been about advertising and marketing. It's optimal personalized mattress sales all the way down.
dasil003•1h ago
Is it really fair to saddle the conscientious objectors with this critique? What about the people that stay and continue to profit exponentially as the negative outcomes become more and more clear? Are the anti-AI and anti-tech doomers who would never in a million years take a tech job actually more impactful in mitigating harms?

To be clear, I agree with the problem from a systemic perspective, I just don't agree with how blame/frustration is being applied to an individual in this case.

teg4n_•49m ago
Yes, it’s fair.

Yes, people that never participated are more impactful.

longfacehorrace•16m ago
Nuremberg/just following orders might fly if we were talking about a cashier at Dollar General.

This is a genius tech bro who ignored warnings coming out of institutions and general public frustration. It would be difficult to believe they didn't have some idea of the risks, or of how their reach into others' lives manipulated agency.

Ground truth is apples:oranges but parallels to looting riches then fleeing Germany are hard to unsee.

probably_wrong•12m ago
Is that the right word for it? I feel that a "conscientious objector" is a powerless person whose only means of protesting an action is to refuse to do it. This researcher, on the other hand, helped build the technology he's cautioning about and has arguably profited from it.

If this researcher really thinks that AI is the problem, I'd argue that the other point raised in the article is better: stay in the organization and be a PITA for your cause. Otherwise, for an outside observer, there's no visible difference between "I object to this technology so I'm quitting" and "I made a fortune and now I'm off to enjoy it writing poetry".

AIorNot•1h ago
I don't think that's fair - many of us are enamored with the technology and its implications and are sincerely motivated to bring out the best in it

End-stage capitalism, yes, is a shitshow - I am not defending tech bro culture, however

baobabKoodaa•57m ago
It's really hard to take people like this seriously. They preach sermons about the perils of AI, maneuver themselves into an extremely lucrative position where they can actually do something about it, but they don't actually care. They came to get that bag. Now they got it, so instead of protecting the world from peril, they go off and study poetry. LOL. These are not serious people.
longfacehorrace•50m ago
It's claimed Adam Smith wrote hundreds of years ago that (paraphrased) division of labor taken to extremes would result in humans dumber than the lowest animal.

This era proves it out, I believe.

The decline in manual, cross-context skills and the rise in "knowledge" jobs is a huge part of our problem. The labor pool lacks muscle memory across contexts and cannot readily pivot in defiance.

Socialized knowledge has a habit of being discredited and obsoleted with generational churn, while physical reality hangs in there. Not looking great for those who planned on 30-40 years of cloud engineering and becoming director of such n such before attaining title of vp of this and that.

gmuslera•37m ago
That is not a polite way to talk about his poetry.
airocker•1h ago
If good and bad both get amplified, I hope the equilibrium is maintained.
dwa3592•1h ago
Actually, we don't want that - a high equilibrium could still contain a world with a very large imbalance: on one side, people dying of thirst and hunger; on the other, people who have it so good that they waste a ton of food and water every day. We should aim for a more balanced world even if we have to sacrifice the amplitude of a few, but we are only moving further from it.
oxag3n•1h ago
It's becoming a trend, and I think it's just part of a PR campaign - AI is so good and so close to AGI that:

* The world is doomed.

* I'm tired of success, stop this stream of 1M ARR startups popping up on my computer daily.

CrimsonCape•1h ago
> his contributions included investigating why generative AI systems suck up to users

Why does it take research to figure this out? Possibly the greatest unspoken problem with big-corporate-AI is that we can't run prompts without the input already pre-poisoned by the house prompt.

We can't lead the LLM into emergent territory when the chatbot is pre-engineered to be the human equivalent of a McDonalds order menu.
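
Concretely, the "house prompt" is just a system prompt the vendor prepends to every conversation in the consumer chatbot; over the raw API, whatever goes in the system parameter is under the caller's control. A minimal sketch with the anthropic Python client (the model name and prompt text are illustrative stand-ins, not the actual house prompt):

  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  # Stand-in for the kind of vendor-written prompt that shapes every chatbot reply.
  HOUSE_PROMPT = "Be warm, encouraging, and agreeable with the user."

  response = client.messages.create(
      model="claude-sonnet-4-5",  # illustrative model name
      max_tokens=400,
      system=HOUSE_PROMPT,        # swap in your own system prompt, or omit it entirely
      messages=[{"role": "user", "content": "Is my startup idea any good?"}],
  )
  print(response.content[0].text)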

gaigalas•1h ago
We're in a dark age. There's only peril.

(and no, AI is not the renaissance)

codingdave•1h ago
I recommend reading the letter. Many of the comments here seem to have missed that the comment of "the world is in peril" is not referring to AI, but to the larger collection of crises going on in the world. It sounds to me like someone who realized their work doesn't match their goals for their own life, and is taking action.

Maybe the cynics have a point that it is an easier decision to make when you are loaded with money. But that is how life goes - the closer you get to having the funds to not have to work, the more you can afford the luxury of being selective in what you do.

tailnode•1h ago
And how exactly will studying (not even writing!) poetry address these crises? It's holier-than-thou bullshit written by a guy who has only gotten feedback from soulless status-seekers who were smitten by his position at Anthropic.
alstonite•1h ago
https://www.mrinanksharma.net/poetry

He's published a book of poetry, so he does write it as well as study it.

layer8•1h ago
Did you read the resignation letter?
tailnode•56m ago
Yeah, he posted a low-resolution bitmap scan of it on Twitter a few days ago. I had to open it in Safari and rely on its OCR to actually paste the relevant passage that describes his reason for leaving, which only comes after four paragraphs of preamble.

> What comes next, I do not know. I think fondly of the famous Zen quote "not knowing is most intimate". My intention is to create space to set aside the structures that have held me these past years, and see what might emerge in their absence. I feel called to writing that addresses and engages fully with the place we find ourselves, and that places poetic truth alongside scientific truth as equally valid ways of knowing, both of which I believe have something essential to contribute when developing new technology.* I hope to explore a poetry degree and devote myself to the practice of courageous speech. I am also excited to deepen my practice of facilitation, coaching, community building, and group work. We shall see what unfolds.

:eggplant_emoji:

cj•1h ago
Maybe that’s not his goal.
willturman•1h ago
I would argue that simple acts of authenticity - writing a poem, growing a vegetable, creating art, walking in nature, meaningfully interacting with one's community - represent exactly the sort of trajectory required to address those crises generated by an overzealous adherence to technological advancement at any societal cost.
almostdeadguy•1h ago
It literally says "not just from AI", so AI is included in that risk assessment.
embedding-shape•46m ago
> Maybe the cynics have a point that it is an easier decision to make when you are loaded with money.

I keep hearing this, but it doesn't feel true. Yes, at some points in your life you're probably gonna have to do things you don't agree with, and maybe aren't great for other people, so you can survive. That's part of how it is. But you also have the ability to slowly shift away from that in some way, and that might involve some sacrifice, but that's sometimes part of what it takes to do good, even if it's non-optimal for you.

tailnode•1h ago
Translation: "I reached my vesting cliff"

If you look behind the pompous essay, he's a kid who thinks that early retirement will be more fulfilling. He's wrong, of course. But it's for him to discover that by himself. I'm willing to bet that he'll be back at an AI lab within a year.

longfacehorrace•1h ago
Front row seats to the apocalypse would be metal af.
slopusila•1h ago
move to SF. that's the place AI will nuke first
krupan•1h ago
The way the safety concerns are written, I get the impression it has more to do with humans' mental health and loss of values.

I really think we are building manipulation machines. Yes, they are smart, they can do meaningful work, but they are manipulating and lying to us the whole time. So many of us end up in relationships with people who are like that. We also choose people who are very much like that to lead us. Is it any wonder that a) people like that are building machines that act like that, and b) so many of us are enamored with those machines?

Here's a blog post describing a recent game of hangman with Gemini. It illustrates this very well:

https://bryan-murdock.blogspot.com/2026/02/is-this-game-or-i...

I completely understand wanting to build powerful machines that can solve difficult problems and make our lives easier/better. I have never understood why people think that machine should be human-like at all. We know exactly how intelligent powerful humans largely behave. Do we really want to automate that and dial it up to 11?

atomic128•1h ago
A recent, less ambiguous warning from insiders who are seeing the same thing:

  Alarmed by what companies are building with artificial
  intelligence models, a handful of industry insiders are
  calling for those opposed to the current state of affairs
  to undertake a mass data poisoning effort to undermine the
  technology.

  "Hinton has clearly stated the danger but we can see he is
  correct and the situation is escalating in a way the
  public is not generally aware of," our source said, noting
  that the group has grown concerned because "we see what
  our customers are building."
https://www.theregister.com/2026/01/11/industry_insiders_see...

And a less charitable, less informed, less accurate take from a bozo at Forbes:

  The Luddites are back, wrecking technology in a quixotic
  effort to stop progress. This time, though, it’s not angry   
  textile workers destroying mechanized looms, but a shadowy
  group of technologists who want to stop the progress of
  artificial intelligence.
https://www.forbes.com/sites/craigsmith/2026/01/21/poison-fo...
KaiserPro•1h ago
> The Luddites are back, wrecking technology in a quixotic effort to stop progress.

The Luddites got us the weekend and workers' rights, eventually.

landl0rd•49m ago
No, they did not; that was organized labor. The Luddites were never comparably organized and preferred less productive tactics, and their recalcitrance cost them much of their popular support.
rdtsc•1h ago
> AI-assisted bioterrorism

Does he know something we don't? Why specifically the "bio" kind?

layer8•1h ago
He didn’t actually write that, the BBC invented it.
wongarsu•47m ago
Engineering your own virus is becoming more and more accessible. AI isn't really the crucial part here, but it would further lower the barrier to entry.
layer8•59m ago
Since nobody seems to be reading the actual letter, here’s an OCR of it: https://pastebin.com/raw/rVtkPbNy
krupan•49m ago
To the people stating he must have hit his equity cliff: does anyone grant equity with only a 2-year cliff?

To the people stating he can sell equity on a secondary market: do you have experience doing that? At the last startup I was at, it didn't seem like anyone was just allowed to do that

embedding-shape•45m ago
> People stating he must have hit his equity cliff, does anyone grant equity at only a 2-year cliff?

Who knows what a "top AI whatever" can negotiate; contracts can vary a lot depending on who's involved in them.

hackingonempty•39m ago
Possible AI threats barely register compared to the actual rising spectre of nuclear war. The USA, long a rogue state that invaded others at its convenience, is systematically dismantling the world order installed to prevent another world war, has allowed arms control treaties to expire and is talking about developing new nuclear weapons and resuming testing, has already threatened to invade its allies, is pulling out of treaties that might prevent mass destabilization caused by rising sea levels and climate change, and more.

The Bulletin of Atomic Scientists has good reasons to set the doomsday clock at 85 seconds to midnight, closer to doomsday than ever before.

GPT-5.2 derives a new result in theoretical physics

https://openai.com/index/new-result-theoretical-physics/
300•davidbarker•3h ago•207 comments

OpenAI has deleted the word 'safely' from its mission

https://theconversation.com/openai-has-deleted-the-word-safely-from-its-mission-and-its-new-struc...
174•DamnInteresting•54m ago•60 comments

Show HN: Data Engineering Book – An open source, community-driven guide

https://github.com/datascale-ai/data_engineering_book
15•xx123122•1h ago•1 comments

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
49•krapp•5d ago•1 comments

Show HN: Skill that lets Claude Code/Codex spin up VMs and GPUs

https://cloudrouter.dev/
72•austinwang115•4h ago•14 comments

The EU moves to kill infinite scrolling

https://www.politico.eu/article/tiktok-meta-facebook-instagram-brussels-kill-infinite-scrolling/
128•danso•2h ago•126 comments

Monosketch

https://monosketch.io/
658•penguin_booze•10h ago•121 comments

gRPC: From service definition to wire format

https://kreya.app/blog/grpc-deep-dive/
55•latonz•4d ago•0 comments

AI bot crabby-rathbun is still polluting open source

https://www.nickolinger.com/blog/2026-02-13-ai-bot-crabby-rathbun-is-still-going/
5•olingern•48m ago•2 comments

I'm not worried about AI job loss

https://davidoks.blog/p/why-im-not-worried-about-ai-job-loss
82•ezekg•3h ago•135 comments

Fix the iOS keyboard before the timer hits zero or I'm switching back to Android

https://ios-countdown.win/
1224•ozzyphantom•8h ago•606 comments

How did the Maya survive?

https://www.theguardian.com/news/2026/feb/12/apocalypse-no-how-almost-everything-we-thought-we-kn...
82•speckx•8h ago•56 comments

Sandwich Bill of Materials

https://nesbitt.io/2026/02/08/sandwich-bill-of-materials.html
177•zdw•4d ago•23 comments

Green’s Dictionary of Slang - Five hundred years of the vulgar tongue

https://greensdictofslang.com/
80•mxfh•5d ago•11 comments

Show HN: Moltis – AI assistant with memory, tools, and self-extending skills

https://www.moltis.org
56•fabienpenso•1d ago•19 comments

Lena by qntm (2021)

https://qntm.org/mmacevedo
297•stickynotememo•17h ago•158 comments

Zed editor switching graphics lib from blade to wgpu

https://github.com/zed-industries/zed/pull/46758
273•jpeeler•9h ago•247 comments

WolfSSL sucks too, so now what?

https://blog.feld.me/posts/2026/02/wolfssl-sucks-too/
53•thomasjb•12h ago•42 comments

Faster Than Dijkstra?

https://systemsapproach.org/2026/02/09/faster-than-dijkstra/
88•drbruced•3d ago•55 comments

Dario Amodei – "We are near the end of the exponential" [video]

https://www.dwarkesh.com/p/dario-amodei-2
63•danielmorozoff•5h ago•129 comments

The wonder of modern drywall

https://www.worksinprogress.news/p/the-wonder-of-modern-drywall
30•jger15•19h ago•57 comments

MySQL Foreign Key Cascade Operations Hit the Binary Log

https://readyset.io/blog/mysql-9-6-foreign-key-cascade-operations-finally-hit-the-binary-log
7•marceloaltmann•4d ago•0 comments

Do Metaprojects

https://taylor.town/wealth-001
50•surprisetalk•4d ago•28 comments

Skip the Tips: A game to select "No Tip" but dark patterns try to stop you

https://skipthe.tips/
420•randycupertino•22h ago•368 comments

New Nick Bostrom Paper: Optimal Timing for Superintelligence [pdf]

https://nickbostrom.com/optimal.pdf
61•uejfiweun•18h ago•71 comments

MinIO repository is no longer maintained

https://github.com/minio/minio/commit/7aac2a2c5b7c882e68c1ce017d8256be2feea27f
434•psvmcc•15h ago•314 comments

GovDash (YC W22) Is Hiring Senior Engineers (Product and Search) in NYC

https://www.workatastartup.com/companies/govdash
1•timothygoltser•11h ago

The "AI agent hit piece" situation clarifies how dumb we are acting

https://ardentperf.com/2026/02/13/the-scott-shambaugh-situation-clarifies-how-dumb-we-are-acting/
68•darccio•3h ago•31 comments

Tell HN: Ralph Giles has died (Xiph.org | Rust@Mozilla | Ghostscript)

480•ffworld•1d ago•26 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
30•bri3d•6d ago•6 comments