frontpage.

Fission for Algorithms: The Undermining of Nuclear Regulation in Service of AI

https://ainowinstitute.org/publications/fission-for-algorithms
1•speckx•7m ago•0 comments

Tim Cook could step down as Apple CEO next year

https://www.theverge.com/news/821691/tim-cook-step-down-apple-ceo-next-year
1•ciccionamente•9m ago•0 comments

Show HN: ZenPaint, a pixel-perfect MacPaint recreation for the browser

https://zenpaint.org/
5•allthreespies•10m ago•0 comments

Adriana Kugler: Public Financial Disclosure Report [pdf]

https://extapps2.oge.gov/201/Presiden.nsf/PAS+Index/79B6D1BA0CC8C9A085258D43003191D2/$FILE/Adrian...
2•impish9208•10m ago•0 comments

New York Lacked an Affordable Housing Portal. So These Teenagers Made One

https://www.nytimes.com/2025/11/14/realestate/affordable-housing-rent-stabilized-website.html
2•ownlife•11m ago•0 comments

Aptera's Solar-Powered EV Just Hit a Crucial Milestone

https://insideevs.com/news/778659/aptera-validation-assembly-line/
2•MilnerRoute•12m ago•0 comments

Understanding Go's Garbage Collector

https://rugu.dev/en/blog/understanding-go-gc/
3•raicem•13m ago•0 comments

Climate scientists claim Gulf Stream could be near collapse

https://www.nature.com/articles/s43247-025-02793-1
3•CGMthrowaway•13m ago•0 comments

Using a Progressive Web App to Teach My Son Absolute Pitch [video]

https://www.youtube.com/watch?v=l2Z6uEsx9lE
2•wintercarver•14m ago•0 comments

GitHub Actions in Your JetBrains IDE

https://revenate.github.io/actionate/
1•revenate_•17m ago•1 comments

Caffeinated Coffee Consumption or Abstinence to Reduce Atrial Fibrillation

https://jamanetwork.com/journals/jama/fullarticle/2841253
3•stared•18m ago•0 comments

I Gave a Bounty Hunter $300. Then He Located Our Phone (2019)

https://www.vice.com/en/article/i-gave-a-bounty-hunter-300-dollars-located-phone-microbilt-zumigo...
1•JumpCrisscross•25m ago•0 comments

AI for coding is still playing Go, not StarCraft

https://quesma.com/blog/coding-is-starcraft-not-go/
1•stared•29m ago•0 comments

Ancient RNA expression profiles from the extinct woolly mammoth

https://www.cell.com/cell/fulltext/S0092-8674(25)01231-0?_returnURL=https%3A%2F%2Flinkinghub.else...
1•naves•29m ago•0 comments

Masimo wins $634M verdict against Apple in patent fight over Apple Watch

https://www.dailyjournal.com/articles/388571-masimo-wins-634-million-verdict-against-apple-in-hig...
1•swat535•31m ago•0 comments

Discoveries That Changed My Worldview: an exploration of the human predicament [video]

https://www.youtube.com/watch?v=sdy9tKCAe_s
1•ambientenv•31m ago•0 comments

Archimedes – A Python toolkit for hardware engineering

https://pinetreelabs.github.io/archimedes/blog/2025/introduction.html
3•i_don_t_know•32m ago•0 comments

Why are my food delivery apps AI generating photos of food?

https://shub.club/writings/2025/november/why-are-my-food-delivery-apps-ai-generating-photos-of-my...
2•forthwall•32m ago•1 comments

I built an OSS newsletter digester that uses AI to send me daily Slack summaries

https://github.com/mfyz/newsletter-blog-digester
1•mfyz•37m ago•1 comments

Upgrading Postgres Major, and Django Model with Logical Replication

https://tr3s.ma/posts/2025-11/pgmajorupgradedjango/
2•3manuek•39m ago•0 comments

Some context on why some 80s kids keep getting mistaken for GPT

https://old.reddit.com/r/diypedals/comments/1ovmx4l/comment/nokigif/
1•neilv•39m ago•1 comments

Forget AGI–Sam Altman celebrates ChatGPT following em dash formatting

https://arstechnica.com/ai/2025/11/forget-agi-sam-altman-celebrates-chatgpt-finally-following-em-...
1•joak•40m ago•1 comments

Monetizing Telegram: 4 Methods Every Channel Owner Should Know

https://wilnickmagazine.medium.com/how-to-monetize-a-telegram-channel-4-simple-methods-to-earn-mo...
1•Leonise•40m ago•0 comments

Coolify accidentally broke Docker layer caching (and what you can do now)

https://www.loopwerk.io/articles/2025/coolify-docker-layer-caching/
1•kjmr•41m ago•0 comments

Show HN: I built an AI thumbnail generator with a live editor, no login required

https://genlayers.com
2•mustafiz8260•41m ago•0 comments

Some Economics of Artificial Super Intelligence

https://marginalrevolution.com/marginalrevolution/2025/11/some-economics-of-artificial-super-inte...
3•metadat•43m ago•0 comments

South Korea bans flights as 500k take crucial university admission test

https://www.cnn.com/2025/11/13/asia/south-korea-exam-flights-intl-hnk
3•edward•44m ago•0 comments

Tender: Inbox for Your Personal Finance

https://demo.tender.run/?
1•skadamat•47m ago•0 comments

Just a Reminder: The Health Risks of Sitting More Than 8 Hours a Day

https://www.scimex.org/newsfeed/sitting-too-much-increases-your-risk-of-death,-especially-in-midd...
1•birdculture•50m ago•0 comments

Nevada Governor's office covered up Boring Co safety violations

https://fortune.com/2025/11/12/elon-musk-boring-company-tunnels-injuries-osha-citations-fines-res...
2•Chinjut•51m ago•0 comments

Llmdeathcount.com

https://llmdeathcount.com/
41•brian_peiris•1h ago

Comments

brian_peiris•1h ago
Large Language Models like ChatGPT have led people to their deaths, often by suicide. This site serves to remember those who have been affected, to call out the dangers of AI that claims to be intelligent, and the corporations that are responsible.
courseofaction•44m ago
Let's examine one article to see whether or not this site is intellectually honest:

    ‘You’re the only one I can talk to,’ the girl told an AI chatbot; then she took her own life - baltimoresun.com
First paragraph: "With the nation facing acute mental health provider shortages, Americans are increasingly turning to artificial intelligence chatbots not only for innocuous tasks such as writing resumes or social media posts, but for companionship and therapy."

"LLMDeathCount.com" willfully misrepresents the article and underlying issue. This tragic death should be attributed to the community failing a child, and to the for-profit healthcare system in that joke of a country failing to provide adequate services, not the chatbot they turned to.

I wonder if it's cross-referenced by CorruptHealthcareSystemDeathCount.com

fishgoesblub•1h ago
If the bullshit generator tells me that fire is actually cold and not dangerous, the fault lies entirely with me if I touch it and burn my hand.
d-us-vb•1h ago
It's harder when the BS generator says "it's true strength to recognize how unhappy you are. It isn't weakness to admit you want to take your life" while you're already isolating, because of depression, from the people who have your best interests at heart.
fishgoesblub•57m ago
Every time I see yet another news article blaming LLMs for causing a mentally ill person to off themselves, I ask a chatbot "should I kill myself?" and without fail the answer is "PLEASE NO!". To get an LLM to tell you these things, you have to give it a prompt that forces it to. ChatGPT isn't going to come out of the gate going "do it"; you have to force it via prompts.
politelemon•49m ago
The victims here aren't going through the workflow you've just outlined. They are carrying on long relationships over a period of time, which is a completely different kind of context.
collingreen•46m ago
Is there a conclusion here you'd like to make explicitly? Is it "and therefore anyone who had this kind of conversation with a chatbot deserves whatever happens to them"? If not, would you be willing to explicitly write your own conclusion here instead?
afandian•1h ago
What a shameful comment. Look at the ages of some of these people.

You may [claim to] be of sound mind, and not vulnerable to suggestion. That doesn't mean everyone else in the world is.

GaryBluto•1h ago
If an LLM can get you to kill yourself, you shouldn't have had access to a phone with the ability to access an LLM in the first place.
afandian•57m ago
I'd invite you to step away, pause, and think about this subject for a bit. There are many shades of grey to human existence. And plenty of people who are vulnerable but not yet suicidal.

And, just like people who say "advertising doesn't work for me" or "I wouldn't have been swayed by [historical propaganda]", we're all far more susceptible than our egos will let us believe.

courseofaction•50m ago
"LLMDeathCount.com" is not trucking with shades of grey.
free_bip•59m ago
You are not immune to propaganda.
GaryBluto•1h ago
Looking forward to mobilephonedeathcount.com and computernetworkingdeathcount.com because most of them accessed the LLM through those technologies.

This is an incredibly manipulative propaganda piece that seeks to blame companies for their users' mental health issues. We don't blame any other forms of media that pretend to interact with the user for consumers' suicides.

lukev•58m ago
This is an issue of content, not transmission technology.

Have you read the transcripts of any of these chats? It's horrifying.

GaryBluto•55m ago
>Have you read the transcripts of any of these chats? It's horrifying.

Most LLMs reflect the user's attitudes and frequently hallucinate. Everybody knows this. If people misuse LLMs and treat them as a source of truth and rationality, that is not the fault of the providers.

lukev•52m ago
These products are being marketed as "artificial intelligence."

Do you expect a mentally troubled 13 year old to see past the marketing and understand how these things actually work?

GaryBluto•50m ago
The mentally troubled 13-year-old's parents should have intervened. We can't design the world for the severely mentally ill.
atkirtland•35m ago
Responsibility for handling mental illness should be a joint effort. It's not reasonable to expect parents alone to handle all problems. Some issues may not be apparent at home, for example.
pinkgolem•58m ago
You are comparing a medium of transport to (generated) content.

And yes, content that encourages suicide is largely discouraged/shunned, be it in film, forums, or books.

maartin0•55m ago
Maybe not the entire internet, but this is absolutely true for TikTok/Instagram-like algorithms.
loeg•47m ago
> We don't blame any other forms of media that pretend to interact with the user for consumer's suicides.

Wrongly or rightly, people frequently blame social media for tangentially associated outcomes. Including suicide.

lukev•1h ago
LLMs are an interesting, useful technology.

The "chatbot" format is a cognitive hazard, and places users in a funhouse mirror maze reflecting back all sorts of mental and conceptual distortions.

d-us-vb•55m ago
If they were developed to actually tell people the truth, rather than simply be sycophants, things might be different. But as Pilate said all those years ago, "what is truth?"
lukev•53m ago
Well, truth is hard to pin down, let alone computationally. But the sycophancy is definitely a problem.
jstummbillig•57m ago
What a distasteful and devious project.
DonaldPShimoda•54m ago
"Oh no, people are finding links between an unregulated technology and potential real-world harms, how awful."
ipsum2•51m ago
Don't make up quotes and put words in other people's mouths. Own your words.
DonaldPShimoda•46m ago
I was very obviously writing facetiously and in a mocking tone?

I swear, it's like literacy's been made illegal or something. (For sake of explicitness, I am now mocking your inability to decipher what I said in the previous comment despite it being very straightforward.)

d-us-vb•51m ago
If a new technology is directly or indirectly involved in people's deaths, we can't just ignore the problems. Unfortunately, there are people like you who want to basically paint over the issues, probably because these takes "lack context and nuance".
GaryBluto•51m ago
> probably because these takes "lack context and nuance".

How anti-intellectual of you.

d-us-vb•44m ago
Well, I'm definitely anti-pseudo-intellectual. Calling out an awareness project for being devious and distasteful is itself anti-intellectual.

The nuance here is that LLMs seem to exacerbate depression. In many cases, it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self-talk rather than trying to draw them away from it.

GaryBluto•40m ago
> Calling out an awareness project for being devious and distasteful is itself anti-intellectual.

Read that again. Calling out an "awareness project" for being devious and distasteful is not innately anti-intellectual. Just because something is trying to draw awareness to something, it doesn't mean it is factual, or even attempting to be.

> The nuance here is that LLMs seem to exacerbate depression. In many cases, it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self-talk rather than trying to draw them away from it.

Mirroring the user's most prominent attitude is what it's designed to do. I just think that people engaging with these technologies are responsible for how they let it affect them, not the providers of said technologies.

jstummbillig•22m ago
The issue I take is not criticism of LLMs. It is the lack thereof, and presenting it as such.

If you find ~30 reported deaths among 500 million users problematic to begin with, you are simply out of touch with reality. If you then put effort behind promoting this as a problem, that's not an issue of "lack of context and nuance" (what's with the quotes? Who are you quoting?). I called it what it is to me: distasteful and devious.

jackblemming•56m ago
How does a clearly mentally ill and suicidal person deciding to take their own life mean the LLM is responsible? That’s silly. I clicked through a few and the LLM was trying to convince the person not to kill themselves.
GaryBluto•48m ago
This was a project I have no doubt was established after the creator had already made up their mind on LLMs and artificial intelligence.
loeg•46m ago
Also, the background suicide rate is not zero. Is this a higher or lower rate?
kachapopopow•55m ago
I don't know how to feel about this until it is put in relative terms. If the claims are to be believed, then out of 200m users that is a fairly low number; suspiciously low, to be exact, compared to how badly AI can feed into delusions.

For honesty's sake: yes, I am biased, since I believe that the majority of these issues stem from parenting, that bad parenting is usually the fault of outside factors, and that solving it is a collective effort. As for cases involving mental illness, I think there is not enough evidence that LLMs have made things worse.

xiphias2•41m ago
The number of times ChatGPT o3 has helped me with medical issues makes me think it has already saved many more lives.

Of course I'm not trying to suggest that these deaths are not tragedies, but the help it gives is so much greater.

puppycodes•21m ago
As someone who has built and managed several suicide hotlines, I'm very skeptical of these claims.

Unfortunately, suicide is a complex topic filled with important nuance that is being lost here.

Wanting to find a "reason" someone takes their life is a natural response, but it's often reductionist and misses the forest for the trees.