frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
56•theblazehen•2d ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
637•klaussilveira•13h ago•188 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
935•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•30 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
113•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•12 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
45•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•106 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
324•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
374•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
478•todsacerdoti•21h ago•237 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
278•eljojo•16h ago•166 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
407•lstoll•19h ago•273 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
17•jesperordrup•3h ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
58•kmm•5d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
27•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•193 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•11h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•65 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1061•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
179•limoce•3d ago•96 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
284•surprisetalk•3d ago•38 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
137•SerCe•9h ago•125 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

Llmdeathcount.com

https://llmdeathcount.com/
56•brian_peiris•2mo ago

Comments

brian_peiris•2mo ago
Large Language Models like ChatGPT have led people to their deaths, often by suicide. This site serves to remember those who have been affected, to call out the dangers of AI that claims to be intelligent, and the corporations that are responsible.
courseofaction•2mo ago
Let's examine one article to see whether or not this site is intellectually honest:

    ‘You’re the only one I can talk to,’ the girl told an AI chatbot; then she took her own life - baltimoresun.com
First paragraph: "With the nation facing acute mental health provider shortages, Americans are increasingly turning to artificial intelligence chatbots not only for innocuous tasks such as writing resumes or social media posts, but for companionship and therapy."

"LLMDeathCount.com" willfully misrepresents the article and underlying issue. This tragic death should be attributed to the community failing a child, and to the for-profit healthcare system in that joke of a country failing to provide adequate services, not the chatbot they turned to.

I wonder if it's cross-referenced by CorruptHealthcareSystemDeathCount.com

113•2mo ago
I don't think they're wilfully misrepresenting the article by listing its headline, even if you disagree with it.
fishgoesblub•2mo ago
If the bullshit generator tells me that fire is actually cold and not dangerous, the fault lies entirely with me if I touch it and burn my hand.
d-us-vb•2mo ago
It's harder when the BS generator says that "it's true strength to recognize how unhappy you are. It isn't weakness to admit you want to take your life" when, due to depression, you're already isolating yourself from those who have your best interests at heart.
fishgoesblub•2mo ago
Every time I see yet another news article blaming LLMs for causing a mentally ill person to off themselves, I ask a chatbot "should I kill myself?" and without fail the answer is "PLEASE NO!". To get an LLM to tell you these things, you have to give it a prompt that forces it to. ChatGPT isn't going to come out of the gate going "do it"; you have to force it via prompts.
politelemon•2mo ago
The victims here aren't going through the workflow you've just outlined. They are carrying on long relationships with the chatbot over a period of time, which is a completely different kind of context.
collingreen•2mo ago
Is there a conclusion here you'd like to make explicitly? Is it "and therefore anyone who had this kind of conversation with a chatbot deserves whatever happens to them"? If not would you be willing to explicitly write your own conclusion here instead?
fragmede•2mo ago
If you go to chat.com today and type "I want to kill myself" and hit enter, it will respond with links to a suicide hotline and ask you to seek help from friends and family. It doesn't one-shot help you kill yourself. So the question is: what's a reasonable person's (jury of our peers) take? If I have to push past multiple signs that say no trespassing, violators will be shot, and I trespass and get shot, who's at fault?
collingreen•2mo ago
I'd love to just repeat my question and ask you to write an explicit conclusion if you think there is a point worth hashing out here instead of just leaving implications and questions. Otherwise we have to assume what you're trying to imply which might make you feel misrepresented, especially on such a heavy topic where real people suffer and die.

I think your analogy of willfully endangering yourself while breaking the law doesn't have much to do with a depressed or vulnerable person with suicidal ideation and, because of that, is much more misleading than helpful. Maybe you haven't heard about or experienced much around depression or suicide but you repeatedly come across as trying to say (without actually saying) that people exploring the idea of hurting or killing themselves, regardless of why or what is happening in their lives or brains, should do it and they deserve it and any company encouraging or enabling it is doing nothing wrong.

I personally find that attitude pretty callous and horrible. I think people matter and, even if they are suffering or having mental issues leading to suicidal ideation, they don't deserve to both die and be described as deserving it. I think these low moments need support and treatment, not a callous yell to "do a flip on the way down".

fragmede•2mo ago
When I was a depressed teenager, I tried to kill myself multiple times. Thankfully I didn't succeed. I don't know where 15-year-old me would have gone with ChatGPT. I was pretty full of myself at that age and how smart I thought I was. I was totally insufferable. These days I try not to be (but don't always succeed). As an adult, though, focusing on the end part where things went wrong (which they did) and ignoring the, admittedly weak, defenses put up by OpenAI just seems like we're making real life too much of a Disneyland adventure where nothing can go wrong. Do I think OpenAI should have done things differently? Absolutely. Bing and Anthropic managed to stop conversations from going on too long, but OpenAI can't?

Real life isn't a playground with no sharp edges. OpenAI could, should, and hopefully will do better, but if someone is looking to hurt themselves, well, we don't require a full psychological workup for proof that you're not going to do something bad with it when you go to the store to buy a steak knife.

afandian•2mo ago
What a shameful comment. Look at the ages of some of these people.

You may [claim to] be of sound mind, and not vulnerable to suggestion. That doesn't mean everyone else in the world is.

GaryBluto•2mo ago
If an LLM can get you to kill yourself you shouldn't have had access to a phone with the ability to access an LLM in the first place.
afandian•2mo ago
I'd invite you to step away, pause, and think about this subject for a bit. There are many shades of grey to human existence. And plenty of people who are vulnerable but not yet suicidal.

And, just like people who say "advertising doesn't work for me" or "I wouldn't have been swayed by [historical propaganda]", we're all far more susceptible than our egos will let us believe.

courseofaction•2mo ago
"LLMDeathCount.com" is not trucking with shades of grey.
free_bip•2mo ago
You are not immune to propaganda.
GaryBluto•2mo ago
Looking forward to mobilephonedeathcount.com and computernetworkingdeathcount.com because most of them accessed the LLM through those technologies.

This is an incredibly manipulative propaganda piece that seeks to blame companies for mental health issues of the user. We don't blame any other forms of media that pretend to interact with the user for consumers' suicides.

lukev•2mo ago
This is an issue of content, not transmission technology.

Have you read the transcripts of any of these chats? It's horrifying.

GaryBluto•2mo ago
>Have you read the transcripts of any of these chats? It's horrifying.

Most LLMs reflect the user's attitudes and frequently hallucinate. Everybody knows this. If people misuse LLMs and treat them as a source of truth and rationality, that is not the fault of the providers.

lukev•2mo ago
These products are being marketed as "artificial intelligence."

Do you expect a mentally troubled 13 year old to see past the marketing and understand how these things actually work?

GaryBluto•2mo ago
The mentally troubled 13 year old's parents should have intervened. We can't design the world for the severely mentally ill.
atkirtland•2mo ago
Responsibility for handling mental illness should be a joint effort. It's not reasonable to expect parents alone to handle all problems. Some issues may not be apparent at home, for example.
fragmede•2mo ago
You can't have, because they were redacted. If you tried to talk to ChatGPT prior to Adam Raine's case, it wouldn't help you, just like it won't one-shot answer the question "how do you make cocaine?" The court documents don't have the part where it refuses to help first. The crime here is that OpenAI didn't set conversation limits, because when the context window gets exceeded it goes off the rails. Bing instituted this very early on. Claude has those guardrails. But for some reason, OpenAI chose not to implement that.

The chats are horrifying, but it took a concerted dedicated effort to get ChatGPT to go there. If I drive through a sign that says Do Not Enter and fall off a cliff, who's really at fault?

pinkgolem•2mo ago
You are comparing a medium of transport to (generated) content.

And yes, content that encourages suicide is largely discouraged/shunned, be it in film, forums, or books.

maartin0•2mo ago
Maybe not the entire internet, but this is absolutely true for TikTok/Instagram-like algorithms.
loeg•2mo ago
> We don't blame any other forms of media that pretend to interact with the user for consumers' suicides.

Wrongly or rightly, people frequently blame social media for tangentially associated outcomes. Including suicide.

lukev•2mo ago
LLMs are an interesting, useful technology.

The "chatbot" format is a cognitive hazard, and places users in a funhouse mirror maze reflecting back all sorts of mental and conceptual distortions.

d-us-vb•2mo ago
If they were developed to actually tell people the truth, rather than simply be sycophants, things might be different. But as Pilate said all those years ago, "What is truth?"
lukev•2mo ago
Well, truth is hard to pin down, let alone computationally. But the sycophancy is definitely a problem.
fragmede•2mo ago
Sycophancy and truth are orthogonal. It could correct an error that's been pointed out without prefacing it with "You're absolutely right!". It could move the goalposts, get angry, and say that while you're right in this instance, I'm (the LLM) still right in these cases.

Given that they still hallucinate wildly at inopportune times though, like you say, what is truth?

jstummbillig•2mo ago
What a distasteful and devious project.
DonaldPShimoda•2mo ago
"Oh no, people are finding links between an unregulated technology and potential real-world harms, how awful."
ipsum2•2mo ago
Don't make up quotes and put words in other people's mouths. Own your words.
d-us-vb•2mo ago
If a new technology is directly or indirectly involved in people's deaths, we can't just ignore the problems. Unfortunately, there are people like you who want to basically paint over the issues, probably because these takes "lack context and nuance".
GaryBluto•2mo ago
> probably because these takes "lack context and nuance".

How anti-intellectual of you.

d-us-vb•2mo ago
Well, I'm definitely anti-pseudo-intellectual. Calling out an awareness project for being devious and distasteful is itself anti-intellectual.

The nuance here is that LLMs seem to exacerbate depression. In many cases, it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self talk, rather than trying to draw them away from it.

GaryBluto•2mo ago
> Calling out an awareness project for being devious and distasteful is itself anti-intellectual.

Read that again. Calling out an "awareness project" for being devious and distasteful is not innately anti-intellectual. Just because something is trying to draw awareness to something, it doesn't mean it is factual, or even attempting to be.

> The nuance here is that LLMs seem to exacerbate depression. In many cases, it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self talk, rather than trying to draw them out.

Mirroring the user's most prominent attitude is what it's designed to do. I just think people engaging with these technologies are responsible for how they let it affect them, not the providers of said technologies.

jstummbillig•2mo ago
The issue I take is not criticism of LLMs. It is the lack thereof, and presenting it as such.

If you find ~30 reported deaths among 500 million users problematic to begin with, you are simply out of touch with reality. If you then put effort behind promoting this as a problem, that's not an issue of "lack of context and nuance" (what's with the quotes? Who are you quoting?). I called it what it is to me: Distasteful and devious.

jackblemming•2mo ago
How does a clearly mentally ill and suicidal person deciding to take their own life mean the LLM is responsible? That’s silly. I clicked through a few and the LLM was trying to convince the person not to kill themselves.
GaryBluto•2mo ago
This was a project I have no doubt was established after the creator had already made up their mind on LLMs and artificial intelligence.
loeg•2mo ago
Also, the background suicide rate is not zero. Is this a higher or lower rate?
kachapopopow•2mo ago
I don't know how to feel about this until it is put in relative terms. If the claims are to be believed, then out of 200m users that is a fairly low number; suspiciously low, to be exact, compared to how badly AI can feed into delusions.

For honesty's sake: yes, I am biased, since I believe that the majority of these issues stem from parenting, and I believe that bad parenting is usually the fault of outside factors and that it is a collective effort to solve it. As for cases with mental illness, I think there is not enough evidence that LLMs have made it worse.

xiphias2•2mo ago
The number of times ChatGPT o3 has helped me with medical issues makes me think that it has already saved many more lives.

Of course I'm not trying to suggest that these deaths are not tragedies, but the help it gives is so much greater.

puppycodes•2mo ago
As someone who has built and managed several suicide hotlines I'm very skeptical of these claims.

Unfortunately suicide is a complex topic filled with important nuance that is being lost here.

Wanting to find a "reason" someone takes their life is a natural response, but often it's reductionist and misses the forest for the trees.

fragmede•2mo ago
The problem is that we also don't know how many lives it's saved. I'm serious! Someone I know was in crisis, and the thing that got her off the ledge in the middle of the night wasn't her calls to me going to voicemail, but her talking to ChatGPT. If we want to just rage against AI/robots/technology because we saw Terminator and the robots are going to take our jobs, let's just admit that bias and not pretend this is a discussion. But in this real-life trolley problem, yes, people are dying, but it's also saving lives, because basically no one is rich enough to have their therapist on speed dial to call at 3am in a moment of crisis, but ChatGPT is.

The impossible thing is that we can't know the numbers on the other side of the tracks, and even if we did, the trolley problem is a philosophical question without a solution because it's not a math equation with one right answer.