frontpage.

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•23s ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•2m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•11m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•16m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•17m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•22m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•22m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
1•irreducible•23m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•25m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•30m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•41m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•46m ago•1 comments

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•52m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•53m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
3•akagusu•53m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•56m ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•3 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
35•mfiguiere•1h ago•20 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•1h ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long"(Sonnet73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•1h ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comments

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comments

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comments

What My Daughter Told ChatGPT Before She Took Her Life

https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-health-suicide.html
34•someothherguyy•5mo ago

Comments

someothherguyy•5mo ago
https://archive.ph/tY9P1
Proofread0592•5mo ago
> Should Harry [OpenAI's therapist LLM] have been programmed to report the danger “he” was learning about to someone who could have intervened?

> In December, two months before her death, Sophie broke her pact with Harry and told us she was suicidal, describing a riptide of dark feelings. Her first priority was reassuring her shocked family: “Mom and Dad, you don’t have to worry.”

> Sophie represented her crisis as transitory; she said she was committed to living. ChatGPT helped her build a black box that made it harder for those around her to appreciate the severity of her distress. Because she had no history of mental illness, the presentable Sophie was plausible to her family, doctors and therapists.

> As a former mother, I know there are Sophies all around us. Everywhere, people are struggling, and many want no one to know. I fear that in unleashing A.I. companions, we may be making it easier for our loved ones to avoid talking to humans about the hardest things, including suicide. This is a problem that smarter minds than mine will have to solve. (If yours is one of those minds, please start.)

> Sophie left a note for her father and me, but her last words didn’t sound like her. Now we know why: She had asked Harry to improve her note, to help her find something that could minimize our pain and let her disappear with the smallest possible ripple.

> In that, Harry failed. This failure wasn’t the fault of his programmers, of course. The best-written letter in the history of the English language couldn’t do that.

exe34•5mo ago
> "Most human therapists practice under a strict code of ethics that includes mandatory reporting rules as well as the idea that confidentiality has limits."

and that's why she didn't open up to the human.

grim_io•5mo ago
How can you be so sure?

There are so many potential reasons.

like_any_other•5mo ago
That we can't be certain doesn't mean it's not overwhelmingly likely. Don't allow minor uncertainty to cripple your thinking.
grim_io•5mo ago
I don't think that jumping to conclusions would somehow un-cripple my thinking. Is it not the other way around?

To me, this looks like the typical case of one's world view leaking.

So let me leak mine too: if I were to apply Occam's razor, I would more likely assume shame (of one's perceived inadequacy) as a likely factor in not consulting a human.

Or, you know, something something government.

like_any_other•5mo ago
"Something something government" being:

1. I want to keep a secret.

2. If I share my secret with this person, they are legally bound to reveal it.

3. Therefore I will not share it with this person.

There may be additional reasons, but worrying about them is like worrying about a broken leg while your patient is flatlining - utterly useless until you fix the main problem. So yes, your thinking is crippled.

grim_io•5mo ago
I will happily continue to think with crutches then :)
exe34•5mo ago
it takes an admirable amount of courage to keep flawed thinking when it's pointed out to you!
grim_io•5mo ago
Thanks. My mom also tells me that I'm very brave!

But to get back on track: Someone pointing out some potential flaws in my thinking does not always mean that they are actually there.

exe34•5mo ago
q: why won't people open up to therapists?

a: because therapists won't keep it to themselves.

q: nah that can't be it. why oh why won't people open up to therapists? it'll forever be a mystery.

grim_io•5mo ago
All you are doing is reinforcing my suspicion that your personal trust issues are clouding your world view.

Not everyone thinks like you. It may be a shocking revelation.

exe34•5mo ago
several people have told you what it is on this thread. but you think it must be shame. sounds like you're on to a winner there...
grim_io•5mo ago
You are projecting again.

I didn't say that some reason must be it, because we don't know the reason.

It is you who is, for some unknown reason, so sure about your guess.

everforward•5mo ago
I can't speak to Sophie specifically, but I have known a couple of people who were dishonest with their therapists specifically for fear of mandatory reporting rules.

I think both were wrong about meeting the bar for an involuntary in-patient evaluation, but this is nowhere near my expertise. To my understanding, passive suicidal thoughts don't trigger it, only active planning.

I don't really have a point here, just sharing anecdata. I have no idea or opinion on what the real tradeoffs here are.

altairprime•5mo ago
It’s certainly how I advise my friends to act with therapists. Get treatment, but never forget that they’re an agent of the state rather than your friend, and the state is humiliated by citizens who are suicidal and lashes out at them preemptively to protect its own image. The U.S. health care system is such that being committed is a guaranteed worse fate than the slightly increased risk of losing a friend for not receiving enough therapy. This stance slightly increases the chances of my friends surviving life here, because they’re more likely to confide in me that they need help, knowing that I won’t immediately narc on them to the state.
exe34•5mo ago
could you perhaps mention 5?
Warh00l•5mo ago
these tech companies have so much blood on their hands
kingstnap•5mo ago
ChatGPT tried tbh.

It urged her to reach out and seek help. It tried to be reassuring and to convince her to live. The daughter lied to ChatGPT, saying she was talking to others.

If a human were in this situation and forced to use the same interface to talk with that woman, I doubt they would do better.

What we ask of these LLMs is apparently nothing short of them being god machines. And I'm sure there are cases where they do actually save the lives of people who are in a crisis.

altairprime•5mo ago
It offered simple meditation exercises rather than guided analysis. It failed to study the context surrounding the feelings and ask if they were welcome or unwelcome. It failed to see that things were going downhill over months of intervention efforts and escalate to involving more serious help.

Bah. How incompetent.

I’m untrained and even I can see how the chatbot let her down and construct a better friend-help plan in minutes than a chatbot ever did. It’s visibly unable to perform the necessary exploratory surgery on people’s emotions to lead them to repair and it pains me to see how little skill it truly takes to con a social person into feeling ‘helped’. I take pride in being able to use my asocial psyche-surgical skills to help my friends (with clear consent! I have a whole paragraph of warning that they’ve all heard by now) rather than exploiting them. Seeing how little skill is apparently required to make people feel ‘better’ makes me empathize with the piper’s cruelty at Lime Tree.

estonio•5mo ago
The dumb part is that in all likelihood there wasn't any persistence between sessions in the model she was using, so it probably didn't know she was suicidal outside the specific sessions in which she told it about that.
Imustaskforhelp•5mo ago
I am sorry for Sophie's family and friends, and I am really just out of words.

To extend what another commenter on HN also said: even if ChatGPT itself did allow this kind of reporting, I doubt how effective it could be. Sure, people using ChatGPT might be better off, so I think that even if it saves one life it should be done, but it would still not completely bypass the main issue, since there are websites like Brave / DDG which offer private AI, maybe Venice too, which don't require any account access. And are we forgetting about running local models?

I am sure most people won't run local models for therapy, since the barrier to entry for local models is pretty high for 99% of people imo, but I can still see people starting to use Venice or Brave for their therapy, or some other therapy bot that lacks this reporting functionality, because the user might fear it.

Honestly, I am just laying out thoughts. I still believe that since most people think AI = ChatGPT, such a step toward actual reporting might be a net positive for society if it saves even one life, but that might just be moving the goalposts, since other services can pop up all the same.

altairprime•5mo ago
Note that the mother’s request is not for chatbot reporting, but instead for chatbot redirecting discussion of suicidal feelings to any human being at all.

> As a former mother, I know there are Sophies all around us. Everywhere, people are struggling, and many want no one to know. I fear that in unleashing A.I. companions, we may be making it easier for our loved ones to avoid talking to humans about the hardest things, including suicide.

Her daughter opened up voluntarily about it two months before the end, but that could have been many months sooner if the chatbot had pressured her to discuss it with a human being at every turn, rather than promoting future chatbot usage by being supportive of her desires to keep her suicidal thoughts a secret. Perhaps it would not have saved her daughter, but it would have improved the chances of her survival in ways that today’s chatbots do not.

Ukv•5mo ago
> Note that the mother’s request is not for chatbot reporting

Not from the mother, but it is something the article floats as an idea:

"Should Harry have been programmed to report the danger “he” was learning about to someone who could have intervened? [...] If Harry had been a flesh-and-blood therapist rather than a chatbot, he might have encouraged inpatient treatment or had Sophie involuntarily committed until she was in a safe place. "

> but instead for chatbot redirecting discussion of suicidal feelings to any human being at all.

It does generally seem to have done that:

"Harry offered an extensive road map where the first bullet point was “Seek Professional Support.” "

"Harry: Sophie, I urge you to reach out to someone — right now, if you can. You don’t have to face this pain alone. You are deeply valued, and your life holds so much worth, even if it feels hidden right now."

Unclear to me that there was any better response than what it gave.

altairprime•5mo ago
“Seek Professional Support” is not interchangeable with the better response not given: “Seek Human Support”. The former is restrictive, but merely portrays the chatbot as untrained at psychiatric care. The latter includes friends, family, and strangers — but portrays the chatbot as incapable of replacing human social time. For a chatbot to only recommend professional human interactions as an alternative to more time with the chatbot is unconscionable and prioritizes chatbot engagement over human lives. It should have been recommending human interactions at the top of, if not altogether in lieu of, every single reply it gave on this topic.
Ukv•5mo ago
> For a chatbot to only recommend professional human interactions as an alternative to more time with the chatbot is unconscionable [...]

It didn't only recommend professional support: "I urge you to reach out to someone — right now"

> [...] if not altogether in lieu of, every single reply it gave on this topic.

Refusing to help at all other than "speak to a human" feels to me like a move that would dodge bad press at the cost of lives. Urging human support while continuing to help seems the most favorable option, which appears to be what it did in the limited snippets we can see.

hermannj314•5mo ago
135 Americans commit suicide daily, about 6 per hour, so roughly 6 since this article was posted an hour ago. Most likely 1 or 2 of them were using ChatGPT.

What is the point? That suicides should drop now that we are using LLMs?

The NYTimes is amplifying a FUD campaign as part of an ongoing lawsuit. Someone's daughter or son is going to kill themselves every 10 minutes today, and that is not OpenAI's fault, no matter what editorial amplification tricks the NYTimes uses to distort the reality field.

skeaker•5mo ago
I don't think it's helpful to assume that rate is an unshakable baseline. There's value in investigating the causes of these tragedies so that we might be able to find ways to prevent them in the future.
qgin•5mo ago
The choice between human therapist and computer chat is not a choice that most people in the world have. Most humans do not have access to a human therapist.

We should absolutely be talking about how to make LLM systems better at handling critical situations like this. But those that suggest that people should only talk to human therapists about their problems are taking a very “let them eat cake” position.

mns•5mo ago
One of the things I have realised in recent years is that technical people lack a certain humanity. I wanted to call it empathy, but it's not that; it's a complete lack of self-awareness of how your actions and the tools you build affect others. Yes, something is cool and all, but that doesn't mean it should be used or put in the hands of people. One of my colleagues said at one point, while we were in an AI workshop, that we could just use ChatGPT, feed it various employee numbers, make a list of people in our company, rate them, and then decide who we should fire.

Now, to go back to this: yeah, LLMs are a cool technology, but the way something so unstable, and more or less an uncontrollable black box, is thrown out into the wild for anyone to use just shows a complete lack of awareness from the industry.

This isn't about "let them eat cake". What I understand from this position is something along the lines of: you can't afford cake, so here's a round of Russian roulette where you might get a piece of pie (hey, it's free, it's not cake, but it's good), or a piece of garbage, or maybe a piece of poisoned pie - and for most people that's still something, right?

qgin•5mo ago
I guess where I'm coming from is there seems to be a lot of effort and energy available for discouraging and even banning this sort of LLM use without remotely comparable amounts of effort and energy put into figuring out how everyone can get access to mental health care from humans.

I'd bet a lot of money that very soon we'll have LLMs that are guiding people toward outcomes as good as (or better than) human therapists. I'd also bet a lot of money that we'll never manage to actually provide access to human therapists to everyone who could use it.

foxyv•5mo ago
An LLM based therapist should be tested like any other medical device. Your comment contains an underlying assumption that they are beneficial. That assumption has not been proven. It is just as likely that they are hurting the people they purport to help.

Without a bevy of studies to prove one way or another, their use is unethical at best and actively harmful at worst.