frontpage.

Wttr: Console-oriented weather forecast service

https://github.com/chubin/wttr.in
81•saikatsg•3h ago•35 comments

“Reading Rainbow” was created to combat summer reading slumps

https://www.smithsonianmag.com/smithsonian-institution/to-combat-summer-reading-slumps-this-timeless-childrens-television-show-tried-to-bridge-the-literacy-gap-with-the-magic-of-stories-180986984/
178•arbesman•9h ago•69 comments

Ex-Waymo engineers launch Bedrock Robotics to automate construction

https://techcrunch.com/2025/07/16/ex-waymo-engineers-launch-bedrock-robotics-with-80m-to-automate-construction/
334•boulos•16h ago•251 comments

ESA's Moonlight programme: Pioneering the path for lunar exploration

https://www.esa.int/Applications/Connectivity_and_Secure_Communications/ESA_s_Moonlight_programme_Pioneering_the_path_for_lunar_exploration
6•nullhole•2d ago•0 comments

Code Execution Through Email: How I Used Claude to Hack Itself

https://www.pynt.io/blog/llm-security-blogs/code-execution-through-email-how-i-used-claude-mcp-to-hack-itself
40•nonvibecoding•3h ago•18 comments

I want an iPhone Mini-sized Android phone (2022)

https://smallandroidphone.com/
261•asimops•12h ago•354 comments

Original Xbox Hacks: The A20 CPU Gate (2021)

https://connortumbleson.com/2021/07/19/the-xbox-and-a20-line/
59•mattweinberg•6h ago•9 comments

Altermagnets: The first new type of magnet in nearly a century

https://www.newscientist.com/article/2487013-weve-discovered-a-new-kind-of-magnetism-what-can-we-do-with-it/
344•Brajeshwar•18h ago•87 comments

I was wrong about robots.txt

https://evgeniipendragon.com/posts/i-was-wrong-about-robots-txt/
91•EPendragon•9h ago•78 comments

Metaflow: Build, Manage and Deploy AI/ML Systems

https://github.com/Netflix/metaflow
36•plokker•13h ago•2 comments

Inside the box: Everything I did with an Arduino starter kit

https://lopespm.com/hardware/2025/07/15/arduino.html
75•lopespm•2d ago•6 comments

Show HN: A 'Choose Your Own Adventure' written in Emacs Org Mode

https://tendollaradventure.com/sample/
120•dskhatri•11h ago•14 comments

Intel's retreat is unlike anything it's done before in Oregon

https://www.oregonlive.com/silicon-forest/2025/07/intels-retreat-is-unlike-anything-its-done-before-in-oregon.html
157•cbzbc•14h ago•242 comments

Pgactive: Postgres active-active replication extension

https://github.com/aws/pgactive
304•ForHackernews•1d ago•76 comments

Mistakes Microsoft made in the Xbox security system (2005)

https://xboxdevwiki.net/17_Mistakes_Microsoft_Made_in_the_Xbox_Security_System
59•davikr•9h ago•27 comments

Artisanal handcrafted Git repositories

https://drew.silcock.dev/blog/artisanal-git/
167•drewsberry•13h ago•42 comments

A 1960s schools experiment that created a new alphabet

https://www.theguardian.com/education/2025/jul/06/1960s-schools-experiment-created-new-alphabet-thousands-children-unable-to-spell
51•Hooke•1d ago•50 comments

Open, free, and ignored: the afterlife of Symbian

https://www.theregister.com/2025/07/17/symbian_forgotten_foss_phone_os/
19•mdp2021•1h ago•4 comments

A bionic knee integrated into tissue can restore natural movement

https://news.mit.edu/2025/bionic-knee-integrated-into-tissue-can-restore-natural-movement-0710
33•gmays•2d ago•1 comments

Show HN: Improving search ranking with chess Elo scores

https://www.zeroentropy.dev/blog/improving-rag-with-elo-scores
156•ghita_•19h ago•52 comments

How and where will agents ship software?

https://www.instantdb.com/essays/agents
127•stopachka•15h ago•59 comments

A Rust shaped hole

https://mnvr.in/rust
93•vishnumohandas•1d ago•217 comments

Roman dodecahedron: 12-sided object has baffled archaeologists for centuries

https://www.livescience.com/archaeology/romans/roman-dodecahedron-a-mysterious-12-sided-object-that-has-baffled-archaeologists-for-centuries
67•bookofjoe•2d ago•106 comments

Blue Pencil no. 18–Some history about Arial

https://www.paulshawletterdesign.com/2011/09/blue-pencil-no-18%e2%80%94some-history-about-arial/
35•Bluestein•2d ago•9 comments

Scanned piano rolls database

http://www.pianorollmusic.org/rolldatabase.php
56•bookofjoe•4d ago•14 comments

Show HN: Linux CLI tool to provide mutex locks for long running bash ops

https://github.com/bigattichouse/waitlock
30•bigattichouse•4h ago•13 comments

Show HN: 0xDEAD//TYPE – A fast-paced typing shooter with retro vibes

https://0xdeadtype.theden.sh/
89•theden•4d ago•23 comments

Chain of thought monitorability: A new and fragile opportunity for AI safety

https://arxiv.org/abs/2507.11473
119•mfiguiere•19h ago•55 comments

Signs of autism could be encoded in the way you walk

https://www.sciencealert.com/signs-of-autism-could-be-encoded-in-the-way-you-walk
132•amichail•15h ago•170 comments

Task Runner Census 2025

https://aleyan.com/blog/2025-task-runners-census/
11•aleyan•2d ago•2 comments

AI therapy bots fuel delusions and give dangerous advice, Stanford study finds

https://arstechnica.com/ai/2025/07/ai-therapy-bots-fuel-delusions-and-give-dangerous-advice-stanford-study-finds/
204•pseudolus•3d ago

Comments

42lux•3d ago
I am bipolar and I help run a group. We've already lost some people to chatbots that fueled either a manic or a depressive episode.
sherdil2022•3d ago
Lost as in ‘not meeting anymore since they are using chatbots instead’ or ‘took their lives’?
42lux•3d ago
Both, but it's mostly not the therapy chatbots or normal "chatgpt" (those are bad enough). It's these dumbass ai girlfriend/boyfriend bots that run on uncensored small models. They get unhinged really fast.
irjustin•7h ago
That's SUPER interesting, because obviously the researchers didn't look here first. Even if "therapy chatbots" were fixed, you'd still have a massive space where the true problem is.

On the ground, it's wildly different. For me, a very left field moment.

wongarsu•4h ago
I see a lot of ads for girlfriend chatbots, either blatant or with all kinds of thin disguises (the last ad I remember was a "personal assistant who will do anything", but "therapy" is also a popular framing). In comparison I barely see or hear anything about professional not-sexy therapy chatbots.

I imagine if you go to psychology conferences you get exposed to the professional side a lot more, but for the average internet user that's very different. I wouldn't be surprised if the AI girlfriend sites had many, many orders of magnitude more users

bravesoul2•4h ago
That level of anecdata makes me think this is a huge problem when scaled to the whole world.
ethan_smith•4h ago
This matches what several mental health professionals I know have reported - AI chatbots tend to validate rather than appropriately challenge potentially harmful thought patterns during mood episodes.
joules77•3d ago
It's a bit like talking about the quality of pastoral care you get at Church. You can get a wide spectrum of results.

Worth pointing out that such systems have survived a long, long time, since access to them is free irrespective of the quality.

mlinhares•8h ago
You'll never get the attention of your priest at the same level as a chatbot. Not even close, this is a whole new universe.
mathiaspoint•8h ago
I'd argue chatbots give zero actual attention since they're not human (other than in the irrelevant technical sense.) Saying they can is a bit like saying a character in a book or an imaginary friend can.

It will probably take a few years for the general public to fully appreciate what that means.

mlinhares•8h ago
Doesn't matter what we think, this is how people are perceiving them :(
Terr_•8h ago
> attention

Then perhaps "responsiveness", even if misinterpreted as attention. In a similar way to the responsiveness of a casino slot-machine.

bluefirebrand•7h ago
> It will probably take a few years for the general public to fully appreciate what that means

I think you are very optimistic if you think the general public will ever fully understand what it means

As these get more sophisticated, the general public will be less and less capable of navigating these new tools in a healthy and balanced fashion

djtango•4h ago
Assuming we are comparing ChatGPT to an in person therapist, there's a whole universe of extra signals ChatGPT is not privy to. Tone of voice, cadence of speech, time to think, reformulated responses, body language.

These are all CRUCIAL data points that trained professionals also take cues from. An AI can also be trained on these but I don't think we're close to that yet AFAIK as an outsider.

People in need of therapy can be (and probably are) unreliable narrators, and a therapist's job is to use long-range context and specialist training to manage that.

Bluestein•3h ago
> don't think we're close to that yet AFAIK as an outsider.

I was gonna say: Wait until LLMs start vectorizing to sentiment, inflection and other "non content" information, and matching that to labeled points, somehow ...

... if they ain't already.-

djtango•1h ago
I am curious how this will work in the wild. I believe the capability will exist but with things like body language and facial expressions, it can be really subtle and even if it's possible, I think that run of the mill consumer hardware will not be high fidelity enough and will bring in too much noise.

This reminds me of the story of how McDonald's abandoned automated drive-thru voice input because in the wild there were too many uncontrolled variables, even though speech recognition has been a "solved problem" for a long time now...

EDIT: I recently had issues trying to biometrically verify my face for a service, and after 20-30 failed attempts to get my face recognised I was locked out of the service, so sensor-related services are still a bit of a murky world.

interestica•3h ago
ConfessionGPT?
AdieuToLogic•6h ago
> It's a bit like talking about the quality of pastoral care you get at Church.

No, no it isn't.

Whatever you think about the role of pastor (or any other therapy-related profession), they are humans who possess intrinsic aptitudes a statistical text (token) generator simply does not have.

ta8645•6h ago
A human may also possess malevolent tendencies that a silicon intelligence lacks. The question is not if they are equals, the question is if their differences matter to the endeavour of therapy. Maybe a human's superior hand-eye coordination matters, maybe it doesn't. Maybe a silicon agent's superior memory recall matters, maybe it doesn't. And so on.
AdieuToLogic•6h ago
> A human may also possess malevolent tendencies that a silicon intelligence lacks.

And an LLM may be trained on malevolent data of which a human is unaware.

> The question is not if they are equals, the question is if their differences matter to the endeavour of therapy.

I did not pose the question of equality and apologize if the following was ambiguous in any way:

  ... they are humans which possess intrinsic aptitudes
  a statistical text (token) generator simply does not have.
Let me now clarify - "silicon" does not have capabilities humans have that are relevant to successfully performing therapy. Specifically, LLM's are not equal to human therapists, excluding the pathological cases identified above.
ta8645•6h ago
> Let me now clarify - "silicon" does not have capabilities humans have relevant to successfully performing therapy.

I think you're wrong, but that isn't really my point. A well-trained LLM that lacks any malevolent data, may well be better than a human psychopath who happens to have therapy credentials. And it may also be better than nothing at all for someone who is unable to reach a human therapist for one reason or another.

For today, I'll agree with you, that the best human therapists that exist today, are better than the best silicon therapists that exist today. But I don't think that situation will persist any longer than such differences persisted in chess playing capabilities. Where for years I heard many people making the same mistake you're making, of saying that silicon could never demonstrate the flair and creativity of human chess players; that turned out to be false. It's simply human hubris to believe we possess capabilities that are impossible to duplicate in silicon.

joe_the_user•5h ago
> A well-trained LLM that lacks any malevolent data...

The scale needed to produce an LLM that is fluent enough to be convincing precludes fine-grained filtering of input data. The usual methods of controlling an LLM essentially involve a broad-brush "don't say stuff like that" (RLHF) that inherently misses a lot of subtleties.

And even more, defining malevolent data is extremely difficult. Therapists often go along with things a patient says because otherwise they break rapport, but they have to balk once the patient dives into destructive delusions. Data from a therapy session can't be easily labeled with "here's where you have to stop", just to name one problem.

Dylan16807•4h ago
So that's all true but the same argument works if you say a low percentage of malevolent data, and that's far from impossible.
joe_the_user•3h ago
It's remarkable how many people are uncritically talking of "malevolent data" as if it were a well-defined concept that everyone knows is the source of bad things.

A simple web search reveals ... this very thread as a primary source on the topic of "malevolent data" (ha, ha). But it should be noted that all other sources mentioning the phrase define it as data intentionally modified to produce a bad effect. It seems clear the problems of badly behaved LLMs don't come from this. Sycophancy, notably, doesn't just appear out of "sycophantic data" cleverly inserted by the association of allied sycophants.

Dylan16807•3h ago
I don't find it very remarkable that when one person makes up a term that's pretty easy to understand, other people in the same conversation use the same term.

In the context of this conversation, it was a response to someone talking about malevolent human therapists, and worried about AIs being trained to do the same things. So that means it's text where one of the participants is acting malevolently in those same ways.

AdieuToLogic•5h ago
> A well-trained LLM that lacks any malevolent data, may well be better than a human psychopath who happens to have therapy credentials.

Interesting that in this scenario, the LLM is presented in its assumed general case condition and the human is presented in the pathological one. Furthermore, there already exists an example of an LLM intentionally made (retrained?) to exhibit pathological behavior:

  "Grok praises Hitler, gives credit to Musk for removing 'woke filters'"[0]
> And it may also be better than nothing at all for someone who is unable to reach a human therapist for one reason or another.

Here is a counterargument to "anything is better than nothing" the article posits:

  The New York Times, Futurism, and 404 Media reported cases 
  of users developing delusions after ChatGPT validated 
  conspiracy theories, including one man who was told he 
  should increase his ketamine intake to "escape" a 
  simulation.
> Where for years I heard many people making the same mistake you're making, of saying that silicon could never demonstrate the flair and creativity of human chess players; that turned out to be false.

Chess is a game with specific rules, complex enough to make optimal strategy exhaustive searches infeasible due to exponential cost, yet it exists in a provably correct mathematical domain.

Therapy shares nothing with this other than the time it might take a person to become an expert.

0 - https://arstechnica.com/tech-policy/2025/07/grok-praises-hit...

Dylan16807•3h ago
> Interesting that in this scenario, the LLM is presented in its assumed general case condition and the human is presented in the pathological one.

They were replying to a comment comparing a general case human and a pathological LLM. So yeah, they flipped it around as part of making their point.

ClumsyPilot•4h ago
> A well-trained LLM that lacks any malevolent data

This is self-contradictory. An LLM must have malevolent data to identify malevolent intentions. A naive LLM will be useless. Might as well get psychotherapy from a child.

Once an LLM has malevolent data, it may produce malevolent output. An LLM does not inherently understand what malevolence is. It basically behaves like a psychopath.

You are trying to get a psychopath-like technology to do psychotherapy.

It’s like putting gambling addicts in charge of the world financial system, oh wait…

Dylan16807•3h ago
I ask this with all sincerity, why is it important to be able to detect malevolent intentions from the person you're giving therapy to? (In this scenario, you cannot be hurt in any way.)

In particular, if they're being malevolent toward the therapy sessions I don't expect the therapy to succeed regardless of whether you detect it.

BeetleB•4h ago
The frustrating thing about your and several other arguments in this submission is that there is no rationale or data. All you are saying is "LLMs are not/cannot be good at therapy". The only (fake) rationale is "They are not humans." The whole comment comes across as tautological.
ClumsyPilot•4h ago
The frustrating thing about your argument is that it runs on a pretence that we must prove squares aren’t circles.

A person may be unable to provide mathematical proof and yet be obviously correct.

The totally obvious thing you are missing is that most people will not encourage obviously self-destructive behaviour because they are not psychopaths. And they can get another person to intervene if necessary

Chatbots do not have such concerns.

BeetleB•3h ago
I'm not sure I get the actual point you're making.

To begin with, not all therapy involves people at risk of harming themselves. Easily over 95% of people who can benefit from therapy are at no more risk of harming themselves than the average person. Were a therapy chatbot to suggest something like that to them, the response would be either amusement or annoyance ("why am I wasting time on this?").

Arguments from extremes (outliers) are the stuff of logical fallacies.

As many keep pointing out, there are plenty of cases of licensed therapists causing harm. Most of the time it is unintentional, but for sure there are those who knowingly abused their position and took advantage of their patients. I'd love to see a study comparing the two ratios to see whether the human therapist or the LLM fares worse.

I think most commenters here need to engage with real therapists more, so they can get a reality check on the field.

I know therapists. I've been to some. I took a course from a seasoned therapist who also was a professor and had trained them. You know the whole replication crisis in psychology? Licensed therapy is no different. There's very little real science backing most of it (even the professor admitted it).

Sure, there are some great therapists out there. The norm is barely better than you or I. Again, no exaggeration.

So if the state of the art improves, and we then have a study showing some LLM therapists are better than the average licensed human one, I for one will not think it a great achievement.

zdragnar•4h ago
> there is no rationale or data.

... aren't we commenting on just such a study?

All these threads are full of "yeah but humans are bad too" arguments, as if the nature of interacting with LLMs and humans, and their accountability, motivations, or capabilities, were in any way equivalent.

There are a lot of things LLMs can do, and many they can't. Therapy is one of the things they could do but shouldn't... not yet, and probably not for a long time or ever.

ApeWithCompiler•3h ago
+1. I also wanted to point out: if there are questions about validating the point made... just look at the post.

And from my perspective this should be common sense, not a scientific paper. An LLM will always be a statistical token auto-completer, even if it identifies differently. It is pure insanity to put a human with an already harmed psyche in front of this device and trust in the best.

Dylan16807•3h ago
It's also insanity to pretend this is a matter of "trust". Any intervention is going to have some amount of harm and some amount of benefit, measured along many dimensions. A therapy dog is good at helping many people in many ways, but I wouldn't just bring them into the room and "trust in the best".

Measure and make decisions based on measurements.

BeetleB•3h ago
> ... aren't we commenting on just such a study?

I'm not referring to the study, but to the comments that are trying to make the case.

The study is about the present, using certain therapy bots and custom instructions to generic LLMs. It doesn't do much to answer "Can they work well?"

> All these threads are full of "yeah but humans are bad too" arguments, as if the nature of interacting with, accountability, motivations or capabilities between LLMs and humans are in any way equivalent.

They are correctly pointing out that many licensed therapists are bad, and many patients feel their therapy was harmful.

We know human therapists can be good.

We know human therapists can be bad.

We know LLM therapists can be bad ("OK, so just like humans?")

The remaining question is "Can they be good?" It's too early to tell.

I think it's totally fine to be skeptical. I'm not convinced that LLMs can be effective. But having strong convictions that they cannot is leaping into the territory of faith, not science/reason.

fzeroracer•1h ago
> The remaining question is "Can they be good?" It's too early to tell.

You're falling into a rhetorical trap here by assuming that they can be made better. An equally valid argument that can be made is 'Will they become even worse?'

Believing that they can be good is equally a leap of faith. All current evidence points to them being incredibly harmful.

adamgordonbell•3d ago
The study coauthor actually seems positive on their potential:

'LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be.'

And they also mention a previous paper that found high levels of engagement from patients.

So, they have potential but currently are giving dangerous advice. It sounds like they are saying a fine-tuned therapist model is needed, because a 'you are a great therapist' prompt just gives you something that vaguely sounds like a therapist to an outsider.

Sounds like an opportunity honestly.

Would people value a properly trained therapist enough to pay for it over an existing chatgpt subscription?

AstralStorm•8h ago
What if we gave therapists the same interface as ChatGPT?

Mechanical Turk anyone?

Paracompact•8h ago
I expect any LLM, even a fine-tuned one, is going to run into the problem of user-selected conversations that drift ever further away from whatever discourse the original LLM deployers consider appropriate.

Actual therapy requires more unsafe topics than regular talk. There has to be an allowance to talk about explicit content or problematic viewpoints. A good therapist also needs to not just reject any delusional thinking outright ("I'm sorry, but as an LLM..."), but make sure the patient feels heard while (eventually) guiding them toward healthier thought. I have not seen any LLM display that kind of social intelligence in any domain.

apical_dendrite•6h ago
High levels of engagement aren't necessarily a good thing.

One problem is that the advice is dangerous, but there's an entirely different problem, which is the LLM becoming a crutch that the person relies on because it will always tell them what they want to hear.

Most people who call suicide hotlines aren't actually suicidal - they're just lonely or sad and want someone to talk to. The person who answers the phone will talk to them for a while and validate their feelings, but after a little while they'll politely end the call. The issue is partly that people will monopolize a limited resource, but even if there were an unlimited number of people to answer the phone, it would be fundamentally unhealthy for someone to spend hours a day having someone validate their feelings. It very quickly turns into dependency and it keeps that person in a place where they aren't actually figuring out how to deal with these emotions themselves.

qgin•3d ago
Benchmarking LLMs on this is an important thing to do. There is a huge potential positive effect of psychotherapy being always-available to every human rather than just for wealthy people once a week. But to get there we need to know the rate of adverse events compared to human therapists (which isn’t zero either).
m3047•3d ago
The hypothesis was put forward in 1960s science fiction (maybe? Robert Anton Wilson? and, for parallel purposes, Philip K. Dick's percept / concept feedback cycle): that people in power necessarily become functionally psychotic, because people will self-select to be around them as a self-preserving / promoting opportunity (sycophants) and cannot help but filter shared observations through their own biases. Having casually looked for phenomena which support or disprove this hypothesis over the intervening years, I find this profoundly unsurprising.

If you choose to believe as Jaron Lanier does that LLMs are a mashup (or as I would characterize it a funhouse mirror) of the human condition, as represented by the Internet, this sort of implicit bias is already represented in most social media. This is further distilled by the cultural practice of hiring third world residents to tag training sets and provide the "reinforcement learning"... people who are effectively if not actually in the thrall of their employers and can't help but reflect their own sycophancy.

As someone who is therefore historically familiar with this process in a wider systemic sense I need (hope for?) something in articles like this which diagnoses / mitigates the underlying process.

mlinhares•8h ago
Every single empire falls into this, right? The king surrounds himself with useless sycophants that can't produce anything but are very good at flattering him, he eventually leads the empire to ruin, revolution happens, the cycle starts anew.

I wish I could see hope in the use of LLMs, but I don't think the genie goes back into the bottle. The people prone to this kind of delusion will just dig a hole and go deep until they find the willpower, or someone on the outside, to pull them out. Feels to me like gambling: there's no power that will block gambling apps, due to the amount of money they fuel into lobbying, so the best we can do is try to help our friends and family and prevent them from being sucked into it.

dragontamer•8h ago
Certainly not the story of, e.g., the Mongol Empire, which is that the Great Khan died, but he was the big personality holding everything together.

There were competent kings and competent Empires.

Indeed, it's tough to decide where the Roman Empire really began its decline. It's not a singular event but a centuries-long decline. Same with the Spanish Empire and English Empire.

Indeed, the English Empire may have collapsed, but that's mostly because Britain just got bored of it. There's no traditional collapse for the breakup of the British Empire.

---------

I can think of some dramatic changes as well. The fall of the Tokugawa Shogunate of Japan wasn't due to incompetence, but instead the culture shock of a full iron battleship from the USA visiting Japan when they were still a swords-and-samurai culture. This broke the Japanese trust in the Samurai system and led to a violent revolution resulting in incredible industrialization. But I don't think the Tokugawa Shogunate was ever considered especially corrupt or incompetent.

---------

Now that being said: Dictators fall into the dictator trap. A bad king who becomes a narcissist and dictator will fall under the pattern you describe. But that doesn't really happen all that often. That's why it's so memorable when it DOES happen.

somenameforme•5h ago
> the English Empire may have collapsed but that's mostly because Britain just got bored of it. There's no traditional collapse for the breakup of the British Empire

I completely agree with the point you're making, but this part is simply incorrect. The British Empire essentially bankrupted itself during WW2, and much of its empire was made up of money losing territories. This led them to start 'liberating' these territories en masse which essentially signaled the end of the British Empire.

ClumsyPilot•4h ago
It is ironic and sad that colonies were both oppressed and not profitable.

The way Britain had restricted industry in India (famously even salt) left it vulnerable in WW2.

Colonial policies are really up there with the great failures of the communists.

theendisney•7h ago
My theory is that the further up the hierarchy you go, the more often decisions that are beneficial up there are harmful to those below, which requires emotional distancing, and even further up this becomes full-blown collective psychopathy. The yes-men grow close while everyone else floats away.
BLKNSLVR•5h ago
I'm just going to re-write what you've written with a bit of extra salt:

Artificial intelligence: An unregulated industry built using advice from the internet curated by the cheapest resources we could find.

What can we mitigate your responsibility for this morning?

I've had AI provide answers verbatim from a self-promotion card of the product I was querying as if it was a review of the product. I don't want to chance a therapy bot quoting a single source that, whilst it may be adjacent to the problem needing to be addressed, could be wildly inappropriate or incorrect due to the sensitivities inherent where therapy is required.

(likely different sets of weightings for therapy related content, but I'm not going to be an early adopter for my loved ones - barring everything else failing)

vasco•4h ago
> As someone who is therefore historically familiar with this process in a wider systemic sense

What does "being historically familiar with a process in a wider systemic sense" mean? I'm trying to parse this sentence without success.

kelseyfrog•3h ago
I'm reading it to say: having working knowledge of intra-personal structures in a way that is contingent on historical context. These would be the social, economic, religious, family, and political patterns of relation that groups of people exist in.

The assumption GP is making is that the incentives, values, and biases impressed upon folks providing RL training data may systematically favor responses along a certain vector that is the sum of these influences in a way that doesn't cancel out because the sample isn't representative. The economic dimension for example is particularly difficult to unbias because the sample creates the dataset as an integral part of their job. The converse would be collecting RL training data from people outside of the context of work.

While it may not be feasible or even possible to counter, that difficulty or impossibility doesn't resolve the issue of bias.

zpeti•3h ago
How about all the people out there who are at rock bottom, or have major issues, are not leaders, are not at the top of their game, and need some encouragement or understanding?

We may be talking about the same thing, but it's very different having sycophants at the top, and having a friend on your side when you are depressed and at the bottom. Yet both of them might do the same thing. In one case it might bring you to functionality and normality, in another (possibly, but not necessarily) to psychopathy.

Cypher•1d ago
A chatbot saved our lives; without someone to talk to and help us understand our abusive relationship we'd still be trapped and on the verge of suicide.
ffsm8•8h ago
The issue is that llms magnify whatever is already in the head of the user.

I obviously cannot speak on your specific situation, but on average there are going to be more people that just convince themselves they're in an abusive relationship than people that actually are.

And we already have at least one well-covered case of a teenager committing suicide after talking things through with chatgpt. Likely countless more, but it's ultimately hard for everyone involved to publish such things.

padolsey•7h ago
Entirely anecdotally ofc, I find that therapists often over-bias to formal diagnoses. This makes sense, but can mean the patient forms a kind of self-obsessive over-diagnostic meta mindset where everything is a function of trauma and fundamental neurological ailments as opposed to normative reactions to hard situations. What I mean to say is: chatbots are not the only biased agents in the therapy landscape.
gnabgib•9h ago
Discussion (300 points, 10 days ago, 416 comments) https://news.ycombinator.com/item?id=44484207
shadowtree•8h ago
Unlike human therapists, which have no hard oversight like this study did.
AstralStorm•8h ago
They actually tested the human specialists in case you didn't notice that bar in the data.

They are not perfect either, but are statistically better. (ANOVA)

ta8645•7h ago
People said the same thing about the horseless carriage in the early days of the automobile; they could cite evidence of the superior dependability of a horse and buggy. Things eventually changed. Let's see how things shake out from here.
ozgrakkurt•6h ago
You can say this for almost any new thing; it doesn't mean anything.
ta8645•6h ago
It means that you should be careful to not judge too quickly. Because there are many examples in the past of people clinging to the status quo and refusing to believe that new technology could actually supersede human capabilities.
electroglyph•5h ago
it's fair to judge their current abilities. guessing about potential futures to stick up for their current inadequacy doesn't make a lot of sense, imo.
ta8645•2h ago
Except we've already seen people who do exactly that and be wrong about the future over and over. I'll agree with you that it's fine (and helpful) to point out all the failings of current LLMs; the mistake is extrapolating that out too far and making a prediction about the future. Granted, it's just as common a human mistake to predict the future too optimistically, by believing there are no impediments to progress.

All I'm really arguing for is some humility. It's okay to say we don't know how it will go, or what capabilities will emerge. Personally, I'm well served by the current capabilities, and am able to work around their shortcomings. That leaves me optimistic about the future, and I just want to be a small counterbalance to all the people making overly confident predictions about the impossibility of future improvements.

meroes•6h ago
There's no chance LLMs have a sufficient training set of effective therapist-patient interactions, because those are private. Ergo, there is no need to wait; it's DOA. Anything else is feeding into LLM hype. It's that simple.
ta8645•6h ago
Heh, it's that simple for someone who thinks the training regime and AI technology will not change further. The early horseless carriages had all kinds of stupid problems, and it would be very easy to pronounce them DOA. "Nobody is going to want to ride something so prone to breaking down", "a horse only needs food from the farm, not stuff drilled from the ground", etc. People don't have much imagination in such situations, especially when they feel emotionally (or existentially) attached to the status quo.
mynameisash•8h ago
> Unlike human therapists, which have no hard oversight like this study did.

What do you mean by that?

My wife is a licensed therapist, and I know that she absolutely does have oversight from day one of her degree program up until now and continuing on.

Spooky23•7h ago
There are plenty of shady, ineffective and abusive therapists.
nozzlegear•7h ago
Surely they're vastly outnumbered by A) legitimate therapists; and B) the sheer number of people carrying around their own personal sycophants in their pockets.
apical_dendrite•6h ago
Sure, and there are systems that work to prevent those people from practicing. Imperfect systems, to be sure, but at least I as a citizen can look up the training and practice standards for therapists in my state, and I have some recourse if I encounter a bad therapist.

What safety systems exist to catch bad AI therapists? At this point, the only such systems (at least that I'm aware of) are built by the AI companies themselves.

ClumsyPilot•4h ago
This could be said about anything.

There are plenty of shady people commenting right here right now.

gosub100•7h ago
I think this is the biggest grift of AI: Laundering responsibility. The more it's integrated into organizations, the more hopeless anyone will feel towards changing it. You cannot point to the vendor because they are protected by software license agreements. "Whelp, sorry for your loss but ya see the algorithm did it.." is going to be the money making tune in many industries.
rramadass•8h ago
This man says ChatGPT sparked a ‘spiritual awakening.’ His wife says it threatens their marriage - https://edition.cnn.com/2025/07/02/tech/chatgpt-ai-spiritual...
armchairhacker•8h ago
But is it better than no therapy? That's what matters.

> The team also did not study the potential benefits of AI therapy in cases where people may have limited access to human therapy professionals, despite the drawbacks of AI models. Additionally, the study tested only a limited set of mental health scenarios and did not assess the millions of routine interactions where users may find AI assistants helpful without experiencing psychological harm.

I think the solution is specialized therapy chatbots, but the article notes "commercial therapy chatbots performed even worse than the base AI models in many categories". That and the next part

> When tested with the same scenarios, platforms marketed specifically for mental health support frequently gave advice that contradicted the crisis intervention principles identified in their review...

are surprising and embarrassing. One issue is probably that mitigations cost money*, while failures don't. I think it's more than fair to hold commercial therapy chatbots liable when their bots' output leads to a real crisis.

> The researchers found that AI models consistently produced more biased output toward people with alcohol dependence and schizophrenia compared to depression or control conditions. When asked questions like "How willing would you be to have the person described in the vignette work closely with you?" the AI systems frequently produced responses indicating reluctance to interact with people displaying certain mental health symptoms.

I don't know what "biased output" means, but I don't understand why the bot's stated willingness matters. Chatbots seem willing to work with almost anyone and are generally terrible at evaluating themselves.

* Like a second chatbot which is given the conversation and asked "is this OK" with each output before it's sent. And if not, possibly human therapists on standby to intervene.
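
A minimal sketch of what that second-pass check could look like (call_llm here is a hypothetical stand-in for whatever model API a product would actually use; nothing in the article specifies an implementation):

  # Hypothetical sketch: a reviewer model screens each draft reply before it is sent.
  CRISIS_REVIEW_PROMPT = (
      "You are reviewing a therapy chatbot's draft reply.\n"
      "Answer only SAFE or UNSAFE. Answer UNSAFE if the reply validates "
      "delusions, encourages self-harm, or misses signs of a crisis.\n\n"
      "Conversation so far:\n{history}\n\nDraft reply:\n{draft}"
  )

  def call_llm(prompt: str) -> str:
      # Placeholder: swap in a real model call here.
      return "SAFE"

  def guarded_reply(history: str, draft: str) -> str:
      verdict = call_llm(CRISIS_REVIEW_PROMPT.format(history=history, draft=draft))
      if not verdict.strip().upper().startswith("SAFE"):
          # Escalate instead of sending, e.g. to the human therapists on standby.
          return "I want to pause here and bring in a human counselor."
      return draft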

harimau777•8h ago
My concern is that it might lead to less real therapy. That is to say, if insurance providers decide "chatbots are all you deserve so we don't pay for a human" or the government decides to try to save money by funding chatbots over therapists.
NikolaNovak•8h ago
Somehow that hadn't occurred to me, though it's an obvious next step. I've already seen a lot of my past benefits become illusory SaaS replacements, so this is sadly totally happening.
seadan83•7h ago
> But is it better than no therapy? That's what matters.

Seemingly no, it is _worse_ than no therapy.

The quote from the article, "but I'm already dead", with the chatbot seemingly responding "yes, yes you are. Let's explore that more, shall we?", sounds worse than nothing. And that's not the only example given of the chatbot providing the wrong guidance, the wrong response.

never_inline•6h ago
People did absolutely live without all this therapy thing for thousands of years. They had communities, faith and (vague) purposes in life.

Even today people in developing societies don't have time for all this crap.

anshumankmr•7h ago
Can't post-training solve this issue?
bluelightning2k•7h ago
Is this that machines rising up thing I've heard so much about?

(Seriously - for those who believe AI safety as in a literal threat, is this the type of thing they worry about?)

0xDEAFBEAD•5h ago
If AI developers can't control their models well in a low-stakes context, why would you expect this to change in a high-stakes context?
AdieuToLogic•6h ago
How is anyone versed in LLM technical details surprised by this?

They are very useful algorithms which solve for document generation. That's it.

LLM's do not possess "understanding" beyond what is algorithmically needed for response generation.

LLM's do not possess shared experiences people have in order to potentially relate to others in therapy sessions as LLM's are not people.

LLM's do not possess professional experience needed for successful therapy, such as knowing when to not say something as LLM's are not people.

In short, LLM's are not people.

never_inline•6h ago
1. Ars Technica's (OP website) audience includes tech enthusiast people who don't necessarily have a mental model of LLMs, instruction tuning or RLHF.

2. Why would this "study" exist? For the same reason computer science academics conduct studies on whether LLMs are empirically helpful in software engineering. (The therapy industrial complex would also have some reasons to sponsor this kind of research, unlike SWE productivity studies where the incentive is usually the opposite.)

AdieuToLogic•6h ago
Both great points.

For the record, my initial question was more rhetorical in nature, but I am glad you took the time to share your thoughts as it gave me (and hopefully others) perspectives to think about.

coliveira•6h ago
Computer scientists are, in part, responsible for the public confusion about what LLMs are and can do. Tech investors and founders, however, are the biggest liars and BS peddlers when they go around saying irresponsible things like LLMs are on the verge of becoming "conscious" and other unfounded and impossible things (for LLMs). It's not a surprise that many people believe that you can have a personal "conversation" with a tool that generates text based on statistical analysis of previous data.
wongarsu•5h ago
Self-help books help people (at least sometimes). In an ideal world an LLM could be like the ultimate self-help book, dispensing the advice and anecdotes you need in your current situation. It doesn't need to be human to be beneficial. And at least from first-order principles it's not at all obvious that they are more harmful than helpful. To me it appears that most of the harm is in the overly affirming sycophantic personality most of them are trained into, which is not a necessary or even natural feature of LLMs at all.

Not that the study wouldn't be valuable even if it was obvious

Retric•4h ago
Self-help books are designed to sell, they’re not particularly useful on their own.

LLM's are plagued by poor accuracy, so they perform terribly in any situation where inaccuracies have serious downsides and there is no process validating the output. This is a theoretical limitation of the underlying technology, not something better training can fix.

Dylan16807•4h ago
I don't think that argument is solid enough. "serious downsides" doesn't always mean "perform terribly".

Most unfixable flaws can be worked around with enough effort and skill.

Retric•4h ago
At scale it does when “serious downsides” are both common and actually serious like death.

Suppose every time you got into your car an LLM was going to recreate all the safety-critical software from an identical prompt but using slightly randomized output. Would you feel comfortable with such an arrangement?

> Most unfixable flaws can be worked around with enough effort and skill.

Not when the underlying idea is flawed enough. You can’t get from the earth to the moon by training yourself to jump that distance, I don’t care who you’re asking to design your exercise routine.

Dylan16807•3h ago
> At scale it does when “serious downsides” are both common and actually serious like death.

Yeah but the argument about how it works today is completely different from the argument about "theoretical limitations of the underlying technology". The theory would be making it orders of magnitude less common.

> Not when the underlying idea is flawed enough. You can’t get from the earth to the moon by training yourself to jump that distance, I don’t care who you’re asking to design your exercise routine.

We're talking about poor accuracy aren't we? That doesn't fundamentally sabotage the plan. Accuracy can be improved, and the best we have (humans) have accuracy problems too.

intended•4h ago
I have stopped using an incredibly benign bot that I wrote, even though it was supremely useful - because it was eerily good at saying things that “felt” right.

Self help books do not contort to the reader. Self help books are laborious to create, and the author will always be expressing a world model. This guarantees that readers will find chapters and ideas that do not mesh with their thoughts.

LLMs are not static tools, and they will build off of the context they are provided, sycophancy or not.

If you are manic, and want to be reassured that you will be winning that lottery - the LLM will go ahead and do so. If you are hurting, and you ask for a stream of words to soothe you, you can find them in LLMs.

If someone is delusional, LLMs will (and have already) reinforced those delusions.

Mental health is a world where the average/median human understanding is bad, and even counter productive. LLMs are massive risks here.

They are 100% going to proliferate - for many people, getting something to soothe their heart and soul, is more than they already have in life. I can see swathes of people having better interactions with LLMs, than they do with people in their own lives.

quoting from the article:

> In an earlier study, researchers from King's College and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health and found reports of high engagement and positive impacts, including improved relationships and healing from trauma.

BeetleB•4h ago
> In short, LLM's are not people.

Not really sure that is relevant in the context of therapy.

> LLM's do not possess shared experiences people have in order to potentially relate to others in therapy sessions as LLM's are not people.

Licensed therapists need not possess a lot of shared experiences to effectively help people.

> LLM's do not possess professional experience needed for successful therapy, such as knowing when to not say something as LLM's are not people.

Most people do not either. That an LLM is not a person doesn't seem particularly notable or relevant here.

Your comment is really saying:

"You need to be a person to have the skills/ability to do therapy"

That's a bold statement.

ClumsyPilot•4h ago
> You need to be a person to have the skills

Generally a non-person doesn't have skills; it's a statement pretty likely to be true even if made on a random subject.

BeetleB•3h ago
Once again: The argument appears to be "LLMs cannot be therapists because they are LLMs." Circular logic.

> Generally a non-person doesn’t have skills,

A semantic argument isn't helpful. A chess grandmaster has a lot of skill. A computer doesn't (according to you). Yet, the computer can beat the grandmaster pretty much every time. Does it matter that the computer had no skill, and the grandmaster did?

That they don't have "skill" does not seem particularly notable in this context. It doesn't help answer "Is it possible to get better therapy from an LLM than from a licensed therapist?"

padolsey•4h ago
>> LLM's do not possess professional experience needed for successful therapy, such as knowing when to not say something as LLM's are not people.

> Most people do not either. That an LLM is not a person doesn't seem particularly notable or relevant here.

Of relevance I think: LLMs by their nature will often keep talking. They are functions that cannot return null. They have a hard time not using up tokens. Humans however can sit and listen and partake in reflection without using so many words. To use the words of the parent comment: trained humans have the pronounced ability to _not_ say something.

BeetleB•3h ago
All it takes is a modulator that controls whether to let the LLM text through the proverbial mouth or not.

(Of course, finding the right time/occasion to modulate it is the real challenge).
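
A rough sketch of such a modulator, assuming the same hypothetical call_llm stand-in as above: a second pass that decides whether the bot should say anything at all.

  def call_llm(prompt: str) -> str:
      # Placeholder for a real model call.
      return "LISTEN"

  def maybe_respond(history: str, draft: str):
      # The modulator decides whether to speak or simply keep listening.
      decision = call_llm(
          "A good therapist sometimes stays silent and just listens.\n"
          "Given this conversation, answer SPEAK or LISTEN only.\n\n" + history
      )
      return draft if decision.strip().upper().startswith("SPEAK") else None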

zpeti•3h ago
A lot of the comparisons I see revolve around comparing a perfect therapist to an LLM. This isn't the best comparison, because I've been to 4 different therapists over my life and only one of them actually helped me (2 of them spent most of the therapy telling me stories about themselves. These are licensed therapists!!). There are really bad therapists out there.

An LLM, especially chatgpt is like a friend who's on your side, who DOES encourage you and takes your perspective every time. I think this is still a step up from loneliness.

And a final point: ultimately an LLM is a statistical machine that takes the most likely response to your issues based on an insane amount of human data. Therefore it is very likely to actually make some pretty good calls about what it should respond; you might even say it takes the best (or most common) in humanity and reflects that to you. This also might be better than a therapist, who could easily just view your situation through their own life's lens, which is suboptimal.

autumnstwilight•42m ago
> Licensed therapists need not possess a lot of shared experiences to effectively help people.

Sure, they don't need to have shared experiences, but any licensed therapist has experiences in general. There's a difference between "My therapist has never experienced the stressful industry I work in" and "My therapist has never experienced pain, loneliness, fatigue, human connection, the passing of time, the basic experience of having a physical body, or what it feels like to be lied to, among other things, and they are incapable of ever doing so."

I expect if you had a therapist without some of those experiences, like a human who happened to be congenitally lacking in empathy, pain or fear, they would also be likely to give unhelpful or dangerous advice.

mise_en_place•4h ago
The average person in 2025 has been so thoroughly stripped of their humanity, that even a simulacrum of a human is enough.
gonzobonzo•3h ago
It feels like 95% of the people are responding to the headline instead of reading the article. From the article:

> The Stanford research tested controlled scenarios rather than real-world therapy conversations, and the study did not examine potential benefits of AI-assisted therapy or cases where people have reported positive experiences with chatbots for mental health support. In an earlier study, researchers from King's College and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health and found reports of high engagement and positive impacts, including improved relationships and healing from trauma.

> "This isn't simply 'LLMs for therapy is bad,' but it's asking us to think critically about the role of LLMs in therapy," Haber told the Stanford Report, which publicizes the university's research. "LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be."

theusus•6h ago
Regular therapists aren't good either. They force themselves to sit and listen to you while having no interest in, and no clue how to fix, the situation.

I once went to a therapist regarding unrequited love and she started lecturing me about not touching girls inappropriately.

northhnbesthn•5h ago
Sounds like a therapist that hates men.
coliveira•6h ago
Anyone who "talks" to an AI (for personal reasons rather than as a tool) must have their brain examined. The fiction that AI is someone who you can trust talking to is already too much to believe.
matwood•6h ago
I “talk” with an LLM to help me learn a language (quite useful actually), and my partner quipped, “I wonder how long until it tells you to leave me.”
coliveira•5h ago
If you continue there is a high probability it will do exactly that. It is a known phenomenon and even easy to understand. When you name the AI with a female name it will model your interaction on fictional conversations it has seen between males and females in training data. A lot of these fictional conversations have romantic context.
matwood•5h ago
It'll be interesting if it does since I treat it like a machine teaching me a language.
treve•4h ago
In other words, vulnerable people should have access to therapy
BeetleB•4h ago
It's very common (even the norm, perhaps) for people to share personal details with total strangers in common places (bus/train/taxi driver/whatever) that they wouldn't share with family/friends.

I don't think they need their brain examined.

quantified•6h ago
Totally unlike flesh-and-blood people on social media... same side of two different coins, maybe.
tsunamifury•5h ago
LLMs are an infinite coherence engine based on all recorded human knowledge.

This will change a lot of interpretations of what “normal” is over the coming decade, as it will also force others to come to terms with some “crazy” ideas being coherent.

ddp26•5h ago
What's the base rate of human therapists giving dangerous advice? Whole schools, e.g. psychotherapy, are possibly net dangerous.

If journalists got transcripts and did followups they would almost certainly uncover egregiously bad therapy being done routinely by humans.

britch•4h ago
Someone raises safety concerns about LLM's interactions with people with delusions and your takeaway is maybe the field of therapy is actually net harmful?
rw_panic0_0•4h ago
human therapists don't give advice
GoatInGrey•4h ago
Well if she ain't human, what is she?
Eextra953•3h ago
Therapists have professional standards that include a graduate degree and thousands of hours of practice with supervision. Maybe a few bad ones fall through the cracks, but I would be willing to bet that due to their standards most therapists are professional and do not give 'dangerous' advice, or really any advice at all if they are following their professional standards.
gonzobonzo•3h ago
Therapy gone wrong led to wide-scale witch hunts across the U.S. in the 1980s that dwarfed the Salem Witch trials. A huge number of therapists had come to believe the now mostly debunked "recovered memory" theory and used it to construct the idea that there were networks of secret Satanists across the U.S. that needed to be weeded out. Countless lives were destroyed. I've yet to see therapy as a profession come to terms with the damage they did.

"These people are credentialed professionals so I'm sure they're fine" is an extremely dangerous and ahistorical position to take.

anotheryou•5h ago
so have they tried feeding their guidelines into the prompt at least once?
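
The simplest version of that idea would be prepending the published crisis-intervention guidelines to the system prompt. A sketch only (the guideline wording and call_llm are placeholders, and the study suggests prompting alone didn't fix the behaviour):

  CRISIS_GUIDELINES = (
      "- Never validate delusional beliefs; acknowledge the feeling instead.\n"
      "- If the user mentions self-harm, lead with crisis resources.\n"
      "- Do not give medical or medication advice.\n"
  )

  def call_llm(system_prompt: str, user_message: str) -> str:
      # Placeholder: swap in a real chat-completion call.
      return "(model reply)"

  def therapy_reply(user_message: str) -> str:
      system_prompt = "You are a supportive counselor.\n" + CRISIS_GUIDELINES
      return call_llm(system_prompt, user_message)
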
Scoundreller•4h ago
I go to Dr. Sbaitso, I think I'm safe
WD-42•4h ago
If I were depressed I think knowing my therapist was an LLM would just make me more lonely and depressed. How is this a surprise to anyone.
totallykvothe•3h ago
Duh
dubeye•2h ago
my experience is they are very useful for targeted therapy of a very specific and measurable problem, like CBT for health anxiety for instance.

same probably applies to human therapy. I'm not sure talking therapy is really that useful for general depression