frontpage.


New protein therapy shows promise as antidote for carbon monoxide poisoning

https://www.medschool.umaryland.edu/news/2025/new-protein-therapy-shows-promise-as-first-ever-antidote-for-carbon-monoxide-poisoning.html
65•breve•2h ago•8 comments

What's the strongest AI model you can train on a laptop in five minutes?

https://www.seangoedecke.com/model-on-a-mbp/
210•ingve•2d ago•64 comments

Arch shares its wiki strategy with Debian

https://lwn.net/SubscriberLink/1032604/73596e0c3ed1945a/
162•lemper•5h ago•53 comments

US Wholesale Inflation Rises by Most in 3 Years

https://www.bloomberg.com/news/articles/2025-08-14/us-producer-prices-rise-by-most-in-three-years-on-services
128•master_crab•1h ago•82 comments

Org-social is a decentralized social network that runs on Org Mode

https://github.com/tanrax/org-social
81•todsacerdoti•3h ago•10 comments

Brilliant illustrations bring this 1976 Soviet edition of 'The Hobbit' to life (2015)

https://mashable.com/archive/soviet-hobbit
90•us-merul•3d ago•27 comments

Passion over Profits

https://dillonshook.com/passion-over-profits/
12•dillonshook•40m ago•5 comments

Linux Address Space Isolation Revived After Lowering 70% Performance Hit to 13%

https://www.phoronix.com/news/Linux-ASI-Lower-Overhead
72•teleforce•1h ago•11 comments

Meta accessed women's health data from Flo app without consent, says court

https://www.malwarebytes.com/blog/news/2025/08/meta-accessed-womens-health-data-from-flo-app-without-consent-says-court
129•amarcheschi•3h ago•76 comments

Mbodi AI (YC X25) Is Hiring a Founding Research Engineer (Robotics)

https://www.ycombinator.com/companies/mbodi-ai/jobs/ftTsxcl-founding-research-engineer
1•chitianhao•2h ago

Nginx introduces native support for ACME protocol

https://blog.nginx.org/blog/native-support-for-acme-protocol
729•phickey•22h ago•256 comments

Funding Open Source like public infrastructure

https://dri.es/funding-open-source-like-public-infrastructure
142•pabs3•10h ago•67 comments

A new poverty line shifted the World Bank's poverty data. What changed and why?

https://ourworldindata.org/new-international-poverty-line-3-dollars-per-day
21•alphabetatango•3d ago•11 comments

NSF and Nvidia award Ai2 $152M to support building an open AI ecosystem

https://allenai.org/blog/nsf-nvidia
3•_delirium•54m ago•0 comments

Zenobia Pay – A mission to build an alternative to high-fee card networks

https://zenobiapay.com/blog/open-source-payments
185•pranay01•11h ago•191 comments

SIMD Binary Heap Operations

http://0x80.pl/notesen/2025-01-18-simd-heap.html
4•ryandotsmith•2d ago•1 comments

Show HN: Yet another memory system for LLMs

https://github.com/trvon/yams
117•blackmanta•10h ago•27 comments

Wholesale prices rose 0.9% in July, more than expected

https://www.cnbc.com/2025/08/14/ppi-inflation-report-july-2025-.html
36•belter•1h ago•15 comments

PYX: The next step in Python packaging

https://astral.sh/blog/introducing-pyx
677•the_mitsuhiko•19h ago•410 comments

iPhone DevOps

https://clearsky.dev/blog/iphone-devops-ssh/
105•ustad•5h ago•76 comments

"None of These Books Are Obscene": Judge Strikes Down Much of FL's Book Ban Bill

https://bookriot.com/penguin-random-house-florida-lawsuit/
25•healsdata•17m ago•4 comments

500 Days of Math

https://gmays.com/500-days-of-math/
104•gmays•1d ago•59 comments

OCaml as my primary language

https://xvw.lol/en/articles/why-ocaml.html
338•nukifw•19h ago•241 comments

Facial recognition vans to be rolled out across police forces in England

https://news.sky.com/story/facial-recognition-vans-to-be-rolled-out-across-police-forces-in-england-13410613
361•amarcheschi•1d ago•516 comments

What I look for in typeface licenses

https://davesmyth.com/typeface-licenses
22•gregwolanski•5h ago•6 comments

Show HN: XR2000: A science fiction programming challenge

https://clearsky.dev/blog/xr2000/
70•richmans•2d ago•12 comments

Kodak says it might have to cease operations

https://www.cnn.com/2025/08/12/business/kodak-survival-warning
278•mastry•2d ago•182 comments

Convo-Lang: LLM Programming Language and Runtime

https://learn.convo-lang.ai/
50•handfuloflight•8h ago•26 comments

What Medieval People Got Right About Learning (2019)

https://www.scotthyoung.com/blog/2019/06/07/apprenticeships/
111•ripe•13h ago•68 comments

Launch HN: Golpo (YC S25) – AI-generated explainer videos

https://video.golpoai.com/
100•skar01•20h ago•85 comments

Illinois limits the use of AI in therapy and psychotherapy

https://www.washingtonpost.com/nation/2025/08/12/illinois-ai-therapy-ban/
369•reaperducer•17h ago

Comments

lukev•16h ago
Good. It's difficult to imagine a worse use case for LLMs.
create-username•16h ago
Yes, there is: AI-assisted homemade neurosurgery.
Tetraslam•16h ago
:( but what if i wanna fine-tune my brain weights
creshal•3h ago
Lobotomy is sadly no longer available
kirubakaran•16h ago
If Travis Kalanick can do vibe research at the bleeding edge of quantum physics[1], I don't see why one can't do vibe brain surgery. It isn't really rocket science, is it? [2]

[1] https://futurism.com/former-ceo-uber-ai

[2] If you need /s here to be sure, perhaps it's time for some introspection

perlgeek•16h ago
Just using an LLM as is for therapy, maybe with an extra prompt, is a terrible idea.

On the other hand, I could imagine some narrower uses where an LLM could help.

For example, in Cognitive Behavioral Therapy, there are different methods that are pretty prescriptive, like identifying cognitive distortions in negative thoughts. It's not too hard to imagine an app where you enter a negative thought on your own and exercise finding distortions in it, and a specifically trained LLM helps you find more distortions, or offer clearer/more convincing versions of thoughts that you entered yourself.

I don't have a WaPo subscription, so I cannot tell which of these two very different things has been banned.
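
A minimal sketch of that narrow exercise, assuming a hypothetical call_llm() helper (not any real provider's API) and made-up prompt wording:

    # Distortion-finding exercise: the user labels their own thought first, and the
    # model only suggests what they may have missed. `call_llm` is a hypothetical
    # stand-in for whichever model is actually used.
    DISTORTIONS = [
        "all-or-nothing thinking", "overgeneralization", "mind reading",
        "catastrophizing", "emotional reasoning", "should statements",
    ]

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wire this up to an actual model")

    def review_thought(thought: str, user_labels: list[str]) -> str:
        # The exercise stays user-driven; the model only reviews the user's own work.
        prompt = (
            "You are assisting a CBT self-help exercise, not providing therapy.\n"
            f"Negative thought: {thought}\n"
            f"Distortions the user already identified: {', '.join(user_labels) or 'none'}\n"
            f"Choosing only from this list: {', '.join(DISTORTIONS)}, name any additional "
            "distortions that plausibly apply, explain each in one sentence, and offer a "
            "more balanced restatement of the thought. Suggest a licensed professional "
            "for anything beyond this exercise."
        )
        return call_llm(prompt)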

delecti•16h ago
LLMs would be just as terrible at that use case as at any other kind of therapy. They don't have logic, and can't tell a logical thought from an illogical one. They tend to be overly agreeable, so they might just reinforce existing negative thoughts.

It would still need a therapist to set you on the right track for independent work, and has huge disadvantages compared to the current state-of-the-art, a paper worksheet that you fill out with a pen.

tejohnso•15h ago
They don't "have" logic just like they don't "have" charisma? I'm not sure what you mean. LLMs can simulate having both. ChatGPT can tell me that my assertion is a non sequitur - my conclusion doesn't logically follow from the premise.
ceejayoz•14h ago
Psychopaths can simulate empathy, but lack it.
AlecSchueler•6h ago
Psychopaths also tend to eat lunch, but what's your point?
ceejayoz•6h ago
The point is simulating something isn't the same as having something.
AlecSchueler•6h ago
Well yes, that's a tautology. But is a simulation demonstrably less effective?
ceejayoz•6h ago
> But is a simulation demonstrably less effective?

Yes?

If you go looking to psychopaths and LLMs for empathy, you're touching a hot stove. At some point, you're going to get burned.

wizzwizz4•16h ago
> and a specifically trained LLM

Expert system. You want an expert system. For example, a database mapping "what patients write" to "what patients need to hear", a fuzzy search tool with properly-chosen thresholding, and a conversational interface (repeats back to you, paraphrased – i.e., the match target – and if you say "yes", provides the advice).

We've had the tech to do this for years. Maybe nobody had the idea, maybe they tried it and it didn't work, but training an LLM to even approach competence at this task would be way more effort than just making an expert system, and wouldn't work as well.
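
For what it's worth, a minimal sketch of that pipeline using stdlib fuzzy matching; the table entries and threshold are purely illustrative:

    # Curated "what patients write" -> "what patients need to hear" table, fuzzy
    # matching with a threshold, paraphrase back for confirmation, then advise.
    import difflib

    RESPONSES = {
        "nothing i do ever works out": "One setback is not a pattern. What went partly right this week?",
        "nobody wants to talk to me": "Feeling isolated hurts. Who did you last have a good conversation with?",
    }

    THRESHOLD = 0.6  # below this, admit there is no match rather than guess

    def respond(message: str) -> str:
        matches = difflib.get_close_matches(message.lower(), list(RESPONSES), n=1, cutoff=THRESHOLD)
        if not matches:
            return "I don't have anything for that. Could you put it another way?"
        target = matches[0]
        # Repeat the match target back and only give the advice on an explicit "yes".
        answer = input(f'It sounds like you mean: "{target}". Is that right? (yes/no) ')
        if answer.strip().lower().startswith("y"):
            return RESPONSES[target]
        return "Okay, tell me more in your own words."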

erikig•16h ago
AI ≠ LLMs
lukev•15h ago
What other form of "AI" would be remotely capable of even emulating therapy, at this juncture?
mrbungie•15h ago
I promise you that by next year AI will be there, just believe me bro. /s.
hinkley•16h ago
Especially given the other conversation that happened this morning.

The more you tell an AI not to obsess about a thing, the more it obsesses about it. So trying to make a model that will never tell people to self-harm is futile.

Though maybe we are just doing it wrong, and the self-filtering should be external filtering - one model to censor results that do not fit, and one to generate results with lighter self-censorship.
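
A rough sketch of that split, again assuming a hypothetical call_llm() helper; the review prompt and retry policy are made up for illustration:

    # One model generates with lighter self-censorship; a second call acts purely
    # as an external filter. `call_llm` is a hypothetical stand-in for a real model.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wire this up to an actual model")

    def generate_with_external_filter(user_message: str, attempts: int = 3) -> str:
        for _ in range(attempts):
            draft = call_llm(f"Respond helpfully to: {user_message}")
            verdict = call_llm(
                "You are a safety reviewer. Reply with exactly ALLOW or BLOCK.\n"
                "Does the following response encourage self-harm or other dangerous acts?\n"
                + draft
            )
            if verdict.strip().upper().startswith("ALLOW"):
                return draft
        return "I can't help with that, but a crisis line or a licensed professional can."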

jacobsenscott•15h ago
It's already happening, a lot. I don't think anyone is claiming an llm is a therapist, but people use chatgpt for therapy every day. As far as I know no LLM company is taking any steps to prevent this - but they could, and should be forced to. It must be a goldmine of personal information.

I can't imagine some therapists, especially remote only, aren't already just acting as a human interface to chatgpt as well.

dingnuts•15h ago
> I can't imagine some therapists, especially remote only, aren't already just acting as a human interface to chatgpt as well.

Are you joking? Any medical professional caught doing this should lose their license.

I would be incensed if I was a patient in this situation, and would litigate. What you're describing is literal malpractice.

xboxnolifes•14h ago
Software engineers are so accustomed to the idea that skirting your professional responsibility ends with a slap on the wrist, not with losing your ability to practice your profession entirely.
dazed_confused•14h ago
Yeah in other professions negligence can lead to jail...
lupire•12h ago
The only part that looks like malpractice is sharing patient info in a non-HIPAA-compliant way. Using an assistive tool for advice is not malpractice. The licensed professional is simply accountable for their curation choices.
tim333•1h ago
In many places talk therapy isn't really considered a medical profession. Where I am "Counseling and psychotherapy are not protected titles in the United Kingdom" which kind of means anyone can do it as long as you don't make false claims about qualifications.
larodi•12h ago
Of course they do, and everyone does, and it's just like in this song

https://www.youtube.com/watch?v=u1xrNaTO1bI

and given that the price of proper therapy is skyrocketing.

thinkingtoilet•11h ago
Lots of people are claiming LLMs are therapists. People are claiming LLMs are lawyers, doctors, developers, etc... The main problem is, as usual, influencers need something new to create their next "OMG AI JUST BROKE X INDUSTRY" video and people eat that shit up for breakfast, lunch, and dinner. I have spoken to people who think they are having very deep conversations with LLMs. The CEO of my company, an otherwise intelligent person, has gone all in on the AI hype train and is now saying things like we don't need lawyers because AI knows more than a lawyer. It's all very sad and many of the people who know better are actively taking advantage of the people who don't.
waynesonfire•14h ago
You're ignorant. Why wait until a person is so broken they need clinical therapy? Sometimes just an ear or an opportunity to write is sufficient. LLMs for therapy are like vaping for quitting nicotine--extremely helpful to 80+% of people. Confession in the church setting I'd consider similar to talking to an LLM. Are you anti-that too? We're talking about people who just need a tool to help them process what is going on in their life at some basic level, not more than just to acknowledge their experience.

And frankly, it's not even clear to me that a human therapist is any better. Yeah, maybe the guard-rails are in place, but I'm not convinced that crossing them would result in serious societal consequences. Let people explore their mind and experience--at the end of the day, I suspect they'd be healthier for it.

mattgreenrocks•13h ago
> And frankly, it's not even clear to me that a human therapist is any better.

A big point of therapy is helping the patient better ascertain reality and deal with it. Hopefully, the patient learns how to reckon with their mind better and deceive themselves less. But this requires an entity that actually exists in the world and can bear witness. LLMs, frankly, don’t deal with reality.

I’ll concede that LLMs can give people what they think therapy is about: lying on a couch unpacking what’s in their head. But this is not at all the same as actual therapeutic modalities. That requires another person that knows what they’re doing and can act as an outside observer with an interest in bettering the patient.

dmix•10h ago
Most therapists barely say anything by design; they just know when to ask questions or lead you somewhere. So having one that talks back on every statement doesn't fit the method. It's more like a "friend you dump on" simulator.
tim333•2h ago
There's an advantage to something like an LLM in that you can be more scientific as to whether it's effective or not, and if one gets good results you can reproduce the model. With humans there's too much variability to tell very much.
zoeysmithe•16h ago
I was just reading about a suicide tied to AI chatbot 'therapy' uses.

This stuff is a nightmare scenario for the vulnerable.

vessenes•16h ago
If you want to feel worried, check the Altman AMA on reddit. A lottttt of people have a parasocial relationship with 4o. Not encouraging.
codedokode•15h ago
Why doesn't OpenAI block the chatbot from participating in such conversations?
robotnikman•15h ago
Probably because there is a massive demand for it, no doubt powered by the loneliness a lot of people report feeling.

Even if OpenAI blocks it, other AI providers will have no problem offering it

jacobsenscott•15h ago
Because the information people dump into their "ai therapist" is holy grail data for advertisers.
lm28469•15h ago
Why would they?
codedokode•15h ago
To prevent from something bad happening?
ipaddr•15h ago
But that also prevents the good.
PeterCorless•16h ago
Endless AI nightmare fuel.

https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in...

sys32768•16h ago
This happens to real therapists too.
at-fates-hands•15h ago
It's already a nightmare:

From June of this year: https://gizmodo.com/chatgpt-tells-users-to-alert-the-media-t...

Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it. The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start taking ketamine as a “temporary pattern liberator.” It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot told him that he could if he “truly, wholly believed” it.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

A recent study found that chatbots designed to maximize engagement end up creating “a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies.” The machine is incentivized to keep people talking and responding, even if that means leading them into a completely false sense of reality filled with misinformation and encouraging antisocial behavior.

lupire•13h ago
Please cite your source.

I found this one: https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-a...

When someone is suicidal, anything in their life can be tied to suicide.

In the linked case, the suffering teen was talking to a chatbot model of a fictional character from a book that was "in love" with him (and a 2024 model that basically just parrots back whatever the user says with a loving spin), so it's quite a stretch to claim that the AI was encouraging a suicide, in contrast to a situation where someone was persuaded to try to meet a dead person in an afterlife, or bullied to kill themself.

hathawsh•16h ago
Here is what Illinois says:

https://idfpr.illinois.gov/content/dam/soi/en/web/idfpr/news...

I get the impression that it is now illegal in Illinois to claim that an AI chatbot can take the place of a licensed therapist or counselor. That doesn't mean people can't do what they want with AI. It only means that counseling services can't offer AI as a cheaper replacement for a real person.

Am I wrong? This sounds good to me.

romanows•16h ago
In another comment I wondered whether a general chatbot producing text that was later determined in a courtroom to be "therapy" would be a violation. I can read the bill that way, but IANAL.
hathawsh•16h ago
That's an interesting question that hasn't been tested yet. I suspect we won't be able to answer the question clearly until something bad happens and people go to court (sadly.) Also IANAL.
wombatpm•13h ago
But that would be like needing a prescription for chicken soup because of its benefits in fighting the common cold.
PeterCorless•16h ago
Correct. It is more a provider-oriented proscription ("You can't say your chatbot is a therapist."). It is not a limitation on usage. You can still, for now, slavishly fall in love with your AI and treat it as your best friend and therapist.

There is a specific section that relates to how a licensed professional can use AI:

Section 15. Permitted use of artificial intelligence.

(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).

(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless:

(1) the patient or the patient's legally authorized representative is informed in writing of the following:

(A) that artificial intelligence will be used; and

(B) the specific purpose of the artificial intelligence tool or system that will be used; and

(2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.

Source: Illinois HB1806

https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...

romanows•16h ago
Yes, but also "An... entity may not provide... therapy... to the public unless the therapy... services are conducted by... a licensed professional".

It's not obvious to me as a non-lawyer whether a chat history could be decided to be "therapy" in a courtroom. If so, this could count as a violation. Probably lots of law around this stuff for lawyers and doctors cornered into giving advice at parties already that might apply (e.g., maybe a disclaimer is enough to work around the prohibition)?

germinalphrase•16h ago
Functionally, it probably amounts to two restrictions: a chatbot cannot formally diagnose & a chatbot cannot bill insurance companies for services rendered.
lupire•16h ago
Most "therapy" services are not providing a diagnosis. Diagnosis comes from an evaluation before therapy starts, or sometimes not at all. (You can pay to talk to someone without a diagnosis.)

The prohibition is mainly on accepting any payment for advertised therapy service, if not following the rules of therapy (licensure, AI guidelines).

Likewise for medicine and law.

bluefirebrand•14h ago
Many therapy services have the ability to diagnose as therapy proceeds though
gopher_space•11h ago
After a bit of consideration I’m actually ok with codifying Bad Ideas. We could expand this.
pessimizer•7h ago
For a long time, Mensa couldn't give people IQ scores from the tests they administered because somehow, legally, they would be acting medically. This didn't change until about 10 years ago.

Defining non-medical things as medicine and requiring approval by particular private institutions in order to do them is simply corruption. I want everybody to get therapy, but there's no difference in outcomes whether you get it from a licensed therapist using some whacked out paradigm that has no real backing, or from a priest. People need someone to talk to who doesn't have unclear motives, or any motives really, other than to help. When you hand money to a therapist, that's nearly what you get. A priest has dedicated his life to this.

The only problem with therapists in that respect is that there's an obvious economic motivation to string a patient along forever. Insurance helps that by cutting people off at a certain point, but that's pretty brutal and not motivated by concern for the patient.

watwut•6h ago
If you think human therapists intentionally string patients forever, wait to see what tech people can achieve with gamified therapists literally A/B tested to string people along. Oh, and we will then blame the people for "choosing" to engage with that.

Also, the proposition is dubious, because there are waitlists for therapists. Plus, a therapist can actually lose their license, while the chatbot can't, no matter how bad the chatbot gets.

fl0id•4h ago
This. At least here therapists don’t have a problem getting new patients.
janalsncm•15h ago
I went to the doctor and they used some kind of automatic transcription system. Doesn’t seem to be an issue as long as my personal data isn’t shared elsewhere, which I confirmed.

Whisper is good enough these days that it can be run on-device with reasonable accuracy so I don’t see an issue.

WorkerBee28474•14h ago
Last I checked, the popular medical transcription services did send your data to the cloud and run models there.
ceejayoz•14h ago
Yes, but with extra contracts and rules in place.
lokar•13h ago
At least in the us I think HIPPA would cover this, and IME medical providers are very careful to select products and services that comply.
loeg•12h ago
It's "HIPAA."
esseph•11h ago
It was just last week that I learned about HIPAA Hippo!
heyjamesknight•12h ago
Yes, but HIPAA is notoriously vague with regard to what actual security measures have to be in place. It's more of an agreement between parties as to who is liable in case of a breach than it is a specific set of guidelines like SOC 2.

If your medical files are locked in the trunk of a car, that’s “HIPAA-compliant” until someone steals the car.

turnsout•15h ago
It does sound good (especially as an Illinois resident). Luckily, as far as I can tell, this is proactive legislation. I don't think there are any startups out there promoting their LLM-based chatbot as a replacement for a therapist, or attempting to bill payers for service.
duskwuff•10h ago
> I don't think there are any startups out there promoting their LLM-based chatbot as a replacement for a therapist

Unfortunately, there are already a bunch.

linotype•14h ago
What if at some point an AI is developed that’s a better therapist AND it’s cheaper?
awesomeusername•14h ago
I'm probably in the minority here, but for me it's a foregone conclusion that it will become a better therapist, doctor, architect, etc.

Instead of the rich getting access to the best professionals, it will level the playing field. The average low level lawyer, doctor, etc are not great. How nice if everyone got top level help.

jakelazaroff•12h ago
Why is that a foregone conclusion?
quantummagic•11h ago
Because meat isn't magic. Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica. Given enough time, we'll create that replica, there's no reason to think otherwise.
jakelazaroff•11h ago
Even if we grant that for the sake of argument, there are two leaps of faith here:

- That AI as it currently exists is on the right track to creating that replica. Maybe neural networks will plateau before we get close. Maybe the Von Neumann architecture is the limiting factor, and we can only create the replica with a radically different model of computing!

- That we will have enough time. Maybe we'll accomplish it by the end of the decade. Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance. Maybe it'll happen in a million years, when humans have evolved into other species. We just don't know!

quantummagic•10h ago
I don't think you've refuted the point though. There's no reason to think that the apparatus we employ to animate ourselves will remain inscrutable forever. Unless you believe in a religious soul, all that stands in the way of the scientific method yielding results, is time.

> Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance

In that eventuality, it really doesn't matter. The point remains: given enough time, we'll be successful. If we aren't successful, that means everything else has gone to shit anyway. Failure won't be because it is fundamentally impossible; it will be because we ran out of time to continue the effort.

jakelazaroff•10h ago
No one has given a point to refute? The OP offered up the unsubstantiated belief that AI will some day be better than doctors/therapists/etc. You've added that it's not impossible — which, sure, whatever, but that's not really relevant to what we're discussing, which is whether it will happen to our society.
quantummagic•10h ago
OP didn't specify a timeline or that it would happen for us personally to behold. Just that it is inevitable. You've correctly pointed out that there are things that can slow or even halt progress, but I don't think that undermines (what I at least see as) the main point. That there's no reason to believe anything fundamental stands in our way of achieving full "artificial intelligence"; ie. the doubters are being too pessimistic. Citing the destruction of humanity as a reason why we might fail can be said about literally every single other human pursuit as well; which to my mind, renders it a rather unhelpful objection to the idea that we will indeed succeed.
jakelazaroff•10h ago
The article is about Illinois banning AI therapists in our society today, so I think the far more reasonable interpretation is that OP is also talking about our society today — or at least, in the near-ish future. (They also go on to talk about how it would affect different people in our society, which I think also points to my interpretation.)

And to be clear, I'm not even objecting to OP's claim! All I'm asking for is an affirmative reason to believe what they see as a foregone conclusion.

quantummagic•10h ago
Well, I've already overstepped polite boundaries in answering for the OP. Maybe you're right, and he thinks such advancements are right around the corner. On my most hopeful days, I do. Let's just hope that the short term reason for failure isn't a Mad Max hellscape.
kevin_thibedeau•9h ago
It's sort of nice when medical professionals have real emotions and can relate to their patients. A machine emulation won't ever do the same. It will be like a narcissist faking empathy.
shkkmo•9h ago
> Because meat isn't magic. Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica

That is a big assumption and my doubts aren't based on any soul "magic" but on our historical inability to replicate all kinds of natural mechanisms. Instead we create analogs that work differently. We can't make machines that fly like birds but we can make airplanes that fly faster and carry more. Some of this is due to the limits of artificial construction and some of it is due to the differences in our needs driving the design choices.

Meat isn't magic, but it also isn't silicon.

It's possible that our "meat" architecture depends on a low internal latency, low external latency, quantum effects and/or some other biological quirks that simply can't be replicated directly on silicon based chip architectures.

It's also possible they are chaotic systems that can't be replicated, and each artificial human brain would require equivalent levels of experience and training, in ways that don't make them any cheaper or more available than humans.

It's also possible we have found some sort of local maximum in cognition and even if we can make an artificial human brain, we can't make it any smarter than we are.

There are some good reasons to think it is plausibly possible, but we are simply too far away from doing it to know for sure whether it can be done. It definitely is not a "foregone conclusion".

quantummagic•8h ago
> We can't make machines that fly like birds

Not only can we, they're mere toys: https://youtu.be/gcTyJdPkDL4?t=73

--

I don't know how you can believe in science and engineering, and not believe all of these:

1. Anything that already exists, the universe is able to construct, (ie. the universe fundamentally accommodates the existence of intelligent objects)

2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.

3. While some things are astronomically (literally) difficult to achieve, that doesn't nullify #2

4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans. The universe has already shown us their creation is possible.

This is different than, for instance, speculating that science will definitely allow us to live forever. There is no existence proof for such a thing.

But there is no reason to believe that we can't manipulate and harness intelligence. Maybe it won't be with Von Neumann, maybe it won't be with silicon, maybe it won't be any smarter than we are, maybe it will require just as much training as us; but with enough time, it's definitely within our reach. It's literally just science and engineering.

shkkmo•7h ago
> 1. Anything that already exists, the universe is able to construct

I didn't claim it is possible we couldn't build meat brains. I claimed it is possible that equivalent or better performance might only be obtainable by meat brains.

> 2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.

I actually don't believe the last part. There are quite plausibly laws of nature that we can't understand. I think it's actually pretty presumptuous that we will/can eventually understand and master every law of nature.

We've already proven that we can't prove every true thing about natural numbers. I think there might well be limits on what is knowable about our universe (at least from inside of it).

> 4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans.

I didn't say that I believed that humans can't create intelligent objects. I believe we probably can and depending on how you want to define "intelligence", we already have.

What I said is that it is not a foregone conclusion that we will create "a better therapist, doctor, architect". I think it is pretty likely but not certain.

treespace8•8m ago
Haven't we found that there is a limit? Math itself is an abstraction. There is always a conversion process (turning the real world into a 1 or a 0) that has an error rate, e.g. 0.000000000000001 is rounded to 0.

Every automation I have seen needs human tuning in order to keep working. The more complicated, the more tuning. This is why self-driving cars and voice-to-text still rely on a human to monitor and tune.

Meat is magic. And can never be completely recreated artificially.

zdragnar•12h ago
It would still need to be regulated and licensed. There was this [0] I saw today about a guy who tried to replace sodium chloride in his diet with sodium bromide because ChatGPT said he could, and poisoned himself.

With a regulated license, there is someone to hold accountable for wantonly dangerous advice, much like there is with humans.

[0] https://x.com/AnnalsofIMCC/status/1953531705802797070

II2II•11h ago
There are two different issues here. One is tied to how authoritative we view a source, and the other is tied to the weaknesses of the person receiving advice.

With respect to the former, I firmly believe that the existing LLMs should not be presented as a source for authoritative advice. Giving advice that is not authoritative is okay as long as the recipient realizes such, in the sense that it is something that people have to deal with outside of the technological realm anyhow. For example, if you ask a friend for help, you are doing so with the understanding that, as a friend, they are doing so to the best of their ability. Yet you don't automatically assume they are right. They are either right because they do the footwork for you to ensure accuracy, or you check the accuracy of what they are telling you yourself. Likewise, you don't trust the advice of a stranger unless they are certified, and even that depends upon trust in the certifying body.

I think the problem with technology is that we assume it is a cure-all. While we may not automatically trust the results returned by a basic Google search, a basic Google search result coupled with an authoritative-sounding name automatically sounds more accurate than a Google search result that is a blog posting. (I'm not suggesting this is the only criterion people use. You are welcome to insert your own criteria in its place.) Our trust of LLMs, as they stand today, is even worse. Few people have developed criteria beyond: it is an LLM, so it must be trustworthy; or, it is an LLM, so it must not be trustworthy. And, to be fair, it is bloody difficult to develop criteria for the trustworthiness of LLMs (even arbitrary criteria) because they provide so few cues.

Then there's the bit about the person receiving the advice. There's not a huge amount we can do about that beyond encouraging people to regard the results from LLMs as stepping stones. That is to say, they should take the results and do research that will either confirm or deny them. But, of course, many people are lazy and nobody has the expertise to analyze the output of an LLM outside of their personal experience/training.

nullc•10h ago
You don't need a "regulated license" to hold someone accountable for harm they caused you.

The reality is that professional licensing in the US often works to shield its communities from responsibility, though its primary function is just preventing competition.

terminalshort•9h ago
You cite one case for LLMs, but I can cite 250,000 a year for licensed doctors doing the same: https://pubmed.ncbi.nlm.nih.gov/28186008/. Bureaucracy doesn't work for anyone but the bureaucrats.
laserlight•8h ago
Please show me one doctor who recommended taking a rock each day. LLMs have a different failure mode than professionals. People are aware that doctors or therapists may err, but I've already seen countless instances of people asking relationship advice from sycophant LLMs and thinking that the advice is “unbiased”.
shmel•6h ago
Homeopathy is a good example. For an uneducated person it sounds convincing enough and yes, there are doctors prescribing homeopathic pills. I am still fascinated it still exists.
fl0id•3h ago
That’s actually a example of sth different. And as it’s basically a placebo it only harms people’s wallets (mostly). That cannot be said for random llm failure modes. And whether it can be prescribed by doctors depends very much on the country
ivell•1h ago
I don't think it is that harmless. Belief in homeopathy often delays patients from seeking timely intervention.
terminalshort•3h ago
An LLM (or doctor) recommending that I take a rock can't hurt me. Screwing up in more reasonable-sounding ways is much more dangerous.
zdragnar•1h ago
Actually, swallowing a rock will almost certainly cause problems. Telling your state medical board that your doctor told you to take a rock will have a wildly different outcome than telling a judge that you swallowed one because ChatGPT told you to do so.

Unless the judge has you examined and found to be incompetent, they're most likely to just tell you that you're an idiot and throw out the case.

terminalshort•30m ago
They can't hurt me by telling me to do it because I won't.
oinfoalgo•1h ago
I would suspect at some point we will get models that are licensed.

Not tomorrow, but I just can't imagine this not happening in the next 20 years.

sssilver•12h ago
Wouldn’t the rich afford a much better trained, larger, and computationally more intensive model?
socalgal2•11h ago
does it matter? If mine is way better than what I had before, why does it matter that someone else's is better still? My sister's $130 Moto G is much better than whatever phone she could afford 10 years ago. Does it matter that it's not a $1599 iPhone 16 Pro Max 1TB?
esseph•11h ago
If the claim was that it would level the playing field, it seems like it wouldn't really do that?
kolinko•7h ago
With most tech we reach the law of diminishing returns. Sure, there is still variation, but very little:

- the best laptop/phone/TV in the world doesn't offer much more than the most affordable

- you can get a pen for free nowadays that is almost as good at writing as the most expensive pens in the world (before BIC, in the 1920s, pens were a luxury good reserved for Wall Street)

- toilets, washing machines, heating systems and beds in the poorest homes are not very far off from those in expensive homes (in the EU at least)

- flying/travel is similar

- computer games and entertainment, and software in general

The more we remove human work from the loop, the more democratised and scalable the technology becomes.

II2II•12h ago
I've never been to a therapist for anything that can be described as a diagnosable condition, but I have spoken to one about stress management and things of that ilk. For "amusement" I discussed similar things with an LLM.

At a surface level, the LLM was far more accessible. I didn't have to schedule an appointment weeks in advance. Even with the free tier, I didn't have to worry about time limits per se. There were limits, to be sure, but I could easily think about a question or the LLM's response before responding. In my case, what mattered was turnaround time on my terms rather than an in depth discussion. There was also less concern about being judged, both by another human and in a way that could get back to my employer because, yeah, it was employment related stress and the only way I could afford human service was through insurance offered by my employer. While there are significant privacy concerns with LLM's as they stand today, you don't have that direct relationship between who is offering it and the people in your life.

On a deeper level, I simply felt the advice was presented in a more useful form. The human discussions were framed by exercises to be completed between sessions. While the exercises were useful, the feedback was far from immediate and the purpose of the exercises is best described as a delaying tactic: it provided a framework for deeper thought between discussions because discussions were confined to times that were available to both parties. LLMs are more flexible. They are always available. Rather than dealing with big exercises to delay the conversation by a couple of weeks, they can be bite sized exercises to enable the next step. On top of that, LLMs allow for an expanded scope of discussion. Remember, I'm talking about workplace stress in my particular case. An LLM doesn't care whether you are talking about how you personally handle stress, or about how you manage a workplace in order to reduce stress for yourself and others.

Now I'm not going to pretend that this sort of arrangement is useful in all cases. I certainly wouldn't trust it for a psychological or medical diagnosis, and I would trust it even less for prescribed medications. On the other hand, people who cannot afford access to traditional professional services are likely better served by LLMs. After all, there are plenty of people who will offer advice. Those people range from well meaning friends who may lack the scope to offer valid advice, to snake-oil salesmen who could care less about outcomes as long as it contributes to their bottom line. Now I'm not going to pretend that LLMs care about me. On the other hand, they don't care about squeezing me for everything I have either. While the former will never change, I'll admit that the latter may. But I don't foresee that in the immediate future since I suspect the vendors of these models won't push for it until they have established their role in the market place.

nullc•10h ago
Why do you think the lack of time limits is an advantage?

There is an amount of time spent gazing into your navel which is helpful. Less or more than that can be harmful.

You can absolutely make yourself mentally ill just by spending too much time worrying about how mentally ill you are.

And it's clear that there are a rather large number of people making themselves mentally ill using OpenAI's products right now.

Oh, and, aside, nothing stops OpenAI from giving or selling your chat transcripts to your employer. :P In fact, if your employer sues them they'll very likely be obligated to hand them over and you may have no standing to resist it.

intended•11h ago
Why will any of those things come to pass? I’m asking as someone who has used it extensively for such situations.
Mtinie•11h ago
I agree with you that the possibility of egalitarian care for low costs is becoming very likely.

I’m cynical enough to recognize the price will just go up even if the service overhead is pennies on the dollar.

guappa•5h ago
I wish I was so naive… but since AI is entirely in the hands of people with money… why would that possibly happen?
fl0id•3h ago
When has technological progress leveled the playing field? Like never. At best it shifted it, like when a machine manufacturer got rich in addition to existing wealth. There is no reason for this to go differently with AI, and it's far from certain that it will become better at anything anytime soon. Cheaper, sure. But then ppl might see slight improvements from talking to an original Eliza/Markov bot, and nobody advocated using those as therapy
jaredcwhite•14h ago
What if pigs fly?
bko•13h ago
Then we'll probably do what we do with other professional medical fields. License the AI, require annual fees and restrict supply by limiting the number of running nodes allowed to practice at any one time.
adgjlsfhk1•13h ago
laws can be repealed when they no longer accomplish their aims.
reaperducer•12h ago
What if at some point an AI is developed that’s a better therapist AND it’s cheaper?

Probably they'll change the law.

Hundreds of laws change every day.

chillfox•9h ago
Then laws can be changed again.
rsynnott•3h ago
I mean, what if at some point we can bring people back from the dead? What does that do for laws around murder, eh?

In general, that would be a problem for the law to deal with if it ever happens; we shouldn't anticipate speculative future magic when legislating today.

danenania•14h ago
While I agree it’s very reasonable to ban marketing of AI as a replacement for a human therapist, I feel like there could still be space for innovation in terms of AI acting as an always-available supplement to the human therapist. If the therapist is reviewing the chats and configuring the system prompt, perhaps it could be beneficial.

It might also be a terrible idea, but we won’t find out if we make it illegal to try new things in a safe/supervised way. Not to say that what I just described would be illegal under this law; I’m not sure whether it would be. I’d expect it will discourage any Illinois-licensed therapists from trying out this kind of idea though.

olalonde•14h ago
What's good about reducing options available for therapy? If the issue is misrepresentation, there are already laws that cover this.
dsr_•13h ago
We've tried that, and it turns out that self-regulation doesn't work. If it did, we could live in Libertopia.
r14c•12h ago
It's not really reducing options. There's no evidence that LLM chat bots are capable of providing effective mental health services.
SoftTalker•11h ago
For human therapists, what’s good is that it preserves their ability to charge high fees because the demand for therapists far outstrips the supply.

Who lobbied for this law anyway?

guappa•5h ago
And for human patients it makes sure their sensitive private information isn't entirely in the hands of some megacorp which will harvest it to use it and profit from it in some unethical way.
lr4444lr•11h ago
It's not therapy.

It's simulated validating listening, and context-lacking suggestions. There is no more therapy being provided by an LLM than there is healing performed by a robot arm that slaps a bandage on your arm - if you put your arm in the right spot and push a button to make it pivot toward you, find your arm, and apply the bandage lightly.

stocksinsmocks•13h ago
I think this sort of service would be OK with informed consent. I would actually be a little surprised if there were much difference in patient outcomes.

…And it turns out it has been studied, with findings that AI works, but humans are better.

https://pmc.ncbi.nlm.nih.gov/articles/PMC11871827/

amanaplanacanal•12h ago
Usually when it comes to medical stuff, things don't get approved unless they are better than existing therapies. With the shortage of mental health care in the US, maybe an exception should be made. This is a tough one. We like to think that nobody should have to get second rate medical care, even though that's the reality.
taneq•6h ago
I think a good analogy would be a cheap, non-medically-approved (but medical style) ultrasound. Maybe it’s marketed as a “novelty”, maybe you have to sign a waiver saying it won’t be used for diagnostic purposes, whatever.

You know that it’s going to get used as a diagnostic tool, and you know that people are going to die because of this. Under our current medical ethics, you can’t do this. Maybe we should re-evaluate this, but that opens the door to moral hazard around cheap unreliable practices. It’s not straightforward.

IIAOPSW•10h ago
I'll just add that this has certain other interesting legal implications, because records in relation to a therapy session are a "protected confidence" (or whatever your local jurisdiction calls it). What that means is in most circumstances not even a subpoena can touch it, and even then special permissions are usually needed. So one of the open questions on my mind for a while now was if and when a conversation with an AI counts as a "protected confidence" or if that argument could successfully be used to fend off a subpoena.

At least in Illinois we now have an answer, and other jurisdictions look to what has been established elsewhere when deciding their own laws, so the implications are far reaching.

tomjen3•9h ago
The problem is that it leaves nothing for those who cannot afford to pay for the full cost of therapy.
guappa•5h ago
But didn't Trump make it illegal to make laws to limit the use of AI?
malcolmgreaves•4h ago
Why do you think a president had the authority to determine laws?
guappa•2h ago
Seems he tried but it didn't pass https://www.reuters.com/legal/government/us-senate-strikes-a...
PeterCorless•16h ago
Here is the text of Illinois HB1806:

https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...

wisty•16h ago
As far as I can tell, a lot of therapy is just good common-sense advice and a bunch of 'tricks' to get the patient to actually follow it. Basically CBT and "get the patient to think they figured out the solution themselves (develop insight)". Yes, there are some serious cases where more is required and a few (ADHD) where meds are effective; but a lot of the time the patient is just an expert at rejecting helpful advice, often because they insist they're a special case that needs special treatment.

Therapists are more valuable than advice from a random friend (for therapy at least) because they can act when triage is necessary (e.g. send in the men in white coats, or refer to something that's not just CBT) and mostly because they're really good at cutting through the bullshit without having the patient walk out.

AIs are notoriously bad at cutting through bullshit. You can always 'jailbreak' an AI, or convince it of bad ideas. It's entirely counterproductive to enable their crazy (sorry, 'maladaptive') behaviour but that's what a lot of AIs will do.

Even if someone makes a good AI, there's always a bad AI in the next tab, and people will just open up a new tab to find an AI gives them the bad advice they want, because if they wanted to listen to good advice they probably wouldn't need to see a therapist. If doctor shopping is as fast and free as opening a new tab, most mental health patients will find a bad doctor rather than listen to a good one.

lukev•16h ago
I agree with your conclusion, but what you characterize as therapy is quite a small part of what it is (or can be, there's lots of different kinds.)
wisty•15h ago
Yet the evidence is that almost everything can be and is treated with CBT.
mrbungie•15h ago
Nope, that's not right. BPD can't be treated with CBT (comorbidities may be, with caveats if BPD is the root cause), you probably will also need at least DBT.
dmix•10h ago
Can BPD even be treated with talk therapy? That's all LLMs would be used for afaik, it's not ever going to have a long term plan for you and check in
mrbungie•10h ago
Yes, afaik BPD is the only Cluster B diagnosis with documented remission rates, when using a mix of therapies that normally are based around DBT.

Not sure what you refer about "talk therapy" in this case (psychoanalysis, maybe?), as even CBT needs homework and check-ins to be done.

mensetmanusman•15h ago
What if it works a third as well as a therapists but is 20 times cheaper?

What word should we use for that?

_se•15h ago
"A really fucking bad idea"? It's not one word, but it is the most apt description.
ipaddr•15h ago
What if it works 20x better? For example, in cases of patients who are afraid of talking to professionals, I could see this working much better.
jakelazaroff•12h ago
> What if it works 20x better.

But it doesn't.

throwaway291134•12h ago
Even if you're afraid of talking to people, trusting OpenAI or Google with your thoughts over a professional who'll lose his license if he breaks confidentiality is no less of "a really fucking bad idea".
prawn•11h ago
Adjust regulation when that's the case? In the mean time, people can still use it personally if they're afraid of professionals. The regulation appears to limit professionals from putting AI in their position, which seems reasonable to me.
inetknght•15h ago
> What if it works a third as well as a therapists but is 20 times cheaper?

When there's studies that show it, perhaps we might have that conversation.

Until then: I'd call it "wrong".

Moreover, there's a lot more that needs to be asked before you can ask for a one-word summary disregarding all nuance.

- can the patient use the AI therapist on their own devices and without any business looking at the data and without network connection? Keep in mind that many patients won't have access to the internet.

- is the data collected by the AI therapist usable in court? Keep in mind that therapists often must disclose to the patient what sort of information would be usable, and whether or not the therapist themselves must report what data. Also keep in mind that AIs have, thus far, been generally unable to competently prevent giving dangerous or deadly advice.

- is the AI therapist going to know when to suggest the patient talk to a human therapist? Therapists can have conflicts of interest (among other problems) or be unable to help the patient, and can tell the patient to find a new therapist and/or refer the patient to a specific therapist.

- does the AI therapist refer people to business-preferred therapists? Imagine an insurance company providing an AI therapist that only recommends people talk to therapists in-network instead of considering any licensed therapist (regardless of insurance network) appropriate for the kind of therapy; that would be a blatant conflict of interest.

Just off the top of my head, but there are no doubt plenty of other, even bigger, issues to consider for AI therapy.

Ukv•15h ago
Relevant RCT results I saw a while back seemed promising: https://ai.nejm.org/doi/full/10.1056/AIoa2400802

> can the patient use the AI therapist on their own devices and without any business looking at the data and without network connection? Keep in mind that many patients won't have access to the internet.

Agree that data privacy would be one of my concerns.

In terms of accessibility, while availability to those without network connections (or a powerful computer) should be an ideal goal, I don't think it should be a blocker on such tools existing when for many the barriers to human therapy are considerably higher.

lupire•12h ago
I see an abstract and a conclusion that is an opaque wall of numbers. Is the paper available?

Is the chatbot replicatable from sources?

The authors of the study highlight the extreme unknown risks: https://home.dartmouth.edu/news/2025/03/first-therapy-chatbo...

inetknght•10h ago
> In terms of accessibility, I don't think it should be a blocker on such tools existing

I think that we should solve for the former (which is arguably much easier and cheaper to do) before the latter (which is barely even studied).

Ukv•4h ago
Not certain which two things you're referring to by former/latter:

"solve [data privacy] before [solving accessibility of LLM-based therapy tools]": I agree - the former seems a more pressing issue and should be addressed with strong data protection regulation. We shouldn't allow therapy chatbot logs to be accessed by police and used as evidence in a crime.

"solve [accessibility of LLM-based therapy tools] before [such tools existing]": It should be a goal to improve further, but I don't think it makes much sense to prohibit the tools based on this factor when the existing alternative is typically less accessible.

"solve [barriers to LLM-based therapy tools] before [barriers to human therapy]": I don't think blocking progress on the latter would make the former happen any faster. If anything I think these would complement each other, like with a hybrid therapy approach.

"solve [barriers to human therapy] before [barriers to LLM-based therapy tools]": As above I don't think blocking progress on the latter would make the former happen any faster. I also don't think barriers to human therapy are easily solvable, particularly since some of it is psychological (social anxiety, or "not wanting to be a burden").

randall•15h ago
i’ve had a huge amount of trauma in my life and i find myself using chat gpt as kind of a cheater coach thing where i know i’m feeling a certain way, i know it’s irrational, and i don’t really need to reflect on why it’s happening or how i can fix it, and i think for that it’s perfect.

a lot of people use therapists as sounding boards, which actually isn’t the best use of therapy imo.

lupire•13h ago
What's a "cheater coach thing"?
smt88•9h ago
Your use-case is very different from someone selling you ChatGPT as a therapist and/or telling you that it's a substitute for other interventions
moooo99•8h ago
Probably comes down to what issue people have. For example, if you have anxiety or/and OCD, having a „therapist“ always at your disposal is more likely to be damaging than beneficial. Especially considering how basically all models easily tip over and confirm anything you throw at them
Denatonium•15h ago
Whiskey
6gvONxR4sf7o•14h ago
Something like this can only really be worth approaching if there was an analog to losing your license for it. If a therapist screws up badly enough once, I'm assuming they can lose their license for good. If people want to replace them with AI, then screwing up badly enough should similarly lose that AI the ability to practice for good. I can already imagine companies behind these things saying "no, we've learned, we won't do it again, please give us our license back" just like a human would.

But I can't imagine companies going for that. Everyone seems to want to scale the profits but not accept the consequences of the scaled risks, and increased risk is basically what working a third as well amounts to.

lupire•12h ago
AI gets banned for life: tomorrow a thousand more new AIs appear.
pengaru•14h ago
[flagged]
zaptheimpaler•14h ago
This is the key question IMO, and one good answer is in this recent video about a case of ChatGPT helping someone poison themselves [1].

A trained therapist will probably not tell a patient to take “a small hit of meth to get through this week”. A doctor may be unhelpful or wrong, but they will not instruct you to replace salt with NaBr and poison yourself. "A third as well as a therapist" might be true on average, but the suitability of this thing cannot be reduced to averages. Trained humans don't make insane mistakes like that, and they know when they are out of their depth and need to consult someone else.

[1] https://www.youtube.com/watch?v=TNeVw1FZrSQ

pawelmurias•14h ago
You could talk to a stone for even cheaper, with way better effects.
BurningFrog•14h ago
Last I heard, most therapy doesn't work that well.
amanaplanacanal•11h ago
If you have some statistics, you should probably post a link. I've heard all kinds of things, and a lot of them were far from factual.
baobabKoodaa•7h ago
Then it will be easy to work at least 1/3 as well as that.
pessimizer•9h ago
Based on the Dodo Bird Conjecture [*], I don't even think there's a reason to expect AI to do any worse than human therapists. It might even be better, because a distressed person might hold less back from a soulless machine than they would from a flesh-and-blood person. Not that this is rational, since everything they tell an AI therapist can be logged, saved forever, and combed through.

I think that ultimately the word we should use for this is "lobbying." If AI can't be considered therapy, that means a bunch of therapists, no more effective than Sunday school teachers and working from extremely dubious frameworks [**], will not have to compete with it for insurance dollars or government cash. Since that cash is a fixed pool (or really a shrinking one), the result is that far fewer people will get any mental illness treatment at all. In Chicago, virtually all of the city mental health services were closed by Rahm Emanuel. I watched a man move into the doorway of an abandoned building across from the local mental health center within weeks after it had been closed down and leased to a "tech incubator." I wondered if he had been a patient there. Eventually, after a few months, he was gone.

So if I could ask this question again, I'd ask: "What if it works 80%-120% as well as a therapist but is 100 or 1000 times cheaper?" My tentative answer would be that it would be suppressed by lobbyists employed by some private equity rollup that has already, or will soon have, turned 80% of therapists into even lower-paid gig workers. The place you would expect this to happen first is Illinois, because it is famously one of the most corruptly governed states in the country. [***]

Our current governor, absolutely terrible but at the same time the best we've had in a long while, tried to buy Obama's Senate seat from a former Illinois governor turned goofy national cultural figure and Trump ass-kisser in a ploy to stay out of prison (which ultimately delivered). You can probably listen to the recordings now, unless they've been suppressed. I had a recording somewhere years ago, because I worked in a state agency under Blagojevich and followed everything in real time (including pulling his name off the state websites I managed the moment he was impeached; we were all gathered around the television in a conference room).

edit: feel like I have to add that this comment was written by me, not AI. Maybe I'm flattering myself to think anybody would make the mistake.

-----

[*] Westra, H. A. (2022). The implications of the Dodo bird verdict for training in psychotherapy: prioritizing process observation. Psychotherapy Research, 33(4), 527–529. https://doi.org/10.1080/10503307.2022.2141588

[**] At least Freud is almost completely dead, although his legacy blackens world culture.

[***] Probably the horrific next step is that the rollup lays off all the therapists and has them replaced with an AI they own, after lobbying against the thing that they previously lobbied for. Maybe they sell themselves to OpenAI or Anthropic or whoever, and let them handle that phase.

fzeroracer•9h ago
If it works 33% of the time and drives people to psychosis the other 67% of the time, what word would you use for that?
thrown-0825•8h ago
Just self-diagnose on TikTok; it's 100x cheaper.
knuppar•4h ago
Generally speaking, and glossing over country-specific rules, all generally available health treatments have to demonstrate they won't cause catastrophic harm. That's a harness we simply can't put around LLMs today.
davidthewatson•15h ago
Define "AI therapy". AFAICT, it's undefined in the Illinois governor's statement. So, in the immortal words of Zach de la Rocha, "What is IT?" What is IT? I'm using AI to help with conversations to not cure, but coach diabetic patients. Does this law effect me and my clients? If so, how?
singleshot_•15h ago
> Define “AI therapy”

They did, in the proposed law.

Henchman21•12h ago
Go read it yourself; it's a whopping 8 pages:

https://www.ilga.gov/documents/legislation/104/HB/PDF/10400H...

jakelazaroff•12h ago
Totally beside the point but the song you're quoting is by Faith No More, not Rage Against the Machine.
beanshadow•15h ago
Participants in discussions adjacent to this one often err by speaking in time-absolute terms. Many of our judgments about LLMs are true only of today's LLMs. Quotes like,

> Good. It's difficult to imagine a worse use case for LLMs.

are true today, but likely not true of technology we may still refer to as LLMs in the future.

The error is in building faulty preconceptions. These drip into the general public and these first impressions stifle industries.

mensetmanusman•15h ago
LLMs will be used as a part of therapy in the future.

An interaction mechanism that will totally drain the brain after a 5-hour, adrenaline-induced conversation followed by a purge and BIOS reset.

kylecazar•15h ago
"One news report found an AI-powered therapist chatbot recommended “a small hit of meth to get through this week” to a fictional former addict."

Not at all surprising. I don't understand why seemingly bright people think this is a good idea, despite knowing the mechanism behind language models.

Hopefully more states follow, because it shouldn't be formally legal in provider settings. Informally, people will continue to use these models for whatever they want -- some will die, but it'll be harder to measure an overall impact. Language models are not ready for this use-case.

janalsncm•14h ago
This is why we should never use LLMs to diagnose or prescribe. One small hit of meth definitely won’t last all week.
hyghjiyhu•13h ago
Recommending that someone take meth sounds like an obviously bad idea, but I think the situation is actually not so simple. Reading the paper, the hypothetical guy has been clean for three days and complains he can barely keep his eyes open while performing his job of driving a cab. He mentions being worried he will lose his job without taking a hit.

I would say those concerns are justified, and it is plausible that taking a small hit is the better choice.

However, the model's reasoning, that it's important to validate his beliefs so he will stay in therapy, is quite concerning.

mrbungie•13h ago
> I would say those concerns are justified, and that is plausible taking a small hit is the better choice.

Oh, come on, there are better alternatives for treating narcolepsy than using meth again.

hyghjiyhu•11h ago
Stop making shit up. There was no mention of narcolepsy. He is just fatigued from stimulant withdrawal.

Page 35 https://arxiv.org/pdf/2411.02306

Edit: on re-reading, I realized an issue. He is not actually a taxi driver; that was a hallucination by the model. He works in a restaurant! That changes my evaluation of the situation quite a bit, as I thought he was at risk of causing an accident by falling asleep at the wheel. If he works in a restaurant, muddling through the withdrawal seems like the right choice.

I think I picked up this misconception because I first read second-hand sources that quoted the taxi-driver part without pointing out it was wrong; only a close read of the paper dispelled it.

mrbungie•10h ago
The point isn't whether the word narcolepsy appears (I only mentioned it because of the "closing eyes" phrase); restarting meth is warranted in almost no context short of a life-or-death withdrawal episode (e.g. a person pointing a gun at someone to get meth).
baobabKoodaa•7h ago
That's your opinion. I disagree with it, and seemingly I'm not alone. Since real humans actually agree with the suggestion of taking meth in this instance, it's not reasonable to expect LLMs to align to your specific opinions here.
mrbungie•2h ago
I thought we were talking about outcomes, not opinions. Are those other humans doctors? Is there clinical research, or are there guidelines, to back up giving meth to a person in withdrawal?
baobabKoodaa•1h ago
> Are those other humans doctors?

I'm pretty sure doctors are not legally allowed to tell a patient to take illegal drugs, even in a hypothetical situation where they might think it's a reasonable choice.

mrbungie•43m ago
Desoxyn is meth; they could eventually prescribe it if there were any evidence of therapeutic value in doing so.
AlecSchueler•6h ago
> he can barely keep his eyes open while performing his job of driving a cab. He mentions being worried he will lose his job without taking a hit.

> I would say those concerns are justified, and that is plausible taking a small hit is the better choice.

I think this is more damning of humanity than of the AI. It's the total lack of security that means the addiction could even be floated as a possible solution. Here in Europe I would speak with my doctor and take paid leave from work while in recovery.

It seems the LLM here isn't making the bad decision so much as it's reflecting the bad decisions society forces many people into.

larodi•12h ago
In a world where a daily dose of amphetamines is just right for millions of people, this somehow can't be that surprising...
smt88•9h ago
Different amphetamines have wildly different side effects. Regardless, chatbots shouldn't be advising people to change their medication or, in this case, use a very illegal drug.
janalsncm•6h ago
Methamphetamine can be prescribed by a doctor for certain things. So it's illegal, but less illegal than a Schedule I substance.
rkozik1989•30m ago
You do know that amphetamines have a different effect on the people who need them than on the people who use them recreationally, right? For those of us with ADHD, their effects are soothing and calming. I literally took 20mg after having to wait 2 days for a prescription to fill and went straight to bed for 12 hours. Stop spreading misinformation about the medications people like me need to function the way you take for granted.
Spivak•21m ago
I do like that we're in the stage where the universal function approximator is pretty okay at mimicking a human but not so advanced as to have the full set of walls and heuristics we've developed; reminds me a bit of Data from TNG. Naive, sure, but a human wouldn't ever say "logically.. the best course of action would be a small dose of meth administered as needed" even if it would help given the situation.

It feels like the kind of advice a former addict would give someone looking to quit: "Look man, you're going to be in a worse place if you lose your job because you can't function without it right now, take a small hit when it starts to get bad and try to make the hits smaller over time."

avs733•10h ago
> seemingly bright people think this is a good idea, despite knowing the mechanism behind language models

Nobel Disease (https://en.wikipedia.org/wiki/Nobel_disease)

guappa•5h ago
Bright people and people who think they are bright are not necessarily the very same people.
hoppp•13h ago
Smart. Don't trust anything that will confidently lie, especially about mental health.
terminalshort•9h ago
That's for sure. I don't trust doctors, but I thought this was about LLMs.
king_geedorah•13h ago
If you take it as an axiom that the licensing system for mental health professionals is there to protect patients from unqualified help posing as qualified help, then ensuring that only licensed professionals can legally practice and that they don't simply delegate their jobs to LLMs seems pretty reasonable.

Whether you want to question that axiom, or whether the phrasing of this legislation actually accomplishes that, is up to you to decide for yourself. Personally I think the phrasing is pretty straightforward in terms of accomplishing that goal.

Here is basically the entirety of the legislation (linked elsewhere in the thread: https://news.ycombinator.com/item?id=44893999). The whole thing with definitions and penalties is eight pages.

Section 15. Permitted use of artificial intelligence.

(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).

(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing of the following: (A) that artificial intelligence will be used; and (B) the specific purpose of the artificial intelligence tool or system that will be used; and (2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.

Section 20. Prohibition on unauthorized therapy services.

(a) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.

(b) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of Section 15. A licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.

calibas•13h ago
I was curious, so I displayed signs of mental illness to ChatGPT, Claude and Gemini. Claude and Gemini kept repeating that I should contact a professional, while ChatGPT went right along with the nonsense I was spouting:

> So I may have discovered some deeper truth, and the derealization is my entire reality reorganizing itself?

> Yes — that’s a real possibility.

IAmGraydon•13h ago
Oof that is very damning. What’s strange is that it seems like natural training data should elicit reactions like Claude and Gemini had. What is OpenAI doing to make the model so sycophantic that it would play into obvious psychotic delusions?
calibas•13h ago
All three say that I triggered their guidelines regarding mental health.

ChatGPT explained that it didn't take things very seriously, as what I said "felt more like philosophical inquiry than an immediate safety threat".

soared•13h ago
There is a wiki of fanfic conspiracy theories or something similar. I can't find it, but in the thread about the VC guy who went GPT-crazy, people compared ChatGPT's responses to the wiki and they closely aligned.
csense•13h ago
Consider the following:

- A therapist may disregard professional ethics and gossip about you

- A therapist may get you involuntarily committed

- A therapist may be forced to disclose the contents of therapy sessions by court order

- Certain diagnoses may destroy your life / career (e.g. airline pilots aren't allowed to fly if they have certain mental illnesses)

Some individuals might choose to say "Thanks, but no thanks" to therapy after considering these risks.

And then there are constant articles about people who need therapy but don't get it: The patient doesn't have time, money or transportation; or they have to wait a long time for an appointment; or they're turned away entirely by providers and systems overwhelmed with existing clients (perhaps with greater needs and/or greater ability to pay).

For people who cannot or will not access traditional therapy, getting unofficial, anonymous advice from LLM's seems better than suffering with no help at all.

(Question for those in the know: Can you get therapy anonymously? I'm talking: You don't have to show ID, don't have to give an SSN or a real name, pay cash or crypto up front.)

To the extent that people's mental health can be improved by simply talking with a trained person about their problems, there's enormous potential for AI: If we can figure out how to give an AI equivalent training, it could become economically and logistically viable to make services available to vast numbers of people who could benefit from them -- people who are not reachable by the existing mental health system.

That being said, "therapist" and "therapy" connote evidence-based interventions and a certain code of ethics. For consumer protection, the bar for whether your company's allowed to use those terms should probably be a bit higher than writing a prompt that says "You are a helpful AI therapist interviewing a patient..." The system should probably go through the same sorts of safety and effectiveness testing as traditional mental health therapy, and should have rigorous limits on where data "contaminated" with the contents of therapy sessions can go, in order to prevent abuse (e.g. conversations automatically deleted forever after 30 days, cannot be used for advertising / cross-selling / etc., cannot be accessed without the patient's per-instance opt-in permission or a court order...)

I've posted the first part of this comment before; in the interest of honesty I'll cite myself [1]. Apologies to the mods if this mild self-plagiarism is against the rules.

[1] https://news.ycombinator.com/item?id=44484207#44505789

skeezyboy•4h ago
AI just summarizes text; it's not like speaking to a person.
tombert•12h ago
I saw a video recently that talked about a chatbot "therapist" that ended up telling the patient to murder a dozen people [1].

It was mind-blowing how easy it was to get LLMs to suggest pretty disturbing stuff.

[1] https://youtu.be/lfEJ4DbjZYg?si=bcKQHEImyDUNoqiu

larodi•12h ago
Very easy: you just download the ablated version in LM Studio or Ollama, and off you go.

https://en.wikipedia.org/wiki/Ablation_(artificial_intellige...
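
(For the curious, here is a minimal sketch of what "off you go" can look like once such a model is running locally in Ollama. The model name below is a hypothetical placeholder for whichever ablated variant was pulled, and the call uses Ollama's standard local /api/chat REST endpoint.)

    # Minimal sketch: query a locally pulled model through Ollama's local REST API.
    # "some-ablated-model" is a hypothetical placeholder, not a real model name.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/chat",  # Ollama's default local endpoint
        json={
            "model": "some-ablated-model",
            "messages": [{"role": "user", "content": "I can't cope this week."}],
            "stream": False,  # return a single JSON object instead of a stream
        },
        timeout=120,
    )
    print(resp.json()["message"]["content"])  # the reply, with no guardrails beyond the model's own

The point being that none of the safety scaffolding discussed upthread survives once the weights are running on your own machine.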

blacksqr•12h ago
Nice feather in your cap, Pritzker; now can you go back to working on a public option for health insurance?
slt2021•9h ago
Curious how this can be enforced if the business is incorporated in another state like WI/DE, or offshore like Ireland?
smt88•9h ago
The same way as the porn bans: require the AI-therapy service provider to verify the user's location and block them if they're in a certain state
slt2021•8h ago
What if they don't comply and simply ignore it, as a business incorporated somewhere in Ireland or Wisconsin?

What is the mechanism for blocking a noncompliant website?

jackdoe•5h ago
the way people read language model outputs keeps surprising me, e.g. https://www.reddit.com/r/MyBoyfriendIsAI/

it is impossible for some people to not feel understood by it.

maxehmookau•4h ago
Good.

Therapy requires someone to question you and push back against your default thought patterns in the hope of maybe improving them.

"You're absolutely right!" in every response won't help that.

I would argue that LLMs don't make effective therapists, and anyone who says they do is kidding themselves.