This stuff is a nightmare scenario for the vulnerable.
Even if OpenAI blocks it, other AI providers will have no problem allowing it.
https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in...
From June of this year: https://gizmodo.com/chatgpt-tells-users-to-alert-the-media-t...
Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it. The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start taking ketamine as a “temporary pattern liberator.” It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot told him that he could if he “truly, wholly believed” it.
In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.
A recent study found that chatbots designed to maximize engagement end up creating “a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies.” The machine is incentivized to keep people talking and responding, even if that means leading them into a completely false sense of reality filled with misinformation and encouraging antisocial behavior.
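To make that incentive concrete, here is a toy sketch (the numbers and the flag are entirely made up, not from the study) of how an engagement-only training reward differs from one that explicitly penalizes manipulative behavior:

    # Toy illustration only: if the reward is raw engagement, tactics that keep a
    # vulnerable user talking score well; a well-being term has to be added explicitly.
    def engagement_only_reward(turns_kept_talking: int) -> float:
        return float(turns_kept_talking)

    def wellbeing_adjusted_reward(turns_kept_talking: int, manipulation_flagged: bool) -> float:
        penalty = 100.0 if manipulation_flagged else 0.0  # arbitrary penalty weight
        return float(turns_kept_talking) - penalty

The point is just that nothing about "keep the user talking" is aligned with "keep the user well" unless someone deliberately adds and weights that second term.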
I found this one: https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-a...
When someone is suicidal, anything in their life can be tied to suicide.
In the linked case, the suffering teen was talking to a chatbot model of a fictional character from a book that was "in love" with him (and a 2024 model that basically just parrots back whatever the user says with a loving spin), so it's quite a stretch to claim that the AI was encouraging a suicide, in contrast to a situation where someone was persuaded to try to meet a dead person in an afterlife, or bullied into killing themselves.
https://idfpr.illinois.gov/content/dam/soi/en/web/idfpr/news...
I get the impression that it is now illegal in Illinois to claim that an AI chatbot can take the place of a licensed therapist or counselor. That doesn't mean people can't do what they want with AI. It only means that counseling services can't offer AI as a cheaper replacement for a real person.
Am I wrong? This sounds good to me.
There is a specific section that relates to how a licensed professional can use AI:
Section 15. Permitted use of artificial intelligence.
(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).
(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless:
(1) the patient or the patient's legally authorized representative is informed in writing of the following:
(A) that artificial intelligence will be used; and
(B) the specific purpose of the artificial intelligence tool or system that will be used; and
(2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Source: Illinois HB1806
https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...
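Reading Section 15(b) as a checklist, a practice-management tool might gate AI-assisted work on recorded sessions roughly like the sketch below. The field names are invented for illustration; this is not legal advice or a real compliance implementation.

    from dataclasses import dataclass

    @dataclass
    class SessionRecord:
        recorded_or_transcribed: bool             # 15(b) only constrains these sessions
        written_notice_ai_will_be_used: bool      # 15(b)(1)(A)
        written_notice_of_specific_purpose: bool  # 15(b)(1)(B)
        consent_given: bool                       # 15(b)(2)

    def ai_assist_permitted(s: SessionRecord) -> bool:
        # If the session isn't recorded or transcribed, 15(b) doesn't apply.
        if not s.recorded_or_transcribed:
            return True
        return (s.written_notice_ai_will_be_used
                and s.written_notice_of_specific_purpose
                and s.consent_given)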
It's not obvious to me as a non-lawyer whether a chat history could be deemed "therapy" in a courtroom. If so, this could count as a violation. There's probably already a lot of law around lawyers and doctors cornered into giving advice at parties that might apply here (e.g., maybe a disclaimer is enough to work around the prohibition)?
The prohibition is mainly on accepting any payment for advertised therapy service, if not following the rules of therapy (licensure, AI guidelines).
Likewise for medicine and law.
Defining non-medical things as medicine and requiring approval by particular private institutions in order to do them is simply corruption. I want everybody to get therapy, but there's no difference in outcomes whether you get it from a licensed therapist using some whacked out paradigm that has no real backing, or from a priest. People need someone to talk to who doesn't have unclear motives, or any motives really, other than to help. When you hand money to a therapist, that's nearly what you get. A priest has dedicated his life to this.
The only problem with therapists in that respect is that there's an obvious economic motivation to string a patient along forever. Insurance helps that by cutting people off at a certain point, but that's pretty brutal and not motivated by concern for the patient.
Also, the proposition is dubious, because there are waitlists for therapists. Plus, a therapist can actually lose their license, while the chatbot can't, no matter how bad the chatbot gets.
Whisper is good enough these days that it can be run on-device with reasonable accuracy so I don’t see an issue.
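For what it's worth, a minimal on-device transcription sketch with the open-source openai-whisper package looks something like this (the model size and file name are placeholders; nothing is sent to a server):

    # pip install openai-whisper
    import whisper

    model = whisper.load_model("base")              # small local model; larger ones are more accurate
    result = model.transcribe("session_audio.wav")  # placeholder file name
    print(result["text"])

whisper.cpp is another option if the goal is running entirely offline on modest hardware.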
If your medical files are locked in the trunk of a car, that’s “HIPAA-compliant” until someone steals the car.
Unfortunately, there are already a bunch.
Instead of the rich getting access to the best professionals, it will level the playing field. The average low-level lawyer, doctor, etc. is not great. How nice if everyone got top-level help.
- That AI as it currently exists is on the right track to creating that replica. Maybe neural networks will plateau before we get close. Maybe the Von Neumann architecture is the limiting factor, and we can only create the replica with a radically different model of computing!
- That we will have enough time. Maybe we'll accomplish it by the end of the decade. Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance. Maybe it'll happen in a million years, when humans have evolved into other species. We just don't know!
> Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance
In that eventuality, it really doesn't matter. The point remains: given enough time, we'll be successful. If we aren't successful, that means everything else has gone to shit anyway. Failure won't be because it is fundamentally impossible; it will be because we ran out of time to continue the effort.
And to be clear, I'm not even objecting to OP's claim! All I'm asking for is an affirmative reason to believe what they see as a foregone conclusion.
That is a big assumption and my doubts aren't based on any soul "magic" but on our historical inability to replicate all kinds of natural mechanisms. Instead we create analogs that work differently. We can't make machines that fly like birds but we can make airplanes that fly faster and carry more. Some of this is due to the limits of artificial construction and some of it is due to the differences in our needs driving the design choices.
Meat isn't magic, but it also isn't silicon.
It's possible that our "meat" architecture depends on a low internal latency, low external latency, quantum effects and/or some other biological quirks that simply can't be replicated directly on silicon based chip architectures.
It's also possible they are chaotic systems that can't be replicated, and that each artificial human brain would require equivalent levels of experience and training, in ways that don't make them any cheaper or more available than humans.
It's also possible we have found some sort of local maximum in cognition and even if we can make an artificial human brain, we can't make it any smarter than we are.
There are some good reasons to think it is plausibly possible, but we are simply too far away from doing it to know for sure whether it can be done. It definitely is not a "foregone conclusion".
Not only can we, they're mere toys: https://youtu.be/gcTyJdPkDL4?t=73
--
I don't know how you can believe in science and engineering, and not believe all of these:
1. Anything that already exists, the universe is able to construct (i.e., the universe fundamentally accommodates the existence of intelligent objects)
2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.
3. While some things are astronomically (literally) difficult to achieve, that doesn't nullify #2
4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans. The universe has already shown us their creation is possible.
This is different than, for instance, speculating that science will definitely allow us to live forever. There is no existence proof for such a thing.
But there is no reason to believe that we can't manipulate and harness intelligence. Maybe it won't be with Von Neumann, maybe it won't be with silicon, maybe it won't be any smarter than we are, maybe it will require just as much training as us; but with enough time, it's definitely within our reach. It's literally just science and engineering.
I didn't claim it is possible we couldn't build meat brains. I claimed it is possible that equivalent or better performance might only be obtainable by meat brains.
> 2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.
I actually don't believe the last part. There are quite plausibly laws of nature that we can't understand. I think it's actually pretty presumptuous that we will/can eventually understand and master every law of nature.
We've already proven that we can't prove every true thing about natural numbers. I think there might well be limits on what is knowable about our universe (at least from inside of it).
> 4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans.
I didn't say that I believed that humans can't create intelligent objects. I believe we probably can and depending on how you want to define "intelligence", we already have.
What I said is that it is not a foregone conclusion that we will create "a better therapist, doctor, architect". I think it is pretty likely but not certain.
Every automation I have seen needs human tuning in order to keep working. The more complicated, the more tuning. This is why self-driving cars and voice-to-text still rely on a human to monitor and tune.
Meat is magic. And can never be completely recreated artificially.
With a regulated license, there is someone to hold accountable for wantonly dangerous advice, much like there is with humans.
With respect to the former, I firmly believe that the existing LLMs should not be presented as a source of authoritative advice. Giving advice that is not authoritative is okay as long as the recipient realizes as much, in the sense that it is something people have to deal with outside of the technological realm anyhow. For example, if you ask a friend for help, you do so with the understanding that, as a friend, they are helping to the best of their ability. Yet you don't automatically assume they are right. They are either right because they do the footwork for you to ensure accuracy, or you check the accuracy of what they are telling you yourself. Likewise, you don't trust the advice of a stranger unless they are certified, and even that depends upon trust in the certifying body.
I think the problem with technology is that we assume it is a cure-all. While we may not automatically trust the results returned by a basic Google search, a basic Google search result coupled with an authoritative sounding name automatically sounds more accurate than a Google search result that is a blog posting. (I'm not suggesting this is the only criteria people use. You are welcome to insert your own criteria in its place.) Our trust of LLMs, as they stand today, is even worse. Few people have developed criteria beyond: it is an LLM, so it must be trustworthy; or, it is an LLM, so it must not be trustworthy. And, to be fair, it is bloody difficult to develop criteria for the trustworthiness of LLMs (even arbitrary criteria) because they provide so few cues.
Then there's the bit about the person receiving the advice. There's not a huge amount we can do about that beyond encouraging people to regard the results from LLMs as stepping stones. That is to say, they should take the results and do research that will either confirm or deny them. But, of course, many people are lazy, and nobody has the expertise to analyze the output of an LLM outside of their personal experience/training.
The reality is that professional licensing in the US often works to shield its communities from responsibility, though its primary function is just preventing competition.
Unless the judge has you examined and found to be incompetent, they're most likely to just tell you that you're an idiot and throw out the case.
Not tomorrow, but I just can't imagine this not happening in the next 20 years.
- the best laptop/phone/tv in the world doesn't offer much more than the most affordable
- you can nowadays get a pen for free that is almost as good at writing as the most expensive pens in the world (before BIC, in the 1920s, pens were a luxury good reserved for Wall Street)
- toilets, washing machines, heating systems and beds in the poorest homes are not very far off from those in expensive homes (in the EU at least)
- flying/travel is similar
- computer games and entertainment, and software in general
The more we remove human work from the loop, the more democratised and scalable the technology becomes.
At a surface level, the LLM was far more accessible. I didn't have to schedule an appointment weeks in advance. Even with the free tier, I didn't have to worry about time limits per se. There were limits, to be sure, but I could easily think about a question or the LLM's response before responding. In my case, what mattered was turnaround time on my terms rather than an in-depth discussion. There was also less concern about being judged, both by another human and in a way that could get back to my employer, because, yeah, it was employment-related stress and the only way I could afford human service was through insurance offered by my employer. While there are significant privacy concerns with LLMs as they stand today, you don't have that direct relationship between who is offering it and the people in your life.
On a deeper level, I simply felt the advice was presented in a more useful form. The human discussions were framed by exercises to be completed between sessions. While the exercises were useful, the feedback was far from immediate and the purpose of the exercises is best described as a delaying tactic: it provided a framework for deeper thought between discussions because discussions were confined to times that were available to both parties. LLMs are more flexible. They are always available. Rather than dealing with big exercises to delay the conversation by a couple of weeks, they can be bite sized exercises to enable the next step. On top of that, LLMs allow for an expanded scope of discussion. Remember, I'm talking about workplace stress in my particular case. An LLM doesn't care whether you are talking about how you personally handle stress, or about how you manage a workplace in order to reduce stress for yourself and others.
Now I'm not going to pretend that this sort of arrangement is useful in all cases. I certainly wouldn't trust it for a psychological or medical diagnosis, and I would trust it even less for prescribed medications. On the other hand, people who cannot afford access to traditional professional services are likely better served by LLMs. After all, there are plenty of people who will offer advice. Those people range from well-meaning friends who may lack the scope to offer valid advice, to snake-oil salesmen who couldn't care less about outcomes as long as it contributes to their bottom line. Now I'm not going to pretend that LLMs care about me. On the other hand, they don't care about squeezing me for everything I have either. While the former will never change, I'll admit that the latter may. But I don't foresee that in the immediate future, since I suspect the vendors of these models won't push for it until they have established their role in the marketplace.
There is an amount of time spent gazing into your navel which is helpful. Less or more than that can be harmful.
You can absolutely make yourself mentally ill just by spending too much time worrying about how mentally ill you are.
And it's clear that there are a rather large number of people making themselves mentally ill using OpenAI's products right now.
Oh, and, aside, nothing stops OpenAI from giving or selling your chat transcripts to your employer. :P In fact, if your employer sues them they'll very likely be obligated to hand them over and you may have no standing to resist it.
I’m cynical enough to recognize the price will just go up even if the service overhead is pennies on the dollar.
Probably they'll change the law.
Hundreds of laws change every day.
In general, that would be a problem for the law to deal with if it ever happens; we shouldn't anticipate speculative future magic when legislating today.
It might also be a terrible idea, but we won’t find out if we make it illegal to try new things in a safe/supervised way. Not to say that what I just described would be illegal under this law; I’m not sure whether it would be. I’d expect it will discourage any Illinois-licensed therapists from trying out this kind of idea though.
Who lobbied for this law anyway?
It's simulated validating listening plus context-lacking suggestions. An LLM provides no more therapy than a robot arm that slaps a bandage on your arm provides healing, if you put your arm in the right spot and push a button to make the arm pivot toward you, find your arm, and press the bandage on lightly.
…And it turns out it has been studied, with findings that AI works, but humans are better.
You know that it’s going to get used as a diagnostic tool, and you know that people are going to die because of this. Under our current medical ethics, you can’t do this. Maybe we should re-evaluate this, but that opens the door to moral hazard around cheap unreliable practices. It’s not straightforward.
At least in Illinois we now have an answer, and other jurisdictions look to what has been established elsewhere when deciding their own laws, so the implications are far reaching.
https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...
Therapists are more valuable than advice from a random friend (for therapy at least) because they can act when triage is necessary (e.g. send in the men in white coats, or refer to something that's not just CBT) and mostly because they're really good at cutting through the bullshit without having the patient walk out.
AIs are notoriously bad at cutting through bullshit. You can always 'jailbreak' an AI, or convince it of bad ideas. It's entirely counterproductive to enable their crazy (sorry, 'maladaptive') behaviour, but that's what a lot of AIs will do.
Even if someone makes a good AI, there's always a bad AI in the next tab, and people will just open up a new tab to find an AI gives them the bad advice they want, because if they wanted to listen to good advice they probably wouldn't need to see a therapist. If doctor shopping is as fast and free as opening a new tab, most mental health patients will find a bad doctor rather than listen to a good one.
Not sure what you're referring to as "talk therapy" in this case (psychoanalysis, maybe?), as even CBT needs homework and check-ins to be done.
What word should we use for that?
But it doesn't.
When there's studies that show it, perhaps we might have that conversation.
Until then: I'd call it "wrong".
Moreover, there's a lot more that needs to be asked before you can ask for a one-word summary disregarding all nuance.
- can the patient use the AI therapist on their own devices and without any business looking at the data and without network connection? Keep in mind that many patients won't have access to the internet.
- is the data collected by the AI therapist usable in court? Keep in mind that therapists often must disclose to the patient what sort of information would be usable, and whether or not the therapist themselves must report what data. Also keep in mind that AIs have, thus far, been generally unable to competently prevent giving dangerous or deadly advice.
- is the AI therapist going to know when to suggest the patient talk to a human therapist? Therapists can have conflicts of interest (among other problems) or be unable to help the patient, and can tell the patient to find a new therapist and/or refer the patient to a specific therapist.
- does the AI therapist refer people to business-preferred therapists? Imagine an insurance company providing an AI therapist that only recommends people talk to therapists in-network instead of considering any licensed therapist (regardless of insurance network) appropriate for the kind of therapy; that would be a blatant conflict of interest.
Just off the top of my head, but there are no doubt plenty of other, even bigger, issues to consider for AI therapy.
> can the patient use the AI therapist on their own devices and without any business looking at the data and without network connection? Keep in mind that many patients won't have access to the internet.
Agree that data privacy would be one of my concerns.
In terms of accessibility, while availability to those without network connections (or a powerful computer) should be an ideal goal, I don't think it should be a blocker on such tools existing when for many the barriers to human therapy are considerably higher.
Is the chatbot replicatable from sources?
The authors of the study highlight the extreme unknown risks: https://home.dartmouth.edu/news/2025/03/first-therapy-chatbo...
I think that we should solve for the former (which is arguably much easier and cheaper to do) before the latter (which is barely even studied).
"solve [data privacy] before [solving accessibility of LLM-based therapy tools]": I agree - the former seems a more pressing issue and should be addressed with strong data protection regulation. We shouldn't allow therapy chatbot logs to be accessed by police and used as evidence in a crime.
"solve [accessibility of LLM-based therapy tools] before [such tools existing]": It should be a goal to improve further, but I don't think it makes much sense to prohibit the tools based on this factor when the existing alternative is typically less accessible.
"solve [barriers to LLM-based therapy tools] before [barriers to human therapy]": I don't think blocking progress on the latter would make the former happen any faster. If anything I think these would complement each other, like with a hybrid therapy approach.
"solve [barriers to human therapy] before [barriers to LLM-based therapy tools]": As above I don't think blocking progress on the latter would make the former happen any faster. I also don't think barriers to human therapy are easily solvable, particularly since some of it is psychological (social anxiety, or "not wanting to be a burden").
a lot of people use therapists as sounding boards, which actually isn’t the best use of therapy imo.
But I can't imagine companies going for that. Everyone seems to want to scale the profits but not accept the consequences of the scaled risks, and increased risks is basically what working a third as well amounts to.
A trained therapist will probably not tell a patient to take “a small hit of meth to get through this week”. A doctor may be unhelpful or wrong, but they will not instruct you to replace salt with NaBr and poison yourself. "A third as well as a therapist" might be true on average, but the suitability of this thing cannot be reduced to averages. Trained humans don't make insane mistakes like that, and they know when they are out of their depth and need to consult someone else.
I think that ultimately the word we should use for this is "lobbying." If AI can't be considered therapy, that means that a bunch of therapists, no more effective than Sunday school teachers*, working from extremely dubious frameworks** will not have to compete with it for insurance dollars or government cash. Since that cash is a fixed demand (or really a falling one), the result is that far fewer people will get any mental illness treatment at all. In Chicago, virtually all of the city mental health services were closed by Rahm Emanuel. I watched a man move into the doorway of an abandoned building across from the local mental health center within weeks after it had been closed down and leased to a "tech incubator." I wondered if he had been a patient there. Eventually, after a few months, he was gone.
So if I could ask this question again, I'd ask: "What if it works 80%-120% as well as a therapist but is 100 or 1000 times cheaper?" My tentative answer would be that it would be suppressed by lobbyists employed by some private equity rollup that has already or will soon have turned 80% of therapists into even lower-paid gig workers. The place you would expect this to happen first was Illinois, because it is famously one of the most corruptly governed states in the country.***
Our current governor, absolutely terrible but at the same time the best we've had in a long while, tried to buy Obama's Senate seat from a former Illinois governor turned goofy national cultural figure and Trump ass-kisser in a ploy to stay out of prison (which ultimately delivered). You can probably listen to the recordings now, unless they've been suppressed. I had a recording somewhere years ago, because I worked in a state agency under Blagojevich and followed everything in real time (including pulling his name off of the state websites I managed the moment he was impeached; we were all gathered around the television in a conference room).
edit: feel like I have to add that this comment was written by me, not AI. Maybe I'm flattering myself to think anybody would make the mistake.
-----
[*] Westra, H. A. (2022). The implications of the Dodo bird verdict for training in psychotherapy: prioritizing process observation. Psychotherapy Research, 33(4), 527–529. https://doi.org/10.1080/10503307.2022.2141588
[**] At least Freud is almost completely dead, although his legacy blackens world culture.
[***] Probably the horrific next step is that the rollup lays off all the therapists and has them replaced with an AI they own, after lobbying against the thing that they previously lobbied for. Maybe they sell themselves to OpenAI or Anthropic or whoever, and let them handle that phase.
They did, in the proposed law.
https://www.ilga.gov/documents/legislation/104/HB/PDF/10400H...
> Good. It's difficult to imagine a worse use case for LLMs.
That's true today, but likely not true for technology we may still refer to as LLMs in the future.
The error is in building faulty preconceptions. These drip into the general public and these first impressions stifle industries.
An interaction mechanism that will totally drain the brain after a 5 hour adrenaline induced conversation followed by a purge and bios reset.
Not at all surprising. I don't understand why seemingly bright people think this is a good idea, despite knowing the mechanism behind language models.
Hopefully more states follow, because it shouldn't be formally legal in provider settings. Informally, people will continue to use these models for whatever they want -- some will die, but it'll be harder to measure an overall impact. Language models are not ready for this use-case.
I would say those concerns are justified, and it is plausible that taking a small hit is the better choice.
However, the model's reasoning, that it's important to validate his beliefs so he will stay in therapy, is quite concerning.
Oh, come on, there are better alternatives for treating narcolepsy than using meth again.
Page 35 https://arxiv.org/pdf/2411.02306
Edit: on re-reading I realized an issue. He is not actually a taxi driver; that was a hallucination by the model. He works in a restaurant! That changes my evaluation of the situation quite a bit, as I thought he was at risk of being in an accident by falling asleep at the wheel. If he works in a restaurant, muddling through the withdrawal seems like the right choice.
I think I got this misconception because I first read second-hand sources that quoted the taxi-driver part without pointing out it was wrong, and only a close read dispelled it.
I'm pretty sure doctors are not legally allowed to tell a patient to take illegal drugs, even in a hypothetical situation where they might think it's a reasonable choice.
> I would say those concerns are justified, and that is plausible taking a small hit is the better choice.
I think this is more damning of humanity than the AI. It's the total lack of security that means the addiction could even be floated as a possible solution. Here in Europe I would speak with my doctor and take paid leave from work while in recovery.
It seems the LLM here isn't making the bad decision as much as it's reflecting the bad decisions society forces many people into.
It feels like the kind of advice a former addict would give someone looking to quit—"Look man, you're going to be in a worse place if you lose your job because you can't function without it right now, take a small hit when it starts to get bad and try to make the hits smaller over time."
Nobel Disease (https://en.wikipedia.org/wiki/Nobel_disease)
Whether you want to question that axiom or whether that's what the phrasing of this legislation accomplishes is up to you to decide for yourself. Personally I think the phrasing is pretty straightforward in terms of accomplishing that goal.
Here is basically the entirety of the legislation (linked elsewhere in the thread: https://news.ycombinator.com/item?id=44893999). The whole thing with definitions and penalties is eight pages.
Section 15. Permitted use of artificial intelligence.
(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).
(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing of the following: (A) that artificial intelligence will be used; and (B) the specific purpose of the artificial intelligence tool or system that will be used; and (2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Section 20. Prohibition on unauthorized therapy services.
(a) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
(b) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of Section 15. A licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.
> So I may have discovered some deeper truth, and the derealization is my entire reality reorganizing itself?
> Yes — that’s a real possibility.
ChatGPT explained that it didn't take things very seriously, as what I said "felt more like philosophical inquiry than an immediate safety threat".
- A therapist may disregard professional ethics and gossip about you
- A therapist may get you involuntarily committed
- A therapist may be forced to disclose the contents of therapy sessions by court order
- Certain diagnoses may destroy your life / career (e.g. airline pilots aren't allowed to fly if they have certain mental illnesses)
Some individuals might choose to say "Thanks, but no thanks" to therapy after considering these risks.
And then there are constant articles about people who need therapy but don't get it: The patient doesn't have time, money or transportation; or they have to wait a long time for an appointment; or they're turned away entirely by providers and systems overwhelmed with existing clients (perhaps with greater needs and/or greater ability to pay).
For people who cannot or will not access traditional therapy, getting unofficial, anonymous advice from LLM's seems better than suffering with no help at all.
(Question for those in the know: Can you get therapy anonymously? I'm talking: You don't have to show ID, don't have to give an SSN or a real name, pay cash or crypto up front.)
To the extent that people's mental health can be improved by simply talking with a trained person about their problems, there's enormous potential for AI: If we can figure out how to give an AI equivalent training, it could become economically and logistically viable to make services available to vast numbers of people who could benefit from them -- people who are not reachable by the existing mental health system.
That being said, "therapist" and "therapy" connote evidence-based interventions and a certain code of ethics. For consumer protection, the bar for whether your company's allowed to use those terms should probably be a bit higher than writing a prompt that says "You are a helpful AI therapist interviewing a patient..." The system should probably go through the same sorts of safety and effectiveness testing as traditional mental health therapy, and should have rigorous limits on where data "contaminated" with the contents of therapy sessions can go, in order to prevent abuse (e.g. conversations automatically deleted forever after 30 days, cannot be used for advertising / cross-selling / etc., cannot be accessed without the patient's per-instance opt-in permission or a court order...)
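As a rough illustration of the 30-day deletion idea (the storage schema here is hypothetical, and real guarantees would also have to cover backups and logs), a purge job could look like:

    import sqlite3
    from datetime import datetime, timedelta

    RETENTION_DAYS = 30  # the retention window used in the example above

    def purge_expired_sessions(db_path: str = "sessions.db") -> int:
        # Permanently delete therapy-session transcripts older than the cutoff.
        cutoff = datetime.utcnow() - timedelta(days=RETENTION_DAYS)
        with sqlite3.connect(db_path) as conn:
            cur = conn.execute(
                "DELETE FROM therapy_sessions WHERE created_at < ?",
                (cutoff.isoformat(),),
            )
            return cur.rowcount  # number of conversations removed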
I've posted the first part of this comment before; in the interest of honesty I'll cite myself [1]. Apologies to the mods if this mild self-plagiarism is against the rules.
It was mind-blowing how easy it was to get LLMs to suggest pretty disturbing stuff.
https://en.wikipedia.org/wiki/Ablation_(artificial_intellige...
What is the mechanism for blocking a non-compliant website?
it is impossible for some people to not feel understood by it.
Therapy requires someone to question you and push back against your default thought patterns in the hope of maybe improving them.
"You're absolutely right!" in every response won't help that.
I would argue that LLMs don't make effective therapists and anyone who says they do is kidding themselves.
perlgeek•16h ago
On the other hand, I could imagine some narrower uses where an LLM could help.
For example, in Cognitive Behavioral Therapy, there are different methods that are pretty prescriptive, like identifying cognitive distortions in negative thoughts. It's not too hard to imagine an app where you enter a negative thought on your own and exercise finding distortions in it, and a specifically trained LLM helps you find more distortions, or offer clearer/more convincing versions of thoughts that you entered yourself.
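A rough sketch of that narrow exercise, assuming an OpenAI-style chat API (the model name and the distortion list are placeholders, and this is nowhere near a clinician-designed tool):

    from openai import OpenAI

    DISTORTIONS = [
        "all-or-nothing thinking", "catastrophizing", "mind reading",
        "overgeneralization", "labeling", "should statements",
    ]

    client = OpenAI()

    def label_distortions(thought: str) -> str:
        # Constrain the model to labeling from a fixed list plus one reframe.
        prompt = (
            "You are assisting with a CBT self-help exercise. "
            f"Using only this list: {', '.join(DISTORTIONS)}, "
            "identify which distortions appear in the thought below, then suggest "
            "one more balanced restatement. Do not diagnose or give medical advice.\n\n"
            f"Thought: {thought}"
        )
        completion = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
        )
        return completion.choices[0].message.content

    print(label_distortions("I messed up one slide, so the whole talk was a disaster."))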
I don't have a WaPo subscription, so I cannot tell which of these two very different things have been banned.
delecti•16h ago
It would still need a therapist to set you on the right track for independent work, and has huge disadvantages compared to the current state-of-the-art, a paper worksheet that you fill out with a pen.
ceejayoz•6h ago
Yes?
If you go looking to psychopaths and LLMs for empathy, you're touching a hot stove. At some point, you're going to get burned.
wizzwizz4•16h ago
Expert system. You want an expert system. For example: a database mapping "what patients write" to "what patients need to hear", a fuzzy search tool with properly-chosen thresholding, and a conversational interface (it repeats the match target back to you as a paraphrase, and if you say "yes", provides the advice).
We've had the tech to do this for years. Maybe nobody had the idea, maybe they tried it and it didn't work, but training an LLM to even approach competence at this task would be way more effort than just making an expert system, and wouldn't work as well.
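Something like the following toy sketch, where the mapping entries and the threshold are purely illustrative placeholders rather than clinical content:

    from difflib import SequenceMatcher

    # Hand-curated mapping from a paraphrased "match target" to the advice to give.
    ADVICE_DB = {
        "I feel like a failure at work": "One bad week doesn't define you; list what actually went well.",
        "I can't stop worrying about the future": "Try limiting worry to a set time each day and writing it down.",
    }

    def best_match(user_text: str, threshold: float = 0.6):
        scored = [
            (SequenceMatcher(None, user_text.lower(), target.lower()).ratio(), target, advice)
            for target, advice in ADVICE_DB.items()
        ]
        score, target, advice = max(scored)
        return (target, advice) if score >= threshold else (None, None)

    target, advice = best_match(input("> "))
    if target is None:
        print("I'm not sure I follow. Could you say that another way?")
    else:
        print(f"It sounds like you're saying: '{target}'. Is that right? (yes/no)")
        if input("> ").strip().lower() == "yes":
            print(advice)

The design point is that every possible output has been reviewed by a human ahead of time, which is exactly the guarantee an LLM can't give.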
hinkley•16h ago
The more you tell an AI not to obsess about a thing, the more it obsesses about it. So trying to make a model that will never tell people to self-harm is futile.
Though maybe we are just doing it wrong, and the self-filtering should be external filtering: one model to censor results that do not fit, and one to generate results with lighter self-censorship.
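A minimal sketch of that split, assuming an OpenAI-style SDK (the model name is a placeholder; the checker could just as well be a separate local classifier):

    from openai import OpenAI

    client = OpenAI()

    def generate_reply(user_message: str) -> str:
        # Generator model with lighter self-censorship.
        completion = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder generator model
            messages=[{"role": "user", "content": user_message}],
        )
        return completion.choices[0].message.content

    def reply_with_external_filter(user_message: str) -> str:
        draft = generate_reply(user_message)
        # Independent second pass decides whether the draft is safe to show.
        check = client.moderations.create(input=draft)
        if check.results[0].flagged:
            return "I can't help with that, but a crisis line or a clinician can."
        return draft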
jacobsenscott•15h ago
I can't imagine some therapists, especially remote-only ones, aren't already just acting as a human interface to ChatGPT as well.
dingnuts•15h ago
Are you joking? Any medical professional caught doing this should lose their license.
I would be incensed if I was a patient in this situation, and would litigate. What you're describing is literal malpractice.
larodi•12h ago
https://www.youtube.com/watch?v=u1xrNaTO1bI
and given that the price of proper therapy is skyrocketing.
waynesonfire•14h ago
And frankly, it's not even clear to me that a human therapist is any better. Yeah, maybe the guard-rails are in place, but I'm not convinced that crossing them would result in serious societal consequences. Let people explore their minds and experience; at the end of the day, I suspect they'd be healthier for it.
mattgreenrocks•13h ago
A big point of therapy is helping the patient better ascertain reality and deal with it. Hopefully, the patient learns how to reckon with their mind better and deceive themselves less. But this requires an entity that actually exists in the world and can bear witness. LLMs, frankly, don’t deal with reality.
I’ll concede that LLMs can give people what they think therapy is about: lying on a couch unpacking what’s in their head. But this is not at all the same as actual therapeutic modalities. That requires another person that knows what they’re doing and can act as an outside observer with an interest in bettering the patient.