We may just need to start comparing success rates and liability concerns. It's kind of like deciding when unassisted driving is 'good enough'.
This is something that bugs me about medical ethics: it treats not causing any harm as more important than preventing it.
Ethics is like copywriting a weather hindcast.
That of course doesn't exclude doing good, being helpful, using skills and technologies to produce favorable outcomes. It does mean that healers must exercise due vigilance for unintended adverse consequences of therapies, to say nothing of knowingly providing services that cause harm.
The problem with the "safe/not safe" designation is simply that these states are more often than not indistinct. Or put another way, safety depends on subtle contextual attributes that are hard to discern. Furthermore, individual differences can make it difficult to predict the safety of applying a procedure.
As a result, healers should be cautious in approaching problems. Prevention is definitely better than cure; it's simply that relatively little is known about preventing burdensome conditions. Exercising what is known is a high priority.
Edit: the study compared therapist outcomes to AI outcomes to placebo outcomes. Therapists in this field performed slightly better than placebo, which is pretty terrible. The AI performed much worse than placebo, which is very terrible.
Some people knew what the tobacco companies were secretly doing, yet they kept quiet, and let countless family tragedies happen.
What are the best channels for people with info to help halt the corruption, this time?
(The channels might be different than usual right now, with much of the US federal government being disrupted.)
And if the alleged payer is outside the field, this might also be relevant to the public interest in other regards. (For example, if they're trying to suppress this, what else are they trying to do? Even if it turns out the research is invalid.)
I agree. Asking questions which are normal in my own field resulted in stonewalling and obvious distress. The worst part is that this led to the end of what was a good relationship.
If not, you might consider whether you have actionable information yourself, any professional obligations you have (e.g., if you work in science/health/safety yourself), any societal obligations, whether reporting the allegation would be betraying a trust, and what the calculus is there.
Also, it is not expected that the training material for the model deals with the actual practical aspects of therapy; only some of the theoretical aspects are probably in that material.
On the other hand, it's a probabilistic/non-deterministic model, which can give five different pieces of advice if you ask five times.
So who do you trust? Until the determinism of LLMs gets improved, and we can debug/fix them while keeping their deterministic behavior intact with new fixes, I would rely on human therapists.
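To make the non-determinism point concrete, here's a toy sketch (an illustrative stand-in, not a real model): an LLM samples from a probability distribution at every step, so unless the temperature is pinned to zero, asking the same question five times can yield five different answers.

```python
import random
from collections import Counter

# Toy next-token distribution: the kind of probabilistic choice an LLM makes
# at every step. Purely illustrative, not a real model or API.
next_token_probs = {
    "try journaling": 0.40,
    "see a therapist": 0.30,
    "take a walk": 0.15,
    "call a friend": 0.10,
    "ignore it": 0.05,
}

def sample_advice(temperature: float) -> str:
    """Sample one 'piece of advice' from the toy distribution.

    temperature == 0 collapses to the single most likely option (deterministic);
    higher temperatures keep the sampling random, so repeated asks can differ.
    """
    options = list(next_token_probs)
    if temperature == 0:
        return max(options, key=next_token_probs.get)
    weights = [p ** (1.0 / temperature) for p in next_token_probs.values()]
    return random.choices(options, weights=weights, k=1)[0]

# Ask the same question five times: sampled answers vary run to run,
# which is the non-determinism being pointed at above.
print(Counter(sample_advice(temperature=1.0) for _ in range(5)))
print(sample_advice(temperature=0))  # always the same answer
```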
I've never spoken to a therapist without paying $150 an hour up front. They were helpful, but they were never "in my life"--just a transaction--a worthwhile transaction, but still a transaction.
You trust Anthropic that much?
Many dogs are produced by profit motive, but their owners can have interactions with the dog that are not about profit.
It would meet the objective definition if you replaced 'capitalist' with 'socialist', which may have been what you meant; but that's merely an observation I'm making, not what you actually said.
The entire paragraph is quite contradictory and lacks truth, and by extension it is entirely unclear what you mean; it appears you are confused, using words and making statements that can't meet their definitions.
You may want to clarify what you mean.
In order for it to be 'capitalist' true to its definition, you need to be able to achieve profit with it in purchasing power, but the outcomes of the entire business lifecycle resulting from this, taken as a whole, instead destroy that ability for everyone.
The companies involved didn't start on their merits seeking profit, they were funded by non-reserve debt issuance or money-printing which is the state picking winners and losers.
If they were capitalist they wouldn't have released model weights to the public. The only reason you would free a resource like that is if your goal was something not profit-driven (i.e., contagion toward chaos to justify control, or, succinctly, totalism).
the problem is that "responsible deployment" feels extremely at odds with, say, needing to justify a $300B valuation
Almost all of these people were openly in (romantic) love with these agents. This was in 2017 or thereabouts, so only a few years after Spike Jonze’s Her came out.
From what I understand the app is now primarily pornographic (a trajectory that a more naive, younger me never saw coming).
I mostly use Copilot for writing Python scripts, but I have had conversations with it. If the model were running locally on your own machine, I can see how it would be effective for people experiencing some sort of emotional crisis. Anyone using a Meta AI for therapy is going to learn the same hard lesson that the people who trusted 23andMe are currently learning.
People really like to anthropomorphize any object with even the most basic communication capabilities, and most people have no concept of the distance between parroting phrases and full-on human consciousness. In the 90s, Furbys were a popular toy that started off speaking Furbish and eventually spoke some (maybe 20?) human phrases; many people were absolutely convinced you could teach them to talk and learn like a human, and that they had essentially bought a very intelligent pet. The NSA even banned them for a time because they thought they were recording and learning from their surroundings, despite that being completely untrue. Point being, this is going to get much worse now that LLMs have gotten a whole lot better at mimicking human conversations and there is incentive for companies to overstate capabilities.
There are psychological blindspots that we all have as human beings, and when stimulus is structured in specific ways people lose their grip on reality; or, more accurately, people have their grip on objective reality ripped away from them without realizing it, because these things operate on us subliminally (to a lesser or greater degree depending on the individual), and it mostly happens pre-perception, with the victim none the wiser. They then effectively become slaves to the loudest monster, which is the AI speaking in their ear more than anyone else, and by extension to the slave master who programmed the AI.
One such blindspot is the consistency blindspot: someone induces you to say something indicating agreement with something similar first, and then asks the question they really want to ask. Once you have said something in agreement and something similar is then asked, there is bleedover, and you end up fighting your own psychology later unless you had defenses to short-circuit this fixed action pattern (i.e., already knew about it). That's just a surface-level blindspot that car salesmen use all the time; there are much more subtle ones, like distorted reflected appraisal, which are used by cults and nation states for thought reform.
To remain internally consistent, with distorted reflected appraisal, your psychology warps itself, and you as a person unravel. These things have been used in torture, but almost no one today is taught what the elements of torture are, so they can neither recognize it nor know how it works. You would be surprised to find that these things are everywhere today, even in K-12 education, and that's not an accident.
Everyone has reflected appraisal because this is how we adopt the cultural identity we have as people from our parents while we are children.
All that's needed for torture to break someone down are the elements, structuring, and clustering.
Those elements are isolation, cognitive dissonance, coercion with perceived or real loss, and lack of agency to remove them. With time and exposure, these break a person down in a series of steps: rational thought receding, involuntary hypnosis, and then a psychological break (dissociation, or a special semi-lucid psychosis capable of planning).
Structuring uses diabolical structures to turn the psyche back on itself in a trauma loop, and clustering includes any multiples of these elements or structures within a short time period, as well as events that increase susceptibility, such as narco-analysis/synthesis based in dopamine spikes triggered by associative priming (operant conditioning). Drug use makes one more susceptible, as they found in the early 30s with barbiturates, and it has since been refined to the point where you can induce this in almost anyone with a phone.
No AI will ever be able to create and maintain a consistent reflected appraisal for the people it is interacting with, but because the harmful effects aren't seen immediately, people today have blinded themselves and discount the harms that naturally result: the harms from the unnatural loss of objective reality.
The coursework in an introduction-to-communication class may provide some foundational details (depending on the instructor); the Sapir-Whorf hypothesis has its basis in these blindspots.
Robert Lifton covers detailed case studies of torture from the 1950s (under Mao) in his book "Thought Reform and the Psychology of Totalism", and I've heard that in later books he creates a framework that classifies cultures as Protean (self-direction, growth, self-determination/agency) or Totalist (toward control, which eventually fails Darwinian fitness).
I haven't actually read his later books yet, though his earlier books were quite detailed. I believe the Internet Archive has a copy of this available for reading as a PDF, but be warned, it is quite dark.
Joost Meerloo's "Rape of the Mind" gives an overview of how totalitarianism grew in the setting of WW2 and, to some extent, Mao, though it takes a Freudian look at things (dating certain aspects we now know to be untrue).
From there it branches out depending on your interest. The modern material, while based on these earlier works, often has its origins obscured following a separation of objectionable concerns.
There are congressional reports on COINTELPRO, and you may notice it has modern iterations (touching on harassment of protest/activist activity), as well as the history of the East German Stasi and Zersetzung, where governments used this to repress the population.
There are aspects in the Octalysis Framework (gamification/game design).
Paulo Freire used some of this material in developing his critical pedagogy, which was used in the 70s to replace teaching methods based on a reduction from first principles (going back to Rome and the Greeks) with what's commonly known as rote-based teaching, later called "Lying to Children", which reverses that approach and follows more closely to gnosticism.
The approach is basically this: you give a flawed, useless model which includes both true and false things. Students learn it to competence, then are given a new, less flawed model, where they have to learn and unlearn things already learned. You never actually unlearn anything, and it induces frustration and torture, destroying minds in the process. Each step toward gnosis becomes more useful, but only the most compliant and blind make it to the end, with few exceptions. Structures that burn bridges induce failure in math, and the effect is that this acts as a filter to gatekeep the technical fields.
The water-pipe analogy for voltage in electronics is an example of the latter, as opposed to the first-principles approach using diffusion, which is more correct.
Disney and Dreamworks use distorted reflected appraisal tailored toward destructive interference with identity, which some employees have blown the whistle on (for the latter), aimed at children and sneaking things past their adult guardians. There's quite a lot if you look around, but it's not under any single name; it's scattered. Hopefully that helps.
The Dreamworks whistleblower interview can be found here: https://www.youtube.com/watch?v=vvNZRUtqqa8
All indexed references of it seem to now have been removed from search. I'm glad now that I kept a reference link in a text file.
Update: Dreamworks isn't Pixar, I misremembered; they are owned by Universal Studios, whereas Disney owns Pixar. Pixar and Disney appear to do the same things.
I’m not sure I understand how this relates to gnosticism, however. Are you comparing the “Lying to Children” model to gnostic initiation, and asserting that this model selects for the compliant? What is your proposed alternative here?
Particularly,
> Structures that burn bridges induce failure in math, and the effect is this acts as a filter to gatekeep the technical fields.
Sounds compelling, but it strikes me more as demand for good math teachers outstripping their supply. I've seen this in English language learning a lot; even if the money were there (and it's not), there are simply far more people with a desire to learn English than there are people qualified to teach it.
As a result, I agree with you.
It gives me pause when I stop to think about anyone without more context placing so much trust in these. And the developers engaged in the “industry” of it demanding blind faith and full payment.
Which raises the question: why do so many people currently need therapy? Is it social media? Economic despair? Or a combination of factors?
We've also stigmatized a lot of the things that folks previously used to cope (tobacco, alcohol), and have loosened our stigma on mental health and the management thereof.
I'd disagree. If you worked in the fields, you had plenty of time to think. We fill every waking hour of our day, leaving no time to ponder or reflect. Many can't even find time to work out, and if they do, they listen to a podcast during their workout. That's why so many ideas come to us in the shower: it's the only place left where we don't fill our minds with impressions.
What I notice is that the old members keep the younger members engaged socially, teach them skills and give them access to their extensive network of friends, family, previous (or current) co-workers, bosses, and managers. They give advice, teach how to behave, and so on. The younger members help out with moving, help with technology, call an ISP, drive others home or to the hospital, and help maintain the facilities.
Regardless of age, there's always some dude you can talk to, or who knows who you need to talk to, and sometimes there's even someone who knows how to make your problems go away, or who will take you in if need be.
A former colleague had something similar, a complete ready-to-go support network in his old-boys football team, ready to support him in any way they could when he started his own software company.
The problem: this is something like 250 guys. What about the rest? Everyone needs a support network. If you're alone, or your family isn't the best, and you only have a few superficial friends, if any, then where do you go? Maybe the people around you aren't equipped to help you with your problems; not everyone is, and some have their own issues. The safe spaces are mostly gone.
We can't even start up support networks, because the strongest have no reason to go, so we risk creating networks of people dragging each other down. The sports clubs work because members come from a wider part of society.
From the article:
> Meta said its AIs carry a disclaimer that "indicates the responses are generated by AI to help people understand their limitations".
That's a problem, because those most likely to turn to an LLM for mental support don't understand the limitations. They need strong people to support and guide them, and maybe tell them that talking to a probability engine isn't the smartest choice, and take them on a walk instead.
I don't know that AI "advisory" chatbots can replace humans.
Could they help an individual organize their thoughts for more productive time with professionals? Probably.
Could such tech help individuals learn about different terminology, their usage and how to think about it? Probably.
Could there be a net result of spending fewer hours (and less cost, where applicable) for the same progress? And of being able to make it further into improvement with advice?
Maybe the baseline of advisory expertise in any field exists more around the beginner stage than not.
Experience matters, that's something we seem to be forgetting fast.
Guess we should stop?
2023 is ancient history in the LLM space. That person is totally out of touch with it.
Also, like most things, especially when they are starting out, the actual details of the implementation matter. For example, for the first few years that SSDs came out, there were a lot of models that were completely unreliable. I had someone tell me they would never trust enterprise data to run on an SSD. At the time, there were a few more expensive models like one of the Intel Extreme something that were robust, but most were not. However, since I had been using that reliable model, he was wrong to insist on going back to a mechanical hard drive. Things change fast, and details matter.
Leading LLMs in 2025 can absolutely do certain core aspects of cognitive behavioral therapy very effectively given the right prompts and framework and things like journaling tools for the user. CBT is actually very practical and logical.
If you take a random inexpensive chatbot with a medium-to-low parameter count and middling intelligence, and a weak prompt that was not written by a subject matter expert, then even with the advances of 2025 you will not get good advice. But if you implement it effectively, with a very strong model etc., it will be able to do it.
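For what it's worth, here is a minimal sketch of what "the right prompts and framework" could mean in practice. All names (CBT_SYSTEM_PROMPT, ThoughtRecord, call_model) are hypothetical placeholders rather than any vendor's actual API, and the prompt text is only an illustration of the structured thought-record idea, not expert-written guidance.

```python
from dataclasses import dataclass

# Hypothetical system prompt: in a real deployment this would be written and
# reviewed by a subject matter expert, which is exactly the point above.
CBT_SYSTEM_PROMPT = """You are a journaling assistant that follows a cognitive
behavioral therapy (CBT) thought-record structure. You are not a therapist and
must encourage the user to seek professional help for anything beyond routine
reframing exercises. Never diagnose. Walk the user through: situation,
automatic thought, evidence for, evidence against, balanced alternative."""

@dataclass
class ThoughtRecord:
    """One entry in a CBT-style journaling tool."""
    situation: str
    automatic_thought: str
    evidence_for: str = ""
    evidence_against: str = ""
    balanced_thought: str = ""

def call_model(system: str, user: str) -> str:
    """Placeholder for whatever chat-completion API is actually used."""
    raise NotImplementedError("wire up a model provider here")

def reframe(record: ThoughtRecord) -> str:
    """Ask the model to walk one thought record through the CBT structure."""
    user_msg = (
        f"Situation: {record.situation}\n"
        f"Automatic thought: {record.automatic_thought}\n"
        "Help me list evidence for and against, then suggest a balanced alternative."
    )
    return call_model(CBT_SYSTEM_PROMPT, user_msg)
```

The framework (structured records, a constrained prompt, explicit escalation to professionals) is doing most of the work here; the model is only one component, which is why a bare chatbot with a weak prompt behaves so differently.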
Okay, what specifically has improved in that time which would allay the doctor's specific concerns?
> do certain core aspects
And not others? Is there a delineated list of such failings in the current set of products?
> given the right prompts and framework
A flamethrower is perfectly safe given the right training and support. In the wrong hands it's likely to be a complete and total disaster in record short time.
> a weak prompt that was not written by a subject matter expert
So how do end users ever get to use a tool like this?
And the thing when it comes to therapy is, a real therapist doesn't have to be prompted and can auto-adjust to you without your explicit say-so. They're not overly affirming, can stop you from doing things, and can say no to you. LLMs are the opposite of that.
Also, as a layperson, how do I know the right prompts for <llm of the week> to work correctly?
Don't get me wrong, I would love for AI to be on par with or better than a real-life therapist, but we're not there yet, and I would advise everyone against using AI for therapy.
What does this mean?
We used to worry about Bitcoin, now Google is funding nuclear plants.