Worth pointing out that such systems have survived a long, long time because access to them is free, irrespective of quality.
Being less snarky: there is a monumental conflict of interest here that makes the study worthless.
Literally none of the authors are therapists. They are all researchers.
The conflict of interest is entirely made up by you.
In reality, what matters is the methodology of the study. If the study's methodology is sound, and its results can be reproduced by others, then it is generally considered to be a good study. That's the whole reason we publish methodologies and results: so others can critique and verify. If you think this study is bad, explain why. The whole document is there for you to review.
Who can argue with a stall preventer, right? What one can argue with, and what has since been exposed, is that information about the stall preventer's operation, the training for it, and even the ability to effectively control it all depended on how much the airline was willing to pay for this supposedly necessary feature.
So in reality, what matters is studying the methodology of set and setting, not how the pieces of the crashed airliner ended up where they did.
As it relates to study design, controlling for set and setting are part of the methodology. For example, most drug studies are double-blinded so that neither patients nor clinicians are aware of whether the patient is getting the drug or not, to reduce or eliminate any placebo effect (i.e. to control for the "set"/mental state of those involved in the study).
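To make the blinding idea concrete, here is a minimal, purely illustrative sketch (in Python; it has nothing to do with the study under discussion): participants are randomized to an arm, but patients and clinicians only ever see an opaque kit code, while the unblinding key stays with a third party.

    import random

    def blinded_allocation(participant_ids, seed=42):
        # Randomize each participant to "drug" or "placebo".
        # Everyone on the ward sees only an opaque kit code; the
        # unblinding key is held separately until the trial ends.
        rng = random.Random(seed)
        key = {}      # participant -> (kit code, true arm); kept sealed
        visible = {}  # participant -> kit code; all that patients/clinicians see
        for pid in participant_ids:
            arm = rng.choice(["drug", "placebo"])
            code = f"KIT-{rng.randrange(10000, 99999)}"
            key[pid] = (code, arm)
            visible[pid] = code
        return visible, key

    visible, key = blinded_allocation(["P01", "P02", "P03", "P04"])
    print(visible)  # no arm information leaks to patients or clinicians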
There are certainly some cases in which it's effectively impossible to control for these factors (e.g., psychedelics). That's not what's really being discussed here, though.
An airline crash is an n of 1 incident, and not the same as a designed study.
... compared to humans? Yes. This is a philosophical conundrum you tie yourself up in if you choose to treat artificial intelligence as equivalent to, rather than a simulacrum of, human intelligence. We fly (planes): are we "smarter" than birds? We breathe underwater: are we "smarter" than fish? And so on.
How do you discern that the "other" has an internal representation and dialogue? Oh. Because a human programmed it to be so. But how do you know that another human has internal representation and dialogue? I do (I have conscious control over the verbal dialogue but that's another matter), so I choose to believe that others (humans) do (not the verbal part so much unfortunately). I could extend that to machines, but why? I need a better reason than "because". I'd rather extend the courtesy to a bird or a fish first.
This is an epistemological / religious question: a matter of faith. There are many things which we can't really know / rigorously define against objective criteria.
This is about determining whether AI can be an equivalent or better (defined as: achieving equal or better clinical outcomes) therapist than a human. That is a question that can be studied and answered.
Whether artificial intelligence accurately models human intelligence, or whether an airplane is "smarter" than a bird, are entirely separate questions that can perhaps serve to explain _why/how_ the AI can (or can't) achieve better results than the thing we're comparing against, but not whether it does or does not. Those questions are perhaps unanswerable based on today's knowledge. But they're not prerequisites.
It’s impossible to think that you are discussing this in good faith at this point.
'LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be.'
And they also mention a previous paper that found high levels of engagement from patients.
So, they have potential but are currently giving dangerous advice. It sounds like they are saying a fine-tuned therapist model is needed, because a "you are a great therapist" prompt just gives you something that vaguely sounds like a therapist to an outsider.
Sounds like an opportunity honestly.
Would people value a properly trained therapist model enough to pay for it over an existing ChatGPT subscription?
If you choose to believe, as Jaron Lanier does, that LLMs are a mashup (or, as I would characterize it, a funhouse mirror) of the human condition as represented by the Internet, then this sort of implicit bias is already present in most social media. It is further distilled by the practice of hiring third-world workers to tag training sets and provide the "reinforcement learning": people who are effectively, if not actually, in the thrall of their employers and can't help but reflect their own sycophancy.
As someone who is therefore familiar with this process in a wider systemic sense, I need (hope for?) something in articles like this that diagnoses or mitigates the underlying process.