I wonder how different the results would be if all of the participants in the study were lawyers too. They would probably be way better at telling which response came from AI vs. a lawyer, but I wonder if they would also prefer the ChatGPT responses.
treetalker•9mo ago
Practicing appellate lawyer here.
As it so happens, just today I decided to give GPT-4o (through Kagi Assistant) a whirl on a legal issue I was dealing with, purely as an experiment. I gave it a custom context prompt (i.e., it was one of Kagi's custom agents/assistants). The context prompt was actually taken verbatim from another HN post from today (the one about querying the Apollonian oracle, if anyone saw that one).
I was impressed with the framework it told itself (and me) it would use to answer my question and draft a sample motion / outline based on that answer. By "impressed," I mean that it was basically the pattern a 1L would be taught in a respectable legal rhetoric course (and one that many practicing lawyers can't manage to follow).
And then it invented cases to support the point I was trying to make. Or it cited the name of an actual case but gave the citation of another, and offered holdings that appeared in neither. And it took me, a professional, several rounds to get the thing to stop offering bullshit and admit that it couldn't properly answer my query.
I would not be surprised that laypersons would trust an AI (read: LLM) answer over a competent lawyer's answer: the layperson has no expertise to judge; the LLM probably errs toward sycophancy, building trust without the layperson realizing it; and the answer is both faster and (far) less expensive. But you get what you pay for.
Even lawyers need lawyers, and that's both a truth and an evergreen source of jokes and stories for us.
Every legal application of LLMs I've seen is utter trash and I wouldn't trust them under any circumstances. Stay far away and beware.