Way too many people think that LLMs understand the content in their training data.
What annoys me more about this type of response is that I feel there's a less rude way to express the same point.
Second, the mistakes weren't just incorrect citations that any paralegal could check.
... Some of the 'mistakes' (strictly speaking they are not mistakes, of course) are _citations of cases which do not exist_.
That leaves, as lawyers, only those who have zero reputation left to lose, those who want to make a name for themselves in the far-right scene, those who are members of the cult themselves, and those who think they can milk an already dead/insolvent horse.
Jones is a good example of this. He cycled through about 20 different lawyers during the Sandy Hook trials. The reason he was defaulted is that whenever he was required to produce something, he'd fire the lawyers (or they'd quit) and hire new ones, and invariably, when a deposition got to "did you bring this document the court mandated that you produce?", the answer was "oh, sorry, I'm brand new to this case and didn't know anything about that".
Jones wasn't cooperating with his lawyers.
There are plenty of good lawyers who have no problem representing far-right figures. The issue really comes down to those figures being willing to follow their lawyer's advice.
The really bad lawyers simply don't care if their clients ignore their advice.
Remember Michael Avenatti?
I don’t think that nexus is political, for either party. It’s all tied to one man.
He’s a bankrupt, likely mentally ill acolyte of a dude who is infamous for stiffing his lawyers. His connection with reality is tenuous at best.
For example, yesterday I got a list of some study resources for abstract algebra. Claude referred me to a series by Benedict Gross (which is excellent, btw). It gave me a link to Harvard's website, but it was a 404, and it was only with further searching that I found the real thing. It also suggested a YouTube playlist by Socratica (again, this exists, but the URL was wrong) and one by Michael Penn (same deal).
Literally every reference was almost right but actually wrong. How does anyone have the confidence to ship a legal brief that an AI produced without checking it thoroughly?
Just two days ago, I gave it a list of a dozen article titles from a newspaper website (The Guardian), asked it to look up their URLs and give me a list, and to summarise each article for me, and it made no mistakes at all.
Maybe your task was harder in some way, maybe you're not paying for ChatGPT and are on a less capable model, or maybe it's a question of learning how to prompt. I don't know; I just know that for me it's gone from "assume sources cited are bullshit" to "verify each one still, but they're usually correct".
Something missing from this conversation is whether we're talking about the raw model or model+tool calls (search). This sounds like tool calls were enabled.
And I do think this is a sign that the current UX of the chatbots is deeply flawed: even on HN, we apparently don't toggle these features often enough for them to be the intuitive explanation; instead we still talk about model classes as though that makes the biggest difference in accuracy.
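For what it's worth, the distinction is visible at the API level. Here's a minimal sketch using the OpenAI Python SDK; the model name and the "web_search_preview" tool type are assumptions based on the current Responses API docs, so verify them against whatever your account exposes:

    import openai  # pip install openai

    client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "Give me the URL of Benedict Gross's abstract algebra lectures."

    # Raw model: answers purely from training data; URLs may be confabulated.
    raw = client.responses.create(
        model="gpt-4o",  # assumed model name; substitute whatever you use
        input=question,
    )

    # Model + search tool: the model can ground its answer in live results.
    # "web_search_preview" is the tool type documented for the Responses API
    # at the time of writing; treat it as an assumption, not gospel.
    grounded = client.responses.create(
        model="gpt-4o",
        tools=[{"type": "web_search_preview"}],
        input=question,
    )

    print(raw.output_text)
    print(grounded.output_text)

Same model either way; the second call is the one that can actually check a URL before handing it to you.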
But the reason I suggested model as a potential difference between me and the person I replied to, rather than ChatGPT interface vs. plain use of model without bells and whistles, is that they had said their trouble was while using ChatGPT, not while using a GPT model over the API or through a different service.
[#] (Technically I didn't, and never do, have the "search" button enabled in the chat interface, but it's able to search/browse the web without that focus being selected.)
I think people who do are simply not aware that AI is not deterministic the same way a calculator is. I would feel entirely safe signing my name on a mathematical result produced by a calculator (assuming I trusted my own input).
And I don’t mean essays edited with ChatGPT, but essays that are clearly verbatim output. When the teacher asks the students to read them out loud to the class, they stumble over words and grammar that are obviously way beyond anything we’ve studied. The utter lack of self-awareness is both funny and really sad.
I could see that, especially with sloppy lawyers in the first place. Or, I could see it being a convenient "the dog ate my homework" excuse.
Casually scrolling through TechCrunch I see over $1B in very recent investments into legal-focused startups alone. You can't push the messaging that the technology to replace humans is here and expect people will also know intrinsically that they need to do the work of checking the output. It runs counter to the massive public rollout of these products which have a simple pitch: we are going to replace the work of human employees.
1. Use reasoning models and include in the prompt an instruction to check the cited cases and verify the holdings.

2. Take the draft, run it through ChatGPT Deep Research, Gemini Deep Research, and Claude, and tell each to verify the holdings.

I still double-check, for now, but this is catching every hallucination.
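Step 2 is also easy to automate. A rough sketch of the cross-check loop, again assuming the OpenAI SDK and the Responses API's search tool (the prompt and the hard-coded citation list are purely illustrative):

    import openai  # pip install openai

    client = openai.OpenAI()

    # Citations pulled from the draft; in practice you'd parse these out of
    # the document rather than hard-code them. The example below is the
    # fabricated cite from the Mata v. Avianca sanctions case.
    citations = [
        "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)",
    ]

    VERIFY_PROMPT = (
        "Does the following citation refer to a real, published opinion? "
        "If yes, state the actual holding in one sentence. If you cannot "
        "confirm it exists, reply exactly 'UNVERIFIED'. Citation: {cite}"
    )

    for cite in citations:
        resp = client.responses.create(
            model="gpt-4o",  # assumed model name
            tools=[{"type": "web_search_preview"}],  # assumed tool type
            input=VERIFY_PROMPT.format(cite=cite),
        )
        print(cite, "->", resp.output_text)

Even then, treat a "verified" answer as advisory: the only real ground truth is pulling the opinion from Westlaw or PACER yourself, which is why I still double-check.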
https://www.sfgate.com/bayarea/article/controversy-californi...
The lawyer jokes aren't funny anymore...
I mean, that's always been tech's modus operandi...
Glad to see that this is the outcome. As with bribery and similar offences, the hammer has to be big and heavy so that people stop considering this an option.