https://www.theguardian.com/us-news/2025/apr/24/california-b...
So lawyers use it, judges use it ... have we seen evidence of lawmakers submitting AI-generated language in bills or amendments?
There was an entire team dedicated to this work, and the hours were insane when the legislature was in session. She ended up not taking the job because of the downsides of moving to the capital, so I don't know more about it. I'd be curious how much AI has changed what that team does now. They'd certainly still want to look meticulously at every character, but it's certainly possible that AI has gotten better at analyzing the "average" ruling, which might make the job a little easier. What I know about law, though, is that it's often defined by the non-average ruling; there's a sort of fractal nature to it, and it's the unusual cases that often forever shape future interpretations of a given law. Unusual scenarios are something LLMs generally struggle with, and add to that the need to creatively come up with scenarios that might further distort the bill, and I'd expect LLMs to be patently bad at creating laws. So while I have no doubt that legislators (and lobbyists) are using AI to draft bills, I am positive that there is still a lot of work that goes into refining bills, and we're probably not seeing straight vibe drafting.
This is a bit like all the stats like "this appears to be an unprecedented majority in the last 10 years in a Vermont county starting with G for elections held on the 4th when no candidate is from Oklahoma".
Lots of things are historic, but that doesn't necessarily mean they're impressive overall. More interesting is how many of these cases have already been tried, which would mean this one isn't "historic" simply for being the first one decided.
Do you think a Civil Engineer (PE) should be held liable if they vibe engineered a bridge using an LLM without reviewing the output? For this hypothetical, let’s assume an inspector caught the issue before the bridge was in use, but it would’ve collapsed had the inspector not noticed.
Why jail time for lawyers who use ChatGPT, but not programmers? Are we that unimportant compared to the actually useful members of society, whose work actually has to be held to standards?
I don't think you meant it this way, but it feels like a frank admission that what we do has no value, and so compared to other people who have to be correct, it's fine for us to slather on the slop.
Surely it would suffice to eject him from the California bar.
I think this is a good reason for fines not to be incredibly big. People are using AI all the time. There will be an adjustment period until they learn its limitations.
It's not a large step after that to verify that a quote actually exists in the cited document, though I can see how perhaps that was not something that was necessary up to this point.
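As a rough sketch of what that verification step could look like (a hypothetical quote-checker, not any real filing tool), normalizing whitespace and quote styles before a substring check covers most formatting differences between a brief and the cited document's text:

```python
import re

def normalize(text: str) -> str:
    """Lowercase, straighten curly quotes, and collapse whitespace so
    formatting differences don't cause false mismatches."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_exists(quote: str, document_text: str) -> bool:
    """Return True if the quoted passage appears verbatim (modulo
    whitespace and quote style) in the cited document's text."""
    return normalize(quote) in normalize(document_text)
```

Fetching the cited document in the first place is the harder part, but once you have its text, a check like this flags a fabricated quote immediately.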
I have to think the window on this being even slightly viable is going to close quickly. When you ship something to a judge and their copy ends up festooned with "NO REFERENT" symbols it's not going to go well for you.
Wow. Seems like he really took the lesson to heart. We're so helpless in the face of LLM technology that "having some victims, having some damages" (rather than reading what you submit to the court) is the inevitable price of progress in the legal profession?
21 of 23 citations are fake, and so is whatever reasoning they purport to support, and that's casually "adding some citations"? I sometimes use tools that do things I don't expect, but usually I'd like to think I notice when I check their work... if there were 2 citations when I started, and 23 when I finished, I'd like to think I'd notice.
lordnacho•1h ago
This lawyer fabricating his filings is going to be among the first in a bunch of related stories: devs who check in code they don't understand, doctors diagnosing people without looking, scientists skipping their experiments, and more.
unshavedyak•1h ago
You're thinking too linearly, imo. Your examples are areas where AI will "take", just perhaps not entirely replace people.
I.e., if liability is the only thing stopping them from being replaced, what's stopping them from simply assuming more liability? Why can't one lawyer assume the liability of ten lawyers?
lordnacho•32m ago
Just like with a lot of other jobs that got more productive.
observationist•27m ago
They don't understand how to calibrate their model of the world with the shape of future changes.
The gap between people who've been paying attention and those who haven't is going to increase, and the difficulty in explaining what's coming is going to keep rising, because humans don't do well with nonlinearities.
The robots are here. The AI is here. The future is now, it's just not evenly distributed, and by the time you've finished arguing or explaining to someone what's coming, it'll have already passed, and something even weirder will be hurtling towards us even faster than whatever they just integrated.
Sometime in the near future, there won't be much for people to do but stand by in befuddled amazement and hope the people who set this all in motion knew what they were doing (because if we're not doing that, we're all toast anyway).