https://www.theguardian.com/us-news/2025/apr/24/california-b...
So lawyers use it, judges use it ... have we seen evidence of lawmakers submitting AI-generated language in bills or amendments?
There was an entire team dedicated to this work, and the hours were insane when the legislature was in session. She ended up not taking the job because of the downsides of moving to the capital, so I don't know more about it. I'd be curious how much AI has changed what that team does now. Certainly they would still want to meticulously look at every character, but it's quite possible that AI has gotten better at analyzing the "average" ruling, which might make the job a little easier.

What I know about law, though, is that it's often defined by the non-average ruling. There's sort of a fractal nature to it, and it's the unusual cases that often forever shape future interpretations of a given law. Unusual scenarios are something LLMs generally struggle with, and add to that the need to creatively come up with scenarios that might further distort the bill, and I'd expect LLMs to be patently bad at creating laws. So while I have no doubt that legislators (and lobbyists) are using AI to draft bills, I am positive that there is still a lot of work that goes into refining them, and we're probably not seeing straight vibe drafting.
Most people would be shocked to find the majority of bills are simply copycat bills or written by lobbyists.
https://goodparty.org/blog/article/who-actually-writes-congr...
Bank lobbyists, for example, authored 70 of the 85 lines in a Congressional bill that was designed to lessen banking regulations – essentially making their industry wealthier and more powerful. Our elected officials are quite literally, with no exaggeration, letting massive special interests write in the actual language of these bills in order to further enrich and empower themselves… because they are too lazy or disinterested in the actual work of lawmaking themselves.
A two-year investigation by USA Today, The Arizona Republic, and The Center for Public Integrity found widespread use of “copycat bills” at both federal and state levels. Copycat legislation is the phenomenon in which lawmakers introduce bills that contain identical language and phrases to “model bills” that are drafted by corporations and special interests for lobbying purposes. In other words, these lawmakers essentially copy-pasted the exact words that lobbyists sent them.
From 2011 to 2019, this investigation found over 10,000 copycat bills that lifted entire passages directly from well-funded lobbyists and corporations. 2,100 of these copycat bills were signed into law all across the country. And more often than not, these copycat bills contain provisions specifically designed to enrich or protect the corporations that wrote the initial drafts.
Fines are arbitrary numbers, set by people who don't necessarily know what the fines for other offenses are.
This is a bit like all those stats: "this appears to be an unprecedented majority in the last 10 years in a Vermont county starting with G, for elections held on the 4th when no candidate is from Oklahoma".
Lots of things are historic but that doesn't necessarily mean they're impressive overall. More interesting is how many of these cases have already been tried such that this isn't "historic" for being the first one decided.
Do you think a Civil Engineer (PE) should be held liable if they vibe engineered a bridge using an LLM without reviewing the output? For this hypothetical, let’s assume an inspector caught the issue before the bridge was in use, but it would’ve collapsed had the inspector not noticed.
A single person can design a building, why not a bridge?
P.S. I sell and run commercial construction work
Why jail time for lawyers who use ChatGPT, but not programmers? Are we that unimportant compared to the actually useful members of society, whose work has to be held to standards?
I don't think you meant it this way, but it feels like a frank admission that what we do has no value, and so compared to other people who have to be correct, it's fine for us to slather on the slop.
Programmers generally don't need a degree or license to work. Anyone can become a programmer after a few weeks of work. There are no exams to pass unlike doctors or lawyers.
In the absence of safeguards like licensing and exams, it becomes all the more important to use criminal and civil law to punish bad programmers.
speak for yourself. some of us are ready to retire and/or looking for parts of the field where code generation is verboten, for various reasons.
Sort of like recklessly vibe coding and pushing to prod. The cardinal rule with AI is that we should all be free to use it, but we're still equally responsible for the output we produce, regardless of the tooling we use to get there. I think that applies equally across professions.
Surely it would suffice to eject him from the California bar -- or suspend him from it for a time.
What do you understand sacred to be, and why would you include the legal system in that category?
I think this is a good reason for fines to not be incredibly big. People are using AI all the time; there will be an adjustment period until they learn its limitations.
It's not a large step after that to verify that a quote actually exists in the cited document, though I can see how perhaps that was not something that was necessary up to this point.
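A minimal sketch of what that check could look like, assuming the cited documents have already been pulled down as plain text (the file path, quote, and helper names here are all hypothetical, and a real tool would need fuzzy matching on top of this):

```python
import re

def normalize(text: str) -> str:
    # Collapse whitespace so PDF line breaks don't cause false misses.
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_appears_in(quote: str, document_text: str) -> bool:
    # Naive substring check after normalization; a real tool would also
    # tolerate OCR noise and bracketed edits or ellipses in the quote.
    return normalize(quote) in normalize(document_text)

# Hypothetical data: each quoted passage mapped to the opinion it cites.
citations = {
    "the duty of care extends to foreseeable users": "opinions/smith_v_jones.txt",
}

for quote, path in citations.items():
    with open(path, encoding="utf-8") as f:
        status = "FOUND" if quote_appears_in(quote, f.read()) else "NO REFERENT"
    print(f"{status}: {quote!r}")
```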
I have to think the window on this being even slightly viable is going to close quickly. When you ship something to a judge and their copy ends up festooned with "NO REFERENT" symbols it's not going to go well for you.
Why would I pay for software to do what I could do with my own eyes in 2 minutes?
Wow. Seems like he really took the lesson to heart. We're so helpless in the face of LLM technology that "having some victims, having some damages" (rather than reading what you submit to the court) is the inevitable price of progress in the legal profession?
21 of 23 citations are fake, and so is whatever reasoning they purport to support, and that's casually "adding some citations"? I sometimes use tools that do things I don't expect, but usually I'd like to think I notice when I check their work... if there were 2 citations when I started, and 23 when I finished, I'd like to think I'd notice.
I disagree. It worked until now, and using AI is clearly doing more harm than good, especially in situations where you hire an expert to help you.
Remember, a lawyer is someone who has actually passed a bar exam, and with that comes the understanding that whatever they sign, they validate as correct. The fact that they used AI here actually isn't the worst part. The fact that they blindly signed it afterwards is a sign that they are unfit to be a lawyer.
We can argue that this might be pushed from upper management, but remember, the license is personal. They can't hide behind such a mandate.
It's the same discussions I'm having with colleagues about using AI to generate code, or to review code. At a certain moment there is pressure to go faster, and stuff gets committed without a human touching it.
Until that software ends up on your glucose pump, or the system used to radiate your brain tumor.
My guess is that he probably doesn't believe that, but that he's smart enough to try to spin it that way.
Since his career should be taking at least a small hit right now, not only for getting caught using ChatGPT, but also for submitting blatant fabrications to the court.
The court and professional groups will be understanding, and want to help him and others improve, but some clients/employers will be less understanding.
Sure. It's also unrealistic to expect nobody to murder anyone. That's why we invented jail.
Here we have irrefutable evidence of just how bad the result of using AI would be and yet... the response is just that we need to accept that there is going to be damage but keep using it?!?
This isn't a tech company that "needs" to keep pushing AI because investors seem to think it is the only path of the future. There is absolutely zero reason to keep trying to shoehorn this tech in places it clearly doesn't belong.
We don't need to accept anything here. Just don't use it... why is that such a hard concept?
It's not just the result of using AI, it's the result of failing to vet the information he was providing the court. The same thing could've happened if he hired a paralegal from Fiverr to write his pleadings and didn't check their work.
It's like saying that because he typed it on a computer, it's the computers that are the problem, and we shouldn't keep using them.
We're already at least a year past AI tools gaining the ability to perform grounding (Lexis+ from LexisNexis, as I cited in another comment in this post, for example), so this whole fiasco is already something from a bygone era.
They don't care if you use an AI or a Llama spitting at a board of letters to assemble your case, you are responsible for what you submit to the court.
This is just a bad lawyer who probably didn't check their work in many other cases, and AI just enabled that bad behavior further.
I didn't see them mentioned in the article.
[1] https://www4.courts.ca.gov/opinions/documents/B331918.PDF
https://apps.calbar.ca.gov/attorney/Licensee/Detail/282372
Doesn't seem like there's any kind of disciplinary action. You can just make stuff up and, if you're caught, pay some pocket change (in lawyer-money territory) and move on.
https://www.courtlistener.com/docket/68990373/nz-v-fenix-int...
After asking for recommendations, I always immediately ask it if any are hallucinations. It then tells me a bunch of them are, then goes "Would you like more information about how LLMs "hallucinate," and for us to come up with an architecture that could reduce or eliminate the problem?" No, fake dude, I just want real books instead of imaginary ones, not to hear about your problems.
Detail: the question was to find a book that examined in detail how the work of a short order cook is done, or any book with a section covering this. I started the question by mentioning that I already had Fast Foods and Short Order Cooking (1984) by Popper et al., and that was the best I had found so far.
It gave me about a half dozen great hallucinations. You can try the question and see how it works for you. They're so dumb. Our economy is screwed.
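For what it's worth, a list like this is cheap to sanity-check programmatically. A rough sketch against Open Library's public search endpoint; the second title below is an illustrative stand-in for the kind of plausible-sounding invention an LLM produces, and zero hits is a hint rather than proof of a hallucination:

```python
import requests

def title_exists(title: str, author: str | None = None) -> bool:
    # Query Open Library's public search API; zero results is a strong
    # hint (not proof) that a recommended book was invented.
    params = {"title": title}
    if author:
        params["author"] = author
    resp = requests.get("https://openlibrary.org/search.json",
                        params=params, timeout=10)
    resp.raise_for_status()
    return resp.json().get("numFound", 0) > 0

# First entry is the real book from the question; second is made up.
recommendations = [
    ("Fast Foods and Short Order Cooking", "Popper"),
    ("The Griddle Line: Inside the Short-Order Kitchen", None),
]

for title, author in recommendations:
    verdict = "found" if title_exists(title, author) else "possibly hallucinated"
    print(f"{title}: {verdict}")
```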
Tools like Lexis+ from LexisNexis[1] now offer grounding, so it won't be as simple to bust people cutting corners in the future, because these tools prevent hallucinations.
We'll be closer to the real Butlerian Jihad when we see pro se plaintiffs/defendants winning cases regularly.
[1] https://www.lexisnexis.com/blogs/en-au/insights/hallucinatio...
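For those unfamiliar, "grounding" here roughly means the model may only cite from a vetted retrieval set, and anything outside that set gets rejected before a human ever sees it. A toy sketch of the post-generation check; the citation format and data are made up, and this is not a claim about how Lexis+ actually works internally:

```python
import re

# Toy grounding check: every citation in a draft must resolve to a
# document in the retrieval set the model was given as context.
RETRIEVED = {  # doc id -> full text handed to the model
    "B331918": "Opinion text of a genuinely retrieved case goes here...",
}

CITATION_RE = re.compile(r"\b[A-Z]\d{6}\b")  # toy citation format

def ungrounded_citations(draft: str) -> list[str]:
    # Flag any citation the model produced that wasn't in its context.
    return [c for c in CITATION_RE.findall(draft) if c not in RETRIEVED]

draft = "As held in B331918 and B999999, counsel bears responsibility..."
print(ungrounded_citations(draft))  # ['B999999'] -> block the draft, don't file it
```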
Using a tool that is widely known to be flawed to provide any sort of professional service (legal, medical, accounting, engineering, banking, etc.) is pretty much a textbook definition of negligence.
And lawyers just love negligence.
There is definitely benefit to using language models correctly in law, but lawyers differ from most users in that their professional reputation is at stake with respect to the output being created, and the risk of adoption is always going to be greater for them.
lordnacho•3h ago
This lawyer fabricating his filings is going to be among the first in a bunch of related stories: devs who check in code they don't understand, doctors diagnosing people without looking, scientists skipping their experiments, and more.
unshavedyak•3h ago
You're thinking too linearly imo. Your examples are where AI will "take", just perhaps not entirely replace.
Ie if liability is the only thing stopping them from being replaced, what's stopping them from simply assuming more liability? Why can't one lawyer assume the liability of ten lawyers?
lordnacho•2h ago
Just like with a lot of other jobs that got more productive.
observationist•2h ago
They don't understand how to calibrate their model of the world with the shape of future changes.
The gap between people who've been paying attention and those who haven't is going to increase, and the difficulty in explaining what's coming is going to keep rising, because humans don't do well with nonlinearities.
The robots are here. The AI is here. The future is now, it's just not evenly distributed, and by the time you've finished arguing or explaining to someone what's coming, it'll have already passed, and something even weirder will be hurtling towards us even faster than whatever they just integrated.
Sometime in the near future, there won't be much for people to do but stand by in befuddled amazement and hope the people who set this all in motion knew what they were doing (because if we're not doing that, we're all toast anyway).