Unions are certainly too powerful in several industries: police, medicine, and law. But having no association at all to hold these professionals accountable is a bad idea.
You are, however, entitled to represent yourself without passing the bar, and thus to use AI to help with your case.
Even for the remaining lawyers, I imagine that their billable hours will crater due to competitive dynamics.
If you were to come forward with an AI lawyer now, in practice it would be almost as if you had no lawyer at all and were representing yourself, which will get you the worst possible deal, if any. Things shouldn't be this way, but the system is crooked, and so they are.
As such, I think some lawyers are going away, but not all. The ones who stand in court will have business.
This isn't why expensive lawyers tend to get better results in court, or why those who represent themselves often end up screwed. I'm against the legal monopoly system, but this is out-of-touch and silly.
Expensive lawyers can get better deals in court even for run-of-the-mill cases. Why is this? Are cheaper lawyers so dumb that they can't even handle common cases?
As usual with "AI replacing humans", the key thing to consider here is accountability.
I want to get my legal advice from someone who is accountable for that advice, and is willing to stake their professional reputation on that advice being correct.
An LLM can never be accountable.
I don't want an LLM for a lawyer. I want a lawyer armed with LLMs, who's more effective than the previous generations of lawyers.
(I'd also like them to be cheaper because they take less time to solve my problems, but I would hope that means they can take on more clients and maintain a healthy income that way even as each client takes less time.)
The closing paragraph of that story:
> ‘My niece is a lovely girl, really smart, great at school, and the other day she told me she wants to be a lawyer. And I thought, “Oh my God, my little niece wants to be a lawyer”, and I flat out told her. I said please do not destroy your life. Do not get into a lifetime of debt for a job that won’t exist in ten years. Or less.’
Uh oh. Here we go again, with the "don't bother studying computer science, it's 2002, all the jobs will be outsourced to cheaper countries in the next few years!". So glad I didn't listen to that advice back then!
That's where AI businesses will make bank.
When they actually underwrite the risk of their models and sell that to clients - that's going to command an extremely high price premium.
The models aren't there yet, though.
From what we've seen thus far, there's a non-zero chance the lawyer armed with LLMs will submit a brief generated by said LLM without reviewing it, which makes the judge none too happy.
Look at how people handle bringing their cell phones with them while driving. Some people won't use it at all. Some will play music (unrelated to driving but overall neutral as long as they aren't fiddling with it). Some will use it for GPS driving assistance (net positive). But, many will irresponsibly use it for texting/talking while driving, which is at least as bad as being inebriated and can lead to harming themselves and others.
Don't expect people to be any more or less responsible with LLMs.
There are some promising AI-driven tools these days that use search against archives of cases to help check that citations aren't garbage. I'm hoping lawyers start using them to help pick apart each other's laziness.
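A minimal sketch of what such a citation check might look like; the in-memory "archive" here is just a placeholder for whatever case-law database or search service a real tool would query:

```python
# Minimal sketch of automated citation checking for an LLM-drafted brief.
# The "archive" is an in-memory set; a real tool would query a case-law
# database or search service instead (that part is assumed, not shown).
import re

# Rough heuristic for reporter-style citations like "410 U.S. 113".
CITATION_PATTERN = re.compile(r"\b\d+\s+[A-Z][\w.]*\s+\d+\b")

KNOWN_CASES = {"410 U.S. 113", "347 U.S. 483"}  # stand-in for a real archive

def flag_suspect_citations(brief_text: str) -> list[str]:
    """Return citations that don't resolve against the archive."""
    citations = CITATION_PATTERN.findall(brief_text)
    return [c for c in citations if c not in KNOWN_CASES]

brief = "As held in 410 U.S. 113 and the (nonexistent) 999 F.4th 1234, ..."
print(flag_suspect_citations(brief))  # -> ['999 F.4th 1234']
```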
The only way to guarantee that is to have a lawyer not armed with LLMs.
We've seen dozens of examples already of lawyers doing exactly that. (Some of them have then doubled down in court, to their eventual detriment.)
If you're making a habit of using LLMs to draft briefs for you, how long before you just forget to check the cited cases to replace the hallucinated ones with real ones? Or decide not to check, because surely they'll be fine this time...only they're not?
"Your honor, the death penalty for a traffic ticket?"
AI will never replace humans in this capacity. Lawyers may be scummy but most people would take a slimeball lawyer over a hallucinating, sycophantic "AI" pretending to be both a human and a lawyer. This reads more like astroturfing by Sam Altman to keep the ChatGPT hype going while he cashes out.
Any bets that this won't happen in just 10 years?
The DARPA Grand Challenge was 20 years ago, and self-driving still isn't on the interstates. Waymo is amazing, but it's still a work in progress.
I know it's coming, but solving problems that require 99.999% correctness is hard work. Mistakes multiply.
A toy can be ready tomorrow, but a precision legal tool needs to be better than humans. Not unlike driving 70 mph on the interstate highway with hands off the wheel.
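To put rough numbers on "mistakes multiply": if each step in a long chain has some per-step accuracy, the end-to-end accuracy compounds. The figures below are illustrative round numbers, not benchmarks of any model:

```python
# Rough illustration of how per-step error compounds over a long chain of
# dependent steps. The per-step accuracies are made-up round numbers.
def end_to_end(per_step_accuracy: float, steps: int = 100) -> float:
    """Chance of getting every one of `steps` independent steps right."""
    return per_step_accuracy ** steps

for acc in (0.99, 0.999, 0.99999):
    print(f"{acc} per step over 100 steps -> {end_to_end(acc):.3f} end to end")
# 0.99    -> 0.366
# 0.999   -> 0.905
# 0.99999 -> 0.999
```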
Which is not to say you're wrong, but maybe we should look at ways of making the transition better, easier and less stressful. Perhaps actually giving people a choice, rather than having technocrats ram it down our throats.
That will be the biggest criminal case processed without lawyers!
What worries me is the idea of them replacing JUDGES.
Read Weapons of Math Destruction.
It’s a short leap to comparing model scores to determine a quick and dirty settlement “winner”, which really isn’t that far from today's manual processes.
Lawyering will look different, but there definitely will be lawyers. Judging on the other hand…. Judging is the one I wonder about.
That's actually the plot of Minority Report. A lot of people think it is about "what if computers could predict crime", but it is really about "what do you do when your 'omniscient' machines disagree with each other?"
Either way the idea of getting sent to prison and having 0 human interaction is terrifying.
Obviously AI will change the legal industry. But a lawyer will still have an advantage because they know what questions to ask and can provide the AI with the relevant context.
Recently I asked Claude if I should convert my LLC to an S Corp for tax savings, and it sang the praises of how much I’d save if I did this.
When I asked my accountant, he pointed out that since I live in NYC, the S corp would be taxed in such a way that it would completely wipe out the tax advantage I’d get elsewhere, and I’d likely end up paying more if I did this.
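For anyone curious about the shape of the trade-off the accountant caught, here's a back-of-the-envelope sketch; every rate and figure below is a made-up placeholder, not tax advice for NYC or anywhere else:

```python
# Back-of-the-envelope sketch of the trade-off the chatbot glossed over.
# All rates and amounts are illustrative placeholders, not tax advice;
# plug in real numbers for your jurisdiction (or better, ask an accountant).
PROFIT = 200_000           # hypothetical LLC profit
REASONABLE_SALARY = 100_000
SE_TAX_RATE = 0.153        # rough combined self-employment / payroll tax rate
CITY_ENTITY_TAX = 0.0885   # placeholder for a city-level corporate tax that
                           # still applies because the city ignores the S election

# LLC: roughly all profit is hit by self-employment tax
llc_se_tax = PROFIT * SE_TAX_RATE

# S corp: payroll tax only on the salary portion, but the city taxes the entity
s_corp_payroll_tax = REASONABLE_SALARY * SE_TAX_RATE
s_corp_city_tax = PROFIT * CITY_ENTITY_TAX

print(f"LLC payroll-side tax:      {llc_se_tax:>10,.0f}")
print(f"S corp payroll + city tax: {s_corp_payroll_tax + s_corp_city_tax:>10,.0f}")
# With these placeholder numbers the "savings" largely evaporate, which is the
# kind of local detail the accountant caught and the chatbot missed.
```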
Once it looks like their profession is threatened, you will see many laws against AI.
I thought that decades ago people found a way to avoid lawyers in a specific instance, and I kind of remember that doing so was made against the law. Not sure if I am remembering right, but I could swear that happened.
Edit: Reading the comments, I think it was the bar exam. IIRC there was a time when you could take it without a degree; that was changed to force people to go to college and get a degree.
https://www.reddit.com/r/Lawyertalk/comments/1n9cwfv/pro_se_...
The only thing I feel confident about is that people are bad at predicting the future. Why can't we just wait and see without all this overconfident guessing?
In most jurisdictions, legal advice is a regulated and restricted activity. Qualified lawyers today get themselves into trouble, even without AI, by advising on areas they have no right to practice in.
Any physical-world work might survive for longer - cooking, goods delivery, transport, construction, medical testing, field work, lab work, classroom work, handyman jobs, factory work, farming, mining, fishing, travel & tourism, retail shops, offices, gyms, sports, fashion, hardware, ...
Security itself is a journey, not a destination. To say that you are secure is to say that you have been so clever that nobody, ever again, will be as clever as you just were - even knowing that they can study how you were clever.
Even a superintelligent AI might not be able to replace lawyers unless it is also dynamically going out into the world: investigating new legal theory, researching old legal theory, socializing with the powers that be to ensure they accept its approach, and carefully curating clients who can take advantage of the results.