This is looking like an interesting read the more I get into it. Under the law, one needs a "mental state" to enter into a contract. Whether or not you believe LLMs have one, that is something you would have to argue in court, and it will be interesting to see when that happens.
If your agent drains your bank account or makes a bad purchase, you may want to have a bookmark to this paper to help get your money back.
> This paper argues that the possibility of LLMs going through the motions of agreement proves that Solan’s is the better view of the objective theory of contracts. The law’s purporting to enforce probabilistically generated linguistic sequences wholly untethered from anyone’s mental states is an absurdity—no matter how much the linguistic sequences look like those that human beings with mental states use to enter contracts that the law ought to enforce. For one thing, it’s not clear that the outputs of LLMs in this way mean anything on their own terms. And even if they do, whatever they mean is certainly not the kind of thing any reasonable legal system has normative reason to burn taxpayer dollars “enforcing.”
> To be clear, I do not deny that LLMs can play a useful role in contemporary contract drafting—if there’s one thing these models are good at, it’s generating boilerplate. Executive A can agree with Executive B to bind themselves to the language generated by LLM C (presumably, for interpretive purposes, as attributed to some kind of fictional “reasonable” speaker)—the law can certainly enforce these sorts of agreements. But this is different from the cases I’m talking about—there are actual mental states involved here; two entities with mental states (humans) actually agreed on something. The cases in which legal enforcement is absurd are those where two LLMs wholly disconnected from human mental states (except insofar as they were trained on human-generated corpora) appear to “agree” with one another—technically possible but, as far as I know, not a major component of the contemporary economy.