Read the terms and conditions of your model provider. The document you signed, regardless of whether you read or considered it, explicitly prevents any negative consequences from being passed on to the AI provider.
Unless you have something equally explicit, e.g. "we do not guarantee any particular outcome from the use of our service" (it probably needs to be significantly more explicit than that, IANAL), all responsibility ends up with the entity that, itself or through its agents, foists unreliable AI decisions on downstream users.
Remember, you SIGNED THE AGREEMENT with the AI company that explicitly says its outputs are unreliable!!
And if you DO have some watertight T&C that absolves you of any responsibility for your AI-backed service, then I hope either a) your users explicitly realize what they are signing up for, or b) once a user is significantly burned by your service and you try to hide behind this excuse, you lose all your business.
One in which you sell yourself into slavery, for example, would be illegal in the US.
All those "we take no responsibility for the [valet parking|rocks falling off our truck|exploding bottles]" disclaimers are largely attempts to dissuade people from trying.
As an example, NY bans liability waivers at paid pools, gyms, etc. The gym will still have you sign one! But they have no enforcement teeth beyond people assuming they're valid. https://codes.findlaw.com/ny/general-obligations-law/gob-sec...
"But the AI wrote the bug."
Who cares? It could be you, your relative, your boss, your underling, your counterpart in India, ... Your company provided some reasonable guarantee of service (whether explicitly enumerated in a contract or not) and you cannot just blindly pass the buck.
Sure, after you've settled your claim with the user, maybe TRY to go after the upstream provider, but good luck.
(Extreme example) -- If your company produces a pacemaker dependent on AWS/GCP/... and everyone dies as soon as cloudflare has a routing outage that cascades to your provider, oh boy YOU are fucked, not cloudflare or your hosting provider.
Sure, if someone from GCP shows up at your business and breaks your leg or burns down your building, you can go after them, as it's outside the reasonable expectation of the business agreement you signed.
But you better believe they will never be legally responsible for damages caused by outages of their service beyond what is reasonable, and you better believe "reasonable outage" in this case is explicitly enumerated in the contract you or your company explicitly agreed to.
Sure they might give you free credits for the outage, but that's just to stop you from switching to a competitor, not any explicit acknowledgement they are on the hook for your lost business opportunity.
Sure, but not all liability can be reassigned; I linked a concrete example of this.
> But you better believe they will never be legally responsible for damages caused by outages of their service beyond what is reasonable, and you better believe "reasonable outage" in this case is explicitly enumerated in the contract you or your company explicitly agreed to.
Yes, on this we agree. It'd have to be something egregious enough to amount to intentional negligence.
It is the burden of a defendant to establish their defense. A defendant can't just say "I didn't do it". They need to show they did not do it. In this (stupid) hypothetical, the defendant would need to show the AI acted on its own, without prompting from anyone, themselves in particular.
Licensed professionals are required to review their work product. It doesn't matter if the tools they use mess up--the human is required to fix any mistakes made by their tools. In the example given by the blog, the financial analyst is either required to professionally review their work product or is junior enough that someone else is required to review their work product. If they don't, they can be held strictly liable for any financial losses.
However, this blog post isn't about AI hallucinations. It's about the AI taking actions separate from its output.
And that's not a defense either. The law already assigns liability in situations like this: the user will be held liable (or more correctly: their employer, for whom the user is acting as an agent). If they want to go after the AI tooling vendor (i.e., an indemnification action), the courts will happily let them do so after any plaintiffs are made whole (or as part of an impleader action).
> A computer must never make a management decision, because a computer cannot be held accountable.
JohnFen•2h ago
> That’s the defense. And here’s the problem: it’s often hard to refute with confidence.
Why is it necessary to refute it at all? It shouldn't matter, because whoever is producing the work product is responsible for it, no matter whether genAI was involved or not.
salawat•2h ago
AI is just a better scapegoat because no one can actually explain why the thing does what it does. It's the perfect management scapegoat, as long as strict liability isn't made explicit in law.
pixl97•31m ago
The software world has been very allergic to getting anywhere near a system like that.
niyikiza•2h ago
And when sub-agents or third-party tools are involved, liability gets even murkier. Who's accountable when the action was executed three hops away from the human? The article argues for receipts that make "I didn't authorize that" a verifiable claim.
bulatb•1h ago
If you don't want to be responsible for whatever a tool that might do anything at all ends up doing, don't use the tool.
The other option is admitting that you don't accept responsibility, not looking for a way to be "responsible" but not accountable.
tossandthrow•1h ago
Had it worked, we would have seen many more CEOs in prison.
NoMoreNicksLeft•1h ago
No such magic forcefield exists for you, though.
bulatb•1h ago
In practice, almost everyone is held potentially or actually accountable for things they never had a choice in. Some are never held accountable for things they freely choose, because they have some way to dodge accountability.
The CEOs who don't accept accountability were lying when they said they were responsible.
staticassertion•1h ago
At what point is it negligent?
extraduder_ire•1h ago
Though rm -rf /$TARGET where $TARGET is blank is a common enough footgun that --preserve-root exists and is the default.
phoe-krk•1h ago
If one decided to paint a school's interior with toxic paint, it's not "the paint poisoned them on its own", it's "someone chose to use a paint that can poison people".
Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.
im3w1l•1h ago
Frankly, I think that might be exactly where we end up going. Finding a responsible person to punish is just a tool we use to achieve good outcomes, and if that scare tactic is no longer applicable to the way we work, it might be time to discard it.
phoe-krk•1h ago
It's scary that a nuclear exit starts looking like an enticing option when confronted with that.
groby_b•1h ago
"Three months later [...] But the prompt history? Deleted. The original instruction? The analyst’s word against the logs."
One, the analyst's word does not override the logs; that's the point of logs. Two, it's fairly clear the author of the fine article has never worked close to finance. A three-month retention period for AI queries by an analyst is not an option.
SEC Rule 17a-4 & FINRA Rule 4511 have entered the chat.
LeifCarrotson•1h ago
No, it's trivial: "So you admit you uploaded confidential information to the unpredictable tool with wide capabilities?"
> Who's accountable when the action executed three hops away from the human?
The human is accountable.
pixl97•28m ago
----
A computer can never be held accountable
Therefore a computer must never make a management decision
groby_b•1h ago
It really doesn't. That falls straight on Governance, Risk, and Compliance. Ultimately, the CISO, CFO, and CEO are in the line of fire.
The article's argument happens in a vacuum of facts. The fact that a security engineer doesn't know that is depressing, but not surprising.
observationist•58m ago
There was a notorious dark web bot case where someone created a bot that autonomously went onto the dark web and purchased numerous illicit items.
https://wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww.bitnik.or...
They bought some ecstasy, a Hungarian passport, and random other items from Agora.
>The day after they took down the exhibition showcasing the items their bot had bought, the Swiss police “arrested” the robot, seized the computer, and confiscated the items it had purchased. “It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited, by destroying them,” someone from !Mediengruppe Bitnik wrote on their blog.
In April, however, the bot was released along with everything it had purchased, except the ecstasy, and the artists were cleared of any wrongdoing. But the arrest had many wondering just where the line gets drawn between human and computer culpability.
b00ty4breakfast•43m ago
If I build an autonomous robot that swings a hunk of steel on the end of a chain and then program it to travel to where people are likely to congregate, and someone gets hit in the face, I would rightfully be held liable for that.
niyikiza•10m ago
Yeah, that's exactly the model I think we should adopt for AI agent tool calls as well: cryptographically signed, task-scoped "warrants" that remain traceable even across multi-agent delegation chains.
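Something like this minimal sketch (not from the article; HMAC with a shared key stands in for real public-key signatures, and every name and field here is made up for illustration):

    # Sketch of task-scoped "warrants" for agent tool calls.
    # HMAC is a stand-in for real asymmetric signatures.
    import hmac, hashlib, json, time

    SECRET = b"org-signing-key"  # in practice: a per-principal private key

    def issue_warrant(principal, task, allowed_tools, parent=None, ttl=3600):
        body = {
            "principal": principal,          # who authorized this
            "task": task,                    # what it was authorized for
            "allowed_tools": allowed_tools,  # scope of permitted tool calls
            "parent": parent,                # hash of the delegating warrant, if any
            "expires": time.time() + ttl,
        }
        payload = json.dumps(body, sort_keys=True).encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return {"body": body, "sig": sig}

    def verify(warrant, tool):
        # A tool call is allowed only with a valid signature, an in-scope
        # tool, and an unexpired warrant.
        payload = json.dumps(warrant["body"], sort_keys=True).encode()
        ok_sig = hmac.compare_digest(
            warrant["sig"],
            hmac.new(SECRET, payload, hashlib.sha256).hexdigest(),
        )
        in_scope = tool in warrant["body"]["allowed_tools"]
        unexpired = time.time() < warrant["body"]["expires"]
        return ok_sig and in_scope and unexpired

    def delegate(parent_warrant, sub_agent, subtask, narrowed_tools):
        # Sub-agent gets a narrower warrant pointing back at its parent,
        # so an auditor can walk the whole delegation chain.
        parent_hash = hashlib.sha256(
            json.dumps(parent_warrant, sort_keys=True).encode()
        ).hexdigest()
        return issue_warrant(sub_agent, subtask, narrowed_tools, parent=parent_hash)

    # Usage: an analyst authorizes a task, an agent delegates a narrower one.
    root = issue_warrant("analyst@example.com", "summarize Q3 filings", ["read_docs", "search"])
    sub = delegate(root, "sub-agent-1", "search filings", ["search"])
    print(verify(sub, "search"))        # True: signed, in scope, unexpired
    print(verify(sub, "delete_files"))  # False: "I didn't authorize that" is checkable

The point of the parent hash is that even three hops out, a tool call either carries a warrant chain leading back to a human authorization or it doesn't.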
ibejoeb•51m ago
The new and interesting part is that while we have incentives and deterrents to keep our human agents doing the right thing, there isn't really an analog to check the non-human agent. We don't have robot prison yet.