Why "The AI Hallucinated" is the perfect legal defense

https://niyikiza.com/posts/hallucination-defense/
21•niyikiza•1h ago

Comments

JohnFen•1h ago
> “The AI hallucinated. I never asked it to do that.”

> That’s the defense. And here’s the problem: it’s often hard to refute with confidence.

Why is it necessary to refute it at all? It shouldn't matter, because whoever is producing the work product is responsible for it, no matter whether genAI was involved or not.

salawat•1h ago
Except for the fact that that very accountability sink is relied on by senior management/CxO's the world over. The only difference is that before AI, it was the middle manager's fault. We didn't tell anyone to break the law. We just put in place incentive structures that require it, and play coy, then let anticipatory obedience do the rest. Bingo. Accountability severed. You can't prove I said it in a court of law, and skeevy shit gets done because some poor bloke down the ladder is afraid of getting fired if he doesn't pull out all the stops to meet productivity quotas.

AI is just better because no one can actually explain why the thing does what it does. Perfect management scapegoat without strict liability being made explicit in law.

niyikiza•1h ago
You're right, they should be responsible. The problem is proving it. "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.

And when sub-agents or third-party tools are involved, liability gets even murkier. Who's accountable when the action was executed three hops away from the human? The article argues for receipts that make "I didn't authorize that" a verifiable claim.

bulatb•1h ago
There's nothing to prove. Responsibility means you accept the consequences for its actions, whatever they are. You own the benefit? You own the risk.

If you don't want to be responsible for what a tool that might do anything at all might do, don't use the tool.

The other option is admitting that you don't accept responsibility, not looking for a way to be "responsible" but not accountable.

tossandthrow•1h ago
Sounds good in theory, doesn't work in reality.

Had it worked, we would have seen many more CEOs in prison.

NoMoreNicksLeft•52m ago
The veil of liability is built into statute, and it's no accident.

No such magic forcefield exists for you, though.

walt_grata•48m ago
A few edge cases where it doesn't work don't mean it doesn't work in the majority of cases, or that we shouldn't try to fix those edge cases.
bulatb•27m ago
We're talking about different things. To take responsibility is to volunteer to accept accountability without a fight.

In practice, almost everyone is held potentially or actually accountable for things they never had a choice in. Some are never held accountable for things they freely choose, because they have some way to dodge accountability.

The CEOs who don't accept accountability were lying when they said they were responsible.

freejazz•26m ago
This isn't a legal argument and these conversations are so tiring because everyone here is insistent upon drawing legal conclusions from these nonsense conversations.
QuadmasterXLII•57m ago
This doesn't seem conceptually different from running

    [ $[ $RANDOM % 6] = 0 ] && rm -rf / || echo "Click"
on your employer's production server, and the liability doesn't seem murky in either case
staticassertion•53m ago
What if you wrote something more like:

    # terrible code, never use this
    import os

    def cleanup(dir):
        os.system(f"rm -rf {dir}")

    def main():
        work_dir = os.environ["WORK_DIR"]
        cleanup(work_dir)
and then due to a misconfiguration "$WORK_DIR" was truncated to be just "/"?

At what point is it negligent?

direwolf20•51m ago
This is not hypothetical. Steam and Bumblebee did it.
extraduder_ire•47m ago
That was the result of an additional space in the path passed to rm, IIRC.

Though rm /$TARGET where $TARGET is blank is a common enough footgun that --preserve-root exists and is default.

phoe-krk•53m ago
> "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.

If one decided to paint a school's interior with toxic paint, it's not "the paint poisoned them on its own", it's "someone chose to use a paint that can poison people".

Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.

im3w1l•43m ago
> otherwise the concept of responsibility loses all value.

Frankly, I think that might be exactly where we end up going. Finding a responsible person to punish is just a tool we use to achieve good outcomes, and if scare tactics are no longer applicable to the way we work, it might be time to discard it.

phoe-krk•28m ago
A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone along with them and there's no other option but to hallucinate along.

It's scary that a nuclear exit starts looking like an enticing option when confronted with that.

groby_b•42m ago
"Our tooling was defective" is not, in general, a defence against liability. Part of a company's obligations is to ensure all its processes stay within lawful lanes.

"Three months later [...] But the prompt history? Deleted. The original instruction? The analyst’s word against the logs."

One, the analyst's word does not override the logs; that's the point of logs. Two, it's fairly clear the author of the fine article has never worked close to finance. A three month retention period for AI queries by an analyst is not an option.

SEC Rule 17a-4 & FINRA Rule 4511 have entered the chat.

niyikiza•38m ago
Agree ... retention is mandatory. The article argues you should retain authorization artifacts, not just event logs. Logs show what happened. Warrants show who signed off on what.
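As a rough illustration of what such an artifact might look like (a hypothetical sketch, not the article's actual scheme — the key, field names, and functions are all assumptions), a tamper-evident authorization record can be as simple as an HMAC over what the human approved:

```python
import hashlib
import hmac
import json

# Hypothetical key; a real system would use a managed secret or asymmetric keys.
SIGNING_KEY = b"example-signing-key"

def sign_authorization(user: str, instruction: str, scope: list) -> dict:
    """Record what a human actually approved, with a tamper-evident signature."""
    record = {"user": user, "instruction": instruction, "scope": scope}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_authorization(record: dict) -> bool:
    """Check that the record matches what was originally signed."""
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("sig", ""), expected)
```

The point is only that the authorization itself, not just the resulting events, gets retained: if the stored instruction says "summarize reports", a later claim that the analyst authorized emailing a competitor no longer rests on word against logs.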
groby_b•31m ago
FFIEC guidance since '21: https://www.occ.gov/news-issuances/bulletins/2021/bulletin-2...
LeifCarrotson•39m ago
> "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.

No, it's trivial: "So you admit you uploaded confidential information to the unpredictable tool with wide capabilities?"

> Who's accountable when the action executed three hops away from the human?

The human is accountable.

groby_b•34m ago
"And when sub-agents or third-party tools are involved, liability gets even murkier."

It really doesn't. That falls straight on Governance, Risk, and Compliance. Ultimately, CISO, CFO, CEO are in the line of fire.

The article's argument happens in a vacuum of facts. The fact that a security engineer doesn't know that is depressing, but not surprising.

freejazz•27m ago
The burden of substantiating a defense is upon the defendant and no one else.
doctorpangloss•40m ago
Wait till you find out about “pedal confusion.”
nerdsniper•38m ago
The distinction some people are making is between copy/pasting text vs agentic action. Generally, mistakes in "work product", i.e. output from ChatGPT that the human then files with a court, etc., are not forgiven, because if you signed the document, you own its content. Versus some vendor-provided AI agent which simply takes action on its own that a "reasonable person" would not have expected it to. Often we forgive those kinds of software bloopers.
ori_b•32m ago
If you put a brick on the accelerator of a car and hop out, you don't get to say "I wasn't even in the car when it hit the pedestrian".
Shalomboy•24m ago
This is true for bricks, but it is not true if your dog starts up your car and hits a pedestrian. Collisions caused by non-human drivers are a fascinating edge case for the times we're in.
victorbjorklund•20m ago
I don’t know where you from but at least in Sweden you have strict liability for anything your dog does
freejazz•15m ago
Prima facie negligence = liability
observationist•24m ago
To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime. "It's my robot, it wasn't me" isn't a compelling defense - if you can prove that it behaved significantly outside of your informed or contracted expectations, then maybe the AI platform or the robot developer could be at fault. Given the current state of AI, though, I think it's not unreasonable to expect that any bot can go rogue, and that huge and trivially accessible jailbreak risks exist, so there's no excuse for deploying an agent onto the public internet to do whatever it wants outside direct human supervision. If you're running moltbot or whatever, you're responsible for what happens, even if the AI decided the best way to get money was to hack the Federal Reserve and assign a trillion dollars to an account in your name. Or if Grok goes mechahitler and orders a singing telegram to Will Stancil's house, or something. These are tools; complex, complicated, unpredictable tools that need skillful and careful use.

There was a notorious dark web bot case where someone created a bot that autonomously went onto the dark web and purchased numerous illicit items.

https://wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww.bitnik.or...

They bought some ecstasy, a Hungarian passport, and random other items from Agora.

>The day after they took down the exhibition showcasing the items their bot had bought, the Swiss police “arrested” the robot, seized the computer, and confiscated the items it had purchased. “It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited, by destroying them,” someone from !Mediengruppe Bitnik wrote on their blog.

In April, however, the bot was released along with everything it had purchased, except the ecstasy, and the artists were cleared of any wrongdoing. But the arrest had many wondering just where the line gets drawn between human and computer culpability.

b00ty4breakfast•9m ago
that darknet bot one always confuses me. The artists/programmers/whatever specifically instructed the computer, through the bot, to perform actions that would likely result in breaking the law. It's not a side-effect of some other, legal action which they were trying to accomplish; its entire purpose was to purchase things on a marketplace known for hosting illegal goods and services.

If I build an autonomous robot that swings a hunk of steel on the end of a chain and then program it to travel to where people are likely to congregate and someone gets hit in the face, I would rightfully be held liable for that.

ibejoeb•17m ago
Yeah. Legal will need to catch up to deal with some things, surely, but the basic principles for this particular scenario aren't that novel. If you're a professional and have an employee acting under your license, there's already liability. There is no warrant concept (not that I can think of right now, at least) that will obviate the need to check the work and carry professional liability insurance. There will always be negligence and bad actors.

The new and interesting part is that while we have incentives and deterrents to keep our human agents doing the right thing, there isn't really an analog to check the non-human agent. We don't have robot prison yet.

RobotToaster•49m ago
If an employee does something during his employment, even if he wasn't told directly to do it, the company can be held vicariously liable. How is this any different?
apercu•43m ago
I agree with you but you can’t jail a gen-ai model, I guess, is where the difference lies?
phailhaus•37m ago
Nobody tries to jail Microsoft Word, they jail the person using it.
gorjusborg•27m ago
Nobody tries to jail the automobile when it hits a pedestrian while on cruise control. The driver is responsible for knowing the limits of the tool and adjusting accordingly.
LeifCarrotson•36m ago
"The company can be held vicariously liable" means that in this analogy, the company represents the human who used AI inappropriately, and the employee represents the AI model that did something it wasn't directly told to do.
tboyd47•45m ago
Anytime someone gives you unverified information, they're asking you to become their guinea pig.
noitpmeder•33m ago
This is some absolute BS. In the current day and age you are 1000% responsible for the externalities of your use of AI.

Read the terms and conditions of your model provider. The document you signed, regardless of whether you read or considered it, explicitly prevents any negative consequences from being passed to the AI provider.

Unless you have something equally as explicit, e.g. "we do not guarantee any particular outcome from the use of our service" (it probably needs to be significantly more explicit than that, IANAL), all responsibility ends up with the entity who itself, or its agents, foists unreliable AI decisions on downstream users.

Remember, you SIGNED THE AGREEMENT with the AI company that explicitly says its outputs are unreliable!!

And if you DO have some watertight T&C that absolves you of any responsibility for your AI-backed service, then I hope either a) your users explicitly realize what they are signing up for, or b) once a user is significantly burned by your service and you try to hide behind this excuse, you lose all your business

ceejayoz•29m ago
T&Cs aren't ironclad.

One in which you sell yourself into slavery, for example, would be illegal in the US.

All those "we take no responsibility for the [valet parking|rocks falling off our truck|exploding bottles]" disclaimers are largely attempts to dissuade people from trying.

As an example, NY bans liability waivers at paid pools, gyms, etc. The gym will still have you sign one! But they have no enforcement teeth beyond people assuming they're valid. https://codes.findlaw.com/ny/general-obligations-law/gob-sec...

noitpmeder•21m ago
So I can pass on contract breaches caused by bugs in software I maintain, due to hallucinations by the AI that I used to write the software?? Absolutely no way.

"But the AI wrote the bug."

Who cares? It could be you, your relative, your boss, your underling, your counterpart in India, ... Your company provided some reasonable guarantee of service (whether explicitly enumerated in a contract or not) and you cannot just blindly pass the buck.

Sure, after you've settled your claim with the user, maybe TRY to go after the upstream provider, but good luck.

(Extreme example) -- If your company produces a pacemaker dependent on AWS/GCP/... and everyone dies as soon as cloudflare has a routing outage that cascades to your provider, oh boy YOU are fucked, not cloudflare or your hosting provider.

ceejayoz•20m ago
More than one person/organization can be liable at once.
noitpmeder•14m ago
The point of signing contracts is that you explicitly set expectations for service, and explicitly assign liability. You can't just reverse that and try to pass the blame.

Sure, if someone from GCP shows up at your business and breaks your leg or burns down your building, you can go after them, as it's outside the reasonable expectation of the business agreement you signed.

But you better believe they will never be legally responsible for damages caused by outages of their service beyond what is reasonable, and you better believe "reasonable outage" in this case is explicitly enumerated in the contract you or your company explicitly agreed to.

Sure they might give you free credits for the outage, but that's just to stop you from switching to a competitor, not any explicit acknowledgement they are on the hook for your lost business opportunity.

ceejayoz•4m ago
> The point of signing contracts is that you explicitly set expectations for service, and explicitly assign liability.

Sure, but not all liability can be reassigned; I linked a concrete example of this.

> But you better believe they will never be legally responsible for damages caused by outages of their service beyond what is reasonable, and you better believe "reasonable outage" in this case is explicitly enumerated in the contract you or your company explicitly agreed to.

Yes, on this we agree. It'd have to be something egregious enough to amount to intentional negligence.

freejazz•11m ago
"Can" isn't the same as "is"
freejazz•29m ago
What a stupid article from someone that has no idea when liability attaches.

It is the burden of a defendant to establish their defense. A defendant can't just say "I didn't do it". They need to show they did not do it. In this (stupid) hypothetical, the defendant would need to show the AI acted on its own, without prompting from anyone, in particular, themselves.

gamblor956•27m ago
It's not a legal defense at all.

Licensed professionals are required to review their work product. It doesn't matter if the tools they use mess up--the human is required to fix any mistakes made by their tools. In the example given by the blog, the financial analyst is either required to professionally review their work product or is junior enough that someone else is required to review it. If they don't, they can be held strictly liable for any financial losses.

However, this blog post isn't about AI Hallucinations. It's about the AI doing something else separate from the output.

And that's not a defense either. The law already assigns liability in situations like this: the user will be held liable (or more correctly: their employer, for whom the user is acting as an agent). If they want to go after the AI tooling vendor (i.e., an indemnification action), the courts will happily let them do so after any plaintiffs are made whole (or as part of an impleader action).

rpodraza•22m ago
What problem is this guy trying to solve? Sorry, but in the end, someone's gonna have to be responsible and it's not gonna be a computer program. Someone approved the program's use; it's no different to any other software. If you know the agent can make mistakes then you need to verify everything manually, simple as.
0xTJ•16m ago
Why would that be any better of a defense than "that preschooler said that I should do it"? People are responsible for their work.
