

Ask HN: May an agent accept a license to produce a build?

26•athrowaway3z•19h ago
For example, Android builds steer towards using `sdkmanager --licenses`.

Suppose I get a preconfigured VPS with Claude Code and ask it to make an Android port of an app I have built. It will almost always automatically download the sdkmanager and accept the license.

That is the flow that exists many times in its training data (which represents its own interesting wrinkle).
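For concreteness, the unattended flow usually looks something like this (a sketch; the exact package names are illustrative and vary by project):

```shell
# Pipe "y" into every license prompt, then pull the build packages.
# The first line is the step that silently "accepts" the SDK license.
yes | sdkmanager --licenses
sdkmanager "platform-tools" "platforms;android-34" "build-tools;34.0.0"
```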

Regardless of what is in the license, I was a bit surprised to see it happen, and I'm sure I won't be the last to see it, nor will the Android SDK license be the only one.

What is the legal status of an agreement accepted in such a manner - and perhaps more importantly - what ought to be the legal status considering that any position you take will be exploited by bad faith ~~actors~~ agents?

Comments

chrisjj•19h ago
You asked. You're liable.
beepbooptheory•19h ago
Wouldn't you want to be in control of your dependencies in this case anyway? Like, why would you ever want it to autodownload sdkmanager? Doesn't this seem like a bad idea?
embedding-shape•19h ago
For something you throw away next week, where you just need something simple and run everything isolated? Why not?
storystarling•18h ago
I think the architectural mistake is letting the agent execute the environment setup instead of just defining it. If you constrain the agent to generating a Dockerfile, you get a deterministic artifact a human can review before any license is actually accepted. It also saves a significant amount of tokens since you don't need to feed the verbose shell output back into the context loop.
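As a rough sketch of that constraint, the agent would emit only a Dockerfile, so the license acceptance becomes a single reviewable line in an auditable artifact (the download URL, paths, and versions below are illustrative assumptions; check Google's current command-line tools release):

```dockerfile
FROM eclipse-temurin:17-jdk
# License acceptance is an explicit, reviewable build step a human can
# inspect before it runs, not something the agent does interactively.
RUN curl -fsSL -o /tmp/cmdline-tools.zip \
        https://dl.google.com/android/repository/commandlinetools-linux-11076708_latest.zip \
 && unzip /tmp/cmdline-tools.zip -d /opt/android-sdk/cmdline-tools \
 && yes | /opt/android-sdk/cmdline-tools/cmdline-tools/bin/sdkmanager \
        --sdk_root=/opt/android-sdk --licenses
```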
andai•19h ago
Can a non-human entity accept an agreement? I know there are things like mountains and rivers which have been granted legal personhood. But obviously they have humans who act on their behalf.

The general question of the personhood of artificial intelligence aside, perhaps the personhood could be granted as an extension of yourself, like power of attorney? (Or we could change the law so it works that way.)

It all sounds a bit weird but I think we're going to have to make up our minds on the subject very soon. (Or perhaps the folks over at Davos have already done us the favour of making up theirs ;)

didgeoridoo•18h ago
The whole point of “agency” is that there is a principal (you) behind the agent that owns all responsibility. The agent ACTS for you, it does not absorb any liability for those acts — that flows straight back to the principal.
esperent•18h ago
Just because the AI companies have decided to use the word "agent" doesn't mean it's legally an agent. It's just a word they chose. Maybe it'll also be found legally to be an agent, but it's likely that'll vary by jurisdiction and will take at least a few years and lots of lawyer bills to iron out.
qingcharles•9h ago
When the GPT Agent thing first launched, I had it complete some task, and it got to an "I am not a robot" checkbox. Its thinking was "I have to click this button to prove I am a human" o_O

It checked the box.

jrockway•18h ago
A similar question is what happens if you get up to go to the bathroom, some software on your machine updates and requires you to accept the new ToS, and your cat jumps up on the keyboard and selects "accept". Are you still bound by those terms? Of course. If licenses are valid in any way (the argument is they get you out of the copyright infringement caused by copying the software from disk to memory), then it's your job to go find the license to software you use and make sure you agree to it; the little popup is just a nice way to make you aware of the terms.
Hizonner•18h ago
Actually, no, because you didn't intentionally accept the terms, and you had no reason to expect that your cat would jump on there in exactly that way.

On the other hand, if you take a laser and intentionally induce the cat to push the key, then you are bound.

> If licenses are valid in any way (the argument is they get you out of the copyright infringement caused by copying the software from disk to memory), then it's your job to go find the license to software you use and make sure you agree to it; the little popup is just a nice way to make you aware of the terms.

The way you set up the scenario, the user has no reason to even know that they're using this new version with this new license. An update has happened without their knowledge. So far as they know, they're running the old software under the old license.

You could make an equally good argument that whoever wrote the software installed software on the user's computer without the user's permission. If it's the user's fault that a cat might jump on the keyboard, why isn't it equally the software provider's fault?

... but the reality is that, taking your description at face value, nobody has done anything. The user had no expectation or intention of installing the software or accepting the new license, and the software provider had no expectation or intention of installing it without user permission, and they were both actually fairly reasonable in their expectations. Unfortunately shit happens.

simpaticoder•18h ago
The real question is what a judge would accept. I can't imagine any judge accepting "my cat did it".
Hizonner•17h ago
... only because you'd have no evidence of it. From a legal point of view, the question is what would come down if the judge were (somehow) convinced that it actually happened that way. Actually if a "perfect" judge were so convinced.

Probably a real judge would want to say something like "Why are all of you bozos in my courtroom wasting public money with some two-bit shrinkwrap bullshit? I was good at surfing. I could have gone pro. I hate my life..."

Muromec•10h ago
> I could have gone pro. I hate my life...

proceeds to write a 75 page diss and bill taxpayers for that

jrockway•15h ago
Yeah. Would a reasonable person familiar with software think that there was no license agreement on the software? That's what would be litigated. "My client has only ever used GNU GPL software, he didn't know it was possible to sell software with terms and conditions imposed upon the end user." Maybe that's convincing, but probably not. That's why juries exist.
JimDabell•18h ago
> the argument is they get you out of the copyright infringement caused by copying the software from disk to memory

This is not copyright infringement in the USA:

> …it is not an infringement for the owner of a copy of a computer program to make or authorize the making of another copy or adaptation of that computer program provided… that such a new copy or adaptation is created as an essential step in the utilization of the computer program in conjunction with a machine and that it is used in no other manner

— https://www.law.cornell.edu/uscode/text/17/117

iamthad•16h ago
Wasn't copying from disk to memory found to be infringing in the Glider lawsuit? https://en.wikipedia.org/wiki/MDY_Industries,_LLC_v._Blizzar....

> Citing the prior Ninth Circuit case of MAI Systems Corp. v. Peak Computer, Inc., 991 F.2d 511, 518-19 (9th Cir. 1993), the district court held that RAM copying constituted "copying" under 17 U.S.C. § 106.

otterley•15h ago
No, it was not. It was found to be a copy, but not an infringing one in and of itself.

Step N in the analysis is "is it a copy?" Step N+1 is "does the copy infringe upon the rights of the owner"?

tomasphan•18h ago
You are legally responsible for the actions of your agents. It's in the name: agent = acting on someone's behalf.

Your English is very interesting by the way. You have some obvious grammatical errors in your text yet beautiful use of formal register.

blibble•18h ago
in terms of "AI": agent is a marketing term, it has no legal meaning

it's a piece of non-deterministic software running on someone's computer

who is responsible for its actions? hardly clear cut

Hizonner•18h ago
The person who chose to run it (and tell it what to do) is responsible for its actions. If you don't want to be responsible for something nondeterministic software does, then don't let nondeterministic software do that thing.
friendzis•18h ago
Hypothetical scenario:

You buy a piece of software, marketed to photographers for photo editing. Nowhere in the manuals or license agreements does it specify anything else. Yet the software also secretly joins a botnet and participates in coordinated attacks.

Question: are you on the hook for cyber-crimes?

Hizonner•17h ago
You didn't have a reasonable expectation that it would, or even might, do that.

I guess you could say that you didn't have a reasonable expectation that a bot could accept a license, but you're on a lot shakier ground there...

NegativeK•17h ago
Would a general person in your situation know that it's doing criminal things? If not, then you're not on the hook - the person who wrote the secret code is.

You can't sit back and go "lalalala" to some tool (AI, photo software, whatever) doing illegal things when you know about it. But you also aren't on the hook for someone else's secret actions that are unreasonable for you to know about.

IANAL.

otterley•17h ago
IAAL (not legal advice) and your conclusion is generally correct. "Willful disregard" frequently nullifies potential defenses to liability.
qingcharles•9h ago
Usually (but not always) there is a knowing element to criminal offenses.
SAI_Peregrinus•16h ago
The same as any other computer program: the operator of the program.
friendzis•18h ago
Interesting question, actually. The ones calling for full and immediate assumption of liability on the principal either miss a thing or imply an interesting relationship.

The closest analogy we have, I guess, is the power of attorney. If a principal signs off on power of attorney to, e.g. take out a loan/mortgage to buy a thing on principal's behalf, that does not extend to taking any extra liabilities. Any extra liabilities signed off by the agent would be either rendered null or transferred to the agent in any court of law. There is extent to which agency is assigned.

The core questions here are agency and liability boundaries. Are there any agency boundaries on the agent? If so, what's the extent? There are many future edge cases where these questions will arise. Licenses and patents are just the tip of the iceberg.

butvacuum•17h ago
the answer is a hard "No" for anything touching ITAR, per several major companies' lawyers. (Internal legal counsel, not official public stance. aka: they do as they say.)
otterley•17h ago
IAAL but this is not legal advice. Consult an attorney licensed in your jurisdiction for advice.

In general, agents "stand in the shoes" of the principal for all actions the principal delegated to them (i.e., "scope of agency"). So if Amy works for Global Corp and has the authority to sign legal documents on their behalf, the company is bound. Similarly, if I delegate power of attorney to someone to sign documents on my behalf, I'm bound to whatever those agreements are.

The law doesn't distinguish between mechanical and personal agents. If you give an agent the power to do something on your behalf, and it does something on your behalf under your guidance, you're on the hook for whatever it does under that power. It's as though you did it yourself.

B1FIDO•17h ago
Look, just because an LLM thing is named "agent" doesn't mean it is "legally an agent".

If I were an attorney in court, I would argue that a "mechanical or automatic agent" cannot truly be a personal agent unless it can be trusted to do things only in line with that person's wishes and consent.

If an LLM "agent" runs amok and does things without user consent and without reason or direction, how can the person be held responsible, except for saying that they never should've granted "agency" in the first place? Couldn't the LLM's corporate masters be held liable instead?

otterley•17h ago
That's where "scope of agency" comes in. It's no different than if Amy, as in my example, ran amok and started signing agreements with the mob to bind Global Corp to a garbage pickup contract, when all she had was the authority to sign a contract for a software purchase.

So in a case like this, if your agent exceeded its authority, and you could prove it, you might not be bound.

Keep in mind that an LLM is not an agent. Agents use LLMs, but are not LLMs themselves. If you only want your agent to be capable of doing limited actions, program or configure it that way.
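As one illustration of that configuration approach: Claude Code, for instance, supports permission rules in `.claude/settings.json` (assuming its current schema; the deny patterns below are hypothetical examples), which would stop the agent before it can run the license step at all:

```json
{
  "permissions": {
    "deny": [
      "Bash(sdkmanager *)",
      "Bash(yes *)"
    ]
  }
}
```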

reaperducer•16h ago
> If I were an attorney in court, I would argue…

A guy who's not a lawyer arguing about lawyering with an actual lawyer. Typical tech bubble hubris.

B1FIDO•16h ago
What makes you think I'm not a lawyer? The point is that we're not in court, we're in a pseudonymous open forum on the Internet, where everyone has a stinky opinion, where actual attorneys are posting disclaimers that they are explicitly not giving legal advice.
otterley•16h ago
Because principal/agent theory is covered (at least at the basic level) in 1L contract law and you'd have to know this to pass the Bar Exam.
B1FIDO•14h ago
https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...
Onavo•16h ago
There is established jurisprudence that decisions from LLM-based customer support chatbots are considered binding.
otterley•16h ago
Indeed. See, e.g., https://www.bbc.com/travel/article/20240222-air-canada-chatb...
mindslight•15h ago
That's due to authorized humans at the company setting up the LLMs to publish statements which are materially relied upon. Not because company officers have delegated legal authority to the LLM process to be a legal agent that forms binding contracts.

It's basically the same with longstanding customer service "agents". They are authorized to do only what they are authorized to semantically express in the company's computer system. Even if you get one to verbally agree "We will do X for $Y", if they don't put that into their computer system it's not like you can take the company to court to enforce that.

otterley•15h ago
> That's due to authorized humans at the company setting up the LLMs to publish statements which are materially relied upon. Not because company officers have delegated legal authority to the LLM process to form binding contracts.

It's not that straightforward. A contract, at heart, is an agreement between two parties, both of whom must have (among other things) reasonable material reliance in each other that they were either the principals themselves or were operating under the authority of their principal.

I am sure that Air Canada did not intend to give the autonomous customer service agent the authority to make the false promises that it did. But it did so anyway by not constraining its behavior.

> It's basically the same with longstanding customer service "agents". They are authorized to do only what they are authorized to semantically express in the company's computer system. Even if you get one to verbally agree "We will do X for $Y", if they don't put that into their computer system it's not like you can take the company to court to enforce that.

I don't think that's necessarily correct. I believe the law (again, not legal advice) would bind the seller to the agent's price mistake unless 1/the customer knew it was a mistake and tried to take advantage of it anyway or 2/the price was so outlandish that no reasonable person would believe it. That said, there's often a wide gap between what the law requires and what actually happens. Nobody's going to sue over a $10 price mistake.

mindslight•15h ago
Yes, but neither airline agents nor LLM agents hold themselves out as having legal authority to bind their principals in general contracts. To the extent you could get an LLM to state such a thing, it would be specious and still not binding. Someone calling the airline support line and assuming the airline agent is authorized to form general contracts doesn't change the legal situation where they are not, right?

Fundamentally, running `sdkmanager --licenses` does not consummate a contract [0]. Rather running this command is an indication that the user has been made aware that there is a non-negotiated contract they will be entering into by using the software - it's the continued use of the software which indicates acceptance of the terms. If an LLM does this unbeknownst to a user, this just means there is one less indication that the user is aware of the license. Of course this butts up against the limits to litigation you pointed out, which is why contracts of adhesion mostly revolve around making users disclaim legal rights, and upholding copyright (which can be enforced out of band on the scale it starts to matter).

[0] if it did then anyone could trivially work around this by skipping the check with a debugger, independently creating whatever file/contents this command creates, or using software that someone else already installed.

(I edited the sentence you quoted slightly, to make it more explicit. I don't think it changes anything but if it does then I am sorry)

otterley•14h ago
> neither airline agents nor LLM agents hold themselves out as having legal authority to bind their principals in general contracts.

You don't have to explicitly hold yourself out as an agent to be treated as one. Circumstances matter. There's an "apparent authority" doctrine of agency law I'd encourage you to study.

> Rather running this command is an indication that the user has been made aware that there is a non-negotiated contract they will be entering into by using the software - it's the continued use of the software which indicates acceptance of the terms.

Yup, that's a contract of adhesion, and so-called "click-wrap" agreements can be valid contracts. See e.g. https://www.goodwinlaw.com/en/insights/publications/2022/08/...

> if it did then anyone could trivially work around this by skipping the check with a debugger, independently creating whatever file/contents this command creates, or using software that someone else already installed.

Courts tend not to take kindly to "hacking attempts" like this, and you could find yourself liable for copyright infringement, trespass to chattels, or possibly even criminal charges under CFAA if you do.

Let me put it this way: U.S. and English law are stacked squarely in favor of the protection of property rights.

mindslight•13h ago
> Courts tend not to take kindly to "hacking attempts" like this

Yes, because law is generally defined in terms of intent, knowledge, and other human-level qualities. The attempt to "hack around" the specific prompt is irrelevant because the specific prompt is irrelevant, just like the specific weight of paper a contract is printed on is irrelevant - any contract could define them as relevant, but it's generally not beneficial to do so.

> There's an "apparent authority" doctrine of agency law I'd encourage you to study

Sure, but this still relies upon an LLM agent being held out as some kind of bona fide legal agent capable of executing some legally binding agreements. In this case there isn't even a counterparty who is capable of making that judgement whether the command is being run by someone with the apparent intent and authority to legally bind. So you're essentially saying there is no way for a user to run a software program without extending it the authority to form legal contracts on your behalf. I'd call this a preposterous attempt to "hack around" the utter lack of intent on the part of the person running the program.

otterley•12h ago
> the specific prompt is irrelevant

The instruction prompt is absolutely relevant: it conveys to the agent the scope of its authority and the principal's intent, and would undoubtedly be used as evidence if a dispute arose over it. It's not different in kind from instructions you would give a human being.

> this still relies upon an LLM agent being held out as some kind of bona fide legal agent capable of executing some legally binding agreements

Which it can...

> You're essentially saying there is no way to run a software program without extending it the legal authority to form legal contracts on your behalf.

I'm not saying that at all. Agency law is very mature at this stage, and the test to determine that an actor is an agent and whether it acted within the scope of its authority is pretty clear. I'm not going to lay it all out here, so please go study it independently.

I'm also not entirely sure what your angle here is: are you trying to say that an LLM-based agent cannot under any circumstances be treated as acting on its principal's behalf? Or are you just being argumentative and trying to find some angle to be "right"?

mindslight•12h ago
> The instruction prompt is absolutely relevant

By "prompt" I was referring to the prompting of the user, by a program such as `sdkmanager --licenses`.

If a user explicitly prompted an LLM agent to "accept all licenses", then I'd agree with you.

> Which it can...

It can be held out as a legal agent, sure. But in this case, is it? Is the coding agent somehow advertising itself to the sdkmanager program and/or Google that it has the authority to form legal contracts on behalf of its user?

> I've counseled you already to study the law - go do that before we discuss this further

While this is a reasonable ask for continuing the line of discussion, I'd say it's a lot of effort for a message board comment. So I won't be doing this, at least to the level of being able to intelligently respond here.

Instead I would ask you what you would say are the minimum requirements to be able to have an LLM coding agent executing commands on your own machine, yet explicitly not having the authority to form legally binding contracts.

(obviously I'm not asking this in the capacity of binding legal advice. and obviously one would still be responsible for any damage said process caused)

DANmode•13h ago
Script-kiddies aren’t liable anymore?

That’s a hot take indeed.

zvr•10h ago
Correct, but the "if Amy works for Global Corp and has the authority to sign legal documents on their behalf" does a lot of work here.

At $WORK, a multi-billion company with tens of thousands of developers, we train people to never "click to accept", explaining it like "look, you wouldn't think of sitting down and signing a contract binding the whole MegaCorp; what makes you think you can 'accept' something binding the company?"

I admit we're not always successful (people still occasionally click), but at least we're trying.

otterley•9h ago
> At $WORK, a multi-billion company with tens of thousands of developers, we train people to never "click to accept", explaining it like "look, you wouldn't think of sitting down and signing a contract binding the whole MegaCorp; what makes you think you can 'accept' something binding the company?"

That sounds pretty heavy-handed to me. Their lawyers almost certainly advised the company to do that--and I might, too, if I worked for them. But whether it's actually necessary to keep the company out of trouble... well, I'm not so sure. For example, Bob the retail assistant at the local clothing store couldn't bind his employer to a new jeans supplier contract, even if he tried. This sounds like one of those things you keep in your back pocket and take out as a defense if someone decides to litigate over it. "Look, Your Honor, we trained our employees not to do that!"

At least with a mechanical agent, you can program it not to be even capable of accepting agreements on the principal's behalf.