That's... quite a license term. I'm a big fan of tools that come with no restrictions on their use in their licenses. I think I'll stick with them.
If you compete with a vendor, or give aid and comfort to their competitors, do not expect the vendor to play nice with you, or even keep you on as a customer.
Certainly a mindset befitting Microsoft and Oracle, if I ever saw one.
Don't forget a lot of their appeal is about (real or perceived) liability. If you use Postgres and you fuck up, you fuck up. If you use Oracle and you fuck up, you can blame Oracle and save face.
Besides, lol, who cares how fast a Model T can go when there are much nicer forms of transportation that don't actively hate you
I don’t believe it should be legal, but I see why they would be butt-hurt
source?
https://proact.eu/wp-content/uploads/2020/07/Visual-Basic-En...
It wasn't a blanket prohibition, but a restriction on some parts of the documentation and redistributable components. Definitely was weird to see that in the EULA for a toolchain. This was removed later on, though I forget if it's because they changed their mind or removed the components.
- Contracts can have unenforceable terms that can be declared null and void by a court; any decision not to renew the contract in future would have no bearing on the current one.
- There are plenty of restrictions on when/whether you can turn down business; for example, FRAND contracts or patents don't allow you to choose not to work with a competitor, and so on.
I see no reason why Anthropic can't arbitrarily ban OpenAI, regardless of my opinion on the decision. Anthropic hasn't "patented" access to the Claude API; there are no antitrust concerns that I can see; etc.
And no, it isn't clear to me that this contract term would hold up in court, as Anthropic doesn't have copyright ownership in the AI output. I don't believe you can enforce copyright-related contracts without copyright ownership.
I could be wrong of course, but I find it odd this topic comes up from time to time but apparently nobody has a blog post by a lawyer or similar to share on this issue.
They don't need copyright ownership of the AI output to make an agreement not to use it to train a competing AI model part of the conditions for using their software running on their servers (API access).
There would have to be a law prohibiting that term, either in general or for a company in the specific circumstances Anthropic is in. (The “specific circumstances” thing is seen, e.g., when a term is otherwise permitted but used by a firm that is also a monopoly in a relevant market as a way of either defending or leveraging that monopoly, and thus it becomes illegal in that specific case.)
You are missing the point.
Copyright law and the Copyright Act, not general contract law, govern whether a contract provision relating to AI output can be enforced by Anthropic, and since copyright law says Anthropic has no copyright in the output, Anthropic will not win in court.
It's not different than if Anthropic included a provision saying you won't print out the text of Moby Dick. Anthropic doesn't own copyright on Moby Dick and can't enforce a contract provision related to it.
Like I said I can be convinced I'm wrong based on a legal analysis from a neutral party but you seem to be arguing from first principles.
No, I am disagreeing with the point, because it's completely wrong.
> Copyright law and the copyright act, not general contract law, governs whether a contract provision relating to AI output can be enforced by Anthropic
No, it doesn't. There is no provision of copyright law that limits terms of contracts covering AI outputs.
> It's not different than if Anthropic included a provision saying you won't print out the text of Moby Dick.
This is true, but undermines your point.
> Anthropic doesn't own copyright on Moby Dick and can't enforce a contract provision related to it.
Actually, providers of services that allow you to produce output can enforce provisions prohibiting reproduction of works they don't own the copyright to (and frequently do adopt and enforce rules prohibiting this for things other people own the copyright to).
> Like I said I can be convinced I'm wrong based on a legal analysis from a neutral party but you seem to be arguing from first principles.
You seem to be arguing from first principles that are entirely unrelated to the actual principles, statutory content of, or case law of contracts or copyrights, and I have no idea where they come from, but, sure, believe whatever you want, it doesn't cost me anything.
This isn't how legal reasoning works in a common law system... to discover the answer you usually find the most similar case to the current fact pattern and then compare it to the current issue.
If you are aware of such a case, even colloquially, point me in the right direction. It might be hard to analogize to other cases though, because Anthropic doesn't have a license for most of the training materials that made their model. I've also read you can't contract around a fair use defense.
If I'm wrong, it isn't very helpful to shout "nuh-uh" without further explanation. Give me some search engine keywords and I'll look up whatever you point me towards.
https://perkinscoie.com/insights/blog/does-copyright-law-pre...
It seems courts are split:
"In jurisdictions that follow the Second Circuit's more restrictive approach, plaintiffs may be limited to bringing copyright infringement claims when the scope of license terms or other contractual restrictions on the use of works has been exceeded. Plaintiffs who do not own or control the copyright interest in the licensed work, however, will not be able to bring such claims and may be left without an enforcement mechanism under traditional contracting approaches."
The biggest lesson I learnt from my law degree is that sure you might be legally entitled to it - but you can still be receiving a raw deal and have very little in the way of remedial action.
These unknown companies called Microsoft, Oracle, Salesforce, Apple, Adobe, … et al have all had these controversies at various points.
I wouldn't suggest building on Oracle's property as you drink its milkshake, but the ToS and EULAs don't restrict competition.
Imagine if Oracle was adding a restrictions on what you are allowed to build with Java, that would be a more similar comparison IMO.
E.g., if you make a product that works on multiple databases, you can't show the performance difference between them.
The ToS are not just about "reverse engineering" a competing model, they forbid using the service to develop competing systems at all.
Not sure what Apple lawyers were imagining, but I guess barring Iranian scientists from syncing their iPods with uranium refinery schematics set back their programme for decades.
Not just easy, but fun too!
https://en.wikipedia.org/wiki/End-user_license_agreement#Eur...
It also makes it dangerous to become dependent on these services. What if at some point in the future, your provider decides that something you make competes with something they make, and they cut off your access?
I don't know how companies currently navigate that.
They don't target and analyze specific user or organizations - that would be fairly nefarious.
The only exception would be if there are flags for trust and safety. https://support.anthropic.com/en/articles/8325621-i-would-li...
Unless it's actually some internal Claude API which OpenAI were using with an OpenAI benchmarking tool, this sounds like a hyped-up way for Wired to phrase it.
Almost like: `Woah man, OpenAI HACKED Claude's own AI mainframe until Sonnet slammed down the firewall man!` ;D Seriously though, why phrase API use of Claude as "special developer access"?
I suppose that it's reasonable to disagree on what is reasonable for safety benchmarking, e.g: where you draw a line and say, "hey, that's stealing" vs "they were able to find safety weak spots in their model". I wonder what the best labs are like at efficiently hunting for weak areas!
Funnily enough I think Anthropic have banned a lot of people from their API, myself included - and all I did was see if it could read a letter I got, and they never responded to my query to sort it out! But what does it matter if people can just use OpenRouter?
Isn't that precisely what an API is? Normal users do not use the API; other programs written by developers use it to access Claude from their app. That's like asking why an SDK is phrased as a special kit for developers to build software that works with something they wish to integrate into their app.
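To make the point concrete: the "special developer access" amounts to an ordinary HTTP request. A minimal sketch of building one (the endpoint and header names follow Anthropic's public API docs; the API key is a placeholder and the model id is illustrative):

```python
import json

# An Anthropic Messages API call is just an HTTP POST with a JSON body.
# Endpoint and header names per Anthropic's public docs; key is a placeholder.
API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str, model: str = "claude-3-5-sonnet-20241022"):
    """Return the (headers, body) pair a developer's program would POST."""
    headers = {
        "x-api-key": "YOUR_API_KEY",        # placeholder, not a real key
        "anthropic-version": "2023-06-01",  # required API version header
        "content-type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, payload = build_request("Summarize this letter for me.")
print(json.loads(payload)["messages"][0]["role"])
```

Nothing exotic: any "app" using Claude boils down to requests shaped like this.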
Compare with a sentence like “the elevator has special firefighter buttons” which does not mean that only some special type of firefighter uses the button.
The amount of care the author puts into their phrasing determines whether their point comes across as intended, or not. The average magazine reader can likely figure out that there's no such thing as "special" firefighters with "privileged" access to elevator buttons that other firefighters lack. They may not have the programming knowledge to do likewise with "developer access", even if they are reading a magazine like "Wired".
If you said that to anyone they'd assume there are non standard buttons beyond the normal "call" / "fire" buttons. Special changes the meaning in both sentences.
From the perspective of a non technical reader developer access isn’t normal, it’s special.
The HN audience doesn’t see that. But the phrase isn’t confusing to normal people.
I know people on HN might not understand this, but programmatically using anything is a special fringe activity, even if the manner of programmatic use is normal for such use.
I understand that they were technically "using it to train models", which, given OpenAI's stance, I don't have much sympathy for, but it's not some "special developer hackery" that this is making it sound like.
Isn’t half the schtick of LLMs making software development available for the layman?
It's Software Developer Kit, not Special Developer Kit ;-)
They banned my account completely for violation of ToS and never responded to my query, following my 3 or 4 chats with Claude where I asked for music and sci-fi book recommendations.
Never violated the ToS; the account was created through their UI and used literally a few times.
Well, I don't use them at all except for very rare tests through open router indeed.
Yes, I'm frustrated with Anthropic killing my direct API account for silly reasons, with no response. But actually I really appreciate Anthropic's models for code, their deep safety research with Constitutional AI, interpretability studies etc.
They are certainly guilty of having scaling and customer service issues, and making the wrong call with a faulty moderation system (for you too, and many others it seems like)!
But a lot of serious AI safety research that could literally save all our skin is being done by Anthropic, some of the best.
On OpenAI's API Platform, I am on Tier 5! It's unfortunate Anthropic have acted less commercially savvy than OpenAI (at the time, at least). I have complained on HN and I think on Twitter before about my account to no avail, after emailing before. But yeah, usually I just use them via OpenRouter these days, it's a shame that I must use it for API access.
I get the impression that a lot of OpenAI researchers went to Anthropic, which essentially is the first OpenAI splinter group. I think this is a sign of a serious, more healthy intellectual culture. I'm looking forward to seeing what they do next.
Even more annoying is that I suspect it's an issue linked to Google SSO and IP configurations.
I’m personally a big fan of Anthropic taking a more conservative approach compared to other tech companies that insist it’s not their responsibility - this is just a natural follow-on where we get a lot of false positives.
Having said that, I'm desperate for my account to be unbanned so I can use it again!
Live by the sword, die by the sword.
* copyright law
* trademark law
* defamation law (ChatGPT often reports wrong facts about specific people, products, companies, ... most seriously claiming someone was guilty of murder. Getting ChatGPT to say obviously wrong things about products is trivial)
* contract law (bypassing scraping restrictions they had agreed to as a company beforehand)
* harassment (ChatGPT made pictures depicting specific individuals doing ... well you can guess where this is going. Everything you can imagine. Women, of course. Minors. Politics. Company politics ...)
So far, they seem to have gotten away with everything.
Not sure if you're serious... you think OpenAI should be held responsible for everything their LLM ever said? You can't make a token generator unless the tokens generated always happen to represent factual sentences?
If I told people you are a murderer, for money, I'd expect to be sued and I'd expect to be convicted.
Presumably an AI should know about the trademarks, they are part of the world too. There is no point shielding LLMs from trademarks in the wild. A text editor can also infringe trademarks, depending how you use it. AI is taking its direction from prompts, humans are driving it.
E.g., ignoring that I find that OpenAI, Google, and Anthropic in particular do take harassment and defamation extremely seriously (it takes serious effort to get ChatGPT to write a Bob Saget joke, let alone something seriously illegal), if they were bound by "normal" law it would be a sepulchral dalliance with safety-ism that would probably kill the industry OR just enthrone (probably) Google and Microsoft as the winners forever.
Seriously, make a browser extension that people can turn on and off (no need to be dishonest here), and pay people to upload their AI chats, and possibly all the other content they view.
If Reddit won't let you scrape, pay people to automatically upload the Reddit comments they view normally.
If Claude cuts you off, pay people to automatically upload their Claude conversations.
Am I crazy, am I hastening dystopia?
I think if you're not OpenAI/Anthropic sized (in which case you can do better) you're not going to get much value out of it
It's hard to usefully post-train on wildly varied inputs, and post-training is all most people can afford.
There's too much noise to improve things unless you do a bunch of cleaning and filtering that's also somewhat expensive.
If you constrain the task (for example, use past generations from your own product) you get much further along though.
I've thought about building a Chrome plugin to do something useful for ChatGPT web users doing a task relevant to what my product does, then letting them opt into sharing their logs.
That's probably a bit more tenable for most users since they're getting value, and if your extension can do something like produce prompts for ChatGPT, you'll get data that actually overlaps with what you're doing.
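The opt-in log-sharing piece of that plugin could be tiny. A sketch of a Manifest V3 manifest, where the extension name, match pattern, and script filename are all hypothetical:

```json
{
  "manifest_version": 3,
  "name": "Chat Log Sharer (opt-in)",
  "version": "0.1",
  "description": "Lets users opt in to sharing their chat logs.",
  "permissions": ["storage"],
  "content_scripts": [
    {
      "matches": ["https://chatgpt.com/*"],
      "js": ["capture.js"]
    }
  ]
}
```

The content script would read the visible conversation and upload it only after the user toggles sharing on, which keeps the "no need to be dishonest here" property.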
All the chatbots with free access do that, they pay you by running your arbitrary computations on their servers.
1/ OpenAI's technical staff were using Claude Code (API and not the Max plans).
2/ Anthropic's spokesperson says API access for benchmarking and evals will be available to OpenAI.
3/ OpenAI said it's using the APIs for benchmarking.
I guess model benchmarking is fine, but tool benchmarking is not. OpenAI were trying to see whether their product works better than Claude Code (each with its own proprietary models) on certain benchmarks, and that is something Anthropic revoked. How they caught it is far more remarkable. It's one thing to use Sonnet 4 to solve a problem on LiveBench; it's slightly different to do it via the harness, where Anthropic never published any results themselves. Not saying this is a right stance, but this seems to be the stance.
This is, ultimately, a great PR move by Anthropic. 'We are so good OpenAI uses us over their own'
They know full well OpenAI can just sign up again, just not under an official OpenAI account.
If anything, this is a bad look for Anthropic.
This smells more like a cheap PR potshot. Using an API for training vs developers using it for coding are very different interpretations of "build a competing product"
This company's leadership is worrisome.
I use AI to get direct, specific, and useful responses—sometimes even to feel understood when I can’t fully put my emotions into words. I don’t need a machine that keeps circling around the point and lecturing me like a polite advisor.
If ChatGPT ever starts sounding like Claude, I might seriously reconsider whether I still want to use it.