That's... quite a license term. I'm a big fan of tools that come with no restrictions on their use in their licenses. I think I'll stick with them.
If you compete with a vendor, or give aid and comfort to their competitors, do not expect the vendor to play nice with you, or even keep you on as a customer.
Certainly a mindset befitting Microsoft and Oracle, if I ever saw one.
Besides, lol, who cares how fast a Model T can go when there are much nicer forms of transportation that don't actively hate you.
I don’t believe it should be legal, but I see why they would be butt-hurt
source?
https://proact.eu/wp-content/uploads/2020/07/Visual-Basic-En...
It wasn't a blanket prohibition, but a restriction on some parts of the documentation and redistributable components. Definitely was weird to see that in the EULA for a toolchain. This was removed later on, though I forget if it's because they changed their mind or removed the components.
- Contracts can have unenforceable terms that a court can declare null and void; any decision not to renew the contract in the future would have no bearing on the current one.
- There are plenty of restrictions on when/whether you can turn down business; for example, FRAND contracts or patents don't allow you to choose not to work with a competitor, and so on.
I see no reason why Anthropic can't arbitrarily ban OpenAI, regardless of my opinion on the decision. Anthropic hasn't "patented" access to the Claude API; there are no antitrust concerns that I can see; etc.
And no, it isn't clear to me that this contract term would hold up in court, as Anthropic doesn't have copyright ownership in the AI output. I don't believe you can enforce copyright-related contracts without copyright ownership.
I could be wrong of course, but I find it odd this topic comes up from time to time but apparently nobody has a blog post by a lawyer or similar to share on this issue.
They don't need copyright ownership of the AI output to make part of the conditions for using their software running on their servers (API access) an agreement not to use it to train a competing AI model.
There would have to be a law prohibiting that term, either in general or for a company in the specific circumstances Anthropic is in. (The “specific circumstances” thing is seen, e.g., when a term is otherwise permitted but used by a firm that is also a monopoly in a relevant market as a way of either defending or leveraging that monopoly, and thus it becomes illegal in that specific case.)
These unknown companies called Microsoft, Oracle, Salesforce, Apple, Adobe, … et al have all had these controversies at various points.
I wouldn't suggest building on Oracle's property as you drink its milkshake, but the ToS and EULAs don't restrict competition.
It also makes it dangerous to become dependent on these services. What if at some point in the future, your provider decides that something you make competes with something they make, and they cut off your access?
They don't target and analyze specific users or organizations - that would be fairly nefarious.
The only exception would be if there are flags for trust and safety. https://support.anthropic.com/en/articles/8325621-i-would-li...
Unless it's actually some internal Claude API which OpenAI were using with an OpenAI benchmarking tool, this sounds like a hyped-up way for Wired to phrase it.
Almost like: `Woah man, OpenAI HACKED Claude's own AI mainframe until Sonnet slammed down the firewall man!` ;D Seriously though, why phrase API use of Claude as "special developer access"?
I suppose that it's reasonable to disagree on what is reasonable for safety benchmarking, e.g: where you draw a line and say, "hey, that's stealing" vs "they were able to find safety weak spots in their model". I wonder what the best labs are like at efficiently hunting for weak areas!
Funnily enough I think Anthropic have banned a lot of people from their API, myself included - and all I did was see if it could read a letter I got, and they never responded to my query to sort it out! But what does it matter if people can just use OpenRouter?
Isn't that precisely what an API is? Normal users do not use the API. Other programs written by developers use it to access Claude from their app. That's like asking why an SDK is phrased as a special kit for developers to build software around something they wish to integrate into their app.
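If it helps, here's what that "special developer access" amounts to in practice: a plain HTTP call anyone with an API key can make. A minimal sketch against Anthropic's public Messages API (the model name is illustrative and may be outdated):

```typescript
// Minimal sketch of "developer access" to Claude: one HTTP request.
// Assumes an API key from the Anthropic console in ANTHROPIC_API_KEY.
const response = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
  },
  body: JSON.stringify({
    model: "claude-sonnet-4-20250514", // illustrative; pick a current model
    max_tokens: 256,
    messages: [{ role: "user", content: "Hello, Claude." }],
  }),
});

const data = await response.json();
console.log(data.content?.[0]?.text); // the model's reply text
```

Nothing "special" about it beyond having signed up for a key.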
Compare with a sentence like “the elevator has special firefighter buttons” which does not mean that only some special type of firefighter uses the button.
The amount of care the author puts into their phrasing determines whether their point comes across as intended, or not. The average magazine reader can likely figure out that there's no such thing as "special" firefighters with "privileged" access to elevator buttons that other firefighters lack. They may not have the programming knowledge to do likewise with "developer access", even if they are reading a magazine like "Wired".
If you said that to anyone they'd assume there are non standard buttons beyond the normal "call" / "fire" buttons. Special changes the meaning in both sentences.
I understand that they were technically "using it to train models", which, given OpenAI's stance, I don't have much sympathy for, but it's not some "special developer hackery" that this is making it sound like.
Isn’t half the schtick of LLMs making software development available for the layman?
It's Software Development Kit, not Special Developer Kit ;-)
Live by the sword, die by the sword.
* copyright law
* trademark law
* defamation law (ChatGPT often reports wrong facts about specific people, products, companies, ... most seriously claiming someone was guilty of murder. Getting ChatGPT to say obviously wrong things about products is trivial)
* contract law (bypassing scraping restrictions they had agreed to as a company beforehand)
* harassment (ChatGPT made pictures depicting specific individuals doing ... well you can guess where this is going. Everything you can imagine. Women, of course. Minors. Politics. Company politics ...)
So far, they seem to have gotten away with everything.
Not sure if you're serious... you think OpenAI should be held responsible for everything their LLM ever said? You can't make a token generator unless the tokens generated always happen to represent factual sentences?
If I told people you are a murderer, for money, I'd expect to be sued and I'd expect to be convicted.
Presumably an AI should know about the trademarks, they are part of the world too. There is no point shielding LLMs from trademarks in the wild. A text editor can also infringe trademarks, depending how you use it. AI is taking its direction from prompts, humans are driving it.
E.g., ignoring that I find OpenAI, Google, and Anthropic in particular do take harassment and defamation extremely seriously (it takes serious effort to get ChatGPT to write a Bob Saget joke, let alone something seriously illegal): if they were bound by "normal" law, it would be a sepulchral dalliance with safety-ism that would probably kill the industry OR just enthrone (probably) Google and Microsoft as the winners forever.
Seriously, make a browser extension that people can turn on and off (no need to be dishonest here), and pay people to upload their AI chats, and possibly all the other content they view.
If Reddit won't let you scrape, pay people to automatically upload the Reddit comments they view normally.
If Claude cuts you off, pay people to automatically upload their Claude conversations.
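For the curious, a hypothetical sketch of what such an extension's content script might look like. The endpoint, storage key, and DOM selector are all invented for illustration; a real version would also need a manifest, a consent UI, and the payout side:

```typescript
// content-script.ts -- hypothetical opt-in chat-log uploader.
const UPLOAD_URL = "https://example.com/api/contributions"; // made up

chrome.storage.local.get("sharingEnabled", ({ sharingEnabled }) => {
  if (!sharingEnabled) return; // toggle is off: do nothing at all

  const observer = new MutationObserver(() => {
    // Selector is site-specific and will break whenever the chat UI changes.
    const messages = [...document.querySelectorAll("[data-message-author-role]")]
      .map((el) => ({
        role: el.getAttribute("data-message-author-role"),
        text: el.textContent ?? "",
      }));
    if (messages.length === 0) return;

    // Ship whatever the user is already looking at to the collector.
    void fetch(UPLOAD_URL, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ page: location.hostname, messages }),
    });
  });

  observer.observe(document.body, { childList: true, subtree: true });
});
```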
Am I crazy, am I hastening dystopia?
I think if you're not OpenAI/Anthropic sized (in which case you can do better), you're not going to get much value out of it.
It's hard to usefully post-train on wildly varied inputs, and post-training is all most people can afford.
There's too much noise to improve things unless you do a bunch of cleaning and filtering that's also somewhat expensive.
If you constrain the task (for example, use past generations from your own product) you get much further along though.
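To make the cleaning-and-filtering point concrete, the cheap first pass over crowd-sourced logs looks something like this hypothetical sketch (the Sample type and thresholds are invented); near-duplicate detection and quality scoring are where the real cost shows up:

```typescript
// Hypothetical first-pass filter over crowd-sourced chat logs.
import { createHash } from "node:crypto";

interface Sample { prompt: string; response: string; }

function cleanSamples(raw: Sample[]): Sample[] {
  const seen = new Set<string>();
  return raw.filter((s) => {
    // Drop trivially short or absurdly long pairs (thresholds invented).
    if (s.prompt.length < 20 || s.response.length < 40) return false;
    if (s.prompt.length + s.response.length > 50_000) return false;

    // Exact-duplicate removal via a content hash; real pipelines also do
    // near-duplicate detection (e.g. MinHash), which is the expensive part.
    const key = createHash("sha256")
      .update(s.prompt + "\u0000" + s.response)
      .digest("hex");
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```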
I've thought about building a Chrome plugin to do something useful for ChatGPT web users doing a task relevant to what my product does, then letting them opt into sharing their logs.
That's probably a bit more tenable for most users since they're getting value, and if your extension can do something like produce prompts for ChatGPT, you'll get data that actually overlaps with what you're doing.
All the chatbots with free access do that, they pay you by running your arbitrary computations on their servers.
1/ OpenAI's technical staff were using Claude Code (via the API, not the Max plans).
2/ Anthropic's spokesperson says API access for benchmarking and evals will remain available to OpenAI.
3/ OpenAI said it's using the APIs for benchmarking.
I guess model benchmarking is fine, but tool benchmarking is not: OpenAI were presumably trying to see whether their product works better than Claude Code (each with their own proprietary models) on certain benchmarks, and that is the access Anthropic revoked. How they caught it is far more remarkable. It's one thing to use Sonnet 4 to solve a problem on LiveBench; it's slightly different to do it via the harness, where Anthropic never published any results themselves. Not saying this is the right stance, but this seems to be the stance.
This is, ultimately, a great PR move by Anthropic. 'We are so good OpenAI uses us over their own.'
They know full well OpenAI can just sign up again, only not under an official OpenAI account.