
Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•51s ago•0 comments

Kernel Key Retention Service

https://www.kernel.org/doc/html/latest/security/keys/core.html
1•networked•56s ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
1•righthand•3m ago•0 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•4m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•5m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
2•vinhnx•5m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•10m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•15m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•19m ago•1 comment

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•20m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•21m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
3•okaywriting•28m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•31m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•31m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•32m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•33m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•33m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•34m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•34m ago•1 comment

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•38m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•39m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•40m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•40m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•48m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•48m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
2•surprisetalk•51m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•51m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
2•surprisetalk•51m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
5•pseudolus•51m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•51m ago•0 comments

Anthropic revokes OpenAI's access to Claude

https://www.wired.com/story/anthropic-revokes-openais-access-to-claude/
294•minimaxir•6mo ago

Comments

lossolo•6mo ago
https://archive.is/m4uL7
chowells•6mo ago
> According to Anthropic’s commercial terms of service, customers are barred from using the service to “build a competing product or service, including to train competing AI models”

That's... quite a license term. I'm a big fan of tools that come with no restrictions on their use in their licenses. I think I'll stick with them.

ygjb•6mo ago
Good luck with that! Most of the relevant model providers include similar terms (Grok, OpenAI, Anthropic, Mistral, basically everyone with the exception of some open model providers).
chowells•6mo ago
You're like 50% of the way there...
bitwize•6mo ago
For years it was a license violation to use Microsoft development tools to build a word processor or spreadsheet. It was also a violation of your Oracle license to publish benchmark results comparing Oracle to other databases.

If you compete with a vendor, or give aid and comfort to their competitors, do not expect the vendor to play nice with you, or even keep you on as a customer.

DaSHacka•6mo ago
Hmm so "because you split spending between us and a competitor, we'll force you to give the competitor the whole share instead!"

Certainly a mindset befitting Microsoft and Oracle, if I ever saw one.

immibis•6mo ago
Well, the customer has to choose to give your competitor the whole share or give you the whole share, and these companies are betting on being important enough the customer chooses the latter.

Don't forget a lot of their appeal is about (real or perceived) liability. If you use Postgres and you fuck up, you fuck up. If you use Oracle and you fucked up, you can blame Oracle and save face.

sroussey•6mo ago
Doesn’t the ban on benchmarking Oracle still stand today?
mdaniel•6mo ago
Given the law firm in question, which just happens to develop an RDBMS, I wouldn't want to find out

Besides, lol, who cares how fast a model-T can go when there are much nicer forms of transportation that don't actively hate you

david38•6mo ago
I can understand the benchmark issue. It often happens that when someone benchmarks something, it’s biased or wrong in some way.

I don’t believe it should be legal, but I see why they would be butt-hurt

gruez•6mo ago
>For years it was a license violation to use Microsoft development tools to build a word processor or spreadsheet.

source?

ack_complete•6mo ago
You have to go pretty far back, it was in the Visual C++ 6.0 EULA, for instance (for lack of a better link):

https://proact.eu/wp-content/uploads/2020/07/Visual-Basic-En...

It wasn't a blanket prohibition, but a restriction on some parts of the documentation and redistributable components. Definitely was weird to see that in the EULA for a toolchain. This was removed later on, though I forget if it's because they changed their mind or removed the components.

dude250711•6mo ago
Well, Open AI had been whining about DeepSeek back in the day, so it is fair in a way.
valtism•6mo ago
Would something like that hold up in court?
compootr•6mo ago
they can choose who they do & don't want to do business with
manquer•6mo ago
Law does not work like that.

- Contracts can have unenforceable terms that can be declared null and void by a court, any decision not to renew the contract in future would have no bearing on the current one.

- there are plenty of restrictions on when/ whether you can turn down business for example FRAND contracts or patents don’t allow you choose to not work with a competitor and so on.

ronsor•6mo ago
People always say "this wouldn't hold up in court" and "the law doesn't work like that" when it comes to contract, but in reality, contracts can mostly contain whatever you want.

I see no reason why Anthropic can't arbitrarily ban OpenAI, regardless of my opinion on the decision. Anthropic hasn't "patented" access to the Claude API; there are no antitrust concerns that I can see; etc.

staticman2•6mo ago
Nobody was asking if Anthropic can ban OpenAI. I believe they were asking if the contract term banning use of the output to train an AI would hold up in court.

And no, it isn't clear to me that this contract term would hold up in court, as Anthropic doesn't have copyright ownership of the AI output. I don't believe you can enforce copyright-related contracts without copyright ownership.

I could be wrong of course, but I find it odd this topic comes up from time to time but apparently nobody has a blog post by a lawyer or similar to share on this issue.

dragonwriter•6mo ago
> And no, it isn't clear to me that this contract term would hold up in court as Anthopic doesn't have copyright ownership in the AI output

They don't need copyright ownership of the AI output to make part of the conditions for using their software running on their servers (API access) an agreement not to use it to train a competing AI model.

There would have to be a law prohibiting that term, either in general or for a company in the specific circumstances Anthropic is in. (The “specific circumstances” thing is seen, e.g., when a term is otherwise permitted but used but a firm that is also a monopoly in a relevant market as a way of either defending or leveraging that monopoly, and thus it becomes illegal in that specific case.)

staticman2•6mo ago
"They don't need copyright ownership of the AI output to make part of the conditions for using their software running on their servers (API access) an agreement not to use it training a competing AI model."

You are missing the point.

Copyright law and the copyright act, not general contract law, governs whether a contract provision relating to AI output can be enforced by Anthropic, and since copyright law says Anthropic has no copyright in the output, Anthropic will not win in court.

It's not different than if Anthropic included a provision saying you won't print out the text of Moby Dick. Anthropic doesn't own copyright on Moby Dick and can't enforce a contract provision related to it.

Like I said I can be convinced I'm wrong based on a legal analysis from a neutral party but you seem to be arguing from first principles.

dragonwriter•6mo ago
> You are missing the point.

No, I am disagreeing with the point, because it's completely wrong.

> Copyright law and the copyright act, not general contract law, governs whether a contract provision relating to AI output can be enforced by Anthropic

No, it doesn't. There is no provision of copyright law that limits terms of contracts covering AI outputs.

> It's not different than if Anthropic included a provision saying you won't print out the text of Moby Dick.

This is true, but undermines your point.

> Anthropic doesn't own copyright on Moby Dick and can't enforce a contract provision related to it.

Actually, providers of services that allow you to produce output can enforce provisions prohibiting reproducing works they don't own the copyright to (and frequently do adopt and enforce rules prohibiting this for things other people own the copyright to).

> Like I said I can be convinced I'm wrong based on a legal analysis from a neutral party but you seem to be arguing from first principles.

You seem to be arguing from first principles that are entirely unrelated to the actual principles, statutory content of, or case law of contracts or copyrights, and I have no idea where they come from, but, sure, believe whatever you want, it doesn't cost me anything.

staticman2•6mo ago
>>There is no provision of copyright law that limits terms of contracts covering AI outputs.

This isn't how legal reasoning works in a common law system... to discover the answer you usually find the most similar case to the current fact pattern and then compare it to the current issue.

If you are aware of such a case, even colloquially, point me in the right direction. It might be hard to analogize to other cases, though, because Anthropic doesn't have a license for most of the training materials that made their model. I've also read you can't contract around a fair use defense.

If I'm wrong it isn't very helpful to shout "na uh" without further explanation. Give me some search engine keywords and I'll look up whatever you point me towards.

staticman2•6mo ago
Replying again to say this article appears to directly address similar cases:

https://perkinscoie.com/insights/blog/does-copyright-law-pre...

It seems courts are split:

"In jurisdictions that follow the Second Circuit's more restrictive approach, plaintiffs may be limited to bringing copyright infringement claims when the scope of license terms or other contractual restrictions on the use of works has been exceeded. Plaintiffs who do not own or control the copyright interest in the licensed work, however, will not be able to bring such claims and may be left without an enforcement mechanism under traditional contracting approaches."

AWPAGE•6mo ago
IMO it's all well and good for Anthropic to adhere to, and justify a ban under, its ToS. What's been super annoying is when we (myself and others) have been arbitrarily banned with very little communication on how to remedy things or adhere to the ToS for what feels like its intended purpose.

The biggest lesson I learnt from my law degree is that sure, you might be legally entitled to something - but you can still be receiving a raw deal and have very little in the way of remedial action.

realharo•6mo ago
But who's going to enforce this for them? And would they even find out if the service is otherwise available to the general public?
palata•6mo ago
Can't we say it's "fair use"? They do whatever they want saying it's "fair use", I don't see why I couldn't.
ijusthadto•6mo ago
Exactly this. Strange that this comment got downvoted. AI companies are scraping the entire internet, disregarding copyright and pirating books. Without it, their models would be useless.
dougSF70•6mo ago
Also, Twitter's ToS when accessing the firehose said that you could not recreate a Twitter client.
johnisgood•6mo ago
Same with Discord, for example. In fact, in another instance, my account got disabled for having used it for bots.
ethan_smith•6mo ago
These anti-competitive clauses are becoming standard across all major AI providers - Google, Microsoft, and Meta have similar terms. The industry is converging on a licensing model that essentially creates walled gardens for model development.
werrett•6mo ago
You guys are tripping. EULAs have had anti-competition, anti-benchmarking, anti-reverse engineering and anti-disparagement clauses since the late 90s.

These unknown companies called Microsoft, Oracle, Salesforce, Apple, Adobe, … et al have all had these controversies at various points.

heavyset_go•6mo ago
I am not a fan of Apple or Oracle, but you are not contractually prevented from competing with them if you use Macs or Oracle Cloud to build software.

I wouldn't suggest building on Oracle's property as you drink its milkshake, but the ToS and EULAs don't restrict competition.

JamesBarney•6mo ago
Oracle licenses 100% restrict reverse engineering its product to build a competing one, which is probably the closest to what these AI giants are trying to restrict.
forty•6mo ago
Oracle db products are not meant to build databases, unlike LLM code generators, which are meant to build any kind of software, so the restriction sounds a bit different.

Imagine if Oracle added restrictions on what you are allowed to build with Java; that would be a more similar comparison IMO.

whaleofatw2022•6mo ago
Yeah but did you know you also can't publish benchmarks?

E.g. if you make a product that works on multiple databases, you can't show the performance difference between them.

nilamo•6mo ago
That's just because they can't beat sqlite and they're too embarrassed by it.
qcnguy•6mo ago
You can, you just have to ask. And that's not an Oracle thing; all the commercial databases have that rule. It's too easy to make misleading benchmarks for such complicated products, so that's why they do it.
heavyset_go•6mo ago
IMO the closest analogy would be using JetBrains IDEs and being contractually obligated to not develop competing IDEs.

The ToS are not just about "reverse engineering" a competing model, they forbid using the service to develop competing systems at all.

wodenokoto•6mo ago
Yeah, if I remember correctly iTunes had a clause saying it couldn’t be used for nuclear development.

Not sure what Apple's lawyers were imagining, but I guess barring Iranian scientists from syncing their iPods with uranium refiner schematics set back their programme for decades.

zorked•6mo ago
I think Apple had it in all their software. It's a good stance and easy to ridicule by taking iTunes as an example.
astrange•6mo ago
It's not their decision, it's US law.
wodenokoto•6mo ago
> and easy to ridicule by taking iTunes as an example.

Not just easy, but fun too!

johnisgood•6mo ago
That is hilarious if true.
privatelypublic•6mo ago
Blame ITAR.
rendx•6mo ago
Glad to live in a sane jurisdiction, where provisions made available only after purchase and those that go against typical customer expectations are simply invalid, so I never had to care about EULAs.

https://en.wikipedia.org/wiki/End-user_license_agreement#Eur...

thayne•6mo ago
"Everyone else is doing it" doesn't make it right.

It also makes it dangerous to become dependent on these services. What if at some point in the future, your provider decides that something you make competes with something they make, and they cut off your access?

sebastiennight•6mo ago
When that provider's ToS allows them full access to the inputs/outputs you're sending through their system, there is a strong incentive to build something competitive with you once you're proven profitable enough.

I don't know how companies currently navigate that.

stingraycharles•6mo ago
Does this mean you can’t make a potential competitor to Claude Code using Claude Code, though?
beefnugs•6mo ago
Dumbest thing they could do, why would you cut off insight into what your competitors are doing?
ramoz•6mo ago
Because they don't blatantly read people's prompts. They have a confidential inference architecture.

They don't target and analyze specific user or organizations - that would be fairly nefarious.

The only exception would be if there are flags for trust and safety. https://support.anthropic.com/en/articles/8325621-i-would-li...

swalsh•6mo ago
Oh, I wonder if that applies to me? I've been using Claude to do experiments with using SNNs for language models. Doubt anything will come of it... it has mostly just been a fun learning experience, but it is technically a "competing product" (one that doesn't work yet)
whatevaa•6mo ago
If you release it, it will be a competing product, experiments are just research.
stingraycharles•6mo ago
But he’s building it right now, and the building part is what’s illegal. It’s a very gray area.
fn-mote•6mo ago
You definitely forgot the scare quotes around "illegal".
stingraycharles•6mo ago
You're correct. Illegal was the wrong term; a potential violation of their ToS would have been a better choice of words.
malloryerik•6mo ago
Ah, yes, just like a good robots.txt do-not-use-me-to-train-your-ai term of service that the LLM companies adhere to strictly?
AlwaysRock•6mo ago
So it begins!
throwawayoldie•6mo ago
Let's hope.
bethekidyouwant•6mo ago
This article says absolutely nothing and appears to be an ad for Anthropic
rs186•6mo ago
Do you have adblocker on?
luke-stanley•6mo ago
"OpenAI was plugging Claude into its own internal tools using special developer access (APIs)"

Unless it's actually some internal Claude API which OpenAI were using with an OpenAI benchmarking tool, this sounds like a hyped-up way for Wired to phrase it.

Almost like: `Woah man, OpenAI HACKED Claude's own AI mainframe until Sonnet slammed down the firewall man!` ;D Seriously though, why phrase API use of Claude as "special developer access"?

I suppose that it's reasonable to disagree on what is reasonable for safety benchmarking, e.g: where you draw a line and say, "hey, that's stealing" vs "they were able to find safety weak spots in their model". I wonder what the best labs are like at efficiently hunting for weak areas!

Funnily enough I think Anthropic have banned a lot of people from their API, myself included - and all I did was see if it could read a letter I got, and they never responded to my query to sort it out! But what does it matter if people can just use OpenRouter?

dylan604•6mo ago
> Seriously though, why phrase API use of Claude as "special developer access"?

Isn't that precisely what an API is? Normal users do not use the API. Other programs written by developers use it to access Claude from their app. That's like asking why is an SDK phrased as a special kit for developers to build software that works with something they wish to integrate into their app
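For context, the "developer access" being debated here is just Anthropic's public HTTP API. A minimal sketch of such a call (the endpoint, headers, and body shape are the publicly documented ones; the model id and the `ANTHROPIC_SEND` opt-in flag are assumptions for illustration):

```python
import json
import os
import urllib.request

# A standard Anthropic Messages API request. The endpoint, headers,
# and body shape below match Anthropic's public docs; the model id
# is an assumption and may be out of date.
payload = {
    "model": "claude-3-5-sonnet-latest",  # assumed model id
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Say hello."}],
}
req = urllib.request.Request(
    "https://api.anthropic.com/v1/messages",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", "<your-key>"),
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    method="POST",
)

# Dry run by default: only send if explicitly opted in with a real key.
if os.environ.get("ANTHROPIC_SEND") and os.environ.get("ANTHROPIC_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["content"][0]["text"])
else:
    print(req.full_url)
```

Nothing here is privileged; anyone with an API key can make the same request, which is the crux of the "special" vs. "normal" access debate below.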

viraptor•6mo ago
Because it's not "special developer access". It's just "normal developer access". The phrasing gives an impression they accessed something other users cannot.
CPLX•6mo ago
In standard English, "special" would be read as modifying the word "access". That would make the sentence semantically the same as “special access, specifically the type of access used by developers”.

Compare with a sentence like “the elevator has special firefighter buttons” which does not mean that only some special type of firefighter uses the button.

toomanyrichies•6mo ago
Counter-point: as a wordsmith, it's incumbent on the article's author to make their point in an unambiguous way. In your example, rather than write "the elevator has special firefighter buttons", the author could choose to write "the elevator has buttons which are only available to firefighters". Or alternatively, "the elevator has buttons which are only available to certain firefighters".

The amount of care the author puts into their phrasing determines whether their point comes across as intended, or not. The average magazine reader can likely figure out that there's no such thing as "special" firefighters with "privileged" access to elevator buttons that other firefighters lack. They may not have the programming knowledge to do likewise with "developer access", even if they are reading a magazine like "Wired".

Bluestein•6mo ago
It's Newswriting 101-level stuff.
Calavar•6mo ago
In your example, "firefighter buttons" is a noun phrase which refers to a particular type of button. "Special" applies to the whole of "firefighter buttons," not just to "firefighter" and not just to "buttons." The same would apply for "special developer access."
nilamo•6mo ago
So it isn't normal developer access, like an API, there's something special about it that most developers with access to the API could not access.
tick_tock_tick•6mo ago
> the elevator has special firefighter buttons

If you said that to anyone they'd assume there are non standard buttons beyond the normal "call" / "fire" buttons. Special changes the meaning in both sentences.

Jare•6mo ago
Firefighter buttons are meant to only be used in very rare special occasions (emergencies) so "special" is just emphasis, whereas developer access is a completely normal way to use the product and thus "special" suggests additional significance. Sure not everyone uses the product as developers, but then not everyone uses the 18th floor button either.
CPLX•6mo ago
Developer access isn’t normal to a lay audience. That’s my point. To a lay audience developers are special computer expert people who do completely different things than they do.

From the perspective of a non technical reader developer access isn’t normal, it’s special.

The HN audience doesn’t see that. But the phrase isn’t confusing to normal people.

saghm•6mo ago
I think it's pretty reasonable to assume that if someone bothers to say "special developer access" rather than just "developer access", there must be some difference between the two. There's clearly not any reason that "developer access" wouldn't be sufficient to describe using APIs, though, so it's hard not to read the word "special" as being at least redundant if not actively misleading.
Veen•6mo ago
In content intended for an audience of developers, it's reasonable to assume "special developer access" means access for special developers. If the audience is the general public, it would be sensible to interpret it as "special access for developers," in contrast to the normal sort of access most other people use.
luke-stanley•6mo ago
Yeah and Wired could just write it in a clear way so that no disambiguation or head scratching is needed.
luke-stanley•6mo ago
If Wired wants to portray normal access to Anthropic's API platform as a special fringe activity, rather than a normal way to programmatically use AI, it really says something about Wired. And this is Hacker News, right? Should we be on some watch list or something for thinking having control via API access is normal dev access!? MCP isn't even that old yet! ;D It's possible to write clearly and not that hard, I'm pretty sure they are hyping.
dragonwriter•6mo ago
> If Wired wants to portray normal access to Anthropic's API platform as a special fringe activity, rather than a normal way to programmatically use AI,

I know people on HN might not understand this, but programmatically using anything is a special fringe activity, even if the manner of programmatic use is normal for such use.

stavros•6mo ago
If I'm an OpenAI employee, and I use Claude Code via the API, I'm not doing some hacker-fu, I'm just using a tool a company released for the purpose they released it.

I understand that they were technically "using it to train models", which, given OpenAI's stance, I don't have much sympathy for, but it's not some "special developer hackery" that this is making it sound like.

dgfitz•6mo ago
Normal users use the API constantly, they just don’t realize it.

Isn’t half the schtick of LLMs making software development available for the layman?

slacktivism123•6mo ago
>That's like asking why is an SDK phrased as a special kit

It's Software Development Kit, not Special Developer Kit ;-)

708145_•6mo ago
Most people reading Wired probably don't know what an API is.
poemxo•6mo ago
I agree, it does sound like they're hyping it up. But maybe the author was confused. In the API ecosystem, there are special APIs that some customers get access to that the normal riff-raff do not. If someone called those "special developer access" I don't think it'd be wrong
luke-stanley•6mo ago
Yeah that's possible! Would love more details if true.
subscribed•6mo ago
Yeah, Anthropic are morons.

They banned my account completely for violation of ToS and never responded to my query, following my 3 or 4 chats with Claude where I asked for music and sci-fi book recommendations.

Never violated the ToS; the account was created through their UI and used literally a few times.

Well, I don't use them at all except for very rare tests through OpenRouter, indeed.

arcfour•6mo ago
I'm not sure this is a useful data point or how it makes them morons?
Twirrim•6mo ago
Agreed, not sure it makes them morons. The way their website scraper works is what makes me think they're morons. I opted to honeypot them in iocaine, because their access rate is absurd. On a typical day on my small website, they'll make nearly 550k calls, so averaging just above 6rps. Almost everyone else I honeypotted barely pushes 1 rps. Almost everything I see from Anthropic gives me the impression they're self-entitled jerks.
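The quoted rate checks out as back-of-the-envelope arithmetic (550k requests spread evenly over one day):

```python
# Sanity-check the quoted crawl rate: 550,000 requests per day,
# averaged over the 86,400 seconds in a day.
requests_per_day = 550_000
seconds_per_day = 24 * 60 * 60  # 86,400
rps = requests_per_day / seconds_per_day
print(f"{rps:.2f} requests/second")  # → 6.37 requests/second
```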
luke-stanley•6mo ago
Anthropic are wicked smart, it's not fair to call them morons, BUT the ToS ban of subscribed, myself, and OpenAI might be useful data points suggesting Anthropic's API moderation behaviour can be strict. Anthropic are doing great work overall, but they have erred on the side of policy robustness, versus keeping API users happy. I said more in my sibling comment reply.
luke-stanley•6mo ago
Well I don't think Anthropic are morons, that's not the point I was making.

Yes, I'm frustrated with Anthropic killing my direct API account for silly reasons, with no response. But actually I really appreciate Anthropic's models for code, their deep safety research with Constitutional AI, interpretability studies etc.

They are certainly guilty of having scaling and customer service issues, and making the wrong call with a faulty moderation system (for you too, and many others it seems like)!

But a lot of serious AI safety research that could literally save all our skin is being done by Anthropic, some of the best.

On OpenAI's API Platform, I am on Tier 5! It's unfortunate Anthropic have acted less commercially savvy than OpenAI (at the time, at least). I have complained on HN and I think on Twitter before about my account to no avail, after emailing before. But yeah, usually I just use them via OpenRouter these days, it's a shame that I must use it for API access.

I get the impression that a lot of OpenAI researchers went to Anthropic, which essentially is the first OpenAI splinter group. I think this is a sign of a serious, more healthy intellectual culture. I'm looking forward to seeing what they do next.

AWPAGE•6mo ago
I’ve unfortunately had the same thing happen to me and am trying to run the gauntlet of getting a response.

Even more annoying is that I suspect it's an issue linked to Google SSO and IP configurations.

I’m personally a big fan of Anthropic taking a more conservative approach compared to other tech companies that insist it’s not their responsibility - this is just a natural follow-on where we get a lot of false positives.

Having said that, I'm desperate for my account to be unbanned so I can use it again!

modeless•6mo ago
OpenAI Services Agreement: "Customer will not [...] use Output to develop artificial intelligence models that compete with OpenAI’s products and services"

Live by the sword, die by the sword.

spwa4•6mo ago
Didn't a whole bunch of AI companies make the news for refusing to respect X law in AI training? So far, X has been:

* copyright law

* trademark law

* defamation law (ChatGPT often reports wrong facts about specific people, products, companies, ... most seriously claiming someone was guilty of murder. Getting ChatGPT to say obviously wrong things about products is trivial)

* contract law (bypassing scraping restrictions they had agreed to as a company beforehand)

* harassment (ChatGPT made pictures depicting specific individuals doing ... well you can guess where this is going. Everything you can imagine. Women, of course. Minors. Politics. Company politics ...)

So far, they seem to have gotten away with everything.

raincole•6mo ago
> defamation law

Not sure if you're serious... you think OpenAI should be held responsible for everything their LLM ever said? You can't make a token generator unless the tokens generated always happen to represent factual sentences?

spwa4•6mo ago
Given that they publish everything their AI says? That that's in fact the business model? (in other words, they publish everything their AI says for money) Quite frankly, yes.

If I told people you are a murderer, for money, I'd expect to be sued and I'd expect to be convicted.

llbbdd•6mo ago
If you had a disclaimer (like OpenAI does), probably a different story
shakna•6mo ago
Disclaimers do not shield you from the majority of tort law. There's a reason South Park makes fun of such disclaimers.
llbbdd•6mo ago
Nope - they do not charge you for truth
mhh__•6mo ago
But you aren't an LLM (that we know of I suppose)
spwa4•6mo ago
I am a company though. Well, that's how I work. So why should it be different for me vs OpenAI or anyone else?
visarga•6mo ago
> trademark law

Presumably an AI should know about trademarks; they are part of the world too. There is no point shielding LLMs from trademarks in the wild. A text editor can also infringe trademarks, depending on how you use it. AI takes its direction from prompts; humans are driving it.

mhh__•6mo ago
Most of these things are extremely valuable for humanity and therefore I think it's a very good thing that we have had a light touch approach to it so far in the west.

e.g. I find that OpenAI, Google, and Anthropic in particular do take harassment and defamation extremely seriously (it takes serious effort to get ChatGPT to write a Bob Saget joke, let alone something seriously illegal). If they were bound by "normal" law, it would be a sepulchral dalliance with safety-ism that would probably kill the industry, OR just enthrone (probably) Google and Microsoft as the winners forever.

Buttons840•6mo ago
Who will pay me for my AI chat histories?

Seriously, make a browser extension that people can turn on and off (no need to be dishonest here), and pay people to upload their AI chats, and possibly all the other content they view.

If Reddit won't let you scrape, pay people to automatically upload the Reddit comments they view normally.

If Claude cuts you off, pay people to automatically upload their Claude conversations.

Am I crazy? Am I hastening dystopia?

bit1993•6mo ago
Then I would simply use AI to generate chat histories and get paid (:
manquer•6mo ago
That is not a problem if the price paid is lower than what generating synthetic data of similar size would cost.
bit1993•6mo ago
Great point. Verifying the synthetic data also has a cost, I wonder if it is cheaper than generating it?
mhh__•6mo ago
They could probably pay you based on loss / some similar metric during training.
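The "pay based on loss" idea above can be sketched in a few lines. This is a toy illustration, not anything these companies actually do: a real system would score submissions by the production model's training loss, while here an add-one-smoothed unigram model stands in for it. Submissions that look like data the model has already seen (or machine-generated boilerplate that mimics it) score low; genuinely novel text scores high. All function names are hypothetical.

```python
import math
from collections import Counter

def fit_unigram(corpus):
    """Fit an add-one-smoothed unigram model on the data we already hold.

    Returns (probs, unseen_prob), where unseen_prob is the smoothed
    probability assigned to any token not present in the corpus.
    """
    counts = Counter(w for doc in corpus for w in doc.lower().split())
    n = sum(counts.values())
    v = len(counts) + 1  # one extra slot for unseen tokens
    probs = {w: (c + 1) / (n + v) for w, c in counts.items()}
    return probs, 1.0 / (n + v)

def nll_per_token(text, probs, unseen):
    """Average negative log-likelihood of `text` under the unigram model.

    Higher means the text is more 'surprising' relative to existing data.
    """
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(-math.log(probs.get(t, unseen)) for t in tokens) / len(tokens)

def price_submissions(corpus, submissions, budget):
    """Split `budget` across submissions in proportion to surprisal.

    A crude stand-in for paying contributors by training-loss reduction:
    near-duplicates of existing data earn little, novel data earns more.
    """
    probs, unseen = fit_unigram(corpus)
    scores = [nll_per_token(s, probs, unseen) for s in submissions]
    total = sum(scores)
    return [budget * s / total for s in scores]
```

One nice side effect of this pricing rule: it partly answers the "I'd just generate fake chats" objection, since text sampled from a model similar to the scorer is by construction low-surprisal and earns close to nothing.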
BoorishBears•6mo ago
I've done a lot of post training and data collection for post-training

I think if you're not OpenAI/Anthropic sized (in which case you can do better) you're not going to get much value out of it

It's hard to usefully post-train on wildly varied inputs, and post-training is all most people can afford.

There's too much noise to improve things unless you do a bunch of cleaning and filtering that's also somewhat expensive.

If you constrain the task (for example, use past generations from your own product) you get much further along though.

I've thought about building a Chrome plugin to do something useful for ChatGPT web users doing a task relevant to what my product does, then letting them opt into sharing their logs.

That's probably a bit more tenable for most users since they're getting value, and if your extension can do something like produce prompts for ChatGPT, you'll get data that actually overlaps with what you're doing.

IAmGraydon•6mo ago
Two things. First, no one wants your AI chat histories. They want to interact with the LLM themselves. Second, their business models break down when they can't steal content to train on. Paying for training data on a large scale is out of the question.
bmacho•6mo ago
> Who will pay me for my AI chat histories?

All the chatbots with free access do that, they pay you by running your arbitrary computations on their servers.

Buttons840•6mo ago
But then only one company is paying me for something that is of equal benefit to all.
maven29•6mo ago
There is an A16z company that does exactly this, called yupp.ai. They need genuine labelling/feedback in return, but you get to either spend credits on expensive APIs or cash out. Likewise, openrouter has free endpoints from some providers who will retain your sessions for training.
ankit219•6mo ago
The article does not say anything substantial; it just presents some opposing viewpoints:

1/ OpenAI's technical staff were using Claude Code (via the API, not the Max plans).

2/ Anthropic's spokesperson says API access for benchmarking and evals will remain available to OpenAI.

3/ OpenAI said it's using the APIs for benchmarking.

I guess model benchmarking is fine, but tool benchmarking is not. Presumably OpenAI was trying to see whether its product works better than Claude Code (each with its own proprietary models) on certain benchmarks, and that is the access Anthropic revoked. How they caught it is far more remarkable. It's one thing to use Sonnet 4 to solve a problem on LiveBench; it's slightly different to do it via the harness, where Anthropic never published any results themselves. Not saying this is the right stance, but it seems to be the stance.

hinkley•6mo ago
Feels like something a Jepsen or such should be doing instead of competitors trying to clock each other directly. I can see why they would feel uncomfortable about this situation.
v5v3•6mo ago
>Nulty says that Anthropic will “continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry.”

This is, ultimately, a great PR move by Anthropic: 'We are so good OpenAI uses us over their own.'

They know full well OpenAI can just sign up again, only not under an official OpenAI account.

navigate8310•6mo ago
It's not easy for a multi-billion-dollar company to hide and evade a ban. If they are found doing so, they can easily be dragged into court.
brokegrammer•6mo ago
Wow, so that's why Copilot has been acting funny in VS Code. Code quality dropped and requests keep failing. But I'm still seeing Claude models in the selector. Can't read this article because of the paywall. Are they exaggerating?
spoaceman7777•6mo ago
No, that's just because Claude has been having persistent brownouts for a while now. Microsoft and Anthropic messed up by bailing out of full-tilt AI hyperscaling a fair bit (i.e., Microsoft cancelled a bunch of AI datacenter projects earlier this year).
Palmik•6mo ago
I would expect every AI company to use other companies' models in "its own internal tools using special developer access (APIs)", at the very least for evals.

If anything, this is a bad look for Anthropic.

Havoc•6mo ago
>OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5,

This smells more like a cheap PR potshot. Using an API for training vs. developers using it for coding are very different interpretations of "build a competing product".

revskill•6mo ago
Does the OpenAI team use Claude to vibe code? Their models are too stupid to code.
justinclift•6mo ago
Claude's AI models aren't great for any kind of coding other than vibe coding either. :(
retinaros•6mo ago
After what Dario said on China, the Windsurf story, and now this... it shows that any company using Anthropic's AI that Anthropic considers a competitor will be banned. Their goal being AGI, everyone is a potential competitor in some vertical, be it code, maths, or even finance, news, translation...

This company's leadership is worrisome.

numbersense•6mo ago
Didn't some people from OpenAI mention that all the labs work together, they just don't make that information public?
abby0214•6mo ago
I don’t understand why OpenAI would ever want to “take cues from Claude.” As a heavy user, I honestly find Claude’s tone way too much like a customer service bot—no emotion, no stance, and constantly saying things like “it depends on your perspective.”

I use AI to get direct, specific, and useful responses—sometimes even to feel understood when I can’t fully put my emotions into words. I don’t need a machine that keeps circling around the point and lecturing me like a polite advisor.

If ChatGPT ever starts sounding like Claude, I might seriously reconsider whether I still want to use it.

abby0214•6mo ago
Please, OpenAI — don’t make ChatGPT sound like Claude. I’m not here for overly cautious, vague “customer service” replies. I’m here for clarity, precision, and something that actually feels like it understands me. If GPT turns into Claude, I’m seriously out. GPT is what I go to when I want to talk and think. Claude is what I use when I need code or research. If OpenAI turns GPT into Claude… then what am I left with?
abby0214•6mo ago
I’m not asking for GPT to beat Claude. I’m asking it to stay GPT — the one that can actually talk to me like a human, not just give sanitized textbook responses.