Very diplomatic of them to say "we respect that other AI companies might reasonably reach different conclusions" while also taking a dig at OpenAI on their YouTube channel.
It appears they trend in the right direction:
- Have not kissed the Ring.
- Oppose blocking AI regulation that others support (e.g., they do not support banning state AI laws) [2].
- Commit to no ads.
- Willing to risk a Defense Department contract over objections to its use for lethal operations [1].
The things that are concerning:
- Palantir partnership (I'm unclear about what this actually is) [3]
- Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])
It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.
I'm curious, how do others here think about Anthropic?
[2] https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...
[3] https://investors.palantir.com/news-details/2024/Anthropic-a...
They’re moving towards becoming load-bearing infrastructure, and at that point the answers to specific questions about what you should do about it become rather situational.
Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.
And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and may not be able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course, as you say, even he may be pushed to take money from unsavory sources.
Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.
And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.
That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.
I’m very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don’t resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.
https://www.anthropic.com/news/anthropic-s-recommendations-o...
Also, Codex CLI and Gemini CLI are open source; Claude Code never will be. It's their moat, and even though it's 100% written by AI, the creator says it never will be opened up. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.
Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.
I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.
LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost and increasing the ease of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required creating large troll farms of real people, semi-specialized skills like Photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do, that would magnify the potential damage tenfold.
Closed-weight LLMs can be controlled to prevent or at least reduce the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law and the government can audit their performance. A criminal or hostile nation state downloading an open weight LLM is not going to care about the law.
This would not be a particularly novel idea - a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".
- Blocking access to others (Cursor, OpenAI, opencode)
- Asking to regulate hardware chips more, so that they don't get good competition from Chinese labs
- Partnerships with Palantir and the DoD, as if it weren't obvious how these organizations use technology and for what purposes.
At this scale, I don't think there are good companies. My hope is in open models, and the only labs doing good on that front are the Chinese labs.
> Asking to regulate hardware chips more
> partnerships with [the military-industrial complex]
> only labs doing good in that front are Chinese labs
That last one is a doozy.
Obviously it's a play, homing in on privacy/anti-ad concerns, like a Mozilla-type angle, but really it's a huge ad buy just to slag off the competitors. Worth the expense just to drive that narrative?
Ads playlist https://www.youtube.com/playlist?list=PLf2m23nhTg1OW258b3XBi...
I wonder how they can get away without showing ads when ChatGPT has to be doing it. Will the enterprise business be so profitable that ads are not required?
Maybe OpenAI is going for something different: democratising access for the vast majority of people. Remember that ChatGPT is what people know about and what people use the free version of. Who's to say that showing ads while also providing more access is the wrong choice?
Also, Claude is no match for ChatGPT in search. From my previous experience, ChatGPT is just way better at deep searches through the internet than Claude.
(Props for them for doing this, don't know how this is long-term sustainable for them though ... especially given they want to IPO and there will be huge revenue/margin pressures)
> AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce
I don't think they have an accurate model for what they're doing - they're treating it like just another app or platform, using tools and methods designed around social media and app store analytics. They're not treating it like what it is, which is a completely novel technology with more potential than the industrial revolution for completely reshaping how humans interact with each other and the universe, fundamentally disrupting cognitive labor and access to information.
The total mismatch between what they're doing with it to monetize and what the thing actually means to civilization is the biggest signal yet that Altman might not be the right guy to run things. He's savvy and crafty and extraordinarily good at the palace intrigue and corporate maneuvering, but if AdTech is where they landed, it doesn't seem like he's got the right mental map for AI, for all he talks a good game.
> we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers

- Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, 1998
And yes, it will surely happen to Claude. Be wary of how much of your cognition and independence you hand over to anyone or anything.
This is exactly what GPT-5 was about. By tweaking both the model selector (thinking/non-thinking) and using a significantly sparser thinking model (capping max spend per conversation turn), they massively controlled costs, but did so at the expense of intelligence, responsiveness, curiosity, skills, and all the things I valued in o3. This was the point I dumped OpenAI and went with Claude.
This business model issue is a subtle one, but a key reason why an advertising revenue model is not compatible (or competitive!) with "getting the best mental tools": margin maximization selects against businesses optimizing for intelligence.
This is going to be tough to compete against - Anthropic would need to go stratospheric with their (low margin) enterprise revenue.
It's great that Anthropic is targeting the businesses of the world. It's a little insincere to then declare "no ads", as if that decision would obviously be the same if the bulk of their users were not paying.
There are, as far as ads go, perfectly fine opportunities to do them in a limited way for limited things within chatbots. I don't know who they think they are helping by highlighting how to do it poorly.
A lot of people are OK with ad-supported free tiers.
(Also is it possible to do ads in a privacy respecting way or do people just object to ads across the board?)
You can see the very different response from OpenAI: https://openai.com/index/our-approach-to-advertising-and-exp.... OpenAI says they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.
For Anthropic to be proactive in saying they will not pursue ad-based revenue is, I think, not just a "one of the good guys" move but a sign that they may be stabilizing on a business model of both seat-based and usage-based subscriptions.
Either way, both companies are hemorrhaging money.
But I’m happy with their position and will cancel my ChatGPT and push my family towards Claude for most things. This taste effect is what I think pushes Apple devices into households: power users making endorsements.
And I think that excess margin is enough to get past the lowered ad revenue opportunity.
https://x.com/ns123abc/status/2019074628191142065
In any case, they draw undue attention to OpenAI rather than themselves. Not good advertising.
Both OpenAI and Anthropic should start selling compute devices instead. There is nothing stopping open-source LLMs from eating their lunch mid-term.
Great by Anthropic, but I put basically no long term trust in statements like this.