frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Goldman Sachs Global Macro Research: Gen AI: too much spend, too little benefit [pdf]

https://www.goldmansachs.com/static-libs/pdf-redirect/prod/index.html?path=/images/migrated/insig...
1•u1hcw9nx•59s ago•0 comments

Deleting Code for Performance

https://dbushell.com/2025/12/04/deleting-code-for-performance/
1•speckx•2m ago•0 comments

AI usage policy for Ghostty contributions

https://github.com/ghostty-org/ghostty/pull/10412
1•bpierre•2m ago•0 comments

OpenAI will do "outcome-based pricing will share in the value created"

https://openai.com/index/a-business-that-scales-with-the-value-of-intelligence/
2•moomoo11•4m ago•1 comments

Show HN: ProblemHunt – A place to collect real problems before building startups

https://problemhunt.pro
2•gostroverhov•5m ago•0 comments

Half of NIH's institutes due to freeze billions in funding by 2027

https://www.nature.com/articles/d41586-026-00183-x
1•kozlov8•7m ago•0 comments

Show HN: Elden Ring–style "Git Pushed" screen when you Git push in VS Code

https://github.com/iiviie/CODE_PUSHED_darkSouls
2•iiviie•7m ago•0 comments

How Tim Cook Is Battle-Hardened to Win AI's Biggest Prize: The Trust Layer

https://pitchfreaks.substack.com/p/napster-zuck-and-studio-ghibli-how
1•ThePitchfreak•7m ago•1 comments

Ask HN: Modern test automation software (Python/Go/TS)?

3•rajkumar14•11m ago•0 comments

A Claimed Quantum Computing Breakthrough Was Just Debunked

https://scitechdaily.com/scientists-say-a-major-quantum-computing-breakthrough-was-not-what-it-se...
2•xthe•12m ago•1 comments

Why it's still hard to track Y Combinator Companies

https://www.researchly.at/post/y-combinator-companies-finden-tracken
2•leo_researchly•12m ago•1 comments

Cursor 2.4

https://cursor.com/changelog/2-4
2•leerob•13m ago•0 comments

Us-vs-Them Bias in Large Language Models

https://arxiv.org/abs/2512.13699
1•geox•14m ago•0 comments

Dependency Churn and You

https://dan.turnerhallow.co.uk/dependency-churn-and-you.html
2•speckx•14m ago•0 comments

Sprites – Stateful Sandboxes

https://sprites.dev/
1•varun_chopra•14m ago•0 comments

Theory X and Theory Y

https://en.wikipedia.org/wiki/Theory_X_and_Theory_Y
1•baxtr•15m ago•0 comments

Show HN: Figr – AI that thinks through product problems before designing

https://figr.design/
1•Mokshgarg003•15m ago•0 comments

Show HN: CLI for working with Apple Core ML models

https://github.com/schappim/coreml-cli
1•schappim•15m ago•0 comments

YouTube Keeps Blocking This Space Video. We're Showing It Anyway [video]

https://www.youtube.com/watch?v=WoNQ257OUNc
3•consumer451•16m ago•0 comments

Show HN: AI Gakuen – Specialist agents for Claude Code via compiled knowledge

https://github.com/ntombisol/aigakuen
1•ntombisol•17m ago•0 comments

Microsoft 365 Outage

https://twitter.com/MSFT365Status/status/2014422298506285161
2•6uJYSrt8M•17m ago•0 comments

Taming P99s in OpenFGA: How we built a self-tuning strategy planner

https://auth0.com/blog/self-tuning-strategy-planner-openfga/
3•elbuo•17m ago•0 comments

The 80% Problem: Why the Energy Transition Isn't What You Think

https://twitter.com/IamPranavJ/status/2014429406077583665
1•pranavj•18m ago•0 comments

Fastverse: A Suite of High-Performance and Low-Dependency R Packages

https://fastverse.org/fastverse/
1•PaulHoule•19m ago•0 comments

A clear visual explanation of what HTTPS protects

https://howhttps.works/why-do-we-need-https/
2•birdculture•20m ago•0 comments

Can AI Do My Bookkeeping?

https://theautomatedoperator.substack.com/p/can-ai-do-my-bookkeeping
1•idopmstuff•20m ago•2 comments

Show HN: macOS CLI tool for managing Calendar events and Reminders via EventKit

https://github.com/schappim/ekctl
2•schappim•21m ago•0 comments

Why the return of the MP3 player is a mini vinyl revival in the making

https://www.loudersound.com/tech/exploring-the-mp3-player-revival
1•speckx•22m ago•0 comments

Meesho Rebuilt Their Real-Time Analytics Platform

https://www.meesho.io/blog/inside-meeshos-real-time-analytics-platform-a-deep-dive-into-driving-p...
1•dashdoesdata•23m ago•1 comments

Mana LLM OS

https://www.mana.space/landing
11•behzadhaghgoo•24m ago•7 comments
Open in hackernews

I was banned from Claude for scaffolding a Claude.md file?

https://hugodaniel.com/posts/claude-code-banned-me/
153•hugodan•1h ago

Comments

lifetimerubyist•1h ago
bow down to our new overlords - dont' like it? banned, with no recourse - enjoy getting left behind, welcome to the future old man
properbrew•1h ago
I didn't even get to send 1 prompt to Claude and my "account has been disabled after an automatic review of your recent activities" back in 2024, still blocked.

Even filled in the appeal form, never got anything back.

Still to this day don't know why I was banned, have never been able to use any Claude stuff. It's a big reason I'm a fan of local LLMs. They'll never be SOTA level, but at least they'll keep chugging along.

anothereng•1h ago
just use a different email or something
ggoo•52m ago
This happened to me too, you need a phone number unfortunately
codazoda•37m ago
Since you were forced, are you getting good results from them?

I’ve experimented, and I like them when I’m on an airplane or away from wifi, but they don’t work anywhere near as well as Claude code, Codex CLI, or Gemini CLI.

Then again, I haven’t found a workable CLI with tool and MCP support that I could use in the same way.

Edit: I was also trying local models I could run on my own MacBook Air. Those are a lot more limited than something like a larger Llama3 in some cloud provider. I hadn’t done that yet.

properbrew•2m ago
For writing decent code, absolutely not, maybe a simple bash script or the obscure flags to a command that I only need to run once and couldn't be bothered to google or look through the man page etc. I'm using smaller models for less coding related stuff.

Thankfully OpenAI hasn't blocked me yet and I can still use Codex CLI. I don't think you're ever going to see that level of power locally (I very much hope to be wrong about that). I will move over to using a cloud provider with a large gpt-oss model or whatever is the current leader at the time if/when my OpenAI account gets blocked for no reason.

The M-series chips in Macs are crazy, if you have the available memory you can do some cool things with some models, just don't be expecting to one shot a complete web app etc.

falloutx•36m ago
you are never gonna hear back from Anthropic, they don't have any support. They are a company who feels like their model is AGI now they dont need humans except when it comes to paying.
lazyfanatic42•54m ago
this has been true for a long long time, there is a rarely any recourse against any technology company, most of them don't even have Support anymore.
preinheimer•1h ago
> AI moderation is currently a "black box" that prioritizes safety over accuracy to an extreme degree.

I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all.

munk-a•41m ago
You say that - and yet it has successfully guarded Elon from any of those pesky truths that might harm his fervently held beliefs. You just forgot to consider that Grok is a tool that prioritizes Elon's emotional safety over all other safeties.
oasisbob•1h ago
> Like a lot of my peers I was using claude code CLI regularly and trying to understand how far I could go with it on my personal projects. Going wild, with ideas and approaches to code I can now try and validate at a very fast pace. Run it inside tmux and let it do the work while I went on to do something else

This blog post could have been a tweet.

I'm so so so tired of reading this style of writing.

red_hare•1h ago
Alas, the 2016 tweet is the 2026 blog post prompt.
LPisGood•1h ago
What about the style are you bothered by? The content seems to be nothing new, so maybe that is the issue, but the style itself seems fine, no?
cortesoft•1h ago
I am really confused as to what happened here. The use of ‘disabled organization’ to refer to the author made it extra confusing.

I think I kind of have an idea what the author was doing, but not really.

tobyhinloopen•1h ago
I had to read it twice as well, I was so confused hah. I’m still confused
rtkwe•57m ago
They probably organize individual accounts the same as organization accounts for larger groups of users at the same company internally since it all rolls up to one billing. That's my first pass guess at least.
alistairSH•1h ago
You're not alone.

I think the author was doing some sort of circular prompt injection between two instances of Claude? The author claims "I'm just scaffolding a project" but that doesn't appear to be the case, or what resulted in the ban...

lazyfanatic42•56m ago
Author really comes off unhinged throughout the article to be frank.
superb_dev•53m ago
Did we read the same article? The author comes of as pretty frustrated but not unhinged
ryandrake•26m ago
I wouldn't say "unhinged" either, but maybe just struggling to organize and express thoughts clearly in writing. "Organizations of late capitalism, unite"?
pjbeam•50m ago
My take was more a kind of amusing laughing-through-frustration but also enjoying the ride just a little bit insouciance. Tastes vary of course, but I enjoyed the author's tone and pacing.
staticman2•40m ago
Author thinks he's cute to do things like mention Google without typing Google but I wouldn't call him unhinged.
rvba•54m ago
What is wrong with circular prompt injection?

The "disabled organization" looks like a sarcastic comment on the crappy error code the author got when banned.

redeeman•49m ago
i have no idea what he was actually doing either, and what exactly is it one isnt allowed to use claude to do?
falloutx•38m ago
This tracks with Anthropic, they are actively hostile to security researchers.
Romario77•36m ago
One Claude agent told other Claude agent via CLAUDE.md to do things certain way.

The way Claude did it triggered the ban - i.e. it used all caps which apparently triggers some kind of internal alert, Anthropic probably has some safeguards to prevent hacking/prompt injection and what the first Claude did to CLAUDE.md triggered this safeguard.

And it doesn't look like it was a proper use of the safeguard, they banned for no good reason.

anigbrowl•58m ago
Agreed, I found this rather incoherent and seeming to depend on knowing a lot more about author's project/background.
cr3ative•57m ago
Right. This is almost unreadable. There are words, but the author seems to be too far down a rabbit hole to communicate the problem properly…
superb_dev•54m ago
The author was using instance A of Claude to update a `claude.md` while another instance B of Claude was consuming that file. When Claude B did something wrong, the author asked Claude A to update the `claude.md` so that Claude B didn’t make the same mistake again
raincole•39m ago
Which shouldn't be bannable imo. Rate throttle is a more reasonable response. But Anthropic didn't reply to the author, so we don't even know if it's the real reason they got banned.
Aurornis•38m ago
More likely explanation: Their account was closed for some other reason, but it went into effect as they were trying this. They assumed the last thing they were doing triggered the ban.
tstrimple•19m ago
This does sound sus. I have CC update other project's claude.md files all the time. I've got a game engine that I'm tinkering with. The engine and each of the game concepts I play around with have their own claude.md. The purpose of writing the games is to enhance the engine, so the games have to be familiar with the engine and often engine features come from the game CC rather than the engine CC. To keep the engine CC from becoming "lost" about features implemented each game project has instructions to update the engine's claude.md when adding / updating features. The engine CC bootstraps new game projects with a claude.md file instructing it how to keep the engine in sync with game changes as well as details of what that particular game is designed to test or implement within the engine. All sorts of projects writing to other project's claude.md files.
exitb•52m ago
Normally you can customize the agents behavior via a CLAUDE.md file. OP automated that process by having another agent customize the first agent. The customizer agent got pushy, the customized agent got offended, OP got banned.
Aurornis•39m ago
Years ago I was involved in a service where we some times had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.

Every once in while someone would take it personally and go on a social media rampage. The one thing I learned from being on the other side of this is that if someone seems like an unreliable narrator, they probably are. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.

There are so many things about this article that don't make sense:

> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

I can't even understand what they're trying to communicate. I guess they're referring to Google?

There is, without a doubt, more to this story than is being relayed.

dragonwriter•32m ago
The excerpt you don’t understand is saying that if it has been Google rather than Anthropic, the blast radius of the no-explanation account nuking would have been much greater.

It’s written deliberately elliptically for humorous effect (which, sure, will probably fall flat for a lot of people), but the reference is unmistakable.

fluoridation•30m ago
"I'm glad this happened with Anthropic instead of Google, which provides Gemini, email, etc. or I would have been locked out of the actually important non-AI services as well."

Non-disabled organization = the first party provider

Disabled organization = me

I don't know why they're using these weird euphemisms or ironic monikers, but that's what they mean.

Romario77•39m ago
You are confused because the message from Claude is confusing. Author is not an organization, they had an account with anthropic which got disabled and Anthropic addressed them as organization.
dragonwriter•27m ago
> Author is not an organization, they had an account with anthropic which got disabled and Anthropic addressed them as organization.

Anthropic accounts are always associated with an organization; for personal accounts the Organization and User name are identical. If you have an Anthropic API account, you can verify this in the Settings pane of the Dashboard (or even just look at the profile button which shows the org and account name.)

ryandrake•13m ago
I've always kind of hated that anti-pattern in other software I use for peronal/hobby purposes, too. "What is your company name? [required]" I don't have a company! I'm just playing around with your tool on my own! I'm not an organization!
ankit219•29m ago
My rudimentary guess is this. When you write in all caps, it triggers sort of a alert at Anthropic, especially as an attempt to hijack system prompt. When one claude was writing to other, it resorted to all caps, which triggered the alert, and then the context was instructing the model to do something (which likely would be similar to a prompt injection attack) and that triggered the ban. not just caps part, but that in combination of trying to change the system characteristics of claude. OP does not know much better because it seems he wasn't closely watching what claude was writing to other file.

if this is true, the learning is opus 4.5 can hijack system prompts of other models.

kstenerud•10m ago
> When you write in all caps, it triggers sort of a alert at Anthropic

I find this confusing. Why would writing in all caps trigger an alert? What danger does caps incur? Does writing in caps make a prompt injection more likely to succeed?

pavel_lishin•1h ago
They don't actually know this is why they were banned:

> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

> Or I don't know. This is all just a guess from me.

And no response from support.

tobyhinloopen•1h ago
So you were generating and evaluating the performance of your CLAUDE.md files? And you got banned for it?
alistairSH•1h ago
It reads like he had a circular prompt process running, where multiple instances of Claude were solving problems, feeding results to each other, and possibly updating each other's control files?
epolanski•1h ago
What would be bad in that?

Writing the best possible specs for these agents seems the most productive goal they could achieve.

NitpickLawyer•41m ago
I think the idea is fine, but what might end up happening is that one agent gets unhinged and "asks" another agent to do more and more crazy stuff, and they get in a loop where everything gets flagged. Remember that "bots configured to add a book at +0.01$ on amazon, reached 1M$ for the book" a while ago. Kinda like that, but with prompts.
epolanski•38m ago
I still don't get it, get your models better for this far fetched case, don't ban users for a legitimate use case.
andrelaszlo•42m ago
Could anyone explain to me what the problem is with this? I thought I was fairly up to date on these things, but this was a surprise to me. I see the sibling comment getting downvoted but I promise I'm asking this in good faith, even if it might seem like a silly question (?) for some reason.
Aurornis•37m ago
I think it's more likely that their account was disabled for other reasons, but they blamed the last thing they were doing before the account was closed.
red_hare•1h ago
This feels... reasonable? You're in their shop (Opus 4.5) and they can kick you out without cause.

But Claude Code (the app) will work with a self-hosted open source model and a compatible gateway. I'd just move to doing that.

mrweasel•53m ago
Sure, but it also guarantees that people will think twice about buying their service. Support should have reached out and informed them about whatever they did wrong, but I can't say that I'm surprised that an AI company wouldn't have an real support.

I'd agree with you that if you rely on an LLM to do your work, you better be running that thing yourself.

viccis•48m ago
Not sure what your point is. They have the right to kick OP out. OP has the right to post about it. We have a right to make decisions on what service to use based on posts like these.

Pointing out whether someone can do something is the lowest form of discourse, as it's usually just tautological. "The shop owner decides who can be in the shop because they own it."

areoform•1h ago
I recently found out that there's no such thing as Anthropic support. And that made me sad, but not for reasons that you expect.

Out of all of the tech organizations, frontier labs are the one org you'd expect to be trying out cutting edge forms of support. Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

I also think it's essential for the anthropic platform in the long-run. And not just in the obvious ways (customer loyalty etc). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

lukan•1h ago
I would say it is a strong sign, they do not trust their agent yet, to allow them significant buisness decisions, that a support agent would have to do. Reopening accounts, closing them, refunds, .. people would immediately start to try to exploit them. And will likely succeed.
atonse•59m ago
My guess is that it's more "we are right now using every talented individual right now to make sure our datacenters don't burn down from all the demand. we'll get to support soon once we can come up for air"

But at the same time, they have been hiring folks to help with Non Profits, etc.

WarmWash•58m ago
Claude is an amazing coding model, its other abilities are middling. Anthropic's strategy seems to be to just focus on coding, and they do it well.
embedding-shape•54m ago
> Anthropic's strategy seems to be to just focus on coding, and they do it well.

Based on their homepage, that doesn't seem to be true at all. Claude Code yes, focuses just on programming, but for "Claude" it seems they're marketing as a general "problem solving" tool, not just for coding. https://claude.com/product/overview

Ethee•50m ago
Isn't this the case for almost every product ever? Company makes product -> markets as widely as possible -> only niche group become power users/find market fit. I don't see a problem with this. Marketing doesn't always have to tell the full story, sometimes the reality of your products capabilities and what the people giving you money want aren't always aligned.
WarmWash•39m ago
Anthropic isn't bothering with image models, audio models, video models, world models. They don't have science/math models, they don't bother with mathematics competitions, and they don't have open model models either.

Anthropic has claude code, it's a hit product, SWE's love claude models. Watching Anthropic rather than listening to them makes their goals clear.

arcanemachiner•46m ago
Interesting. Would anyone care to chime in with their opinion of the best all-rounder model?
WarmWash•42m ago
You'll get 30 different opinions and all those will disagree with each other.

Use the top models and see what works for you.

0xbadcafebee•43m ago
Critically, this has to be their play, because there are several other big players in the "commodity LLM" space. They need to find a niche or there is no reason to stick with them.

OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.

Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.

eightysixfour•48m ago
> Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.

Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

swiftcoder•46m ago
> shows a profound gap between what people think working in customer service is like and how fucking hard it actually is

Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)

0xferruccio•36m ago
to be fair at least half of the software engineers i know are facing some level of existential crisis when seeing how well claude code works, and what it means for their job in the long term

and these are people are not junior developers working on trivial apps

swiftcoder•21m ago
Yeah, I've watched a few peers go down this spiral as well. I'm not sure why, because my experience is that Claude Code and friends are building a lifetime of job security for staff-level folks, unscrewing every org that decided to over-delegate to the machine
pinkmuffinere•33m ago
Perhaps even more-so given the following tagline, "Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems", lol. I suppose it's possible eightysixfour is an upper-middle management executive though.
eightysixfour•31m ago
Consultant to, so yes. It could have replaced me and a ton of the work of the people I was supporting.
pinkmuffinere•24m ago
Ah I see, that definitely lends some weight claim then.
eightysixfour•32m ago
I was closer to upper-middle management and executives, it could have done the things I did (consultant to those people) and that they did.

It couldn't/shouldn't be responsible for the people management aspect but the decisions and planning? Honestly, no problem.

Terr_•13m ago
IMO we can augment this critique by asking what the "AI" was doing that impressed them in the first place:

1. "To evaluate these tools, I'll apply them to tasks I consider common and am familiar with."

2. "Wow! It composes memos and skims e-mails was crazy! These tools are amazing!"

3. "Anyway, this is gonna be big for replacing totally easy stuff, like customer support, and whatever it is they do all day."

4. "Oh, me? I'm not worried, because my job is Leadership! Composing memos and skimming e-mails? No, that isn't my job, it's just what I do every day for unrelated reasons. Plus my wage is less than that of a dozen support folks."

danielbln•37m ago
There are some solid usecases for AI in support, like document/inquiry triage and categorization, entity extraction, even the dreaded chatbots can be made to not be frustrating, and voice as well. But these things also need to be implemented with customer support stakeholders that are on board, not just pushed down the gullet by top brass.
eightysixfour•25m ago
Yes but no. Do you know how many people call support in legacy industries, ignore the voice prompt, and demand to speak to a person to pay their recurring, same-cost-every-month bill? It is honestly shocking.

There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.

munk-a•44m ago
> They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.

I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.

throwawaysleep•29m ago
> to send their most frustrated customers through a chatbot

But do those frustrated customers matter?

munk-a•25m ago
I just checked - frustrated customers isn't a metric we track for performance incentives so no, they do not.
throwawaysleep•8m ago
Even if you do track them, if 0.1% of customers are unhappy and contacting support, that's not worth any kind of thought when AI is such an open space at the moment.
magicmicah85•41m ago
https://support.claude.com/en/articles/9015913-how-to-get-su...

Their support includes talking to Fin, their AI support with escalations to humans as needed. I dont use Claude and have never used the support bot, but their docs say they have support.

throwawaysleep•30m ago
Eh, I can see support simply not being worth any real effort, i.e. having nobody working on it full time.

I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue was entirely for their psychological support/to buy a few months of extra revenue.

It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.

> I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

Are there enough people who need support that it matters?

furyofantares•29m ago
> I recently found out that there's no such thing as Anthropic support.

The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.

kmoser•18m ago
If you want to split hairs, it seem that Anthropic has support as a noun but not as a verb.
csours•23m ago
Human attention will be the luxury product of the next decade.
Lerc•48s ago
There is a discord, but I have not found it to be the friendliest of places.

At one point I observed a conversation which, to me, seemed to be a user attempting to communicate in a good faith manner who was given instructions that they clearly did not understand, and then were subsequently banned for not following the rules.

It seems now they have a policy of

    Warning on First Offense → Ban on Second Offense
    The following behaviors will result in a warning. Continued violations will result in a permanent ban:

    Disrespectful or dismissive comments toward other members

    Personal attacks or heated arguments that cross the line
    Minor rule violations (off-topic posting, light self-promotion)
    Behavior that derails productive conversation
    Unnecessary @-mentions of moderators or Anthropic staff
I'm not sure how many groups moderate in a manner that a second offence off-topic comment is worthy of a ban. It seems a little harsh. I'm not a fan of obviously subjective banable offences.

I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.

moomoo11•1h ago
Just stop using Anthropic. Claude Code is crap because they keep putting in dumb limits for Opus.
ipaddr•1h ago
You are lucky they refunded you. Imagine they didn't ban you and you continued to pay 220 a month.

I once tried Claude made a new account and asked it to create a sample program it refused. I asked it to create a simple game and it refused. I asked it to create anything and it refused.

For playing around just go local and write your own multi agent wrapper. Much more fun and it opens many more possibilities with uncensored llms. Things will take longer but you'll end up at the same place.. with a mostly working piece of code you never want to look at.

bee_rider•47m ago
LLMs are kind of fun to play with (this is a website for nerds, who among us doesn’t find a computer that talks back kind of fun), but I don’t really understand why people pay for these hosted versions. While the tech is still nascent, why not do a local install and learn how everything works?
causalmodels•28m ago
Because my local is a laptop and doesn't have a GPU cluster or TPU pod attached to it.
joshribakoff•5m ago
Anthropic is lucky their credit card processor has not cut them off due to excessive disputes that stem from their non existent support.
languagehacker•1h ago
Thinking 220GBP for a high-limit Claude account is the kind of thinking that really takes for granted the amount of compute power being used by these services. That's WITH the "spending other people's money" discount that most new companies start folks off with. The fact that so many are painfully ignorant of the true externalities of these technologies and their real price never ceases to amaze me.
rtkwe•37m ago
That's the problem with all the LLM based AI's the cost to run them is huge compared to what people actually feel they're worth based on what they're able to do and the gap seems pretty large between the two imo.
landryraccoon•52m ago
This blog post feels really fishy to me.

It's quite light on specifics. It should have been straightforward for the author to excerpt some of the prompts he was submitting, to show how innocent they are.

For all I know, the author was asking Claude for instructions on extremely sketchy activity. We only have his word that he was being honest and innocent.

swiftcoder•42m ago
> It should have been straightforward for the author to excerpt some of the prompts he was submitting

If you read to the end of the article, he links the committed file that generates the CLAUDE.md in question.

foxglacier•10m ago
It doesn't even matter. The point is you can't just use SAAS product freely like you can use local software because they all have complex vague T&C and will ban you for whatever reason they feel like. You're force to stifle your usage and thinking to fit the most banal acceptable-seeming behavior just in case.

Maybe the problem was using automation without the API? You can do that freely with local software using software to click buttons and it's completely fine, but with a SAAS, they let you then ban you.

ta988•10m ago
There will always be the "ones" that come with their victim blaming...
mikkupikku•4m ago
It's not "victim blaming" to point out that we lack sufficient information to really know who the victim even is, or if there's one at all. Believing complainants uncritically isn't some sort of virtue you can reasonably expect people to adhere to.

(My bet is that Anthropic's automated systems erred, but the author's flamboyant manner of writing (particularly the way he keeps making a big deal out of an error message calling him an organization, turning it into a recurring bit where he calls himself that) did raise my eyebrow.)

heliumtera•48m ago
Well at least they didn't email the press and called the FBI on you?
onraglanroad•45m ago
So you have two AIs. Let's call them Claude and Hal. Whenever Claude gets something wrong, Hal is shown what went wrong and asked to rewrite the claude.md prompt to get Claude to do it right. Eventually Hal starts shouting at Claude.

Why is this inevitable? Because Hal only ever sees Claude's failures and none of the successes. So of course Hal gets frustrated and angry that Claude continually gets everything wrong no matter how Hal prompts him.

(Of course it's not really getting frustrated and annoyed, but a person would, so Hal plays that role)

staticman2•35m ago
I don't think it's inevitable often the AI will just keep looping again and again. It can happily without frustration loop forever.
gpm•29m ago
I assume old failures aren't kept in the context window at all, for the simple reason that the context window isn't that big.
rsync•39m ago
You mean the throwaway pseudonym you signed up with was banned, right?

… right ?

quantum_state•36m ago
Is it time to move to open source and run model locally with an DGX Spark?
blindriver•32m ago
Every single open source model I've used is nowhere close to as good as the big AI companies. They are about 2 years behind or more and unreliable. I'm using the large parameters ones on a 512GB Mac Studio and the results are still poor.
f311a•35m ago
Why are so many people so obsessed with feeding as many prompts/data as possible to LLMs and generating millions of lines of code?

What are you gonna do with the results that are usually slop?

blindriver•33m ago
There needs to be a law that prevents companies from simply banning you, especially when it's an important company. There should be an explanation and they shouldn't be allowed to hide behind some veil. There should be a real process with real humans that allow for appeals etc instead of scripts and bots and automated replies.
writeslowly•32m ago
I've triggered similar conversation level safety blocks on a personal Claude account by using an instance of Deepseek to feed in Claude output and then create instructions that would be copied back over to Claude (there wasn't any real utility to this, it was just an experiment). Which sounds kind of similar to this. I couldn't understand what the heuristic was trying to guard against, but I think it's related to concerns about prompt injections and users impersonating Claude responses. I'm also surprised the same safeguards would exist in either the API or coding subscription.
lukashahnart•30m ago
> I got my €220 back (ouch that's a lot of money for this kind of service, thanks capitalism).

I'm not sure I understand the jab here at capitalism. If you don't want to pay that, then don't.

Isn't that the point of capitalism?

kmeisthax•28m ago
Another instance of "Risk Department Maoism".

If you're wondering, the "risk department" means people in an organization who are responsible for finding and firing customers who are either engaged in illegal behavior, scamming the business, or both. They're like mall rent-a-cops, in that they don't have any real power beyond kicking you out, and they don't have any investigatory powers either. But this lack of power also means the only effective enforcement strategy is summary judgment, at scale with no legal recourse. And the rules have to be secret, with inconsistent enforcement, to make honest customers second-guess themselves into doing something risky. "You know what you did."

Of course, the flipside of this is that we have no idea what the fuck Hugo Daniel was actually doing. Anthropic knows more than we do, in fact: they at least have the Claude.md files he was generating and the prompts used to generate them. It's entirely possible that these prompts were about how to write malware or something else equally illegal. Or, alternatively, Anthropic's risk department is just a handful of log analysis tools running on autopilot that gave no consideration to what was in this guy's prompts and just banned him for the behavior he thinks he was banned for.

Because the risk department is an unaccountable secret police, the only recourse for their actions is to make hay in the media. But that's not scalable. There isn't enough space in the newspaper for everyone who gets banned to complain about it, no matter how egregious their case is. So we get all these vague blog posts about getting banned for seemingly innocuous behavior that could actually be fraud.

jitl•27m ago
I always take these sorts of "oh no I was banned while doing something innocent" posts with a large helping of salt. At least the ones where someone is complaining about a ban from Stripe, usually it turns out they are doing something that either violates the terms of service or is actually fraudulent. None the less its quite frustrating dealing with these because either way.
ryandrake•7m ago
It would at least be nice to know exactly what you did wrong. This whole "You did something wrong. Please read our 200 page Terms of Service doc and guess which one you violated." crap is not helpful and doesn't give me (as an unrelated third party) any confidence that I won't be the next person to step on a land mine.
jordemort•16m ago
Forget the ethical or environmental concerns, I don't want to mess with LLMs because it seems like everyone who goes heavy on them ends up sounding like they're on the verge of cracking up.
omer_balyali•15m ago
Similar thing happened to me back in November 19 shortly after GitHub outage (which sent CC into repeated requests and time outs to GitHub) while beta testing Claude Code Web.

Banned and appeal declined without any real explanation to what happened, other than saying "violation of ToS" which can be basically anything, except there was really nothing to trigger that, other than using their most of the free credits they gave to test CC Web in less than a week. (No third party tools or VPN or anything really) There were many people had similar issues at the same time, reported on Reddit, so it wasn't an isolated case.

Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.

As their ads say: "Keep thinking. There has never been a better time to have a problem."

I've been thinking since then, what was the problem. But I guess I will "Keep thinking".

bastard_op•12m ago
I've been doing something a lot like this, using a claude-desktop instance attached to my personal mcp server to spawn claude-code worker nodes for things, and for a month or two now it's been working great using the main desktop chat as a project manager of sorts. I even started paying for MAX plan as I've been using it effectively to write software now (I am NOT a developer).

Lately it's gotten entirely flaky, where chat's will just stop working, simply ignoring new prompots, and otherwise go unresponsive. I wondered if maybe I'm pissing them off somehow like the author of this article did.

Now even worse is Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, and then tell you that don't wait for them, they'll contact you via email. That email never comes after several attempts.

I'm assuming at this point any real support is all smoke and mirrors, meaning I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it. I guess for all the cool tech, customer support is something they have not figured out.

I love Claude as it's an amazing tool, but when it starts to implode on itself that you actually require some out-of-box support, there is NONE to be had. Grok seems the only real alternative, and over my dead body would I use anything from "him".

syntaxing•5m ago
Serious question, why is codex and mistral(vibe) not a real alternative?
throwup238•4m ago
Anthropic has been flying by the seat of their pants for a while now and it shows across the board. From the terminal flashing bug that’s been around for months to the lack of support to instabilities in Claude mobile and Code for the web (I get 10-20% message failure rates on the former and 5-10% on CC for web).

They’re growing too fast and it’s bursting the seams of the company. If there’s ever a correction in the AI industry, I think that will all quickly come back to bite them. It’s like Claude Code is vibe-operating the entire company.

spike021•3m ago
> where chat's will just stop working, simply ignoring new prompots, and otherwise go unresponsive

I had this start happening around August/September and by December or so I chose to cancel my subscription.

I haven't noticed this at work so I'm not sure if they're prioritizing certain seats or how that works.

syntaxing•2m ago
While it sucks, I had great results replacing Sonnet 4.5 with GLM 4.7 in Claude code. Vastly more affordable too ($3 a month for the pro equivalent). Can’t say much about Opus though. Claude code forces me to put a credit card on file so they can charge over usage. I don’t mind they charge me, I do mind that there’s no apparent spending limit and hard to tell how much “inclusive” opus tokens I have left.