
Belgium stops decommissioning nuclear power plants

https://dpa-international.com/general-news/urn:newsml:dpa.com:20090101:260430-930-14717/
462•mpweiher•4h ago•369 comments

How an Oil Refinery Works

https://www.construction-physics.com/p/how-an-oil-refinery-works
123•chmaynard•2h ago•23 comments

Spain's parliament will act against massive IP blockages by LaLiga

https://www.democrata.es/en/politics/congress-and-senate/congress-will-act-against-massive-ip-blo...
34•akyuu•53m ago•3 comments

I aggregated 28 US Government auction sites into one search

https://bidprowl.com
141•scarsam•4h ago•42 comments

You can beat the binary search

https://lemire.me/blog/2026/04/27/you-can-beat-the-binary-search/
86•vok•2d ago•46 comments

Claude Code refuses requests or charges extra if your commits mention "OpenClaw"

https://twitter.com/theo/status/2049645973350363168
142•elmean•1h ago•86 comments

Granite 4.1: IBM's 8B Model Matching 32B MoE

https://firethering.com/granite-4-1-ibm-open-source-model-family/
216•steveharing1•5h ago•132 comments

The FCC is about to ban 21% of its test labs today. I mapped them all

https://markready.io/blog/fcc-accredited-test-labs-complete-guide
114•chambertime•2h ago•62 comments

Mozilla's opposition to Chrome's Prompt API

https://github.com/mozilla/standards-positions/issues/1213
387•jaffathecake•8h ago•156 comments

Durable queues, streams, pub/sub, and a cron scheduler – inside your SQLite file

https://honker.dev/
14•ferriswil•1h ago•0 comments

The Zig project's rationale for their anti-AI contribution policy

https://simonwillison.net/2026/Apr/30/zig-anti-ai/
536•lumpa•14h ago•297 comments

Where the goblins came from

https://openai.com/index/where-the-goblins-came-from/
919•ilreb•13h ago•541 comments

Noctua releases official 3D CAD models for its cooling fans

https://www.noctua.at/en/3d-cad-models
415•embedding-shape•2d ago•94 comments

Copy Fail

https://copy.fail/
1269•unsnap_biceps•22h ago•451 comments

Meta in row after workers who saw smart glasses users having sex lose jobs

https://www.bbc.com/news/articles/c5y7yvgy0w6o
387•gorbachev•3h ago•272 comments

Because It Doesn't Have To

https://blog.computationalcomplexity.org/2026/04/because-it-doesnt-have-to.html
10•zdw•23h ago•0 comments

A Primer on Bézier Curves – So What Makes a Bézier Curve?

https://pomax.github.io/bezierinfo/
74•mostlyk•2d ago•18 comments

My Stratum-0 Atomic Clock

https://coverclock.blogspot.com/2017/05/my-stratum-0-atomic-clock_9.html
45•g0xA52A2A•2d ago•12 comments

Japan Is Building Cardboard Suicide Drones

https://www.404media.co/japan-cardboard-drones-air-kamuy/
53•Brajeshwar•1h ago•38 comments

Craig Venter has died

https://www.jcvi.org/media-center/j-craig-venter-genomics-pioneer-and-founder-jcvi-and-diploid-ge...
299•rdl•14h ago•74 comments

The Science Behind Honey's Eternal Shelf Life (2013)

https://www.smithsonianmag.com/science-nature/the-science-behind-honeys-eternal-shelf-life-1218690/
22•downbad_•3h ago•15 comments

GCC 16 has been released

https://gcc.gnu.org/gcc-16/changes.html
206•HeliumHydride•4h ago•36 comments

Show HN: FusionCore: ROS 2 sensor fusion that outperforms robot_localization

https://github.com/manankharwar/fusioncore
4•kharwarm•2d ago•0 comments

"Parse, don't validate" through the years with C++

https://derekrodriguez.dev/parse-dont-validate-through-the-years-with-c-/
63•dwrodri•3d ago•31 comments

DataCenter.FM – background noise app featuring the sound of the AI bubble

https://datacenter.fm/
98•louisbarclay•8h ago•21 comments

Show HN: I wrote a DOOM clone in my own programming language

https://spectrelang.org/log/devlog#cubedoom
24•pizza_man•2d ago•13 comments

Biology is a Burrito: A text- and visual-based journey through a living cell

https://burrito.bio/essays/biology-is-a-burrito
168•the-mitr•13h ago•23 comments

Zed 1.0

https://zed.dev/blog/zed-1-0
2015•salkahfi•1d ago•655 comments

FastCGI: 30 years old and still the better protocol for reverse proxies

https://www.agwa.name/blog/post/fastcgi_is_the_better_protocol_for_reverse_proxies
398•agwa•1d ago•97 comments

London to Calcutta by Bus (2022)

https://www.amusingplanet.com/2022/08/london-to-calcutta-by-bus.html
111•CGMthrowaway•1d ago•31 comments

Claude Code refuses requests or charges extra if your commits mention "OpenClaw"

https://twitter.com/theo/status/2049645973350363168
138•elmean•1h ago

Comments

speedgoose•1h ago
At least we can assume that Anthropic eats their own dog food. They use Claude to develop their software.
NitpickLawyer•37m ago
You say that like it's a gotcha. I think the fact that they reached 2B/mo in revenue by dogfooding cc is all the proof that one needs that this thing actually works. In fact it works so well that more people want it than they can serve. For months now they've been having issues when EU and US tz are both online at the same time.
MagicMoonlight•17m ago
Everything works until it doesn’t.

The problem with slop is, nobody understands it. Nobody ever designed it, nobody really knows how it works. You’re just putting blind faith in the slop you’ve shipped.

It lets you be very quick, but if you’ve accidentally compromised all your data or bank accounts through the slop then you won’t know until you’re destroyed.

infamia•13m ago
> I think the fact that they reached 2B/mo in revenue by dogfooding cc is all the proof that one needs that this thing actually works.

That's a notable achievement, but let's have some balance... It's also responsible for the biggest self-own in software industry history by leaking their 1) crown jewels (i.e., source code) 2) the existence of their next model Mythos, and 3) their roadmap in a highly competitive market.

claw-el•11m ago
Is the reason they reached 2B/mo partially contributed by the fact that their users feel like they get unlimited use of it? If ‘feeling like it is unlimited use’ is a huge part that creates the 2B/mo, this change of limit might jeopardize it.

That being said, Anthropic can be diverting capacity to train the next model, and if it is significantly better, people would start flocking back again.

dmd•1h ago
I really want to stick with A\ given everything known about Altman, but man are they speedrunning the "how to destroy your reputation" guidebook.
Insanity•1h ago
They have better PR than OpenAI but they are not a more ethical company. They do a bunch of shady stuff and are just as much involved in military applications. Cal Newport’s recent podcast had a good discussion about this: https://youtu.be/BRr3pAPsQAk?si=jaRJYJ_XQE7VpxPN
esperent•1h ago
Pet peeve of mine is people saying "hey this thing is totally shady/false, I've got proof right here <links to hour long podcast>".

It happens surprisingly often.

rexpop•46m ago
Cal Newport and tech commentator Ed Zitron discussed this disparity between Anthropic's public image and their actual practices. Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has been deeply integrated with the US military, having been installed with classified access since June 2024. The podcast highlights that Claude has been actively utilized during the "Venezuela incursion" and the ongoing "war in Iran".

Despite this active involvement, CEO Dario Amodei released a statement attempting to publicly distance the company from the Department of Defense by declaring they would not allow their technology to be used for "mass domestic surveillance" or "fully autonomous weapons". Zitron categorizes this as a highly calculated PR maneuver, pointing out that LLMs are fundamentally incapable of controlling autonomous weapons anyway. The stunt successfully manufactured a wave of positive press—with celebrities and commentators praising Anthropic as an ethical objector—right when the company was trying to secure an IPO or a massive ~$100 billion valuation, all while they quietly remained an active part of the war effort.

Beyond their military contracts, the podcast details several highly questionable business practices Anthropic has used to artificially inflate their numbers:

1. During a lawsuit regarding their military contract, Anthropic's CFO filed a sworn affidavit revealing the company had only made $5 billion in its entire lifetime. This directly contradicted leaked media reports suggesting they made $4.5 billion in 2025 alone. It revealed that the company's publicly perceived run rate was heavily exaggerated through the "shady revenue math" popular in Silicon Valley, a major discrepancy that most financial journalists ignored.

2. When the open-source agent library OpenClaw first launched, Anthropic deliberately allowed users to connect a $200/month "max account" and essentially burn through thousands of dollars of API compute at Anthropic's expense. Zitron points out that Anthropic knowingly let this happen to temporarily boost their usage metrics and hype while they raised a $30 billion funding round. Just weeks after securing the funding, they abruptly cut off access for these users, a move Zitron cites as proof of them being an "unethical company".

Furthermore, the company has faced criticism for gaslighting users, maintaining poor service availability, and silently degrading model performance while rug-pulling users on rate limits. As Zitron summarizes, it is highly unlikely that either Anthropic or OpenAI actually care about these ethical boundaries beyond how they can be weaponized for better PR and higher valuations.

petcat•36m ago
> Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has taken 10s of billions from investors just like everyone else has. There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

So yes, this is obvious despite whatever image they try to cultivate.

fwipsy•28m ago
Anthropic is a public benefit corporation which limits liability to shareholders.

Just because they screwed up their billing doesn't mean every ethical commitment they've ever made is bunk.

rickydroll•33m ago
I think all the AI companies want to hook up with the US military, as it's the only way they'll cover their debt to investors.
aesthesia•29m ago
There's some validity to these criticisms, but it would be a lot more credible to cite someone whose job isn't "loudly promote any claim that sounds negative for AI, regardless of how well-founded it is."
fwipsy•18m ago
"LLMS are fundamentally incapable of controlling autonomous weapons" -- This was Anthropic's stance too, right?

"Quietly remained an active part of the war effort" - anthropic was totally transparent about it, but yeah not great.

"Leaks were wrong" - and that's Anthropic's fault?

OpenAI agreed to assist the DoD with zero boundaries and then lied about it. Can we at least give them credit for not doing that? If we just throw up our hands and say "they're all awful, whatever" then the result is reduced pressure on them to be better. Like it or not, I do not think AI is going away and as far as I can tell, despite billing problems, Anthropic's still the least bad frontier lab.

MagicMoonlight•19m ago
Probably some Slopcoded bot which posts fake comments to drive people to their content.

After all, if you’re paying hundreds of millions to buy these shitty podcasts, you might as well host some bots.

fwipsy•12m ago
Account is from 2016 with 6k karma? : doubt:
jp57•1h ago
Ha. Yes. "Speedrunning enshittification" is the phrase that's been in my head.

The flat-rate plans were the top of the slippery slope to enshittification, really. If everyone were on metered billing there'd be no reason for all these opaque and sneaky attempts to limit usage. People would pay for what they get and get what they pay for.

applfanboysbgon•45m ago
There is nothing wrong with flat-rate plans. I work at an LLM-serving startup, and am aware of at least three competitors that (a) provide flat-rate subs, (b) are extremely profitable, and (c) are bootstrapped, i.e., not beholden to investors (there are also many other competitors, but I can't ascertain their profitability or investment status).

You simply need to price the flat-rate sub at a price that's profitable when averaged out over all of your users, both light and heavy, and prevent fully automated usage by the power users. That's it. This is immensely more user-friendly, and I doubt you'd get any traction at all if you didn't do this. Even if you pay more for the sub, having unlimited (non-automated) usage frees a mental barrier to using the product. If you have to pay for every request you make, it introduces a hesitation to do anything - it makes the user hesitant to experiment, hesitant to prompt for anything of slightly less significance, anxious about the exact token consumption of every prompt, and so on. It's not enjoyable to use when you're being penny pinched for every prompt.

Anthropic's problem, of course, is that they are not bootstrapped. They don't have a business model that can compete with startups running DeepSeek or GLM on their own hardware. Non-frontier startups got to skip the whole "tens of billions of dollars in debt" step of creating a frontier model from scratch, and still get to run a model that is perhaps 80%-85% as good as Anthropic's, which is good enough for millions of customers. So Anthropic is desperate, backed into a corner, and doing anything and everything they can to try to right their sinking ship, no matter how scummy.
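The averaging logic described above can be sketched with back-of-the-envelope arithmetic. All numbers below are hypothetical, invented for illustration, not the startup's actual figures:

```python
# Back-of-the-envelope check for flat-rate pricing (all numbers hypothetical).
# A flat price is sustainable if it covers the *average* serving cost per user,
# even though heavy users individually cost more than they pay.

def break_even_price(user_costs, margin=0.3):
    """Smallest flat monthly price covering average cost plus a target margin."""
    avg_cost = sum(user_costs) / len(user_costs)
    return avg_cost * (1 + margin)

# 8 light users and 2 heavy users (monthly inference cost in dollars):
# the heavy users lose money individually, but the blended average still works.
costs = [2, 3, 1, 4, 2, 3, 2, 3, 60, 80]
print(round(break_even_price(costs), 2))  # 20.8
```

The catch, as the comment notes, is that the averaging only holds if fully automated power users are kept off the plan; otherwise the tail of the cost distribution swamps the average.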

Oras•39m ago
LLM serving startup => bootstrapped => extremely profitable

Mind sharing a link?

applfanboysbgon•30m ago
I do mind, since I enjoy speaking freely without concern of my opinions being linked to my employment. I assure you companies like this exist. Profiting off of inference is not the hard part, it's frontier training that is prohibitively expensive. You're free to disregard my commentary if you want, of course.
simoncion•19m ago
> Profiting off of inference is not the hard part, it's frontier training that is prohibitively expensive.

And given that Anthropic does both, it must make up its training costs by selling inference. jp57 was pretty clearly talking about Anthropic's flat-rate plans, rather than the flat-rate plans of companies that get to skip the most expensive part of the process.

applfanboysbgon•14m ago
I understand that very well, yes. The point I'm making is that I don't think Anthropic or OpenAI would have ever gotten significant traction if they didn't have flat-rate plans, because flat-rate plans themselves are not inherently predatory or part of the enshittification slope but actually extremely UX-friendly. Perhaps in another timeline, if their product was actually valuable enough to pay this price for, they could have simply provided a $50 plan as the standard level to provide enough margin to account for training costs as well. But as I see it DeepSeek is an existential threat to them, and they are now stuck between a rock and a hard place, because their product is devalued by its existence and if the frontier labs were to gate access with $50 plans they would get their lunch eaten even more quickly. It turns out there are downsides to burning inconceivably large stacks of other people's money.
beepbooptheory•15m ago
Why not just name one of those three competitors?
pkulak•35m ago
I also assume that forcing usage to spread out, via those 5-hour windows, has cost advantages.
fwipsy•32m ago
Anthropic isn't backed into a corner. They have plenty of enterprise subscriptions. Individual user experience (especially billing) is suffering because it's not a priority in comparison. If they were as desperate as you described, they would try selling access to mythos.
applfanboysbgon•21m ago
The fact that they are adding code specifically to charge individual consumers more reeks of desperation. This isn't "individual users are suffering because they're lower priority and neglected", this is "individual users are being actively squeezed because Anthropic is desperate for every penny it can get".
fwipsy•6m ago
This is such a stupid way to charge customers more. How many Claude code users use OpenClaw? Cheating customers is like burning down your house to keep warm. Anthropic aren't that stupid. I guarantee that this was some half-baked vibe-coded anti abuse system.
theplatman•35m ago
they are essentially Lyft in early Uber vs. Lyft days. They are marketing themselves vaguely as being "better" because they're "more ethical" but their actions make it clear that they're not much better than OAI.
reactordev•23m ago
Except Lyft didn't kick you out in the bad part of town simply because you mentioned the word lollipop. Claude will terminate your session, peg you to 100% usage, and more, to stop you from using the service you paid for.
zb3•1h ago
Oh come on Anthropic, just admit straight away that any pricing other than usage-based is completely unsustainable and is being phased out. Maybe doing it once, but officially, could save you some brand damage.
cowlby•1h ago
I don't understand how, having access to Mythos and unlimited use, their solution to open harnesses is lazy string regex-style matching.
alienbaby•1h ago
I wonder what happens if you ask Claude to solve the problem, and don't review its answer properly…
whateveracct•20m ago
they're just holding it wrong.. what model are they using? they should make sure they're on Opus 4.5+. That was a stepwise improvement and was when AI coding clearly became the futureₖₑₖ
jp57•1h ago
I saw a talk by Boris where he said, basically, that Claude codes itself now. They have it automatically writing features and reviewing PRs, apparently. I suspect that much of the code has never been seen by human eyes within Anthropic.
whateveracct•22m ago
lol so they aren't even good at using Claude
whateveracct•22m ago
their CEO has been shouting from the rooftops that programming is dead. ofc that would ripple down the org chart and result in a culture of bad programming.
stingraycharles•1h ago
Ok I am usually defending Anthropic, but it seems like this OpenClaw and Hermes ban was implemented incredibly poorly; it looks like a simple regex.

Didn’t they think about “we need to make sure Claude Code is never banned” ? Could have been as easy as including some Claude Code specific prompting traits (tools, system prompt, whatever) in there and automatically whitelisting it.

Is it foolproof? No. Will it avoid banning legit users? Absolutely.

Do one large sweep first, then see what still falls through, then ban those.

It really seems they were panicking due to capacity and there was very little oversight with all this.

I’m not affected but pretty disappointed.
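The whitelisting idea sketched in this comment could look something like the following hypothetical pseudologic. The field names, client identifier, and regex are invented for illustration; none of this comes from Anthropic's actual systems, and a real check would need a fingerprint much harder to spoof than a self-reported client name:

```python
import re

# Hypothetical server-side gate: allowlist requests carrying the official
# harness fingerprint before falling back to crude keyword matching.
BANNED = re.compile(r"openclaw|hermes\.md", re.IGNORECASE)

def should_flag(request_metadata: dict, commit_messages: list) -> bool:
    # "client" is an assumed metadata field, invented for this sketch.
    if request_metadata.get("client") == "claude-code":
        return False  # trusted first-party harness: skip the keyword scan
    # Otherwise fall back to the naive substring match people suspect.
    return any(BANNED.search(msg) for msg in commit_messages)

print(should_flag({"client": "claude-code"}, ['{"schema": "openclaw.inbound_meta.v1"}']))  # False
print(should_flag({}, ['{"schema": "openclaw.inbound_meta.v1"}']))  # True
```

Even this two-step version still false-positives on repos that merely mention a competitor's name, which is exactly the failure mode being reported in this thread.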

rvz•1h ago
Why would you defend Anthropic at this point after all their antics and their behaviour over the past 6 months?

They do not care about us.

mcast•1h ago
It sounds like Anthropic is dangerously low on compute availability if they’re prioritizing these refusals as their OKRs.
petcat•58m ago
I think it's obvious that they are critically lacking in compute capacity especially since OpenAI has committed billions to locking up all the future compute production.

And I don't necessarily think it's wrong for Anthropic to introduce QoS or throttling on users of their models. It's pretty much a necessity when offering public access to a scarce resource and it's been a common practice for decades.

What is the alternative? We just accept that it doesn't work half the time because the system is overloaded with molt bots?

eloisius•19m ago
Maybe they could not sell more if they’re already exceeding capacity? What kind of apologism is this?
ahtihn•6m ago
If they can't serve all their existing customers maybe they should stop accepting new customers until they can?
data-ottawa•1h ago
That’s incredibly frustrating.

I’ve got a NixOS Qemu VM I use to run openclaw in. I had Claude help me set it up, and it runs local models on my own machine in a config based sandbox.

Why should Claude block or charge extra to work on that?

Why should Claude care if I have instructions for Hermes or OpenClaw in my project repos?

This fingerprinting is incredibly sloppy for how much access to a machine Claude code has.

NewsaHackO•1h ago
If it's just to set up a VM, how much would you even need to use? A couple of cents?
data-ottawa•21m ago
I run an OpenClaw VM and used Claude Code to build the VM scripts. The VM is connected to local llama.cpp, so OpenClaw and the models are running on my own physical hardware.
philipov•24m ago
Now you've learned the advantage of knowing how to do things yourself. When you depend on untrustworthy agents, you shackle yourself to their idiotic whims. Be careful who you partner with.
sschueller•1h ago
https://xcancel.com/theo/status/2049645973350363168
htrp•1h ago
do they literally just have a regex match for all of their competitor harnesses?
spyder•19m ago
nah, it's probably worse: it could be some system prompt for their models...
regexorcist•1h ago
Things like these (Google also banned me from Antigravity for briefly using an agent) and the massive quality swings made me cancel all 3 subs last week and resort to my local Qwen 3.6 only. Open models are already great and only getting better, and I really enjoy the privacy and consistency of a model I run myself.
klaussilveira•53m ago
How much VRAM do you need to achieve decent performance?
regexorcist•39m ago
I have a 64GB M1 Ultra dedicated to llama.cpp. I get 40 tok/s on a fresh session, decreasing slowly to about 25 tok/s at around 50% of the 256K context, then down to 20 tok/s or less beyond that, but I rarely let it go much higher and hand off instead. This is with Qwen 36B A3B at Q8 without KV quantization. It's not super fast but perfectly usable for me.
SeanAnderson•53m ago
I don't think anyone is questioning all the benefits of using local LLMs. Those are readily apparent.

I just don't believe for an instant that they're anywhere in the same ballpark of capabilities as running Opus or similar. My time is the most valuable resource. Opus would need to be SIGNIFICANTLY more costly and unstable for me to start entertaining local models for day-to-day development.

Perhaps whatever work you're doing makes this trade-off more sensible, but I struggle to see how that could be true. I'm averse to running Sonnet on a large number of software engineering problems - let alone Qwen.

jrm4•44m ago
But, you know,

Yet.

dmd•37m ago
For now we infer through few weights, lossily; but then in full precision. Now I represent in part; but then shall I represent as fully as the data was sampled.

1 CorinthAIns 13:12

regexorcist•10m ago
I think you'd be surprised; I find that the harness is what makes the real difference. I also prefer to be in the loop, actively guiding and reviewing. Local models are definitely much less autonomous as of today, so if you need to be churning out code at speed they're probably not for you.
throwatdem12311•1h ago
But Peter Steinberger said that openclaw was “fully supported” with a subscription through claude -p.

Do these refusals still happen if you’re using an API key instead?

So I suppose Anthropic lied to him?

elmean•33m ago
In response to this he said "WAT"
jrflo•57m ago
I think it goes beyond this. I was just using claude to edit a blog post which mentioned OpenClaw and I got this response: "The "OpenClaw" reference — I assume that's a typo or playful reference; if you mean a real product, I couldn't find it under that spelling and you'll want to fix or footnote it.". I gave it a direct link to openclaw.ai and the chat instantly ended and hit my 5hr usage limit. Could have been a coincidence, but I had only lightly been using sonnet in the morning so it seems unlikely. Very odd.
p0w3n3d•25m ago
Dragons steal gold and jewels... and they guard their plunder as long as they live... and never enjoy a brass ring of it. Indeed they hardly know a good bit of work from a bad, though they usually have a good notion of the market value
MagicMoonlight•21m ago
Lmao, I can 100% believe that they are deliberately filling your usage bar to sabotage their competition. These people have no morals.
iLoveOncall•17m ago
I mean that also just sounds illegal...
tamimio•54m ago
I think that’s an ok move, definitely better than canceling code on pro users for example, I would support to even have a new pricing tier only for openclaw, so they don’t ruin the usage on others. I noticed the ones who use claude code usually are software developers or sysadmins, meanwhile most openclaw ones are your average HR stacy and lazy middle managers, so yeah, it should be a separate tier for them.
aunty_helen•47m ago
When compute poverty hits these big labs it’s all going to be the same. The ping pong tables and drinks fridges disappear.

The only thing they can hope for is to maintain momentum and critical mass long enough to find ways to pay for all this, or to have Moore's law make the average user request economical.

claudiug•45m ago
the most relevant person on this industry Theo - t3.gg /s
elmean•33m ago
:3
jrm4•41m ago
Interesting people talking about whether they should be "defended," here or whatnot, and all of that strikes me as wildly naive.

They have a business model that's more or less known, and that includes THEIR AI model(s) that they get to put out there however they want. I don't like it much at all, I actually sort of like the idea that they "owe" more because they probably "stole" a bunch of stuff to get the thing going.

But I mean, don't be mad, be proactive. Anthropic is going to try to Microsoft this in whatever way possible, and we all see that the numbers don't really add up.

Asking them pretty please to be nicer, meh. Let's figure out better, and more free-software-like ways to do this.

jamescontrol•30m ago
That is a huge red-flag. While I understand that they will do some policing/censoring, this is way beyond what I would consider acceptable.

They can have a different price plan for agentic stuff, but these things where they "accidentally" whoops-match on specific keywords and trigger extra usage charges give an evil-Microsoft vibe.

abdullin•28m ago
I reproduced this on my account.

    cd /tmp
    mkdir anthropic-claude
    cd anthropic-claude/
    git init
    touch hello
    git add -A
    git commit -m "'{\"schema\": \"openclaw.inbound_meta.v1\"}'"
    claude -p "hi"
Immediate disconnect and session usage went to 100%
Maxion•17m ago
I love their vibe coded "anti-abuse" systems :D
bloppe•7m ago
If they're gonna vibe-code all these arbitrary rules, they should at least release the source code so we can figure out how to work around them!
rich_sasha•5m ago
That's rather shitty. It's one thing to disallow bypassing preferential pricing models, it's a completely different thing to castrate your model against some uses.

You can see how it goes in the future. Wanna vibe code a throwaway script? $0.20. Ah, you want to code up a legal document search? $10k please. Oh and we'll charge 20% of your app sales too - I can see how they are going in real time, mind you!

pdyc•23m ago
Why do people want to continue to use Anthropic despite their shitty service? It's not like they have some kind of lock-in; it's still a new company, and it has shown its colors before we're stuck with it, unlike Google/Meta etc.
0xpiguy•18m ago
Totally agree. This is why open source models and tooling are so important for the ecosystem. I would not want these companies to decide what we can or cannot do.
wg0•22m ago
I'm stepping away from LLMs in general and canceled my Claude Code subscription this month, because I respect myself and deserve better, more transparent treatment.

If you must - in my experience Deepseek v4 is incredible value in every aspect. Pricing is transparent.

But like I said, I have funds in various AI gateways, yet I prefer to write by hand because I don't want surprising bugs and unnecessary code in my end result.

dgellow•11m ago
So close to doing the same
bryanhogan•21m ago
Claude.ai is now at 98.85% uptime. There have been so many frustrations with Claude / Anthropic lately (very heavy usage limits, wrong A/B testing, etc.).

Claude status: https://status.claude.com/

I have been really happy with my Codex subscription lately, but feels like these things change every other day. The OpenCode Go subscription for trying out GLM, Kimi, Qwen, Deepseek and friends also looks useful.

Nonetheless, Opus 4.6 is a very capable model, but justifying a Claude subscription gets more and more difficult; I think I might just use it occasionally through OpenRouter or as part of something like Cursor (although I'm not sure about the value of that subscription either).

OpenCode Go: https://opencode.ai/go

Cursor: https://cursor.com
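For scale, the 98.85% figure mentioned above translates into a surprising amount of monthly downtime; a quick sanity check (assuming a 30-day month):

```python
# Convert an uptime percentage into monthly downtime hours
# (assuming a 30-day month for simplicity).
def monthly_downtime_hours(uptime_pct: float, days: int = 30) -> float:
    return days * 24 * (1 - uptime_pct / 100)

print(round(monthly_downtime_hours(98.85), 1))  # ~8.3 hours of downtime per month
```

By comparison, a "three nines" (99.9%) service would allow only about 43 minutes of downtime per month.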

logicallee•20m ago
Highly relevant: https://en.wikipedia.org/wiki/Principal–agent_problem

(You're the principal, directing what to do, but your agent Anthropic has its own motivations that are not aligned with your will.)

g4cg54g54•20m ago
Same vein as https://news.ycombinator.com/item?id=47952722 ?

  HERMES.md in commit messages causes requests to route to extra usage billing  
  1203 points | 21 hours ago | 524 comments

@bcherny, we'll need a bit more than a "Fixed" here... https://github.com/anthropics/claude-code/issues/53262#issue...
agentbc9000•18m ago
OpenClaw does so much more than Claude Code tbh: running 9 agents from one machine, scheduling tasks, adding personal personas for each agent. Claude Code (which I like a lot) is on rails; OpenClaw is full open-world.

rate the analogy plz..

maxbond•17m ago
This is very concerning. Their heavy-handed tactics haven't impacted me personally yet, but I am increasingly nervous and casting about for viable egress paths if I need to flee Claude Code. I really hope they pump the brakes and thoroughly reorient themselves. They are under a lot of competing pressures and probably can't make a decision that won't upset a lot of people (in order to balance growth and capacity, etc.), but they are coming to the worst possible conclusions.

For instance, maybe you can't afford to take on more customers right now, Anthropic. Maybe if you are severely undermining the customer relationships you already have, you should just admit you can't sell any more 20x plans right now and only accept new customers at lower tiers until you have the necessary capacity.

This is also a DoS you could drive a truck through, and it's disturbing such an obvious vulnerability was shipped at all.

shrubble•17m ago
They are trying to make a moat where no possibility of creating a moat exists.

It’s a huge mistake at the level of IBM trying to reestablish dominance over PCs by making MicroChannel the new standard; this failed horribly and cost IBM its market leadership and reputation.

MCA was technically better at the time, but the industry responded with EISA and VLBus which led to PCI and today’s PCIe.

danaw•16m ago
I wouldn't be surprised if we see class action lawsuits from this, given it's so easily reproducible by so many.