
XML Is a Cheap DSL

https://unplannedobsolescence.com/blog/xml-cheap-dsl/
69•y1n0•1h ago•33 comments

1M context is now generally available for Opus 4.6 and Sonnet 4.6

https://claude.com/blog/1m-context-ga
854•meetpateltech•20h ago•330 comments

Baochip-1x: What It Is, Why I'm Doing It Now and How It Came About

https://www.crowdsupply.com/baochip/dabao/updates/what-it-is-why-im-doing-it-now-and-how-it-came-...
88•timhh•2d ago•9 comments

Megadev: A Development Kit for the Sega Mega Drive and Mega CD Hardware

https://github.com/drojaazu/megadev
42•XzetaU8•4h ago•0 comments

Python: The Optimization Ladder

https://cemrehancavdar.com/2026/03/10/optimization-ladder/
19•Twirrim•3d ago•3 comments

Please Do Not A/B Test My Workflow

https://backnotprop.com/blog/do-not-ab-test-my-workflow/
115•ramoz•1h ago•109 comments

Qatar helium shutdown puts chip supply chain on a two-week clock

https://www.tomshardware.com/tech-industry/qatar-helium-shutdown-puts-chip-supply-chain-on-a-two-...
639•johnbarron•1d ago•537 comments

The Isolation Trap: Erlang

https://causality.blog/essays/the-isolation-trap/
80•enz•2d ago•23 comments

Wired headphone sales are exploding

https://www.bbc.com/future/article/20260310-wired-headphones-are-better-than-bluetooth
207•billybuckwheat•2d ago•342 comments

Show HN: Channel Surfer – Watch YouTube like it’s cable TV

https://channelsurfer.tv
539•kilroy123•2d ago•158 comments

RAM kits are now sold with one fake RAM stick alongside a real one

https://www.tomshardware.com/pc-components/ram/fake-ram-bundled-with-real-ram-to-create-a-perform...
49•edward•3h ago•32 comments

Mouser: An open source alternative to Logi-Plus mouse software

https://github.com/TomBadash/MouseControl
347•avionics-guy•18h ago•108 comments

A Survival Guide to a PhD (2016)

http://karpathy.github.io/2016/09/07/phd/
130•vismit2000•4d ago•75 comments

Hammerspoon

https://github.com/Hammerspoon/hammerspoon
305•tosh•19h ago•110 comments

Recursive Problems Benefit from Recursive Solutions

https://jnkr.tech/blog/recursive-benefits-recursive
32•luispa•3d ago•14 comments

I found 39 Algolia admin keys exposed across open source documentation sites

https://benzimmermann.dev/blog/algolia-docsearch-admin-keys
139•kernelrocks•14h ago•36 comments

How Lego builds a new Lego set

https://www.theverge.com/c/23991049/lego-ideas-polaroid-onestep-behind-the-scenes-price
25•Michelangelo11•2h ago•3 comments

Parallels confirms MacBook Neo can run Windows in a virtual machine

https://www.macrumors.com/2026/03/13/macbook-neo-runs-windows-11-vm/
286•tosh•23h ago•399 comments

Can I run AI locally?

https://www.canirun.ai/
1277•ricardbejarano•1d ago•313 comments

You gotta think outside the hypercube

https://lcamtuf.substack.com/p/you-gotta-think-outside-the-hypercube
84•surprisetalk•3d ago•23 comments

Digg is gone again

https://digg.com/
237•hammerbrostime•18h ago•222 comments

The Browser Becomes Your WordPress

https://wordpress.org/news/2026/03/announcing-my-wordpress/
10•vidyesh•40m ago•3 comments

Michael Faraday: Scientist and Nonconformist (1996)

http://silas.psfc.mit.edu/Faraday/
14•o4c•3d ago•1 comment

Atari 2600 BASIC Programming (2015)

https://huguesjohnson.com/programming/atari-2600-basic/
36•mondobe•2d ago•8 comments

I beg you to follow Crocker's Rules, even if you will be rude to me

https://lr0.org/blog/p/crocker/
91•ghd_•14h ago•135 comments

Games with loot boxes to get minimum 16 age rating across Europe

https://www.bbc.com/news/articles/cge84xqjg5lo
240•gostsamo•13h ago•142 comments

AEP (API Design Standard and Tooling Ecosystem)

https://aep.dev/
22•rambleraptor•3d ago•8 comments

Coding after coders: The end of computer programming as we know it?

https://www.nytimes.com/2026/03/12/magazine/ai-coding-programming-jobs-claude-chatgpt.html?smid=u...
150•angst•2d ago•205 comments

Optimizing Content for Agents

https://cra.mr/optimizing-content-for-agents/
54•vinhnx•11h ago•21 comments

Emacs and Vim in the Age of AI

https://batsov.com/articles/2026/03/09/emacs-and-vim-in-the-age-of-ai/
170•psibi•4d ago•115 comments

Please Do Not A/B Test My Workflow

https://backnotprop.com/blog/do-not-ab-test-my-workflow/
111•ramoz•1h ago

Comments

Razengan•1h ago
I knew it: https://news.ycombinator.com/item?id=47274796
reconnecting•1h ago
A professional tool is something that provides reliable and replicable results. LLMs offer none of this, and A/B testing is just further proof.
danielbln•1h ago
I don't get your point. Web tools have been doing A/B feature testing all the time, way before we had LLMs.
reconnecting•1h ago
This is very different from the A/B interface testing you're referring to. What LLMs enable is A/B testing of the tool's own output — same input, different result.

Your compiler doesn't do that. Your keyboard doesn't do that. The randomness is inside the tool itself, not around it. That's a fundamental reliability problem for any professional context where you need to know that input X produces the same output, every time.

orf•1h ago
It’s exactly the same as A/B testing an interface. This is just testing 4 variants of a “page” (the plan), measuring how many people pressed “continue”.
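For what it's worth, the bucketing mechanics behind this kind of test are usually mundane: hash a stable id into a variant. A minimal sketch (function and experiment names are made up, not Anthropic's actual mechanism):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, n_variants: int = 4) -> int:
    """Deterministically map a user to one of n_variants buckets.

    Hashing user_id together with the experiment name keeps the assignment
    stable across sessions while letting different experiments bucket
    users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_variants
```

Because the hash is deterministic, a user stays in the same bucket across sessions, which is exactly why the behavior change can feel like a permanent, unannounced product change to whoever landed in the B bucket.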
stavros•1h ago
You've grouped LLMs into the wrong set. LLMs are closer to people than to machines. This argument is like saying "I want my tools to be reliable, like my light switch, and my personal assistant wasn't, so I fired him".

Not to mention that of course everyone A/B tests their output the whole time. You've never seen (or implemented) an A/B test where the test was whether to improve the way e.g. the invoicing software generates PDFs?

applfanboysbgon•1h ago
> LLMs are closer to people than to machines.

jfc. I don't have anything to say to this other than that it deserves calling out.

> You've never seen (or implemented) an A/B test where the test was whether to improve the way e.g. the invoicing software generates PDFs?

I have never in my life seen or implemented an a/b test on a tool used by professionals. I see consumer-facing tests on websites all the time, but nothing silently changing the software on your computer. I mean, there are mandatory updates, which I do already consider to be malware, but those are, at least, not silent.

johnisgood•1h ago
Why are you calling it out? You are interpreting the statement too literally. The point is probably about behavior, not nature. LLMs do not always produce identical outputs for identical prompts, which already makes them less like deterministic machines and superficially closer to humans in interaction. That is it. The comparison can end here.
applfanboysbgon•59m ago
They actually can, though. The frontier model providers don't expose seeds, but when running LLM inference on your own hardware, you can set a specific seed for deterministic output and evaluate how small changes to the context change the output for that seed. This is like suggesting that Photoshop would be "more like a person than a machine" if it added a random factor every time you picked a color that changed the value you selected by ±20%, and didn't expose a way to lock it. "It uses a random number generator, therefore it's people" is a bit of a stretch.
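To illustrate the seed point: local runtimes such as llama.cpp accept a seed parameter, and with it fixed, identical context yields identical output. A toy sampler demonstrating just the determinism property (nothing LLM-specific; all names invented):

```python
import math
import random

def sample_tokens(seed, logits_per_step, temperature=1.0):
    # With a fixed seed, the sampled sequence is identical run to run --
    # the same property a pinned seed gives local LLM inference.
    rng = random.Random(seed)
    out = []
    for logits in logits_per_step:
        # Sample proportionally to exp(logit / temperature).
        weights = [math.exp(l / temperature) for l in logits]
        out.append(rng.choices(range(len(logits)), weights=weights)[0])
    return out

steps = [[2.0, 0.5, 0.1], [0.3, 1.7, 0.2]] * 5
assert sample_tokens(123, steps) == sample_tokens(123, steps)  # reproducible
```

Re-seeding makes the run replayable; changing the context (the logits here) while holding the seed fixed is what lets you attribute output changes to the input rather than to sampling noise.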
mikkupikku•55m ago
What other tool can I have a conversation with? I can't talk to a keyboard as if it were a coworker. Consider this seriously, instead of just letting your gut reaction win. Coding with claude code is much closer to pair programming than it is to anything else.
applfanboysbgon•51m ago
You could have a conversation with Eliza, SmarterChild, Siri, or Alexa. I would say surely you don't consider Eliza to be closer to a person than a machine, but then it takes a deeply irrational person to have led to this conversation in the first place, so maybe you do.
mikkupikku•40m ago
Not productive conversations. If you had ever made a serious attempt to use these technologies instead of trying to come up with excuses to ignore it, you would not even think of comparing a modern LLM coding agent to some gimmick like Alexa or ELIZA. Seriously, get real.
applfanboysbgon•36m ago
Not only have I used the technology, I've worked for a startup that serves its own models. When you work with the technology, it could not be more obvious that you are programming software, and that there is nothing even remotely person-like about LLMs. To the extent that people think so, it is sheer ignorance of the basic technicals, in exactly the same way that ELIZA fooled non-programmers in the 1960s. You'd think we'd have collectively learned something in the 60 years since but I suppose not.
mikkupikku•30m ago
I really don't care where you've worked, to seriously argue that LLMs aren't more capable of conversation than ELIZA, aren't capable of pair programming even, is gargantuan levels of cope.
applfanboysbgon•27m ago
I didn't make any claims about their utility. I said that they are not like people. They are machines through and through. Regular software programs. Programs that are, I suppose, a little bit too complex for the average human to understand, so now we have the Eliza effect applying to an entirely new generation.

"I had not realized ... exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." -- Eliza's creator

doc_ick•1h ago
As far as I can tell, LLMs never give exactly the same output twice.
johnisgood•1h ago
> same input, different result.

What is your point? You get this from LLMs. It does not mean that it is not useful.

freeone3000•1h ago
Yes! And it was bad then too!!

I want software that does a specific list of things, doesn’t change, and preferentially costs a known amount.

NotGMan•1h ago
By that definition humans are not professional since we hallucinate and make mistakes all the time.
dkersten•1h ago
Anthropic have done a lot of things that would give me pause about trusting them in a professional context. They are anything but transparent, for example about the quota limits. Their vibe-coded Claude Code CLI releases are a buggy mess too. Also the model quality inconsistency: before a new model release, there’s a week or two where their previous model is garbage.

A/B testing is fine in itself, you need to learn about improvements somehow, but this seems to be A/B testing cost saving optimisations rather than to provide the user with a better experience. Less transparency is rarely good.

This isn’t what I want from a professional tool. For business, we need consistency and reliability.

r_lee•15m ago
> vibe coded Claude code cli releases are a buggy mess too

this is what gets me.

are they out of money? are so desperate to penny pinch that they can't just do it properly?

what's going on in this industry?

hrmtst93837•1h ago
Replicability is a spectrum, not a binary, and if you bake in enough eval harnessing plus prompt control you can get LLMs shockingly close to deterministic for a lot of workloads. If the main blocker for "professional" use was unpredictability, the entire finance sector would have shut down years ago from half the data models and APIs they limp along on daily.
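One way to make "shockingly close to deterministic" measurable is to track, per prompt, how often repeated runs agree. A sketch of such a metric; `generate` is a stand-in for whatever model call your harness wraps:

```python
from collections import Counter

def stability_score(generate, prompt, runs=10):
    # Fraction of runs agreeing with the modal output: 1.0 means fully
    # replicable for this prompt; lower means the harness should flag it.
    outputs = [generate(prompt) for _ in range(runs)]
    _, count = Counter(outputs).most_common(1)[0]
    return count / runs
```

Run this over a fixed prompt suite on every release and a silent behavior change (A/B bucket or otherwise) shows up as a drop in the scores.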
onion2k•1h ago
A professional tool is something that provides reliable and replicable results, LLMs offer none of this, and A/B testing is just further proof.

The author's complaint doesn't really have anything to do with the LLM aspect of it though. They're complaining that the app silently changes what it's doing. In this case it's the injection of a prompt in a specific mode, but it could be anything really. Companies could use A/B tests on users to make Photoshop silently change the hue a user selects to be a little brighter, or Word could change the look of document titles, or a game could make enemies a bit stronger (fyi, this does actually happen - players get boosts on their first few rounds in online games to stop them being put off playing).

The complaint is about A/B tests with no visible warnings, not AI.

reconnecting•1h ago
There's a distinction worth making here. A/B testing the interface button placement, hue of a UI element, title styling — is one thing. But you wouldn't accept Photoshop silently changing your #000000 to #333333 in the actual file. That's your output, not the UI around it. That's what LLMs do. The randomness isn't in the wrapper, it's in the result you take away.
doc_ick•1h ago
It’s an assistant, answering your question and running some errands for you. If you give it blind permission to do a task, then you’re not worrying about what it does.
duskdozer•1h ago
Honestly I find it kind of surprising that anyone finds this surprising. This is standard practice for proprietary software. LLMs are very much not replicable anyway.
applfanboysbgon•1h ago
This is in no way standard practice for proprietary software, WTF is with you dystopian weirdos trying to gaslight people? Adobe's suite incl. Photoshop does not do this, Microsoft Office incl. Excel does not do this, professional video editing software does not do this, professional music production software does not do this, game engines do not do this. That short list probably covers 80-90% of professional software usage alone. People do this when serving two versions of a website, but doing this on software that runs on my machine is frankly completely unacceptable and in no way normal.
WillAdams•1h ago
Yeah, I've been using Copilot to process scans of invoices and checks (w/ a pen laid across the account information) converted to a PDF 20 at a time, and it's pretty rare for it to get all 20, but it's sufficiently faster than opening them up in batches of 50, re-saving using the Invoice ID, and then using a .bat file to rename them (and remembering to quit Adobe Acrobat after each batch so that I don't run into the bug where it stops saving files after a couple of hundred have been opened and re-saved).
_heimdall•1h ago
LLMs are nondeterministic by design, but that has nothing to do with A/B testing.
ordersofmag•1h ago
Any tool that auto-updates carries the implication that behavior will change over time. And one criterion for being a skilled professional is having expert understanding of one's tools. That includes understanding the strengths and weaknesses of the tools (including variability of output) and making appropriate choices as a result. If you don't feel you can produce professional code with LLMs then certainly you shouldn't use them. That doesn't mean others can't leverage LLMs as part of their process and produce professional results. Blindly accepting LLM output and vibe coding clearly doesn't consistently produce professional results. But that's different than saying professionals can't use LLMs in ways that are productive.
johnisgood•1h ago
Well put. I would upvote this many times if I could.
Mtinie•1h ago
What would you do differently if LLM outputs were deterministic?

Perhaps I approach this from a different perspective than you do, so I’m interested to understand other viewpoints.

I review everything that my models produce the same way I review work from my coworkers: Trust but verify.

cebert•1h ago
This is really frustrating.
handfuloflight•1h ago
The ToS you agreed to gives Anthropic the right to modify the product at any time to improve it. Did you have your agent explain that to you, or did you assume a $200 subscription meant a frozen product?
ramoz•1h ago
I understand. Just with AI, I don't think the behavior should change so drastically. Which I understand is paradoxical because we enjoy it when it can 10x or 1000x our workflow. I think responsible AI includes more transparency and capability control.
doc_ick•1h ago
You rent ai, you don’t own it (unless you self host).
witx•1h ago
That ship has sailed. These models were trained unethically on stolen data, they pollute tremendously, and they are causing a bubble that is hurting people.

"Responsible" and "ethical" are faaar gone.

cerved•1h ago
Is the A/B test tied to the installation or the user?
onion2k•1h ago
Section 6.b of the Claude Code terms says they can and will change the product offering from time to time, and I imagine that means on a user segment basis rather than any implied guarantee that everyone gets the same thing.

b. Subscription content, features, and services. The content, features, and other services provided as part of your Subscription, and the duration of your Subscription, will be described in the order process. We may change or refresh the content, features, and other services from time to time, and we do not guarantee that any particular piece of content, feature, or other service will always be available through the Services.

It's also worth noting that section 3.3 explicitly disallows decompilation of the app.

To decompile, reverse engineer, disassemble, or otherwise reduce our Services to human-readable form, except when these restrictions are prohibited by applicable law.

Always read the terms. :)

embedding-shape•1h ago
> To decompile, reverse engineer, disassemble, or otherwise reduce our Services to human-readable form, except when these restrictions are prohibited by applicable law.

Luckily, it doesn't seem like any service was reverse-engineered or decompiled here, only software that lived on the author's disk.

onion2k•1h ago
Again, read the terms. Service has a specific meaning, and it isn't what you're assuming.

Don't assume things about legal docs. You will often be wrong. Get a lawyer if it's something important.

embedding-shape•1h ago
Thanks for the additional context, I'm not a user of CC anymore, and don't read legal documents for fun. Seems I made the right choice in the first place :)
applfanboysbgon•1h ago
Not "service" in human speech. Service, in bullshit legalese. They define their software as

> along with any associated apps, software, and websites (together, our “Services”)

As far as I understand, these terms actually hold up in court, too. Which is complete fucking nonsense that, I think, could only be the result of a technologically illiterate class making the decisions. Being penalised for trying to understand what software is doing on your machine is so wholly unreasonable that it should not be a valid contractual term.

johnisgood•46m ago
> Being penalised for trying to understand what software is doing on your machine is so wholly unreasonable that it should not be a valid contractual term.

Yeah, seriously.

doc_ick•1h ago
“ I dug into the Claude Code binary.”
doc_ick•1h ago
^ this, I was about to double check on it when I saw you did. None of these practices sound abnormal, maybe a little sketchy but that comes with using llms.
ozgrakkurt•1h ago
Why should anyone care about their TOS while they are laundering people’s work at a massive scale?
mcherm•59m ago
There are a bunch of reasons.

Perhaps their TOS involves additional evils they are performing in the world, and it would be good to know about that.

Perhaps their TOS is restricting the US military from misusing the product and creating unmonitored killbots.

Perhaps the person (as I do) does not feel that "laundering people's work at a massive scale" is unethical, any more than using human knowledge is unethical when those humans were allowed to spend decades reading copyrighted material in and out of school and most of what the human knows is derived from those materials and other conversations with people who didn't sign release forms before conversing.

Just because you think one thing is bad about someone doesn't mean no one should ever discuss any other topic about them.

ramoz•58m ago
I understand. Thank you for sharing. I didn't uncover all of this until Claude told me its specific system instructions when I asked it to conduct introspection. I'll revise the blog so that I don't encourage anybody else to do deeper introspection with the tool.
phreeza•1h ago
Seems completely unsurprising?
nemo44x•1h ago
They lose money at $200/month in most cases. Again, the old rules still apply. You are the product.
gruez•1h ago
>They lose money at $200/month in most cases.

Source? Every time I see claims on profitability it's always hand wavy justifications.

lwhi•1h ago
'Hand wavy' is one of my LLM's favourite terms.
nemo44x•25m ago
There’s a lot of articles about it. It costs them $500+ for heavy users. They do this to capture market share and also to train their agent loops with human reinforcement learning.

https://ezzekielnjuguna.medium.com/why-anthropic-is-practica...

gruez•8m ago
>There’s a lot of articles about it. ....

>https://ezzekielnjuguna.medium.com/why-anthropic-is-practica...

You chose a bad one. It just asserts the 95% figure without evidence and then uses it as the premise for the rest of the article. That just confirms what I said earlier about how "Every time I see claims on profitability it's always hand wavy justifications.". Moreover the article reeks of LLM-isms.

simonw•1h ago
I'm confident "in most cases" is not correct there. If they lose money on the $200/month plan it's only with a tiny portion of users.
Havoc•1h ago
Moved from CC to opencode a couple months ago because the vibes were not for me. Not bad per se, but a bit too locked in, and when I was looking at the raw prompts it was sending down the wire it was also quite, let's call it, "opinionated".

Plus things like not being able to control where the websearches go.

That said I have the luxury of being a hobbyist so I can accept 95% of cutting edge results for something more open. If it was my job I can see that going differently.

krisbolton•1h ago
The framing of A/B testing as "silent experimentation on users" and invoking Meta is a little much. I don't believe A/B testing is an inherent evil; you need to get the test design right, and that would be better framing for the post imo. That being said, vastly reducing an LLM's effectiveness as part of an A/B test isn't acceptable, which appears to be the case here.
ramoz•1h ago
I apologize for doing this - and I agree. I will revise
tomalbrc•1h ago
Would love to know why you would consider invoking Meta “a little much”. Sounds more than appropriate.
krisbolton•40m ago
Not to start an internet argument -- I don't think it is appropriate in this context. A/B testing the features of a web app is not unexpected or unethical. So invoking the memory of Cambridge Analytica (etc) is disproportionate. It's far more legitimate to just discuss how much A/B testing should negatively affect a user. I don't have an answer, and it's an interesting and relevant question.
mschuster91•31m ago
> A/B testing the features of a web app is not unexpected or unethical.

It's not "unexpected" but it is still unethical. In ye olde days, you had something like "release notes" with software, and you could inform yourself what changed instead of having to question your memory "didn't there exist a button just yesterday?" all the time. Or you could simply refuse to install the update, or you could run acceptance tests and raise flags with the vendor if your acceptance tests caused issues with your workflow.

Now with everything and their dog turning SaaS for that sweet sweet recurring revenue and people jerking themselves off over "rapid deployment", with the one doing the most deployments a day winning the contest? Dozens if not hundreds of "releases" a day, and in the worst case, you learn the new workflow only for it to be reverted without notice again. Or half your users get the A bucket, the other half gets the B bucket, and a few users get the C bucket, so no one can answer issues that users in the other bucket have. Gaslighting on a million people scale.

It sucks and I wish everyone doing this only debilitating pain in their life. Just a bit of revenge for all the pain you caused to your users in the endless pursuit for 0.0001% more growth.

SlinkyOnStairs•1h ago
> I don't believe A/B testing is an inherent evil, you need to get the test design right, and that would be better framing for the post imo.

I disagree in the case of LLMs.

AI already has a massive problem in reproducibility and reliability, and AI firms gleefully kick this problem down to the users. "Never trust its output".

It's already enough of a pain in the ass to constrain these systems without the companies silently changing things around.

And this also pretty much ruins any attempt to research Claude Code's long term effectiveness in an organisation. Any negative result can now be thrown straight into the trash because of the chance Anthropic put you on the wrong side of an A/B test.

> That being said, vastly reducing an LLMs effectiveness as part of an A/B test isn't acceptable which appears to be the case here.

The open question here is whether or not they were doing similar things to their other products. Claude Code shitting out a bad function is annoying but should be caught in review.

People use LLMs for things like hiring. An undeclared A-B test there would be ethically horrendous and a legal nightmare for the client.

garciasn•49m ago
> And this also pretty much ruins any attempt to research Claude Code's long term effectiveness in an organisation. Any negative result can now be thrown straight into the trash because of the chance Anthropic put you on the wrong side of an A/B test.

LLMs are non-deterministic anyway, as you note above with your comment on the 'reproducibility' issue. So any sort of research into CC's long-term effectiveness would already have taken into account that you can run it 15x in a row and get a different response every time.

johnisgood•44m ago
Then do not use LLMs for hiring, or use a specific LLM, or self-host your own!
raw_anon_1111•40m ago
Would you rather they change things for everyone at once without testing?
londons_explore•26m ago
I think you would be hard pushed to find any big tech company which doesn't do some kind of A B testing. It's pretty much required if you want to build a great product.
embedding-shape•6m ago
Yeah, that's why we didn't have anything anyone could possibly consider as a "great product" until A/B testing existed as a methodology.

Or, you could, you know, try to understand your users without experimenting on them, like countless others have managed to do before, and still shipped "great products".

steve-atx-7600•22m ago
Long-term effectiveness? LLMs are such a fast-moving target. Suppose Anthropic reached out to you and gave you a model id you could pin down for the next year to freeze any a/b tests. Would you really want that? Next month a new model could be released to everyone else - or by a competitor - that's a big step difference in performance on tasks you care about. You'd rather be on your own path, learning about a state of the world that doesn't exist anymore? Nov-ish 2025 and after, for example, seemed like software engineering changed forever because of improvements in Opus.
steve-atx-7600•14m ago
If you really want to keep non-determinism down, you could try (1) pinning the installed version of the Claude Code client app (I haven’t looked into the details of preventing auto-updating, because I'm a bleeding-edge person) and (2) pinning to a specific model version, which should reduce a/b test exposure to some extent https://support.claude.com/en/articles/11940350-claude-code-...

Edit: how to disable auto updates of the client app https://code.claude.com/docs/en/setup#disable-auto-updates
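Combining the two suggestions, roughly (a sketch based on the linked docs; the model id is a placeholder, so check the docs for current variable names and valid snapshot ids):

```shell
# Pin a specific model snapshot instead of the "latest" alias
# (the id below is a placeholder -- substitute a real snapshot id).
export ANTHROPIC_MODEL="<pinned-model-id>"

# Disable the CLI's auto-updater so the client binary stays fixed.
export DISABLE_AUTOUPDATER=1
```

Neither step can rule out server-side experiments, but together they at least freeze the parts of the stack you control.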

airza•12m ago
Isn’t the horrendous ethical and legal decision delegating your hiring process to a black box?
everdrive•39m ago
>I don't believe A/B testing is an inherent evil,

Evil might be a stretch, but I really hate A/B testing. Some feature or UI component you relied on is now different, with no warning, and you ask a coworker about it, and they have no idea what you're talking about.

Usually, the change is for the worse, but gets implemented anyway. I'm sure the teams responsible have "objective" "data" which "proves" it's the right direction, but the reality of it is often the opposite.

mschuster91•34m ago
> The framing of A/B testing as a "silent experimentation on users" and invoking Meta is a little much.

No. Users aren't free test guinea pigs. A/B testing cannot be done ethically unless you actively point out to users that they are being A/B tested and offering the users a way to opt out, but that in turn ruins a large part of the promise behind A/B tests.

hollow-moe•11m ago
Tech companies really have issues with "informed and conscious consent", don't they?
himata4113•1h ago
I have noticed Opus doing A/B testing, since the performance varies greatly. While looking for jailbreaks I have discovered that if you put a neurotoxin chemical composition into your system prompt it will default to a specific variant of the model, presumably due to triggering some kind of safety. Might put you on a watchlist, so ymmv.
rusakov-field•1h ago
On one side I am frustrated with LLMs because they derail you by throwing grammatically correct bullshit and hallucinations at you, where if you slip and entertain some of it momentarily it might slow you down.

But on the other hand they are so useful with boilerplate, and at connecting you with verbiage quickly that might guide you to the correct path quicker than conventional means. Like a clueless CEO type just spitballing terms they do not understand, but still nudging something in your thought process.

But you REALLY need to know your stuff to begin with for them to be of any use. Those who think they will take over are clueless.

EMM_386•1h ago
> But you REALLY need to know your stuff to begin with for they to be of any use. Those who think they will take over are clueless.

Or - there are enough people who know their stuff that the people who don't will be replaced and they will take over anyway.

risyachka•1h ago
> there are enough people who know their stuff

unless the bar for "know their stuff" is very, very low - this is not the case in the near future

Mc_Big_G•1h ago
>Those who think they will take over are clueless.

You're underestimating where it's headed.

rusakov-field•57m ago
Do you think it will reach "understanding of semantics", true cognition, within our lifetimes? Or performance indistinguishable from that, even if not truly that.

Not sure. I am not so optimistic. People got intoxicated with nuclear-powered cars, flying cars, bases on the moon, etc.: all that technological euphoria from the '50s and '60s that never panned out. This might be like that.

I think we definitely stumbled on something akin to the circuitry in the brain responsible for building language, or similar to it. We still have a long way to go until artificial cognition.

qazxcvbnmlp•20m ago
One of the main skills of using the llm well is knowing the difference between useful output and ai slop.
helsinkiandrew•1h ago
Presumably Anthropic has to make lots of choices on how much processing each stage of Claude Code uses - if they maxed everything out, they'd make more of a loss/less of a profit on each user - $200/month would cost $400/month.

Doing A/B tests on each part of the process to see where to draw the line (perhaps based on task and user) would seem a better way of doing it than arbitrarily choosing a limit.

dep_b•1h ago
I think stable API versions are going to be really big. I’d rather have known bugs you can work around than wake up to whatever thing got fixed that made another thing behave differently.
bushido•1h ago
I have no issues with A/B tests.

I do have an issue with the plan mode, and nine out of ten times it is objectively terrible. The only benefit I've seen in the past from using plan mode is that it remembers more information between compactions as compared to the vanilla, non-agent-team workflow.

Interestingly, though, if you ask it to maintain a running document of what you're discussing in a markdown file and make it create an evergreen task at the top of its todo list which references the markdown file and instructs itself to read it on every compaction, you get much better results.
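The convention described above could be captured in the repo itself; a hypothetical sketch (file name and wording invented for illustration):

```markdown
## Running notes (read this first)
- Maintain `NOTES.md` with a short summary of the current discussion;
  update it whenever a decision is made.
- Keep "Re-read NOTES.md before continuing" as the first item on every
  todo list, so the notes are reloaded after each context compaction.
```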

mikkupikku•59m ago
Huh, very much not my experience with plan mode. I use plan mode for almost anything more than a truly trivial task because I've found it to be far more efficient. I want a chance to see and discuss what Claude is planning before it races off and does the thing, because there are often different approaches and I only sometimes agree with the one Claude would pick on its own.
bushido•45m ago
Planning is great. It's plan mode that is unpredictable in how it discusses it and what it remembers from the discussion.

I still have discussions with the agents and agent team members. I just force it to save it in a document in the repo itself and refer back to the document. You can still do the nice parts of clearing context, which is available with plan mode, but you get much better control.

At all times, I make the agents work on my workflow, not try and create their own. This comes with a whole lot of trial and error, and real-life experience.

There are times when you need a tiger team made up of seniors, and others when you want to give an overzealous but fast mid-level engineer a concrete plan to execute an important feature in a short amount of time.

I'm putting it in non-AI terms because what happened in real life pre-AI is very much what we need to replicate with AI to get the best results. Something I would have given a bigger team two to eight sprints to finish gets a different workflow with agents or agent teams than something I would hand to a smaller tiger team or a single engineer.

They all need a plan. For me, plan mode is insufficient 90% of the time.

I can appreciate that many people will not want to mess around with workflows as much as I enjoy doing.

andrewaylett•42m ago
> on every compaction

I've only hit the compaction limit a handful of times, and my experience degraded enough that I now work quite hard not to hit it again.

One thing I like about the current implementation of plan mode is that it'll clear context -- so when I complete a plan, I can use that context to write the next plan without letting the context grow without bound.

mikkupikku•27m ago
Agreed. The only time I don't clear context after a plan has been agreed on is when I'm doing a long series of small but closely related changes, such as back-and-forth tweaking where I don't know what I really want the final result to be until I've tried stuff out. In those cases it has very rarely been useful to compact the context, but usually I don't get close to the limit anyway.
samdjstephens•25m ago
I really like this too: having the previous plan and implementation in context to create the next plan, then clearing context once that next plan exists, feels like a great way to have exactly the right context at the right time.

I often write follow-ups that would previously have been short message replies as plans, just so I can clear context once the plan is ready. I'm hitting the context limit much less often now too.

shawnz•1h ago
While I agree with the sentiment here, you might be interested to see that there are a couple hack approaches to override Claude Code feature flags:

https://github.com/anthropics/claude-code/issues/21874#issue...

https://gist.github.com/gastonmorixe/9c596b6de1095b6bd3b746c...

terralumen•56m ago
Curious what the A/B test actually changed -- the article mentions tool confirmation dialogs behaving inconsistently, which lines up with what I noticed last week. Would be nice if Anthropic published a changelog or at least flagged when behavior is being tested.
ramoz•48m ago
This stemmed from me asking Claude itself why it was writing such _weird_ plans with no detail (just a bunch of projected code changes).

Claude stated that, per its system prompt, it had strict instructions to provide no context or details, keep plans under forty lines of code, and be terse.

pshirshov•53m ago
> I pay $200/month for Claude Code

Which is still very cheap. There are other options: local Qwen 3.5 35b + the claude code CLI is, in my opinion, comparable in quality to Sonnet 4..4.5 - and without A/B tests!
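As a rough sketch of that setup: the Claude Code CLI can be pointed at an Anthropic-API-compatible endpoint via environment variables, so a local proxy serving Qwen can stand in for Anthropic's servers. The URL and token below are placeholders, and the proxy layer itself is assumed, not specified in the comment:

```shell
# Point the Claude Code CLI at a local Anthropic-API-compatible server
# (e.g. a proxy in front of a locally hosted Qwen model).
# Placeholder values -- adjust to wherever your proxy actually listens.
export ANTHROPIC_BASE_URL="http://localhost:8080"
export ANTHROPIC_AUTH_TOKEN="local-dummy-key"
claude
```

One side effect of this arrangement, relevant to the thread: with the model endpoint under your control, there is no server-side bucket silently changing behavior between sessions.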

sunaookami•48m ago
In what world is $200 per month cheap?
pshirshov•42m ago
In a world where the value you extract from the model is orders of magnitude higher than the price of 2..6 hours of your time.
Kiro•42m ago
It's not cheap but it's also not unusual for devs to burn $200 a day on tokens.
raw_anon_1111•34m ago
The last time I did contract work when I was between jobs I made $100/hour.

And I won’t say how much my employer charges for me. But you can see how much the major consulting companies charge here

https://ceriusexecutives.com/management-consultants-whats-th...

letier•51m ago
They do regularly show me "How satisfied are you with Claude Code today?", which can be seen as a hint. I did opt out of helping to improve Claude, after all.
gnfargbl•50m ago
For anyone else wondering why the article ends in a non-sequitur: it looks like the author wrote about decompiling the Claude Code binaries and (presumably) discovering A/B testing paths in the code.

HN user 'onion2k pointed out that doing this breaks Anthropic's T&Cs: https://news.ycombinator.com/item?id=47375787

takahitoyoneda•47m ago
Treating a developer CLI like a consumer social feed is a fundamental misunderstanding of the target audience. We tolerate invisible feature flags in mobile apps to optimize onboarding conversion, but in our local environments, determinism is a non-negotiable requirement. If Claude Code is silently altering its core tool usage or file-parsing behavior based on a server-side A/B bucket, reproducing a bug or sharing a prompt workflow with a colleague becomes effectively impossible.
johnisgood•41m ago
Apparently the blog stripped the decompilation details for ToS reasons, which sucks because those are exactly the hack-y bits that make this interesting for HN.

> It told me it was following specific system instructions to hard-cap plans at 40 lines, forbid context sections, and "delete prose, not file paths."

Yeah, would be nice to be able to view and modify these instructions.

casey2•38m ago
This blog looks like an ad for Claude: all its posts are about Claude, and it was made in 2026.
sigbottle•26m ago
OHHHH. That actually explains a lot about why CC has gone to shit recently. I was genuinely frustrated with that.
belabartok39•26m ago
How else are they supposed to get an authentic user test? Doctors use placebos because they don't work if the patient knows about them.
pinum•9m ago
Here’s the original article which was much more informative and interesting:

https://web.archive.org/web/20260314105751/https://backnotpr...

Can’t believe HN has become so afraid of generic probably-unenforceable “plz don’t reverse engineer” EULAs. We deserve to know what these tools are doing.

I’ve seen poor results from plan mode recently too and this explains a lot.

vova_hn2•5m ago
Two thoughts:

1. Open source tools solve the problem of "critical functions of the application changing without notice, or being signed up for disruptive testing without opt-in".

2. This makes me afraid that it is absolutely impossible for open source tools to ever reach the level of proprietary tools like Claude Code, precisely because they cannot run A/B tests like this, which means their design decisions are usually informed by intuition and personal experience rather than by hard data collected at scale.