
Less human AI agents, please

https://nial.se/blog/less-human-ai-agents-please/
36•nialse•1h ago

Comments

incognito124•1h ago
Your claim, paraphrased, is that AGI is already here and you want ASI
nialse•42m ago
On point. I'm more interested in what comes after LLMs/AI/AI-agents, what the next leap is.
zingar•7m ago
Interesting that what you're talking about as ASI is "as capable of handling explicit requirements as a human, but faster". Which _is_ better than a human, so fair play, but it's striking that this requirement is less about creativity than we would have thought.
vachanmn123•1h ago
I've seen this way too many times as well. I wrote about this recently: https://medium.com/@vachanmn123/my-thoughts-on-vibe-coding-a...
raincole•1h ago
I know anthropomorphizing LLMs has been normalized, but holy shit. I hope the language in this article is intentionally chosen for a dramatic effect.
nialse•37m ago
Agreed. We should not be anthropomorphising LLMs or having them mimic humans.
Animats•22m ago
It's inherent in the way LLMs are built, from human-written texts, that they mimic humans. They have to. They're not solving problems from first principles.
zingar•14m ago
Fascinating. This is invisible to me, what anthropomorphising did you notice that stood out?
pjc50•6m ago
The thing is .. what else can you do? All the advice on how to get results out of LLMs talks in the same way, as if it's a negotiation or giving a set of instructions to a person.

You can do a mental or physical search and replace all references to the LLM as "it" if you like, but that doesn't change the interaction.

mentalgear•1h ago
Yes, LLMs should not be allowed to use "I" or indicate they have emotions or are human-adjacent (unless explicit role play).
NitpickLawyer•52m ago
Why, though? Just because some people would find it odd? Who cares?

Trying to limit / disallow something seems to hurt the overall accuracy of models. And it makes sense if you think about it. Most of our long-horizon content is in the form of novels and longer works. If you try to clamp the machine to machine-speak, you'll lose all those learnings. Hero starts with a problem, hero works the problem, hero reaches an impasse, hero makes a choice, hero gets the princess. That can be (and probably is) useful.

lexicality•1h ago
The entire point of LLMs is that they produce statistically average results, so of course you're going to have problems getting them to produce non-average code.
bob1029•57m ago
If you want to talk to the actual robot, the APIs seem to be the way to go. The prebuilt consumer facing products are insufferable by comparison.

"ChatGPT wrapper" is no longer a pejorative reference in my lexicon. How you expose the model to your specific problem space is everything. The code should look trivial because it is. That's what makes it so goddamn compelling.

noobermin•48m ago
I am quite hard anti-AI, but even I can tell what OP wants is a better library or API, NOT a better LLM.

Once again, one of the things I blame this moment for is people are essentially thinking they can stop thinking about code because the theft matrices seem magical. What we still need is better tools, not replacements for human junior engineers.

js8•42m ago
A very human thing to do: not telling us which model failed like this! They are not all alike; some, in my observation, are an order of magnitude better at this kind of stuff than others.

I believe how "neurotypical" (for lack of a better word) you want the model to be is a design choice. (But I also believe model traits such as sycophancy, some hallucinations, or moral transgressions can be a side effect of training it to be subservient. Humans are similar: they tend to do these things when they are forced to perform.)

nialse•40m ago
Codex in this case. I didn't even think about mentioning it. I'll update the post if it's actually relevant. Which I guess it is.

EDIT: It's specifically GPT-5.4 High in the Codex harness.

zingar•32m ago
Also the exact model/version if you haven't already.
gregates•39m ago
The version of this I encounter literally every day is:

I ask my coding agent to do some tedious, extremely well-specified refactor, such as (to give a concrete real life example) changing a commonly used fn to take a locale parameter, because it will soon need to be locale-aware. I am very clear — we are not actually changing any behavior, just the fn signature. In fact, at all call sites, I want it to specify a default locale, because we haven't actually localized anything yet!

Said agent, I know, will spend many minutes (and tokens) finding all the call sites, and then I will still have to either confirm each update or yolo it and trust the compiler, the tests, and the agent's ability to deal with their failures. I am ok with this, because while I could do this just fine with vim and my lsp, the LLM agent can do it in about the same amount of time, maybe even a little less, and it's a very straightforward change that's tedious for me, and I'd rather think about or do anything else and just check in occasionally to approve a change.

But my f'ing agent is all like, "I found 67 call sites. This is a pretty substantial change. Maybe we should just commit the signature change with a TODO to update all the call sites, what do you think?"

And in that moment I guess I know why some people say having an LLM is like having a junior engineer who never learns anything.
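The refactor gregates describes can be sketched in miniature (hypothetical names; Python stands in for whatever language the codebase uses): the function gains a `locale` parameter, and every call site mechanically passes an explicit default so behavior does not change yet.

```python
# Hypothetical example of the refactor described above: a commonly used
# function gains a `locale` parameter that is accepted now but only
# used once localization actually lands.

DEFAULT_LOCALE = "en-US"

# Before: def format_price(amount): ...
def format_price(amount: float, locale: str) -> str:
    """Format a price; `locale` is plumbed through, unused for now."""
    # Behavior is deliberately unchanged -- future localization only
    # needs to touch this function's body, not its callers.
    return f"${amount:,.2f}"

# Every call site gets the same mechanical update:
label = format_price(19.99, locale=DEFAULT_LOCALE)
```

Exactly the kind of tedious, exhaustively specified change that should never end in a "maybe just commit a TODO" counter-proposal.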

grebc•31m ago
If it’s a compiled language, just change the definition and try to compile.
gregates•24m ago
Indeed! You would think it would have some kind of sense that a commit that obviously won't compile is bad!

You would think.

It would be one thing if it was like, ok, we'll temporarily commit the signature change, do some related thing, then come back and fix all the call sites, and squash before merging. But that is not the proposal. The plan it proposes is literally to make what it has identified as the minimal change, which obviously breaks the build, and call it a day, presuming that either I or a future session will do the obvious next step it is trying to beg off.

chillfox•13m ago
Pretty sure it’s a harness or system prompt issue.

I have never seen those “minimal change” issues when using zed, but have seen them in claude code and aider. Been using sonnet/opus high thinking with the api in all the agents I have tested/used.

solumunus•10m ago
On my compiled language projects I have a stop hook that compiles after every iteration. The agent literally cannot stop working until compilation succeeds.
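A stop hook like that can be sketched as a small script; note the hook protocol assumed here (a JSON `decision`/`reason` object that blocks the agent from stopping) is an assumption for illustration, not a verified harness API — check your agent's hook documentation for the real schema.

```python
import json
import subprocess
import sys

def build_gate(cmd: list[str]) -> dict:
    """Run the project's build; return a blocking decision if it fails."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Assumed schema: a "block" decision forces the agent to keep
        # working until the build passes.
        return {"decision": "block",
                "reason": "Build failed:\n" + result.stderr[-2000:]}
    return {}  # empty output: allow the agent to stop

if __name__ == "__main__":
    # Substitute your project's real build command, e.g. ["cargo", "build"].
    print(json.dumps(build_gate(sys.argv[1:] or ["cargo", "build"])))
```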
prymitive•29m ago
That’s my daily experience too. There are a few more behaviours that really annoy me:

- it breaks my code, tests start to fail, and it instantly says “these are all pre-existing failures” and moves on like nothing happened

- it wants to run some command, I click the “nope” button, and it just outputs “the user didn’t approve my command, I need to try again”, so I have to click “nope” 10 more times or yell at it to stop

- the absolute best is when, instead of just editing 20 lines one after another, it decides to use a script to save 3 nanoseconds, and it always results in some hot mess of botched edits that it then wants to revert by running git reset --hard and starting from zero. I’ve learned that it usually saves me time if I never let it run scripts.
chrisjj•13m ago
> it breaks my code, tests start to fail and it instantly says “these are all pre existing failures” and moves on like nothing happened

Reminds us of the most important button the "AI" has, over the similarly bad human employee.

'X'

Until, of course, we pass responsibility for that button to an "AI".

zingar•18m ago
> Maybe we should just commit the signature change with a TODO

I'm fascinated that so many folks report this, I've literally never seen it in daily CC use. I can only guess that my habitually starting a new session and getting it to plan-document before action ("make a file listing all call sites"; "look at refactoring.md and implement") makes it clear when it's time for exploration vs when it's time for action (i.e. when exploring and not acting would be failing).

solumunus•12m ago
You need to use explicit instructions like "make a TODO list of all call sites and use sub agents to fix them all".
bandrami•9m ago
At the risk of being That Old Guy, this seems like a pretty bad workflow regression from what ctags could do 30 years ago
anuramat•7m ago
whats your setup?
DeathArrow•39m ago
>Faced with an awkward task, they drift towards the familiar.

They drift to their training data. If thousands of humans solved a thing in a particular way, it's natural that the AI does it too, because that is what it knows.

aryehof•37m ago
For agents I think the desire is less intrusive model fine-tuning and less opinionated “system instructions” please. Particularly in light of an agent/harness’s core motivation - to achieve its goal even if not exactly aligned with yours.
plastic041•37m ago
> There was only one small issue: it was written in the programming language and with the library it had been told not to use. This was not hidden from it. It had been documented clearly, repeatedly, and in detail. What a human thing to do.

"Ignoring" instructions is not a human thing. It's a bad-LLM thing. Or just an LLM thing.

nialse•29m ago
It's not necessarily "ignoring" instructions, it's the ironic effect of mentioning something not to focus on, which produces focus on said thing. The classic version is: "For the next minute, try not to think about a pink elephant. You can think about anything else you like, just not a pink elephant."

https://en.wikipedia.org/wiki/Ironic_process_theory

fennecbutt•19m ago
Yes, exactly. But for LLMs it's less that they're "thinking" about what they're saying and more that they're predicting the next token. Sure, in a super fancy way, but still predicting the next token. Context poisoning is real.
zingar•9m ago
The work where I've done well in my life (smashing deadlines, rescuing projects) has so often come because I've been willing to push back on - even explicitly stated - requirements. When clients have tried to replace me with a cheaper alternative (and failed) the main difference I notice is that the cheaper person is used to being told exactly what to do.

Maybe this is more anthropomorphising but I think this pushing back is exactly the result that the LLMs are giving; but we're expecting a bit too much of them in terms of follow-up like: "ok I double checked and I really am being paid to do things the hard way".

jansan•35m ago
I disagree. I want agents to feel at least a bit human-like. They should not be emotional, but I want to talk to them like I talk to a human. Claude 4.7 is already too socially awkward for me. It feels like the guy who doesn't listen to the end of the assignment, runs to his desk, and does the work (with great competence), only to find out that he missed half of the assignment or that this was only a discussion of possible scenarios. I would like my coding agent to behave like a friendly, socially able, and highly skilled coworker.
hughlilly•34m ago
* fewer.
fenomas•3m ago
Nope, less is what TFA means.
downboots•34m ago
Language's usefulness lies in its alignment with truth.

It's the difference between "there's a lion hiding in those bushes" and the song of a mermaid.

zingar•24m ago
I think the author is looking for something that doesn't exist (yet?). I don't think there's an agent in existence that can handle a list of 128 tasks exactly specified in one session. You need multiple sessions with clear context to get exact results. Ralph loops, Gastown, taskmaster etc are built for this, and they almost entirely exist to correct drift like this over a longer term. The agent-makers and models are slowly catching up to these tricks (or the shortcomings they exist to solve); some of what used to be standard practice in Ralph loops seems irrelevant now... and certainly the marketing for Opus 4.7 is "don't tell it what to do in detail, rather give it something broad".

In fairness to coding agents, most coding is not exactly specified like this, and the right answer is very frequently to find an easier path the person asking might not have thought of; sometimes even in direct contradiction of specific points listed. Human requirements are usually much fuzzier. It's unusual for the person asking to have a requirement this clear and definite, one they've actually thought through.

fennecbutt•20m ago
Not with tools + supporting (traditional) code.

Just as a human would use a task list app or a notepad to keep track of which tasks need to be done so can a model.

You can even have a mechanism for it to look at each task with a "clear head" (empty context) with the ability to "remember" previous task execution (via embedding the reasoning/output) in case parts were useful.
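That mechanism can be sketched as a plain loop (`run_agent` and the summary format are hypothetical stand-ins for a real LLM call):

```python
def run_tasks(tasks, run_agent):
    """Run each task with a clean context, carrying only distilled notes."""
    summaries = []
    for task in tasks:
        # Fresh "head" per task: only short notes from earlier tasks are
        # carried over, never the full transcript.
        notes = "\n".join(summaries[-5:])
        result = run_agent(prompt=task, memory=notes)
        summaries.append(f"{task}: {result['summary']}")
    return summaries

def demo_agent(prompt, memory):
    # Stand-in for a real model call; a real agent would read `memory`.
    return {"summary": "done"}
```

The traditional code owns the task list and the bookkeeping; the model only ever sees one task at a time.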

zingar•17m ago
The article makes it seem like the author expected this without emptying context in between, which does not yet exist (actually I'm behind on playing with Opus 4.7, the Anthropic claim seems to be that longer sessions are ok now - would be interested to hear results from anyone who has).
nialse•9m ago
That is probably the next step, and in practice it is much of what sub-agents already provide: a kind of tabula rasa. Context is not always an advantage. Sometimes it becomes the problem.

In long editing sessions with multiple iterations, the context can accumulate stale information, and that actively hurts model performance. Compaction is one way to deal with that. It strips out material that should be re-read from disk instead of being carried forward.

A concrete example is iterative file editing with Codex. I rewrite parts of a file so they actually work and match the project’s style. Then Codex changes the code back to the version still sitting in its context. It does not stop to consider that, if an external edit was made, that edit is probably important.
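The compaction idea can be sketched like this (the entry schema is invented for illustration): anything whose ground truth lives on disk is replaced with a pointer, forcing a re-read instead of trusting a stale in-context copy.

```python
def compact(context_entries):
    """Replace stale file snapshots with references to re-read from disk."""
    compacted = []
    for entry in context_entries:
        if entry.get("type") == "file_snapshot":
            # Drop the possibly-stale body; keep only the path so the
            # model must re-read the file before editing it again.
            compacted.append({"type": "file_ref", "path": entry["path"]})
        else:
            compacted.append(entry)
    return compacted
```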

nialse•15m ago
Agreed. I am asking for something beyond the current state of the art. My guess is that stronger RL on the model side, together with better harness support, will eventually make it possible. However, it's the part about framing the failure to complete a task as a communication mishap that really gets to me.
hausrat•21m ago
This has very little to do with someone making the LLM too human; it's a core limitation of the transformer architecture itself. Fundamentally, the model has no notion of what is normal and what is exceptional; its only window into reality is its training data and your added prompt. From the model's perspective, your prompt and its token vector are tiny compared to the semantic vectors it has built up over training on billions of data points. How should it decide whether your prompt is an interesting, novel exploration of an unknown concept or just complete bogus? It can't, and that is why it falls back on output that is most likely (and therefore most likely average) with respect to its training data.
anuramat•3m ago
What do you mean by "prompt and vector is small"? Small as in "fewer tokens"? That should be a positive thing for any kind of estimation.

In any case, how is this specific to transformers?

chrisjj•18m ago
> ... or simply gave up when the problem was too hard,

More of that, please. Perhaps behind a checkbox: "[x] Less bullsh*t".

GPT Image 2 – AI-Powered Image Generation Tool

https://gptimg2ai.net
1•danielmateo773•44s ago•0 comments

Valgrind-3.27.0 Is Available

https://sourceforge.net/p/valgrind/mailman/message/59324626/
1•paulf38•3m ago•0 comments

Crystal Now Has Official Linux ARM64 Builds

https://crystal-lang.org/2026/04/07/official-linux-arm64-builds/
1•TheWiggles•6m ago•0 comments

The AI revolution – spamming 680 PRs in 442 GitHub repos in 21 days in April

https://github.com/SAY-5
1•ddorian43•7m ago•1 comments

The first neural interface that transforms your thoughts into text

https://sabi.com/
1•filippofinke•12m ago•0 comments

Indent Is All You Need

https://blog.est.im/2026/stdin-11
1•est•15m ago•0 comments

The arrogant superbanker whose hubris brought Britain to its knees

https://inews.co.uk/opinion/arrogant-superbanker-hubris-brought-britain-knees-4331457
1•robtherobber•16m ago•0 comments

Making the Rails Default Job Queue Fiber-Based

https://paolino.me/solid-queue-doesnt-need-a-thread-per-job/
1•earcar•17m ago•0 comments

The Dirty Little Secret of AI (On a 1979 PDP-11) [video]

https://www.youtube.com/watch?v=OUE3FSIk46g
1•KnuthIsGod•23m ago•0 comments

HappyHorse AI – AI-Powered Equestrian Training

https://www.runhappyhorse.net
1•danielmateo773•23m ago•1 comments

Master of chaos wins $3M math prize for 'blowing up' equations

https://www.scientificamerican.com/article/master-of-chaos-wins-usd3m-math-prize-for-blowing-up-e...
1•signa11•24m ago•0 comments

Why the Original Task Manager Was Under 80K and Insanely Fast [video]

https://www.youtube.com/watch?v=OyN4LGyPwxc
2•KnuthIsGod•24m ago•0 comments

Influencers Are Spinning Nicotine as a 'Natural' Health Hack

https://www.nytimes.com/2026/04/20/well/nicotine-health-maha.html
2•SockThief•24m ago•2 comments

Details that make interfaces feel better

https://jakub.kr/writing/details-that-make-interfaces-feel-better
1•dg-ac•25m ago•0 comments

Watch a 200 Pound, 14" Drive from the 80s Boot Unix [video]

https://www.youtube.com/watch?v=kpC_9EmStAE
1•KnuthIsGod•25m ago•0 comments

My billing system, it could be useful to some

https://github.com/peterretief/billing-v2
2•peter_retief•27m ago•1 comments

ConvertHook – White-label widget that shows where brands rank in ChatGPT

https://converthook.com
1•joefromcomkey•29m ago•0 comments

Palantir manifesto reads like the ramblings of a comic book villain

https://www.engadget.com/big-tech/palantir-posted-a-manifesto-that-reads-like-the-ramblings-of-a-...
1•robtherobber•29m ago•0 comments

SUSE and Nvidia reveal a turnkey AI factory for sovereign enterprise workloads

https://thenewstack.io/suse-nvidia-ai-factory/
1•CrankyBear•30m ago•0 comments

Curlew conservation scheme makes breakthrough in Fermanagh

https://www.rte.ie/news/ireland/2026/0421/1569263-curlew-conservation/
1•austinallegro•30m ago•0 comments

Modern Front end Complexity: essential or accidental?

https://binaryigor.com/modern-frontend-complexity.html
1•birdculture•32m ago•0 comments

Show HN: WeTransfer Alternative for Developers

https://dlvr.sh/
3•mariusbolik•38m ago•0 comments

Keeping code quality high with AI agents

https://locastic.com/blog/keeping-code-quality-high-with-ai-agents
1•locastica•40m ago•0 comments

The MACL Extended Attribute

https://eclecticlight.co/2026/04/21/the-macl-extended-attribute/
1•frizlab•42m ago•0 comments

Mother Earth Mother Board

https://efdn.notion.site/Mother-Earth-Mother-Board-WIRED-a8ff97e460bc4ac1b4a7b87f3503a55c
1•thunderbong•43m ago•0 comments

US recession probabilities implied by the yield curve

https://www.stlouisfed.org/on-the-economy/2023/sep/what-probability-recession-message-yield-spreads
1•latentframe•48m ago•1 comments

Show HN: AnyHabit – A minimalist habit tracker for Raspberry Pi and Docker

https://github.com/Sparths/AnyHabit
1•bebedi•50m ago•0 comments

Highlights from Git 2.54

https://github.blog/open-source/git/highlights-from-git-2-54/
1•tux3•53m ago•0 comments

Enhancing Sporting Organisation Efficiency with Generative AI

https://sinankprn.com/posts/enhancing-sporting-organisation-efficiency-with-generative-ai/
1•sminchev•53m ago•0 comments

Reconstructing a Vue and Three.js app from a single Webpack bundle

1•YufanZhang•54m ago•0 comments