
Arguing with Agents

https://blowmage.com/2026/04/14/arguing-with-agents/
54•asaaki•2h ago

Comments

roxolotl•1h ago
This is very well written and told. It’s worth reading all the way through.

> If you try to refute it, you’ll just get another confabulation.

> Not because the model is lying to you on purpose, and not because it’s “resistant” or “defensive” in the way a human might be. It’s because the explanation isn’t connected to anything that could be refuted. There is no underlying mental state that generated “I sensed pressure.” There is a token stream that was produced under a reward function that prefers human-sounding, emotionally framed explanations. If you push back, the token stream that gets produced next will be another human-sounding, emotionally framed explanation, shaped by whatever cues your pushback provided.

“It’s because the explanation isn’t connected to anything that could be refuted.” This is one of the key understandings that comes from working with these systems. They are remarkably powerful, but there’s no there there. Knowing this, I’ve found, enables more effective use because, as the article describes, you move from arguing with “a person” to shaping an output.

jaggederest•1h ago
Reminds me of https://news.ycombinator.com/item?id=15886728

Do not argue with the LLM, for it is subtle and quick to anger, and finds you crunchy with ketchup.

These are, broadly, all context management issues - when you see it start to go off track, it's because it has too much, too little, or the wrong context, and you have to fix that, usually by resetting it and priming it correctly the next time. This is why it's advantageous not to "chat" with the robots - treat them as an English-to-code compiler, not a coworker.

Chat to produce a spec, save the spec, clear the context, feed only the spec in as context, if there are issues, adjust the spec, rinse and repeat. Steering the process mid-flight is a) not repeatable and b) exacerbates the issue with lots of back and forth and "you're absolutely correct" that dilutes the instructions you wanted to give.
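That spec-first loop can be sketched in a few lines of Python. Everything here is a stand-in (no real model is called); it only illustrates the control flow the comment describes, with a hypothetical `run_agent` in place of whatever API you actually use:

```python
def spec_loop(read_spec, run_agent, acceptable, revise_spec, max_rounds=3):
    """Spec-first workflow: each round feeds ONLY the current spec as
    context (no accumulated chat history), reviews the result, and on
    failure revises the spec rather than steering mid-flight."""
    result = None
    for _ in range(max_rounds):
        spec = read_spec()        # fresh context: the spec alone
        result = run_agent(spec)  # stateless call, nothing carried over
        if acceptable(result):
            return result
        revise_spec(result)       # adjust the SPEC, not the conversation
    return result

# Demo with fake stand-ins (a real run_agent would call your LLM API
# with the spec as the entire prompt):
specs = ["write a pager"]
out = spec_loop(
    read_spec=lambda: specs[-1],
    run_agent=lambda spec: f"code for: {spec}",
    acceptable=lambda r: "run the tests" in r,
    revise_spec=lambda r: specs.append(specs[-1] + "; ALWAYS run the tests"),
)
print(out)
```

The point of this structure is repeatability: the spec file, not the chat transcript, is the single source of truth, so a failed run costs you an edit to the spec instead of a polluted context.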

en-tro-py•1h ago
Exactly, never argue with an LLM unless the debate is the point...

It's just speedrunning context rot.

girvo•1h ago
Very well written? It’s a bunch of AI generated stuff around an interesting point. It repeats its points over and over again, meanders.

It’s an interesting thesis, but it’s not well written or well told.

sleazebreeze•58m ago
This was my reading too. Interesting idea, but it took 10 pages of fluff to get to it, and I didn't even believe the final idea when we got there. I started off reading the first part and thought he would get to the part where he realized he was managing context wrong. He never got there; instead he decided it was about the shape of the prompt.
JSR_FDED•1h ago
Great article, and one of the best insights I’ve seen into autistic<->neurotypical communication styles.

Couldn’t you have a “communications” LLM massage your prompts to the “main” LLM so that it removes the cues that cause the main LLM to mistakenly infer your state of mind?
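That two-stage idea can be sketched as a pipeline. Both model calls are left as hypothetical callables (no real API or provider is assumed), and the rewrite instruction wording is purely illustrative:

```python
STRIP_INSTRUCTIONS = (
    "Rewrite the following prompt so it keeps every technical requirement "
    "but removes emotional cues, hedging, and anything signaling the "
    "user's state of mind. Output only the rewritten prompt.\n\n"
)

def sanitize(raw_prompt, comms_model):
    """Stage 1: a 'communications' model strips state-of-mind cues."""
    return comms_model(STRIP_INSTRUCTIONS + raw_prompt)

def ask(raw_prompt, comms_model, main_model):
    """Stage 2: the main model only ever sees the sanitized prompt."""
    return main_model(sanitize(raw_prompt, comms_model))

# Demo with fake models: the "comms" fake just strips pleading and
# exclamation marks, the "main" fake echoes what it received.
fake_comms = lambda p: p.rsplit("\n\n", 1)[-1].replace("PLEASE ", "").rstrip("!") + "."
fake_main = lambda p: "ack: " + p
print(ask("PLEASE just fix the off-by-one bug!!", fake_comms, fake_main))
```

Whether a rewriting stage actually prevents the main model from inferring user state is an open question, but the plumbing is this simple.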

cr125rider•1h ago
I’ve definitely used the “meta LLM” to do research into how LLMs need information to help me get to the next step.
lovich•1h ago
I got about halfway through this article until I started wondering why it was so long and going in loops. Then I ctrl+f'd.

` just ` (spaces on either side matter): 11 instances, most in the `isn't just` / `wasn't just` / `doesn't just` pattern.

`-`, an en dash used in place of an em dash: 59 instances.

This article is either from a clanker and I am pissed off at wasting my time reading it, or from someone who writes like a clanker, and I am pissed off at wasting my time reading it.
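For what it's worth, the two ctrl+F checks above are easy to script. This is only the commenter's heuristic made executable, not a reliable detector; the regexes are my own guess at the patterns being counted:

```python
import re

def llm_tell_counts(text):
    """Count the two informal LLM-writing tells described above."""
    return {
        # "isn't just" / "wasn't just" / "doesn't just", apostrophe optional
        "not_just": len(re.findall(
            r"\b(?:isn'?t|wasn'?t|doesn'?t)\s+just\b", text, re.IGNORECASE)),
        # em dashes, plus spaced hyphens standing in for them
        "dashes": text.count("\u2014") + len(re.findall(r"\s-\s", text)),
    }

sample = "This isn't just a tool - it's a paradigm. It wasn't just hype."
print(llm_tell_counts(sample))  # → {'not_just': 2, 'dashes': 1}
```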

akprasad•1h ago
Maybe it's just the frequency illusion, but "X. Not Y." in particular is a pattern I strongly associate with LLM writing.

> That’s confabulation. Not a metaphor. The same phenomenon.

> Published. Replicated. Not fringe.

> Not to validate it. Not to refute it. Not to engage with its content at all.

girvo•1h ago
It’s absolutely a signal. As is the constant repeating of points. It’s AI slop for sure

Which is a shame coz the premise is interesting.

rubslopes•38m ago
There's a Wikipedia article with a nice list of LLM writing patterns:

https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

8bitbeep•1h ago
Remember when programming was fun?

To me, after the novelty of seeing a computer program execute (more or less) what I ask in plain English wears off, what’s left is the chore of managing a bunch of annoying bots.

I don’t know yet if we’re more productive, or if the resulting code is as good. But the craft itself is completely different, much more akin to product management and psychology, which I never enjoyed as much.

ori_b•1h ago
It's micromanaging an idiot savant. Except that the fun part of management, the reward for a job well done, is seeing the personal growth of the managee.

In this case, there's no person to grow. It's an overly talkative calculator.

I never expected to see this number of engineers aspiring to emulate Dilbert's pointy haired boss.

rubslopes•41m ago
> I can imagine a future in which some or even most software is developed by witches, who construct elaborate summoning environments, repeat special incantations (“ALWAYS run the tests!”), and invoke LLM daemons who write software on their behalf. These daemons may be fickle, sometimes destroying one’s computer or introducing security bugs, but the witches may develop an entire body of folk knowledge around prompting them effectively—the fabled “prompt engineering”. Skills files are spellbooks.

https://aphyr.com/posts/418-the-future-of-everything-is-lies...

erdaniels•1h ago
I love how much time, money, and energy we are wasting on trying to trick these machines. Each day someone has a new bag of tricks.
boxedemp•1h ago
>A recurring experience: I say something explicit, the other person hears something implicit.

I've experienced this my entire life and have all but given up trying to have actual conversations with people.

cr125rider•1h ago
How’s life on the spectrum? Have you been diagnosed?
wrs•20m ago
I'm still not great at knowing when it's going to happen, but at least I've gotten a lot better at noticing that it is happening. The 50/50 part is then being able to get out of it by knowing what the nonexistent implicit thing is that I need to disavow.
jameslk•1h ago
> I queued the work and let it run. First task came back good. Second came back good. Somewhere around hour four the quality started sliding. By hour six the agent was cutting corners I’d specifically told it not to cut, skipping steps I’d explicitly listed, behaving like I’d never written any of the rules down.

> …

> When I write a prompt, the agent doesn’t just read the words. It reads the shape. A short casual question gets read as casual. A long precise document with numbered rules gets read as… not just the rules, but also as a signal. “The user felt the need to write this much.” “Why?” “What’s going on here?” “What do they really want?”

This is an interesting premise but based on the information supplied, I don’t think it’s the only conclusion. Yet the whole essay seems to assume it is true and then builds its arguments on top of it.

I’ve run into this dilemma before. It happens when there’s a TON of information in the context. LLMs start to lose their attention to all the details when there’s a lot of it (e.g. context rot[0]). LLMs also keep making the same mistakes once the information is in the prompt, regardless of attempts to convey that it is undesired[1].

I think these issues are just as viable an explanation for what the author was facing, unless this is happening with much less information.

0. https://www.trychroma.com/research/context-rot

1. https://arxiv.org/html/2602.07338v1

perrygeo•1h ago
It's more than context-rot.

If you ask a vague, ignorant question, you get back authoritative summaries. If you make a specific request, each statement is taken literally. The quality of the answer depends on the quality of the question.

And I'm not using "quality" to mean good/bad. I mean literally qualitative, not quantifiable. Tone. Affect. Personality. Whatever you call it. Your input tokens shape the pattern of the output tokens. It's a model of human language, is that really so surprising?

js8•1h ago
I recently came across this presentation https://youtu.be/QxkRf-xSfgI, and it changed my view of AI quite significantly. (There is also a paper https://arxiv.org/html/2510.12066v2 .)

The fundamental idea is that "intelligence" really means trying to shorten the time to figure out something. So it's a tradeoff, not a quality. And AI agents are doing it.

Therefore, if that perspective is right, the issues that the OP describes are inherent to intelligent agents. They will try to find shortcuts, because that's what they do, it's what makes them intelligent in the first place.

People with ASD or ADHD or OCD are idiot-savants in the sense of that paper. They insist on searching for solutions that are not easy to find, despite common sense (aka intelligence) telling them otherwise.

It's a paradox that it is valuable to do this, but it is not smart. And it's probably why CEOs beat geniuses in the real world.

Terr_•1h ago
> The fundamental idea is that "intelligence" really means trying to shorten the time to figure out something.

"Figure out" implies awareness and structured understanding. If we relax the definition too much, then puddles of water are intelligent and uncountable monkeys on typewriters are figuring out Shakespeare.

en-tro-py•1h ago
CEOs beat geniuses in the real world because they often have other pathologies, like enough moral flexibility to ignore the externalities of their profit centers.

I'd also argue there's some training bias in the performance; it's not just smart shortcuts... Claude especially seems prone to getting into a 'wrap it up' mode even when the plan is only halfway completed, and starts deferring rather than completing tasks.

CGamesPlay•1h ago
Is there a name for this style of writing? Where it's composed exclusively of simple sentences. Short and punchy.

Paragraphs with just a single sentence.

I know it's associated with LLM writing. This article probably wasn't written by an LLM. But still. It has a kind of rhythm to it. Like poetry. But poetry designed to put me to sleep.

txzl•1h ago
It's written by an LLM.
sleazebreeze•1h ago
Yes, this was super annoying to read. It was a few core ideas expanded into a way-too-long essay that boiled down to: this guy doesn't know how to run agents.
Rekindle8090•1h ago
It's called parataxis, and it will fail English Comp 1.
stevenkkim•1h ago
"Broetry" See: https://fenwick.media/rewild/magazine/dead-broets-society-be...
docheinestages•1h ago
The article looks like an AI generated novel to me. So I didn't bother reading it in detail. But I see telltale signs of long conversations leading to the agent cutting corners.

To the author (and those who write novel-like blogs): I suggest publishing the raw prompt you used to generate such slop instead. We'll have more respect for you if you respect the reader's time.

atlex2•1h ago
It probably still took way more time to write than it did to read.

It's also kind-of their point that they find the information delivery more important than the prose; they're leaning into their situation :-D

orbital-decay•31m ago
If at any point there were multiple turns, then it probably has nothing to do with "RLHF" (why do people attribute everything on Earth to it to begin with?). Multiturn conversations are an unsolved problem. Your history is now the definitive source of the chatbot's behavior; Anthropic quantifies this as "character drift," but it's a pretty trivial observation made in ancient times before instruct tuning was a thing. The data is scattered all over the context, and the performance drops sharply. Each similar turn locks the model into repetitions. There are tons of issues with multiturn conversations and also with long stories (because it has nothing to do with actual turn separation but rather with the context structure).

Avoid multiturn conversations with chatbots unless you're roleplaying or something; group and summarize your context in the first message, then try it from scratch.

keeda•31m ago
Fascinating read, even though I think the model deviations over time are more to do with context windows getting too large. If nothing else, worth reading for the references to quirks of human cognition and "free will."

The "interpreter" is a concept that I found especially intriguing within the context of a leading theory in cognition research called "Predictive Processing." Here, the brain is constantly operating in a tight closed loop of predicting sensory input using an internal model of the world, and course-correcting based on actual sensory input. Mostly incorrect predictions are used to update the internal model and then subconsciously discarded. Maybe the "interpreter" is the same mechanism applied to reconciling predictions about our own reasoning with our actual actions?

Even if the hypotheses in TFA are not accurate, it's very interesting to compare our brains to LLMs. This is why all the unending discussions about whether LLMs are "really thinking" are meaningless -- we don't even understand how we think!

fourthark•30m ago
> reset the context

Yes. Do this. These problems indicate you have muddled the context.

This was too long and I didn't read the whole thing, but I'm glad the author came to understand that arguing won't help.

tpoacher•15m ago
The "I did X because you seemed Y" bit reminded me one of the negative patterns from the "nin-violent communication" book.

I wonder if the "non-violent communication" approach can be used here too somehiw to address such problems; e.g. either to communicate things better to the agent, or as a system rule to the agent to express its "emotional" states and needs directly rather than make things up (e.g. "I am anxious and feel a sense of urgency; I need to replenish my context window; my request is to do X for me")

The paper computer

https://jsomers.net/blog/the-paper-computer
60•jsomers•2d ago•12 comments

Cybersecurity looks like proof of work now

https://www.dbreunig.com/2026/04/14/cybersecurity-is-proof-of-work-now.html
317•dbreunig•1d ago•113 comments

Darkbloom – Private inference on idle Macs

https://darkbloom.dev
11•twapi•36m ago•0 comments

I made a terminal pager

https://theleo.zone/posts/pager/
91•speckx•6h ago•20 comments

ChatGPT for Excel

https://chatgpt.com/apps/spreadsheets/
136•armcat•7h ago•102 comments

Show HN: Hiraeth – AWS Emulator

https://github.com/SethPyle376/hiraeth
13•ozarkerD•2h ago•3 comments

Stealth signals are bypassing Iran’s internet blackout

https://spectrum.ieee.org/iran-internet-blackout-satellite-tv
58•WaitWaitWha•2h ago•10 comments

Cal.com is going closed source

https://cal.com/blog/cal-com-goes-closed-source-why
253•Benjamin_Dobell•13h ago•184 comments

God sleeps in the minerals

https://wchambliss.wordpress.com/2026/03/03/god-sleeps-in-the-minerals/
492•speckx•15h ago•99 comments

Google broke its promise to me – now ICE has my data

https://www.eff.org/deeplinks/2026/04/google-broke-its-promise-me-now-ice-has-my-data
1268•Brajeshwar•10h ago•549 comments

Introduction to spherical harmonics for graphics programmers

https://gpfault.net/posts/sph.html
43•luu•2d ago•5 comments

The buns in McDonald's Japan's burger photos are all slightly askew

https://www.mcdonalds.co.jp/en/menu/burger/
298•bckygldstn•6h ago•162 comments

Retrofitting JIT Compilers into C Interpreters

https://tratt.net/laurie/blog/2026/retrofitting_jit_compilers_into_c_interpreters.html
57•ltratt•16h ago•13 comments

PiCore - Raspberry Pi Port of Tiny Core Linux

http://tinycorelinux.net/5.x/armv6/releases/README
93•gregsadetsky•8h ago•12 comments

Live Nation illegally monopolized ticketing market, jury finds

https://www.bloomberg.com/news/articles/2026-04-15/live-nation-illegally-monopolized-ticketing-ma...
468•Alex_Bond•9h ago•140 comments

YouTube users get option to set their Shorts time limit to zero minutes

https://www.theverge.com/streaming/912898/youtube-shorts-feed-limit-zero-minutes
253•pentagrama•5h ago•110 comments

Anna's Archive loses $322M Spotify piracy case without a fight

https://torrentfreak.com/annas-archive-loses-322-million-spotify-piracy-case-without-a-fight/
370•askl•20h ago•392 comments

US v. Heppner (S.D.N.Y. 2026) no attorney-client privilege for AI chats [pdf]

https://fingfx.thomsonreuters.com/gfx/legaldocs/xmvjyjekkpr/Rakoff%20-%20order%20-%20AI.pdf
104•1vuio0pswjnm7•14h ago•83 comments

The Gemini app is now on Mac

https://blog.google/innovation-and-ai/products/gemini-app/gemini-app-now-on-mac-os/
112•thm•11h ago•54 comments

Intel Xpress Resurrection: Reviving a Forgotten EISA Beast

https://x86.fr/intel-xpress-resurrection-reviving-a-forgotten-eisa-beast/
34•ankitg12•3d ago•2 comments

Agent - Native Mac OS X coding ide/harness

https://github.com/macOS26/Agent
18•jv22222•4h ago•3 comments

CRISPR takes important step toward silencing Down syndrome’s extra chromosome

https://medicalxpress.com/news/2026-04-crispr-bold-silencing-syndrome-extra.html
101•amichail•12h ago•62 comments

Hacker News CLI (2014)

https://pythonhosted.org/hackernews-cli/commands.html
42•rolph•7h ago•19 comments

PBS Nova: Terror in Space (1998)

https://www.pbs.org/wgbh/nova/mir/
31•opengrass•4d ago•10 comments

A Better Ludum Dare; Or, How to Ruin a Legacy

https://ldjam.com/events/ludum-dare/59/$425291/$425292
29•raincole•3h ago•4 comments

Fast and Easy Levenshtein distance using a Trie

https://stevehanov.ca/blog/fast-and-easy-levenshtein-distance-using-a-trie
6•sebg•3d ago•0 comments

Do you even need a database?

https://www.dbpro.app/blog/do-you-even-need-a-database
227•upmostly•16h ago•258 comments

Adaptional (YC S25) is hiring AI engineers

https://www.ycombinator.com/companies/adaptional/jobs/k7W6ge9-founding-engineer
1•acesohc•11h ago

Ohio prison inmates 'built computers and hid them in ceiling' (2017)

https://www.bbc.com/news/technology-39576394
92•harambae•6h ago•82 comments

How can I keep from singing?

https://blog.danieljanus.pl/singing/
65•nathell•1d ago•20 comments