Is there any way to stop LLMs generating text like this?
Is it really better than just writing it yourself? I guess generating blog posts is lower-effort and thus wins in this attention economy people think they are competing in.
The actual engine is here: https://github.com/badlogic/pi-mono
An interesting idea for getting a bit more control over what your 'agent' is doing while keeping it simple. Some of the prompts do give me pause, though: why do we talk to text generators as if they were people? Have we found this works best, or is it a sort of cargo cult?
https://github.com/badlogic/pi-mono/blob/main/.pi/prompts/is...
I love that he's telling his tools not to trust people in his comments here!
I wrote earlier why the agent stack is splitting into specialized layers, and this is a good example of what drives it. Monolithic tools waste the most on their own overhead. https://philippdubach.com/posts/dont-go-monolithic-the-agent...
[meta] I frequently see criticism about an article having been obviously written by an LLM. Often the author apologizes for it in the HN comments. I wonder what is wrong with me that I am totally unaware of this LLM stench.
I have gotten a lot of value from hearing people criticize candidates' LLM usage in technical interviews and conversations. I adjusted my style away from talking about axioms and best practices; instead I always relate a personal anecdote to explain a technical decision. This has been universally well received.
So I am hoping that someone can respond with some helpful, holistic answers beyond a checklist of "uses em-dashes" and "says 'not X, but Y'". I suspect my own writing style could easily be declared LLM-written.
The writing definitely has a stench and is full of breathless comparisons which pretend some very minor thing is a breakthrough. This is annoying and trite and people dislike it for that alone but also for the more important reasons above.
This blog post could have been a lot shorter. I'd honestly rather just read the prompt with a link to pi. People like this author should just publish their prompt, IMO, and they will continue to be called out on it until this bubble pops.
Tried Mistral Vibe, Codex, Opencode, and Claude with gpt-oss:20b, Ministral 3B/8B, Nemotron3 Nano 30B, and GLM 4.6V, finally settling on gpt-oss for its impressive pass rates. All the other tools inject up to around 7-10k tokens in the initial prompt, while pi takes up ~1.5k. This works out to be quite usable for my M3 Pro machine, which can take a while processing the huge initial prompts from the other CLIs.
While I'm not doing any serious work, and the other tools could be tweaked to use a simpler system prompt, pi felt quick, and the LLMs used the tool calls correctly without being confused by the huge prompts being dumped on them.
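If you want to sanity-check this kind of overhead yourself, here's a minimal sketch for comparing initial-prompt sizes across tools. It assumes you've captured each CLI's initial prompt to text (the inline strings below are hypothetical stand-ins) and uses the common ~4-characters-per-token heuristic instead of a real tokenizer, so treat the numbers as rough estimates only.

```python
# Rough comparison of initial system-prompt overhead between agent CLIs.
# The prompt contents below are hypothetical placeholders; in practice
# you would read each tool's captured initial prompt from a file.

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4-chars-per-token heuristic."""
    return max(1, round(len(text) / 4))

def compare_prompts(prompts: dict[str, str]) -> list[tuple[str, int]]:
    """Return (tool, estimated tokens) pairs, smallest overhead first."""
    return sorted(
        ((name, estimate_tokens(text)) for name, text in prompts.items()),
        key=lambda pair: pair[1],
    )

if __name__ == "__main__":
    prompts = {
        "pi": "x" * 6_000,          # roughly ~1.5k tokens
        "other-cli": "x" * 32_000,  # roughly ~8k tokens
    }
    for name, tokens in compare_prompts(prompts):
        print(f"{name}: ~{tokens} tokens")
```

For real numbers you'd swap the heuristic for the tokenizer matching your model, but even this crude estimate makes the gap between a ~1.5k-token prompt and a ~8k-token one obvious on a machine where prompt processing dominates latency.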
But more important than the writing style, there is no interesting content here. It's all generic statements and platitudes with a bunch of generated links.