
Deslop

https://tahigichigi.substack.com/p/12-red-flags-of-ai-writing-and-how
12•yayitswei•2h ago

Comments

Leynos•1h ago
Please try and do these, because there's nothing more annoying than some comic book guy wannabe moaning about AI tells while I'm trying to enjoy the discussion.
Der_Einzige•1h ago
We wrote the paper on deslopping LLMs and their outputs: https://arxiv.org/abs/2510.15061
piker•1h ago
Just don't use LLMs to generate text you want other humans to read. Think and then write. If it isn't worth your effort, it certainly isn't worth your audience's.
fastasucan•29m ago
It comes down to this for me as well. Just the same way I never open auto-generated emails, I see no reason to read text other people have had an LLM write for them.
fxwin•1h ago
> The elephant in the room is that we’re all using AI to write but none of us wants to feel like we’re reading AI generated content.

My initial reaction to the first half of this sentence was "Uhh, no?", but then I realized it's on Substack, so it's probably more typical for that particular type of writer (writing to post, not writing to be read). I don't even let it write documentation or other technical things anymore, because it kept getting small details wrong or subtly injecting meaning that isn't there.

The main problem for me isn't even the eye-roll-inducing phrases from the article (though they don't help); it's that LLMs tend to subtly but meaningfully alter content, so the effect of the text ends up (at best slightly) misaligned with the effect I intended. It's sort of an uncanny valley for text.

Along with the problems above, manual writing also serves as a sort of "proof-of-work" establishing an article's credibility and meaning: if you didn't bother taking the time to write it, why should I spend my time reading it?

idop•1h ago
Indeed. I have never used an LLM to write. And coding agents are terrible at writing documentation: it's just bullet points with no context, littered with unnecessary icons, and impossible to understand. There's no flow to the text, no actual reasoning (only confusing comments about changes made during development that are absolutely irrelevant to the final work), and yet it's somehow too long.

The elephant in the room is that AI is allowing developers who previously half-assed their work to now quarter-ass it.

fxwin•1h ago
I have tried using them, both for technical documentation (think README.md) and for more expository material (think wiki articles), and bounced off of them pretty quickly. They're too verbose and focus on the wrong things for the former, where the output is meant to get people up to speed quickly, and they suffer from the issues I mentioned above for the latter, forcing me to rewrite so much that it's more frustrating than just writing it myself in the first place.

That's without even mentioning the personal advantages you get from distilling notes, structuring and writing things yourself, which you get even if nobody ever reads what you write.

jbstack•1h ago
"Please write me some documentation for this code. Don't just give me a list of bullet points. Make sure you include some context. Don't include any icons. Make sure the text flows well and that there's actual reasoning. Don't include comments about changes made during development that are irrelevant to the final work. Try to keep it concise while respecting these rules."

I think many of the criticisms of LLMs come from shallow use. People just say "write some documentation" and then aren't happy with the result. But in many cases, you can fix the things you don't like with more precise prompting. You can also iterate for a few rounds to improve the output instead of just accepting the first answer. I'm not saying LLMs are flawless, just that there's a middle ground between "the documentation it produced was terrible" and "the documentation it produced was exactly how I would have written it".

idop•58m ago
Sure, but that's part of my point. It gives a facade of attention to detail (on the part of the dev) where there was none.
fxwin•53m ago
Believe me, I've tried. By the time I get the documentation the way I want it, I am no longer faster than if I had just written it myself, and the process along the way is much more annoying. Models have a place (e.g. fixing formatting or filling in, say, sample JSON responses), but for almost anything core to the content I still find them lacking.
fastasucan•28m ago
Why not write it yourself?
Antibabelic•1h ago
Had the same thought reading this. I haven't found a place for LLMs in my writing and I'm sure many people have the same experience.

I'm sure it's great for pumping out SEO corporate blogposts. How many articles are already out there on the "hidden costs of micromanagement", to take an example from this post, and how many people actually read them? For original writing, if you don't have enough to say or can't be bothered to put your thoughts into coherent language, that's not something AI can truly help with, in my experience. The result will be vague, wordy, and inconsistent. No amount of patching over, the kind of "deslopification" this post proposes, will salvage something into which minimal work has been put.

stuaxo•1h ago
This article itself feels LLM written.
oytis•51m ago
It is also an advertisement for a magic prompt that makes an LLM edit text to look less LLM-y.
varjag•1h ago
I would also point to a human-generated (and maintained) list:

https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

randomtoast•1h ago
You just need to use this list as a prompt and instruct the LLM to avoid this kind of slop. If you want to be serious about it, you can even run some of these slop detectors and iterate in a loop until the top three detectors rate your text as "very likely human."
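A minimal sketch of that detect-and-revise loop. Everything here is illustrative: `rewrite_fn` stands in for whatever LLM call you use, and the "detector" is just a toy phrase scan over a few well-known tells, not any of the commercial detectors mentioned above.

```python
# Toy stand-ins for tell phrases from lists like Wikipedia's
# "Signs of AI writing" page; a real loop would use a fuller list
# or an actual detector service.
SLOP_PHRASES = [
    "delve into",
    "rich tapestry",
    "in today's fast-paced world",
    "it's worth noting",
]

def slop_score(text: str) -> int:
    """Count occurrences of known tell phrases (stand-in for a real detector)."""
    lower = text.lower()
    return sum(lower.count(phrase) for phrase in SLOP_PHRASES)

def deslop(text: str, rewrite_fn, max_rounds: int = 3) -> str:
    """While the detector still flags the text, ask the model to revise.

    rewrite_fn is a placeholder for an LLM call prompted with something
    like: "Rewrite this, avoiding the following phrases: ..."
    """
    for _ in range(max_rounds):
        if slop_score(text) == 0:
            break
        text = rewrite_fn(text)
    return text
```

The `max_rounds` cap matters: detectors disagree with each other, so without it the loop can churn forever on text that one detector will never pass.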
cadamsdotcom•1h ago
There’s a really cool technique Andrew Ng nicknamed reflection, where you take the AI output and feed it back in, asking the model to look at it - reflect on it - in light of some other information.

Getting the writing from your model then following up with “here’s what you wrote, here’re some samples of how I wrote, can you redo that to match?” makes its writing much less slop-y.
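The two-step pass described above can be sketched like this. `llm` is a placeholder for any chat-completion call, and the function name and prompt wording are my own, not Ng's: generate a draft, then feed it back alongside your own writing samples and ask for a rewrite in that voice.

```python
def reflect_rewrite(llm, task: str, samples: list[str]) -> str:
    """Generate a draft, then reflect: rewrite it to match the given samples.

    llm: any callable taking a prompt string and returning the model's reply.
    """
    # Pass 1: plain generation.
    draft = llm(f"Write the following: {task}")

    # Pass 2: reflection - show the model its own output plus style samples.
    critique_prompt = (
        "Here is what you wrote:\n" + draft +
        "\n\nHere are samples of how I write:\n" + "\n---\n".join(samples) +
        "\n\nRewrite your draft to match my voice and style."
    )
    return llm(critique_prompt)
```

The same shape works with any provider; the only requirement is that the second prompt actually contains both the draft and the samples, so the model has something concrete to compare against.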

mold_aid•1h ago
Just seems like the author could have said "write the damn thing yourself" and been done with it.
oytis•49m ago
It will definitely help, but also some people, especially in marketing/sales, were writing like that before LLMs. So you should not only write the thing yourself, but also learn some good writing style.

Sizing chaos

https://pudding.cool/2026/02/womens-sizing/
642•zdw•14h ago•347 comments

The Mongol Khans of Medieval France

https://www.historytoday.com/archive/feature/mongol-khans-medieval-france
7•Thevet•2d ago•0 comments

Don't Trust the Salt: AI Summarization, Multilingual Safety, and LLM Guardrails

https://royapakzad.substack.com/p/multilingual-llm-evaluation-to-guardrails
11•benbreen•2d ago•0 comments

27-year-old Apple iBooks can connect to Wi-Fi and download official updates

https://old.reddit.com/r/MacOS/comments/1r8900z/macos_which_officially_supports_27_year_old/
371•surprisetalk•15h ago•203 comments

15 years of FP64 segmentation, and why the Blackwell Ultra breaks the pattern

https://nicolasdickenmann.com/blog/the-great-fp64-divide.html
133•fp64enjoyer•10h ago•48 comments

Step 3.5 Flash – Open-source foundation model, supports deep reasoning at speed

https://static.stepfun.com/blog/step-3.5-flash/
114•kristianp•9h ago•39 comments

Cosmologically Unique IDs

https://jasonfantl.com/posts/Universal-Unique-IDs/
407•jfantl•17h ago•121 comments

Old School Visual Effects: The Cloud Tank (2010)

http://singlemindedmovieblog.blogspot.com/2010/04/old-school-effects-cloud-tank.html
38•exvi•5h ago•4 comments

Voith Schneider Propeller

https://en.wikipedia.org/wiki/Voith_Schneider_Propeller
16•Luc•3d ago•3 comments

Anthropic officially bans using subscription auth for third party use

https://code.claude.com/docs/en/legal-and-compliance
423•theahura•9h ago•503 comments

Tailscale Peer Relays is now generally available

https://tailscale.com/blog/peer-relays-ga
420•sz4kerto•19h ago•207 comments

Visualizing the ARM64 Instruction Set (2024)

https://zyedidia.github.io/blog/posts/6-arm64/
48•userbinator•3d ago•8 comments

How to choose between Hindley-Milner and bidirectional typing

https://thunderseethe.dev/posts/how-to-choose-between-hm-and-bidir/
108•thunderseethe•3d ago•26 comments

Zero-day CSS: CVE-2026-2441 exists in the wild

https://chromereleases.googleblog.com/2026/02/stable-channel-update-for-desktop_13.html
345•idoxer•19h ago•188 comments

Lilush – LuaJIT static runtime and shell

https://lilush.link/
5•ksymph•2d ago•0 comments

DNS-Persist-01: A New Model for DNS-Based Challenge Validation

https://letsencrypt.org/2026/02/18/dns-persist-01.html
280•todsacerdoti•18h ago•126 comments

A word processor from 1990s for Atari ST/TOS is still supported by enthusiasts

https://tempus-word.de/en/index
59•muzzy19•2d ago•23 comments

Fff.nvim – Typo-resistant code search

https://github.com/dmtrKovalenko/fff.nvim
51•neogoose•2d ago•6 comments

Show HN: A Lisp where each function call runs a Docker container

https://github.com/a11ce/docker-lisp
52•a11ce•7h ago•17 comments

Antarctica sits above Earth's strongest 'gravity hole' – how it got that way

https://phys.org/news/2026-02-antarctica-earth-strongest-gravity-hole.html
13•bikenaga•2d ago•6 comments

All Look Same?

https://alllooksame.com/
79•mirawelner•13h ago•57 comments

What years of production-grade concurrency teaches us about building AI agents

https://georgeguimaraes.com/your-agent-orchestrator-is-just-a-bad-clone-of-elixir/
89•ellieh•13h ago•18 comments

Minecraft Java is switching from OpenGL to Vulkan

https://www.gamingonlinux.com/2026/02/minecraft-java-is-switching-from-opengl-to-vulkan-for-the-v...
227•tuananh•10h ago•102 comments

The Perils of ISBN

https://rygoldstein.com/posts/perils-of-isbn
135•evakhoury•18h ago•68 comments

A Pokémon of a Different Color

https://matthew.verive.me/blog/color/
118•Risse•4d ago•17 comments

What Every Experimenter Must Know About Randomization

https://spawn-queue.acm.org/doi/pdf/10.1145/3778029
88•underscoreF•17h ago•54 comments

Stoolap/Node: A Native Node.js Driver That's Surprisingly Fast

https://stoolap.io/blog/2026/02/19/introducing-stoolap-node/
23•murat3ok•5h ago•17 comments

Making a font with ligatures to display thirteenth-century monk numerals

https://digitalseams.com/blog/making-a-font-with-9999-ligatures-to-display-thirteenth-century-mon...
89•a7b3fa•3d ago•12 comments

R3forth: A concatenative language derived from ColorForth

https://github.com/phreda4/r3/blob/main/doc/r3forth_tutorial.md
89•tosh•16h ago•17 comments

Metriport (YC S22) is hiring a security engineer to harden healthcare infra

https://www.ycombinator.com/companies/metriport/jobs/XC2AF8s-senior-security-engineer
1•dgoncharov•15h ago