frontpage.

LLM Writing Tropes.md

https://tropes.fyi/tropes-md
68•walterbell•3h ago

Comments

mvkel•1h ago
Weirdly, LLMs seem to break with these instructions. They simply ignore them, almost as if the pretraining/RL weights are so heavy that no amount of system prompting can override them.
RandomWorker•1h ago
It's a beauty. You can easily spot YouTubers who generate scripts full of these tropes. Once I notice them, usually within 30 seconds, I remove, block, and mark "do not recommend any further." I hope to train the algorithm to detect AI scripts and stop recommending me those videos. It's honestly turned me off from YouTube so much; I find myself going to my "subscribed" tab and to content creators who still believe in the craft.
antinomicus•6m ago
I’ve taken it one step further. YouTube as a front end is awful, and I’ve had enough. Tons of little dark patterns made to keep you on the site, annoying algorithms taking you places you never want to go, shitty AI slop, the whole nine yards. But I still like certain channels. As a result I’m doing everything self-hosted now - not just YouTube but literally every single piece of digital media I consume. For YouTube I had to create a rotating pool of 5 residential ISP proxies - replaced as soon as YouTube's download-bot restrictions kick in, and rotated weekly either way.

With this I am able to get all my favorite subs onto my actual hard drive, with some extra awesome features as a result: I vibe-coded a little helper app that lets me query the transcript of a video and ask questions about what they say, using cheap Haiku queries. I can also get my subs onto my Jellyfin server and view them there on any device. Even the comments get downloaded.

All these streamers have gone too far trying to maximize engagement and have broken the social contract, so I see this as totally fair game.

carleverett•1h ago
"The "It's not X -- it's Y" pattern, often with an em dash. The single most commonly identified AI writing tell. Man I f*cking hate it. AI uses this to create false profundity by framing everything as a surprising reframe. One in a piece can be effective; ten in a blog post is a genuine insult to the reader. Before LLMs, people simply did not write like this at scale."

This one hit home... the first time I ever saw Claude do it I really liked it. It's amazing how quickly it became the #1 most aggravating thing it does just through sheer overuse. And of course now it's rampant in writing everywhere.

nh23423fefe•1h ago
Weird to care about a harmless construction along with punctuation.
ashivkum•1h ago
weirder still to immerse your brain in sewage and take pride in your lack of discernment.
mapmeld•50m ago
If you participate in certain online communities where posts used to generally share real ideas and ask real beginner questions, you get tired of it. I am especially tired of seeing "it's not X - it's Y" on /r/MachineLearning posts, claiming that they've found some "geometry" or basic PyTorch code which they think will solve AI hallucinations. And it's becoming clear these people are not just doing this sort of a thing on a whim, but spending days in delusional conversations with the AI.
bitwize•53m ago
If you sound like a car ad from Road & Track, I'm going to flag you as a bot.

"No rough handling. No struggles to accelerate. Just pure performance. The new Toyota GT. It's not just a car—it's a revolution."

Most of the tropes listed on this page give text a more "car ad" (or sometimes "movie trailer") quality. I wonder if magazine scans and press releases unduly weighted the training set.

Retr0id•9m ago
I think it's more likely that car ads and chatbots are both optimizing for the same thing, i.e. grabbing the audience's attention.
cyanydeez•1h ago
This kills the headline baiting tech blogger.
CharlesW•1h ago
A substantive human-written resource: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
bitwize•1h ago
You know how no one ever wrote their own software, and then generative AI came along and suddenly we could have app meals home-cooked by barefoot developers? (The use of such cottagecore terminology for a process that requires being an ongoing client of a hundred-gigabuck, planet-burning megacorporation rubs me in many wrong ways.)

If AI finally gets rid of the thing that drove me nuts for years: "leverage" as a verb meaning roughly "to use"—when no human intervention seemed to work, then I shall be over-the-moon happy. I once worked at a place where this particular word was lever—er, used all the damn time, and I'd never encountered something so NPC-ish. I felt like I was in The Twilight Zone. I could've told you way back then that you sounded like a bot doing that; now people might actually believe me, and thank god.

I will stick by the em dashes, however. And I might just start using arrows too. Compose, -, > gives → (right arrow). Not even difficult.

Jordan-117•1h ago
Wikipedia also has an exhaustive guide, though it's not fun finding tropes you use yourself (I'm very guilty of the false range "from X to Y" thing):

https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

Another one that seems impossible for LLMs to avoid: breaking an article into a title and a subtitle, separated by a colon. Even if you explicitly tell it not to, it'll do it.

joshvm•1h ago
No mention of Claude/ChatGPT's favourite new word, "genuine", and friends? They also like using "real" and "honest" when giving advice. As far as I can tell this is a new-ish change.

> Honestly? We should address X first. It's a genuine issue and we've found a real bug here.

Honorable mention: "no <thing you told me not to do>". I guess this is meant to reassure you of adherence to the prompt? I see that one all the time in vibe-coded PRs.

pinum•8m ago
Similarly, "X that actually works"
capnrefsmmat•49m ago
I work on research studying LLM writing styles, so I am going to have to steal this. I've seen plenty of lists of LLM style features, but this is the first I've noticed that mentions "tapestry", which we found is GPT-4o's second-most-overused word (after "camaraderie", for some reason).[1] We used a set of grammatical features in our initial style comparisons (like present participles, which GPT-4o loved so much that they were a pretty accurate classifier on their own), but it shouldn't be too hard to pattern-match some of these other features and quantify them.
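Counting such features by pattern-matching is easy to sketch. The snippet below is a hypothetical illustration, not the paper's actual feature set: the regexes (a crude "it's not X—it's Y" matcher, an "-ing"-word proxy for present participles, and a few buzzwords mentioned in this thread) are my own assumptions.

```python
import re

# Illustrative sketch only: these regexes approximate a few of the
# stylistic tropes discussed above; they are not the study's features.
FEATURES = {
    # "It's not X -- it's Y" negative parallelism (em dash, "--", or comma)
    "not_x_but_y": re.compile(
        r"\b(?:it'?s|this is) not \w+[^.]{0,40}(?:--|—|,) ?(?:it'?s|but)\b",
        re.IGNORECASE,
    ),
    # crude proxy for present participles: words ending in -ing
    "ing_words": re.compile(r"\b\w+ing\b", re.IGNORECASE),
    # overused vocabulary called out in the thread
    "buzzwords": re.compile(r"\b(?:tapestry|camaraderie|delve)\b", re.IGNORECASE),
}

def feature_counts(text: str) -> dict[str, int]:
    """Return the raw number of matches for each stylistic feature."""
    return {name: len(rx.findall(text)) for name, rx in FEATURES.items()}

sample = (
    "It's not just a car—it's a revolution. "
    "We delve into a rich tapestry of shared camaraderie, "
    "exploring, learning, and growing together."
)
print(feature_counts(sample))
```

Raw counts like these would still need normalizing by text length before being used as classifier inputs, but they show how mechanically detectable many of the tropes are.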

If anyone who works on LLMs is reading, a question: When we've tried base models (no instruction tuning/RLHF, just text completion), they show far fewer stylistic anomalies like this. So it's not that the training data is weird. It's something in instruction-tuning that's doing it. Do you ask the human raters to evaluate style? Is there a rubric? Why is the instruction tuning pushing such a noticeable style shift?

[1] https://www.pnas.org/doi/10.1073/pnas.2422455122, preprint at https://arxiv.org/abs/2410.16107. Working on extending this to more recent models and other grammatical features now

networked•28m ago
You may be interested in my collection of links about AI's writing style: https://dbohdan.com/ai-writing-style. I've just added your preprint and tropes.fyi. It has "hydrogen jukeboxes: on the crammed poetics of 'creative writing' LLMs" by nostalgebraist (https://www.tumblr.com/nostalgebraist/778041178124926976/hyd...), which features an example with "tapestry".

> Why is the instruction tuning pushing such a noticeable style shift?

Gwern Branwen has covered this extensively: https://gwern.net/doc/reinforcement-learning/preference-lear....

djoldman•25m ago
The RLHF is what creates these anomalies. See "delve", attributed to RLHF annotators from Kenya and Nigeria.

Interestingly, because perplexity is the optimization objective, the pretrained models should reflect the least surprising outputs of all.

FartyMcFarter•46m ago
The article has been slashdotted, so I don't know if this one is in there, but:

One I've seen Gemini using a lot is the "I'll shoot straight with you" preamble (or similar phrasing), when it's about to tell me it can't answer the question.

1970-01-01•45m ago
What we really need is a browser plugin underlining these patterns, especially for comments.
charlieflowers•25m ago
This list reads like, "AIs are not your typical braindead person on the street. They actually use a decent but not crazily advanced vocabulary."

I mean, "tapestry" is a great word for something that is interconnected. Why not use it?

tiahura•24m ago
Many of these are standard fare in legal writing.

Negative parallelism is a staple of briefs. "This case is not about free speech. It is about fraud." It does real work when you're contesting the other side's framing.

Tricolons and anaphora are used as persuasion techniques in closing arguments and appellate briefs.

Short punchy fragments help in persuasive briefs where judges are skimming. "The statute is unambiguous."

As with the em dash - let's not throw the baby out with the bathwater.

bryanrasmussen•9m ago
This makes me think of the attractiveness of overly bad writing to writers, as a challenge - the most obvious example being the Bulwer-Lytton award, or the instinctive ignoring of instructions from fiction magazines that might say "we don't want any stories about murderous grandparents, French-bashing, bestiality, bank robbers from the future, or kind-hearted Nazis - and especially do not try to be super brilliant and funny and send us your story about kind-hearted Nazi bank-robbing French-bashing grandparents that like killing people and having sexy fun times with barnyard animals! Because every original thinker like you thinks they are the first to have come up with that idea!" - and then as a writer you feel challenged to do exactly what they say they don't want, because what a glorious triumph if you manage to outdo everyone and get your dreck published because it's dreck that is so bad it's good!

It does not seem like there are lots of people perversely inclined to write a story with all these tropes and words in it, but surely there must be some, because if you make something that beats the LLM (by being creatively good) using all the crap the LLM uses, it would be some sort of John Henry triumph (discounting the final end of John Henry, of course, which is a real downer).

agnishom•8m ago
> (let's play cat and mouse!).

No thanks, I hate this large scale social experiment

netsec_burn•8m ago
Another trope: longer README.md files than anyone would write, or want.
NewsaHackO•4m ago
Yes, to me this is a huge tell. Especially when it goes into detail about pros and cons (using a table) on the most superficial points.
roywiggins•2m ago
Don't forget "The Ludlum Delusion": every header in an article or readme reads like a Robert Ludlum novel title.

CasNum

https://github.com/0x0mer/CasNum
151•aebtebeten•3h ago•22 comments

A decade of Docker containers

https://cacm.acm.org/research/a-decade-of-docker-containers/
217•zacwest•7h ago•157 comments

Dumping Lego NXT firmware off of an existing brick (2025)

https://arcanenibble.github.io/dumping-lego-nxt-firmware-off-of-an-existing-brick.html
135•theblazehen•1d ago•9 comments

Effort to prevent government officials from engaging in prediction markets

https://www.merkley.senate.gov/merkley-klobuchar-launch-new-effort-to-ban-federal-elected-officia...
185•stopbulying•3h ago•67 comments

The Day NY Publishing Lost Its Soul

https://www.honest-broker.com/p/the-day-ny-publishing-lost-its-soul
32•wallflower•3h ago•20 comments

Ki Editor - an editor that operates on the AST

https://ki-editor.org/
354•ravenical•14h ago•129 comments

In 1985 Maxell built a bunch of life-size robots for its bad floppy ad

https://buttondown.com/suchbadtechads/archive/maxell-life-size-robots/
54•rfarley04•3d ago•4 comments

Put the zip code first

https://zipcodefirst.com
167•dsalzman•1h ago•122 comments

Package managers need to cool down

https://nesbitt.io/2026/03/04/package-managers-need-to-cool-down.html
36•zdw•3d ago•26 comments

FLASH radiotherapy's bold approach to cancer treatment

https://spectrum.ieee.org/flash-radiotherapy
180•marc__1•9h ago•55 comments

Caitlin Kalinowski: I resigned from OpenAI

https://twitter.com/kalinowski007/status/2030320074121478618
41•mmaia•1h ago•17 comments

macOS code injection for fun and no profit (2024)

https://mariozechner.at/posts/2024-07-20-macos-code-injection-fun/
61•jstrieb•3d ago•10 comments

Autoresearch: Agents researching on single-GPU nanochat training automatically

https://github.com/karpathy/autoresearch
28•simonpure•4h ago•6 comments

LLM Writing Tropes.md

https://tropes.fyi/tropes-md
68•walterbell•3h ago•27 comments

The influence of anxiety: Harold Bloom and literary inheritance

https://thepointmag.com/examined-life/the-influence-of-anxiety/
6•apollinaire•3d ago•0 comments

How important was the Battle of Hastings?

https://www.historytoday.com/archive/head-head/how-important-was-battle-hastings
4•benbreen•3d ago•2 comments

Why developers using AI are working longer hours

https://www.scientificamerican.com/article/why-developers-using-ai-are-working-longer-hours/
26•birdculture•56m ago•13 comments

The surprising whimsy of the Time Zone Database

https://muddy.jprs.me/links/2026-03-06-the-surprising-whimsy-of-the-time-zone-database/
30•jprs•6h ago•1 comment

Compiling Prolog to Forth [pdf]

https://vfxforth.com/flag/jfar/vol4/no4/article4.pdf
93•PaulHoule•4d ago•9 comments

SigNoz (YC W21, open source Datadog) Is Hiring across roles

https://signoz.io/careers
1•pranay01•7h ago

Re-creating the complex cuisine of prehistoric Europeans

https://arstechnica.com/science/2026/03/recreating-the-complex-cuisine-of-prehistoric-europeans/
61•apollinaire•1d ago•23 comments

Lisp-style C++ template meta programming

https://github.com/mistivia/lmp
6•mistivia•2h ago•0 comments

Ask HN: Would you use a job board where every listing is verified?

21•BelVisgarra•2h ago•39 comments

Trampolining Nix with GenericClosure

https://blog.kleisli.io/post/trampolining-nix-with-generic-closure
6•ret2pop•2d ago•1 comment

The yoghurt delivery women combatting loneliness in Japan

https://www.bbc.com/travel/article/20260302-the-yoghurt-delivery-women-combatting-loneliness-in-j...
192•ranit•11h ago•121 comments

Show HN: ANSI-Saver – A macOS Screensaver

https://github.com/lardissone/ansi-saver
86•lardissone•10h ago•27 comments

Files are the interface humans and agents interact with

https://madalitso.me/notes/why-everyone-is-talking-about-filesystems/
169•malgamves•13h ago•104 comments

Self-Portrait by Ernst Mach (1886)

https://publicdomainreview.org/collection/self-portrait-by-ernst-mach-1886/
87•Hooke•2d ago•16 comments

$3T flows through U.S. nonprofits every year

https://charitysense.com/insights/the-3-trillion-blind-spot
88•mtweak•2h ago•53 comments

Bourdieu's theory of taste: a grumbling abrégé (2023)

https://dynomight.net/bourdieu/
35•sebg•2d ago•12 comments