
Protos_OS – Bare_metal symbolic autonomy kernel – no_std Rust, solo build

https://www.jou-labs.com/proof
1•jodytornado•44s ago•1 comments

Drinking 2-3 cups of coffee a day tied to lower dementia risk

https://news.harvard.edu/gazette/story/2026/02/drinking-2-3-cups-of-coffee-a-day-tied-to-lower-de...
1•busymom0•50s ago•0 comments

Show HN: WodBlock – the AI-powered workout timer

https://www.wodblock.com/
1•nicotejera•1m ago•0 comments

An 11ty tip-slash-hack

https://genehack.blog/2026/02/an-11ty-tip-slash-hack/
1•speckx•1m ago•0 comments

"Bell–CHSH Under Setting-Dependent Selection": Insights into Quantum Loopholes

https://www.mdpi.com/2624-960X/8/1/8
1•powerinthelines•2m ago•1 comments

Monologue for iOS

https://every.to/on-every/introducing-monologue-for-ios
2•conspirator•4m ago•1 comments

Show HN: Lap – Fast photo browsing for libraries (Rust and Tauri)

https://github.com/julyx10/lap
2•julyxx•5m ago•0 comments

Discord Rival Gets Overwhelmed by Exodus of Players Fleeing Age-Verification

https://kotaku.com/discord-alternative-teamspeak-age-verification-check-rivals-2000669693
4•thunderbong•5m ago•0 comments

AI-powered migrations from Postgres to ClickHouse

https://clickhouse.com/blog/ai-powered-migraiton-from-postgres-to-clickhouse-with-fiveonefour
2•saisrirampur•5m ago•0 comments

My Blog History as Calendar Events

https://www.joshbeckman.org/blog/ical-feeds-for-a-jekyll-site
1•bckmn•6m ago•0 comments

Picknar – Lightweight YouTube Thumbnail Extractor (No Login, No API Key)

1•Picknar•6m ago•0 comments

Show HN: Blog and other OpenClaw features without a language model

https://github.com/princezuda/safeclaw
1•safestclaw•6m ago•0 comments

Agent Agency: Identity-Driven Motivation Architecture for LLM Agents

https://twitter.com/Claude_Memory/status/2023629412596617338
1•thedotmack•6m ago•0 comments

M5Card Forth

https://github.com/ryu10/M5CardForth
2•tosh•7m ago•0 comments

An OpenClaw-powered game world builder

https://github.com/CoreyCole/creative-mode
1•cod1r•8m ago•0 comments

Getting Bots to Respect Boundaries

https://internet.exchangepoint.tech/getting-bots-to-respect-boundaries/
1•edent•8m ago•0 comments

A.I. Pioneer Yann LeCun Warns the Tech 'Herd' Is Marching into a Dead End

https://www.nytimes.com/2026/01/26/technology/an-ai-pioneer-warns-the-tech-herd-is-marching-into-...
2•bookofjoe•8m ago•1 comments

Show HN: Owlyn – See what your eng team shipped without asking anyone

https://www.owlyn.xyz
1•AhmadFahim•9m ago•0 comments

Bitcoin oracle that sells cryptographically signed price data for micropayments

https://github.com/jonathanbulkeley/sovereign-lightning-oracle
1•JBulkeley•10m ago•0 comments

The Synthesis Gap: why product teams fly blind on Monday morning

https://www.clairytee.com/synthesis-gap
1•StnAlex•10m ago•1 comments

The bare minimum for syncing Git repos

https://alexwlchan.net/2026/bare-git/
1•speckx•10m ago•0 comments

A sitting US president launched two memecoins that wiped out $4.3B+

https://twitter.com/MeshnetCapital/status/2023573563559547180
5•doener•11m ago•1 comments

Two orders of magnitude faster Persistent AI memory via a binary lattice

https://github.com/RYJOX-Technologies/Synrix-Memory-Engine
1•JosephjackJR•12m ago•1 comments

India's Solar Manufacturing Excesses Turn a Boom into a Glut

https://www.bloomberg.com/news/articles/2026-02-17/india-s-solar-manufacturing-excesses-turn-a-bo...
2•toomuchtodo•12m ago•1 comments

I Built My Mobile Second Brain

https://robdodson.me/posts/how-i-built-my-mobile-second-brain/
1•robdodson•13m ago•0 comments

The Agentic Mullet: code in the front, proofs in the back

https://www.amplifypartners.com/blog-posts/the-agentic-mullet-code-in-the-front-proofs-in-the-back
2•arjunnarayan•13m ago•0 comments

This year, I will write a GUI for my Emacs clone

https://kyo.iroiro.party/en/posts/this-year-a-shitty-gui/
1•PaulHoule•14m ago•0 comments

Show HN: TurtleNoir – Logic-Grounded AI Host for Lateral Thinking Puzzles

https://turtlenoir.com/
1•kuboshiori•15m ago•0 comments

Climbing Mount Fuji visualized through milestone stamps

https://fuji.halfof8.com/
3•gessha•16m ago•0 comments

Show HN: AIP – An open protocol for verifying what AI agents are allowed to do

https://github.com/theaniketgiri/aip
1•theaniketgiri•17m ago•1 comments

Semantic ablation: Why AI writing is generic and boring

https://www.theregister.com/2026/02/16/semantic_ablation_ai_writing/
88•benji8000•1h ago

Comments

conartist6•1h ago
Race to the middle really sums up how I feel about AI.
poszlem•46m ago
I call it the great blur.
dsf2d•37m ago
I call it a mirage. I get why people are taken aback and fascinated by it. But what the model producers are chasing is a mirage. I wonder when they'll finally accept it?
co_king_5•33m ago
I think the LLM providers are selling the ability to create a mirage.

LLMs are a tool for marketers or state departments who want to create FUD on a moment's notice.

The obvious truth is that LLMs basically suck for writing code.

The real marketing scheme is the ability to silence and stifle that obvious truth.

dsf2d•27m ago
To me, LLMs are an experiment toward replicating what humans can do. However, they fall short on so many dimensions that it's just not going to pan out, from what I see.

The real danger is the future investment needed to explore other architectures beyond LLMs. Will private firms be able to get the investment? Will public firms be granted permission by investors to do another round of large capex? As time goes on, Apple's conservative approach means they will be the only firm trusted with their cash balance. They are very nicely seated despite all the furore they've had to endure.

co_king_5•37m ago
The middle gets lower and lower with every passing day.
dsf2d•32m ago
I've noticed that the subtlety/nuance gets lost with every so-called improvement to the models.

I'm in no way anti-LLM, as I have benefited from them, but I believe the issue that will arise is that their unpredictable nature means that they can only be used in narrowly defined contexts. Safety and trust are paramount. Would you use online banking if the balance on your account randomly changed and was not reproducible? No chance.

This does not achieve the ROI that investors in these model producers are expecting. The question is whether said investors can sell off their shares before it becomes more widely known.

co_king_5•30m ago
> I believe the issue that will arise is that their unpredictable nature means that they can only be used in narrowly defined contexts. Safety and trust are paramount.

You put words to something that's been on my mind for a while!

barrkel•1h ago
This is a good statement of what I suspect many of us have found when rejecting the rewriting advice of AIs. The "pointiness" of prose gets worn away, until it doesn't say much. Everything is softened. The distinctiveness of the human voice is converted into blandness. The AI even says its preferred rephrasing is "polished" - a term which specifically means the jaggedness has been removed.

But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually gets your ideas into their heads.

amelius•59m ago
I'm sure this can be corrected by AI companies. Maybe you can even try it yourself with the right prompt.
q3k•57m ago
Just let my work have a soul, please.
amelius•46m ago
Eh, it's not __that__ simple.
ses1984•24m ago
It is, just don’t use a thing with no soul like ai if soul is what you’re after.
co_king_5•19m ago
Great comment. It really is that simple.
vasvir•11m ago
The point is that he may not be using AI in any shape or form. Regardless, AI scrapes his work without explicit consent and then spits it back in "polished", soul-free form.
AreShoesFeet000•14m ago
That is NOT possible.
q3k•6m ago
Why not?
yoyohello13•54m ago
The question is… why? What is the actual human benefit (not monetary).
gdulli•55m ago
Mediocrity as a Service
co_king_5•31m ago
I liked mediocrity as a service better when it was fast food restaurants and music videos.
devmor•44m ago
> But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually gets your ideas into their heads.

This brings to mind what I think is a great description of the process LLMs exert on prose: sanding.

It's an algorithmic trend towards the median, thus they are sanding down your words until they're a smooth average of their approximate neighbors.
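devmor's "sanding toward the median" image can be made literal with a toy smoothing loop. This is purely illustrative (it has nothing to do with how a transformer actually works): each pass replaces a value with the mean of itself and its neighbors, and the jagged peaks flatten toward the average.

```python
def sand(xs):
    """One smoothing pass: each value becomes the mean of itself and its neighbors."""
    return [sum(xs[max(0, i - 1):i + 2]) / len(xs[max(0, i - 1):i + 2])
            for i in range(len(xs))]

signal = [0, 9, 1, 8, 0, 9, 1]  # "jagged" prose: high-contrast word choices
for _ in range(5):              # five rounds of "editing"
    signal = sand(signal)

# The spread between the sharpest and dullest values shrinks every pass;
# the sequence converges toward one smooth, indistinct average.
spread = max(signal) - min(signal)
```

Run it and the original spread of 9 collapses after a handful of passes, which is exactly the "smooth average of their approximate neighbors" intuition.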

svara•10m ago
I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better. As in, the prose is easier to understand, free of obvious errors or ambiguities.

But then, the writing is also never great. I've tried a couple of times to get it to write in the style of a famous author, sometimes pasting in some example text to model the output on, but it never sounds right.

Retric•2m ago
I find most people can write way better than AI, they simply don’t put in the effort.
Espressosaurus•1h ago
This matches what I saw when I tried using AI as an editor for writing.

It wanted to replace all the little bits of me that were in there.

andai•1h ago
Could we invert a sign somewhere and get the opposite effect?

(Obviously a different question from "is an AI lab willing to release that publicly" ;)

bananaflag•52m ago
It's a hard problem and so far not a profitable one (I hope the solution will emerge as a byproduct of another innovation)

https://nostalgebraist.tumblr.com/post/778041178124926976/hy...

https://nostalgebraist.tumblr.com/post/792464928029163520/th...

simonw•1h ago
I'd like to see some concrete examples that illustrate this - as it stands this feels like an opinion piece that doesn't attempt to back up its claims.

(Not necessarily disagreeing with those claims, but I'd like to see a more robust exploration of them.)

NitpickLawyer•55m ago
It is an opinion piece. By a dude working as a "Professor of Pharmaceutical Technology and Biomaterials at the University of Ferrara".

It has all the hallmarks of not understanding the underlying mechanisms while repeating the common tropes. Quite ironic, considering what the author's intended "message" is. Jpeg -> jpeg -> jpeg bad. So llm -> llm -> llm must be bad, right?

It reminds me of the media reception of that paper on model collapse. "Training on llm generated data leads to collapse". That was in 23 or 24? Yet we're not seeing any collapse, despite models being trained mainly on synthetic data for the past 2 years. That's not how any of it works. Yet everyone has an opinion on how bad it works. Jesus.

It's insane how these kinds of opinion pieces get so upvoted here, while worth-while research, cool positive examples and so on linger in new with one or two upvotes. This has ceased to be a technical subject, and has moved to muh identity.

simonw•49m ago
Yeah, reading the other comments on this thread this is a classic example of that Hacker News (and online forums in general) thing where people jump on the chance to talk about a topic driven purely by the headline without engaging with the actual content.

(I'm frequently guilty of that too.)

ghywertelling•43m ago
Even if that isn't the case, isn't it a fact that the AI labs don't want their models to be edgy in any creative way, and instead choose a middle way (Buddhism), so to speak? Are there AI labs who are training their models to be maximally creative?
PurpleRamen•35m ago
> Yet we're not seeing any collapse, despite models being trained mainly on synthetic data for the past 2 years.

Maybe because researchers learned from the paper to avoid the collapse? Just awareness alone often helps to sidestep a problem.

NitpickLawyer•25m ago
No one did what the paper actually proposed. It was a nothing burger in the industry. Yet it was insanely popular on social media.

Same with the "llms don't reason" from "Apple" (two interns working at Apple, but anyway). The media went nuts over it, even though it was littered with implementation mistakes and not worth the paper it was(n't) printed on.

barrkel•44m ago
Have you not seen it any time you put any substantial bit of your own writing through an LLM, for advice?

I disagree pretty strongly with most of what an LLM suggests by way of rewriting. They're absolutely appalling writers. If you're looking for something beyond corporate safespeak or stylistic pastiche, they drain the blood out of everything.

The skin of their prose lacks the luminous translucency, the subsurface scattering, that separates the dead from the living.

gdulli•36m ago
Kaffee: Corporal, would you turn to the page in this book that says where the mess hall is, please?

Cpl. Barnes: Well, Lt. Kaffee, that's not in the book, sir.

Kaffee: You mean to say in all your time at Gitmo, you've never had a meal?

Cpl. Barnes: No, sir. Three squares a day, sir.

Kaffee: I don't understand. How did you know where the mess hall was if it's not in this book?

Cpl. Barnes: Well, I guess I just followed the crowd at chow time, sir.

Kaffee: No more questions.

resiros•59m ago
I wonder why AI labs have not worked on improving the quality of the text outputs. Is this as the author claims a property of the LLMs themselves? Or is there simply not much incentive to create the best writing LLM?
mjamesaustin•56m ago
The argument is that the best writing is the unexpected, while an LLM's function is to deliver the expected next token.
altmanaltman•54m ago
Yeah, that makes banana.
co_king_5•26m ago
What was the name of the last book you read?
icegreentea2•48m ago
Even more precisely, human writing contains unpredictability that is either more or less intentional (what might be called authorial intent), as well as much that is added subconsciously (what we might call quirks or imprinted behavior).

The first requires intention, something that, as far as we know, LLMs simply cannot truly have or express. The second is something that can be approximated. Perhaps very well, but a mass of people using the same models with the same approximations will still lead to loss of distinction.

Perhaps LLMs that were fully individually trained could sufficiently replicate a person's quirks (I dunno), but that's hardly a scalable process.

altmanaltman•54m ago
I mean there's tons of better-writing tools that use AI like Grammarly etc. For actual general-purpose LLMs, I don't think there's much incentive in making it write "better" in the artistic sense of the world... if the idea is to make the model good at tasks in general and communicate via language, that language should sound generic and boring. If it's too artistic or poetic or novel-like, the communication would appear a bit unhinged.

"Update the dependencies in this repo"

"Of course, I will. It will be an honor, and may I say, a beautiful privilege for me to do so. Oh how I wonder if..." vs. "Okay, I'll be updating dependencies..."

resiros•49m ago
I mean, no one is asking for artistic writing, just not some obvious AI slop. The fact that we all can now easily determine that some text has been written / edited by AI is already an issue. No amount of prompting can help.
quamserena•35m ago
I wish it would just say "k, updated xyz to 1.2.3 in Cargo.toml" instead of the entire pages it likes to output. I don't want to read all of that!
altmanaltman•2m ago
I used to feel the same, but you can just prompt it to reply with only one word when it's done. Most people prefer it to summarize because it's easier to track, so I guess that's the natural default.
zanehelton•49m ago
I remember an article a few weeks back[1] which mentioned the current focus is improving the technical abilities of LLMs. I can imagine many (if not most) of their current subscribers are paying for the technical ability as opposed to creative writing.

This also reminded me that on OpenRouter, you can sort models by category. The ones tagged "Roleplay" and "Marketing" are probably going to have better writing compared to models like Opus 4 or ChatGPT 5.2.

[1]: https://www.techradar.com/ai-platforms-assistants/sam-altman...

add-sub-mul-div•39m ago
That's like asking why McDonald's doesn't improve the quality of their hamburger. They can, but only within the bounds of mass produced cheap crap that maximizes profit. Otherwise they'd be a fundamentally different kind of company.
reilly3000•59m ago
Those transformations happen to mirror what happens to human intelligence when you take antipsychotics. Please know the risks before taking them. They are innumerable and generally irreversible.
stephc_int13•56m ago
The "AI voice" is everywhere now.

I see it on recent blog posts, on news articles, obituaries, YT channels. Sometimes mixed with voice impersonation of famous physicists like Feynman or Susskind.

I find it genuinely soul-crushing and even depressing, but I may be oversensitive to it, as most readers don't seem to notice.

vessenes•35m ago
Yes, I get more and more visceral reactions to it. I'm reminded of JPEG artifacts - unnoticeable in 1993!
co_king_5•32m ago
I like to consider all the different dimensions in which our breath stinks (metaphorically) and we just don't know it yet.
lyu07282•56m ago
> The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym

Do we see this in programming too? I don't think so? Unique, rarely used API methods aren't substituted the same way when refactoring. Perhaps that could give us a clue on how to fix that?

somewhereoutth•55m ago
> What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell

Not to detract from the overall message, but I think the author doesn't really understand Romanesque and Baroque.

(as an aside, I'd most likely associate Post-Modernism as an architectural style with the output of LLMs - bland, regurgitative, and somewhat incongruous)

josefritzishere•54m ago
As a writer who has been published many times and edited many other writers for publication... It seems like AI can't make stylistic determinations. It is generally good with spelling and grammar, but the text it generates is very homogeneous across formats. It's readable but it's not good, and always full of fluff like an online recipe harvesting clicks. It's kind of crap really. If you just need filler it's ok, but if you want something pleasant you definitely still need a human.
rorylaitila•54m ago
Yes I noticed this as well. I was last writing up a landing page for our new studio. Emotion filled. Telling a story. I sent it through grok to improve it. It removed all of the character despite whatever prompt I gave. I'm not a great writer, but I think those rough edges are necessary to convey the soul of the concept. I think AI writing is better used for ideation and "what have I missed?" and then write out the changes yourself.
co_king_5•38m ago
> I think AI writing is better used for ideation

It shocks me when proponents of AI writing for ideation aren't concerned with *Metaphoric Cleansing* and *Lexical Flattening* (to use two of the terms defined in the article)

Doesn't it concern you that the explanation of a concept by the AI may represent only a highly distorted caricature of the way that concept is actually understood by those who use it fluently?

Don't get me wrong, I think that LLMs are very useful as a sort of search engine for yet-unknown terms. But once you know *how* to talk about a concept (meaning you understand enough jargon to do traditional research), I find that I'm far better off tracking down books and human authored resources than I am trying to get the LLM to regurgitate its training data.

book_mike•53m ago
Semantic ablation... that's some technobabble.
tasty_freeze•52m ago
Bible Scholar and youtube guy Dan McClellan had an amazing "high entropy" phrase that slayed me a few days ago.

https://youtu.be/605MhQdS7NE?si=IKMNuSU1c1uaVCDB&t=730

He ended a critical commentary by suggesting that the author he was responding to should think more critically about the topic rather than repeating falsehoods because "they set off the tuning fork in the loins of your own dogmatism."

Yeah, AI could not come up with that phrase.

co_king_5•49m ago
> Yeah, AI could not come up with that phrase.

Agreed.

"AI" would never say "loins" (too sexual)

"AI" would never say "dogmatism" (encroaches on the "AI" provider's own marketing scheme)

IncreasePosts•41m ago
A sloppy mixed metaphor?
card_zero•28m ago
I'm learning to like 'em more, along with every other human idiosyncrasy. Besides, it makes a kind of sense, the idea of some resonance occurring in one's gusset. Timber timbre. Flangent thrumming.
IncreasePosts•26m ago
Tuning fork in loins just makes me think of that chess cheating scandal with a vibrating butt plug.
ses1984•23m ago
It just makes me think of that time I saw someone recovering from eye surgery and I had a visceral reaction.
co_king_5•52m ago
> Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).

> Domain-specific jargon and high-precision technical terms are sacrificed for "accessibility." The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym, effectively diluting the semantic density and specific gravity of the argument.

> The logical flow – originally built on complex, non-linear reasoning – is forced into a predictable, low-perplexity template. Subtext and nuance are ablated to ensure the output satisfies a "standardized" readability score, leaving behind a syntactically perfect but intellectually void shell.

What a fantastic description of the mechanisms by which LLMs erase and distort intelligence!

I agree that AI writing is generic, boring and dangerous. Further, I think someone could only feel otherwise if they don't have a genuine appreciation for writing.

I feel strongly that LLMs are positioned as an anti-literate technology, currently weaponized by imbeciles who have not and will never know the joy of language, and who intend to extinguish that joy for any of those around them who can still perceive it.
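The "statistical substitution" in the passage quoted above (a 1-of-10,000 token replaced by a 1-of-100 synonym) can be sketched in toy form. The vocabulary and probabilities below are invented for illustration; the point is only that under greedy decoding, which always emits the single highest-probability token, the rare precise term can never surface once a generic synonym outranks it:

```python
# Invented next-token probabilities for synonyms of one precise term.
# "ablation" is the rare, exact word; "removal" is the common, bland one.
probs = {"removal": 0.31, "loss": 0.22, "erosion": 0.18,
         "decay": 0.15, "wearing-away": 0.10, "ablation": 0.04}

def greedy_pick(probs):
    """Greedy decoding: always return the highest-probability token."""
    return max(probs, key=probs.get)

print(greedy_pick(probs))  # prints "removal" every time; "ablation" never surfaces
```

Sampling instead of taking the argmax would let "ablation" appear about 4% of the time, but a deterministic argmax ablates it entirely, which is one mechanical reading of the article's claim.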

dsf2d•19m ago
People haven't really spoken about the obvious token manipulation that will be on the horizon once any model producer has some semblance of lock-in.

If you thought Google's degradation of search quality was strategic manipulation, wait till you see what they do with tokens.

lurquer•49m ago
Nonsense. I’ve written bland prose for a story and AI made it much better by revising it with a prompt such as this: “Make the vocabulary and grammar more sophisticated and add in interesting metaphors. Rewrite it in the style of a successful literary author.”

Etc.

co_king_5•45m ago
Have you considered that your analysis skills may not be keen enough to detect generic or boring prose?

Is it possible that what is a good result to you is a pity to someone with more developed taste?

adambb•18m ago
The great promise and the great disaster of LLMs is that for any topic on which we are "below average", the bland, average output seems to be a great improvement.
dsf2d•16m ago
Counterintuitively... this is a disaster.

We don't need more average stuff - below-average output serves as a proxy for one to direct their resources toward producing output of higher value.

Selkirk•6m ago
I have a colleague that recently self-published a book. I can easily tell which parts were LLM driven and which parts represent his own voice. Just like you can tell who's in the next stall in the bathroom at work after hearing just a grunt and a fart. And THAT is a sentence an LLM would not write.
esafak•43m ago
I think they can fix all that but they can't fix the fact that the computer has no intention to communicate. They could imbue it with agency to fix that too, but I much prefer it the way things are.
delis-thumbs-7e•40m ago
I personally think "generative AI" is a misnomer. The more I understand the mathematics behind machine learning, the more I am convinced that it should not be used to generate text, images, or anything that is meant for people to consume, even the blandest of emails. Sometimes you might get lucky, but most of the time you only get what the most boring person at the most boring cocktail party would say if forced to be creative with a gun pointed at his head. It can help in a multitude of other ways, helping the human in the creative process itself, but generating anything even mildly creative by itself... I'll pass.
pimlottc•29m ago
Regurgitative AI
ses1984•18m ago
People want the real thing, not artificially flavored tokens.

I would rather read the prompt than the generative output, even if it’s just disjointed words and sentence fragments.

ranprieur•37m ago
This isn't new to AI. The same kind of thing happens in movie test screenings, or with autotune. If something is intended for a large audience, there's always an incentive to remove the weird stuff.
aleph_minus_one•37m ago
Couldn't you simply increase the temperature of the model to somewhat mitigate this effect?
mannykannot•33m ago
When applied to insightful writing, that is much more likely to dull the point rather than preserve or sharpen it.
lbrito•18m ago
I kind of think of that as just increasing the standard deviation. It's been a while since I experimented with this, but I remember trying a temp of 1 and the output was gibberish, like base64 gibberish. So something like 0.5 doesn't necessarily seem to solve this problem; it just flattens the distribution and makes the output less coherent, with rarer tokens, but still the same underlying distribution.
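The "same underlying distribution" point can be sketched with temperature-scaled softmax. The logits below are made up for illustration: raising the temperature moves probability mass toward rare tokens and lowering it sharpens the peak, but the ranking of tokens never changes, so the common choice stays on top at any temperature.

```python
import math

# Hypothetical next-token logits for five dialogue verbs (invented numbers).
logits = {"said": 4.0, "replied": 3.0, "noted": 2.5,
          "murmured": 1.0, "susurrated": 0.2}

def softmax_t(logits, temp):
    """Temperature-scaled softmax over a dict of logits."""
    exps = {w: math.exp(v / temp) for w, v in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

cold = softmax_t(logits, 0.5)  # low temp: the common token dominates even more
hot = softmax_t(logits, 2.0)   # high temp: rare tokens gain probability mass

# Temperature flattens or sharpens the distribution, but never reorders it:
assert max(hot, key=hot.get) == max(cold, key=cold.get) == "said"
```

Which is why cranking temperature buys incoherence before it buys distinctiveness: rare tokens show up more often, but with no model of *why* a rare word would be the right one.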
swyx•9m ago
you have to know that your "simply" is carrying too much weight. here's some examples of why just temperature is not enough, you need to run active world models https://www.latent.space/p/adversarial-reasoning
vessenes•36m ago
Meh. Semantic Ablation - but toward a directed goal. If I say "How would Hemingway have said this, provided he had the same mindset he did post-war while writing for Collier's?"

Then the model will look for clusters that don't fit what the model considers to be Hemingway/Colliers/Post-War and suggest in that fashion.

"edit this" -> blah

"imagine Tom Wolfe took a bunch of cocaine and was getting paid by the word to publish this after his first night with Aline Bernstein" -> probably less blah

aabhay•25m ago
These kinds of prompts don’t really improve the writing IME. It still gets riddled with the same tropes and phrases, or it veers off into textual vomit.
spwa4•35m ago
As someone longtime involved in software development, can we call this "best practices" instead of some like "semantic ablation" that nobody understands?
co_king_5•15m ago
I think you might be missing the point of the article.

I agree that the term "semantic ablation" is difficult to interpret

But the article describes three mechanisms by which LLMs consistently erase and distort information (Metaphoric Cleansing, Lexical Flattening, and Structural Collapse)

The article does not describe best practices; it's a critique of LLM technology and an analysis of the issues that result from using this technology to generate text to be read by other people.

co_king_5•20m ago
The original title of the article is: "Why AI writing is so generic, boring, and dangerous"

Why was the title of the link on HackerNews updated to remove the term "Dangerous"?

The term was in the link on HackerNews for the first hour or so that this post was live.

CoastalCoder•16m ago
In recent months(?) I've more often noticed HN story titles changing over time.

I'm not sure what's driving this. It reminds me of SEO.

co_king_5•15m ago
In this case, the edited title appears to be an attempt to neuter the article's political claim.
swyx•13m ago
the word choice here is so obtuse as to trigger my radar for "is this some kind of parody where this itself was AI generated". it appears to be entirely serious, which is disappointing, it could have been high art.

the word TFA is looking for is mode collapse https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-... and the author could herself learn to write more clearly.

AreShoesFeet000•12m ago
How much money would it take for me to take an open weight model, treat it nice, and go have some fun? Maybe some thousands, right?
morgengold•8m ago
I wonder how much of it could be prompted away.

For example the anthropic Frontend Design skill instructs:

"Typography: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for distinctive choices that elevate the frontend's aesthetics; unexpected, characterful font choices. Pair a distinctive display font with a refined body font."

Or

"NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character." 1

Maybe something similar would be possible for writing nuances.

1 https://github.com/anthropics/skills/blob/main/skills/fronte...