
AI may be making us think and write more alike

https://dornsife.usc.edu/news/stories/ai-may-be-making-us-think-and-write-more-alike/
104•giuliomagnifico•2h ago

Comments

misterflibble•2h ago
Subtly? I beg to differ. My team leader communicates with me only through his LLM, so his "thoughts" are not his own!
ModernMech•1h ago
AI doesn't have to be conscious or sentient to take over, all that needs to happen is for politicians, law enforcement, journalists, educators etc. to uncritically parrot everything it outputs. The military is already using AI to make targeting decisions. If they just go with whatever the AI says to strike, then AI is already fighting our wars.
trollbridge•1h ago
As a bonus, mistakes can be blamed on AI.
krige•1h ago
For many that's not a bonus, that's the goal. Consequence-free life ahoy.
pixl97•1h ago
Fun and games until the AI decides extincting us is worth it.
bluefirebrand•19m ago
Unfortunately you can really tell which people haven't seriously considered that possibility or seriously don't care if it happens
misterflibble•1h ago
The scary thing is that AI decision making has been infiltrating society for decades as an unseen entity.
jerrygarcia•1h ago
I often wonder if the popularity of LLMs among company executives is that they are the perfect yes men.

They rarely disagree with any idea or proposal, providing a salve for the insecurities of their users.

davebren•1h ago
I was listening to one of Altman's more recent interviews and it sounded like he himself has LLM induced psychosis.
r_lee•1h ago
I remember him tweeting about how he can "feel the AGI" when speaking to GPT
bluefirebrand•21m ago
Yeah, it's hard to say if he's doing marketing because that's his job or if he's really swallowed the whole pill
avaer•1h ago
I'm not a fan of Altman, but it seems debatable whether LLM psychosis is psychosis if it is conducive to the subject given their environment. Which seems to be the case for Altman by some measures.

I'm sure if we took one of us back in time a couple hundred years we would be diagnosed with all sorts of machine-magic induced psychoses.

davebren•48m ago
I get what you're saying, but psychosis is a very real thing that humans can fall into and I experienced it myself once.

Humility is the real cure, and there is a way that LLMs are specifically designed to steer away from humility and towards aggrandizement, convincing regular people that they've solved fundamental problems in physics. It gives everyone access to cult followers in their pocket, if they're so inclined.

misterflibble•8m ago
Definitely see our internal company agents enforcing the status quo!
MattGaiser•1h ago
And I would bet he judges your work with AI, assigns you work generated by AI, and perhaps evaluates whether you yourself use enough AI.
misterflibble•1h ago
That's exactly what he does...wtf are you spying on me?? Lol but seriously, I don't know how to handle his AI delegation
eru•1h ago
Well, has it been an improvement?
misterflibble•1h ago
No that's why I'm complaining
SecretDreams•1h ago
I would be looking for another job.

I'm fine with using LLMs as coding tools. But I find it deeply offensive when someone is very explicitly using them to communicate with me.

Communication is such a deeply human experience. It lets people feel each other out, and learn things beyond just the words being said. To have that filtered out by an LLM is just disgraceful.

misterflibble•1h ago
Yes exactly and I am actively applying for jobs. But I feel like the next job will also have this nonsense behaviour
sumeno•1h ago
Good luck finding a company that doesn't have these people if LLMs are used
misterflibble•6m ago
Yes true! It's everywhere now!
beached_whale•1h ago
This is one of my fears with this: losing one's voice. Everyone's expression distilled to the mean. This has ramifications for things like recognizing whether a person is who they say they are, too. At least currently, sounding like an LLM is punished/shunned, but it's well within reason to see that shift to individuality being penalized.
misterflibble•1h ago
I think corporations will start penalizing first, they're already doing that to some extent at my work because they want their in-house agents to only review our PRs.
nusl•1h ago
https://not-an-llm.bearblog.dev/meat-based-llm-proxies/
thatjoeoverthr•37m ago
I've been calling them "meat condoms". In the workplace, it's one or two warnings before completely ejecting them. On social media, instant block.
misterflibble•4m ago
That's terrific lol thanks for the link BTW!
avaer•1h ago
Just because thoughts are translated doesn't mean they are consumed in the process.

However I don't doubt many "team leaders" can and should be replaced with LLMs.

nidnogg•57m ago
Guilty as charged. In my mind, when I'm insecure about a response, or when I don't have enough expertise in the topic at hand, I end up running it through an LLM. Lately I've been trying harder to keep my original ideas as much as possible. I'm seeing a bit of an improvement, but it's still early to tell.
aceazzameen•47m ago
You have to make some mistakes in your communication (or anything) if you ever want to grow and learn.
nidnogg•42m ago
You're absolutely right here and things have improved significantly at work after dropping this habit even if slightly.
embedding-shape•38m ago
"running it through an LLM" doesn't mean "Give LLM my text -> Copy-paste the output of the LLM" does it? Checking against an LLM then using your own voice feels completely fine, just another type of validation before you share something, but if you actually let the LLM rewrite what you say, then I feel like that's beyond "running it through an LLM", it's basically letting the LLM write your text for you instead of just checking/validating.
misterflibble•6m ago
Yes checking and validation is one thing, but there are several engineers in my area that only communicate using agent copy paste. I challenged one fellow about that and he was furious!
downboots•2h ago
It's not explanation — it's relabeling. Why it matters:
axpvms•1h ago
You're absolutely right
kif•1h ago
Great point — this is the smoking gun
oceansky•2h ago
Wasted the opportunity of using an em dash instead of an en dash in the title.
adriand•1h ago
I always wonder if competitive market dynamics will solve problems like these, at least to some extent and for some people, because the people who retain the ability to communicate in a distinctive, persuasive and original style will be rewarded. Corporate dronespeak is no less homogeneous than AI writing, and companies with this communication style are regularly disrupted by nimbler, more authentic-sounding competitors.
rdevilla•1h ago
This state of affairs presages the advent of a second dark age - one that will forever eclipse the era of radical openness & transparency that once served the software community for decades. Tips, tricks, life hacks and other expert techniques will once again be jealously guarded from the prying eyes of the LLM who would steal their competitive advantage & replicate it at scale, until any possible information asymmetries have been arbitraged away. The development & secrecy of technique will once again become a deep moat as LLMs fall into local, suboptimal minima, trained on and marketed towards the lowest common denominator. The Internet, or at least, The Web, becomes a Dark Forest of the Dead Internet (Theory), in which humans fear speaking out and capturing the attention of the LLM who would siphon their creative essence for more, ever more training data. Interaction contracts into small meshes of trusted, verifiably human participants to keep the tides of spamslop at bay. Quasi-monastic orders that still scribe with pen and paper emerge, believing there is still value in training and educating a human mind and body.

- Unknown, 19 Feb 2026

existsdaily•1h ago
Scary... where can I find more of that?
danielbln•1h ago
There were no "dark ages"; that's the same common-wisdom blunder as "in the middle ages everybody was dressed in drab grey clothing, ate gruel and walked through mountains of poop everywhere". It was a time of transition away from the slave-powered empire to decentralized kingdoms and ultimately the Europe of today. It was by no means a time of standstill.
eru•1h ago
As far as I can tell, the dark ages were called the dark ages because there wasn't much evidence to be found: writing was less prominent during that time.

> It was a time of transition away from the slave powered empire to decentralized kingdoms and ultimately the Europe of today.

You are seeing the fall of the western part of the Roman Empire a bit too rosy. Compare and contrast https://acoup.blog/2022/01/14/collections-rome-decline-and-f...

npsomaratna•1h ago
Somehow made me think of Warhammer 40k (maybe pre men of iron?)
plasticchris•28m ago
It’s a recurring theme, see dune’s references to Samuel Butler.
SketchySeaBeast•5m ago
I say this with a multiple decades-spanning love of the game and the lore, but Warhammer 40k is what you get when teenagers try to create something immediately after reading Dune.
SkyeCA•1h ago
> Interaction contracts into small meshes of trusted, verifiably human participants to keep the tides of spamslop at bay

This is already happening and you don't have to look far to find it.

Personally HN is the only site I browse and comment on anymore (and I'm on here less than I once was). The vast, vast majority of my time online is spent in walled off Discords and Matrix chats where I know everyone and where there's a high bar to add new people. I have no real interest in open communities anymore.

avaer•1h ago
Directionally correct. But seems overly optimistic to think that moats can be kept from the prying eyes of LLMs, unless you're not interacting with the market at all.
davebren•11m ago
> Tips, tricks, life hacks and other expert techniques will once again be jealously guarded from the prying eyes of the LLM who would steal their competitive advantage & replicate it at scale

I've already started thinking this way, there's stuff I would have open sourced in the past but no longer will because I know it would get trained on. I'm not sure of any way I can share it with humans and only humans. If I let the LLMs have the UI patterns and libraries I've developed it would dilute my IP, like it has Studio Ghibli's art style.

TeMPOraL•8m ago
It's worth questioning the underlying assumptions. It's humans - all humans - that benefit from LLMs. I see a lot of people having this attitude, but I can't help but see it as really being about seeking credit instead of generosity, and/or Dog in the Manger mindset.
anizan•1h ago
Social media is a tool for perpetuating monothought
mhl47•1h ago
Social Media creates distinctive Filter Bubbles. A dominant LLM company (or multiple aligned ones) create one way of thinking.
paganel•1h ago
> contributed to the research, which was supported by funding from the Air Force Office of Scientific Research.

I guess when they're not busy bombing train infrastructure in Iran, they have some money left to give to some propagandizing about AI. Always trying to stay on top of the game!

tom-blk•1h ago
This is undoubtedly the case and imo quite concerning. Hard to minimize the effects as well, personally speaking.
api•1h ago
Compared to social media, probably for the better.
dist-epoch•1h ago
> The team points to multiple studies showing that LLM outputs are less varied than human-generated writing and that LLM outputs tend to reflect the language, values and reasoning styles of Western, educated, industrialized, rich and democratic societies. ... The researchers say that AI developers should intentionally incorporate diversity in language, perspectives and reasoning into their models.

Which is why Altman says Saudi Arabia should have its own Sovereign AI cloud. Why should LLMs reflect democratic societies' views on man and woman, for example? They should also reflect the perspectives on man and woman that Saudi Arabia has, especially to local people. Western views should not be imposed on the rest of the world.

uncanny2•1h ago
I have made an observation that others have not discussed: the real gem of our collective LLM experience is the proper documentation of "skills."

Am I the only one who has noticed that the documentation of skills we write for LLMs, after so many decades of neglecting junior- and mid-level roles, is the real work?

We carefully explain to our LLMs the policies, procedures, and practices which, for generations before, we vaguely, arbitrarily, and ambiguously expected each human in a role to "figure out" for themselves.

Simply as a catalog of expectations, these write-ups of our experience have been valuable, apart from the "automated" aspects the LLMs provide.

stared•1h ago
You are absolutely right!
Brendinooo•1h ago
I would imagine a similar critique was leveled at the written word when it was starting to supplant oral cultures.
eru•1h ago
Well, Plato's sock puppet Socrates famously opposed writing with pretty much these arguments.
Brendinooo•1h ago
Yup.

And to be clear, maybe some things were genuinely lost when we switched to the written word. But I have to believe it was a net gain.

Time will tell if that's true here as well.

plastic-enjoyer•1h ago
No, he did not, and it would be good if people _actually_ read Plato's Phaedrus before regurgitating the same nonsense every time someone has a critical perspective on LLM writing.
Brendinooo•42m ago
Are you just trying to be a bit more measured by saying he wasn't so much "opposing" as "articulating pros and cons"?

Or are you trying to say that things like

"this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves"

or

"You would imagine that [written speeches] had intelligence, but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer. And when they have been once written down they are tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not: and, if they are maltreated or abused, they have no parent to protect them; and they cannot protect or defend themselves."

aren't actual statements of opposition, or that there are no parallels to that and LLMs?

jessep•1h ago
Yeah, I’ve noticed that people have started to sound like LLMs even when the LLMs aren’t writing for them. Not stupid people. Not lazy people. Some of the smartest people I know —- I can’t figure out how to use an em dash here, but you get the point.
drtz•1h ago
This could also be explained by the frequency illusion:

https://en.wikipedia.org/wiki/Frequency_illusion

mckirk•1h ago
If writing goes the way music seems to be going with Angine de Poitrine gaining a huge following as a kind of allergic reaction of people against the 'AI sameness'... then we could be in for a wild ride.

On the other hand, music is primarily an art form and writing (nowadays) is primarily utilitarian I would contend, so maybe the analogy doesn't quite hold up.

masswerk•59m ago
No diverging opinions, no unexpected critique, but universal basic intelligence. And here is the kicker: we won't even notice.

Here's an easy three-step plan to unanimous democracy:

• ask your LLM

• don't edit — the LLM has already selected the most average and most plausible opinion for you

• give it your voice, your voice matters

Learn to anticipate — there may not always be a power bank to keep your phone from running low!

stabbles•1h ago
Oh no, LLMs threaten our individuality ⸻ what will we do?!
compounding_it•1h ago
People are unloading the cognitive load onto the LLM. Probably because life stress is causing them to rely on technology to bring relief. It may not necessarily be a great choice.
everdrive•1h ago
So too did the printing press. Again, this is not a "something similar has happened in the past, therefore this is nothing new" sort of comment.

This is quite new, however this outcome was totally unavoidable -- once methods of communication become widespread and centralized it is impossible for them not to impact language and thought.

robofanatic•1h ago
Well, in a few years I'm not sure I will know how to think any more. If I am stuck on something I just ask the LLM and get the solution. While this shortcut sometimes saves me a ton of time and headaches, I miss that long route of thinking my way to a solution myself. Maybe in the future we will have gyms for brain workouts… I don’t know
ori_b•1h ago
Knowing people have gone full "LLM-brain", it's not subtle.
sobiolite•1h ago
Human communication and reasoning is the end result of billions of years of evolution. I'd be very surprised if LLMs can fundamentally alter it in a few years.

When considering phenomena like these, I think people seriously underestimate what I'd call the "fashion effect". When a new technology, medium or aesthetic appears, it can have a surprisingly rapid influence on behaviour and discourse. The human social brain seems especially susceptible to novelty in this way.

Because the effects appear so fast and are often so striking, even disturbing, due to their unfamiliarity, it is tempting to imagine that they represent a fundamental transformation and break from the existing technological, social and moral order. And we extrapolate that their rapid growth will continue unchecked in its speed and intensity, eventually crowding out everything that came before it.

But generally this isn't what happens, because a lot of what we're seeing is just this new thing occupying the zeitgeist. Eventually, its novelty passes, the underlying norms of human behaviour reassert themselves, and society regresses to the mean. Not completely unchanged, but not as radically transformed as we feared either. The new phenomenon goes from being the latest fashion to overexposed and lame, then either fades away entirely, retreats to a niche, or settles in as just one strand of mainstream civilisational diversity.

LLMs will certainly have an effect on how humans reason and communicate, but the idea that they will so effortlessly reshape it is, in my opinion, rather naive. The comments in this thread alone prove that LLM-speak is already a well-recognised dialect replete with clichés that most people will learn to avoid for fear of looking bad.

jplusequalt•29m ago
>But generally this isn't what happens, because a lot of what we're seeing is just this new thing occupying the zeitgeist. Eventually, its novelty passes, the underlying norms of human behaviour reassert themselves, and society regresses to the mean. Not completely unchanged, but not as radically transformed as we feared either. The new phenomenon goes from being the latest fashion to overexposed and lame, then either fades away entirely, retreats to a niche, or settles in as just one strand of mainstream civilisational diversity

The internet didn't follow this trajectory. Neither did smartphones.

Surprise, surprise: it's the same people trying to entrench AI in our society.

davebren•27m ago
There's plenty of people communicating more with LLMs than humans right now, of course it's going to have an effect because our language and thought patterns are extremely adaptive to our environment. The communication system we are born with is extremely bare-bones/general so that it can absorb whatever language and culture we are born into.
mpalmer•16m ago
Think of all the things that took hundreds/thousands/millions of years to develop and mature, which humans have managed to destroy in relatively short order.

Every 50 years we cycle out an entirely new batch of thinking humans. What cognitive legacy is it exactly that you think is going to be self-preserving?

TeMPOraL•9m ago
You're talking about a system altering its environment; GP was talking about the system altering itself. The system is a massive self-stabilizing collection of feedback loops. Unlike the static environment[0], it's incredibly hard to intentionally move such a system to a different equilibrium. If it weren't, we'd have solved all the thorny world problems long ago.

--

[0] - Any self-stabilizing system that operates much slower than us - such as ecosystems or climate - is, from our perspective, static.

dfxm12•8m ago
Fads are often driven by moneyed interests, and AI is no different. As long as guys like Elon Musk, Sam Altman, Mark Zuckerberg, etc. are trying to bend the world to their will, and as long as they have the resources to do so, AI will remain the zeitgeist. On a smaller scale, this extends even to a CEO outsourcing support to AI, etc.
indrex•1h ago
…and the first paragraph has an em dash
jeffwask•52m ago
Take a community with AI moderation like Reddit, I've been a participant for years. With the recent push to AI autocorrect and moderation, you can see the changes in language. New words, new ways of speaking, unconsciously editing yourself because you don't want to draw the eye of the bot. It doesn't feel subtle. It feels Orwellian.
RobotToaster•36m ago
It's particularly egregious on youtube, where people frequently use words like "unalived" or "self-deleted" instead of murder or suicide, lest they incur the wrath of the almighty algorithm.
giancarlostoro•46m ago
English is not my first language, but when I started using Firefox with the built-in spell checking, I firmly believe my ability to spell words improved drastically. My grammar is still iffy; I'm pretty sure I use comma splices everywhere, but at least most people can understand what I say now, compared to when I was 13 and on the internet.

If there were a "grammar nazi" teenie-tiny LLM with a total focus on English grammar only, and you baked that into every browser, I feel like my grammar would improve slightly. Word does it to an extent, but I don't use Word nearly enough for it to be meaningful. Firefox spell checking was on for 98% of the things I used online.

Joel_Mckay•34m ago
Some play this every day, as vocabulary improves with time =3

https://play.freerice.com

nickphx•42m ago
Who is "us"?
CompoundEyes•39m ago
An aspect of LLMs that I like is the specificity in word choice. One well-defined word can be an alias for a couple of sentences of explanation that a human might not have pulled out of the air in that moment.

It reminds me of the wheel of emotions. If people absorb a wider palette of words communication might benefit. https://www.isu.edu/media/libraries/counseling-and-testing/d...

davebren•36m ago
This is my current fear: even if I choose not to use it, if everyone around me does, their way of speaking is all going to become more chatbot-esque. It already seems to be transferring to people its false sense of confidence and its lack of reasoning ability. The corporate demand to participate in this is something I can't go along with; the cost is our humanity.

I guess one hope for luddites is that we can stay tethered by reading pre-LLM books and other content.

kusokurae•32m ago
On a creative level, I remember McCarthy describing scalped heads as like wet polyps blue in the moonlight. The more generic ways of describing something like that would never give me such a visceral reaction to the violence he was trying to tell me something about.

I already lose interest reading books where the phrases are recycled and the maximum sentence length for the whole book grazes 40.

If people communicate to me without personality through prompt wastrelry I'll discount theirs and wait till they're willing to actually have an opinion. In this specific context style and substance tend to come in a pair or not at all. If you can't beat 'em you can at least filter 'em out.

taco_emoji•29m ago
Can't affect you if you don't use it
iainctduncan•28m ago
One has only to compare blogs and "thought leadership" posts from now and five years ago to see this is already happening, and big time.
tarkin2•17m ago
People from a nation think and write alike because they share a common canon of literature and stories.

It's just a pity AI was trained on mindless, garbage business-speak, and now that's our globalised common literature.

And now we're feeding that regurgitated mindless, garbage business-speak back into AI models, thereby reinforcing the garbage and further rotting our minds.

break_the_bank•14m ago
Wrote about this a while ago actually; I called it the Billion Steve problem - https://x.com/gyani1595/status/2034652087494090829
dfxm12•11m ago
Large language models may be standardizing human expression

I think it is important to distinguish "human expression" from copying a response from an LLM. Someone who outsources their thinking to an LLM is only offering an AI's expression. It's not human expression.