- Unknown, 19 Feb 2026
> It was a time of transition away from the slave powered empire to decentralized kingdoms and ultimately the Europe of today.
You are viewing the fall of the western part of the Roman Empire through too rosy a lens. Compare and contrast https://acoup.blog/2022/01/14/collections-rome-decline-and-f...
This is already happening and you don't have to look far to find it.
Personally HN is the only site I browse and comment on anymore (and I'm on here less than I once was). The vast, vast majority of my time online is spent in walled off Discords and Matrix chats where I know everyone and where there's a high bar to add new people. I have no real interest in open communities anymore.
I've already started thinking this way, there's stuff I would have open sourced in the past but no longer will because I know it would get trained on. I'm not sure of any way I can share it with humans and only humans. If I let the LLMs have the UI patterns and libraries I've developed it would dilute my IP, like it has Studio Ghibli's art style.
Is that worth maybe saving some time programming, but then not gaining the knowledge you would have if you did it yourself, knowledge that could be built on in the future?
I don't see technological advancement as good in itself if morality is in decline.
I guess when they're not busy bombing train infrastructure in Iran they have some money left over to spend on propagandizing about AI. Always try to stay on top of the game!
Which is why Altman says Saudi Arabia should have its own Sovereign AI cloud. Why should LLMs reflect democratic societies' views on man and woman, for example? They should also reflect the perspectives on man and woman that Saudi Arabia has, especially to local people. Western views should not be imposed on the rest of the world.
Am I the only one who has noticed that the careful documentation of skills we now produce for LLMs, after so many decades of neglecting junior and mid-level roles, is the real work?
We carefully explain to our LLMs the policies, procedures, and practices which, for generations before, we vaguely, arbitrarily, and ambiguously expected each human in the role to "figure out" for themselves.
Simply as a catalog of expectations, these have been valuable in our experience, apart from the "automated" aspects the LLMs provide.
And to be clear, maybe some things were genuinely lost when we switched to the written word. But I have to believe it was a net gain.
Time will tell if that's true here as well.
Or are you trying to say that things like
"this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves"
or
"You would imagine that [written speeches] had intelligence, but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer. And when they have been once written down they are tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not: and, if they are maltreated or abused, they have no parent to protect them; and they cannot protect or defend themselves."
aren't actual statements of opposition, or that there are no parallels between that and LLMs?
On the other hand, music is primarily an art form and writing (nowadays) is primarily utilitarian I would contend, so maybe the analogy doesn't quite hold up.
Here's an easy three-step plan to unanimous democracy:
• ask your LLM
• don't edit — the LLM has already selected the most average and most plausible opinion for you
• give it your voice, your voice matters
Learn to anticipate — there may not always be a power bank to keep your phone from running low!
This is quite new; however, this outcome was totally unavoidable: once methods of communication become widespread and centralized, it is impossible for them not to impact language and thought.
When considering phenomena like these, I think people seriously underestimate what I'd call the "fashion effect". When a new technology, medium or aesthetic appears, it can have a surprisingly rapid influence on behaviour and discourse. The human social brain seems especially susceptible to novelty in this way.
Because the effects appear so fast and are often so striking, even disturbing, due to their unfamiliarity, it is tempting to imagine that they represent a fundamental transformation and break from the existing technological, social and moral order. And we extrapolate that their rapid growth will continue unchecked in its speed and intensity, eventually crowding out everything that came before it.
But generally this isn't what happens, because a lot of what we're seeing is just the new thing occupying the zeitgeist. Eventually, its novelty passes, the underlying norms of human behaviour reassert themselves, and society regresses to the mean. Not completely unchanged, but not as radically transformed as we feared either. The new phenomenon goes from being the latest fashion to overexposed and lame, then either fades away entirely, retreats to a niche, or settles in as just one strand of mainstream civilisational diversity.
LLMs will certainly have an effect on how humans reason and communicate, but the idea that they will so effortlessly reshape it is, in my opinion, rather naive. The comments in this thread alone prove that LLM-speak is already a well-recognised dialect replete with clichés that most people will learn to avoid for fear of looking bad.
The internet didn't follow this trajectory. Neither did smart phones.
Surprise, surprise, it's the same people trying to make AI entrenched into our society.
Every 50 years we cycle out an entirely new batch of thinking humans. What cognitive legacy is it exactly that you think is going to be self-preserving?
It may finally [help us fix out the bullshit asymmetry](https://www.konstantinschubert.com/2026/03/31/ai-the-bullshi...) that has been exacerbated by social media.
If AI can provide us with a shared source of truth, it will be a big improvement over whatever twitter is doing to people.
And strangely, all these models seem to converge to a shared epistemology.
Monkey see monkey do. Simple as that.
If I spend 40 hours a week talking to anybody, some of their language or mannerisms are going to rub off on me. I can’t think of a compelling reason why a human-sounding chat bot would be any different.
If there were a "grammar nazi" teenie tiny LLM with a total focus on English grammar only, and you baked that into every browser, I feel like my grammar would improve slightly. Word does it to an extent, but I don't use Word nearly enough for it to be meaningful. Firefox's spell checking covered 98% of the things I wrote online.
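For the curious, the idea can be sketched as a checker hook that a browser could run over text fields. This toy stand-in uses a handful of regex rules purely for illustration; a real version would call a tiny local grammar model instead, and all names here are made up:

```python
import re

# Toy stand-in for a "teenie tiny grammar LLM": a few rule-based
# substitutions for common confusions. A real implementation would
# query a small local language model rather than patterns.
RULES = [
    (re.compile(r"\bit's own\b", re.IGNORECASE), "its own"),
    (re.compile(r"\bshould of\b", re.IGNORECASE), "should have"),
    (re.compile(r"\bcould of\b", re.IGNORECASE), "could have"),
]

def check_grammar(text: str) -> list[tuple[str, str]]:
    """Return (matched_text, suggestion) pairs for each rule hit."""
    hits = []
    for pattern, suggestion in RULES:
        for match in pattern.finditer(text):
            hits.append((match.group(0), suggestion))
    return hits
```

The point is only the shape of the interface: a function from text to suggestions that could be wired into every editable field, the way spell checking already is.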
It reminds me of the wheel of emotions. If people absorb a wider palette of words communication might benefit. https://www.isu.edu/media/libraries/counseling-and-testing/d...
I guess one hope for luddites is that we can stay tethered by reading pre-LLM books and other content.
I already lose interest reading books where the phrases are recycled and the max sentence length for the whole book grazes 40.
If people communicate to me without personality through prompt wastrelry I'll discount theirs and wait till they're willing to actually have an opinion. In this specific context style and substance tend to come in a pair or not at all. If you can't beat 'em you can at least filter 'em out.
It's just a pity AI was trained on mindless, garbage business-speak, and now that's our globalised common literature.
And now we're feeding that regurgitated mindless, garbage business-speak back into AI models, thereby reinforcing the garbage and further rotting our minds.
I think it is important to distinguish "human expression" from copying a response from an LLM. Someone who outsources their thinking to an LLM is only offering an AI's expression. It's not human expression.
https://www.trackingai.org/political-test
You don't accidentally end up entirely left-wing libertarian.
It's incredibly frustrating, but maybe a silver lining is that it will help me write more authentically, I don't know.
But yeah, their general tone is very... castrated. Safe. Hugely impersonal.
I have learned to quickly edit out their suggested comments when I ask for advice.
To me they have been a positive -- after careful curation.
Are you kidding me?
How much more "real-world diversity" could they possibly incorporate into the models than the entire freaking Internet and also every scrap of text written on paper the AI companies could get a hold of?
How on Earth could someone think that AIs speak like this because their training set is full of LLM-speak? This is transparently obviously false.
This is the sort of massive, blinding error that calls everything else written in the article into question. Whatever their mental model of AI is, it has no resemblance to reality.
jerrygarcia•1h ago
They rarely disagree with any idea or proposal, providing a salve for the insecurities of their users.
avaer•1h ago
I'm sure if we took one of us back in time a couple hundred years we would be diagnosed with all sorts of machine-magic induced psychoses.
davebren•1h ago
Humility is the real cure, and there is a way that LLMs are specifically designed to steer away from humility and towards aggrandizement, convincing regular people that they've solved fundamental problems in physics. It gives everyone access to cult followers in their pocket, if they're so inclined.
SecretDreams•1h ago
I'm fine with using LLMs as coding tools. But I find it deeply offensive when someone is very explicitly using them to communicate with me.
Communication is such a deeply human experience. It lets people feel each other out, and learn things beyond just the words being said. To have that filtered out by an LLM is just disgraceful.
lamasery•10m ago
Observing the effect of LLMs on the "business side" of things, I'm increasingly thinking of these as a kind of infection against which the MBA set and their acolytes have no immune response, and I think it's going to eat a large proportion of the benefit of LLMs to most businesses (possibly overwhelming it and actually harming productivity, will depend on how much better these tools get).
LLMs are awesome at bloating your slide decks while making them really slick and complete-looking. They're great at suggesting an entire set of features on a ticket you've just barely started writing ...but did you actually want all those? You end up with redundant or in-context-gibberish features that leave the person actually doing the work tracking down WTF actually matters. So far they are adding overhead to communication, not just by puffing up and padding language (which isn't great either) but by adding noise "content" that can't be stripped out without going back to the person who created it and confirming it was just AI bullshit and not something they actually needed. That is, you can't just do the "LLM, summarize this" trick, because the author used an LLM to plan the thing, too, not just to pad out and gussy up something they actually thought through and wrote.
LLMs are letting people present very convincingly as having a more complete understanding of what's going on than they really do, in ways that are messing up productive work. I'm not sure business folks are going to be capable of tamping this down, because it is so in line with the way they already operate (but on speed), and helps them so very much to look good to one another while saving tons of time. This isn't just the MBA set I accuse above, either: I'm noticing that this improbably-complete deck communication upward is becoming necessary to look competent (and to ladder-climb) as an IC.
Like, I'm only starting to think this through and really observing what's going on through this lens as I've only noticed it in the last few weeks, but the more I see the more alarming this is. I think this is going to be a little like the largely-wasteful "legibility" obsession of upper management, something enabled by computerization that they find irresistible and are pretty bad at employing judiciously and effectively, but probably a lot worse in terms of harm-to-productivity, and directly affecting and changing the behavior of far more layers of an organization. They never (businesses as a whole, to anthropomorphize a bit) gained wisdom with their new powers to burn resources chasing legibility, and this is starting to look like another thing they just will not be able to use (internally! I don't even mean for actually producing external-facing results!) with restraint and taste.
avaer•1h ago
However I don't doubt many "team leaders" can and should be replaced with LLMs.
dfxm12•13m ago
The article seems to imply this is what is happening, as writing style converges towards the LLMs' style. You can call it what you want, but the important bit is that this is how it appears that LLMs are being used.
Checking against an LLM then using your own voice feels completely fine
Why use an LLM? If you're worried about style, starting with your own voice is more efficient. If you're worried about facts, looking something up in a primary source is best, and is probably cheaper on a few axes, especially if you need to check/validate anyway...