As LLM use becomes normal for everyday writing (email, documentation, social media, etc.), a pattern emerges where everyone uses an LLM as an editor. People only produce rough drafts and then have their "editor" make them coherent.
Interestingly, people might start using said editor prompts to express themselves, producing a wider range of distinct writing styles. Despite this, vocabulary and semantics as a whole become more uniform. Spelling errors and typos become increasingly rare.
In parallel, people start using LLMs to summarize content in a style they prefer.
Both sides of this gradually converge. Content gets explicitly written in a way that is optimized for consumption by an LLM, perhaps a return to something like the semantic web. Authors write certain explicit passages in a way that encourages a summarizing LLM to render them as the author intends.
Human languages start to evolve in a direction that could be considered more coherent than before, and perhaps less ambiguous. Language is the primary interface an LLM uses with humans, so even if LLM use becomes the baseline for many things, an LLM that fails to communicate information effectively is failing at its job. I'm personifying LLMs a bit here, but I mean it only in a game-theory / incentive-structure sense.
Guttural vocalizations accompanied by frantic gesturing towards a mobile device, or just silence and showing of LLM output to others?
And asbestos and lead paint were actually useful.
A door has been opened that can't be closed and will trap those who stay too long. Good luck!
I do use them, and I also still do some personal projects and such by hand to stay sharp.
Just: they can't mint any more "pre-AI" computer scientists.
A few outliers might get it and bang their heads on problems the old way (which is what, IMO, yields the problem-solving skills that actually matter), but between:
* Not being able to mint any more "pre-AI" junior hires
And, even if we could:
* Great migration / Covid-era overhiring and the corrective layoffs -> hiring freezes and few open junior reqs
* Either AI or executives' misunderstandings of it and/or use of it as cover for "optimization" - combined with the Nth wave of offshoring we're in at the moment -> US hiring freezes and few open junior reqs
* Jobs and tasks junior hires used to cut their teeth on to learn systems, processes, etc. being automated by AI / RPA -> "don't need junior engineers"
The upstream "junior" source for talent our industry needs has been crippled both quantitatively and qualitatively.
We're a few years away from a _massive_ talent crunch IMO. My bank account can't wait!
Yes, yes. It's analogous to our wizardly greybeard ancestors prophesying that youngsters' inability to write ASM and compile it in their heads would bring the end of days, or insert your similar story from the 90s or 2000s here (or the printing press, or whatever).
Order of "dumbing down" effect in a space that one way or another always eventually demands the sort of functional intelligence that only rigorous, hard work on hard problems can yield feels completely different, though?
Just my $0.02, I could be wrong.
The last one I saw was about smartphone users who take a test, quit their phone for a month, then take the test again and surprisingly do better the second time. Can anyone tell me why they might have paid more attention, been more invested, and done better on the test the second time round, right after a month of quitting their phone?
This is a non-study.
If you wrote two essays, you have more 'cognitive engagement' on the clock as compared to the guy who wrote one essay.
In other news: If you've been lifting in the gym for a week, you have more physical engagement than the guy who just came in and lifted for the first time.
Isn't the point of a lot of science to empirically demonstrate results which we'd otherwise take for granted as intuitive/obvious? Maybe in AI-literature-land everything published is supposed to be novel/surprising, but that doesn't encompass all of research, last I checked.
The fourth session, where they tested switching back, was about recall and re-engagement with topics from the previous sessions, not fresh unaided writing. They found that the LLM users improved slightly over baseline, but much less than the non-LLM users.
"While these LLM-to-Brain participants demonstrated substantial improvements over 'initial' performance (Session 1) of Brain-only group, achieving significantly higher connectivity across frequency bands, they consistently underperformed relative to Session 2 of Brain-only group, and failed to develop the consolidation networks present in Session 3 of Brain-only group."
The study also found that the LLM group was largely copy-pasting LLM output wholesale.
The original poster is right: the LLM group didn't write any essays, and later proved not to know much about the essays. Not exactly groundbreaking. Still worth showing empirically, though.
I don't know that the same thing makes as much sense to evaluate in an essay context, because it's not really the same task. I guess the equivalent would be having an existing essay (maybe written by yourself, maybe not) and using AI to make small edits to it, like "instead of arguing X, argue Y then X" or something.
Interestingly, I find myself doing a mix of both "vibing" and more careful work. The other day, I used it to update some code that I cared about and wanted to understand better, so I stayed engaged; at the same time, I had it make a dashboard for looking at that code's output, which I didn't care about at all so long as it worked.
I suspect that the vibe coding would be more like drafting an essay from the mental engagement POV.
I wouldn't ask Cursor to go off and write software from scratch that I need to take ownership of, but I'm reasonably comfortable at this point having it make small changes under direction and with guidance.
The project I mentioned above was adding otel tracing to something, and it wrote a trace-viewing UI that has all the features I need and works well, without me having to spend hours getting it set up.
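For context, "adding otel tracing" usually amounts to just a few lines. Here's a minimal sketch using the OpenTelemetry Python SDK (assuming opentelemetry-sdk is installed); the handle_request function and its attribute are hypothetical placeholders, not my actual project:

    # Minimal OpenTelemetry setup; prints finished spans to stdout.
    # A real setup would export to a collector / trace-viewing UI instead.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)

    def handle_request(payload: bytes) -> int:
        # Wrap each unit of work in a span so it shows up in the trace viewer.
        with tracer.start_as_current_span("handle_request") as span:
            span.set_attribute("payload.size", len(payload))
            return len(payload)

    if __name__ == "__main__":
        handle_request(b"hello")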
Jeremy Howard argues that we should use LLMs to help us learn; once you let one reason for you, things go bad and you start accruing cognitive debt. I agree with this.
Seems to have somehow been replaced with this AI psychosis?
Can you be more specific and/or provide some references? The "demonstrating curiosity about controversial topics" part is sounding like vaccine skepticism, though I don't recall ever hearing that being referred to as any kind of "psychosis".
As long as you're vetting your results just like you would any other piece of information on the internet, it's an evolution of data retrieval.
It’s cheap, easy, and quite effective to passively learn the maps over the course of time.
My similar ‘hack’ for LLMs has been to try to “race” the AI. I’ll type out a detailed prompt, then go dive into solving the same problem myself while it chews through thinking tokens. The competitive nature of it keeps me focused, and it’s rewarding when I win with a faster or better solution.
Living in a city where phone-snatching thieves are widely reported built my habit of quickly memorising the next couple of steps (e.g. 2nd street on the left, then right by the station), then looking out for them without the map. North-Up helps anyway, because you don't have to separately figure out which erratic direction the magnetic compass has picked this time (maybe it's to do with the magnetic stuff I EDC).
I also wanted to mention that just spending some time looking at the maps and comparing differences in each service's suggested routes can be helpful for developing directional awareness of a place. I think this is analogous to not locking yourself into a particular LLM.
Lastly, I know that some apps might have an option to give you only alerts (traffic, weather, hazards) during your usual commute so that you're not relying on turn-by-turn instructions. I think this is interesting because I had heard that many years ago, Microsoft was making something called "Microsoft Soundscape" to help visually impaired users develop directional awareness.
I was shocked into using it when I realized that, when using the POV GPS cam, I couldn't even tell you which quadrant of the city I had just navigated to.
I wish the north-up UX were more polished.
https://www.nature.com/articles/s41598-020-62877-0
This is rather scary. Obviously, it makes me think of my own personal over-reliance on GPS, but I am really worried about a young relative of mine, whose car will remain stationary for as long as it takes to get a GPS lock... indefinitely.
I have to visit a place several times and with regularity to remember it. Otherwise, out it goes. GPS has made this a non-issue; I use it frequently.
For me, however, GPS didn't cause the problem. I was driving for 5 or 6 years before it became ubiquitous.
GPS navigators provide a rough estimate of arrival time which becomes more accurate as you get closer. The estimate is occasionally useful for letting someone know when you expect to arrive somewhere.
It's the non-habitual GPS uses that are detrimental to the brain: the trips to a new address in some unfamiliar part of town, when in the distant past you would have consulted a map and then planned a route, memorized it, and followed it without looking at the map.
Not sure how that maps onto LLM use. I have avoided it almost completely because I've seen colleagues start to fall into really bad habits (like spending days adjusting prompts to try to get them to generate code that fixes an issue we could have worked through together in about two hours). I can't see an equivalent way to avoid just starting to outsource your thinking...
The study seems interesting, and my confirmation bias certainly supports it, though the sample size seems quite small. It is definitely a little worrisome, though framing it as just one step beyond search-engine use makes it at least a little less concerning.
We probably need more studies like this, across more topics and with larger samples, but if we're all forced to use LLMs at work, I'm not sure how much good it will do in the end.
I want a life of leisure. I don’t want to do hard things anymore.
Cognitive atrophy in people using these systems is very good: it makes it easier to beat them in the market, and easier to convince them that whatever slop work you submitted after 0.1 seconds of effort "isn't bad, it's certainly great at delving into the topic!"
Also, monkey see, monkey speak: https://arxiv.org/abs/2409.01754
I hope you're being facetious, as otherwise that's a selfish view which will come back to bite you. If you live in a society, what others do and how they behave affects you too.
A John Green quote on public education feels appropriate:
> Let me explain why I like to pay taxes for schools even though I personally don’t have a kid in school. It’s because I don’t like living in a country with a bunch of stupid people.
Funny enough, the reason he gave against books has now finally been addressed by LLMs.
There's a famous satirical study that "proved" parachutes don't work by having people jump from grounded planes. This study "proves" AI rots your brain by measuring people using it the dumbest way possible.
https://arxiv.org/abs/2506.08872