If you say it's a means to an end - to what end, a good grade? - we've lost the plot long ago.
Writing is for thinking.
Surely you mean "would"? Because riding in a horse and carriage doesn't imply any ability at riding a horse, but the reverse relation would actually make sense, as you already have historical, experiential, intimate knowledge of a horse despite no contemporaneous, immediate physical contact.
Similarly, already knowing what you want to write would make you more proficient at operating a chatbot to produce what you want to write faster—but telling a chatbot a vague sense of the meaning you want to communicate wouldn't make you better at communicating. How would you communicate with the chatbot what you want if you never developed the ability to articulate what you want by learning to write?
EDIT: I sort of understand what you might be getting at - you can learn to write by using a chatbot if you mimic the chatbot the way the chatbot mimics humans - but I'd still prefer humans learn directly from humans rather than via some corporate middleman's rephrasing, with unknown quality and zero liability.
Do you have any evidence of this?
Taking the article's task of essay writing: someone presumably is supposed to read them. It's not a carriage task from point A to point B anymore. If LLM-assisted writers begin to not even understand their own work (quoting from the abstract: "LLM users also struggled to accurately quote their own work."), how do they know they are not putting out nonsense?
They are trained (amongst other things) on human essays. They just need to mimic them well enough to pass the class.
> Taking the article's task of essay writing: someone presumably is supposed to read them.
Soon enough, that someone is gonna be another LLM more often than not.
"However, the most unequivocal early archaeological evidence of equines put to working use was of horses being driven. Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry."
Long discussion on History Stack Exchange about dating the cave paintings mentioned in the Wikipedia article above:
https://history.stackexchange.com/questions/68935/when-did-h...
Unless you want to date the Industrial Revolution to 30 BCE, when Vitruvius described the aeolipile, we can talk about the evidence of these technologies' impact on society. For chariots that would be 1700 BCE, and for horseback riding well into the Iron Age, ~1000 BCE.
Your [0] says "Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry.", just after "the most unequivocal early archaeological evidence of equines put to working use was of horses being driven."
That suggests the evidence is stronger for cart use before riding.
If you follow your [1] link to "bullock cart" at https://en.wikipedia.org/wiki/Bullock_cart you'll see: "The first indications of the use of a wagon (cart tracks, incisions, model wheels) are dated to around 4400 BC[citation needed]. The oldest wooden wheels usable for transport were found in southern Russia and dated to 3325 ± 125 BC.[1]"
That is older than 3000 BC.
I tried but failed to find something more definite. I did learn from "Wheeled Vehicles and Their Development in Ancient Egypt – Technical Innovations and Their (Non-) Acceptance in Pharaonic Times" (2021) that:
> The earliest depiction of a rider on horseback in Egypt belongs to the reign of Thutmose III. Therefore, in ancient Egypt the horse is attested for pulling chariots before it was used as a riding animal, which is only rarely shown throughout Pharaonic times.
I also found "The prehistoric origins of the domestic horse and horseback riding" (2023) referring to this as the "cart before the horse" vs. "horse before the cart" debate, with the position that there's "strong support for the “horse before the cart” view by finding diagnostic traits associated with habitual horseback riding in human skeletons that considerably pre-date the earliest wheeled vehicles pulled by horses." https://journals.openedition.org/bmsap/11881
On the other hand, "Tracing horseback riding and transport in the human skeleton" (2024) points out "the methodological hurdles and analytical risks of using this approach in the absence of valid comparative datasets", and also mentions how "the expansion of biomolecular tools over the past two decades has undercut many of the core assumptions of the kurgan hypothesis and has destabilized consensus belief in the Botai model." https://www.science.org/doi/pdf/10.1126/sciadv.ado9774
Quite a fascinating topic. It's no wonder that Wikipedia can't give a definite answer!
It's the inevitable consequence of working at a different level of abstraction. It's not the end of the world. My assembly is rusty too...
The “skill domain” with compilers is the “input”: that’s what I need to grok, maintain, and understand. With LLMs it’s the “output”.
Until that changes, you’re playing a dangerous game letting those skills atrophy.
As if that's anything new. There's the adage that's older than electronics, that freedom of the press is freedom for those who can afford to own a printing press.
> However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or “opinions” (probabilistic answers based on the training datasets).
Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)
See https://en.wikipedia.org/wiki/Socratic_problem
> Socrates was the main character in most of Plato's dialogues and was a genuine historical figure. It is widely understood that in later dialogues, Plato used the character Socrates to give voice to views that were his own.
However, have a look at the Wikipedia article itself for a more nuanced view. We also have some other writers with accounts of Socrates.
Nope.
Read the dialogue (Phaedrus). It's about rhetoric and writing down political discourses. Writing had existed for millennia. And the bit about writing being detrimental is from a mythical Egyptian king talking to a god, just a throwaway story used in the dialogue to make a tiny point.
In fact the conclusion of that bit of the dialogue is that merely having access to text may give an illusion of understanding. Quite relevant and on point I'd say.
And also: DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need. Anybody remember the couple of studies on the use of Google Maps for navigation? One was "Habitual use of GPS negatively impacts spatial memory during self-guided navigation"; another reported a reduction in gray matter among maps users.
Moreover, anyone who has developed expertise in a science field knows that coming to understand something requires pondering it, exploring how each idea relates to other things, etc. You can't just skim a math textbook and know all the math. You have to stop and think. IMO it is the act of thinking which establishes the objects in our mind such that they can be useful to our thinking later on.
Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?
The skills that leave - arguments, analysis, language, creativity - often seem abstract, and primarily if not exclusively sourced in our minds.
OTOH, IME the quickest way to truly forget something is to overwrite it. Photographs are a notorious example: looking at photographs can overwrite your own personal episodic memory of an event. I don't know how much research exists exploring this phenomenon, but AFAIU there are studies at least showing that the mere act of recalling can reshape memories. So, ironically, perhaps the best way not to forget is to not remember.
Left unstated in the above is that we can categorize different types of memory--episodic, semantic, implicit, etc--based on how they seem to operate. Generalizations (like the above ;) can be misleading.
I remember hearing about some research they'd done on "binge watching" -- basically, if you have two groups:
1. One group watches the entire series over the course of a week
2. A second group watches a series one episode per week
Then, some time later (maybe six months), you ask them questions about the show, and the people in group 2 will remember significantly more.
Anecdotally, I've found the same thing with Scottish Country Dancing. In SCD, you typically walk through a dance that has 16 or so "figures", then for the next 10 minutes you need to remember the figures over and over again from different perspectives (as 1st couple, 2nd couple, 3rd couple, etc.). Fairly quickly, my brain realized that it only needed to remember the figures for 10 minutes; even the next morning, if you'd asked me what the figures were for a dance from the night before, I couldn't have told you.
I can totally believe it's the same thing with writing with an LLM (or having an assistant write a speech / report for you) -- if you're just skimming over things to make sure it looks right, your brain quickly figures out that it doesn't need to retain this information.
Contrast this to riding a bike, where you almost certainly used the skill repeatedly over the course of at least a year.
And most importantly, you have to write. A lot. Writing allows our brain to structure our thinking. It enables us to have a structured dialogue with ourselves, to explore different paths. Thinking and pondering alone can only do so much and will reach their limits soon. Writing, on the other hand, enables one to explore thoughts nearly endlessly.
Given that thinking is so intimately associated with writing (could be prose, drawing, equations, graphs/charts, whatever) and that LLMs are doing more and more of the writing, it'll be interesting to see the effect of LLMs on our cognitive skills.
There's a lot of talk about AI-assisted coding these days, but I've found similar issues where I'm unable to form a mental model of the program when I rely too much on these tools (among other issues, such as the model making unnecessary changes). This is one of the reasons why I limit their use to "boring" tasks like refactoring or clarifying concepts that I'm unsure about.
> it'll be interesting to see the effect of LLMs on our cognitive skills.
These discussions remind me a lot of this comic[1].
I find this still to be true with AI-assisted coding, especially when I still have to build a map of the domain.
I’d love to see some sort of study comparing people who actively participate in writing their stuff on social media and those who don’t.
If you want to spare your mind from GPT numbness, write or copy what it tells you to do by hand; do not abandon this process.
Or just write code, programs, essays, poems for fun. Trust me - it is fun, and you’ll get smarter and more confident. GPT is a very dangerous convenience gadget; it is not going away, just like sugar or Netflix, obesity or long commutes... but, similarly, dosage and countermeasures are essential to cope with the side effects.
Hitting the keys is not always writing.
* Describing the purpose of the writing
* Defining the format of the writing
* Articulating the context
You are writing to figure out what you want.
Why do I still know how to optimize free conventional memory in DOS by configuring config.sys and autoexec.bat?
I haven’t done this in two decades, and I’m reasonably sure I never will again.
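For the curious, here is a from-memory sketch of what that ritual looked like under MS-DOS 6.x - the driver names and paths are illustrative, not a tested config:

```
REM --- config.sys: free up conventional memory ---
REM HIMEM.SYS enables extended memory; EMM386 NOEMS maps upper
REM memory blocks without emulating expanded memory
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE NOEMS
REM load DOS itself high and make upper memory blocks available
DOS=HIGH,UMB
REM DEVICEHIGH pushes device drivers out of the 640 KB conventional area
DEVICEHIGH=C:\DOS\ANSI.SYS

REM --- autoexec.bat: LH (LOADHIGH) does the same for TSRs ---
LH C:\DOS\MOUSE.COM
```

All of it in service of squeezing a few more KB out of the 640 KB conventional area so a game would start.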
Now think about the effect on those humans currently using LLMs at that stage of their development.
I did this for a living at a large corp where I was the 'ThinkPad guy', and I barely remember any of the tricks (and only some of the IBM stuff). Then Windows NT and 95 came out and, like, who cares... This was always dogshit. Because I was always an Apple/Unix guy and that was just a job.
Except when it does -- for example, in the abstract, where it is written that Brain-to-LLM users "exhibited higher memory recall" than LLM and LLM-to-Brain users.
It's very tempting to let it write a lot, let it structure things, let it make arguments and visuals. It's easy to let it do more and more... And then you end up with something that is very much... Not yours.
But your name is on it, you are asked to explain it, to understand it even better than it is written down. Surely the report is just a "2D projection" of some "high-dimensional reality" that you have in your head... right? Normally it is, but when you spit out a report in 1/10th of the time, it isn't. You struggle to explain concepts, even though they look nice on paper.
I found that I just really have to do the work, to develop the mental models, to articulate and to re-articulate and re-articulate again. For different audiences in different ways.
I like the term "cognitive debt" as a description of the gap between the mental models one would have to develop pre-LLMs to get a report out, and how few you may need with an LLM.
In the end it is your name on that report/paper; what can we expect of you, the author? Maybe that will start slipping and we will start expecting less over time? Maybe we can start skipping authors altogether and rely on the LLM's "mental" model when we have in-depth questions about a report/paper... Who knows. But different models (like LLMs) may have different "models" (predictive algorithms) of the underlying truth/reality. What allows for the most accurate predictions? One needs a certain "depth of understanding". Writing while relying too much on LLMs will not give it to you.
Over time this may indeed lead to a population-wide "cognitive decline, or loss of cognitive skills." I don't dare say that, though. Book printing didn't do that, although it was expected at the time by the religious elite, who worried that ordinary people would not be able to interpret texts correctly.
As remarked earlier in this thread, I really do think that "writing is thinking" (though perhaps there is something better than writing which we haven't invented yet). And thinking is: developing a detailed mental model that allows you to predict the future with a probability better than chance. Our survival depends on it; in fact, it is what evolution is in terms of information theory [0]. "Nothing in biology makes sense except in the light of ... information."
"""Going forward, a balanced approach is advisable, one that might leverage AI for routine assistance but still challenges individuals to perform core cognitive operations themselves. In doing so, we can harness potential benefits of AI support without impairing the natural development of the brain's writing-related networks.
"""It would be important to explore hybrid strategies in which AI handles routine aspects of writing composition, while core cognitive processes, idea generation, organization, and critical revision, remain user‑driven. During the early learning phases, full neural engagement seems to be essential for developing robust writing networks; by contrast, in later practice phases, selective AI support could reduce extraneous cognitive load and thereby enhance efficiency without undermining those established networks."""
Rather than gaining ever deeper insight into a subject by actively working on it, you iterate fast but shallowly over a corpus of AI-generated content.
Example: I wanted to understand the situation in the Middle East better, so I wrote a 10-page essay on the genesis of Hamas and Hezbollah using OpenAI as a co-writer.
I remember nothing; worse, of the things I remember, I don't know which were hallucinations I fixed and which were actual facts.
LLMs can be great sparring partners for this, if you don't use them as a tool that writes for you, but as a tool that finds mistakes, points out gaps and errors (which you may or may not ignore), and helps in researching general questions about the world around you (always with caution and sources).
To the extent we can call it skill, it's probably going to be made redundant in a few years as the models get better. It gives me a kind of listlessness that assembly line workers would feel.
I wonder how the participants felt writing an essay while being hooked up to an EEG.
But I have found that using AI in other ways to be incredibly mentally engaging in its own way. For the past two weeks, I’ve been experimenting with Claude Code to see how well it can fully automate the brainstorming, researching, and writing of essays and research papers. I have been as deeply engaged with the process as I have ever been with writing or translating by myself. But the engagement is of a different form.
The results of my experiments, by the way, are pretty good so far. That is, the output essays and papers are often interesting for me to read even though I know an AI agent wrote them. And, no, I do not plan to publish them or share them.
After a while you get bored of it (duh) and go back to doing what you usually do, utilizing the "bicycle" for the kind of stuff you actually like doing, if it's needed at all, because while exploration is fun, work is deeply personal and meaningful and does not sustain too much exploration for too long.
(highly personal perspective)
Coming back to AI: maybe in the future we will need to explicitly take mental exercise as seriously as we do physical exercise now. Perhaps people will go to mental gyms. (That’s just a school, you may say, but I think the focus could be different: not having a goal to complete a class and then finish, but continuous mental exercise...)
This is pretty difficult for me to buy. Cycling has been shown time & again to be a great way to increase fitness.
Compared to sitting on your butt in a car or public transport.
Perhaps not compared to walking everywhere and chasing the antelope you want to cook for lunch.
I think what he meant is that both bicycles and LLMs are force multipliers: you still provide the core of the work, but not all of it any more.
"Look at that old timer! He can code without AI! That's insane!"
We detached this comment from https://news.ycombinator.com/item?id=44287157 and marked it off topic.
Rather than lament that the machine has gotten better than us at producing what were always mostly vacuous essays anyway, we have to look at more pointed writing tasks and practice those instead. Actually, I never really learned how to write until I hit grad school and had messages I actually wanted to communicate. Whatever I was doing before really wasn't that helpful; it was missing focus. Having ChatGPT write an essay I don't really care about only seems slightly worse than writing it myself.
Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.
I would really like to present this to management that pushes AI assistance for coding.
You'd have to articulate harm, so this is basically dead in the water (in the US). Good luck.
Your management presumably cares more about results than about your long-term cognitive decline?
If today's productivity is traded for longer-term stability, I am not sure that's a risk they would like to take.
Companies don't own employees: workers can leave at any time. Thus protecting employees' productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)
I honestly think it's going to take a decade to define this domain, and it's going to come with significant productivity costs. We need a git, but one that prevents LLMs from stabbing themselves in the face. At that point you can establish an actual workflow for unfucking agents when they inevitably fuck themselves. After some time and some battery of testing you can also automate this process. This will take time, but eventually, one day, you can have a tedious process of describing an application you want to use over and over again until it actually works... on some level, not guaranteed to be anything close to the quality of hand-crafted apps (which is in line with the transition from assembly to high-level languages, and now to whatever the fuck you want to call the Katamari Damacy zombie that is the browser).
But engineers aren't being fired in droves, because we have adapted. The human can still break down the problem, tell the LLM to come up with multiple different ways of solving it, and throw all of them away and ask for more. My most effective use is usually looking at what I would do normally, breaking it down, and then asking for it in chunks that make sense and touch multiple places, then coding the details. It's just a shift in thinking, like knowing when to copy and paste versus staying DRY.
Designers are screwing themselves right now: waiting for case law, and shaming the tools, instead of using their talents to make the one unique thing not in the training set and boost their productivity.
It will be a competitive advantage in the future over the short-sighted companies that took humans out of the loop completely, but any company not using the tech at all will be the horseshoe maker who wasn't worried because of all the mechanical issues with horseless carriages.
These AI agent tools can turn your intent into code rather quickly - at least for me, often quicker than I can. They do it rather unobtrusively, with little effort on your part, and they present it with nice little pull-request-lite functionality.
The key "issue" here, and probably what this article is more about is that they can't reason as you likely know. The AI needs me to know what "we" are doing, because while they are good programmers they are horrible software engineers. Or in other words, the reason AI agents enhance my performance is because I know exactly what and how I want them to program something and I can quickly assess when they suck.
Python is a good language for examples of how they can quickly fail you if you don't actually know Python. When you want to iterate over something, you have to decide whether to do it in memory or not. In C#'s LINQ this is presented to you relatively directly with IEnumerable and IQueryable, which work and look the same. In Python, however, you're often going to want a generator, which looks nothing like simply looping over a list. It's also something many Python programmers have never even heard of, similar to how many haven't heard of __slots__ or even dataclasses. If you don't know what you're doing, you'll quickly end up with Python that works but doesn't scale - and when I say scale, I'm not talking Netflix, I'm talking looping over a couple of hundred thousand items without paying a ridiculous amount of money for cloud memory. This is very anecdotal, but I've found that LLMs are actually quite good at recognizing how to iterate in C# and quite terrible in both Python and TypeScript, despite LLMs generally (again, in my experience) being much worse at writing C#. If that isn't just anecdotal, then I guess they truly are what they eat.
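To make that concrete, here is a minimal sketch of the list-vs-generator trap (the file name, column layout, and function names are invented for illustration): both versions "work" and their call sites look identical, but one holds every row in memory while the other streams.

```python
def total_as_list(path):
    # Materializes every parsed row up front: memory grows with the
    # file, which is where the surprise cloud-memory bill comes from.
    with open(path) as f:
        rows = [line.split(",") for line in f]  # full list in memory
    return sum(float(row[1]) for row in rows)

def total_as_generator(path):
    # A generator expression yields one row at a time, so memory use
    # stays roughly flat no matter how many hundred thousand lines
    # the file has.
    with open(path) as f:
        rows = (line.split(",") for line in f)  # lazy: nothing parsed yet
        return sum(float(row[1]) for row in rows)
```

The only visible difference is square brackets versus parentheses, which is exactly why generated code can look fine, pass review, and still fall over at scale.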
Anyway, I think similar research would show that AI is great for experienced software engineers and terrible for learning. What is worse is that I think it might show that a domain expert like an accountant might be better at building software for their domain with AI than an inexperienced software engineer.