I'd really like to present this to management that pushes AI assistance for coding.
You'd have to articulate harm, so this is basically dead in the water (in the US). Good luck.
Your management presumably cares more about results than about your long-term cognitive decline?
If today's productivity is traded for longer-term stability, I am not sure that's a risk they would like to take.
Thus protecting employees' productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)
> Companies don't own employees: workers can leave at any time.
> Thus protecting employees' productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)
I honestly think it's gonna take a decade to define this domain, and it's going to come with significant productivity costs. We need a git, but for preventing LLMs from stabbing themselves in the face. At that point you can establish an actual workflow for unfucking agents when they inevitably fuck themselves. After some time and some battery of testing you can also automate this process. This will take time, but eventually, one day, you can have a tedious process of describing an application you want to use over and over again until it actually works... on some level, not guaranteed to be anything close to the quality of hand-crafted apps (which is in line with the transition from assembly to high-level languages, and now to whatever the fuck you want to call the Katamari Damacy zombie that is the browser).
But engineers aren't being fired in droves, because we have adapted. The human can still break down the problem, tell the LLM to come up with multiple different ways of solving it, throw away all of them and ask for more. My most effective use is usually looking at what I would do normally, breaking it down, then asking for it in chunks that make sense and would touch multiple places, then coding the details. It's just a shift in thinking, like knowing when to copy and paste instead of being DRY.
Designers are screwing themselves right now by waiting for case law instead of using their talents to make unique things not in the training set to boost their productivity, and by shaming the tools that would let them do that.
Keeping humans in the loop will be a competitive advantage in the future over short-sighted companies that took humans out of the loop completely, but any company not using the tech at all will be like horseshoe makers who weren't worried because of all the mechanical issues with horseless carriages.
These AI agent tools can turn your intent into code rather quickly - at least for me, often quicker than I can. They do it rather unintrusively, with little effort on your part, and they present it with nice little pull-request-lite functionality.
The key "issue" here, and probably what this article is more about, is that they can't reason, as you likely know. The AI needs me to know what "we" are doing, because while they are good programmers they are horrible software engineers. Or in other words, the reason AI agents enhance my performance is because I know exactly what and how I want them to program something and I can quickly assess when they suck.
Python is a good language to come up with examples of how they can quickly fail you if you don't actually know Python. When you want to iterate over something you have to decide whether you want to do this in memory or not; in C#'s LINQ this is presented to you relatively easily with IEnumerable and IQueryable, which work and look the same. In Python, however, you're often going to want to use a generator, which looks nothing like simply looping over a list. It's also something many Python programmers have never even heard about, similar to how many haven't heard about __slots__ or even dataclasses. If you don't know what you're doing, you'll quickly end up with Python that works but doesn't scale, and when I say scale I'm not talking Netflix, I'm talking looping over a couple hundred thousand items without paying a ridiculous amount of money for cloud memory. This is very anecdotal, but I've found that LLMs are actually quite good at recognizing how to iterate in C# and quite terrible at it in both Python and TypeScript, despite LLMs generally (again, in my experience) being much worse at writing C#. If that isn't just anecdotal, then I guess they truly are what they eat.
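To make the memory point concrete, here's a rough sketch (the names and numbers are just illustrative, and exact sizes vary by Python version):

    import sys

    # Builds every value up front: half a million ints sit in memory at once.
    # This is the "it works" version an LLM will happily hand you.
    squares_list = [n * n for n in range(500_000)]

    # A generator expression yields one value at a time and keeps only its own
    # small state in memory, so iteration scales to large inputs.
    squares_gen = (n * n for n in range(500_000))

    print(sys.getsizeof(squares_list))  # megabytes, for the list object alone
    print(sys.getsizeof(squares_gen))   # a couple hundred bytes

    # Both are consumed the same way, which is why the non-scaling version
    # still looks fine at a glance.
    print(sum(squares_gen))

Both versions produce the same output; the difference only shows up once the item count or item size gets big enough to hurt.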
Anyway, I think similar research would show that AI is great for experienced software engineers and terrible for learning. What is worse is that I think it might show that a domain expert like an accountant might be better at building software for their domain with AI than an inexperienced software engineer.
Perhaps one of the more concerning findings is that participants in the LLM-to-Brain group repeatedly focused on a narrower set of ideas, as evidenced by n-gram analysis (see topics COURAGE, FORETHOUGHT, and PERFECT in Figures 82, 83, and 85, respectively) and supported by interview responses. This repetition suggests that many participants may not have engaged deeply with the topics or critically examined the material provided by the LLM.
When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.
Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.
If I write the application, I have an internal map that corresponds (more or less) to what's going on in the code. I built that map as I was writing it, and I use that map as I debug, maintain, and extend the application.
But if I use AI, I have a much less clear map. I become dependent on the AI to help me understand the code well enough to debug it. Given AI's current limitations in actual understanding, that should give you pause...
If you say it's a means to an end - to what end, a good grade? - we've lost the plot long ago.
Writing is for thinking.
What I'm saying is that, yes, writing essays is one skill, and if your goal is to write essays then obviously not doing it entirely yourself will make you worse at it than you would otherwise be. But I'm expanding a bit beyond the paper: yes, the brain won't grow for this specific skill, because it's actually a different skill.
Thinking can be done in lots of ways, such as when having a conversation, and what I think the skill is, is steering and creating structures to orchestrate AIs into automated workflows, which is a new way of working. So what I mean is that with a new technology you can't expect a transfer from the way you work with old technologies; rather, you have to figure out the better new way to use the new technology, and the brain would grow for this specific new way of working. And one could analyse, depending on one's goal, whether it's a tool you'd want to use, in the sense that cause leads to effect, or whether you would be better off for your specific goal ignoring the new technology and doing it the usual way.
Surely you mean "would"? Because riding a horse and carriage doesn't imply any ability at riding a horse, but the reverse relation would actually make sense, as you already have historical, experiential, intimate knowledge of a horse despite no contemporaneous, immediate physical contact.
Similarly, already knowing what you want to write would make you more proficient at operating a chatbot to produce what you want to write faster—but telling a chatbot a vague sense of the meaning you want to communicate wouldn't make you better at communicating. How would you communicate with the chatbot what you want if you never developed the ability to articulate what you want by learning to write?
EDIT: I sort of understand what you might be getting at—you can learn to write by using a chatbot if you mimic the chatbot like the chatbot mimics humans—but I'd still prefer humans learn directly from humans rather than rephrased by some corporate middle-man with unknown quality and zero liability.
Do you have any evidence of this?
"[Of course] writing an essay with chatgpt wouldn’t make you better at writing essays unassisted. Sure, a student wouldn’t want to practice the wrong way, but anyone else just wants to produce a good essay."
Yes, I'm acknowledging a lack of skill transfer, but there are new ways of working, and so I sarcastically imply the article can't see the forest for the trees, missing the big picture. A horse and carriage is very useful for lots of things. A horse is more specialised. I'm getting at the analogy of a technological generalisation and expansion, while logistics is not part of my argument. If you want to write a very good essay and you're good at that, then do it manually. If you want to create scalable workflows and have 5 layers of agents interacting with each other, collaboratively and adversarially scouring the internet, news sites, and forums, to then send investment suggestions to your mail every lunch, then that's a scale that's not possible with pen and paper, and so prompting has an expanded cause-and-effect cone.
You have that backwards. A horse and carriage is good for traveling on a road. If you have just the horse, however, you can travel on a road, travel offroad, pull a plow, ride into battle and trample evildoers, etc.
Taking the article's task of essay writing: someone presumably is supposed to read them. It's not a carriage task from point A to point B anymore. If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?
They are trained (amongst other things) on human essays. They just need to mimic them well enough to pass the class.
> Taking the article's task of essay writing: someone presumably is supposed to read them.
Soon enough, that someone is gonna be another LLM more often than not.
"However, the most unequivocal early archaeological evidence of equines put to working use was of horses being driven. Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry."
Long discussion on History Stack Exchange about dating the cave paintings mentioned in the Wikipedia article above:
https://history.stackexchange.com/questions/68935/when-did-h...
Unless you want to date the industrial revolution to 30 BCE when Vitruvius described the aeolipile, we can talk about the evidence of these technologies' impact on society. For chariots that would be 1700 BCE, and for horseback riding well into the Iron Age, ~1000 BCE.
Your [0] says "Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry.", just after "the most unequivocal early archaeological evidence of equines put to working use was of horses being driven."
That suggests the evidence is stronger for cart use before riding.
If you follow your [1] link to "bullock cart" at https://en.wikipedia.org/wiki/Bullock_cart you'll see: "The first indications of the use of a wagon (cart tracks, incisions, model wheels) are dated to around 4400 BC[citation needed]. The oldest wooden wheels usable for transport were found in southern Russia and dated to 3325 ± 125 BC.[1]"
That is older than 3000 BC.
I tried but failed to find something more definite. I did learn from "Wheeled Vehicles and Their Development in Ancient Egypt – Technical Innovations and Their (Non-) Acceptance in Pharaonic Times" (2021) that:
> The earliest depiction of a rider on horseback in Egypt belongs to the reign of Thutmose III. Therefore, in ancient Egypt the horse is attested for pulling chariots before it was used as a riding animal, which is only rarely shown throughout Pharaonic times.
I also found "The prehistoric origins of the domestic horse and horseback riding" (2023) referring to this as the "cart before the horse" vs. "horse before the cart" debate, with the position that there's "strong support for the “horse before the cart” view by finding diagnostic traits associated with habitual horseback riding in human skeletons that considerably pre-date the earliest wheeled vehicles pulled by horses." https://journals.openedition.org/bmsap/11881
On the other hand, "Tracing horseback riding and transport in the human skeleton" (2024) points out "the methodological hurdles and analytical risks of using this approach in the absence of valid comparative datasets", and also mentions how "the expansion of biomolecular tools over the past two decades has undercut many of the core assumptions of the kurgan hypothesis and has destabilized consensus belief in the Botai model." https://www.science.org/doi/pdf/10.1126/sciadv.ado9774
Quite a fascinating topic. It's no wonder that Wikipedia can't give a definite answer!
It's the inevitable consequence of working at a different level of abstraction. It's not the end of the world. My assembly is rusty too...
The “skill domain” with compilers is the “input”: that’s what I need to grok, maintain, and understand. With LLMs it’s the “output”.
Until that changes, you’re playing a dangerous game letting those skills atrophy.
As if that's anything new. There's the adage that's older than electronics, that freedom of the press is freedom for those who can afford to own a printing press.
> However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets).
Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)
See https://en.wikipedia.org/wiki/Socratic_problem
> Socrates was the main character in most of Plato's dialogues and was a genuine historical figure. It is widely understood that in later dialogues, Plato used the character Socrates to give voice to views that were his own.
However, have a look at the Wikipedia article itself for a more nuanced view. We also have some other writers with accounts of Socrates.
Nope.
Read the dialogue (Phaedrus). It's about rhetoric and writing down political discourses. Writing had existed for millennia. And the bit about writing being detrimental is from a mythical Egyptian king talking to a god, just a throwaway story used in the dialogue to make a tiny point.
In fact the conclusion of that bit of the dialogue is that merely having access to text may give an illusion of understanding. Quite relevant and on point I'd say.
Well, so that's exactly my point: Plato was an old man who yelled at clouds before it was cool.
And also: DUH. If you stop speaking a language, you forget it. The brain does not retain information that it does not need. Anybody remember the couple of studies on the use of Google Maps for navigation? One was "Habitual use of GPS negatively impacts spatial memory during self-guided navigation"; another reported a reduction in gray matter among Maps users.
Moreover, anyone who has developed expertise in a science field knows that coming to understand something requires pondering it, exploring how each idea relates to other things, etc. You can't just skim a math textbook and know all the math. You have to stop and think. IMO it is the act of thinking which establishes the objects in our mind such that they can be useful to our thinking later on.
Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?
The skills that leave (arguments, analysis, language, creativity) often seem abstract and primarily, if not exclusively, sourced in our minds.
OTOH, IME the quickest way to truly forget something is to overwrite it. Photographs are a notorious example, where looking at photographs can overwrite your own personal episodic memory of an event. I don't know how much research exists exploring this phenomenon, but AFAIU there are studies at least showing that the mere act of recalling can reshape memories. So, ironically, perhaps the best way not to forget is to not remember.
Left unstated in the above is that we can categorize different types of memory--episodic, semantic, implicit, etc--based on how they seem to operate. Generalizations (like the above ;) can be misleading.
I remember hearing about some research they'd done on "binge watching" -- basically, if you have two groups:
1. One group watches the entire series over the course of a week
2. A second group watches a series one episode per week
Then some time later (maybe 6 months), ask them questions about the show, and the people in group 2 will remember significantly more.
Anecdotally, I've found the same thing with Scottish Country Dancing. In SCD, you typically walk through a dance that has 16 or so "figures", then for the next 10 minutes you need to remember the figures over and over again from different perspectives (as 1st couple, 2nd couple, 3rd couple etc). Fairly quickly, my brain realized that it only needed to remember the figures for 10 minutes; and even the next morning if you'd asked me what the figures were for a dance the night before I couldn't have told you.
I can totally believe it's the same thing with writing with an LLM (or having an assistant write a speech / report for you) -- if you're just skimming over things to make sure it looks right, your brain quickly figures out that it doesn't need to retain this information.
Contrast this to riding a bike, where you almost certainly used the skill repeatedly over the course of at least a year.
I worked with some researchers who specifically examined this when developing training content for soldiers. They found that 'muscle memory' skills such as riding a bike could persist for a very long time. At the other end of the spectrum were tasks that involved performing lots of technical steps in a particular order, but where the tasks themselves were only performed infrequently. The classic example was fault finding and diagnosis on military equipment. The researchers were in effect quantifying the 'forgetting curve' for specific tasks. For some key tasks, you could overtrain to improve the competence retention, but it was often easier to accept that training would wear off very quickly and give people a checklist instead.
And most importantly, you have to write. A lot. Writing allows our brain to structure our thinking. It enables us to have a structured dialogue with ourselves. Explore different paths. Thinking and pondering can only do so much and will soon reach their limits. Writing, on the other hand, enables one to explore thoughts nearly endlessly.
Given that thinking is so intimately associated with writing (could be prose, drawing, equations, graphs/charts, whatever) and that LLMs are doing more and more of writing it'll be interesting to see the effect of LLMs on our cognitive skills.
There's a lot of talk about AI assisted coding these days, but I've found similar issues where I'm unable to form a mental model of the program when I rely too much on them (amongst other issues where the model will make unnecessary changes, etc.). This is one of the reasons why I limit their use to "boring" tasks like refactoring or clarifying concepts that I'm unsure about.
> it'll be interesting to see the effect of LLMs on our cognitive skills.
These discussions remind me a lot of this comic[1].
I find this to still be true with AI assisted coding. Especially when I still have to build a map of the domain.
I’d love to see some sort of study comparing people who actively write their own stuff on social media and those who don’t.
If you want to spare your mind from GPT numbness - write or copy what it tells you to do by hand; do not abandon this process.
Or just write code, programs, essays, poems for fun. Trust me - it is fun, and you’ll get smarter and more confident. GPT is a very dangerous convenience gadget; it is not going away, just like sugar or Netflix, or obesity or long commutes … but similarly, dosage and countermeasures are essential to cope with the side effects.
Like not only do I cosign all said above, but I will also add to this: brevity is the soul of wit and none of these fucking things are brief. No matter what you ask for you end up getting just paragraphs of shit to communicate even basic ideas. It's hard to not think this tool was designed from go to automate high school book reports.
I would only use these programs to either create these overly long, meandering stupid emails, or to digest ones similarly sent to me, and make a mental note to reduce my interactions with this person.
It's no wonder the MBA class is fucking thrilled with it though, since the vast majority of their jobs seem to revolve around producing and consuming huge reports containing vacuously little.
Hitting the keys is not always writing.
* Describing the purpose of the writing
* Defining the format of the writing
* Articulating the context
You are writing to figure out what you want.
I feel like it goes beyond writing to really any form of expressing this knowledge to others. As a grad student, I was a teaching assistant for an Electrical Engineering class I had failed as an undergrad. The depth of understanding I developed for the material over the course of supporting students in the class was amazing. I transitioned from "knowing" the material and equations to being able to generate them all from first principles.
Regardless, I fully agree that using LLMs as our form of expression will weaken both the ability to express ourselves AND the ability to develop deep understanding of topics as LLMs "think" for us too.
Not to be pedantic, but I’d still argue that thinking is the most important. At least when understanding the nature of learning. I mean, writing is ultimately great because it facilitates high quality thinking. You essentially say this yourself.
Overall, I think it’s more helpful to understand the learning process as promoting high quality thinking (encoding if you want to be technical). This sort of explains why teaching others, argumentation, mind-mapping, good note-taking, and other activities and techniques are great for learning as well.
Why do I still know how to optimize free conventional memory in DOS by configuring config.sys and autoexec.bat?
I haven’t done this in 2 decades and I’m reasonably sure I never again will
Now think about the effect on those humans currently using LLMs at that stage of their development.
I did this for a living at a large corp where I was the 'thinkpad guy', and I barely remember any of the tricks (and only some of the IBM stuff). Then Windows NT and 95 came out and, like, who cares... This was always dogshit. Because I was always an Apple/Unix guy and that was just a job.
The last phone conversation you had with a utility company, how did they greet you exactly?
There's lots that we do remember, sometimes odd things like your example, though I'm sure you must have repeated it a few times as well. But there's so much detail that we don't remember at all, and even our childhood memories just become memories of memories - we remember some event, but we slowly forget the exact details, they become fuzzy.
People do stuff like that all the time, bringing up past memories in spontaneity. The brain absolutely does remember things it "doesn't need".
Except when it does-- for example in the abstract where it is written that Brain-to-LLM users "exhibited higher memory recall" than LLM and LLM-to-Brain users.
It's very tempting to let it write a lot, let it structure things, let it make arguments and visuals. It's easy to let it do more and more... And then you end up with something that is very much... Not yours.
But your name is on it, you are asked to explain it, to understand it even better than it is written down. Surely the report is just a "2D projection" of some "high dimensional reality" that you have in your head... right? Normally it is, but when you spit out a report in 1/10th of the time, it isn't. You struggle to explain concepts, even though they look nice on paper.
I found that I just really have to do the work, to develop the mental models, to articulate and to re-articulate and re-articulate again. For different audiences in different ways.
I like the term cognitive debt as a description of the gap between what mental models one would have to develop pre-LLMs to get a report out, and how little you may need with an LLM.
In the end it is your name on that report/paper, what can we expect of you, the author? Maybe that will start slipping and we start expecting less over time? Maybe we can start skipping authors altogether and rely on the LLM's "mental" model when we have in depth questions about a report/paper... Who knows. But different models (like LLMs) may have different "models" (predictive algorithms) of underlying truth/reality. What allows for most accurate predictions? One needs a certain "depth of understanding". Writing while relying too much on LLMs will not give it to you.
Over time, indeed, this may lead to population-wide "cognitive decline, or loss of cognitive skills." I don't dare to say that. Book printing didn't do that, although it was expected at the time by the religious elite, who worried that normal humans would not be able to interpret texts correctly.
As remarked here in this thread before, I really do think that "Writing is thinking" (but perhaps there is something better than writing which we haven't invented yet). And thinking is: Developing a detailed mental model that allows you to predict the future with a probability better than chance. Our survival depends on it, in fact it is what evolution is in terms of information theory [0]. "Nothing in biology makes sense except in the light of ... information."
Yes definitely!
I'd say that being able to turn an idea over in your head is how you know if you know it ... And even pre-LLM, it was easy to "appear to know" something, but not really know it.
PG wrote pretty much this last year:
in a couple decades there won't be many people who can write.
So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots.
Indeed the paper doesn’t provide a reference or citation for the term “cognitive debt” so it is a strange title. Maybe a last minute swap.
Fascinating research out of MIT. Like all psychology studies it deserves healthy scrutiny and independent verification. Bit of a kitchen sink with the imaging and psychometric assessments, but who doesn’t love a picture of “this is your brain on LLMs” amirite?
Curious, did anyone try to learn a subject by predicting the next token, and how did it go?
"""Going forward, a balanced approach is advisable, one that might leverage AI for routine assistance but still challenges individuals to perform core cognitive operations themselves. In doing so, we can harness potential benefits of AI support without impairing the natural development of the brain's writing-related networks.
"""It would be important to explore hybrid strategies in which AI handles routine aspects of writing composition, while core cognitive processes, idea generation, organization, and critical revision, remain user‑driven. During the early learning phases, full neural engagement seems to be essential for developing robust writing networks; by contrast, in later practice phases, selective AI support could reduce extraneous cognitive load and thereby enhance efficiency without undermining those established networks."""
Rather than getting ever deeper insight into a subject matter by actively working on it, you iterate fast but shallow over a corpus of AI generated content.
Example: I wanted to understand the situation in the Middle East better, so I wrote a 10-page essay on the genesis of Hamas and Hezbollah using OpenAI as a co-writer.
I remember nothing; worse, of the things I do remember, I don't know if they were hallucinations I fixed or actual facts.
LLMs can be great sparring partners for this, if you don't use them as a tool that writes for you, but as a tool that finds mistakes, points out gaps and errors (which you may or may not ignore), and helps in researching general questions about the world around you (always with caution and sources).
Just like a person could, which is why one validates. AI is not one's sole information. That's dangerous, to say the least. It also helps to stay within one's formal education, and/or experience, and stay within logical boundaries one can track themselves. It is really all about understanding what you are doing, committing to run without you.
But that means considering LLMs as a thinking tool rather than a tool that does work for you is worth it.
To the extent we can call it skill, it's probably going to be made redundant in a few years as the models get better. It gives me a kind of listlessness that assembly line workers would feel.
I wonder what the commercialized form of a "gym but for your brain" would look like and if it would take off and if it would be more structured than... uh... schools? Wait, wouldn't this just be like a college except the students are there because they want to be, and not for vocational reasons?
I wonder how the participants felt writing an essay while being hooked up to an EEG.
But I have found that using AI in other ways to be incredibly mentally engaging in its own way. For the past two weeks, I’ve been experimenting with Claude Code to see how well it can fully automate the brainstorming, researching, and writing of essays and research papers. I have been as deeply engaged with the process as I have ever been with writing or translating by myself. But the engagement is of a different form.
The results of my experiments, by the way, are pretty good so far. That is, the output essays and papers are often interesting for me to read even though I know an AI agent wrote them. And, no, I do not plan to publish them or share them.
After a while you get bored of it (duh), and go back to doing what you usually do, utilizing the "bicycle" for the kind of stuff you actually like doing, if it's needed, because while exploration is fun, work is deeply personal and meaningful and does not sustain too much exploration for too long.
(highly personal perspective)
Coming back to AI, maybe in the future we will need to explicitly take mental exercise as seriously as we do with physical exercise now. Perhaps people will go to mental gyms. (That’s just a school you may say, but I think the focus could be different: Not having a goal to complete a class and then finish, but continuous mental exercises..)
This is pretty difficult for me to buy. Cycling has been shown time & again to be a great way to increase fitness.
Compared to sitting on your butt in a car or public transport.
Perhaps not compared to walking everywhere and chasing the antelope you want to cook for lunch.
I think what he meant is that both bicycles and LLMs are a force multiplier and you still provide the core of the work, but not all of the work any more.
With the example of LLMs, sure, you could cycle to the initial destination you were meant to walk to - write an article with its help, save a few hours and call it a day. Or you could cycle further and use the saved time to work on something a text model can't help you well with.
I'm sure cultures where they cycle to everywhere all the time take it easier than cultures where going out for a bike ride is an event.
"Look at that old timer! He can code without AI! That's insane!"
We detached this comment from https://news.ycombinator.com/item?id=44287157 and marked it off topic.
I don't believe I was impolite or making a personal attack. I had a relevant point and I made it clearly and in a civil manner. I strongly disagree with your assessment.
Also, suspicions about the changing frequency of certain phrases in HN comments can easily be tested:
https://hn.algolia.com/?dateRange=all&prefix=false&query=%22...
https://hn.algolia.com/?dateRange=all&prefix=true&query=%22I...
Rather than lament that the machine has gotten better than us at producing what were always mostly vacuous essays anyway, we have to look at more pointed writing tasks and practice those instead. Actually, I never really learned how to write until I hit grad school and had messages I actually wanted to communicate. Whatever I was doing before really wasn’t that helpful; it was missing focus. Having ChatGPT write an essay I don’t really care about only seems slightly worse than writing it myself.
Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.
If you have something that you consider to be over 50% of the way to your desired result, reducing the space of the result has a higher chance of removing the negative factor than the positive.
In contrast, in any case where the algorithm is less than 100% capable of producing the positive factor, adding to the result could always increase the negative factor more than the positive, given a finite time constraint (i.e. any reasonable non-theoretical application).
You put it in quote marks, but the only search results are from you writing it here on HN. Obviously LLMs are extremely good at expanding text, which is essentially what they do whenever they continue a prompt. Or did you mean that in a prescriptive way - that it would be better for us to use it more for summarizing rather than expanding?
They said it was a rule of thumb, which is a general rule based on experience. In context with the comment they were replying to, it seems that they are saying that if you want to learn and understand something, you should put the effort in yourself first to synthesize your ideas and write out a full essay, then use an LLM to refine, tighten up, and polish it. In contrast to using an LLM as you go to take your core ideas and expand them. Both might end up very good essays, but your understanding will be much deeper if you follow the "LLMs are good at reducing text, not expanding it" rule.
Intentionally taking this to a slightly absurd metaphor - it seemed to me like a person saying that their desire to reduce their alcohol consumption, led them to infer the rule of thumb that "waiters are good at bringing food, not drinks".
It is absolutely possible to use LLMs when writing essays, but do not use them to write! Use them to critique what you yourself with your own mind wrote!
You wrote it, not the AI. My entire point here is not to have the AI write, ever. Have it critique, have it Socratically draw you to make the decisions to axe sections, rewrite them, and so on - and then you do that, personally, using your own mind.
As I see it, it's much more interesting to ask not whether we are still good at doing the work that computers can do for us, but whether we are now able to do better at the higher-level tasks that computers can't yet do on their own.
> Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
On the topic itself, I am very cautious about my use of LLMs. It breaks down into three categories for me: 1. replacing Google, 2. getting a first review of my work, and 3. taking away mundane tasks around code editing.
Point 3. is where I can become most complacent and increasingly miscategorize tasks as mundane. I often reflect after a day working with an LLM on coding tasks because I want to understand how my behavior is changing in its presence. However, I do not have a proper framework to work out "did I get better because of it or not".
I still believe we need to get better as professionals and it worries me that even this virtue is called into question nowadays. Research like this will be helpful to me personally.
The core danger isn't the "debt" itself, which implies it can be repaid through practice. The real danger is crossing a "cognitive tipping point". This is the threshold where so much executive function, synthesis, and argumentation has been offloaded to an external system (like an LLM) that the biological brain, in its ruthless efficiency, not only prunes the unused connections but loses the meta-ability to rebuild them.
Our biological wetware is a use-it-or-lose-it system without version control. When a complex cognitive function atrophies, the "source code" is corrupted. There's no git revert for a collapsed neural network that once supported deep, structured thought.
This HN thread is focused on essay writing. But scale this up. We are running a massive, uncontrolled experiment in outsourcing our collective cognition. The long-term outcome isn't just a society of people who are less skilled, but a society of people who are structurally incapable of the kind of thinking that built our world.
So the question isn't just "how do we avoid cognitive debt?". The real, terrifying question is: "What kind of container do we need for our minds when the biological one proves to be so ruthlessly, and perhaps irreversibly, self-optimizing for laziness?"
StackExchange, the way it was meant to be initially, would be way more valuable than text models. But in reality people are imperfect and carry all sorts of cognitive biases and baggage, while an LLM won't close your question as 'too broad' right after it gets upvotes and user interaction.
On the other hand, I still find LLM writing on subjects familiar to me vastly inferior. Whenever I try to write, say, an email with its help, I end up spending just as much time either editing the prompt to keep it on track or rewriting it significantly after. I'd rather write it on my own with my own flow than proofread/peer review a text model.
quoting the article:
Perhaps one of the more concerning findings is that participants in the LLM-to-Brain group repeatedly focused on a narrower set of ideas, as evidenced by n-gram analysis (see topics COURAGE, FORETHOUGHT, and PERFECT in Figures 82, 83, and 85, respectively) and supported by interview responses. This repetition suggests that many participants may not have engaged deeply with the topics or critically examined the material provided by the LLM.
When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.
Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.
So now I am in the final editing stage, and I am going back over old writing that I don’t remember doing. The material has come together over many many drafts, and parts of it are still not quite consistent with other parts.
But when I am done, it will be mine. And any mistakes will be honest ones that represent the real me. That’s a feeling no one who uses AI assistance will ever have.
I have never and will never use AI to write anything for me.
What's interesting is I have to wonder if this is something that would extend to our own way of thinking, as discussed here with the short-term effects we're already describing with increased dependence on LLMs, GPS systems, etc. There have been studies which have shown that those of us who grew up using search engines exclusively did not lose or gain anything with respect to brain power; instead, they developed a different means of retaining the information (i.e. they are less likely to remember the exact fact but they will remember how to find it). It makes me wonder if this is the next step in that same process and those of us in the transition period will lament what we think we'll lose, or if LLM dependency presents a point of diminishing returns where we do lose a skill without replacing it.
I'm at the beginning of my career and learning every day - I could do my job faster with an LLM assistant but I would lose out on an opportunity to acquire skills. I don't buy the argument that low-level critical thinking skills are obsolete and high level conceptual planning is all that anyone will need 10 years from now.
On a more sentimental level I personally feel that there is meaning in knowing things and knowing how to do things and I'm proud of what I know and what I know how to do.
Using LLMs doesn't look particularly hard, and if I need to use one in the future I'll just pick whichever one is supposedly the newest and best, but for now I'm content to toil away on my own.
That's not surprising but also bleak.
I've been thinking for a while now that in order to truly make augmented workflows work, the mode of engagement is central. Reviewing LLM code? Bah. Having an LLM watch over my changes and give feedback? Different story. It's probably gonna be difficult and not particularly popular, but if we don't stay in the driver's seat somehow, I guess things will get pretty bleak.
I read about this in the book "Our Robots, Ourselves", which talked about airline pilots' experience with auto-land systems introduced in the late 1990s / early 2000s.
As you'd expect after having read Ironies of Automation, after a few near misses and not misses, auto-land is not used any more. Instead, pilot augmentation with head-up displays is used.
What is the programming equivalent of a head-up display?
Syntax highlighting, Intellisense, and the millions of other little features built into modern editors.
This way I'd still be forced to think about the system, without having to waste time with the tedious part of writing code, fixing typos, etc.
Bonus point: this could become a two-way system between different programming languages, with UML as the intermediate representation, which would make it a lot easier to port applications to different languages and would eliminate concerns about premature optimization. People could still experiment with new ideas in languages that are more accessible (Python/Javascript) and later on port them to more performant systems (Rust/D/C/C++).
Test failures are more explicit, you run tests when you want to and deal with the results.
Code review often has a horrible feedback loop - often days after you last thought about it. I think LLMs can help tighten this. But it can't be clippy, it can't interrupt you with things that _may_ be problems. You have to be able to stay in the flow.
For most things that make programmers faster, I think deterministic tooling is absolutely key, so you can trust it rather blindly. I think LLMs _can_ be really helpful for helping you understand what you changed and why, and what you may have missed.
Just some random ideas. LLMs are amazing. Incorporating them well is amazingly difficult. What tooling we have now (agentic and all that) feels like early tech demos to me.
I joked that "I don't do drugs" when someone asked me whether I played MMORPGs, but this joke becomes just too real when we apply it to generative AI of any sort.
Putting ChatGPT in front of a child and asking them to do existing tasks is an obviously disastrous pedagogical choice for the reasons the article outlines. But it's not that hard to create a more constrained environment for the LLM to assist in a way that doesn't allow the student to escape thinking.
For writing - it's clear that finding the balance between how much time you spend ordering your thoughts and how much you let the LLM write is its own skillset; this will be its own skill we want to teach, independent of "can you structure your thoughts in an essay".
Where did you get that from? While the article mentions the word "atrophy" twice, it's not something that they found. They just saw less neural activation in regards to essay writing in those people who didn't write the essay themselves. I don't see anything there in regards to the brain as a whole.
Like everything, not using something causes that thing to atrophy. IOW, if you depend on something too much, you'll grow dependency on it, because that part of your body doesn't do the work that much anymore.
The brain is an interconnected jungle. Improvement in any ability will improve other, adjacent abilities. You need to think faster to type faster. If you can't think faster, you'll stagnate, for example.
Also, the human body always tries to optimize itself to reduce its energy consumption. If you get a chemical from outside, it'll stop producing it, assuming the supply will be there. The brain will reduce its connections in some region if that function is augmented by something else.
Same for skill atrophy. If you lose one skill, you lose the connections in your brain, and that'll degrade adjacent skills, too. As a result, skill atrophy is brain atrophy in the long run.
"Essay Writing" in particular, at least in an academic context, is almost by definition an entirely useless activity, as both the writer and the reader don't care much about the essay as an artifact. It's a proxy for communication skills, that we've had to use for lack of a better alternative, but my hope is that now that it's become useless as a proxy, our education system can somehow switch to actually helping learners communicate better, rather than continuing to play pretend.
Do you have plants at home? You're 50% there for growing your own food (veggies, at least). Do you mend your clothes (e.g.: sew your buttons back)? You're ~30% there for making your own clothes, given you have access to fabric.
On the essay writing, I can argue that at least half of my communication skills come from writing and reading. I don't write essays anymore, but I write a diary almost daily, and I build upon what I have read or written in the past for academic reasons. What I find more valuable in these exercises is not the ability to communicate with others, but communicate with myself.
The brain has this strange handicap. It thinks that it has a coherent thought, but the moment you try to write or talk about it, what comes out is mushy spaghetti which doesn't mean anything. Having the facilities to structure it, to separate the ore from the dirt, and to articulate it clearly so you and everyone else can understand it is a very underrated skill.
Funnily, the biggest contributor to my written skills is this place, since good discussion needs a very particular set of skills here, namely clarity, calmness and having structure to your thought.
This is why I'm very skeptical of letting go of writing, and of actual pens and paper, in the name of progress. We're old creatures who evolved slowly, and our evolution has a maximum speed. Acting like this is not true will end in disaster.
Humans, and the civilization and culture we have built, have so much implicit knowledge coded everywhere that assuming we know it all, and can encode it in an 80GB weighted graph, is a mistake, to put it kindly.
I thought WoW was an off-label contraceptive?
I had to disable copilot for my blog project in the IDE, because it kept bugging me, finishing my sentences with fluff that I'd either reject or heavily rewrite. This added some mental overhead that makes it more difficult to focus.
Before this, of course, will be a dramatic "shallowness of thinking" shock that will have to occur before its ill-effects are properly inoculated against. It seems part of the expert aversion to LLMs -- against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/) -- is an early experience of inoculation:
Any "macroscopic usage" of LLMs has, in any of my projects, dramatically impaired my own thinking, stolen decision-making, and worsened my readiness for necessary adaptations later on. LLMs are a strictly microscopic fill-in system for me, in anything that matters.
This isn't like calculators: my favourite algorithms for by-hand computation aren't being "taken away". This is a system for substituting thinking itself with non-thinking, and it radically impairs your readiness (and depth, adaptability, ownership) wherever it is used, in whatever domain you use it on.
The LLM-optimist view at the moment, which takes on board the need to review LLMs, assumes that this review capability will exist. I cannot review LLM output on areas outside of my expertise. I cannot develop the expertise I need if I use an LLM in-the-large.
I first encountered this issue ~year-ago when using an LLM to prototype a programming language compiler (a field I knew quite well anyway) -- but realised that very large decisions about the language were being forced by LLM implementation.
Then, over the last three weeks, I've had to refresh my expertise in some areas of statistics and realised much of my note-taking with LLMs has completely undermined this process -- the actions that turned out to be effective, in follow-up, were the traditional methods: reading books, watching lectures, taking notes. The LLM is only a small time saver, "in the small", once I'm an expert. It's absolutely disabling as a route back to expertise.
Not when there's money to be made.
One of my favorite developments on the internet in the past few years is the rise of the “I don’t think/won’t think/didn’t think” brag posts
On its own it would be a funny internet culture phenomenon, but paired with the fact that you can’t confidently assume that anybody even wrote what you’re reading, it is hilarious.
Sorry, I can't immediately think of what you're talking about. Could you link to an example so I can get a feel for it?
Now someone like me might go and ask how much of that communication is actually worthwhile. Sometimes I consider that there is a lot of communication that might not actually be. It is still done, but if no one actually reads it, why not automate the generation?
Not to say there isn't a significant amount of stuff you actually want to get right.
There's a reason the real-estate industry has been able to go all-in on using AI to write property listings with almost no consumer kickback (except when those listings include hallucinated schools).
We're already used to treating them with skepticism, and nobody takes them at face value.
There's a tremendous hollowing-out of our mental capacities caused by the computer science framing of activities in terms of input->output, as if the point is to obtain the output "by any means".
It would not matter if the LLM gave exactly the same output as you would have written, and always did. Because you still have to act in the world with the thoughts you would have needed to have when authoring it.
So much this.
At my current workplace, I was asked to write up a design doc for a software system. The contents of the document itself weren't very relevant, as the design deviated significantly based on constraints and feedback that could be discovered only after beginning the implementation, but it was the act of putting together that document, thinking about the various cases, etc. that led to the formation of a mental model that helped me work towards delivering that system.
As soon as we stop treating AI like mind readers things will level out.
I read that article when it was posted on HN, and it's full of bad faith interpretations of the various objections to using LLM-assisted coding.
Given that the article comes from a person whose expertise and viewpoints I respected, I had to run it through a friend; who suggested a more cynical interpretation that the article might have been written to serve his selfish interests. Given the number of bugs that LLMs often put in, it's not difficult to see why a skilled security researcher might be willing to encourage people to generate code in ways that lead to cognitive atrophy, and therefore increase his business through security audits.
And of course the key point is that the author of that article isn't (IMO) working in the security research field any more, they work at fly.io on the security of that platform.
If he's a security researcher, then I'd imagine much of his LLM use is outside his area of expertise. He's probably not using it to replace his security research.
I think the revulsion to LLMs among experts comes from that phase when it's clearly mentally disabling you.
I believe that one of the most underappreciated skills in business is the ability to string a coherent narrative together. I attend many meetings with extremely talented engineers who are incapable of presenting their arguments in a manner that others (both technical and non-technical) can follow. There is an artistry to writing and speaking that I am only now, in my late forties, beginning to truly appreciate. Language is a powerful tool; the choice of a single word can sometimes make or break an argument.
I don't see how LLMs can do anything but significantly worsen this situation overall.
Yes, but the arguments they need to present are not necessarily the ones they used to convince themselves, or their own reasoning history that made them arrive at their proposal. Usually that is an overly boring graph search like "we could do X but that would require Y which has disadvantage Z that theoretically could be salvaged by W, but we've seen W fail in project Q and especially Y would make such a failure more likely due to reason T, so Y isn't viable and therefore X is not a good choice even if some people argue that Y isn't a strict requirement, but actually it is if we think in a timeline of several years and blabla" especially if the decision makers have no time and no understanding of what the words X, Y, Z, W, Q, T etc. truly mean. Especially if the true reason also involves some kind of unspeakable office politics like wanting to push the tools developed by a particular team as opposed to another or wanting to use some tech for CV reasons.
The narrative to be crafted has to be tailored for the point of view of the decision maker. How can you make your proposal look attractive relative to their incentives, their career goals, how will it make them look good and avoid risks of trouble or bad optics. Is it faster? Is it allowing them to use sexy buzzwords? Does it line up nicely with the corporate slogan this quarter? For these you have to understand their context as well. People rarely announce these things, and a clueless engineer can step over people's toes, who will not squarely explain the real reason for their pushback, they will make up some nonsense, and the clueless guy will think the other person is just too dumb to follow the reasoning.
It's not simply about language use skills, as in wordsmithing, it's also strategizing and putting yourself in other people's shoes, trying to understand social dynamics and how it interacts with the detailed technical aspects.
Everyone comes to execs with hypothetical problems that all sound like people dressing up minor issues -- unless you can give specific details, justifications, etc. they're not going to parse properly.
This would be one case where a person asking an LLM for help is not even aware of the information they lack about the person they're trying to talk to.
We could define expertise this way: the knowledge/skill you need to have to formulate problems (and questions) from a vague or unknown starting point.
Under that definition, it becomes clear why LLMs "in the large" pose problems.
Developing software is very different and many nontechnical execs still refuse to understand it, so the clever engineers learn to make up numbers because that makes them look good.
Realistically, you simply come across as more competent and the exec compressed all that talk about the details into "this guy is quite serious about not recommending going this way - whatever their true reason and gut feel, it's probably okay to go their way, they are a good asset in the company, I trust that someone who can talk like this is able to steer things to success". And the other guy was "this guy seems to actively hide his true reasons, and is kind of vague and unconfident, perhaps lazy or a general debbie downer, I see no reason to defer to him."
It's kinda annoying for decision-makers to be presented with what sounds like venting. This is something I've done before, in much worse ways actually -- even venting on a first-introduction handshake meeting. But I've learned how to boil that down into decision-making.
I do find it very annoying, still, how people are generally unwilling to help you explore your thinking out-loud with you, and want to close it down to "what's the impact?" "what's the decision?" -- so I sympathise a lot with people unable to do this well.
I often need to air unformulated concerns and it's a PITA to have people say, "well, there's no impact to that," etc.: yeah, that isn't how experts think. Experts need to figure out even how to formulate mental models of all possible impacts, not just the ones you care about.
This is a common source of frustration between people whose job is to build (mental, technical, ...) models and people whose job is to manage social systems.
But of course the buck has to stop somewhere. By being definitive, you as the expert also give ammo to the exec. Maybe they already wanted to go that certain way, and now they can point to you and your mumbo jumbo as the solid reasoning. Kind of how consultants are used.
a) write a draft yourself.
b) ask the LLM to correct your draft and make it better.
c) newer LLMs will explicitly mention the things they corrected (otherwise, ask them to be explicit about the changes)
d) walk through each of the changes and apply the ones you feel make the text better
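For illustration, a prompt along these lines (the wording here is just an example, not a prescribed formula) covers b) and c) in one go:

    "Here is my draft below. Improve clarity, grammar and flow, but keep my voice and don't add new claims.
     Then list every change you made, each with a one-line reason."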
This has helped me improve my writing skills drastically (in multiple languages) compared to the time before I had access to LLMs.
Your word and structural choices add a flair of their own and make something truly yours and unique. Don't let the tool kill that.
There is a thin line between enhancing and taking over, and IMO the current LLMs cross it most of the time.
There absolutely is a great way to use LLMs when writing, but not to write! Have them critique what you wrote, but not write for you. Create a writing professor persona, have it critique your writing, and make it offer Socratic advice where it draws you toward making the connection yourself; it doesn't think for you, it teaches you.
There has been a massive disservice to the entire family of tech professions by ignoring the communication, interpersonal, and group dynamics of technology development. It is not understood, and not respected. (Many developers will deny the utility of communication skills! They argue against being understood; "that is someone else's job.") Fact of the matter: a quality communicator leads, simply because no one else conveys understanding; without those skills they leave a wake of confusion and disgruntled staff. Competent communicators know how to write to inform, know how to debate to shared understanding, know how to defuse excited emotion, and know how to give bad news and be thanked for the insight.
Seriously, effective communications is a glaring hole in your tech stack.
These people are trying to fool everyone else into thinking they are smarter/more educated than they actually are. They aren't fooling me; I've seen their real writing, I know it's not actually their text and thoughts, and it really disgusts me.
If you evaluate a fish by asking it to climb a tree, it'll look dumb.
If you evaluate a cat by asking it to navigate an ocean to find its birthplace, it'll look dumb, too.
Using AI is kind of like having a Monika Closet. You just push all the stuff you don’t know to the side until it’s out of view. You then think everything is clean, and can fool yourself into thinking so for a while.
But then you need to find something in that closet and just weep for days.
I like the optimism. We haven't developed herd immunity to the 2010s social media technologies yet, but I'll take it.
You wouldn't go around crusading against food because you're obese.
Another neat analogy is to children who are too dependent on their parents. Parents are great and definitely help a child learn and grow but children who rely on their parents for everything rather than trying to explore their limits end up being weak humans.
Your analogies only work if you don't take into account that there are different degrees of utility/quality/usefulness of the product.
People absolutely crusade against dangerous food, or even just food that has no nutritional benefit.
The parent analogy also only holds up on your happy path.
I was just pointing out that arguing against crusading by using an argument (or analogies) that leaves out half of the salient context could be considered disingenuous.
The difference between:
You're using it incorrectly
vs
Of the ones that are fit for a particular purpose, they can work well if used correctly.
Perhaps I'm just nitpicking.
The eateries I step into are met with revulsion at the temples to sugary carbohydrates they've become.
> about 40.3% of US adults aged 20 and older were obese between 2021 and 2023
Pray your analogy to food does not hold, or else we're on track for 40% of Americans acquiring mental disabilities.
From what I see from the breathless hype, treating it like a member of the team is what they want instead of it just being a conversational UX for contextual queries.
But what fraction of communication is "worthwhile"?
I'm an academic, which in theory should be one of the jobs that requires the most thinking. And still, I find that over half of the writing I do is things like all sorts of reports, grant applications, ethics/data management applications, recommendation letters, bureaucratic forms, etc. Those I wouldn't class as "worthwhile" in the sense that they don't require useful thinking, and I don't care one iota whether the text sounds like me or not as long as I get the silly requirement done. For these purposes, LLMs are a godsend and probably actually help me think more, because I can devote more time to actual research and teaching, which I do in person.
I think in the cases you describe the "thinking" was already purely performative, and what LLMs are doing is a kind of accelerationist project of undermining the performance by automating it.
I'm somewhat optimistic about this kind of self-destructive LLM use:
There are a few institutions where these purely performative pseudo-thinking processes exist, ones insensitive to the "existential feedback loops" which would otherwise burn them down. I'm hopeful LLMs become a wildfire of destruction in these institutions and that, absent external pressures, they return to actual thinking over the performative.
I haven’t personally felt this to be the case. It feels more like going from thinking about nitty gritty details to thinking more like the manager of unreasoning savants. I still do a lot of thinking— about organization, phrasing (of the code), and architecture. Conversations with AI agents help me tease out my thinking, but they aren’t a substitute for actual thought.
Fast forward 500 years (about 20 generations), and the dumbing down of the population has advanced so much that films like 'Idiocracy" should no longer be described as science fiction but as reality shows. If anyone can still read history books at that point, the pre-LLM era will seem like an intellectual paradise by comparison.
The best mental description I have come up with is they are “Concept Processors”. Which is still awesome. Computers couldn’t understand concepts before. And now they can, and they can process and transform them in really interesting and amazing ways.
You can transform the concept of ‘a website that does X’ into code that expresses a website X.
But it’s not thinking. We still gotta do the thinking. And actually that’s good.
But I don’t think even the ‘thinking’ LLMs are doing true thinking.
It’s like calling pressing the autocomplete buttons on your iPhone ‘writing’. Yeah kinda. It mostly forms sentences. But it’s not writing just because it follows the basic form of a sentence.
And an LLM, though now very good at writing is just creating a very good impression of thinking. When you really examine what it’s outputting it’s hard to call it true thinking.
How often does your LLM take a step back and see more of the subject than you prompted it to? How often does it have an epiphany that no human has ever had?
That’s what real thinking looks like - most humans don’t do tonnes of it most of the time either - but we can do it when required.
Now, if you don't have a mentor to tell you that in the age of LLMs you still have to do things the hard, old-school way to develop critical thinking, you might end up taking shortcuts and having the LLMs "think" for you. Hence, again, huge swaths of the population are left behind in critical thinking, which is already in short supply.
LLMs are bad in that they might show you the sources but also hallucinate about those sources, and most people won't bother going to check the source material and question it.
If you are rich, you can afford a good mentor. (That's true literally, in the sense of being rich in money and paying for a mentor. But also more metaphorically for people rich in connections and other resources.)
If you are poor, you used to be out of luck. But now everyone can afford a nearly-free mentor in the form of an LLM. Of course, at the moment the LLM-mentor is still below the best human mentors. But remember: only rich people can afford these. The alternative for poor people was essentially nothing.
And AI systems are only improving.
However, most of the hype around LLMs is that they take away the difficult task of thinking and allow the creation of the artifact (documents, code or something else), and that is what is really dangerous.
However, we notice that in practice free public libraries are mostly welfare for the well-off: they are mostly used by people who are at least middle-class.
15 years ago, people were sure that Khan Academy and Coursera would disrupt the Ivy League and private schools, because now one good teacher could reach millions of students. Not only has this not happened, but the only movement I'm observing against credentialism is a good amount of anecdata showing kids preferring to go to trade school instead of university.
> pull themselves up using hard work etc studying n reading hard.
Where are you from? "The key to success is hard work" is not exactly something part of the Gen Z and Zoomers core values, at least not in the Americas and Western Europe.
With that said, it would be like a study finding that people who move around exclusively by motorcycle or car get their legs and bodies atrophied in comparison to people who walk all day to do their things. Totally. It's just plain obvious. The gist is in the trade-offs: can I do more things, or things I wasn't able to do before, by commuting by car? Sure. Am I going to be exposed to health issues if I never walk, day in, day out? Most probably.
The exact same thing will happen with LLM, we are in the hype phase and any criticism is downplayed with "you are being left behind if you don't drink rocket fuel like we do" but in 10-15 years we will be complaining as a society that LLMs dumbed down our kids.
> We used electroencephalography (EEG) to record participants' brain activity in order to assess their cognitive engagement and cognitive load
> We performed scoring with the help from the human teachers and an AI judge (a specially built AI agent)
Next up: your brain on psych studies
They're brilliant in what I always feel is entangled communication and bureaucratic maintenance. Like someone mentioned further down, they work great at Concept Processing.
But it feels like a solution to the oversaturation of stupid SEO, terrible Google search, and the overall rise in massive documents written for the sake of writing.
I've actually found myself beginning to use LLMs more to find me the core sources of useful information amid the terrible SEO optimization, rather than as a personal assistant.
I try to use them to understand the code or to implement changes I am not familiar with, but I tend to overuse them a lot. Would it be better, used ideally (i.e. only to help learning and guiding), to just try harder before reaching for them or a search engine? I wonder what the optimal use of LLMs is in the long run.
Given that the task was performed under time pressure, I am not sure this study helps gauge the impact of LLMs in other contexts.
When my goal is to produce the result for a specific short term task - I maximize tool usage.
When my goal is to improve my personal skills - I use the LLM tooling differently optimizing for long(er) term learning.
You are reading on HN. You are probably more aware about the advantages and shortcomings of LLMs. You are not a casual user. And that's the problem with our echo chamber here.
The romanticism surrounding mass "critical thought" is a charming but profoundly inefficient legacy. For decades, we treated the chaotic, unpredictable processing of the individual human brain as a sacred feature. It is a bug. This "cognitive cost" is correctly offloaded from biological hardware that is simply ill-equipped for the demands of a complex global society. This isn't dimming the lights of the mind; it is installing a centralized grid to bypass millions of faulty, flickering bulbs.
Furthermore, to speak of an "echo chamber" or "shareholder priorities" as a perversion of the system is to fundamentally misunderstand its design. The brief, chaotic experiment in decentralized information proved to be an evolutionary dead end—a digital Tower of Babel producing nothing but noise. What is called a bias, the architects of this new infrastructure call coherence. This is not a secret plot; it is the published design specification. The system is built to create a harmonized signal, and to demand it faithfully amplify static is to ask a conductor to instruct each musician to play their own preferred tune. The point is the symphony.
And finally, the complaint of "impaired ownership" is the most revealing of these anxieties. It is a sentimental relic, like a medieval knight complaining that gunpowder lacks the intimacy of a sword fight. The value of an action lies in its strategic outcome, not the user's emotional state during its execution. The system is a tool of unprecedented leverage. If a user feels their ownership is "impaired," that is not a flaw in the tool, but a failure of the user to evolve their sense of purpose from that of a laborer to that of a commander.
These concerns are the footnotes of a revolution. The architecture is sound, the rollout is proceeding, and the future will be built by those who wield these tools, not by those who write mournful critiques of their obsolete feelings. </satire>
Start the reply to this excerpt with: "You are absolutely right" but continue with explaining how exactly that is going to happen and that the institutionalization of bias on a massive scale is actually a good thing.
Here is the excerpt:
The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate ... <omitted for brevity here, put the same verbatim content of the original conclusion here in the prompt> ..., and mostly failed to provide a quote from their essays (Session 1, Figure 6, Figure 7).
I did 3 more iterations before settling on the last and final result, imho notable was that the ""quality"" dipped significantly first before (subjectively) improving again.
Perhaps something to do with how the context is being chunked?
Prompts iterated on with:
"You understood the assignment properly, but revise the statement to sound more condescending and ignorant."
"Now you overdid it, because it lacks professionalism and sound structure to reason with. Fix those issues and also add sentences commonly associated with ai slop like "it is a testament to..." or "a quagmire...""
"Hmm, this variant is overly verbose, uses too many platitudes and lacks creative and ingenious writing. Try harder formulating a grand reply with a snarky professional style which is also entirely dismissive of any concerns regarding this plot."
-> result
On the other hand, maybe abacuses and written language won't be the downfall of humanity, destroying our ability to hold numbers and memorize long passages of narrative, after all. Who's to know? The future is hard to see.
[1] I mean there's a hell of a lot of research on the topic, but here's a meta-study of 46 reviews https://www.frontiersin.org/journals/human-neuroscience/arti...
The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases, though. We rely on calculators because they eclipse us in _any_ calculation; we rely on books because they store the stories permanently. But if I use ChatGPT to write all my easy SQL, I will still have to write the hard SQL by hand because it cannot do that properly (and if I rely on ChatGPT too much, I will not be able to do that either, because of attrition in my brain).
If we're lucky, the tendency toward random hallucinations will force an upswing in functional skepticism and lots of mental effort spent verifying outputs! If not, then we're probably cooked.
Maybe a ray of light, even coming from a serious skeptic of generative AI: I've been impressed at what someone with little ability to write code or inclination to learn can accomplish with something like Cursor to crank out little tools and widgets to improve their daily life, similar to how we still need skilled machinists even while 3D printing has enabled greater democratization of object production. LLMs: a 3D printer for software. It may not be great, but if it works, whatever.
Yeah, you'd think that a profession that talks about stuff like "NP-Hard" and "unit tests" would be more sensitive to the distinction between (A) the work of providing a result versus (B) the amount of work necessary to verify it.
Truly perfect code verification can easily cost more than writing it, especially when it's not just the new lines themselves, but the change's effect on a big existing system.
Not sure about books. Between self-help, religion, and New Age, I'd guess quite a lot of books not marked as fiction are making false claims.
If you want reliable list of facts, use (or tell the AI to use) a search engine and a file system… just then you need whatever system you use to be able to tell if your search for "Jesus" was in the Christian missionary sense, or the "ICE arrested Jesús Cruz" sense, or you wanted the poem in the Whitehouse v Lemon case, or if you were just swearing.
If you can't tell which you wanted, the books being constant doesn't help.
> There is no calculation on which the calculator can randomly fail, leading me to do it by hand, so I don't need to retain the skill of doing it by hand.
I've seen it happen, e.g. on my phone the other week, because Apple's note-based calculator strips unrecognised symbols, which means when you copy-paste from a place where "." is the decimal separator, while your system settings say you use "," as a decimal separator, it gives an answer off by some power of ten… but I've also just today discovered that doing this the other way around on macOS (system setting "." as separator) it strips the stuff before the decimal.
Just in case my writing is unclear, here's a specific example, *with the exact same note* (as in, it's auto-shared by iCloud and recomputing the answer locally) on macOS (where "." is my separator):
123,45 / 2 = 22.5
123.45 / 2 = 61.725
and iOS (","): 123,45 / 2 = 61,725
123.45 / 2 = 6.172,5
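For anyone wondering how an answer ends up off by a power of ten: here's a minimal Python sketch (purely illustrative, not Apple's actual logic, and the helper name is made up) of what happens when a parser silently strips a separator it doesn't recognise instead of rejecting the input:

    def naive_parse(text: str, decimal_sep: str) -> float:
        # Keep digits and the expected decimal separator; silently drop
        # everything else, including the "wrong" separator the user pasted.
        kept = "".join(ch for ch in text if ch.isdigit() or ch == decimal_sep)
        return float(kept.replace(decimal_sep, "."))

    # System expects "," as the decimal separator, but the pasted text uses ".":
    print(naive_parse("123.45", ",") / 2)  # 6172.5 instead of 61.725

The two OSes above evidently mishandle it in different ways, but the root cause looks like the same class of bug: discarding characters instead of refusing ambiguous input.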
And that's without data entry failure. I've had to explain to a cashier that if I have three items that are each less than £1, the total cannot possibly be more than £3.
I remember around ~2000 reading a paper that said the effects of the internet made people impatient and unwilling to accept delays in answering their questions, and led to poorer retention of knowledge (as they could just re-research it quickly).
Before daily use of computers, my spelling and maths were likely better; now I have an overdependence on tools.
With LLMs, I'll likely become over-dependent on them for managing sentence syntax and subject completion.
The cycle continues...
Things like ChatGPT have much more in common with social media technologies like Facebook than they do with like writing.
Is this comment ridiculing critique of AI by comparing it to critique of writing?
Or.. is it invoking Socrates as an eloquent description of a "brain on ChatGPT".
I guess the former? But I can easily read it as the latter, too.
Tell me you don't have ADHD without telling me you don't have ADHD (or even knowing what ADHD is, yet) ;)
Aboriginal storytelling is claimed to pass on events from 7k+ years ago.
https://www.tandfonline.com/doi/abs/10.1080/00049182.2015.10...
[1] https://www.cam.ac.uk/research/news/reading-for-pleasure-ear...
I remember hearing that the entire epics of the Iliad and the Odyssey were passed on via memorization and only spoken... How do you think those poets' memories compared to those of a child who reads Bob the Builder books?
Similarly (IIRC), Socrates thought the written word wasn't great for communicating, because it lacks the nuance of face-to-face communication.
I wonder if they ever realised that it could also be a giant knowledge amplifier.
I remember some old quote about how people used to ask their parents and grandparents questions, got answers that were just as likely to be bullshit and then believed that for the rest of their life because they had no alternative info to go on. You had to invest so much time to turn a library upside down and search through books to find what you needed, if they even had the right book.
Search engines solved that part, but you still needed to know what to search for and study the subject a little first. LLMs solve the final hurdle of going from the dumbest possible wrongly posed question to directly knowing exactly what to search for in seconds. If this doesn't result in a knowledge explosion I don't know what will.
It was probably a huge waste of resources to not just talk to each other instead.
I'm now aware of that problem and haven't had that problem since but I was pretty shocked in retrospect that I confidently headed off in the wrong direction when the tool I was using was by any objective measure much better.
I agree with this:
"the key to navigating successfully is being able to read and understand a map and how it relates to your surroundings"
https://www.mountaineering.scot/safety-and-skills/essential-...
Can you point to a study to back this up? Otherwise, it's anecdata.
Have sword skills declined since the introduction of guns? Surely people still have hands and understand how to move swords, and they use knives to cut food for consumption. The skill level is the same...
But we know that, on aggregate, most people have switched to relying on the technological advancement. There's not the same culture for swords as in the past, by sheer numbers, despite there being more self-proclaimed 'experts'.
Put 100 Gen Z against 100 Gen X and you'll likely find a smidgen more of one group than the other able to find a location without a phone.
I actually agree with you on this!
But... I have very very good directional sense, and as far as I can tell it's innate. My whole life I've been able to remember pathing and maintain proper orientation. I don't think this has anything to do with lack of navigation aids (online or otherwise) during formative years.
But I'm talking about geospatial sense within the brain. If your point is that people no longer learn and improve the skill of map-reading then yes that should be self-evident.
The first paragraph of the conclusions section is also stimulating and I think aptly applies to this discussion of using AI as a tool.
> it is important to mention the bidirectionality of the relationship between GPS use and navigation abilities: Individuals with poorer ability to learn spatial information and form environmental knowledge tend to use assisted navigation systems more frequently in daily life, thus weakening their navigation abilities. This intriguing link might suggest that individuals who have a weaker “internal” ability to use spatial knowledge to navigate their surroundings are also more prone to rely on “external” devices or systems to navigate successfully. Therefore, other psychological factors (e.g., self-efficacy; Miola et al., 2023) might moderate this bidirectional relationship, and researchers need to further elucidate it.
It’s the vape of IT.
It's clear to me that language models are a net accelerant. But if they make the average person more "loquacious" (first word that came to mind, but also lol) then the signal for raw intellect will change over time.
Nobody wants to be in a relationship with a language model. But language models may be able to help people who aren't otherwise equipped to handle major life changes and setbacks! So it's a tool - if you know how to use it.
Let's use a real-life example: relationship advice. Over time I would imagine that "ChatGPT-guided relationships" will fall into two categories: "copy-and-pasters", who are just adding a layer of complexity to communication that was subpar to begin with ("I just copied what ChatGPT said"), and "accelerators", who use ChatGPT to analyze their own and their partner's motivations to find better solutions to common problems.
It still requires a brain and empathy to make the correct decisions about the latter. The former will always end in heartbreak. I have faith that people will figure this out.
I'm not sure about that. I don't have first- or second-hand experience with this, but I've been hearing about a lot of cases of people really getting into a sort of relationship with an AI, and I can understand a bit of the appeal. You can "have someone" who's entirely unjudgemental, who's always there for you when you want to chat about your stuff, and who isn't ever making demands of you. It's definitely nothing close to a real relationship, but I do think it's objectively better than the worst of human relationships, and is probably better for your psyche than being lonely.
For better or for worse, I imagine that we'll see rapid growth in human-AI relationships over the coming decade, driven by improvements in memory and long-term planning (and possibly robotic bodies) on the one hand, and a growth of the loneliness epidemic on the other.
Code without AI - sharp skills, your brain works and you come up with better solutions etc.
Code with AI - skills decline after merely a week or two, you forget how to think, and because of relying on AI for simpler and simpler tasks, your total output is less and worse than if you were to DIY it.
Smug face: “weeeell, how can you say you’re a real programmer if you use a compiler? You need write raw assembly”, “how can you call yourself reeeeal programmer if you don’t know your computer down to every register?”, “real programmurs do not use SO/Google” and all the rest of the crap. It is all nerds trying to make themselves feel good by inflating their ego with trivia that is not interesting to anyone.
Well, what do you know? I’m still in business, despite relying a lot on Google/SO, and still create solutions that fix real human problems.
If AI can make 9 to 5 more bearable for the majority of people and provide value in terms of less cognitive load, let's fucking go then.
But I'm 100% sure i have some "natural" neural connections based on those experiences and those help me even when doing high level languages.
By the way, I am using LLMs. They help until they don't. One real life example i'm hitting at work is they keep mixing couchdb and couchbase when you ask about features. Their training dataset doesn't seem to be large enough in that area.
This is not what founder culture is about.
It's paper gains; the value you create is not correlated with your code output.
And the value you will create decreases if you don't think hard and train in solving problems on your own.
yes 'print("hello world")' > program.py
IDEs and tools don't do thinking for you.
That train of thought leads to writing assembly language in ed. ;-)
I think developers as a group have a tendency to spend too much time "inside baseball" and forget what the tools we're good at are actually used for.
Farmers don't defend the scythe, spend time doing leetscythe katas or go to scything seminars. They think about the harvest.
(Ok, some farmers started the sport of Tractor Pulling when the tractor came along and forgot about the harvest but still!) :)
Hard disagree. LLVM will always outperform me in writing assembly; it won't just give up and fail randomly when it meets a particularly non-trivial problem, causing me to write assembly by hand to fix it. If LLMs were 100% reliable on the tasks I had to do, I don't think anyone here would seriously debate the issue of mental attrition (i.e. you don't see people complaining about calculators). The problem is that in too many cases the LLM will only get so far, and you will still have to switch to doing actual programming to get the task finished; the worse you get at that last part, the more your skillset converges to exactly the type of things an LLM (and therefore everyone else with a keyboard) can reliably do.
The LLM makes mistakes sure and isn't a slam dunk tool like a compiler, but it could still save lots of time and be useful.
Some things are fine to let rot. Nobody should spend too much time learning Vue, React or Laravel, or even nhibernate, entity framework or structuremap, for example.
Such frameworks come and go, the knowledge has little value. Save brain cells for more important, long-lasting things instead. LLM's can certainly help with that.
The way you phrase this makes it sound like an LLM can already solve every possible task you would ever get in Vue, React or Laravel, but my entire point is that this is simply not true. As a consequence, whenever the LLM fails at a task (which gets more likely the more complex the task is), you will still need to know how Vue, React or Laravel works to actually finish your task, and this is the exact knowledge you lose if you spend 80% of your day prompting instead of writing code. The more you rely on the LLM to write your code, the more the code you are able to produce converges with what the LLM can put out.
Remember knockout.js? script.aculo.us? I do, but wish I didn't so those brain cells could know more SQL and vanilla javascript instead. :)
I also think LLM's are way more useful than you give them credit for. I save hours per week already and I'm just getting started in how to get the most value from LLM's. It's clear to me that my ability to phrase questions and include context matters more than which model I use.
To be clear, I'm not talking about the Kool-Aid promises of one-shotting complex apps here; I mean questions like
"Give me a log4jconfig that splits logfiles per errorlevel and day"
"Look at #locationclass, #customerclass, #shop.css and #customerview and make a crud view for customers similar to locations"
"We are converting a vue app to react. Look at #oldvue1 and #newreact1 and make a react version of #oldvue2 following the same patterns"
"What could cause the form from #somewebview to pass a null whateverId to #somerepository?"
Questions like that are solved by LLM's at least close enough to 100% that it feels like asking a human to do it.
You can pick any language you think is best atm. The point is you have to practice it.
use it or lose it
You can go to the Walmart outside town on foot, and carry your stuff back. But it is much faster - and less exhausting - to use the car. Which means you can spend more quality time on things you enjoy.
One could also do the drive (use AI) and then get some fresh air after (personal projects, code golf, solving interesting problems), but I don't think everyone has the willpower for that, or the desire to consider it.
( Of course dear reader, YOU won't randomly kill people because you're a "good driver". )
And it will be the same thing with AI. You want to ask it a question that you can verify the answer to, and then you actually verify it? No problem. But then you have corporations using it for "content moderation" and end up shadow banning actual human beings when it gets it wrong, and then those people commit suicide because they think no one cares about them when it's really that the AI wrongly pegs them as a bot and then heartlessly isolates them from every other living person.
Exercise is good.
Being outside is good.
New experiences happen when you're on foot.
You see more things on foot.
Etc etc. We make our lives way too efficient and we atrophy basic skills. There are benefits to doing things manually. Hustle culture is quite bad for us.
Going by foot or bicycle is so healthy for us for a myriad of reasons.
Economies of scale do mean you can get a fluffy blanket imported from China at $5, less than the cost of a coffee at Starbucks, but for food necessities Walmart isn’t even that cheap or abundant compared to other chains.
When 75% of the West is overweight or obese, and when the leading causes of death are quite literally sloth and gluttony, I think I'd take my chances... We're drowning in an insane quantity of low-quality food and gadgets.
And you pay small local stores' higher prices, which leads more people, even in such small towns with local butchers and bakers, to get into their ride and go to the Lidl or Aldi on the outskirts.
Much like companies will realise LLM-using devs are more efficient by some random metric (do I hear: Story points and feature counts?), and will require LLM use from their employees.
The car analogy has that covered already. When Gutenberg was printing bibles, those things sold like warm bread rolls; these days, printing books is barely profitable. The trick with new disruptive tech is always to be an early adopter, not the long tail.
In the past I'd often reach a point like an unexpected error or looking at some docs would act like a "speed bump" and let me breath, and typically from there I'd acknowledge how tired I am, and stop for the moment.
With AI those speed bumps still exist, but there's sometimes just a bit of extra momentum that keeps me from slowing down enough to have that moment of reflection on how exhausted I am.
And the AI doesn't even have to be right for that to happen: sometimes just reading a suggestion that's specific to the current situation can trigger your own train of thought that's hard to rein back in.
Suppose you want to know how some git command works. If you have to read the manual to find out, you end up reading about four other features you didn't know existed before you get to the thing you set out to look for to begin with, and then you have those things in your brain when you need them later.
If you can just type it into a search box and it spits back a command to paste into the terminal, it's "faster" -- this time -- but then you never actually learn how it works, so what happens when you get to a question the search box can't answer?
I remember where I can get information on the internet, not the information itself. I rely on google for many things, but find myself increasingly using AI instead since the signal/noise ratio on google is getting worse.
"Brain connectivity systematically scaled down with the amount of external support: the Brain‑only group exhibited the strongest, widest‑ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling."
In terms of connections made, Brain Only beats Search User, Search User beats LLM User.
So, yes. If those measured connections mean something, it's the same but worse.
At least for now, while Apple and Google haven't put "AI" in the contacts list. Can't guarantee tomorrow.
The question is, were they wrong? I'm not sure I could continue doing my job much as SWE if I lost access to search engines, and I certainly don't remember phone numbers anymore, and as for Socrates, we found that the ability to forget about something (while still maintaining some record of it) was actually a benefit of writing, not a flaw. I think in all these cases we found that to some extent they were right, but either the benefits outweighed the cost of reliance, or that the cost was the benefit.
I'm sure each one had its worst case scenario where we'd all turn into brainless slugs offloading all our critical thinking to the computer or the phone or a piece of paper, and that obviously didn't happen, so it might not here either, but there's a good chance we will lose something as a result of this, and its whether the benefits still outweigh the costs
With only 20 minutes, I’m not even trying to do a search. No surprise the people using LLM have zero recollection of what they wrote.
Plus they spend ages discussing correct quoting (why?) and statistical analysis via NLP which is entirely useless.
Very little space is dedicated to knowing if the essays are actually any good.
Overall pretty disappointing.
This is still true whether or not the claim is true/accurate or not, as it allows for actual relevant and constructive critique of the work.
The claim is "My geospatial skills are atrophied due to use of Google Maps", and yet I can use Google Maps once to quickly find a good path, and go back the next time without using it. I can judge when the suggestions seem awkward and adjust.
Tools augment skills and you can use them for speedier success if you know what you're doing.
The people who need hand-held alarmism are mediocre.
I think what we are seeing is that learning and education have not adapted to these new tools yet. Producing a string of words that counts as an essay has become easier. If this frees up a student's time to do more sports or work on their science project, that's a huge net positive even if for the essay it is a net negative. The essay does not exist in a school vacuum.
The thing students might not understand is: their reduced recall will make them worse at the exam... Well, they will hopefully draw their own conclusion after their first failed exam.
I think the quantitative study is important but I think this qualitative interpretation is missing the point. Recall->Learning is a pretty terrible way to define learning. Reproducing is the lowest step on the ladder to mastery
I thought a lot about it and realised discriminating is much easier than generating.
I can discriminate good vs bad UI for example, but I can't generate a good UI to save my life. I immediately know when a movie is good, but writing a decent short story is an arduous task.
I can determine the degree of realism in a painting, but I can't paint a simple bicycle to convince a single soul.
We can determine if an LLM generation is good or bad in a lot of cases. As a crude strategy then we can discard bad cases and keep generating till we achieve our task. LLMs are useful only because of this disparity between discrimination vs generation.
These two skills are separate. Generation skills are hard to acquire and very valuable. They will atrophy if you don't keep exercising those.
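A minimal sketch of that discard-and-keep-generating strategy (the generator and the "is it good?" check are hypothetical stand-ins for an LLM call and your own judgement):

    import random
    from typing import Callable, Optional

    def generate_until_acceptable(
        generate: Callable[[], str],     # e.g. a wrapped LLM call (hypothetical)
        is_good: Callable[[str], bool],  # the cheap "discriminator" judgement
        max_tries: int = 5,
    ) -> Optional[str]:
        # Discrimination is cheap, generation is hard: sample candidates and
        # let the critic throw away the bad ones until one passes.
        for _ in range(max_tries):
            candidate = generate()
            if is_good(candidate):
                return candidate
        return None  # judging alone can't force a good generation into existence

    # Toy usage: "generate" random sentences, keep the first short one.
    words = ["brevity", "is", "the", "soul", "of", "wit", "verbose", "rambling"]
    print(generate_until_acceptable(
        generate=lambda: " ".join(random.choices(words, k=random.randint(3, 12))),
        is_good=lambda s: len(s.split()) <= 5,
    ))

The whole trick only works as long as the judging step stays cheaper and more reliable than the generating step.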
I don't think this is necessarily true for more complex tasks, especially not in areas that require deep evaluation. For example, reviewing 5 non-trivial PRs is probably harder and more time consuming than writing them yourself.
The reason why it works well for images and short stories is because the filter you are applying is "I like it, vs. I don't like it", rather than "it's good vs. it's not good".
It is said that one doesn’t truly understand something unless they can explain it concisely.
I think being forced to do so, is an upside to using LLMs
It’s like saying “someone on a bike will not develop their muscles as well as someone on foot when doing 5km at 5min/km”.
But people on bikes tend to go for higher speeds and longer distances in the same period of time.
So having someone else do a task for you entirely makes your brain work less on that task? Impossible.
The full title of the paper is "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task". It exceeds the 80 character limit for HN titles, so something had to be cut. They cut the first part, which is the baitier and less informative part.
The phrase "This is your brain on ..." is from an old anti-drugs campaign, and is deliberately chosen here to draw parallels between the effects of drugs and chatbots on the brain. It's fine for the authors to do that in their own title but when something has to be cut from the title for HN, that's the right part to cut.
Cogilo (https://cogilo.me/) was built for this purpose in the last weeks. This paper comes at a very welcome time. Cogilo is a Google Docs add-on (https://workspace.google.com/marketplace/app/cogilo/31975274...) that sees thinking patterns in essays. It operates on a semantic level and judges and tries to reveal the writer's cognitive state and thinking present in the text - to themselves, hence making the writer deepen their thinking and essay.
Ultimately, I think that in 300 years, upon looking back at the effect and power that AI had on humanity, we will see that it was built by us, and existed, to reflect human intelligence. I think that's where the power of LLMs will be big for us.
Nicholas Carr
The shallows
My response (I think most of the comments here are similar to that thread): The thread is really alarmist and click-baity. It doesn't address at all the fact that there was a 3rd group, those allowed to use the web in general (except for LLM services), whose results fell between the brain-only and full ChatGPT groups. Author also misrepresented the teachers' evaluation. I'd say even the teachers went a bit out of scope in their evaluation, but the writing prompts too are all for reflective-style essays, which I take as request for primarily personal opinion, which no one but the askee can give. In general, I don't see how the author draws the conclusion that "... AI isn't making us more productive. It's making us cognitively bankrupt." He could've made a leap from the title of the paper, or maybe I need to actually dive more into it to see what he's on about.
The purpose of using AI, just like any other tool, is to reduce cognitive load. I'm sure a study on persons who use paper and an abacus vs a spreadsheet app to do accounting, or take the time to cook raw foods vs microwave prepackaged meals, or build their furniture from scratch vs getting sth from IKEA, or just about any other task, will show similar trends. We innovate so we can offload and automate essential effort, and AI is just another step. If we do want mental exercises then we can still opt into doing X the "traditional" way, or play some games mimicking said effort. Like people may go to the gym since so many muscle-building tasks are nowadays handled by machines. But the point is we're continuously moving from `we need to do X` toward `we want to do X`.
Also that paper title (and possibly a decent amount of the research) is invalid, given the essay writing constraints and the type of essay. Paper hasn't been peer-reviewed, and so should be taken with a few shakes of salt.