Thinking and using ChatGPT are not.

Your Brain on ChatGPT — MIT Media Lab: https://share.google/RYjkIU1y4zdsAUDZt
We learned to think by writing only after writing became cheap. Yes, we’ve trained our brains to develop ideas by editing raw thoughts on paper, but it is just one of the possible methods.
I have read a lot of late 18th-, 19th-, and early 20th-century books and diaries, and it is plain that writers such as Tolstoy, Zweig, Goethe and others developed full books in their minds first, then wrote them from cover to cover in 20-30 days.
Thinking used to be detached from writing. That is a fact. We just lost that ability in the modern era thanks to cheap writing technology: pen and paper, then computers. I'm not saying the current approach is wrong, but don't assume that the only way to think is to write.
Socrates argued that writing would destroy people's memory. He wasn't 100% wrong, yet here we are. The criticism towards the use of LLMs is so deliciously ironic. The analogy with writing... writes itself. Kids that grow up with LLMs will just think differently.
They are making the point that writing is more than dumping a completed thought. The act of doing that helps you to critique your dumped thoughts, to have more thoughts about your thoughts, to simplify them or expand them.
It’s easier to go meta once you dump your state.
Kind of ironic, though - I wrote, but my thinking process wasn't so great :)
Thanks for the correction!
When reading long, closely reasoned passages of medieval philosophy, I've wondered about their development process, when there was no such thing as scratch paper.
> Kids that grow up with LLMs will just think differently.
People are just glibly saying this sort of thing, but what specifically is coming? I'm now wrestling with the problem of dealing with university students who don't hesitate to lean on LLMs. I'm trying to not be dismissive, but it feels like they are just thinking less, not differently.
LLMs are like having someone else do some or all of the thinking and writing and editing. So I do less thinking.
A bicycle lets my own energy go farther. Writing. A car lets me use an entirely different energy source. LLMs. Which one is better for my physical fitness?
Btw the idea about Tolstoy and others keeping those massive books in their head and cranking them out over a month is fascinating. Any evidence or others who imagine the same? In Tolstoy's case, he was a count and surely had the funds, no?
Bigger novels such as War and Peace were written episodically.
I think you have some misconceptions here. First, the article does not claim that thinking is writing, and especially not that there is no thinking without writing. They only explain that writing is supporting and driving a higher quality of thinking.
Second, paper isn't the only medium to write. And writing isn't the only persistent form of communication to support and improve thinking.
> Thinking used to be detached from writing.
It still is.
> I have read a lot of late 18th, 19th and early 20th century books and diaries, and it is plainly clear that writers such as Tolstói, Zweig, Goethe and others developed full books in their mind first, then wrote them from cover to cover in 20-30 days.
I seriously doubt that it was ever common for writers to compose a whole book in their head and then write it down. Maybe some writers with exceptional memories did this. But there's a whole book about how War and Peace was written based on textual evidence that wouldn't exist if it had simply popped out of Tolstoy's head fully formed: https://www.amazon.com/Tolstoy-Genesis-Peace-Kathryn-Feuer/d....
I have a better way to frame this:
Learning your own language and culture is a lifelong process.
A big phase, the adult phase, of learning is learning to write in your language (I'm implying there's more to writing than choosing words, especially in this context of language as thinking)
indeed, a lot of modern people never make it out of this big phase of learning their language. they never go beyond writing = thinking. but some people do learn the next phase
which involves distinguishing language itself from thoughts and ideas (is some idea known? understood? perceived? even when the idea is "the self" or some other complex notion)
so the only distinctive quality of the modern era I'll admit is that there are a lot of people who learn only rudimentary thinking-writing, and too few people who learn 'advanced' language-thinking, where writing becomes secondary to thinking.
finally, I learned this idea from reading around the meaningness blog/book
Nowadays that seems to be rare, but my impression from reading my journals is that it was often more common to dictate than to physically hand write things.
It seems clear that abstract thinking in particular is greatly aided by writing, because the written text acts like a thought cache. A bit like an LLM context window, which you can fill with lots of compact, compressed "tokens" (words).
Abstract thoughts are "abstract" because they can't be visualized in our mind, so they don't benefit from our intuitive imagination ability (Kant's "Anschauung"). So it is hard to juggle many abstract thoughts in our working memory.
We can also think of the working memory as the CPU registers, which are limited to a very small number, while the content of the CPU cache or RAM corresponds to the stuff we write down.
Our "anschauung" (visual imagination) is perhaps something like a fixed function hardware on a GPU, which is very good at processing complex audiovisual content, i.e. concrete thoughts, but useless for anything else (abstract thoughts).
Writing is thinking with a superpower. It's like using the Pensieve from Harry Potter, in the scene where Harry and Dumbledore pull memory wisps from their temples to rewatch in a mirror pool. Writing lets you apply your attention to an idea at multiple levels of analysis with significantly less effort than doing the same while also holding the idea in your head.
There's an excellent podcast (Radiolab, possibly) about how this conception of what the first amendment means is rather recent (1910s-1920s) and that the ideas of what "free speech" meant before that are really radically different.
And because reading and writing are thinking, we must not delegate them to AI models as a matter of habit. In particular, during students' formative years, they need to learn how to think in reading and writing mode: reflecting, note-taking, etc.
Compare it with the use of a pocket calculator: once you have a solid grounding, it's fine to use electronic calculators, but first one ought to learn how to calculate mentally and with pen and paper. If for no other reason than to check whether we made a typo when entering our calculation, e.g. when the result is off by 100 because we did not press the decimal point firmly enough.
I am very concerned that young people delegate to LLMs before reaching that stage.
I think we should trust children enough that they'll also figure out a crazy, changing technological world.
on the other hand, internet millennial ideals are fast dying. the digital dream of cultural and media abundance is turning into a nightmare of redundant content as information wars saturate the figurative airwaves
You might put a baby in a pool so it can learn to swim, but you make sure their environment is such that drowning is an impossibility. A child destined to be an Olympian swimmer still requires guidance, even if their natural ability and inclinations outpace both their peers and their elders.
Humans always have and always will use tech as a crutch -- to reduce time and effort (and energy expended). The 'physical enshittification' (PE) that has ensued from using mechanical crutches has made us lazy, fat, and sick. And now _mental_ crutches have arrived, which promise to replace our very thought processes, freeing us from all the annoying cognitive heavy lifting once done by our brains.
IMO, there's every reason to believe that the next step in human evolution will be driven by the continued misuse of tech as crutches, likely leading to widespread _mental_ enshittification (ME) -- doing to our minds what misuse of tech has already done to our culture and to our bodies.
Perhaps mankind can avoid this fate. But only if we insist on _thinking_ for ourselves.
My thinking has increased with the use of LLMs, not decreased, most likely because LLMs take the edge off of grind work like reading a lot of noise to capture the 1% signal, formulating accurate statements for abstract ideas, and bringing together various domains that are beyond your area of expertise.
Now will you make mistakes? Sure, but you would have made the same mistakes at a slower pace without LLMs anyways. Or more accurately, you just wouldn’t do the research or apply domains not in your area of expertise, and your thinking would be a lot more narrow.
The strawman is thinking that banning LLMs will induce rigorous thinking. Just like banning calculators does not make everyone good at math.
But allowing calculators WILL make those who like math reach much deeper into the field than without.
Have you ever run into any mathematician that praised the calculator for his/her career? I’d be really curious to read about that.
The modern equivalent of a calculator is Excel.
And that is not done with calculators; that is done quickly in your head by having practiced a lot of calculations manually. This is why engineering students still practice manual calculation in college in most places.
And the feeling is similar to how using Google on the 2004-2014 web was.
It used to be that Google would return a huge list of relevant links. Loading all of them was quick. Skimming the content was quick.
Now every search is a massive ad. Every site is slow to load, full of ads and useless slop. Slop which was written manually at first, then accelerated with Markov chains, now at light speed with LLMs.
So an LLM is required to filter through the LLM slop to find the tiny bit of real content.
> To [Thamus] came Thoth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Thoth in praise or blame of the various arts. But when they came to letters, this, said Thoth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Thoth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.
-- Plato, Phaedrus
We've been having this same conversation for over 2,000 years now. And while I actually think Thamus is probably correct, it doesn't change the reality that we are now using reading and writing for everything.
Now whether or not this will in the abstract become leverage for another type of skill or multiplier is to be seen.
When you speak or write instead of just think, you create something that did not previously exist: new words and sentences. When you write instead of speak, you aren't exactly creating something new — you're often just recording words that just as well could have been spoken. Using an LLM is much closer to the first case. It's creating something that didn't previously exist (an expanded thesis on a brief thought provided by you), and therefore seems to possibly risk the user's ability to think atrophying.
I agree with you but that article itself says, "for example, handwriting can lead to widespread brain connectivity."
Writing is thinking your own thoughts.
There's a big difference, and that is why writing is so painful for so many people. It's also why writing is critically important.
edit: Likewise, teaching is really important. Crystallization of thought is incredibly valuable and difficult.
And, as you point out, if you push yourself to read actively, it helps a lot!
I've heard that some philosophers like Schopenhauer argue that reading can become a passive process, where we simply follow another person's thoughts without engaging our own critical thinking. It's interesting to consider that it's not just LLMs but we too would become like stochastic parrots under certain circumstances.
What writing changes is that in words, you have to make it explicit how one thing leads to another. Partly, that's just due to the imposition of sentence structure.
Ironically, this is precisely the crazy thing about Trumpspeech: it's just associations - vibe-chaining if you will.
Writing requires thought.
LLMs do not think.
Which has a lot to do with how people intuit when text is LLM-generated.
> For example, LLMs can aid in improving readability and grammar, which might be particularly useful to those for which English is not their first language.
I don't know whether this has been empirically confirmed, but I have the strong belief that a manuscript with poor grammar, by a non-native English speaker, has a much higher probability of being rejected than the same manuscript copyedited by something like Grammarly or a SOTA LLM.
Ideally writing style should matter much less than the quality of the research, but reviewers are not just influenced by objective criteria but, unconsciously, also by vibes, which includes things like writing style and even formatting.
making writing valuable is another skill (see this evergreen lecture from the university of chicago leadership lab: https://www.youtube.com/watch?v=aFwVf5a3pZM)
however when I encounter people with low written or verbal acuity, they have to survive somehow, so it's wise to observe what tools of cunning they tend to reach for.
Hence, the models depend on human writing.
Realistically, going forward model training will just need to incorporate a step to remove data below some quality threshold, LLM-generated or otherwise.
Current humans might as well :-)
This morning I asked ChatGPT a question about how Quickbooks handles charts of accounts compared to NetSuite. It answered my question better than anything else would have.
Also, I'm currently using Claude Code to fix some bugs -- it's handling the heavy lifting while I think about what needs to happen.
I'm in favor of human writing as an underrated tool of culture-making...but the scope of what counts as "thinking" is expanding.
Say you start with a set of findings, for example, western blots, data from a transgenic mouse engineered for the relevant gene, and some single cell sequencing data. Your manuscript describes the identification of a novel protein, editing the gene in a mouse and showing what pathways are affected in the mouse.
What material would you give the LLM? How would the LLM "know" which of these novel findings were in any way meaningful? As far as I'm aware, it is unlikely that the LLM would be able to do anything other than paraphrase what you instruct it to write. It would be a return to the days before word processing became common, when researchers would either dictate their manuscripts to a typist, or hand the typist a stack of hand-written paper.
The actually hard part of writing scientific papers is not putting the words "down on paper" so to speak, but deciding what to say.
Given that they are trained on all of arXiv, ..., it's much more likely they are aware of all public relevant papers than your average researcher.
That's the outline.
I doubt an LLM would help much in deciding how best to present the finer details, as they will be very specific to your particular manuscript.
In that sense, you’d give the LLM the purpose of the paper, the field you’re writing in, and the relevant data from your lab notebook. Personally, I never enjoyed writing manuscripts — most of the time goes into citing every claim and formatting everything correctly, which often feels more like clerical work than communicating discovery.
I don’t mind if LLMs help write these papers. I don’t think learning to mimic this stylistic form necessarily adds to the process of discovery. Scientists should absolutely be rigorous and clear, but I’d welcome offloading the unnecessary tedium of stylized writing to automation.
I remain to be convinced that the tasks you propose an LLM could do contribute any more to the process of writing a paper than dictating to a typist could do in the 1950's. It's impressive for a machine, but not particularly productivity-boosting. Tedious tasks such as correctly formatting references belong to the copy-editing stage (i.e. very last stage of writing a paper), where indeed I have seen journals adopt "AI" approaches. But these processes are not a bottleneck in the scientist's workflow.
I certainly don't think the performance of LLMs that I'm familiar with would be any use at all in compiling the original data into scientifically accurate figures and text, and providing meaningful interpretations. Most likely they would simply throw out random "hallucinations" in grammatically correct prose.
If there is any use for LLMs in paper writing, I would think that it is for tedious but not well-defined tasks. For example, asking if an already written paper conforms to a journal's guidelines and style. I don't know about you, but I spend a meaningful amount of time [2] getting my papers into journal page limits. That involves rephrasing to trim overhangs, etc. "Rephrase the following paragraph to reduce the number of words by at least 2" is the kind of thing that LLMs really do seem to be able to do reliably.
1: As usual, the input data can be wrong, but that would be a problem for LLMs too. 2: I don't actually know how much time. It probably isn't all that long, but it's tedious and sure does feel like a long time while I'm doing it.
I have often spent more time doing this than writing the original draft, especially for grant applications...
Just as calculating can be implemented on a computer, which has low cognitive abilities but high algorithmic and procedural abilities, we need to extract the word-smithing capabilities of writing separate from the thinking portion. Our lack of distinction in terms reflects a muddled conceptual framework.
LLMs are excellent wordsmiths completely divorced from the concept of thinking. They break the correlative assumption that excellent writing corresponds with excellent thinking. Until now, we've been able to discern poor ideas because they have a certain aesthetic: think conspiracy rants in .docx files announcing a theory of everything based on vibrations. But that no longer holds. We have decent enough word-smithing coupled with a deficit of thinking. Unfortunately this breaks our heuristics, with consequences ranging from polluting our online commons to folks believing nonsense like ChatGPT naming itself Nova and telling them they are a torchbearer for spiritual gobbledygook.
My point is that we're in the process of untangling these two, and as a result, we're likely to see confusion and maybe even persistent misunderstanding until this distinction becomes a more common part of how we talk about and evaluate written work. They're living in an AGI world and we're just... not.
It's one of the reasons for the "one to throw away"-idea of writing shitty code first just to get it to work, and then remake it after you have thought through the problem by coding it.
It's quite similar in the hard sciences as it is in natural languages. For instance, I don't understand Hungarian at all. A few words ("igen", "jó napot kívánok") do not a knowledge of the language make.
Then German. I had to learn it in school, so I have an orders-of-magnitude better grasp of it, because I can actually say a few statements that form in my mind: "Nein, ich brauch nicht ein anderes stück Steak". It might not be 100% correct grammatically and vocabulary-wise, but it conveys the message and also transmits that I understand the context.
And then comes English, which I have spoken for 33 years. I actually THINK in English a lot of the time, and there are concepts I can't easily express in my native Romanian without resorting to a painfully long and sometimes unsuccessful software-driven (as opposed to FPGA-encoded) translation process.
Sometimes I struggle to fit those sentiments and connections to wording that I imagine will make sense, to someone else or even to myself. I guess that would be the "writing is thinking" part, but it seems more like "effective and conscientious (self-)communication is thinking."
And if thinking is dependent on language, maybe we should create a new language for artificial intelligence rather than feeding it human languages.
--
1: https://en.wikipedia.org/wiki/Linguistic_relativity#Artifici...
One key takeaway: if you want to learn/remember something better, always rewrite things in your own words, as both the act of writing AND the paraphrasing make it more sticky.
I see things more optimistically. If good writing leads to good thinking, then anything I do to improve my ability to write well transitively helps me to think well.
In that sense, I actually see a huge benefit to LLMs in improving my writing and therefore improving my thinking. Not only can I ask for detailed and powerful feedback, I can also ask for more details on background context or related topics that I wouldn't be aware of.
I believe judicious use of LLMs can make us better than we could be without them.
__rito__•4d ago
Let's say I am making something concrete by putting ideas, thoughts, and knowledge onto paper. While doing it, I am finding gaps and mistakes and finding opportunities to correct them. But it is not limited to 'correction'; it also opens newer dimensions and perspectives, ones that previously didn't exist in my conscious mind.
I consider writing a tool of thinking. Another tool is brainstorming with a group, or any group discussion in general. These add to your thoughts, make the existing ones more solid, open new directions, and unravel connections previously not accessible.
Read this essay by Paul Graham: Putting Ideas into Words [0]. And also refer to his other essays on writing.
There is also a great book by William Zinsser: Writing to Learn. I suggest this book to people.
Writing, when done while learning, works akin to teaching, one of the most crucial steps in the so-called Feynman Technique of learning.
[0]: https://paulgraham.com/words.html
zug_zug•4d ago
I'm not saying that writing can't be a useful tool to organize ideas, definitely it can. But I think I've found two things:
- Now the best way to "iterate" my thoughts is to rubberduck with ChatGPT; it's really amazing how much faster I can learn when I admit how little I know, even on something like global warming or an advanced math topic.
- By and large, "organizing my thoughts" isn't really a high-return activity in my life. Having an intelligently written blog that I've put hundreds of hours into has never done anything for my career or led to any personal connections, and honestly, who's to say my time wouldn't have been better spent just coming up with some jokes to network better, rather than having some cohesive theory of everything that nobody asked for?
FrankenDino•3d ago
https://www.goodreads.com/author/show/7881675.William_Zinsse...
aquariusDue•1d ago
For example a few days ago I realized that I found it hard to reverse a word in my mind, even a simple one. Try for yourself, think of a word and then reverse it in your head with your eyes closed.
Some people might struggle with the above, some may find it doable in their heads, but most can agree that it's absurdly easy if you can externalize it to paper or a text editor at least.
sorcerer-mar•1d ago
Sure, some type of latent structure was there all along (thus why I put the points down), but it wasn't necessarily visible to me, nor optimal, nor did it include/exclude all the right points. The need for iteration itself proves that the act of writing is actually doing the synthesis.
chambers•1d ago
It makes sense for our age. Amid a thousand distractions, typing on the keyboard gives the illusion of getting a grip. Note-taking on my computer gives the illusion of a second brain. Ululating on the internet gives the illusion of sharing thoughts.
Instead of "writing is thinking", I prefer "thought precedes speech" https://inframethodology.cbs.dk/?p=1127; it fits the small human mind better though I've yet to learn it properly.
lukebechtel•1d ago
often when I write an idea down, my "inner critic" process gets more activated upon seeing the textual representation.
thus I find gaps and flaws more easily.
not true for all domains, but many.
denkmoon•1d ago