From all of my observations, the impact of LLMs on human thought quality appears largely corrosive.
I’m very glad my kid’s school has hardcore banned them. In some classes they only allow students to turn in work that was done in class, under the direct observation of the teacher. There has also been a significant increase in “on paper” work vs work done on a computer.
Lest you wonder “what does this guy know anyways?”, I’ll share that I grew up in a household where both parents were professors of education.
Understanding the effectiveness of different methods of learning (my dad literally taught Science Methods) was a frequent topic. Active learning (creating things using what you’re learning about) is so much more effective than passive, reception-oriented methods. I think LLMs largely support the latter.
I also don't think the idea of LLMs being a harmful crutch is new knowledge per se; when I was in school, calculus class required a graphing calculator, but the higher-end models (TI-92 etc.) that had symbolic equation solvers were banned, for exactly the same reason. Having something that can give an answer for you fundamentally undermines the value of the exercise in the first place, and cripples your growth while you use it.
With LLMs giving you ready-made answers I feel like it's the same. It's not as rewarding because you haven't obtained the answer yourself. Although it did feel rewarding when I was interrogating an LLM about how CSRF works and it said I asked a great question when I asked whether it only applies to forms, because it seems like fetch has a different kind of browser protection.
No one today learns that anymore. The vast, vast majority have no idea, and I don’t think people are dumber because of it.
That is to say, I think it’s not cut-and-dried. I agree you need to learn something, but sometimes it’s okay to use a tool.
I tried to encapsulate that to some degree when writing something (perhaps poorly?) recently actually - https://smcleod.net/2025/03/the-democratisation-paradox-what...
The real question isn't "is it okay to use a tool" but "how does using a tool affect what you learn".
In the cases of both LLMs and symbolic solving calculators, I believe the answer is "highly detrimental".
Later, at university, I was studying engineering, and we were forced to prepare all the technical drawings manually in the first year of study. Like literally with pencil and ruler. Even though computer graphics were widely used and were the de facto standard.
Personally I don't believe a hardcore ban will help with any of this. It won't stop the progress either. It's much better to help people learn how to use things instead of forcing them to deal with "old school" stuff only.
While this is superficially similar, I believe we are talking about substantially different things.
Learning (the goal) is a process. In the case of an assignment, the resulting answer / work product, while it is what is requested, is critically not the goal. However, it is what is evaluated, so many confuse it with the goal (“I want to get a good grade”).
Anything which bypasses the process makes the goal (learning) less likely to be achieved.
So, I think it is fine to use a calculator to accelerate your use of operations you have already learned and understand.
However, I don’t think you should give 3rd graders calculators that just give them the answer to a multiplication or division when they are learning how those things work in the first place.
Similarly, I think it’s fine to do research using the internet to read sources you use to create your own work.
Meanwhile, I don’t think it’s fine to do research using the internet to find a site where you can buy a paper you can submit as your own work.
Right now, LLMs can be used to bypass a great deal of process, which is why I support them not being used.
It’s possible, maybe even likely that we’ll end up with a “supervised learning by AI” approach where the assignment is replaced by “proof of process”, a record of how the student explored the topic interactively. I could see that working if done right.
So most are not curious. So what do you do for them?
Plus, many kids fail school not because of laziness, but because of their toxic environment.
Kids optimize. When I was in high school I was fully capable of getting straight F's in a class I didn't care about and straight A's in a class I enjoyed.
Why bother learning chemistry when you could instead spend that time coding cool plugins and websites in PHP that thousands of internet strangers are using? I really did build one of the most popular phpBB plugins and knew I was gonna be a software engineer. Not that my chemistry professor cared about any of that or even understood what I was talking about.
Like any other tool, it's more a question of how they're used. For example, I've seen incredible results for students who use ChatGPT to interrogate ideas as they synthesize them. So, for example: "I'm reading this passage PASSAGE and I'm confused about phrase X. The core idea seems similar to Y, which I am familiar with. If I had to explain X, I'd put it like this: ATTEMPT. Can you help me understand what I'm missing?"
The results are very impressive. I'd encourage you to try it out if you haven't.
I try to use AI to automate things I already know and force myself to learn things I don't know.
It takes discipline/curiosity but it can be a net positive.
“The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. The output from AI answers questions. It teaches me facts. But it doesn’t really help me know anything new.”
I think the thesis is that with AI there is less need and incentive to “put the work in” instead of just consuming what the AI outputs, and that in consequence we do the needed work less and atrophy.
What does that mean, I’m curious?
The schools and university I grew up in had a “single-sanction honor code” which meant if you were caught lying or cheating even once you would be expelled. And you signed the honor code at the top of every test.
My more progressive friends at other schools who didn’t have an honor code happily poo-pooed it as a repugnantly harsh, old-fashioned standard. But today I don’t see a better way of enforcing “don’t use AI” in schools.
I’m not sure how LLM output is indistinguishable from Wikipedia or World Book.
Maybe? And if the question is “did the student actually write this?” (which is different from “do they understand it?”), there are lots of different ways to assess whether a given student understands the material that don’t involve submitting typed text but still involve communicating clearly.
If we allow LLMs, like we allow calculators, just how poor LLMs are will become far more obvious.
“Falsifying or inventing any academic work, including the use of AI (ChatGPT, etc)”
Additionally, as mentioned, the school is taking actions to change how work is done to ensure students are actually doing their own work - such as requiring written assignments be completed during class time, or giving homework on physical paper that is to be marked up by hand and returned.
Apparently this is the first year they have been doing this, as last year they had significant problems with submitted work not being authored by students.
This is in an extremely competitive Bay Area school, so there can be a lot of pressure from parents on students to make top grades, and sometimes that has negative side effects.
I mean, I get the existential angst though. There's a lot of uncertainty about where all this is heading. But, and this is really a tangent, I feel that the direction of it all lies in the intersection between politics, technology and human nature. I feel like "we the people" hand a walkover to powerful actors if we do not use these new powerful tools in service of the people. For one, to enable new ways to coordinate and organise.
That's an interesting point. But here is the thing: you are supposed to drive. Not the AI god. Look at it as an assistant whom you can interrupt, instruct, correct, ask to redo. While focusing on the 'what', you can delegate some of the 'how' problems to it.
Some of my best writing came during the time that I didn't try to publicize the content. I didn't even put my name on it. But doing that and staying interested enough to spend the hours to think and write and build takes a strange discipline. Easy for me to say as I don't know that I've had it myself.
Another way to think about it: Does AI turn you into Garry Kasparov (who kept playing chess as AI beat him) or Lee Sedol (who, at least for now, has retired from Go)?
If there's no way through this time, I'll just have to occasionally smooth out the crinkled digital copies of my past thoughts and sigh wistfully. But I don't think it's the end.
I experienced this when I was younger with my RC planes. I joined some forum and I felt like everything I did had to be posted/liked to have value. I'd post designs/fantasies and get the likes, then lose interest and not actually do it after I got the ego bump.
Granted, I'm blessed to not have much busywork; if I need to produce corporate docs or listicles AI would be a massive boon. But I also suspect AI will be used to digest these things back into small bullet points.
The core takeaway for me is that if you have the desire to stretch your scope as wide as possible, you can get things done in a fun way with reduced friction, and still feel like your physical being is what made the project happen. Often this means doing something that is either multidisciplinary or outside of the scope of just being behind a computer screen, which isn't everyone's desire and that's okay, too.
Like, in the recent past, someone who wanted to achieve some goal with software would either need to learn a bunch of stuff about software development, or would need to hire someone like me to bring their idea to life. But now, they can get a lot further on their own, with the support of these new tools.
I think that's good, but it's also nerve-wracking from an employment perspective. But my ultimate conclusion is that I want to work closer to the ends rather than the means.
The post laments how everything feels useless when any conceivable "end state" a human can produce will be inferior to what LLMs can do.
So an honest attention toward the means of how something comes about—the process of the thinking vs the polished great thought—is what life is made of.
Another comment talks about hand-made bread. People do it and enjoy it even though "making bread is a solved problem".
I think a way to square the circle is to recognize that people have different goals at different times. As a person with a family who is not independently wealthy, I care a lot about being economically productive. But I also separately care about the joy of creation.
If my goal in making a loaf of bread is economic productivity, I will be happy if I have a robot available that helps me do that quickly. But if my goal is to find joy in the act of creation, I will not use that robot because it would not achieve that goal.
I do still find joy in the act of creating software, but that was already dwindling long before chatgpt launched, and mostly what I'm doing with computers is with the goal of economic productivity.
But yeah I'll probably still create software just for the joy of it from time to time in the future, and I'm unlikely to use AIs for those projects!
But at work, I'm gonna be directing my efforts toward taking advantage of the tools available to create useful things efficiently.
An AI is _not_ going to get awarded a PhD, since by definition, such are earned by extending the boundaries of human knowledge:
https://matt.might.net/articles/phd-school-in-pictures/
So rather than accept that an LLM has been trained on whatever it is you wish to write, write something which it will need to be trained on.
Having said that I am very worried about kids growing up with AI and it stunting their critical thinking before it begins - but as of right this moment AI is extremely sub par at genuinely good ideas or writing.
It’s an amazing and useful tool I use all the time though and would struggle to be without.
The key is to treat AI as a tool, not as a magic wand that will do everything for you.
Even if AI could handle every task, leaning on it that way would mean surrendering control of your own life—and that’s never healthy.
What works for me is keeping responsibility for the big picture—what I want to achieve and how all the pieces fit together—while using AI for well-defined tasks. That way I stay fully in control, and it’s a lot more fun this way too.
No LLM can ever express your unique human experience (or even speak from experience), so on that axis of competition you win by default.
Regurgitating facts and the mean opinion on topics is no replacement for the thoughts of a unique human. The idea that you're competing with AI on some absolute scale of the quality of your thought is a sad way to live.
It was never a useful metric to begin with. If your life goal is to be #1 on the planet, the odds are not in your favor. And if you get there, it's almost certainly going to be unfulfilling. Who is the #1 Java programmer in the world? The #1 topologist? Do they get a lot of recognition and love?
I used to write open source a lot but lately, I don't see the point. Not because I think LLMs can produce novel code as good as mine, or will be able to in the near future. But because any time I come up with a new solution to something, it will be stolen and used without my permission, without giving me credit or without giving users the rights I give them. And it will be mangled just enough that I can't prove anything.
Large corporations were so anal about copyright that people who ever saw Microsoft's code were forbidden from contributing to FOSS alternatives like wine. But only as long as copyright suited them. Now abolishing copyright promises the C-suite even bigger rewards by getting rid of those pesky expensive programmers, if only they could just steal enough code to mix and match it with enough plausible deniability.
And so even though _in principle_ anybody using my AGPL code, or anything that incorporates my AGPL code, has the right to inspect and modify said code, tiny fractions of my AGPL code now have millions or potentially billions of users, but nobody knows and nobody has the right to do anything about it.
And those who benefit the most are those who already have more money than they can spend.
Most of that 'corpus' isn't even on the Internet so it is wholly unknown to our "AI" masters.
Similarly, in the future we will not need mental "labor", but to keep ourselves sharp we will need to engage in mental exercises. I am thinking of picking up chess again for just this reason.
I still create, I just use physical materials like clay and such, to make things that AI can't yet replicate.
So the fun, all along, was not in the process of creation itself, but in the fact that the creator could somehow feel superior to others not being able to create? I find this to be a very unhealthy relationship to creativity.
My mixer can mix dough better than I can, but I still enjoy kneading it by hand. The incredibly good artisanal bakery down the street did not reduce my enjoyment of baking, even though I cannot compete with them in quality by any measure. Modern slip casting can make superior pottery by many different quality measures, but potters enjoy throwing it on a wheel and producing unique pieces.
But if your idea of fun is tied to the "no one else can do this but me", then you've been doing it wrong before AI existed.
Maybe AI is like Covid, where it will reveal that there were subtle differences in the underlying humans all along, but we just never realized it until something shattered the ability for ambiguity to persist.
I'm inclined to say that this is a destabilising thing, regardless of my thoughts on the "right" way to think about creativity. Multiple ways could coexist before, and now one way no longer "works".
But his argument does not align with that. His argument is that he enjoys the act of writing itself. If he views his act of writing (regardless of the idea being transmitted) as his "contribution to world's knowledge", then I have to say I disagree - I don't think his writing is particularly interesting in and of itself. His ideas might be interesting (even if I disagree), but he obviously doesn't find the formation of ideas enjoyable enough.
So while AI might remove the need for human beings to engage in certain practical activities, it cannot eliminate the theoretical, because by definition, theory is done for its own sake, to benefit the person theorizing by leading them to understanding something about the world. AI can perhaps find a beneficial place here in the way books or teachers do, as guides. But in all these cases, you absolutely need to engage with the subject matter yourself to profit from it.
As some others have commented, you can find rewards that aren't monetary to motivate you, and you can find ways to make your work so unique that people are willing to pay for it.
Technology forces us to use the creative process to more creatively monetize our work.
Self-actualisation should be about doing the things that only you can. Not better than anyone else, but more like the specific things that only you, with the sum of your experience, expertise, values and constraints, can do.
So for some, yes. It is of course also true that many people derive self-worth and fulfillment from contributing positively to the world, and AI automating the productive work in which they specialize can undermine that.
What I am saying is that (1) I regard this as an unhealthy relationship to creativity (and I accept that this is subjective), and (2) that most people do not feel that way, as can be confirmed by the fact that chess, go, and live music performances are all still very much practiced.
People realize this at various points in their life, and some not at all.
In terms the author might accept, the metaphor of the stoic archer comes to mind. Focusing on the action, not the target, is what relieves one of the disappointment of outcome. In this case, the action is writing while the target is having better thoughts.
Much of our life is governed by the success at which we hit our targets, but why do that to oneself? We have a choice in how we approach the world, and setting our intentions toward action and away from targets is a subtle yet profound shift.
A clearer example might be someone who wants to make a friend. Let's imagine they're at a party: if they go in with the intention of making a friend, they're setting themselves up for failure. They have relatively little control over that outcome. However, if they go in with the intention of showing up authentically - something people tend to appreciate, and something they have full control over - the chances of them succeeding increase dramatically.
Choosing one's goals - primarily grounded in action - is an under-appreciated perspective.
The primary reason is not that it relieves us of the disappointment, but that worrying about the outcome increases our anxiety and impacts our action which hampers the outcome.
With AGI, knowledge workers will be worth less until they are worthless.
While I'm genuinely excited about the scientific progress AGI will bring (e.g. curing all diseases), I really hope there's a place for me in the post-AGI world. Otherwise, like the potters and bakers who can't compete in the market with cold-hard industrial machines, I'll be selling my python code base on Etsy.
No Set Gauge had an excellent blog post about this. Have a read if you want a dash of existential dread for the weekend: https://www.nosetgauge.com/p/capital-agi-and-human-ambition.
agreed on the bumpy road - i don't see how we'll reach a post-scarcity society unless there is an intentional restructuring (which, many people think, would require a pretty violent paradigm shift).
without a dramatic shift in wealth distribution (no less than the elimination of private wealth and the profit motive), we can't have a post-scarcity society. capitalism depends entirely upon scarcity, artificial or not.
I wouldn’t worry too much yet.
"Knowledge workers" being in charge is a recent idea that is, perhaps, reaching end of life. Up until WWII or so, society had more smart people than it had roles for them. For most of history, being strong and healthy, with a good voice and a strong personality, counted for more than being smart. To a considerable extent, it still does.
In the 1950s, C.P. Snow's "Two Cultures" became famous for pointing out that the smart people were on the way up.[1] They hadn't won yet; that was about two decades ahead. The triumph of the nerds took until the early 1990s.[2] The ultimate victory was, perhaps, the collapse of the Soviet Union in 1991. That was the last major power run by goons. That's celebrated in The End of History and the Last Man (1992).[3] Everything was going to be run by technocrats and experts from now on.
But it didn't last. Government by goons is back. Don't need to elaborate on that.
The glut of smart people will continue to grow. Over half of Americans with college educations work in jobs that don't require a college education. AI will accelerate that process. It doesn't require AI superintelligence to return smart people to the rabble. Just AI somewhat above the human average.
[1] https://en.wikipedia.org/wiki/The_Two_Cultures
[2] https://archive.org/details/triumph_of_the_nerds
[3] https://en.wikipedia.org/wiki/The_End_of_History_and_the_Las...
I'm curious why so many people see creators and intellectuals as competitive people trying to prove they're better than someone else. This isn't why people are driven to seek knowledge or create Art. I'm sure everyone has their reasons for this, but it feels like insecurity from the outside.
Looking at debates about AI and Art outside of IP often brings out a lot of misunderstandings about what makes good Art and why Art is a thing man has been compelled to make since the beginning of the species. It takes a lifetime to select techniques and thought patterns that define a unique and authentic voice. A lifetime of working hard on creating things adds up to that voice. When you start to believe that work is in vain because the audience doesn't know the difference it certainly doesn't make it feel rewarding to do.
No offense, but I've found that AI outputs very polished but very average work. If I am working on something more original, it is hard to get AI to output reasoning about it without heavy explanation and guidance. And even then, it will "revert to the mean" and stumble back into a rut of familiar concepts after a few prompts. Guiding it back onto the original idea repeatedly quickly uses up context.
If an AI is able to take a sliver of an idea and output something very polished from it, then it probably wasn't that original in the first place.
If the "means justify the ends" then doing anything is its own reason.
And in the _end_, the cards will land where they may. Ends-justify-means is really logical and alluring, until I realize: why am I optimizing for the END?
Most AIs need to be explicitly told before you start this: you tell them not to agree with you, to ask more questions instead of providing the answers, to offer justifications and background as to why those questions are being asked. This helps you refine your ideas more, understand the blind spots, and explore different perspectives. Yes, an LLM can refine the idea for you, especially if something like that is already explored. It can also be the brainstorming accessory who helps you to think harder. Come up with new ideas. The key is to be intentional about which way you want it. I once made Claude roleplay as a busy exec who would not be interested in my offering until I refined it 7 times (and it kept offering reasons as to why an AI exec would or would not read it).
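To make that concrete, a standing instruction of that kind might look roughly like this (a minimal sketch using the OpenAI Python SDK; the model name and the exact wording are placeholder assumptions, not a prescription, and any chat-capable model or SDK works the same way):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A "sparring partner" instruction: the model is told up front not to agree
# by default, to ask questions instead of handing over answers, and to
# justify why each question is being asked.
SPARRING_PARTNER = (
    "You are a critical sparring partner, not an assistant. "
    "Do not agree with me by default, and do not write the finished argument for me. "
    "Ask probing questions one at a time, point out blind spots and counter-examples, "
    "and briefly explain why each question is worth answering."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model you use
    messages=[
        {"role": "system", "content": SPARRING_PARTNER},
        {"role": "user", "content": "Here is the idea I want to stress-test: ..."},
    ],
)
print(response.choices[0].message.content)
```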
Try this. The world is infinitely complex. AI is very good at dealing with the things it knows and can know. It can write more accurately than I can, spell better. It just takes stuff I do and learns from my mistakes. I'm good with that. But here is something to ask AI:
"Name three things you cannot think about because of the language you use?"
Or "why do people cook curds when making cheese."
AI is at least to some degree an artificial regurgitarian. It can tell you about things that have been thought. Cool. But here is a question for you. Are there things that you can think about that have not been thought about before or that have been thought about incorrectly?
The reason people cook curds is because the goal of cheese making was (in the past) to preserve milk, not to make cheese.
Something I’ve been thinking about lately is the idea of creative stagnation due to AI. If AI’s creativity relies entirely on training from existing writing, architecture, art, music, movies, etc., then future AI might end up being trained only on derivatives of today’s work. If we stop generating original ideas or developing new styles of art, music, etc., how long before society gets stuck endlessly recycling the same sounds, concepts, and designs?
I do understand where the author is coming from. Most of the time, it is easier to read an answer, regardless of whether it is right or wrong, relevant or not, than to think of one. So AI does take that friction of thinking away.
But I am still disappointed by all this doom because of AI. I am inclined to throw up my hands and say "just don't use it then". The process of thinking is where the fun lies, not in showing the world that I am better than so-and-so or always right.
I find immense joy and satisfaction when I write poetry. It's like crafting a puzzle made of words and emotions. While I do enjoy the output, if there is any goal it is to tap into and be absorbed by the process itself.
Meanwhile, code? At least for me, and to speak nothing of those that approach the craft differently, it is (almost) nothing but a means to an end! I do enjoy the little projects I work on. Hmm, maybe for me software is about adding another tool to the belt that will help with the ongoing journey. Who knows. It definitely feels very different to outsource coding than to outsource my artistic endeavors.
One thing that I know won't go away are the small pockets of poetry readings, singer-songwriters, and other artistic approaches that are decidedly more personal in both creation and audience. There are engaged audiences for art and there are passive consumers. I don't think this changes much with AI.
I can't relate to this at all. The reason I write, debate, or think at all is to find out what I believe and discover my voice. Having an LLM write an essay based on one of my thoughts is about as "me" as reading a thinkpiece that's tangentially related to something I care about. I write because I want to get my thoughts out onto the page, in my voice.
I find LLMs useful for a lot of things, but using an LLM to shortcut personal writing is antithetical to what I see as the purpose of personal writing.
I'd like to challenge a few things. I rarely have a moment where an LLM provides me a creative spark. It's more that I don't forget anything from the mediocre galaxy of thoughts.
See AI as a tool.
A tool that helps you to automate repetitive cognitive work.
ChatGPT wrote that post more eloquently:
On Thinking
May 16, 2025
I’ve been stuck.
Every time I sit down to write a blog post, code a feature, or start a project, I hit the same wall: in the age of AI, it all feels pointless. It’s unsettling. The joy of creation—the spark that once came from building something original—feels dimmed, if not extinguished. Because no matter what I make, AI can already do it better. Or soon will.
What used to feel generative now feels futile. My thoughts seem like rough drafts of ideas that an LLM could polish and complete in seconds. And that’s disorienting.
I used to write constantly. I’d jot down ideas, work them over slowly, sculpting them into something worth sharing. I’d obsess over clarity, structure, and precision. That process didn’t just create content—it created thinking. Because for me, writing has always been how I think. The act itself forced rigor. It refined my ideas, surfaced contradictions, and helped me arrive at something resembling truth. Thinking is compounding. The more you do it, the sharper it gets.
But now, when a thought sparks, I can just toss it into a prompt. And instantly, I’m given a complete, reasoned, eloquent response. No uncertainty. No mental work. No growth.
It feels like I’m thinking—but I’m not. The gears aren’t turning. And over time, I can feel the difference. My intuition feels softer. My internal critic, quieter. My cleverness, duller.
I believed I was using AI in a healthy, productive way—a bicycle for the mind, a tool to accelerate my intellectual progress. But LLMs are deceptive. They simulate the journey, but they skip the most important part. Developing a prompt feels like work. Reading the output feels like progress. But it's not. It’s passive consumption dressed up as insight.
Real thinking is messy. It involves false starts, blind alleys, and internal tension. It requires effort. Without that, you may still reach a conclusion—but it won’t be yours. And without building the path yourself, you lose the cognitive infrastructure needed for real understanding.
Ironically, I now know more than ever. But I feel dumber. AI delivers polished thoughts, neatly packaged and persuasive. But they aren’t forged through struggle. And so, they don’t really belong to me.
AI feels like a superintelligence wired into my brain. But when I look at how I explore ideas now, it doesn’t feel like augmentation. It feels like sedation.
Still, here I am—writing this myself. Thinking it through. And maybe that matters. Maybe it’s the only thing that does.
Even if an AI could have written this faster. Even if it could have said it better. It didn’t.
I did.
And that means something.
As far as I can tell, LLMs are incapable of any of the above.
I'd love to hear from LLM experts how LLMs can ever have original ideas using the current type of algorithms.
It makes me not want to participate in those communities (although to be honest, spending less time commenting online would probably be good for me).
1) If you wrote most of it yourself then you failed to adequately utilize AI Coding agents and yet...
2) If AI wrote most of it, then there's not exactly that much of a way to take pride in it.
So the new thing we can "take pride in" is our ability to "control" the AI, and it's just not the same thing at all. So we're all going to be "changing jobs" whether we like it or not, because work will never be the same, regardless of whether you're a coder, an artist, a writer, or an AD agency fluff writer. Then again pride is a sin, so just GSD and stop thinking about yourself. :)
What LLMs can't replace is network effects. One LLM is good but 10 LLMs/agents working together creating shared history is not replaceable by any LLM no matter how smart it becomes.
So it's simple. Build something that benefits from network effects and you will quickly find new ideas; at least it worked for me.
So now I am exploring, e.g., synthetic prediction markets via https://www.getantelope.com or
rethinking MySpace but for agents instead, like: https://www.firstprinciple.co/misc/AlmostFamous.mp4
AI wants to be social :)
Some people are all go and no stop. We call them impulsive.
Some people may LOOK all go but have wisdom (or luck) behind the scenes putting the brakes on. Example: Tom Cruise does his own stunts, and must have a good sense for how to make it safe enough.
What this author touches on is a chief concern with AI. In the name of removing life friction, it removes your brakes. Anything you want to do, just ask AI!
But should you?
I was out the other day, pondering what the word "respect" really means. It's more elusive than simply liking someone. Several times I was tempted to just google it or ask AI, but then how would I develop my own point of view? This kind of thing feels important to have your own point of view on. And that's what we're losing: the things we should think about in this life, we'll be tempted not to think about anymore. And we'll come out worse for it.
All go, no brakes
E.g. imagine it was the case that you could write a blog post, with some insight, in some niche field – but you know that traffic isn't going to get directed to your site. Instead, an LLM will ingest it, and use the material when people ask about the topic, without giving credit. If you know that will happen, it's not a good incentive to write the post in the first place. You might think, "what's the point".
Related to this topic - computers have been superhuman at chess for two decades; yet strong human chess players still get credit, recognition, and I would guess, satisfaction, from achieving the level they get to. Although, obviously, the LLM situation is on a whole other level.
I guess the main (valid) concern is that LLMs get so good at thought that humans just don't come up with ideas as good as them... And can't execute their ideas as well as them... And then what... (Although that doesn't seem to be the case currently.)
I don't think that's a valid concern, because LLMs can't think. They are generating tokens one at a time. They're calculating the most likely token to appear based on the arrangements of tokens that were seen in their training data. There is no thinking, there is no reasoning. If they seem like they're doing these things, it's because they are producing text that is based on unknown humans who actually did these things once.
Huh? They are generating tokens one at a time - sure that's true. But who's shown that predicting tokens one at a time precludes thinking?
It's been shown that the models plan ahead, i.e. think more than just one token forward. [1]
How do you explain the world models that have been detected in LLMs? E.g. OthelloGPT [2] is just given sequences of games to train on, but it has been shown that the model learns to have an internal representation of the game. Same with ChessGPT [3].
For tasks like this (and with words), real thought is required to predict the next token well; e.g. if you don't understand chess to the level of Magnus Carlsen, how are you going to predict Magnus Carlsen's next move...
...You wouldn't be able to, even just from looking at his previous games; you'd have to actually understand chess, and think about what would be a good move, (and in his style).
[1] https://www.anthropic.com/research/tracing-thoughts-language...
[2] https://www.neelnanda.io/mechanistic-interpretability/othell...
[3] https://adamkarvonen.github.io/machine_learning/2024/01/03/c...
As a matter of fact I’m starting to have my doubts about the other people writing glowing, longwinded comments on this discussion.
“There are no shortcuts to knowledge, especially knowledge gained from personal experience. Following conventional wisdom and relying on shortcuts can be worse than knowing nothing at all.” ― Ben Horowitz
Almost. Similar. I still make things because sometimes what I find online (and what I can generate from AI) isn't "good enough" and I think I can do better. Even when there's something similar that I can reuse, I still make things to develop my skills for further occasions when there isn't.
For example, somebody always needs a slightly different JavaScript front-end or CRM, even though there must be hundreds (thousands? tens-of-thousands?) by now. There are many programming languages, UI libraries, operating systems, etc. and some have no real advantages, but many do and consequently have a small but dedicated user group. As a PhD student, I learn a lot about my field only to make a small contribution*, but chains of small contributions lead to breakthroughs.
The outlook on creative works is even more optimistic, because there will probably never be enough due to desensitization. People watch new movies and listen to new songs not because they're better but because they're different. AI is especially bad at creative writing and artwork, probably because it fundamentally generates "average"; when AI art is good, it's because the human author gave it a creative prompt, and when AI art is really good, it's because the human author manually edited it post-generation. (I also suspect that when AI gets creative, people will become even more creative to compensate, like how I suspect today we have more movies that defy tropes and more video games with unique mechanics; but there's probably a limit, because something can only be so creative before it's random and/or uninteresting.)
Maybe one day AI can automate production-quality software development, PhD-level research, and human-level creativity. But IME today's (at least publicly-facing) models really lack these abilities. I don't worry about when AI is powerful enough to produce high-quality outputs (without specific high-quality prompts), because assuming it doesn't lead to an apocalypse or dystopia, I believe the advantages are so great, the loss of human uniqueness won't matter anymore.
* Described in https://matt.might.net/articles/phd-school-in-pictures/
I bring up my studies because what the author is talking about strikes me as not having been ambitious enough in his thinking. If you prompt current LLMs with your idea and find the generated arguments and reasoning satisfactory, then you aren't really being rigorous or you're not having big enough ideas.
I say this confidently because my studies showed me not only the methods in finding and contrasting evidence around any given issue, but also how much more there is to learn about the universe. So, if you're being rigorous enough to look at implications of your theories, finding datapoints that speak to your conclusions and find that your question has been answered, then your idea is too small for what the state of knowledge is in 2025.
Granted, that happened before AI. The vast majority of text in my in-box, I never read. I developed heuristics for deciding what to ignore. "Stuff that looks like it was probably generated" will probably be a new heuristic. It's subjective for now. One clue is if it seems more literate than the person who wrote it.
Stuff that's written for school falls into that category. It existed for some reason other than being read, such as the hope that the activity of writing conferred some educational benefit. That was a heuristic too -- a rule of thumb for how to teach, that has been broken by AI.
Sure, AI can be used to teach a job skill, which is writing text that's not worth reading. Who wants to be the one who looks the kids in the eye and explain this to them?
On the other hand, I do use Copilot now, where I would have used Stackoverflow in the past.
What if you could turn your attention to much bigger things than you ever imagined before? What if you could use this new superpower to think more not less, to find ways to amplify your will and contribute to fields that were previously out of your reach?
That’s not what “no assistance” means.
I’m not nitpicking, however - I think this is an important point. The very concept of what “done completely by myself” means is shifting.
The LLMs we have today are vastly better than the ones we had before. Soon, they will be even better. The complaint he makes about the intellectual journey being missing might be alleviated by an AI as intellectual sparring partner.
I have a feeling this post basically just aliases to “they can think and act much faster than we can”. Of course it’s not as good, but 60-80% as good, 100x faster, might be net better.
Pursue that, since that's what LLMs haven't been helping you with. LLMs haven't really generated new knowledge, though there are hints of it--they have to be directed. There are two or three times when I felt the LLM output was really insightful without being directed.
--
At least for now, I find the stuff I have a lot of domain expertise in, the LLM's output just isn't quite up to snuff. I do a lot of work trying to get it to generate the right things with the right taste, and even using LLMs to generate prompts to feed into other LLMs to write code, and it's just not quite right. Their work just seems...junior.
But for the stuff that I don't really have expertise in, I'm less discerning of the exact output. Even if it is junior, I'm learning from the synthesis of the topic. Since it's usually a means to an end to support the work that I do have expertise in, I don't mind that I didn't do that work.
> I’ve been thinking about this damn essay for about a year, but I haven’t written it because Twitter is so much easier than writing, and I have been enormously tempted to just tweet it, so instead of not writing anything, I’m just going to write about what I would have written if Twitter didn’t destroy my desire to write by making things so easy to share.
and
> But here’s the worst thing about Twitter, and the thing that may have permanently destroyed my mind: I find myself walking down the street, and every fucking thing I think about, I also think, “How could I fit that into a tweet that lots of people would favorite or retweet?”
I would like access to whatever LLM the author is using, because I cannot relate to this at all. Nearly all LLM output I've ever generated has been average, middle-of-the-road predictable slop. Maybe back in the GPT-3 days before all LLMs were RLHF'd to death, they could sometimes come up with novel (to me) ideas, but nowadays often I don't even bother actually sending the prompt I've written, because I have a rough idea of what the output is going to be, and that's enough to hop to the next idea.
Besides that:
I have tried using LLMs to create cartoon pictures. The first impression is “wow”; but after a bunch of pictures you see the evidently repetitive “style”.
Using LLMs to write poetry is also quite cool at first, but after a few iterations you see the evidently repetitive “style”, which is bland and lacks depth and substance.
Using LLMs to render music is amazing at first, but after a while you can see the evidently repetitive style - for both rhymes and music.
Using NotebookLM to create podcasts at first feels amazing, about to open the gates of knowledge; but then you notice that the flow is very repetitive, and that the “hosts” don’t really show enough understanding to make it interesting. Interrupting them with questions somewhat dilutes this impression, though, so jury is out here.
Again, with generated texts, they take on a distinct metallic taste that is hard to ignore after a while.
The search function is okay, but with a little bit of a nudge one can influence the resulting answer by a lot, so I am wary of blindly taking the “advice”, and always recheck it, and try to run two competing versions where I influence the LLM into taking the competing viewpoints and learn from both.
Using the AI to generate code - simple things are ok, but for non-trivial items it introduces pretty subtle bugs, which require me to ensure I understand every line. This bit is the most fun - the bug quest is actually entertaining, as they are often the same bugs humans would make.
So, I don’t see the same picture, but something close to the opposite of what the author sees.
Having an easy outlet to bounce the quick ideas off and a source of relatively unbiased feedback brought me back to the fun of writing; so literally it’s the opposite effect compared to the article author…
I get more use out of them every single day and certainly with every model release (mostly for generating absolutely not trivial code) and it's not subtle.
But I am not saying LLMs are impotent - the other week Claude happily churned out ~3500 lines of C code for me that implemented a prototype capture facility for network packets, with flexible filters and saving the contents into pcapng files. I had to fix a couple of bugs that it made, but overall it was certainly at least a 5x-10x productivity improvement compared to me typing those lines of code by hand. I don’t dispute that it’s a pretty useful tool in coding, or as a thinking assistant (see the last paragraph of my comment).
What I challenged is the submissive self deprecating adoration across the entire spectrum.
That's not my experience though. I tried several models, but usually get a confident half-baked hallucination, and tweaking my prompt takes more time than finding the information myself.
My requests are typically programming tho.
In that respect I am not afraid of LLMs making me dumber as I would argue that google search has not made me dumber.
It used to be that if you spent your day doomscrolling instead of writing a blog post, that blog post wouldn't get written and you wouldn't get the riches and fame. But now, you can use AI to write your blog post / email / book. If you don't have an intrinsic motivation to work your brain, it's a lot easier to wing it with AI tools.
At the same time... gosh. I can't help but assume that the author is just depressed and that it has little to do with AI. The post basically says that AI made his life meaningless. But you don't have to use AI tools if they're harming you. And more broadly, life has no meaning beyond what we make of it... unless your life goal is to crank out text faster than an LLM, there's still plenty of stuff to focus on. If you genuinely think you can't possibly write anything new and interesting, then dunno, pick a workshop craft?
Anyway, the pendulum will swing the other way eventually, but it's a rough ride hanging on until then.
Glad to see stimulating discussion here falling on both sides.
> Developing a prompt is like scrolling Netflix, and reading the output is like watching a TV show.
That line really hits home for me.
In this case, abundance of cognitive ability.
We say that our food sucks. Yet, our elite athletes would crush Hercules or other God-like figures from our mythology. At the same time, we suffer from obesity.
The answer to the paradox comes from abundance. I don’t know why it happens, but I’ve noticed it on food, information retrieval, and now cognitive capacity.
Think about what happened to our capacity to search information on books. Librarians are masters of organizing chaos and filtering through information. But most of us don’t know a tiny fraction of their knowledge because we grew up with Google.
My hope is that, just like eating healthy is not as pleasurable as processed sugars but is necessary for a fit life, we will need to go through the process of thinking healthily even though it is not as pleasurable as tinkering with LLM prompts.
This doesn’t mean escapism however. Modern athletes take advantage of the industrial world too, but they’re smart about it. I don’t think thinking will be much different.
As kelseyfrog commented already, the key is to focus on the action, not the target. Lifting is not just about hitting a number or getting bigger muscles (though they are great extrinsic motivators); it's more of an action that we derive growth from. I have internalized the act of working out, so those targets are baked into the unconscious. I don't overthink when I'm lifting. My unconscious takes the lead, and I just follow. I enjoy seeing the results show up unexpectedly. It lets me grow without feeling the constant pressure of my conscious mind.
The lifting analogy can be applied to writing and other effortful pursuits. We write for the pleasure of reconciling internal conflicts and restoring order to our chaotic mind. Writing is the lifting of our mind. If we do it for comparison, then there's no point in lifting, or writing, or many other things we do after all our technological breakthroughs. Doing what we do is a means to an end, not the other way around.
So many breakthroughs come from people who work either in ignorance or defiance of existing established ideas. Almost by definition, in fact - to a large extent, everything obvious has already been thought. So to some extent, all the real progress happens in places that violate norms and pre-established logic.
So what's going to happen now if every idea has to run the gauntlet of a supremely intelligent but fully regressive AI? It feels like we could lose a tremendous amount of the potential for original thought from humanity. A good counter argument would be that this has already happened and we're still making progress. I just wonder however if it's a question of degree and that degree matters.
The AI will have been trained predominantly on the traditional approaches.
I feel AI will be fundamentally limited to regurgitating past ideas and intelligence.
It may at least use a breadth of knowledge to save some people time by helping them avoid repeating work already done.
I’d love to see an AI trained only on knowledge up to 1800 come up with a single invention of the past 200 years. (It won’t happen)
The process of forming expressions just is the process of conceptual and rational articulation (as per Brandom). Those who misunderstand this -- believing that concepts are ready made, then encoded and decoded from permutations of tokens, or, worse, who have no room to think of reasoning or conceptualization at all -- they will be automated away.
I don't mean that their jobs will be automated: I mean that they will cede sapience and resign to becoming robotic. A robot is just a "person whose work or activities are entirely mechanical" (https://www.etymonline.com/search?q=robot).
I'm afraid far too many are captive to the ideology of productionism (which is just a corollary of consumerism). Creative activity is not about content production. The aim of our creation is communication and mutual-transformation. Generation of digital artifacts may be useful for these purposes, but most uses seem to assume content production is the point, and that is a dark, sad, dead end.
Perhaps we will now suffer from AI-mposter syndrome as well. Ain't life wonderful?
But for me what has actually happened is almost the opposite: I seem to be experiencing more of a "tree of thoughts", with the ability to now perform rapid experimentation down a given branch, disposing of branches that don't bear fruit.
I feel more liberated to explore creative thoughts than ever. I spend less time on the toil needed both to bootstrap my thought process and to fend off cognitive dissonance when the feeling of sunk cost creeps in after going too deep down the wrong path.
I wonder if it's just perhaps a difference in how people explore and "complete" their thoughts? Or am I kidding myself and actually getting dumber and just fail to see it?
Today I'm working on doing the unthinkable in an AI-world: putting together a video course that teaches developers how to use Phlex components in Rails projects and selling it for a few hundred bucks.
One way of thinking about AI is that it puts so much new information in front of people that they're going to need help from people known to have experience to navigate it all and curate it. Maybe that will become more valuable?
Who knows. That's the worst part at this moment in time—nobody really knows the depths or limits of it all. We'll see breakthroughs in some areas, and others not.