Gotta disagree. Adding abstraction has yielded benefits but it certainly hasn't been consistently good. For example, see the modern web.
Back before I got a cell phone, I had many many phone numbers memorized. Once I got a cell phone with a contacts list, I just stopped. Now I have my parents and my wife's phone numbers memorized, and that's it.
URLs are much the same. On most websites, if I can see the domain is the one that I expect to be on, that's all I really care about. There are a few pages where I interact with the URL directly, but they're a minority.
- Natural language is not precise and has no spec, unlike programming languages.
- The translation from C (or another higher-level language) to assembly by a given compiler is deterministic in a way that the behavior of an LLM is not.
- On the flip side, the amount of control given to the tool versus what is specified by the programmer is wildly different between the two.
This is what you have seen with the rise of some of the worst technologies (JavaScript) being used in places where they shouldn't be, because some engineers want to keep using one language for everything.
Which is how you end up with basic desktop apps written in Electron taking up 500MB each and using 1.2GB of memory. That doesn't scale well on a typical user's 8GB laptop.
Not saying it should be written in assembly either (which also doesn't make sense), but today's excuse, that a SWE is only used to one language, is a really poor one.
Nothing wrong with using high-level compiled languages to write native desktop apps that compile to an executable.
NodeJS was the biggest mistake our industry made and I will die on this hill. It has taken the crown from null. People have been trying to claw it back with Typescript but the real solution was to drop JS altogether. JS becoming the language of the browser was an artifact of history, from when we didn't know where this internet thing was going. By the time NodeJS was invented we should have known better.
Just send the bullet points! Nobody wants the prose. It’s a business email, not art. This is a hill I will die on.
Prose has its uses when you want to transmit vibes/feelings/... For communicating actionable info between busy people, terse and to the point is better and more polite.
It’s bad enough when I have to read people waffling. Please don’t make me read LLM waffle.
I use LLMs to shorten my emails.
Using LLMs to do that shortening is potentially hindering the practice of writing concisely yourself.
The author's point, I think, is less about sending LLM waffle and more that people can't send anything distinguishable from LLM waffle anyway, due to a skills issue - because the LLM is so often used instead of building that skill.
I think the question is largely: can LLM results be used for human learning and human training, or are they purely a shortcut around skills - in which case those skills never form, or atrophy.
Imagine:
Write a coherent but succinct email to Ms Griffin, principal of the school where my 8yo son goes, explaining:
- Quizlet good for short term recollection
- no point memorising stuff if going to forget later
- better to start using anki, and only memorize stuff where worth remembering forever
But the recipient can just ask AI to convert the prose into bullet points.
Oral exams, face to face interviews, etc.
If you think of the LLM as a tireless coach and instructor and not a junior employee you’ll have a wonderful opportunity. LLMs have taught me so so much in the last 12 months. They have also removed so many roadblocks and got me to where I wanted to be quicker. Eg: I don’t particularly care about learning Make atm but I do want to compile something to WASM.
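To give a flavour of that WASM example, here's a minimal sketch of the kind of thing an LLM walked me through (assuming Emscripten is installed; the file name, function, and build line are just illustrative):

```c
/* add.c - illustrative only: a tiny function compiled to WASM via Emscripten. */
#include <emscripten/emscripten.h>

/* EMSCRIPTEN_KEEPALIVE keeps the symbol exported so it can be called from JS. */
EMSCRIPTEN_KEEPALIVE
int add(int a, int b) {
    return a + b;
}

/* Build (assumed toolchain): emcc add.c -o add.js
   This emits add.wasm plus an add.js glue file; in the browser you can then
   call Module._add(2, 3) once the runtime has initialized. */
```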
Memorization used to be a much more important skill than it is today. I am probably worse at rote memorization than I was when I was 13. Am I dumber? I would say no - I've just adapted to the fact that memorization is much less important in a world where I have access to basically the entire recorded history of human knowledge anywhere, anytime.
LLMs are just another very powerful technology that changes what subdomains of intelligence matter. When you have an LLM that can write code better than any human being (and since I know I will get testy HN replies about how LLMs can't do that, I will clarify here that I mean this is a thing that is not true today but will be in the future), the skill that matters shifts from writing code to defining the problem or designing the product.
> Looking at historical examples, successful cases of offloading occurred because the skills are either easily contained (navigation) or we still know how to perform the tasks manually but simply don’t need to anymore (calculator). The difference this time is that intelligence cannot easily be confined.
This is true, but I think it just means we'll see a more extreme kind of the same change we've seen as we've created powerful new tools in the past. I think it's helpful to think of the tool less as intelligence and more as the products of intelligence that are relevant, like generating high quality code or doing financial analysis. You'll have tools that can do those things extremely well, and it'll be up to you to make use of them rather than worrying about the loss of now-irrelevant skills.
Unless you are maintaining hardware or device drivers, which is done at any company that makes hardware: Apple, Google, Microsoft, Nvidia, SpaceX, Intel, AMD, ARM, Tesla, and the list goes on.
Building web apps that you can update willy-nilly while running on arbitrarily powerful and always-available hardware isn't the entirety of software engineering.
I'll be thinking of something in the car, like how do torque converters work? And then I start a live talk session with GPT and we start talking about it. Unlike a Wikipedia article that just straight tells you how it works, I can dive down into each detail that is confusing to me until I fully understand it. It's incredible, for the curious.
The vetting process is the same as if I were driving up I-5 with a gear head friend of mine having a conversation with them as we go.
You may think this, but the principles are extremely relevant even in much 'higher tiers' of programming, such as the front-end. Performance optimization is always relevant, and understanding the concepts you learn from learning assembly is crucial.
Such courses also generally encourage a depth of understanding of the whole computing stack. This is more and more relevant in the modern age, where we have technologies such as Web*Assembly*.
i now have qwen3. it runs locally on my machine. it can vibe code, it can oneshot, it can reason about complex non code problems and give me tons of background information on the entire world. i used to keep a copy of wikipedia offline, only some few gigabytes for the text version and even if that is too much there's reduced selection versions available in kiwix.
i am fine with llms taking over a lot of that tedious work, because i am confident i will always have this tool the same as all my other tools and data that i back up and keep for myself. that means it's ok for me to cheat a bit here and there and reach out to the higher-power models in the cloud the same way i would sometimes google an error message before even reading it if i am doing work to pay my bills. i have these rungs of the ladder to climb down from and feel like i am not falling into oblivion.
i think the phrase that sums this up best is work smarter not harder. i'm ok with accepting a smarter way of doing things, as long as i can always depend on it being there for me in an adverse situation.
Which is why I try to treat LLMs like a “calculator” to check my work.
I do things myself first, and only then ask an LLM to do the same.
That way, I’m still thinking critically, and as a result I actually get more benefit from the LLM since I can be more specific in having it help me fill in gaps.
The pocket calculator simplifies exactly one part of math and probably isn't even used that much by research mathematicians.
Chess programs are obviously forbidden in competitions. Should we forbid LLMs for programming? In line with the headline though, Magnus Carlsen said that he does not use chess programs that much and that they do make him dumber. His seconds of course use them in preparation for competitions.
LLMs destroy the essence of humanity: Free thought. We are fortunate that many people have come to the same conclusion and are fighting back.
I want to add another one to the author's list, which I think is even more relevant:
Writing.
Story goes, the Celtic druids relied on oral tradition and rejected writing, because they figured relying on writing was a crutch that made them weaker. They're gone now and, because of that choice, most of their culture with them.
Like Assembly to C to Python, as the author points out, LLMs allow us to move up the ladder of abstraction. There are obvious downsides to this, but I expect the evolution is inevitable.
The complaints about that evolution are also inevitable. We lose control, and expertise is less valued. We experts stand to lose a lot, especially if we cling to the old ways. We're in the midst of a sea change: we know what we're losing, but we're not sure what we're gaining.
Can you help me complete this analogy? By failing to rely on "writing" (read: LLMs), what will fail to be recorded and therefore remembered? Is the idea that if knowledge isn't encompassed by an LLM, in the future it will be inaccessible to anyone?
Because the journey is the destination. Using AI extensively so far appears to be a path that mostly allows for a regression to the mean. Caring about what you're doing, being intentional, and having presence of mind is what leads to interesting outcomes, even if every step along the way isn't engaging or yielding the same output as telling an LLM to do it.
I suppose if you don't care about what you're doing, go ahead and get an LLM to do it. But if it isn't worth doing yourself... Why are you doing it?
Really, do you need those Chrome extensions?
Alternatively, though... If you do, but they aren't mission critical, maybe it's fine to have an LLM puke it out.
For something that really matters to you though, I'd recommend being deep in it and taking whatever time it takes.
Also the tutor approach seems great to me. I don't feel like it's making me dumber. Using LLMs to produce code seemed to make me lazy and dumber though, so I've largely backed off. I'll use it to scaffold narrow implementations, but that's it.
It's interesting that he lists a number of historical precedents, like the invention of the calculator, or the mechanization of labor in the industrial revolution, and explains how they are different than AI. With the exception of chess, I think he's wrong about the effects of all of them.
For instance, people did lament the invention of calculators, saying it would harm kids' ability to do mental arithmetic. And it did. People also said that GPS navigation would hurt people's ability to use a map, or create one in their heads. And I'm old enough to say: it absolutely did. People (in aggregate) are worse at those skills now.
Fortunately for us, we replaced those skills with technology that allowed us to do something analogous to them, but faster and more easily.
The question is: what are the second- and third-order effects of losing those skills, or not learning them in the first place? Is it crazy to think that not memorizing things because we can access printed (and digitized) material might have larger, unforeseen consequences on our brains, or our societies? Could mechanizing menial labor have induced some change in how we think, or have any long term effects on our bodies?
I think we're seeing—and will continue to see—that there are knock-on effects to technology that we can't predict beforehand. We think we're making a simple exchange of an old, difficult skill for a new, easy one, but we're actually causing a more far-reaching cascade of changes that nobody can warn us of in advance.
And, to me, the even scarier thing is that those of us who don't live through those changes will have no basis for comparison to know whether the trade-off was worth it.
> Fortunately for us, we replaced those skills with technology that allowed us to do something analogous to them, but faster and more easily.
Don't kids still learn to do arithmetic in their head first? I haven't been in a school in decades but I remember doing it all sans calculator in elementary school. When you move on up to higher level stuff you end up using a calculator, but it's not like we skip that step entirely, do we?
Thing is, some people never were good at reading/using maps, much less creating them. Even with GPS at hand I still prefer seeing a map to know where I'm going. Anyway, retaining at least a modicum of "classic" skills is beneficial. After all, GPS isn't infallible, and as with all complex technologies, the possibility of failure warrants having alternatives.
I was recently on a cruise, and someone asked the ship's navigator whether officers were trained on using old instruments like the sextant. He replied that they were, and that they continue to drill on their use. Sure, the ship has up-to-date equipment, but knowing the "old ways" is potentially still relevant.
> "The question is: what are the second- and third-order effects of losing those skills, or not learning them in the first place?"
Naturally, old skills fade with the advent of newer methods. There's a shortage of farriers, people who shoe horses. Very few people are being apprenticed in the trade. (Though I'm told the work pays very well.) Owning horses is a niche but robust interest, so farriers have full workloads; the occupation is not disappearing.
The point is that, in real-world terms, losing skills diminishes the richness of human lives, because there's value in all constructive human endeavor. Similarly, an individual's life is enriched by acquiring fundamental skills even if they're seldom used. Of course we have to parcel out our time wisely, but sparing a bit of time to exercise basic capabilities is probably a good idea.
Having a dying phone and needing to get back home from a new place late at night is certainly possible, so I think it is worth having at least a basic knowledge of the major highways in your locality and which direction each one goes.
Digital applications often end up producing the opposite of what they were meant to do. That's a remarkably reliable predictor of outcomes.
What you described is more akin to laziness than loss of knowledge. It is also a trap. Your text almost reads as satire of the notion that AI could be harmful for learning, because we all know we can relearn those things. And we can, for now.
The danger lies several generations in, when people start to forget simple things. We don't know if it will come to that or not.
People were doing this with Stack Overflow / blogs / forums. It doesn't matter if you look up pre-existing solutions; it matters whether you understand them properly. If you do, that's fine; if you don't, you will produce poor code.
> Before the rise of LLMs, learning was a prerequisite for output. Learning by building projects has always been the best way to improve at coding, but now, you can build things without deeply understanding the implementation.
People completed whole projects all the time before LLMs without deeply understanding the code. I've had to work with large amounts of code where it was clear people never read the docs and never understood the libraries and frameworks they were working with. Many people seem to do "Cargo Cult Programming", where they just follow what someone else has done and adapt it just enough to solve their problem.
I've seen people take snippets from Stack Overflow wholesale and just fiddle until they work, without really understanding them.
LLMs are just a continuation of this pattern. Many people just want to do their hours and get paid and are not interested and/or capable of actually understanding fully what they are working on.
> GPS. It’s so reliable that I’m fine being unable to navigate. I’ve never gotten in a situation where I wish I had learned to navigate without Google Maps beforehand. But this is also a narrow skill that isn’t foundational to other higher-order ones. Maybe software engineering will be something as obsolete as navigating where you can wholly offload it? However, that seems unlikely given the difference in complexity of the two tasks.
I think the author will learn the hard way. You shouldn't rely on Google Maps. Literally less than two weeks ago, Google Maps was non-functional for me (I ran out of data), and I ended up using road signs and driving towards town names I recognised to navigate back. Learning basic navigational methods is a good idea.
LLMs can accelerate learning. Everyone is optimistic about the idea of personalized tutors improving education. You can already use them like that while working on real-world projects.
Think I'd rather just have the bullet points in the first place, to be honest, has to be easier and quicker to read than an LLM soup of filler paragraphs.