Same thing's happening now with code. We waste so much time dealing with syntax, fixing bugs, naming variables, setting up configs, etc., and not enough time thinking about the real problem we're trying to solve.
From Assembly to English. What do you reckon?
I think there is an important difference between LLM-interpreted English, and compiler-emitted Assembly, which is determinism.
The reason we're still going from human prompt to code to execution, rather than just prompt to execution, is that the code is the point at which determinism can be introduced. And I suspect it will always be useful to have this determinism capability. We certainly spend a lot of time debugging and fixing bugs, but we'd spend even more time on those activities if we couldn't encode the solutions to those bugs in a deterministic language.
Now, I won't be at all surprised if this determinism layer is reimplemented in totally different languages, that maybe are not even recognizable as "computer language". But I think we will always need some way to say "do exactly this thing" and the current computer languages remain much better for this than the current techniques to prompt AI models.
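To make that concrete, a minimal sketch (the task and function name are invented for illustration):

```python
from datetime import datetime

def parse_order_date(s: str) -> datetime:
    # The English spec said "parse the date". This code pins down exactly
    # one interpretation (ISO format, error on anything else) and returns
    # the same result on every run, forever.
    return datetime.strptime(s, "%Y-%m-%d")

# An LLM asked to "parse the date" might choose ISO, US, or EU ordering,
# and might choose differently tomorrow. The code cannot.
print(parse_order_date("2024-03-05"))  # 2024-03-05 00:00:00
```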
Originally I thought LLMs would add a new abstraction layer, like C++ -> PHP, but now I think we will begin replacing swaths of "logically knowable" processes one by one, with dynamic and robust interfaces. In other words, LLMs, if working under the right restrictions, will add a new layer of libraries.
A library for auth, a library for form inputs, etc. Extensible in every way, with easy translation between languages. And you can always dig into the code of a library, but mostly they just work as-is. LLMs thrive with structure, so I think the real next wave will be adding various structures on top of general LLMs to achieve this.
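A hypothetical sketch of what one such "library" might look like; the schema and the `call_llm` stub are invented for illustration:

```python
import json

# A fixed schema gives the model the structure it thrives on.
FORM_SCHEMA = {"email": "string containing '@'", "age": "integer 0-150"}

def call_llm(prompt: str) -> str:
    # Stand-in for whatever model API you use; returns a JSON string.
    raise NotImplementedError

def parse_form_input(raw_text: str) -> dict:
    """Extract structured form fields from free-form user text."""
    reply = call_llm(
        f"Extract fields {list(FORM_SCHEMA)} from {raw_text!r}. "
        f"Answer only with JSON matching this schema: {FORM_SCHEMA}"
    )
    data = json.loads(reply)
    # A deterministic shell validates the stochastic core's output.
    if "@" not in str(data.get("email", "")):
        raise ValueError("model returned an invalid email")
    return data
```

The point is the shape: a mostly-opaque component with a hard contract at the edge, like any other library.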
I'm not even sure I disagree with your comment... I agree that I think LLMs will "add a new layer of libraries" ... but I think it seems fairly likely that they'll do that by generating a bunch of computer code?
I think that in programming we will still have to understand the builder's execution, which should remain deterministic, hopefully not at the level of assembly.
I think it is difficult to know in advance when the LLM will do a reasonable or good job and when it won't. But I am slowly learning when and how to use the tools while still enjoying using them.
English is just too poorly-specified. Programs need to be able to know exactly what they're supposed to do next, what their output is supposed to be, etc. Even humans need to ask each other for clarification and such all the time.
If you want to use English to specify a program, by the time you've adjusted it to be clear and specific enough to actually be able to do that...it turns out you've made a programming language.
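You can watch that happen in miniature; the clarifications below are hypothetical, but the drift is the point:

```python
from dataclasses import dataclass

@dataclass
class User:
    first_name: str
    last_name: str

users = [User("Ada", "Lovelace"), User("alan", "turing")]

# "Sort the users by name."      ...which name?
# "By last name."                ...case-sensitive?
# "Case-insensitive."            ...and ties?
# "Break ties by first name."
# Once the English is specific enough to execute, it *is* a program:
users.sort(key=lambda u: (u.last_name.lower(), u.first_name.lower()))
print([u.last_name for u in users])  # ['Lovelace', 'turing']
```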
In theory, one universal language would solve that, for both humans and machines.
Maybe the best solution isn't one language (English, Spanish, Golang, or Python), but one interface that understands all of them. And that's what LLMs might become.
I definitely don't do that. It's a very small part of my job. And AFAIK, LLMs cannot generate assembly language yet, and CPUs don't understand English.
Were you a published author in the 80s?
Because I highly doubt this was how writers in the 80s thought of their job.
This is even enhanced when you create a superficial barrier such as writing in all caps.
Shakespeare wrote under pressure because he had deadlines. His creativity was shaped by the need to deliver.
Einstein, on the other hand, had no real deadlines. His creativity was shaped by the need to understand. He had time to sit with ideas, rethink assumptions, and see patterns no one else saw.
Shakespeare would say: "Creativity is all about time. And writing by hand takes time."
And Einstein would reply: "Time does not exist, my friend. So take your time and write it again."
We will be fine.
I agree, but it feels like we need a new type of L_X_M. Like an LBM (Large Behavior Model), which is trained on millions of different actions, user flows, displays, etc.
Converting token weights into text-based code designed to ease the cognitive load on humans seems wildly inefficient compared to converting tokens directly into UI actions and behaviors.
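As a hypothetical sketch (the action vocabulary is invented), the output of such a model might be validated behaviors rather than source code:

```python
ALLOWED_ACTIONS = {"click", "type", "scroll", "wait"}

def validate_action(action: dict) -> dict:
    # The deterministic check wrapped around the model: nothing outside
    # the fixed action vocabulary ever reaches the screen.
    if action.get("kind") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action}")
    return action

# e.g. a model-produced flow for "log in and open settings":
flow = [
    {"kind": "click", "target": "#login"},
    {"kind": "type", "target": "#email", "text": "user@example.com"},
    {"kind": "click", "target": "#settings"},
]
steps = [validate_action(a) for a in flow]
print(f"{len(steps)} actions ready to replay")
```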
But I think the domain of an AI-first PL would, or could, be much smaller. So the language is "lower-level" than English but "higher-level" than any existing PL, including AppleScript etc., because it would not have to follow the same kinds of strict parser rules.
With a smaller domain, I think the necessary verbosity of an AI-first PL could be acceptable and less ambiguous than law.
In my opinion, it's to communicate intent, so that intent can be turned into action. And guess what? LLMs are incredibly good at picking up intent through pattern matching.
So, if the goal of a language is to express intent, and LLMs often get our intent faster than a software developer, then why is English considered worse than Python? For an LLM, it's the same: just patterns.
However, when I don't have deadlines, like in my GitHub creations, I'm clearly a journey programmer; I usually don't get anything fully finished. In these projects, the tech I use is something I usually wouldn't pick if I were working for a client.
I had to write this template loader for my space sim... reference resolution, type mapping, and YAML parsing. This isn't the code I wanted to write. The code I wanted to write was behavior trees for AI traders, I'm playing with an idea where successful traders can combine behavior trees yada yada, fun side project.
But before I could touch any of that, I had to solve this reference resolution problem. I had to figure out how to handle cross-references between YAML files, map string types to Python classes, recursively fix nested references. Is this "journey programming"? Sure, technically. Did I learn something? I guess. But what I really learned is that I'd already solved variations of this problem a dozen times before.
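For the curious, the shape of that problem, as a minimal sketch; the `$ref` convention, file names, and `TYPE_MAP` are invented for illustration:

```python
import yaml  # PyYAML

# The "map string types to Python classes" half (names invented):
TYPE_MAP = {"int": int, "float": float, "str": str}
assert TYPE_MAP["int"]("100") == 100  # e.g. coercing a YAML string field

def resolve_refs(node, documents):
    """Recursively replace {"$ref": "file#key"} nodes with the value found
    in the named document, fixing nested references along the way."""
    if isinstance(node, dict):
        if "$ref" in node:
            fname, key = node["$ref"].split("#")
            return resolve_refs(documents[fname][key], documents)
        return {k: resolve_refs(v, documents) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(v, documents) for v in node]
    return node

docs = {
    "ships.yaml": yaml.safe_load("freighter: {cargo: 100}"),
    "traders.yaml": yaml.safe_load("bob: {ship: {$ref: 'ships.yaml#freighter'}}"),
}
print(resolve_refs(docs["traders.yaml"], docs))
# {'bob': {'ship': {'cargo': 100}}}
```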
This is exactly where I'd use Claude Code or Aider + plan.md now - not because I'm lazy or don't care about the journey, but because THIS isn't my journey. My journey is watching AI merchants discover trade routes, seeing factions evolve new strategies, debugging why the economy collapsed when I introduced a new resource.
OP treats all implementation details as equally valuable parts of "the journey," but that's like saying a novelist should grind their own ink. Maybe some writers find that meaningful. Most just want to write. I don't want to be a "destination programmer" - I want to be on a different journey than the one through template parsing hell.
Do they? That wasn’t my take away from the article.
My impression was that the author missed the enjoyment of problem solving because they overused AI. Not that they think all problems are equal.
For what it’s worth though, I do agree with your more general point about AI use. And in fact that’s how I’ve used AI code generation too. “Solve the tedious problem quickly so you can focus on the interesting one”.
I do think LLMs can be good for certain boilerplate code whilst still allowing you to enjoy the problems you care about, and as far as my binary definitions this is more of a grey area.
I guess for me, this has introduced a slippery slope: if the LLM can also code the "fun" stuff, I'll be more inclined to use it, which defeats the whole purpose for me. Perhaps identifying which type of project I'm working on can help me avoid LLMs there and enjoy programming more again!
If LLMs get better, will you have to decide whether you actually care about writing decision trees, or whether you just want to, more generally, curate procedural interactions (or something)?
My point is: if these next few years every project becomes an exercise in soul-searching for which parts of my work actually interest me, it may be less work not to use these tools, or alternatively, to find something fulfilling that doesn't involve making something.
> I think the cliche saying that the "journey is better than the destination" serves as a good framework to model this issue. Fundamentally, programmers (or individual programming projects) can be put into two categories: destination programmers and journey programmers.
It's allowed me to tackle other parts of the knowledge stack that I would otherwise have no time for. For example, learning more about product management, marketing, and doing deeper research into business ideas. The programming has now gone strictly from coding to automating the flows related to these other jobs. In that sense, I'm still "programming"; it just looks different and doesn't always involve an IDE. The bonus is that my leverage has dramatically increased.
Human programming is the old, and new, programming substrate - and the literal substrate for what AI tools do. They're trained on it.
As software engineers, we work with "pure thought-stuff". We build puzzle-like objects. It's satisfying to make useful tools. It's an ever-renewing, stimulating task.
I always write "ship code," even for "farting around" projects. I feel that it helps me to be a better programmer, all around, and keeps me firmly focused on practicum. I like people to use my stuff, and I don't want them using shite.
I have found LLMs have actually increased my "journey." When I want to learn a new concept, the "proper" way to write "idiomatic" code, or solve a vexing problem, I fire up Perplexity or ChatGPT, and ask them questions that would have most folks around here, rolling in the aisles, streaming tears of mirth.
> The only stupid question is the one you don't ask.
That was on a former teacher's wall. Not sure if it was my art teacher, or a martial arts instructor.
At the same time, at least at the moment, this feels like just another tool. I'm old, started programming in the early 80s. Basic->Asm->C->C++ (perl-python-js-ts-go). Throughout my life things have gotten easier. Drawing an image on my Atari 800 or Apple II was way harder than it is on any PC today in JavaScript with the Canvas API or some library like three.js. Reading files, serialization, data structures, I used to have to write all that code by hand. I learned how to parse files, how to deal with endian issues, alignment issues, write portable code, etc., but today I can play a video in 3 lines of JavaScript. I'm much happier just writing those 3 lines than writing video encoders/decoders by hand (did that in the 90s), and I'm much happier writing those 3 lines than integrating ffmpeg or some other video library into C++ or Rust or whatever. Similarly in 3D, I'm much happier using three.js or Unreal or Unity than writing yet another engine and 100+ tools.
ATM LLMs feel like just another step. If I'm making a game, I don't want the AI to design the game, but I do want the AI to deal with all the more tedious parts. The problem has been solved before, I don't need to solve it again. I just want to use the existing solution and get to the unique parts that make whatever I'm making special.
I guess he must have started programming a short time ago, if he can say that. LLM programming tools have just now been introduced.
Here's a typical scenario: you're a well-respected senior engineer at your company. Say you're an E8 at Meta. You spend your days in meetings, write great documentation, and read more papers than most, which helps you solve high-level architectural problems. You’ve built deep expertise in your domain and earned a strong reputation, both internally and in the industry.
But deep down, you know you’re rusty with tools. You haven’t written production code in years. You’re solid in math and machine learning theory from all the reading, but you’ve never actually built and shipped production ML models. You're fluent in linear algebra and what not, but you don't know shit about writing CUDA libraries, let alone optimizing them. When you check the job specs at companies like OpenAI, you see they’re using Rust. You might be able to write a doubly linked list in Rust, but let’s be honest—you’d struggle to write a basic web service in it.
So switching domains starts to feel daunting. At the very least, you'd lose the edge your influence gives you. Even if you’re willing to take a pay cut, the hiring company might not even want you. Your experience may help a little, but not enough. You’d have to give up your comfortable zone of leading through influence and dive back into the mess of writing code, fixing elusive bugs, and building things from scratch—stuff you used to love.
But now? You’ve got a family of five. You get distracted more often. Leadership fits your life better—you can rely more on experience, communication, intuition. Still, a part of you misses being a journeyman.
So how does someone actually make that move? Do you just bite the bullet and try? Stick to adjacent areas to play it safe? Join a company doing the kind of work you want, but stay in your current domain at first—say, a backend engineer goes to OpenAI but still works on infra? Or is there another path?
So… do?
So no, I don’t miss the days of dealing with some douchebag on Stack Overflow or some neckbeard on a random subreddit telling me to pick a different career. They can now die in peace with their “hard-earned KnOwleDgE.”
Fiddling with directory structures or bikeshedding over linter configs never felt artistic to me. It just felt like getting overly poetic about doing bullshit. LLMs and agents are amazing at this grunt work.
I get that some folks see the hand of God in their dotfiles or get high off Lisp’s homoiconicity, but most folks don’t relate to that. I just wanna do my build stuff and have fun with the results—not get romantic about the grind. I’m glad LLMs made all my man page knowledge useless if it means I can do more in less time and spend that time on things I actually enjoy.
So I look at tools like LLMs as just the latest incarnation of tools to reduce the number of hours the human has to spend to get to the end.
When I very first started programming, a very long time ago, the programmer actually had to consider where in memory, like at what physical address, things were. Then tools came along and it’s not a thing. You were not a programmer unless you knew all about sorting and the many algorithms and tradeoffs involved. Now people call sort() and it’s fine. Now we have LLMs. For some things people think they’re great. Me personally I have not found utility in them yet (mostly because I don’t work on web, front end, or in python) but I can see the potential. But dynamic loaders and sort() didn’t replace me, I’m sure LLMs won’t either, and I’ll be grateful if it helps me get to the end with less time invested.
LLMs to me are primarily:
1. A way to get over writers block; they can quickly get the first draft down, which I can then iterate on; I’m one of those people who generally first implement something in a dirty way just to get it working, and then do a couple more iterations / rewrites on it, so this suits my workflow perfectly. Same for writing a first draft of a design doc based on my brain dump.
2. A faster keyboard.
Generally, both of these mean that energetically, coding is quite a bit less mentally tiring for me, and I can spend more energy on the important/hard things.
I can say that in the last 2 years ChatGPT/Claude have added more code to my projects than I have, and I've been programming for 25 years (counting the rejected tokens as well).
When I use copilot/cursor it is so violent, it interrupts my thoughts, it makes me a computer that evaluates its code instead of thinking about how my code is going to interact with the rest of the system, how it evolves and how it is going to fail and so on.
Accept/Reject/Accept/Reject... and at the end of the day, I look back, and there is nothing.
One day, it lagged a bit, and code did not come out, and I swear I didn't know what to type, as if it were not my code. The next day I took time off work to just code without it. During that time I used it to write an ST7796S SPI driver and it did an amazing job: I just gave it 300 pages of docs and told it what API to make, and it produced an amazing driver. I read it, and I used it; it easily saved me half a day of work.
Life is what overcomes itself, as the poet said, I am not sure "destination programmers" exist. Or even if they do, I don't know what their "destination" means. If you want to get better, reflect on what you do and how you do it, and you will get better.
I wrote https://punkx.org/jackdoe/misery.html recently out of frustration, maybe you will resonate with it.
PS: there is no way we will be able to read LLMs' code in the near future; they will easily generate millions of lines per day for you, so we will need to find an interface to debug it, a bit like Geordi from Star Trek. LLMs will be our lens into complexity.
Stage 0: The trade is a craft. There are no processes, only craftsmen, and the industry is essentially a fabric of enthusiasts and the surplus value they discover for the world. But every new person that enters the scene climbs a massive hill of new context and uncharted paths.
Stage 1: Business in this trade booms. There is too much value being created, and standardization is needed to enforce efficiency. Education and training are structurally reworked to support a mass influx of labor and requirements. Craft still exists, and is often seen as the paragon for novices to aspire to, but most novices are not craftsmen and the craft has diminishing market value compared to results.
Stage 2: The market needs volume, and requirements are known in advance and easily understood. Templates, patterns, and processes are more valuable in the market than labor. Labor is cheap and global. Automation is a key driver of future returns. Craftspeople bemoan the state of things, since the industry has lost its beating heart. However, the industry is far more productive overall and craft is slow.
Stage 3: Process is so entrenched that capital is now the only constraint. Those who can pay to deploy mountains of automated systems win the market since craft is so expensive that one can only sell craft to a market who wants it as a luxury, for ethics, or for aesthetics. A new kind of “craft” emerges that merges the raw industrial output with a kind of humane touch. Organic forms and nostalgia grip the market from time to time and old ideas and tropes are resurrected as memes, with short market lifecycles. The overwhelming existence of process and structure causes new inefficiencies to appear.
Stage 4: The market is lethargic, old, and resistant to innovation. High quality labor does not appear, as more craft-driven markets now exist elsewhere in cool, disruptive, untapped domains. Capital flight occurs as it's clear that the market can't sustain new ideas. Processes are worn, despised, and all the key insights and innovations are so old that nobody knows how to build upon them. Experts from yesteryear run boutique consultancies maintaining these dinosaur systems, but otherwise there's no real labor market for these things. Governments using them are now at risk and legal concerns grip the market.
Note that this is not something that applies broadly, e.g. “the Oil industry”, but to specific systems and techniques within broad industries, like “Shale production”, which embodies a mixture of labor power and specialized knowledge. Broadly speaking, categories of industries evolve in tandem with ideas so “petroleum industry” today means something different from “petroleum industry” in 1900
The group who struggle through texts by themselves without relying on any shortcuts -- they just sit with the text -- probably won't become top-shelf philologists, but when you give them a sentence they haven't seen before from an author they've read, the chances are very good that they'll be able to make sense of it without assistance. These students learn, in other words, how to read ancient languages.
The group who rely on translations learn to do precisely that: rely on a translation. If you give them a text by an author they've 'read' before and deny them use of a side-by-side translation, they almost never have any clue how to proceed, even at the level of rudimentary parsing. Is that word the second-person-singular aorist imperative middle or is it the aorist infinitive active? They probably won't even know how to identify the difference -- or that there is one.
Our brains are built for energy conservation. They do what, and only what, we ask of them. Learning languages is hard. Reading a translation is easy. Given the choice between the harder skill and the easier, the brain will always learn the easier. The only way to learn the harder one is to remove the option: sit with the text; struggle.
So far I've been able to avoid LLMs and AI. I've written in other comments on HN about this. I don't want to talk to an anthropomorphic chat UI, which I call "meeting-based programming." I want to work with code. I want to become a more skillful SWE and better at working with programming languages, software, and systems. LLMs won't help me do this. All the time they save me -- all the time they steal from reading code, thinking about it, and consulting documentation -- is time they've stolen from the work I actually want to do. They'll make me worse at what I do and deprive me of the joy I find in it.
I've argued with teammates about this. They don't want to do the boring stuff. They say AI will do it for them. To me that's a Faustian bargain. Every time someone hands off the boring stuff to the machine, I'd wager they're weakening and giving up the parts of themselves that they'll need to call upon when they find something 'interesting' to work on (edit: and I'd wager that what they consider interesting will be debased over time as well, as programming effort itself becomes foreign and a less common practice.)
Using a hoe is making you weaker than if you just used your bare hands. Using a calculator is making your brain lose skill in doing complicated arithmetic in your head.
Most people have never built a fire completely from scratch; they're surely lacking certain skills, but do (or should) they care?
But as with everything else, you can take technology to do more, things that might be impossible for you to do without it, and that's ok.
I took a statistics course in high school where we learned how to do everything on a calculator. I was terrible and didn’t understand statistics at the end of it. My teacher gave me a gentleman’s C. I decided to retake the course in college where my teacher taught us how to calculate the formulas by hand. After learning them by hand, I applied everything on exams with my calculator. I finished the class with a 100/100, and my teacher said there was no need for me to take the final exam. It was clear I understood the concept.
What changed between the two classes? Well, I actually learned statistics rather than how to let a tool do the work for me. Once I learned the concept, then I was able to use the tool in a beneficial way.
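That order, formula first and tool second, is easy to reproduce; a sketch with a sample standard deviation:

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

# By hand: s = sqrt( sum((x - mean)^2) / (n - 1) )
mean = sum(data) / len(data)
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))

# The tool, used once you know what it actually computes:
assert math.isclose(s, statistics.stdev(data))
print(round(s, 3))  # 2.138
```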
It's worse than that, people who rely too much on the AI never learn how to tell when it is wrong.
This is different from things like "nobody complains about using a calculator".
A calculator doesn't lie; LLMs on the other hand lie all the time.
(And, to be fair, even the calculator statement isn't completely true. The reason why the HP 12C is so popular is that calculators did lie about some financial calculations (numerical inaccuracy). It was deemed too hard for business majors to figure out when and why, so they just converged on a known standard.)
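The same class of error is easy to demonstrate in any binary floating-point system; a quick sketch:

```python
from decimal import Decimal

# Binary floats "lie" about simple money arithmetic:
print(0.1 + 0.2)                 # 0.30000000000000004
print(sum([0.10] * 10) == 1.00)  # False

# The same sum in decimal arithmetic, which avoids this class of error:
print(sum(Decimal("0.10") for _ in range(10)) == Decimal("1.00"))  # True
```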
Today, we are shoveling the old way into LLMs.
In the future, programming will be optimized for LLMs and not humans.
Do you understand the assembly language that the compiler writes today? Do you inspect it? Do you analyse it and not trust it? No, you ignore it.
That’s the future.
Languages written purely for LLMs have not yet been invented but they’re coming for sure.
"Time has passed", indeed. Like 9 months. This just reminded me in a quaint way how we've gotten used to such rapid progress.
I have very similar thoughts after working with Cursor for a month and reviewing a lot of “vibe” code. I see the value of LLMs, but I also see what they don’t deliver.
At the same time, I am fully aware of different skill levels, backgrounds and approaches to work in the industry.
I expect two trends: salaries will become much higher, as individual leverage continues to grow; at the same time, demand for relatively low-skill work will go to zero.
Long before LLMs came onto the scene, I was telling people (like friends and family trying to understand what I do at work) that the actual coding part of the job is the least valuable, but that you still do have to be able to write the code once you've done the more valuable work of figuring out what to write.
But LLMs have made that distinction far more clear than I ever imagined. And I have found that for all my previous talk about it, I clearly still felt that the "writing the code" part was an important portion of my contribution, and have found it jarring to rebalance my conception of where I can contribute value.
I've found this to be true of all generative AI to date. I have a clearer sense of where most of the value lies in most writing, imagery, code, and music.
I have a better sense of what having good taste (or any taste at all) means, and what the value of even seemingly trivial human decision-making is.