And that's how I believe software engineering will end up. Hand-crafted code will still be a thing, written by very skilled developers...but it will be a small niche market, where there's little (to no) economic incentive to keep doing it the craftsmanship way.
It is a brave new world. We really don't know if future talent will learn the craft like old talent did.
Clearly you don't value the process of coding if you think it is analogous to a carpenter manually carving the details of a design that's above the process of building it. It is not a good analogy, at all.
But many of those have long since been automated away, because customers are more than happy to purchase cheaper products, made almost entirely by machines.
"AI-free" development will be a tiny niche in the coming years and decades. And those developers will not get paid any extra for doing it the old way. Just like artisanal workers today.
Cabinets are finished.
Software is codified rules and complexity, which is entirely arbitrary, and builds off of itself in an infinite number of ways. That makes it much more difficult to turn into factory-output cabinetry.
I think more people should read "No Silver Bullet" because I hear this argument a lot and I'm not sure it holds. There _are_ niches in software that are artisanal craft, that have been majorly replaced (like custom website designers and stock WordPress templates), but the vast majority of the industry relies on cases where turning software into templates isn't possible, or isn't as efficient, or conflicts with business logic.
While I imagine “make an app that does X” won’t be as useful as “if … else” there is a middle ground where you’re relinquishing much of the control you currently are trying to retain.
For the AI to avoid this, I'd imagine it would need to be directed not to assume anything, and instead ask for clarification on each and every thing, until there is no more ambiguity about what is required. This would be a very long and tedious back and forth, where someone will want to delegate the task to someone else, and at that point the person might as well write their own logic in certain areas. I've found myself effectively giving pseudocode to the LLM to try to properly explain the logic that is needed.
I would argue that as an industry we love high-level programming languages because they allow you to understand what you are writing far more easily than looking at assembly code. Excellent for the vast majority of needs.
But then people go right on and build complicated frameworks and libraries with those languages, and very quickly the complexity (albeit presented much better for reading) comes back into a project.
There will be niches in research, high performance computing & graphics, security, etc. But we’re in the last generation or two that’s going to hand write their own CRUD apps. That’s the livelihood of a lot of software developers around the world.
To me it's similar to saying that there's no need for lawmakers after we get the basics covered. Clearly it's absurd, because humans will always want to expand on (or subtract from) what's already there.
Right now I'm writing a web app that basically manages data in a db, but guess the kinds of things I have to deal with. Here are a few (there are many, many more), in no particular order:
- Caching and load balancing infrastructure.
- Crafting an ORM that handles caching, transactions, and, well, CRUD, but in a consistent, well-typed, and IO-provider-agnostic manner (IO providers could be: DBs like Postgres, S3-compatible object stores, Redis, Sqlite, Localstorage, Filesystems, etc. Yes, I need all those).
- Concurrent user access in a manner that is performant and avoids conflicts.
- Db performance for huge datasets (so consideration of indexes, execution plans, performant queries, performant app architecture, etc.)
- Defining fluent types for the abstract API methods that form the core of the system
- Defining types to provide strong typing for routes that fulfill each abstract API method.
- Defining types to provide strongly-typed client wrappers for each abstract API method
- How to choose the best option for application- and API security (Cookies?, JWT?, API keys? Oauth?)
- Choosing the best UI framework for the requirements of the app. I actually had to write a custom react-like library for this.
- Media storage strategy (what variants of each media item to generate on the server, how to generate good storage keys, etc.)
- Writing tooling scripts that are sophisticated enough to help me type-check, build, test, and deploy the app just the way I want
- Figuring out effective UI designs for CRUD pages, with sorting, filtering, paging, etc. built in. This is far from simple. For just one example, naive paging is not performant; I need to use keyset pagination (see the sketch after this list).
- Doing all the above with robust, maintainable, and performant code
- Writing effective specs and docs for all my decisions and designs for the above
- And many many more! I've been working on this "CRUD" app for years as a greenfield project that will be the flagship product of my startup.
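Since keyset pagination comes up above, here's a minimal sketch of the idea, assuming a Postgres-backed `items` table with an index on `(created_at, id)` and the `pg` client; the names are hypothetical, not the actual app's code:

```typescript
import { Pool } from "pg";

const pool = new Pool();

// OFFSET paging scans and throws away every skipped row, so deep pages get slow.
// Keyset (seek) paging instead resumes from the last row the client already saw.
async function nextPage(lastCreatedAt: string, lastId: number, pageSize = 50) {
  const { rows } = await pool.query(
    `SELECT id, created_at, title
       FROM items
      WHERE (created_at, id) > ($1, $2)   -- row-value comparison uses the composite index
      ORDER BY created_at, id
      LIMIT $3`,
    [lastCreatedAt, lastId, pageSize]
  );
  return rows; // the caller feeds the last row's (created_at, id) back in for the next page
}
```

The trade-off is that you lose "jump to page N", which is usually fine for infinite-scroll style CRUD listings.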
I think software will remain much more artisan because, in some sense, software is a more crystallized form of thinking, yet still incredibly fungible and nebulous.
Minimalist design isn’t the result of minimal effort. It actually takes quite a lot of time and skill to design a cabinet that can flat pack AND be assembled easily AND fit cost constraints AND fit manufacturing and shipping and storage constraints AND have mass market appeal.
IKEA stuff exists because of hundreds or thousands of people with expertise in their roles, not in spite of them.
But by a similar argument, most anything with healthy competition and a discerning market can only lag that standard by some amount.
It's easy to conflate a once-in-a-generation orgy of monopolistic consolidation with the emergence of useful coding LLMs, but monopolies always get flabby and slack, and eventually the host organism evicts the parasite. It's when, not if.
And we'll still have the neat tools.
Your analogy doesn't work at all.
(I know this because my previous landlady's husband was a CNC machine operator.)
What takes more time to master: learning how to be a competent woodworker, so that you can build a piece of complex furniture from scratch, by hand, or learning how to punch in instructions, following templates?
The first can take years to master, the latter is done in mere months. You don't need to be a master woodworker to operate a CNC machine.
With LLMs, we're now at the point where non-programmers can get working CRUD apps. That was nearly impossible only 2 years ago. LLMs are encroaching on the bread and butter of thousands of webdevs out in the world.
You can still be a rock star dev who writes world-class code, but you'd better be working for employers that value it, because AI will force prices down. When customers discover that a $20 subscription or service can create things for them that would previously have cost thousands of dollars in dev money, there's no way back. Once that cat is out of the bag, you won't get it back in.
And sure, you don't need a master woodworker to operate a CNC machine, but that person is still highly skilled; it's just that the set of skills is different: they need to be able to program on some level, and they also need to have an understanding of the materials they're working with. Go away for a bit and do some research on what somebody who operates a CNC machine actually does.
it got to a point where people would say they needed a website and i'd say to them that a facebook page would meet their needs better or eventually that a shopify site met their needs better. my business obviously folded.
The people who understand nothing about business, yet whom you can't talk to because they think they're gifted for being able to write instructions to a computer.
The people who spin out new frameworks every day and make a clusterf*ck of hyped and over-engineered frameworks.
The people who took a few courses and went into programming for money.
I went into software because I enjoyed creating (coding was a means to an end), and I always thought coding was the easiest part of software development. But when I got into corporate work, I found people who preach code like religion and don't even care about what is being produced, spending thousands of hours debating syntax. What a waste of life. I knew they were stupid, and AI made sure they knew as well.
My sense is we really have to raise everyone's critical thinking abilities in order to spot bullshit.
I'm feeling basically mostly fulfilled in life, and don't feel my life is being wasted.
I didn't understand the thing about "AI made sure they knew as well", or maybe I'm not actually who you're describing.
But I definitely get into language, syntax, frameworks, parsing, and blah blah blah.
Plenty of people still play chess. Plenty of people still run. Machine performance has surpassed humans long ago in both disciplines. Are those people stupid also?
I don't mean this offensively; it is what it is. If you are aware of who you are, then good, but the issue is that a lot of those people are not even aware of their strengths or their limitations. Just like humans got humbled at chess, AI is humbling those coders.
But again, for me, the appeal of software was creativity: our ability to shape ideas and experiences.
I never cared about proving my intelligence or arguing about syntax. If the code is maintainable, readable, and works, all good. But in corporate, I was debating over PRs, mostly subjective opinions, a form of intellectual game.
And the issue is that the discussion is all about convention and trivialities.
The way they see it, AI can easily shit out a hundred product concepts, market research, slide decks, sales emails, reports, etc. No need to bring on MBAs or self-proclaimed visionaries or LinkedIn gurus.
I think both camps will be disappointed. The engineers will be disappointed that they'll still need vapid business/product people to keep them pointed at projects that will actually put food on the table. The business/product people will be disappointed that they'll still need sanctimonious engineers to make anything actually work.
Yeah, those who were creating sophisticated business models in Excel, fancy slides, etc., they will also get humbled.
Things go wrong as soon as I ask the AI to write something that I don't fully grasp, like some canvas code that involves choosing control points and clipping curves.
I currently use AI as a tool that writes code I could write myself. AI does it faster.
If I need to solve a problem in a domain that I haven't mastered, I never let the AI drive. I might ask some questions, but only if I can be sure that I'll be able to spot an incorrect hallucinated answer.
I've had pretty good luck asking AI to write code to exacting specifications, though at some point it's faster to just do it yourself
One of the things I noticed is that I'm pretty sure I was still more productive with AI, but I still had full control over the codebase, precisely because I didn't let AI take over any part of the mental modelling part of the role, only treating it as, essentially, really really good refactoring, autocompletion, and keyboard macro tools that I interact with through an InterLISP-style REPL instead of a GUI. It feels like a lever that actually enables me to add more error handling, make more significant refactors for clarity to fit my mental model, and so on. So I still have a full mental model of where everything is, how it works, how it passes data back and forth, and the only technologies in the codebase I'm not familiar with are things I've made the explicit choice not to learn because I don't want to (TKinter, lol).
Meanwhile, when I introduced my girlfriend (a data scientist) to the same agentic coding tool, her first instinct was to essentially vibe code — let it architect things however it wanted, not describe logic, not build the mental model and list of features explicitly herself, and skim the code (if that) and we quickly ended up in a cul de sac where the code was unfixable without a ton of work that would've eliminated all the productivity benefits.
So basically, it's like that study: if you use AI to replace thinking, you end up with cognitive debt and have to struggle to catch up which eventually washes out all the benefits and leaves you confused and adrift
Typically, I just use something like QwenCode. One of the things I like about it, and I assume this is true of Gemini CLI as well, is that it's explicitly designed to make it as easy as possible to interrupt an agent in the middle of its thought or execution process and redirect it, or to reject its code changes and then directly iterate on them without having to recapitulate everything from the start. It's as easy as just hitting Escape at any time. So I tell it what I want to do, usually by giving it a little markdown-formatted paragraph or so, with some bullet points or numbers and maybe a heading or two, explaining the exact architecture and logic I want for a feature, not just the general feature. Then I let it get started and see where it's going. If I generally agree with the approach it's taking, I let it turn out a diff. If I like the diff after reading through it fully, I accept it. And if there's anything I don't like about it at all, I hit Escape and tell it what to change about the diff before it even gets to merge it in.
There are three advantages to this workflow over the ChatGPT copy-and-paste workflow.
One is that the agent can automatically use grep and find and read source files, which makes it much easier and more convenient to load it up with all of the context it needs to understand the existing style, architecture, and purpose of your codebase. Thus, it typically generates code that I'm willing to accept more often without me doing a ton of legwork.
The second is that it allows the agent, of its own accord, to run things like linters, type checkers, compilers, and tests, and automatically try to fix any warnings or errors that result, so that it's more likely to produce correct code that adheres to whatever style guide I've provided. Of course, I could run those tools manually and copy and paste the output into a chat window, but that's just enough extra effort and friction after I've gotten something ostensibly working that I know I'd be lazy and skip it at some point. This sort of ensures it's always done. Some tools like OpenCode even automatically run LSPs and linters and feed the results back into the model after the diff is applied, letting it correct things automatically.
Third, this has the benefit of forcing the AI to use small, localized diffs to generate code, instead of regenerating whole files or just autoregressively completing or filling in the middle, which makes it way easier to keep up with what it's doing and make sure you know everything that's going on. It can't slip subtle modifications past you, and it doesn't tend to generate 400 lines of nonsense.
https://www.youtube.com/watch?v=EL7Au1tzNxE
I don't have the energy to do that for most things I am writing these days, which are small PoCs where the vibe is fine.
I suspect as you do more, you will create dev guides and testing guides that can encapsulate more of that direction so you won't need to micromanage it.
If you used Gemini CLI, you picked the coding agent with the worst output. So if you got something that worked to your liking, you should try Claude.
Definitely. Prompt adherence to stuff that's in an AGENTS/QWEN/CLAUDE/GEMINI.md is not perfect ime though.
>If you used Gemini CLI, you picked the coding agent with the worst output. So if you got something that worked to your liking, you should try Claude.
I'm aware actually lol! I started with OpenCode + GLM 4.5 (via OpenRouter), but I started burning through cash extremely quickly, and I can't remotely afford Claude Code, so I was using qwen-code mostly just for the 2000 free requests a day and prompt caching abilities, and because I prefer Qwen 3 Coder to Gemini... anything for agentic coding.
https://gist.github.com/WolframRavenwolf/0ee85a65b10e1a442e4...
We gave Gemini CLI a spin; it is kinda unhinged, and I am impressed you were able to get your results. After reading through the Gemini CLI codebase, it appears to be a shallow photocopy knockoff of Claude Code, but it has no built-in feedback loops or development guides other than "you are an excellent senior programmer ..." The built-in prompts are embarrassingly naive.
Qwen has its own agent, which I haven't used: https://github.com/QwenLM/qwen-code
Another is https://github.com/sst/opencode
Yeah but I wouldn't get a generous free tier, and I am Poor lmao.
> I am impressed you were able to get your results
compared to my brief stint with OpenCode and Claude Code with claude code router, qwen-code (which is basically a carbon copy of gemini cli) is indeed unhinged, and worse than the other options, but if you baby it just right you can get stuff done lol
TIL according to Wikipedia, the more correct terms are "pull up" and "push down".
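For anyone unfamiliar: "pull up" means moving a member shared by several subclasses into their common superclass, and "push down" is the reverse. A tiny illustrative sketch, with hypothetical classes:

```typescript
// Before: both subclasses duplicate describe().
class CircleBefore {
  constructor(public radius: number) {}
  area(): number { return Math.PI * this.radius ** 2; }
  describe(): string { return `shape with area ${this.area().toFixed(2)}`; }
}
class SquareBefore {
  constructor(public side: number) {}
  area(): number { return this.side ** 2; }
  describe(): string { return `shape with area ${this.area().toFixed(2)}`; }
}

// After "pull up method": describe() lives once in the shared base class.
abstract class Shape {
  abstract area(): number;
  describe(): string { return `shape with area ${this.area().toFixed(2)}`; }
}
class Circle extends Shape {
  constructor(public radius: number) { super(); }
  area(): number { return Math.PI * this.radius ** 2; }
}
class Square extends Shape {
  constructor(public side: number) { super(); }
  area(): number { return this.side ** 2; }
}
```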
How should they learn terms for refactoring today? Should they too train to code and refactor and track customer expectations without LLMs? There's probably an opportunity to create a good refactoring exercise, with and without LLMs, IDEs, and git diff.
System Prompt, System Message, User, User Prompt, Agent, Subagent, Prompt Template, Preamble, Instructions, Prompt Prefix, Few-Shot examples; which thing do we add this to:
First, summarize Code Refactoring terms in a glossary.
Code refactoring: https://en.wikipedia.org/wiki/Code_refactoring
"Ask HN: CS papers for software architecture and design?" (2017) https://news.ycombinator.com/item?id=15778396
"Ask HN: Learning about distributed systems?" (2020) https://news.ycombinator.com/item?id=23932271
Would methods for software quality teams like documentation and tests prevent this cognitive catch-up on so much code with how much explanation at once?
Generate comprehensive unit tests for this. Generate docstrings and add comments to this.
If you build software with genAI from just a short prompt, it is likely that the output will be inadequate with regard to the unstated customer specifications, and that there will then need to be revisions. Eventually, it is likely that a rewrite or a clone of the then-legacy version of the project will be more efficient and maintainable. Will we be attached to the idea of refactoring the code, or to refactoring the prompts and running them again with the latest model?
Retyping is an opportunity to rewrite! ("Punch the keys" -- Finding Forrester)
Are the prompts worth more than the generated code now?
simonw/llm by default saves all prompt inputs and outputs in a sqlite database. Copilot has /save and gemini-cli has /export, but they don't yet autosave or flush before attempting to modify code given the prompt output?
Catch up as a human coder; catch up the next LLM chat context with the prior chat prompt sequences (and manual modifications, which aren't, but probably should be, auto-committed distinctly from the LLM response's modifications).
I've always had a subscription to both ChatGPT and Claude, but Claude has recently almost one-shotted cleanups of major toxic waste dumps left by the previous models.
I'll still use ChatGPT; it seems to be pretty good at algorithms and bouncing ideas back and forth. But when things go off the rails, Opus 4.1 bails me out.
I suppose it depends on the definition of model.
I currently do consider the transformer weights to be a world model, but having a rigid one based on statistical distributions tends to create pretty wonky behavior at times.
That's why I do agree: relying on your own understanding of the code is the best way.
It's amazing seeing these things produce some beautiful functions and designs, then promptly forget they exist and begin writing incompatible, half re-implemented, non-idiomatic code.
If you're blind to what they are doing, it's just going to be layers upon layers of absolute dreck.
I don't think they will get out of cul-de-sacs without a true deductive engine, and a core of hard, testable facts to build on. (I'm honestly a bit surprised that this behavior didn't emerge early in training.)
Though I think human minds are the same way, in this respect, and fall for the same sort of traps. Though at least our neurons can rewire themselves on the fly.
I know a LOT of people who sparingly use their more advanced reasoning faculties, and instead primarily rely on vibes, or pre-trained biases. Even though I KNOW they are capable of better.
So I fear I’m fighting a losing battle. I can’t and don’t want to review everything my coworkers put out, and code has always been a means to an end for leadership anyways so it seems difficult to justify carving out time for the team as a whole to learn, especially in the age of genAi.
Including gym equipment.
I have a feeling that the skills for fixing up AI slop will be in-demand quite soon.
But, more to the point - AI code is legacy code.
LLMs in a nutshell. They know everything because they are just guessing the answers.
My favorite one of these is when I was having Claude write a nix derivation to package kustomize at 4.5.5 and instead of getting the correct source version and building it, it just set some build args on the latest version to override the output of the --version CLI flag.
What was called "move fast and break things" is one example. This development model leaves behind a trail of shit that is poorly integrated and, often, fully understood by no one.
Like vibe coding, this is all great until something doesn't work (not IF something doesn't work, but UNTIL).
This failure to appreciate mastery is illustrated in even earlier "business model" strategies, such as elimination of large corporate R&D laboratories, and the LBO raiders.
It's widely understood that every product and service in the current era is going to shit (certainly for the users, even if not for the ownership). Toasters built in the early '50s still work, toasters built today are designed for the dump. This isn't a problem with toasters, it's a problem with the business model of unrestrained capitalism.
I find that when I'm reaching for AI it's because I'm actually trying to decide if I want to implement the idea I have, and need a PoC vs. expecting production ready things. For example, I was working on a UI and wanted to be able to "swap" two ul lists in JS. Not too hard of a thing to do, but I didn't remember the syntax, etc, and instead of hand writing, I asked the AI to do it.
It worked. The code was insanely overwrought and iterating over each list item 1 by 1 etc etc etc. But that's fine, I'm still not sure the "swap button" in the UI is the right UX, so I put a todo and am working on the "right problem" instead of banging out a clean list swap function. The mastery I'm seeking is not "most elegant usable list swap function, with maintainable code" but instead "Is this the best UX?" AI slop helped me stay in flow state towards that mastery.
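A hand-written swap can stay small; here's a minimal sketch, assuming the two `<ul>` elements are already in the DOM and using a placeholder node (`swapLists` is a hypothetical name, not the code from that project):

```typescript
// Swap two <ul> elements in place without touching their children one by one.
function swapLists(a: HTMLUListElement, b: HTMLUListElement): void {
  const placeholder = document.createComment("swap-marker");
  a.replaceWith(placeholder); // remember where a was
  b.replaceWith(a);           // move a into b's slot
  placeholder.replaceWith(b); // move b into a's old slot
}
```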
None of it could have been done without AI, yet I am somehow inclined to agree with the sentiment in this article.
Most of what I've done lately is, in some strange sense, just typing quickly. I already knew what changes I wanted, in fact I had it documented in Trello. I already understood what the code did and where I wanted it to go. What was stopping me, then?
Actually, it was the dread loop of "aw gawd, gotta move this to here, change this, import that, see if it builds, goto [aw gawd]". To be fair, it isn't just typing, there ARE actual decisions to be made as well, but all with a certain structure in mind. So the dread loop would take a long long time.
To the extent that I'm able to explain the steps, Claude has been wonderful. I can tell it to do something, it will make a suggestion, and I will correct it. Very little toil, and being able to make changes quickly actually opens up a lot of exploration.
But I wonder if AI had not been invented at this point in my career, where I would be. I wonder what I will teach my teenager about coding.
I've been using a computer to write trading systems for a long time now. I've slogged through some very detailed little things over the years. Everything from how networks function to how c++ compiles things, how various exchanges work on the protocol level, how the strats make money.
I consider it all a very long apprenticeship.
But the timing of AI, for me, is very special. I've worked through a lot of minutiae in order to understand stuff, and just as it's coming together in a greater whole, I get this tool that lets me zip through the tedium.
I wonder, is there a danger to giving the tool to a new apprentice? If I send my kid off to learn coding using the AI, will it be a mess? Or does he get to mastery in half the time of his father? I'm not sure the answer is so obvious.
And it's not just complacence. This illusion of mastery cuts even harder for those who haven't really developed the skills to review the code. And some code is just easier to write than it is to read, or easier to generate with confidence using an automaton or some macro-level code, which the LLMs will often not produce, preferring to inline sub-solutions over and over in various styles, unless you have enough mastery to know how to ask for the appropriate abstraction and would still rather not just write the deterministic version.
> I wonder, is there a danger to giving the tool to a new apprentice? If I send my kid off to learn coding using the AI, will it be a mess
As long as your kid develops the level of mastery needed to review the code, and takes the time to review it, I don't think it'll be a mess (or not too large to debug). A lot of this depends on how role models use the tool, I think. If it's always a nonchalant "oh, we'll just re-roll or change the prompt and see" then I doubt there will be mastery. If the response is "hmm, *opens debugger*" then it's much more likely.

I don't think there's anything wrong with holding back on access to LLM code generators, but that's like saying no access to any modern LLMs at this point, so maybe that's too restrictive; tbh I'm glad that's not a decision I'm having to make for any young person these days. But separate from that, you can still encourage a status quo of due diligence for any code that gets generated.
I want to view AI coding as the invention of new coding tools, at least in the way you described. I hope it will be more like punch cards -> assembly -> BASIC/C -> scripting languages -> some sort of well-structured natural language.
Their ability to look correct exceeds their ability to be correct. Optimized for form more than function. Like politicians and management at large companies. Like the rote student who can be correct, but won't know why. Being fair though, it is a useful tool among many others in our toolbox.
If everyone uses AI, then the standard of mastery hasn't lowered - it's increased!
We're now caught in a dilemma: Do we play the short term game and use AI at the expense of our skills, or do we play the long term game and avoid AI where possible to build up expertise while risking being out-competed by everyone else who is using AI?
gwynforthewyn•5mo ago
Like the great engineers who came before us and told us what they had learned, Rob Pike, Jez Humble, Martin Fowler or Bob Martin, it's up to those of us with a bit more experience to help the junior generation to get through this modern problem space and grow healthily. First, we need to name the problem we see, and for me that's what I wrote about here.
MoreQARespect•5mo ago
Even worse though, they all seem to think that the solution to becoming overwhelmed with complexity isn't to parcel it up with strategies like BDD and TDD, but to just get better at stuffing more complexity into their brains.
To be honest, I see a similar attitude with LLMs where loads of people think you just need to stuff more into the context window and tweak the prompt and then it'll be reliable.