That said, I also made it create Rust and Go bindings: two languages I don't really know that well, or at least not well enough for that kind of start-to-finish result.
Another commenter asked a really interesting question: how do you not degrade your abilities? I have to say that I still had to spend days figuring out really hard problems. Who knew that 64-bit MinGW has a different struct layout for gettimeofday than 64-bit Linux? It's obvious in hindsight, but it took me a really long time to figure out that was the issue, when all I had to go on was something that looked like incorrect instruction emulation. I must have read the LoongArch manual up and down several times and gone through instructions one by one, disabling everything I could think of, before finally landing on the culprit being nothing more than a mis-emulated, kind-of-legacy system call that tells you the time. ... and if the LLM had found this issue for me, I would have been very happy about it.
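(For the curious: presumably the mismatch comes down to LP64 vs LLP64. `long` is 64-bit on 64-bit Linux but stays 32-bit under MinGW-w64, so a host-side `struct timeval` is half the size the guest expects. A minimal sketch of the idea, with made-up struct names rather than the emulator's actual code:)

```c
#include <stdint.h>
#include <stdio.h>

/* A 64-bit Linux guest expects the LP64 layout: two 64-bit longs. */
struct guest_timeval { int64_t tv_sec; int64_t tv_usec; };      /* 16 bytes */

/* MinGW-w64 is LLP64, so the host's own struct timeval is built
 * from 32-bit longs. */
struct mingw_host_timeval { int32_t tv_sec; int32_t tv_usec; }; /*  8 bytes */

int main(void) {
    printf("guest layout: %zu bytes, MinGW host layout: %zu bytes\n",
           sizeof(struct guest_timeval), sizeof(struct mingw_host_timeval));
    /* An emulated gettimeofday that copies the host's struct straight into
     * guest memory therefore writes half the bytes the guest expects; the
     * fix is to fill a guest-layout struct explicitly, field by field. */
    return 0;
}
```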
There are still unknowns that LLMs cannot help with, like running Golang programs inside the emulator. Golang has a complex runtime that uses signal-based preemption (sysmon), threads, and many other things, all of which I do emulate, but there is still something missing to get all the way through to main() even for a simple Hello World. Who knows if it's the ucontext passed to signal handlers, or something with threads, or per-thread signal state. Making progress will require reading the Go system libraries (which are plain source code) and the assembly for the given architecture (LA64), and perhaps instrumenting it so that I can see what's going wrong. Another route could be implementing an RSP server for remote GDB via a simple TCP socket.
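(If anyone wants to go the RSP route: the wire format is simple. Packets are `$<payload>#<two hex checksum digits>` over the TCP socket, where the checksum is the payload bytes summed modulo 256, and the peer acks with '+' or '-'. A rough sketch of just the framing; the function names are made up, not taken from any existing stub:)

```c
#include <stdio.h>
#include <string.h>

/* Frame one GDB RSP packet as "$<payload>#<checksum>", where the checksum
 * is the sum of the payload bytes modulo 256 written as two hex digits.
 * `out` must have room for strlen(payload) + 5 bytes. */
static void rsp_frame(const char *payload, char *out)
{
    unsigned sum = 0;
    for (const char *p = payload; *p; ++p)
        sum = (sum + (unsigned char)*p) & 0xff;
    sprintf(out, "$%s#%02x", payload, sum);
}

int main(void)
{
    char pkt[64];
    rsp_frame("S05", pkt);   /* "stopped with SIGTRAP" stop-reply */
    puts(pkt);               /* prints $S05#b8 */
    rsp_frame("g", pkt);     /* GDB's read-all-registers request  */
    puts(pkt);               /* prints $g#67 */
    return 0;
}
```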
In conclusion, I will say that I can only remember two times I ditched everything the LLM did and just did it myself from scratch. It's bound to happen, as programming is an opinionated art. But I've used it a lot just to see what it can dream up, and it has occasionally impressed. Other times I'm in disbelief as it mishandles simple things, like avoiding an extra masking operation by placing a signed value in the top bits so that extracting it is a single shift while it shares space with something else in the lower bits (a sketch of what I mean is below). Overall, I feel like I've spent more time thinking about high-level things (and occasionally low-level optimizations).
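(To spell out the trick, with made-up field widths: keep the signed field in the top bits so that one arithmetic shift both extracts it and sign-extends it, while something else shares the low bits.)

```c
#include <stdint.h>
#include <stdio.h>

/* Signed offset lives in bits 63..16; 16 bits of flags share the low end.
 * Extracting the offset is a single arithmetic shift: no mask and no
 * separate sign-extension step.  (Relies on >> of a negative int64_t being
 * an arithmetic shift, which mainstream compilers guarantee in practice.) */
static uint64_t pack(int64_t offset, uint16_t flags)
{
    return ((uint64_t)offset << 16) | flags;
}
static int64_t  unpack_offset(uint64_t word) { return (int64_t)word >> 16; }
static uint16_t unpack_flags(uint64_t word)  { return (uint16_t)word; }

int main(void)
{
    uint64_t w = pack(-42, 0x0007);
    printf("offset=%lld flags=0x%04x\n",
           (long long)unpack_offset(w), unpack_flags(w)); /* offset=-42 flags=0x0007 */
    return 0;
}
```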
What helps me is:
- Prefer faster models like VSCode's Copilot Raptor Mini which, despite the name, is maybe 80% as capable as Sonnet 4.5, and much faster. It's a fine-tuned GPT-5 mini.
- Start writing the next prompt while the LLM works, or keep pondering the current problem at hand. This helps our chaotic brains stay focused.
> [Claude Code and Cursor] can now work across entire codebases, understand project context, refactor multiple files at once, and iterate until it’s really done.
But I haven’t seen anyone doing this on e.g. YouTube? Maybe that kind of content isn’t easy to monetize, but if it’s as easy to use AI as everyone says surely someone would try.
Yeah, 18 months ago we were apparently going to have personal SaaSes and all sorts of new software - I don't see anything but an even more unstable web than ever before
I've done this many times over, and it's by far one of the least impressive things I've seen CC achieve with a good agent/skills/collab setup.
Even without AI you cannot do a tight 10-minute video on legacy code unless you have done a lot of work ahead of time to map it out, and then what’s the point?
For me, however, there is one issue: how can I utilize AI without degenerating my own abilities? I use AI sparingly because, to be honest, every time I use AI, I feel like I'm getting a little dumber. I fear that excessive use of AI will lead to the loss of important skills on the one hand and create dependencies on the other. Who benefits if we end up with a generation of software developers who can no longer program without AI? Programming is not just writing code, but a process of organizing, understanding, and analyzing. What I want above all is AI that helps me become better at my job and continue to build skills and knowledge, rather than making me dependent on it.
Compare the ROI of that to being able to get more or less the software you need in a few hours of prompting; it's a new paradigm, progress is (still) exponential, and we don't know where exactly things will settle.
Experts will get scarce and very sought after, but once they start to retire in 10-20-30 years... either dark ages or AI overlords await us.
Indeed I believe that, but in my experience these skills get more and more useless in the job market. In other words: retaining such (e.g. low-level coding) skills is an intensively practised hobby of such people that is (currently) of "no use" in the job market.
Couldn't the same statement, to some extent, be applied to using a sorting lib instead of writing your own sorting algorithm? Or how about using a language like python instead of manually handling memory allocation and garbage collection in C?
> What I want above all is AI that helps me become better at my job and continue to build skills and knowledge
So far, in my experience, the quality of what AI outputs is directly related to the quality of the input. I've seen AI projects made by junior devs that have an incredibly messy and confusing architecture, despite them using the same language and LLM model that I use. The main difference? My AI work was based on the patterns and architecture that I designed thanks to my knowledge, which also happens to ensure that the AI will produce less buggy software.
My cynical view is you can't, and that's the point. How many times before have we seen the pattern of "company operates at staggering losses while eliminating competition or becoming entrenched in enough people's lives, and then clamps down to make massive profits"?
But at this point, it's like refusing to use vehicles to travel long distances for fear of becoming physically unfit. We go to the gym.
This gets particularly tricky when the task requires a competency that you yourself lack. But here too the question is - would you be willing to delegate it to another human whom you don't fully trust (e.g. a contractor)? The answer for me is in many cases "yes, but I need to learn about this enough so that I can evaluate their work" - so that's what I do, I learn what I need to know at the level of the tech lead managing them, but not at the level of the expert implementing it.
That said, as long as there’s the potential for AI to hallucinate, we’ll always need to be vigilant - for that reason I would want to keep my programming skills sharp.
AI assisted software building by day, artisanal coder by night perhaps.
As humans we have developed tools to ease our physical needs (we don’t need to run, walk or lift things) and now we have a tool that thinks and solves problems for us.
This is not a new problem I think. How do you use Google, translator, (even dictionaries!), etc without "degenerating" your own abilities?
If you're not careful and always rely on them as a crutch, they'll remain just that; without actually "incrementing" you.
I think this is a very good question. How should we actually be using our tools such that we're not degenerating, but growing instead?
By writing down every foreign word/phrase that I don't know, and adding a card for it to my cramming card box.
Personally I think my skill lies in solving the problem by designing and implementing the solution, but not how I code day-to-day. After you write the 100th getter/setter you're not really adding value, you're just performing a chore because of language/programming patterns.
Using AI and being productive with it is an ability and I can use my time more efficiently than if I were not to use it. I'm a systems engineer and have done some coding in various languages, can read pretty much anything, but am nowhere near mastery in any of the languages I like.
Setting up a project, setting up all the tools and boilerplate, writing the main() function, etc are all tasks that if you're not 100% into the language take some searching and time to fiddle. With AI it's a 2-line prompt.
Introducing plumbing for yet another feature is another chore: search for the right libraries/packages, add dependencies, learn to use the deps, create a bunch of files, sketch the structs/classes, sketch the methods, but not everything is perfectly clear yet, so the first iteration is "add a bunch of stuff, get a ton of compiler warnings, and then refine the resulting mess". With AI it's a small paragraph of text describing what I want and how I'd like it done, asking for a plan, and then simply saying "yes" if it makes sense. Then wait 5-15m. Meanwhile I'm free to look at what it's doing and whether it's doing something stupid or wrong, or think about the next logical step.
Normally the result for me has been 90% good, I may need to fix a couple things I don't like, but then syntax and warnings have already been worked out, so I can focus on actually reading, understanding and modifying the logic and catching actual logic issues. I don't need to spend 5+ days learning how to use an entire library, only to find out that the specific one I selected is missing feature X that I couldn't foresee using last week. That part takes now 10m and I don't have to do it myself, I just bring the finishing touches where AI cannot get to (yet?).
I've found that giving the tool (I personally love Copilot/Claude) all the context you have (e.g. .github/copilot-instructions.md) makes a ton of difference with the quality of the results.
I'm not too worried about degrading abilities since my fundamentals are sound and if I get rusty due to lack of practice, I'm only a prompt away from asking my expert assistant to throw down some knowledge to bring me back up to speed.
Whilst my hands-on programming has reduced, the variety of software I create has increased. I used to avoid writing complex automation scripts in bash because I kept getting blocked trying to remember its archaic syntax, so I'd typically use bun/node for complex scripts, but with AI I've switched back to writing most of my scripts in bash (it's surprising what's possible in bash), and have automated a lot more of my manual workflows since it's so easy to do.
I also avoided Python because the lack of typing and API discovery slowed me down a lot, but with AI autocomplete, whenever I need to know how to do something I'll just write a method stub with comments and AI will complete it for me. I'm now spending lots of time writing Python, to create AI Tools and Agents, ComfyUI Custom Nodes, Image and Audio Classifiers, PIL/ffmpeg transformations, etc. Things I'd never consider before AI.
I also don't worry about its effects, as I view it as inevitable. With the pendulum having swung towards code now being dispensable/cheap to create, what's more important is velocity and being able to execute your ideas quickly; for me that means using AI where I can.
These tools are seriously starting to become actually useful, and I’m sorry but people aren’t lying when they say things have changed a lot over the last year.
e.g. I had an issue with connecting to AWS S3, gave Claude some of the code to connect, and it diagnosed a CREDENTIALS issue without seeing the credentials file or the error itself. It can even find issues like "oh, you have an extra space in front of the build parameter that the user passed into a Jenkins job". Something that a human might have found in 30+ minutes of grepping, checking, etc., it found in <30 seconds.
It also makes it trivial to do things like "hey, convert all of the print statements in this python script to log messages with ISO 8601 time format".
Folks talk about "but it adds bugs" but I'm going to make the opposite argument:
The excuse of "we don't have time to make this better" is effectively gone. Quality code that is well instrumented, has good metrics and easy to parse logs is only a few prompts away. Now, one could argue that was the case BEFORE we had AI/LLMs and it STILL didn't happen so I'm going to assume folks that can do clean up (SRE/DevOps/code refactor specialists) are still going to be around.
I have yet to see any evidence of the third case. I'm close to banning AI for my junior devs. Their code quality is atrocious. I don't have time for all that cleanup. Write it well the first time around.
10 years ago google would have had a forum post describing your exact problem with solutions within the first 5 results.
Today google delivers 3 pages of content farm spam with basic tutorials, 30% of them vaguely related to your problem, 70% just containing "aws" somewhere, then stops delivering results.
The LLM is just fixing search for you.
Edit: and by the way, it can fix search for you just because somewhere out there there are forum posts describing your exact problem.
Honest question for the engineers here. Have you seen this happening at your company? Are strong engineers falling behind when refusing to integrate AI into their workflow?
A guy will proudly deploy something he vibe coded, or “write the documentation” for some app that a contractor wrote, and then we get someone in the business telling us there’s a bug because it doesn’t do what the documentation says, and now I’m spending half a day in meetings to explain and now we have a project to overhaul the documentation (meaning we aren’t working on other things), all because someone spent 90 seconds to have AI generate “documentation” and gave themselves a pat on the back.
I look at what was produced and just lay my head down on the desk. It’s all crap. I just see a stream of things to fix, convention not followed, 20 extra libraries included when 2 would have done. Code not organized, where this new function should have gone in a different module, because where it is now creates tight coupling between two modules that were intentionally built to not be coupled before.
It’s a meme at this point to say, ”all code is tech debt”, but that’s all I’ve seen it produce: crap that I have to clean up, and it can produce it way faster than I can clean it up, so we literally have more tech debt and more non-working crap than we would have had if we just wrote it by hand.
We have a ton of internal apps that were working, then someone took a shortcut and 6 months later we’re still paying for the shortcut.
It’s not about moving faster today. It’s about keeping the ship pointed in the right direction. AI is a guy on a jet ski doing backflips, telling us we’re falling behind because our cargo ship hasn’t adopted jet skis.
AI is a guy on his high horse, telling everyone how much faster they could go if they also had a horse. Except the horse takes a dump in the middle of the office and the whole office spends half their day shoveling crap because this one guy thinks he’s going faster.
One nice change however is that you can guide the latter towards a total refactor during code review and it takes them a ~day instead of a ~week.
I don’t think I will. I am glad I have made the radical decision, for myself, to wilfully remain strict in my stance against generative AI, especially for coding. It doesn’t have to be rational, there is good in believing in something and taking it to its extreme. Some avoid proprietary software, others avoid eating sentient beings, I avoid generative AI on pure principle.
This way I don’t have to suffer from these articles that want to make you feel bad, and become almost pleading, “please use AI, it’s good now, I promise” which I find frankly pathetic. Why do people care so much about it to have to convince others in this sad routine? It honestly feels like some kind of inferiority complex, as if it is so unbearable that other people might dislike your favourite tool, that you desperately need them to reconsider.
I don’t know why this is so controversial; it’s just a tool. You should learn to use it, otherwise, as the author of this post said, you will get left behind. But don’t cut yourself on the new tool (lots of people are doing this).
I personally love it because it allows me to create personal tools on the side that I just wouldn’t have had time for in the past. The quality doesn’t matter so much for my personal projects and I am so much more effective with the additional tools I’m able to create.
Do you really "don't know why"? Are you sure?
I believe that ignoring the consequences that commercial LLMs are having on the general public today is just as radical as being totally opposed to them. I can at least understand the ethical concerns, but being completely unaware of the debate on artificial intelligence at this stage is really something that leaves me speechless, let me tell you.
If you disagree with the above statement, try replacing "AI" with "Docker", "Kubernetes", "Microservices architecture", "NoSQL", or any other tool/language/paradigm that was widely adopted in the software development industry until people realized it's awesome for some scenarios but not a be-all and end-all solution.
It's very clearly getting better and better rapidly. I don't think this train is stopping even if this bubble bursts.
The cold ass reality is: We're going to need a lot less software engineers moving forward. Just like agriculture now needs way less humans to do the same work than in the past.
I hate to be blunt but if you're in the bottom half of the developer skill bell curve, you're cooked.
Responsible use of ai means reading lots and lots of generated code, understanding it, reviewing and auditing it, not "vibe coding" for the purpose of avoiding ever reading any code.
I do like to read other people's code if it is of an exceptionally high standard. But otherwise I am very vocal in criticizing it.
Do I use it? Yes, a lot, actually. But I also spend a lot of time pruning its overly verbose and byzantine code; my Esc key is fading from the number of times I've interrupted it to steer it towards a non-idiotic direction.
It is useful, but if you trust it too much, you're creating a mountain of technical debt.
They dismiss the religion-like hype machine.
You want to market to engineers, stick to provable statements. And address some of their concerns. With something other than "AI is evolving constantly, all your problems will be solved in 6 months, just keep paying us."
Oh by the way, what is the OP trying to sell with these FOMO tactics? Yet another ChatGPT frontend?
Allow me to repeat myself: AI is for idiots.
Fully expect them to include youtube levels of advertising in 1-2 years though. Just to compensate for the results being somewhat not spammy.
You download a tool written by a human, you can reasonably expect that it does what the author claims it does. And more, you can reasonably expect that if it fails it will fail in the same way in the same conditions.
I wrote some Turing Machine programs back in my Philosophy of Computer Science class during the 80's, but since then my Turing Machine programming skills have atrophied, and I let LLMs write them for me now.
Really though, the potential in this tech is unknown at this point. The measures we have suggest there's no slowdown in progress, and it isn't unreasonable for any enthusiast or policy maker to speculate about where it could go, or how we might need to adjust our societies around it.
What is posted to HN daily is beyond speculation. I suppose a psychologist has a term for what it is, I don't.
Edit: well, guess what? I asked an "AI":
Psychological Drivers of AI Hype:
Term | Driver | Resulting Behavior
-----------------------|-----------------------|------------------------------
ELIZA Effect | Symbolic projection | Treating a script like a person.
Automation Bias | Cognitive offloading | Trusting AI over your own eyes.
Techno-Optimism | Confirmation bias | Ignoring risks for "progress."
Interface Familiarity | Fluency heuristic | Friendly UI = "Smart" system.
By the way, the text formatting is done by the "AI" as well. Asked it to make the table look like a table on HN specifically.

As a 0.1x low effort Hacker News user who can't lift a pinky to press a shift or punctuation key, you should consider using AI to improve the quality of your repetitive off-topic hostile vibe postings and performative opinions.
Or put down the phone and step away from the toilet.
AI is for idiots
And you just unwittingly proved my point, so I'm downgrading you to an 0.01x low effort Hacker News user.
If there are no other effects of AI than driving people like you out of the industry, then it's proven itself quite useful.
And that's where the "AI" is lacking.
"AI can write a testcase". Can it write a _correct_ test case (i.e. one that i only have to review, like i review my colleague work) ?
"AI can write requirements". Now, that i'm still waiting to see.
And is the test case useful for something? On non textbook code?
AI developers are 0.1x "engineers"
sorry you’re so angry though. best of luck
"She wrote a thing in a day that would have taken me a month"
This scares me. A lot.
I never found the coding part to be a bottleneck, but the issues arise after the damn thing is in prod. If I work on something big (that will take me a month), that's going to be anywhere from (I'm winging these numbers) 10K LOC to 25K LOC.
If that's the benchmark for me, the next guy using AI will spew out, at a bare minimum, double the amount of code, and in many cases 3x-4x.
The surface area for bugs is just vastly bigger, and fixing these bugs will eventually take more time than you "won" using AI in the first place.
I am one of those dismissers. I am constantly trash talking AI. Also, I have tried more tools and more stress scenarios than a lot of enthusiasts. The high bars are not in my head, they are on my repositories.
Talk is cheap. Show me your AI generated code. Talk tech, not drama.
But I see where things are going. I tried some of the newer tooling over the past few weeks. They’re too useful to ignore now. It feels like we’re entering into an industrial age for software.
I can't imagine being so eager to socially virtue signal. Presumably some greybeard told him it was a waste of time and it upset him
I don't have a beard, but if I did I'm sure it would be white, beyond grey.
It's okay. It's okay to feel annoyed, you have a tough battle ahead of you, you poor things.
I may be labelled a grey beard but at least I get to program computers. By the time you have a grey beard maybe you are only allowed to talk to them. If you are lucky and the billionaires that own everything let you...
Don't be so quick to point at old people and make assumptions. Sometimes all those years actually translate into useful experience :)
Possibly. The focus of a lot of young people should be to try and effect political change that stops billionaires' wealth from growing unchecked. AI is going to accelerate all of this very rapidly now. Just look at what kind of world some of those with the most wealth are wanting to impose on the others now. It's frightening.
How many people are actually saying this? Also how does one use modern coding tools in heavily regulated contexts, especially in Europe?
I can't disagree with the article and say that AI has gotten worse because it truly hasn't, but it still requires a lot of hand holding. This is especially true when you're 'not allowed' to send the full context of a specific task (like in health care). For now at least.
Let’s wait with the evaluation until the honeymoon phase is over. At the moment there are plenty of companies that offer cheap AI tools. It will not stay that way. At the moment most of their training data is man-made and not AI-made, which would make the AIs worse if used for training. It will not stay that way.
You see it obviously with the artists and image/video generators too.
We went through this before with art, too: Dadaism, Impressionism, and photography.
Ultimately, it's just more abstraction that we have to get used to -- art is stuff people create with their human expression.
It is funny to see everyone argue so vehemently without any interest in the same arguments that happened in the past.
Exit Through the Gift Shop is a good movie that explores that topic too, though with near-plagiarized mass production, not LLMs, but I guess that's pretty similar too!
https://daily.jstor.org/when-photography-was-not-art/
Allow me to repeat myself: AI is for idiots.
I feel like “Luddite” is a misunderstood term.
https://en.wikipedia.org/wiki/Luddite
> Malcolm L. Thomis argued in his 1970 history The Luddites that machine-breaking was one of the very few tactics that workers could use to increase pressure on employers, undermine lower-paid competing workers, and create solidarity among workers. "These attacks on machines did not imply any necessary hostility to machinery as such; machinery was just a conveniently exposed target against which an attack could be made." [emph. added] Historian Eric Hobsbawm has called their machine wrecking "collective bargaining by riot", which had been a tactic used in Britain since the Restoration because manufactories were scattered throughout the country, and that made it impractical to hold large-scale strikes. An agricultural variant of Luddism occurred during the widespread Swing Riots of 1830 in southern and eastern England, centring on breaking threshing machines.
Luddites were closer to “class struggle by other means” than “identity politics.”
The early Industrial Revolution that the original Luddites objected to resulted in horrible working conditions and a power shift from artisans to factory owners.
Dadaism was a reaction to WWI, where the aristocracy's greed and petty squabbling led to 17 million deaths.
Quite a few come to mind: chemical and biological weapons, beanie babies, NFTs, garbage pail kids... Some take real effort to eradicate, some die out when people get bored and move on.
Today's version of "AI," i.e. large language models for emitting code, is on the level of fast fashion. It's novel and surprising that you can get a shirt for $5, then you realize that it's made in a sweatshop, and it falls apart after a few washings. There will always be a market for low-quality clothes, but they aren't "disrupting non-nudity."
So are beanie babies, NFTs and garbage pail kids -- Things that have fallen out of fashion isn't the same thing as eradicating a technology. I think that's part of the difficulty, how could you roll back knowledge without some Khmer Rouge generational trauma?
I think about the original use of steam engines and the industrial revolution -- Steam engines were so inefficient, their use didn't make sense outside of pulling their own fuel out of the ground -- Many people said haha look how silly and inefficient this robot labor is. We can see how that all turned out.[2]
1: https://www.armscontrol.org/factsheets/timeline-syrian-chemi...
2: https://en.wikipedia.org/wiki/Newcomen_atmospheric_engine
But it's really a mixed bag, because for the subsequent 3-4 tasks in a codebase that I was familiar with, Claude managed to produce over-commented, over-engineered slop that didn't do what I asked for and took shortcuts in implementing the requirements.
I definitely wouldn't dismiss AI at this point because it occasionally astounds me and does things I would never in my life have imagined possible. But at other times, it's still like an ignorant new junior developer. Check back again in 6 months I guess.
I'm so tired of this kind of reference to Stack Overflow. I used SO for about 15 years, and still visit plenty these days.
I rarely, if ever, copied from Stack Overflow. But I sure learned a great deal from SO.
The biggest ongoing problem with LLMs for code is that they have no ability to express low confidence in solutions where they don't really have an answer; instead they just hallucinate things. Claude will write ten great bash lines for you, but then on the eleventh it will completely hallucinate an option on some Linux utility you hardly have time to care about, where the correct answer is "these tools don't actually do that and I don't have an easy answer for how you could do that". At this point I am very quick to notice, when Claude gets itself into an endless loop of thought, that I'm going about something the wrong way. Someone less experienced would have a very hard time recognizing the difference.
This is plainly true, and you are just angry that you don't have a rebuttal
Missing in these discussions is what kinds of code people are talking about. Clearly if we're talking about a dense, highly mathematical algorithm, I would not let an LLM anywhere near that. We are talking about day-to-day boilerplate / plumbing stuff. The vast majority of boring grunt work that is not intellectually stimulating. If your job is all Carnegie Mellon-level PhD algorithm work, then good for you.
edit: I get that it looks like you made this account four days ago to troll HN on AI stuff. I get it, I have a bit of a mission here to pointedly oppose the entrenched culture (namely the extreme right wing elements of it). But your trolling is careless and repetitive enough that it looks like.....is this an LLM account instructed to troll HN users on LLM use ? funny
I keep seeing this over and over by so called "engineers".
You can dismiss the current crop of transformers without dismissing the wider AI category. To me this is like saying that users "dismiss computers" because they dismiss Windows and instead prefer Linux, or that devs are "rejecting modern practices" for not getting on the microservice hype train or not using React.
Intellisense pre-GPT is a good example of AI that wasn't using transformers.
And of course, you can criticise some usages of transformers in IDEs and editors while appreciating and using others.
"My coworker uses Claude Code now. She finished a project last week that would’ve taken me a month". This is one of those generalisations. There is no nuance here. The range of usage from boilerplate to vibe code level is vast. Quickly churning out code is not a virtue. It is not impressive to ship something only to find critical bugs on the first day. Nor is it a virtue using it at the cost of losing understanding of the codebase.
This rigid thinking by devs needs to stop imo. For so called rational thinkers, the development world is rife with dogma and simplistic binary thinking.
If using transformers at any level is cost-effective for all, the data will speak for itself. Vague statements and broad generalisations are not going to sway anyone and will just make this kind of article sound like validation-seeking behaviour.
I'm not even sure there is much room left for one.
There is very little alignment in starting assumptions between most parties in this convo. One guy is coding mission critical stuff, the other is doing throw away projects. One guy depends on coding to put food on table, the other does not. One guy wants to understand every LoC, other is happy to vibe code. One is a junior looking for first job, other is in management in google after being promoted out of engineering. One guy has access to $200/m tech, the other does not. etc etc
We can't even get consensus on tab vs spaces...we're not going to get AI & coding down to consensus or who is "right".
Perhaps a bit nihilistic and jaded, but I'm very much leaning towards "place your bets & may the odds be ever in your favour".
I also find that the actual coding is important. The typing may not be the most interesting bit, but it's one of the steps that helps refine the architecture I had in my head.
That happens to produce good code as a side effect. And a chat bot is perfect for this.
But my obsession is not with output. Every time I use AI agents, even if it does exactly what I wanted, it’s unsatisfying. It’s not something I’m ever going to obsess over in my spare time.
I’m wondering if we can do something better…
My aunt was born in the 1940s, and was something of an old-fashioned feminist. She didn't know why she wasn't allowed to wear pants, or why she had to wait for the man to make the first move, etc. She tells a story about a man who ditched her at a dance once because she didn't know the "latest dance." Apparently in the 1950s, some idiot was always inventing a new dance that everyone _just had to follow_. The young man was so embarrassed that he left her at the dance.
I still think about this story, and think about how awful it would have been to live in the 40s. There always has been social pressure and change, but the "everyone's got to learn new stupid dances all the time" sort of pressure feels especially awful.
This really reminds me of the last 10-20 years in technology. "Hey, some dumb assholes have built some new technology, and you don't really have the choice to ignore it. You either adopt it too, or are left behind."
I’m an early adopter and nowadays all I do is to co-write context documents so that my assistant can generate the code I need
AI gives you an approximated answer; it's up to you to steer it to a good enough answer, and that takes time and a learning curve … and it evolves really fast.
Some people are just not good at constantly learning things
Many programmers work on problems (nearly) *all day* where AI does not work well.
> AI gives you an approximated answer, it depends on you how to steer it to a good enough answer
Many programmers work on problems where correctness is of essential importance, i.e. if a code block is "semi-right" it is of no use - and even having to deal with code blocks where you cannot trust that the respective programmer thought deeply about such questions is a huge time sink.
> Some people are just not good at constantly learning things
Rather: some people are just not good at constantly looking beyond their programming bubble where AI might have some use.
Sure, if you want to learn programming languages for programming's sake, then yeah, don't Vibe Code (i.e. text-prompting AI to code); use AI as a knowledgeable companion that's readily on hand to help you whenever you get stuck. But if your goal is to create software that achieves your objectives, then you're doing yourself a disservice if you're not using AI to its maximum potential.
Given my time on this earth is finite, I'm in the camp of using AI to be as productive as possible. But that's still not everything yet, I'm not using it for backend code as I need to verify every change. But more than happy to Vibe code UIs (after I spend time laying down a foundation to make it intuitive where new components/pages go and API integration).
Other than that I'll use AI where I can (UIs, automation & deployment scripts, etc), I've even switched over to using React/Next.js for new Apps because AI is more proficient with it. Even old Apps that I wouldn't normally touch because it used legacy tech that's deprecated, I'll just rewrite the entire UI in React/Next.js to get it to a place where I can use text prompts to add new features. It took about ~20mins for Claude Code to get the initial rewrite implemented (using the old code base as a guide) then a few hours over that to walk through every feature and prompt it to add features it missed or fix broken functionality [1]. I ended up spending more time migrating it from AWS/ECS/RDS to Hetzner w/ automated backups - then the actual rewrite.
[1] https://react-templates.net/docs/vibe-coding/rewrite-legacy-...
It's full of bloat: unused HTTP endpoints, lots of small utility functions that could have been inlined (but now come with unit tests!), missing translations, only somewhat correct design...
The quality wasn't perfect before, now it has taken a noticeable dip. And new code is being added faster than ever. There is no way to keep up.
I feel that I can either just give in and stop caring about quality, or I'll be fixing everyone else's AI code all of my time.
I'm sure that all my particular colleagues are just "holding it wrong", but this IS a real experience that I'm having, and it's been getting worse for a couple of years now.
I am also using AI myself, just in a much more controlled way, and I'm sure there's a sweet spot somewhere between "hand-coding" and vibing.
I just feel that as you inch in on that sweet spot, the advertised gains slowly wash away, and you are left with a tangible, but not as mindblowing improvement.
The Strange Case of "Engineers" Who Use AI
I rely on AI coding tools. I don’t need to think about it to know they’re great. I have instincts which tell me convenience = dopamine = joy.
I tested ChatGPT in 2022, and asked it to write something. It (obviously) got some things wrong; I don’t remember what exactly, but it was definitely wrong. That was three years ago and I've forgotten that lesson. Why wouldn't I? I've been offloading all sorts of meaningful cognitive processes to AI tools since then.
I use Claude Code now. I finished a project last week that would’ve taken me a month. My senior coworker took one look at it and found 3 major flaws. QA gave it a try and discovered bugs, missing features, and one case of catastrophic data loss. I call that “nitpicking.” They say I don’t understand the engineering mindset or the sense of responsibility over what we build. (I told them it produces identical results and they said I'm just admitting I can't tell the difference between skill and scam).
“The code people write is always unfinished,” I always say. Unlike AI code, which is full of boilerplate, adjusted to satisfy the next whim even faster, and generated by the pound.
I never look at Stack Overflow anymore, it's dead. Instead I want the info to be remixed and scrubbed of all its salient details, and have an AI hallucinate the blanks. That way I can say that "I built this" without feeling like a fraud or a faker. The distinction is clear (well, at least in my head).
Will I ever be good enough to code by myself again? No. When a machine showed up that told me flattering lies while sounding like a silicon valley board room after a pile of cocaine, I jumped in without a parachute [rocket emoji].
I also personally started to look down on anyone who didn't do the same, for threatening my sense of competence.
AI is a tool and it should be treated as such.
Also, beware of snake oil salesmen. Is AI going to integrate widely into the world? Yes. Is it also going to destroy all the jobs in the world? Of course not, luddites don't understand the naïvety of this position.
And even if LLMs turn out to really be a net positive and a requirement for the job, they're antithetical to what most software developers appreciate and enjoy (precision, control, predictability, efficiency...).
There sure seem to be two kinds of software developers: those who enjoy the practice and those who're mostly in it for the pay. If LLMs win it will be the second group who'll stay on the job, and that's fine; it won't mean that the first group was made of luddites, but that the job has turned into crap that others will take over.
LLM-assisted coding feels like the next step in that same pattern. The difference is that this abstraction layer can confidently make stuff up: hallucinated APIs, wrong assumptions, edge cases it didn’t consider. So the work doesn’t disappear, it shifts. The valuable skill becomes guiding it: specifying the task clearly, constraining the solution, reviewing diffs, insisting on tests, and catching the “looks right but isn’t” failures. In practice it’s like having a very fast junior dev who never gets tired and also never says “I’m not sure”.
That’s why I don’t buy the extremes on either side. It’s not magic, and it’s not useless. Used carelessly, it absolutely accelerates tech debt and produces bloated code. Used well, it can take a lot of the grunt work off your plate (refactors, migrations, scaffolding tests, boilerplate, docs drafts) and leave you with more time for the parts that actually require engineering judgement.
On the “will it make me dumber” worry: only if you outsource judgement. If you treat it as a typing/lookup/refactor accelerator and keep ownership of architecture, correctness, and debugging, you’re not getting worse—you’re just moving your attention up the stack. And if you really care about maintaining raw coding chops, you can do what we already do in other areas: occasionally turn it off and do reps, the same way people still practice mental math even though Excel exists.
Privacy/ethics are real concerns, but that’s a separate discussion; there are mitigations and alternatives depending on your threat model.
At the end of the day, the job title might stay “software engineer”, but the day-to-day shifts toward “AI guide + reviewer + responsible adult.” And like every other tooling jump, you don’t have to love it, but you probably do have to learn it—because you’ll end up maintaining and reviewing AI-shaped code either way.
Basically, I think the author hit the nail right on the head.
I feel like I'm living in a different world when every time a new model comes out, everyone is in awe, and it scores exceptionally well on some benchmark that no one had heard of before the model even launched. And then when I use it, it feels exactly the same as all the models before, and makes the same stupid mistakes as always.
it does seem like the skepticism is fading. I do think engineers that outright refuse to use AI (typically on some odd moral principle) are in for a bad time