It's like the world has lost its goddamn mind.
AI is always being touted as the tool to replace the other guy's job. But in reality it only appears to do a good job because you don't understand the other guy's job.
Management has an AI shaped hammer and they're hitting everything to see if it's a nail.
Have you considered that instead of resisting, you should do more to figure out why you're not getting the results that a lot of us are talking about? If nothing has changed in your productivity in the past 2 years, the problem is most likely you. Don't you think it's your responsibility as an engineer to figure out what you're doing wrong when there are a lot of people telling you it's a life-changing tool? Or did you just assume that everybody was lying and you were doing everything correctly?
Sorry to say it. It's an unpopular opinion but I think it's pretty much a harsh truth.
I think this pretty much speaks for itself.
IMO the problem occurs when "the results" are hyped-up LinkedIn posts not based in reality. AI is a boon, but it hasn't lived up to the "IDEs are a thing of the past, you're all prompt engineers now" expectations that we hear from executives.
A) all of this money being funneled into tech to build out trillions of dollars' worth of infrastructure, a month-over-month increasing user base buying subscriptions for these LLM services, every company buying seats for LLMs because of the value they provide - these people are wrong
B) yappers on Hacker News who claim they derive no productivity boost from LLMs, while showing absolutely nothing about their workflow or method, when the interface is basically a chat box with no guardrails - these people are wrong
Sorry, I'm going to say it's B and you just suck at it.
Regardless, I’m sure it’s a little of A and a little of B, plus some of C) yappers on Hackernews who think that the majority of the work of software engineering is writing code, and who generally write code in sufficiently simple contexts for the LLMs to produce something equivalent to their normal output.
All the jaw dropping ICOs, million dollar NFTs, and cryptocurrency price surges. Surely that proves its value in our daily lives.
Actually by the numbers AI is already bigger than bitcoin in both adoption and market value, so I'm not sure if you are making the point that you think you're making.
Many of my colleagues that I most admire are benefiting greatly and increasingly from LLM tooling.
I am maybe 10-20% more productive at certain tasks in the long run (which is pretty good!). Nowhere close to the 10x or even 2x boost people are claiming.
If LLMs were really making software developers 10x more productive over the last year, we would be seeing massive shifts in the industry. In theory, either 90% layoffs or 10x product velocity.
I really think we need to figure out how to cut back on management so we can get back to the business of actually doing work
https://www.epi.org/blog/americans-favor-labor-unions-over-b...
Management can and usually does suck, but I can reason with a person, for now. And sadly only the product people actually know what they want, usually right when you've built it the way they used to want it lol.
This--all of this--seems exactly antithetical to computing/development/design/"engineering"/architecture/whatever-the-hell people call this profession as I understood it.
Typically, I labored under the delusion that competent technical decision makers would integrate tooling or choose to use a language, "service", platform, whatever, if they saw benefits and if they could make a "case" for why something was the correct approach, i.e how it met some product's needs, addressed some shortcomings, made things more efficient.
Like "here's my design doc, I chose $THING for caching for $REASON and $DATASTORE as it offers blah blah"
"Please provide feedback and questions"
This is totally alien to that approach.
Ideally, "hey we're going to use CoPilot/other LLM thingy, let us know if it aids your workflow, give us some report in a month and we'll go from there to determine if we want to keep paying for it"
This is a well considered point that not enough of us admit. Yes many jobs are rote or repetitive, but many more jobs, of all flavors, done well have subtleties that will be lost when things are automated. And no I do not think that some "80% done by AI is good enough" because errors propagate through a system (even if that system is a company or society), AND the people evaluating that "good enough" are not necessarily going to be those experienced in that same domain.
I mean, I'm guessing that's true. It'd make a lot of sense if they vehemently disliked that. It's hard to make sense of it all otherwise, really.
A reasonably smart CEO can pretty much understand, in depth, every aspect of their business. But when it comes to tech, which is often the most essential part, they are left grasping, and must rely on the expertise of other people, and thus their destiny is not really in their control, other than by hiring the best they can and throwing money at R&D.
The AI and the hype around it play into their anxieties, and also make them feel like they have control over the situation.
In biotech, the Chief Scientific Officer (CSO) is often given much more authority in startups than the CTO in tech startups, I have noticed.
I honestly really don't understand why this would be the case. Software isn't more complicated than any of the other aspects of the business. I think a "reasonably smart" CEO could just ... learn how it works? if it's really so critical to their business.
It's been a long time since I worked for a CEO who didn't understand software.
But if you are a non-technical CEO and your core business is, say, enterprise SaaS software, you don’t fundamentally understand what the heck is going on, and if you have a key deadline and blow it, don’t really understand why. So if a new VP says they can cut your costs dramatically by offshoring everything to India, etc., or replace half these expensive engineers with AI, it seems as plausible as anything else. Especially given the fawning press and hype, and salesmen pitching you all day.
If you think running the output of an LLM as a serverless function in some cloud is a good way to differentiate your business, build a moat, and make a profit, good luck!
Because if that programmer—if that thing, that CREATURE—walked into your stand-up in human form, typing half-correct garbage into your codebase while ignoring your architecture and disappearing during cleanup, you’d fire them before they could say "no blockers".
The description fits the first engineer on a greenfield project. They are gone before you know it, or at the very least at some point offer a “well I had no choice you see, they really wanted to release something”.
> Throw away what is collapsing, bring up new stuff, rinse and repeat.
Hearing you try to explain yourself makes us even more worried than before.
These tools will get better, and they will eventually allow the best to extend their ability instead of both slowing them down and potentially encouraging bad practices. But it will take time, and increased context length. The world is full of people who don't care about best practice, and if that's all the task requires of them - keep on keeping on.
A lot of people are going to have to come to the realization that has already been mentioned before but many find it hard to grasp.
Your boss, stakeholders, and especially non-technical people literally give 0 fucks about "quality code" as long as it does what they want it to do. They do not care about tests insofar as if it works, it works. Many have no clue about, nor do they care about, whether something just refetches the world in certain scenarios. And AI, whether we like it or not, whether it repeats the same shit and isn't DRY, doesn't follow patterns, reinvents the wheel, etc., is already fairly good at that.
This is exactly why all your stakeholders and executives are pushing you to use it. They've been fed that it just gets shit done and pumps out code like nothing else.
I really think a lot of the reason some people say it doesn't give them as much productivity as they would like is due largely to a desire to write "clean" code based on years and years of our own training, and due to having to be able to pass code review done by your peers. If these obstacles were entirely removed and we went full bandaid off I do think AI even in its current state is fairly capable of replacing plenty of roles. But it does require a competent person to steer to not end up in a complete mess.
If you throw away the guardrails a little and stop obsessing about how nice the code looks, it absolutely will move things along faster than you could before.
And this is where a problem (still) appears - except now the AI-assisted authors have even less comprehension of the system.
Working code is a requirement.
You missed the point. AI slop doesn't just fail on point 1. It fails on point 2.
We have literally stood up entire services, built practically entirely with AI, that are deployed right now and that consumers are using.
AI does work with competent people behind the wheel. People can't keep hiding behind saying that it always churns out code that doesn't work; we are way past those days. If you don't, you will end up losing your job. There's no way around it. The problem is we may end up losing our jobs either way.
What kind of services and how complex are they?
I've been using Cursor for a year and struggle to get the agent to make competent changes in a medium sized code base.
Even something isolated like writing a test for a specific function usually takes multiple rounds of iteration and manual cleanup at the end.
Regarding tests: that is also something I and many of my peers find that LLMs excel at. Given X inputs and Y outputs, an LLM will spit out a whole suite of tests for every case of your functions without issue, except in complicated scenarios. End-to-end tests it may not do quite as well at, since those usually require a lot of externalities/setup, but it can help with generating some of the setup, and given examples it can build from there. Of course this depends on how much you value those tests, since some don't even think tests are that useful nowadays.
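To make that concrete: here's the sort of table-driven suite an LLM will typically produce from a plain "given these inputs, expect these outputs" prompt. The function and the cases are invented for illustration (TypeScript, plain node:assert), not taken from anyone's real codebase.

  // Hypothetical function under test, plus the kind of table-driven suite an LLM
  // will happily generate when asked "write tests: these inputs give these outputs".
  import assert from "node:assert";

  function clampPercent(value: number): number {
    if (Number.isNaN(value)) return 0;          // invalid input defaults to 0
    return Math.min(100, Math.max(0, value));   // clamp into [0, 100]
  }

  const cases: Array<{ input: number; expected: number }> = [
    { input: -5, expected: 0 },     // below range clamps to 0
    { input: 0, expected: 0 },      // lower boundary
    { input: 42, expected: 42 },    // in range passes through
    { input: 100, expected: 100 },  // upper boundary
    { input: 250, expected: 100 },  // above range clamps to 100
    { input: NaN, expected: 0 },    // NaN handled explicitly
  ];

  for (const { input, expected } of cases) {
    assert.strictEqual(clampPercent(input), expected);
  }
  console.log(`${cases.length} cases passed`);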
So do extremely junior devs that are really bad, but you code review EVERYTHING.
(Except jr programmers can learn; AI models can't really, they can only be retrained from scratch by big corporations.)
Not to mention you should still be code reviewing it anyway. In fact with AI you should be reviewing even more than you were before.
Short term. Not long term. The AI will never become a staff developer. Shifting review onto the senior developers is shifting responsibility and workload, which will have the expected outcome: slower development cycles, as you have to consider every footgun. Especially when the AI can't explain the reasoning for esoteric changes. If I ask a Jr, it's likely they have a test (codified or manual) that led them to the decision.
But I also can't relate to people who think they can, today, build fully working software using just AIs, without people who know how software works and are able to understand and debug what is being generated.
Maybe it's true that this will no longer be the case a year from now. I honestly don't know. But at the moment, I think being a skilled practitioner who is also able to effectively use these powerful new tools is actually a pretty sweet spot, despite all the doom and gloom.
I think it's breaking a lot of brains that we have these tools now that are useful but not deterministically useful.
Quality control exists until The Business deems otherwise. The reasons vary: vulnerability, promotion, whatever. Usually not my place to say.
Personally, my 'product' isn't code. Even the 'code' isn't code. For every 8 hours of meetings I do, I crank out maybe 20 lines of YAML (Ansible). Then, another 4 hours of meetings handing that out/explaining the same basics for Political Points.
The problem(s) relating to speed or job security have remarkably little to do with code; generated or not. The people I work with are functionally co-dependent because they don't use LLMs or... manuals.
All this talk about "left behind"... to survive a bear, one doesn't have to be the fastest. Just not the slowest.
A machine doesn't get mad when an app takes forever to start or keeps constantly crashing, but we humans do. Writing "clean" code matters least when it comes to machine-generated code.
This is so far from the truth that I really think anybody who still says this has not actually used it for anything real in at least a couple years.
Yes, I'm not saying it will always generate you the best code, sometimes it may even be bad.
What I am saying is it CAN generate code that is reasonably performant, sometimes even more performant than you would have written it given time constraints, and fulfills requirements (even if sometimes it requires a little bit of manual effort) much faster than we ever could before.
And those reasons are: it all collapses very quickly once the complexity reaches a medium amount.
And if I want to rely on things and debug them - I cannot just have a pile of generated garbage, that works as long as the sun is shining. For isolated tasks it works for me. For anything complex, I am faster on my own.
No, it is not always perfect. Yes you will have to manually edit some of the code it generates. But yes it can and will generate good code if you know how to use it and use sophisticated tools with good guidance. And there are times where it will even write better more performant code than you could given the time requirements.
Otherwise, often not. But I am not worried. Give me an AI tool that can work with my whole codebase reliably and I'll gladly use it.
It’s nice when you need to do something simple in an unfamiliar but simple context, though.
It seems though that a lot of the narrative here from its proponents is that we’re just not trying hard enough to get it to solve our problems. It’s like vimmers who won’t shut up about how it’s worth the weeks of cratered productivity in order to reach editing nirvana (I say this as one of them).
Like with any tool, the learning curve has to be justified by the results, but the calculation is further complicated by the fact that the AI tooling landscape changes completely every 3-6 months. Do I want to spend all that time getting good at it now? No. I’ll probably spend more time learning to use it when it’s either easier to get results that actually feel useful or when it stops changing so often.
Until then I’ll keep firing it up every once in a while to have it write some bash or try to get it to write a unit test.
This is why I mention you need to be competent enough to understand what is being generated or they will find someone else who does. There's no 2 ways around it. AI is here to stay.
We're all competent enough to understand what is generated. That's why everyone is doomer about it.
What insights do you have above us when the LLM generates
true="false"
while i < 10 {
i++
}
What's the deep philosophical understanding that you have about this that makes us all sheeple for not understanding how this is actually the business's goose laying golden eggs, not the engineers'? Frankly, businesses that use this, drop all their actual engineers, and then fall over when the slightest breeze comes.
I am actually in favour, in an accelerationist sense.
The real question is how many companies have to accidentally expose their databases, suffer business-ruining data losses, and have downtime they are utterly unable to recover from quickly before CxOs start adjusting their opinions?
Last time I saw a "Show HN" of someone showing off their vibecoded project, it leaked their OpenAI API key to all users. If that's how you want to run your business, go right ahead.
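For what it's worth, the fix for that particular failure is boring and well understood: the key never leaves the server, and the browser only ever talks to a thin proxy. A minimal sketch of that shape (Node 18+, TypeScript; the endpoint path, port, and env var name are my assumptions, not details from the linked project):

  // Keep the OpenAI key on the server; the browser only sees /api/complete.
  import http from "node:http";

  const apiKey = process.env.OPENAI_API_KEY; // never shipped in client-side bundles

  const server = http.createServer(async (req, res) => {
    if (req.method !== "POST" || req.url !== "/api/complete") {
      res.writeHead(404).end();
      return;
    }
    let body = "";
    for await (const chunk of req) body += chunk; // collect the client's JSON payload

    // Forward to OpenAI with the secret attached server-side only.
    const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body,
    });
    res.writeHead(upstream.status, { "Content-Type": "application/json" });
    res.end(await upstream.text());
  });

  server.listen(3000);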
You write good code because you own it.
If you get ChatGPT or Copilot or Claude or whateverthe****else to write it, you're going to have a whole lot less fun when it's on fire.
The level of irresponsibility that "vibe coding" is introducing to the world is actually worse than the one that had people pouring their savings into a shitcoin. But it's the same arseholes talking it up.
That's why I bring such topics as maintenance and stability very early on into the discussions and ask those stakeholders how much system downtime they can tolerate, so that they can feel the weight of their decision making, and that gives me an opportunity to explain why quality matters.
Then it's up to them to decide how much crap they tolerate.
But the bulk of us aren't doing that... We're making CRUD apps for endless incoming streams of near identical user needs, just with slightly different integrations, schemas, and lipstick.
Let's be honest. For most software there is nothing new under the sun. It's been seen before thousands of times, and so why not recall and use those old nuggets? For me coding agents are just code-reuse on steroids.
PS: Ironically, the article feels AI-generated.
All of that is still difficult to get an LLM to do. This isn't AI generated. It's just good writing. Whether you buy the premise or not.
But you—at your most frazzled, sleep-deprived, raccoon-eyed best—you can try. You can squint at the layers of abstraction and see through them. Peel back the nice ergonomic type-safe, pure, lazy, immutable syntactic sugar and imagine the mess of assembly the compiler pukes up.
Amazing
We are all so simply reproducible. No one’s making anything special, anywhere, for the most part. If we all uploaded a TikTok video of daily coding, it would be the same fucking app over and over, just like the rest of TikTok.
Elon may have been right all along: there’s literally nothing left to do but go to Mars. Some of us were telling many of you that the LLMs don’t hallucinate as much as you think just two years ago, and I think the late-to-the-party crowd needs to hear us again - we humans are not really necessary anymore.
!RemindMe in 2 years
> Now? We’re building a world where that curiosity gets lobotomized at the door. Some poor bastard—born to be great—is going to get told to "review this AI-generated patchset" for eight hours a day, until all that wonder calcifies into apathy. The terminal will become a spreadsheet. The debugger a coffin.
On the other hand, one could argue that AI is just another abstraction. After all, some folks may complain that over-reliance on garbage collectors means that newbies never learn how to properly manage memory. While memory management is useful knowledge for most programmers, it rarely practically comes up for many modern professional tasks. That said, at least knowing about it means you have a deeper level of understanding and mastery of programming. Over time, all those small, rare details add up, and you may become an expert.
I think AI is in a different class because it’s an extremely leaky abstraction.
We use many abstractions every day. A web developer really doesn’t need to know how deeper levels of the stack work — the abstractions are very strong. Sure, you’ll want to know about networking and how browsers work to operate at a very high level, but you can absolutely write very nice, scalable websites and products with more limited knowledge. The key thing is that you know what you’re building on, and you know where to go learn about things if you need to. (Kind of like how a web developer should know the fundamental basics of HTML/CSS/JS before really using a web framework. And that doesn’t take much effort.)
AI is different — you can potentially get away with not knowing the fundamental basics of programming… to a point. You can get away with not knowing where to look for answers and how to learn. After all, AIs would be fucking great at completing basic programming assignments at the college level.
But at some point, the abstraction gets very leaky. Your code will break in unexpected ways. And the core worry for many is that fewer and fewer new developers will be learning the debugging, thinking, and self-learning skills which are honestly CRITICAL to becoming an expert in this field.
You get skills like that by doing things yourself and banging your head against the wall and trying again until it works, and by being exposed to a wide variety of projects and challenges. Honestly, that’s just how learning works — repetition and practice!
But if we’re abstracting away the very act of learning, it is fair to wonder how much that will hurt the long-term skills of many developers.
Of course, I’m not saying AI causes everyone to become clueless. There are still smart, driven people who will pick up core skills along the way. But it seems pretty plausible that the % of people who do that will decrease. You don’t get those skills unless you’re challenged, and with AI, those beginner level “learn how to program” challenges become trivial. Which means people will have to challenge themselves.
And ultimately, the abstraction is just leaky. AI might look like it solves your problems for you to a novice, but once you see through the mirage, you realize that you cannot abstract away your core programming & debugging skills. You actually have to rely on those skills to fix the issues AI creates for you — so you better be learning them along the way!!
Btw, I say this as someone who does use AI coding assistants. I don’t think it’s all bad or all good. But we can’t just wave away the downsides just because it’s useful.
Isn't this just the rehashed argument against interactive terminals in the 60s/70s (no longer need to think very carefully about what you enter into your punch cards!), debuggers (no longer spending time looking carefully at code to find bugs), Intellisense/code completion (no need to remember APIs!) from the late 90s, or stackoverflow (no need to sift to answer questions that others have had before!) from the 00s? I feel like we've been here before and moved on from it (hardly anyone complains about these anymore, no one is suggesting we go back to programming by rewiring the computer), I wonder if this time it will be any different? Kids will just learn new ways of doing things on top of the new abstractions just like they've done for the last 70 years of programming history.
It feels reasonable, consistent to see it as another "old man yells at the skies" scenario, but I do think it's unprecedented for a machine to automate thought itself on an unbounded domain and with such unreliability. We know calculators made people worse at mental math, but at least calculators don't give you off-by-one errors 40–60% of the time with no method of verification.
The reason why we haven't lost literacy to Speakwrites and screen readers is because they required more time and effort than doing it yourself. With AI, the supposed time savings are obvious: you don't put hours into reading the source to write an essay, you just ask ChatGPT; you don't learn programming fundamentals, you just ask for a script that does X, Y, and Z; etc. It feels like a good choice, but you're permanently crippling your education, both in a structured course and in the wild, and the supposed oracle is a slot machine, costing you $avg_tokens*$model_rate a pull. The bad news is slot machines sell.
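If you want numbers on that "pull", the arithmetic is trivial; the rates below are made-up placeholders, not any vendor's actual pricing:

  // Back-of-the-envelope cost per "pull" = average tokens * per-token rate.
  // Both numbers are illustrative assumptions.
  const avgTokensPerRequest = 2_000;
  const dollarsPerMillionTokens = 10;
  const costPerPull = (avgTokensPerRequest / 1_000_000) * dollarsPerMillionTokens;
  console.log(costPerPull.toFixed(4)); // 0.0200 dollars per request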
I, as a user of a library abstraction, get a well defined boundary and interface contract — plus assurance it’s been put through its paces by others. I can be pretty confident it will honor that contract, freeing me up to not have to know the details myself or second guess the author.
I like writing front end code. I'm probably never going to have a job where I need or would even want to write a low level graphics library from scratch. Fine, I'm not red-eyed 3am hacker brained, but I'm passionate and good at what I do. I don't think a world where every person working in software has the author's mentality is realistic or desirable.
Gross. Also: you could have said this about the spreadsheet.
I knew plenty of software developers who hate the job: it's just paid work for many people, and AI doesn't change that.
> 88% of the Excel spreadsheets have errors
https://www.cassotis.com/insights/88-of-the-excel-spreadshee...
How many companies mismanaged their finances because they had an enthusiastic spreadsheet user in charge? From that article, we know a country did.
https://www.newsweek.com/donald-trump-tariffs-chatgpt-205520...
Not true at all but you have to ask it.
This is one of the most beautiful pieces of writing I’ve come across in a while.
But what I heard over the din of whining was "It was hard for me, it should be hard for you". And... that's not how this or anything works. You get labor-saving stuff, you choose if you want to continue to solve hard problems, or if you want the same problems (which suddenly turned easy).
Yes, it's not perfect. Yes, you need to know how you use it, and misusing it causes horrible disfiguring incidents. Guess what, the same was true about C++. And C before it. And that new-fangled assembly stuff, instead of using blinkenlights like a real programmer. And computers instead of slide rules.
Up the complexity ladder we keep going.
I fully agree. This already happened with the explosion of DevOps bullshit, where people with no understanding of Linux got jobs by memorizing abstractions. “Stop gatekeeping,” they say. “Stop blowing up prod, and read docs” I fire back.
I look forward to a data-driven system future where a few functions transform the machine's electromagnetic geometry to solve a task based upon the most efficient energy model for solving a task, as we continue to compress from the model all the non-essential syntax sugar of modern software.
I make no claims that I fully understand anything, but I do have a decent understanding of how a CPU works from the level of doped silicon and up. Crucially, I read every doc I could find at every one of those jobs. You can learn enough to do the job, or you can learn more. That is a choice that everyone makes.
More generally, I’ve been playing with Linux and computers in general for over 20 years, and when I finally got a job in tech about five years ago, I was stunned at how little people knew about how computers work. I don’t expect (nor do I think it’s helpful) anyone to know how a bus arbitration cycle works, but I assumed that things like IOPS and throughput would be generally understood.
My expertise is only in sleeping until 11am on weekends, but I too started in "tech" after being a lifelong hobbyist and have been continually shocked at how concepts like "pass by reference" are alien to a seemingly large portion of the people that I've worked with.
People often fail to know things that are basically "table stakes" in the domains they ostensibly work in, to say nothing of even being aware of something like L1 cache or how code they write could interact with it.
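For anyone who hasn't hit "pass by reference" outside a textbook, the distinction the parent means fits in a few lines (TypeScript semantics, invented names):

  // Primitives are copied into the callee; object arguments share a reference,
  // so mutation inside the callee is visible to the caller.
  function bumpNumber(n: number): void {
    n += 1; // local copy only; the caller's variable is untouched
  }

  function bumpCounter(counter: { count: number }): void {
    counter.count += 1; // mutates the caller's object through the shared reference
  }

  let x = 0;
  bumpNumber(x);
  console.log(x); // 0

  const c = { count: 0 };
  bumpCounter(c);
  console.log(c.count); // 1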
People who knew better: wild laughter.
The fact that somebody can "be DevOps" or work as a "DevOps Engineer" is exemplary of the fact that DevOps as conceived and DevOps as practiced are two very different things. The former would be engineers taking ownership of deployment, collaborating horizontally, and practicing tight feedback loops. DevOps as practiced is the time-honored tradition of a dev team and a cloud team playing tennis with a grenade that is a questionably-stable SaaS that people volley back and forth with rackets like "let's roll the pods" or "it worked on my machine".
> people with no understanding of Linux got jobs
This happens in every industry with every job title. I've worked with Senior+ developers that mutated React props, didn't know how to use Git, couldn't read Java stack traces, etc. I myself have been paid money to do a myriad of things that I have no business doing (like singing or playing guitar or mixing cocktails). It's the way of the world.
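For anyone wondering what "mutating React props" means in practice, here's the anti-pattern stripped down to plain functions (invented names, not real React components; in React, props are read-only inputs and writing to them breaks the one-way data flow):

  type BadgeProps = { label: string; count: number };

  // Anti-pattern: silently rewrites the caller's data through the props object.
  function badBadge(props: BadgeProps): string {
    props.count += 1; // mutating a prop: the owner of this object never agreed to this
    return `${props.label} (${props.count})`;
  }

  // Better: treat props as read-only and derive any new value locally.
  function goodBadge({ label, count }: BadgeProps): string {
    const displayCount = count + 1;
    return `${label} (${displayCount})`;
  }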
Otherwise everyone's just talking past each other.
Just try implementing a feature with a junior, in a mildly complex codebase and you'd catch all the unconscious tradeoffs that you're making as an experienced developer. AI has some concept of what these tradeoffs are, but that's mostly by observation.
AI _does_ help with writing code. Keyword there being - "help".
But thinking is the human's job. LLMs can't/don't "think". Thinking how to get the AI to produce the output you want is also your job. You'd think less and less if models get better.
"We'll enshrine this current bloated, sluggish, over-abstracted hellscape as the pinnacle of software—and the idea of squeezing every last drop of performance out of a system, or building something lean and wild and precise, will sound like folklore."
This somewhat lines up with my concerns about libraries and patterns before 2023 getting frozen in stone once we pass over the event horizon where most new code to train on is generated by LLMs. We aren't innovating, we are going to forever reinforce the screwed up dependency stack and terrible kludges of the last 30 years of development. Javascript is going to live forever.
Someone tell me I'm not alone.
Copilot’s fine for boilerplate. But lean on it too much, and you stop thinking. Stop thinking long enough, and you stop growing. That’s the real cost.
It's the same when you get a junior dev to work on things: it's just not how you would do it yourself, and it's frequently wrong or naive. Sometimes it is brilliant and better than you would have done yourself.
That doesn't mean don't have junior devs, but having one doesn't mean you don't have to do corrective stuff and refinements to their work.
Most of us aren't changing the world with our code, we're contributing an incredibly small niche part of how it works. People (normal people, lol) only care what your system does for them, not how it works or how great the code is.
But it completely ignores the fact that AI generated code is getting better on a ~weekly basis. The author acknowledges that it is useful in some contexts for some uses, but doesn't acknowledge that the utility is constantly growing. We certainly could plateau sometime soon leaving us in the reckless intern zone, but I wouldn't bet on it.
Is it, in 2025, actually better than a real human at its designated task? Pretty universally no.
So I won't be surprised when the "last 10%" of software AI takes 30 years to close the gap that 20 years of "imminent self-driving" has still yet to close.
We should all understand, i would think, that the last 10% is the hard part.