It’s not simpler. It’s faster and cheaper and more consistent in quality. But way more complex.
If you are talking about code, which isn't what I said, then we aren't there yet.
Making clay pottery can be simple. But making “fine china” with increasingly sophisticated ornamentation and strength became more complex over time. Now you can go to IKEA and buy plates that would have been considered expensive luxuries hundreds of years ago.
And I think these people are benefiting from it the most: people with expertise, who know their way around, who knew what and how to build but did not want to do the grunt work.
Right now, "what code shouldn't be written" seems to have become an even more important question, as it's so easy to spit out huge amounts of code. But that's the lazy and easy way, not the one that lets you slowly add features across a decade rather than getting stuck with a ball of spaghetti after a weekend of "agent go brrr".
Unless you understand every inch of the system and can foresee what issues a given kind of change can create, things will break when using AI.
There are people who just want an object produced that allows for some outcome to be achieved closer to the present.
And there are other people who want to ensure that the object that is produced will be maintainable, won't break other parts of the system, etc.
Neither party is wrong in what they want. I think there should naturally be a split of roles - the former can prototype stuff so other individuals in the organisation can critique whether it is a thing of value / worth investing in for production.
I believe if we had something like this, we could go to market early and understand user behaviour, then build a more scalable and robust system once we were sure it was even worthwhile.
The reality is humans are really bad at knowing what is worth investing in... until the object is there for all to see and critique.
Every idea sounds great until you spend resources getting into the subtleties and nuances.
I think the people benefiting the most are the ones who said syntax was the least interesting part but could not program for shit.
Typing into a machine is not the least interesting part. It is the only interesting part. Everything else is a fairy tale.
I can't believe this has to be said, but yeah. Code took time, but it was never the hard part.
I also think that it is radically understated how much developers contribute to UX and product decisions. We are constantly having to ask "Would users really do that?" because it directly impacts how we design. Product people obviously do this more, but engineers do it as a natural part of their process as well. I can't believe how many people do not seem to know this.
Further, in my experience, even the latest models are terrible "experts". Expertise is niche, and niche simply is not represented in a model that has to pack massive amounts of data into a tiny, lossy format. I routinely find that models fail when given novel constraints, for example, and the constraints aren't even that novel - I was writing some lower level code where I needed to ensure things like "a lock is not taken" and "an allocation doesn't occur" because of reentrancy safety, and it ended up being the case that I was better off writing it myself because the model kept drifting over time. I had to move that code to a separate file and basically tell the model "Don't fucking touch that file" because it would often put something in there that wasn't safe. This is with aggressively tuning skills and using modern "make the AI behave" techniques. The model was Opus 4.5, I believe.
This isn't the only situation. I recently had a model evaluate the security of a system that I knew to be unsafe. To its credit, Opus 4.6 did much better than previous models I had tried, but it still utterly failed to identify the severity of the issues involved or the proper solutions and as soon as I barely pushed back on it ("I've heard that systems like this can be safe", essentially) it folded completely and told me to ship the completely unsafe version.
None of this should be surprising! AI is trained on massive amounts of data, it has to lossily encode all of this into a tiny space. Much of the expertise I've acquired is niche, borne of experience, undocumented, etc. It is unsurprising that a "repeat what I've seen before" machine can not state things it has not seen. It would be surprising if that were not the case.
I suppose engineers maybe have not managed to convey this historically? Again, I'm baffled that people don't seem to know how much time engineers spend on problems where the code is irrelevant. AI is an incredible accelerator for a number of things but it is hardly "doing my job".
AI has mostly helped me ship trivial features that I'd normally have to backburner for the more important work. It has helped me in some security work by helping to write small html/js payloads to demonstrate attacks, but in every single case where I was performing attacks I was the one coming up with the attack path - the AI was useless there. edit: Actually, it wasn't useless, it just found bugs that I didn't really care about because they were sort of trivial. Finding XSS is awesome, I'm glad it would find really simple stuff like that, but I was going for "this feature is flawed" or "this boundary is flawed" and the model utterly failed there.
There are cases where a unit test or a hundred aren’t sufficient to demonstrate a piece of code is correct. Most software developers don’t seem to know what is sufficient. Those heavily using vibe coding even get the machine to write their tests.
Then you get to systems design. What global safety and temporal invariants are necessary to ensure the design is correct? Most developers can’t do more than draw boxes and arrows and cite maxims and “best practices” in their reasoning.
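To make that concrete, here's a minimal sketch (my example, not the commenter's) of stating a global safety invariant for a toy lock protocol and exhaustively checking every reachable state, which is a different activity from drawing boxes and arrows. All the names are hypothetical.

// A toy mutual-exclusion protocol. "State", "step", and "check" are
// made-up names; the point is the invariant is global and checkable.
type State = { lockHolders: Set<string> };

function step(s: State, actor: string, action: "acquire" | "release"): State {
  const holders = new Set(s.lockHolders);
  if (action === "acquire" && holders.size === 0) holders.add(actor);
  if (action === "release") holders.delete(actor);
  return { lockHolders: holders };
}

// Global safety invariant: at most one lock holder in any reachable state.
function invariant(s: State): boolean {
  return s.lockHolders.size <= 1;
}

// Exhaustively explore the reachable state space. A hundred unit tests
// sample this space; enumeration covers it.
function check(): void {
  const actors = ["a", "b"];
  const actions: Array<"acquire" | "release"> = ["acquire", "release"];
  const seen = new Set<string>();
  const queue: State[] = [{ lockHolders: new Set() }];
  while (queue.length > 0) {
    const s = queue.pop()!;
    const key = [...s.lockHolders].sort().join(",");
    if (seen.has(key)) continue;
    seen.add(key);
    if (!invariant(s)) throw new Error(`safety violated in state {${key}}`);
    for (const a of actors) for (const act of actions) queue.push(step(s, a, act));
  }
  console.log(`ok: ${seen.size} reachable states satisfy the invariant`);
}

check();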
Plus you have the Sussman effect: software is often more like a natural science than engineering. There are so many dependencies and layers involved that you spend more time making observations about behaviour than designing for correct behaviours.
There could be useful cases for using GenAI as a tool in some process for creating software systems… but I don’t think we should be taking off our thinking caps and letting these tools drive the entire process. They can’t tell you what to specify or what correct means.
Snobby programmers would never even return an email offering money for their services.
> Snobby programmers would never even return an email offering money for their services.
Why would they? I don't respond to the vast majority of emails, and I'm already employed.
As a software engineer, I'd love if the industry had an actual breakthrough, if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity.
But not if the only reward for this would be to be laid off.
So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?
I don't really believe this is possible. Or, it's the sort of thing that gets solved at a "product" level. Reality is complicated. People are complicated. The idea that software can exist without complexity feels sort of absurd to me.
That said, to your larger point, I think the goal is basically to eliminate the middle class.
Capitalism is reliant on the underclass (the homeless, those below minimum wage) to add pressure to the broader class of workers in a way that makes them take jobs that they ordinarily wouldn't (because they may be e.g. physically/emotionally unsafe, unethical, demeaning), for less money than they deserve and for more hours than they should. This is done to ensure that the price of work for companies is low, and that they can always draw upon a needy, desperate workforce if required. You either comply with company requirements, or you get fired and hope you have enough runway not to starve. This was written about over a hundred years ago and it's especially true in its modern form today. Programmers as a field have just been materially insulated from the modern realities of "your job is timing your bathroom breaks, tracking how many hours you spend looking at the internet, your boss verbally abuses you for being slow, and you aren't making enough money to eat properly".
This is also why many places do de-facto 'cleansings' of homeless people by exterminating their shelter or removing their ability to survive off donations, and why the support that is given for people without the means to survive is not only tedious but almost impossible to get. The majority of workers are supposed to look at that and go "well fuck, glad that's not me!" with a little part of their brain going "if i lost my job and things went badly, that could become me."
This is also why immigration enforcement is a thing — so many modern jobs that nobody else in the western world wants to do are taken by immigrants. The employer won't look too closely at the visa, and in return the person gets work. With the benefit being towards the employer — if the person refuses to do something dangerous to themselves or others, or refuses to produce enough output to sustain the exponential growth at great personal cost, well, then the company can just cut the immigrant loose with no recourse, or outright call the authorities on them so they get deported. Significantly less risky to get people to work in intolerable conditions for illegal wages if there is no hope of them suing you for this.
Back in the 1900s there were international conventions to remove passports. Now? Well, they're a convenient underclass for political manoeuvring. Why would you want people to have freedom of movement, when your own citizens could just leave when things get bad, and when the benefit is a free workforce you don't have to obey workers' rights laws for?
It's like the Bill Joy point about mediocre technology taken to the next level.
I believe vibe coding has always existed. I've known people at every company who add copious null checks rather than understanding things and fixing them properly. All we see now is copious null checks at scale. On the other hand, I've also seen excellent engineering amplified and features built by experts in days which would have taken weeks.
I convinced them that if they wanted to treat uncovered lines of code as tech debt, they needed to add an epic of stories to their backlog to write tests. And artificially setting some high coverage target would produce garbage, because developers will write do-nothing tests in order to get their work done without tripping the alarms. I argued that failing builds on code coverage would be unfair, because the tech debt created by past developers would hinder random current-day devs getting their work done.
Instead, I recommended they pick their current coverage percentage (it was < 10% at the time) and set the threshold to that simply to prevent backsliding as new code was added. Then, as their backlogged, legit tests were implemented, ratchet up the coverage threshold to the new high water mark. This meant all new code would get tests written for them.
And, instead of failing builds, I recommended email blasts to the whole team to indicate there had been some recent backsliding in the testing regime and the codebase had grown without accompanying tests. It was not a huge shame event, but a good motivator for the team to keep up the quality. SonarQube was great for long-term tracking of coverage stats.
Finally, I argued the coverage tool needed to have very liberal "ignore" rules that were agreed to by all members of the team (including managers). Anything that did not represent testable logic written by the team (generated code, configuration, the tests themselves) should not count against the coverage percentage.
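For illustration, a minimal sketch of that ratchet as a CI step, assuming an Istanbul/nyc-style coverage-summary.json; the file names and threshold storage are my assumptions, not what that team actually used:

// ratchet.ts: compare current coverage to the stored high-water mark.
import { readFileSync, writeFileSync } from "node:fs";

const summary = JSON.parse(readFileSync("coverage/coverage-summary.json", "utf8"));
const current: number = summary.total.lines.pct;

const markFile = "coverage-high-water-mark.json"; // checked into the repo
const mark: number = JSON.parse(readFileSync(markFile, "utf8")).pct;

if (current < mark) {
  // Backsliding: new code landed without tests. Per the policy above,
  // report it (e.g. via an email blast) rather than hard-failing the build.
  console.error(`Coverage ${current}% slipped below the high-water mark of ${mark}%`);
} else if (current > mark) {
  // Backlogged tests landed: ratchet the mark up so we never slide back.
  writeFileSync(markFile, JSON.stringify({ pct: current }));
  console.log(`Ratcheting high-water mark: ${mark}% -> ${current}%`);
}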
So there are tests that leverage mocks. Those mocks help validate that the software is performing as desired by letting tests observe how it behaves in varying contexts.
If the software fails, it is because the mocks exposed that under certain inputs undesired behavior occurs: an assert fails, and a red line flags the test output.
Validating that the mocks return the desired output... maybe there is a desire that the mocks return a stream of random numbers, and the mock-validation test asserts that said stream adheres to a particular distribution?
Maybe someone in the past pushed a bad mock, that mock let a test pass that would have failed given a better mock, and a post mortem, after the bad software pushed to prod was traced back to the bad mock, derived a requirement that all mocks must be validated?
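For contrast, a minimal sketch (hypothetical names throughout) of a mock doing its legitimate job: exposing behavior under a context the real dependency rarely produces, rather than being validated for its own sake.

import assert from "node:assert";

// The unit under test: retries once, then gives up. (Made-up example.)
type Fetcher = () => Promise<string>;

async function fetchWithRetry(fetch: Fetcher): Promise<string> {
  try {
    return await fetch();
  } catch {
    return await fetch(); // one retry, then any error propagates
  }
}

// A mock that fails on the first call and succeeds on the second:
// a context that is hard to arrange with the real dependency.
let calls = 0;
const flakyMock: Fetcher = async () => {
  calls += 1;
  if (calls === 1) throw new Error("transient failure");
  return "ok";
};

fetchWithRetry(flakyMock).then((result) => {
  assert.strictEqual(result, "ok"); // the retry path actually works
  assert.strictEqual(calls, 2);     // and was actually exercised
});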
# abstract internals for no reason
def do_thing(x: bool) -> bool:
    if x:
        return True
    else:
        return False

# make sure our logic works as expected
assert do_thing(True)
# ???
# profit
It's excellent software engineering because there are tests.

I'm saying you could make the same argument about useful tests themselves. What is testing that the tests are correct?
Uncle Bob would say the production code is testing the tests, but only in the limited, one-time acceptance case where the programmer watches the test fail, implements the code, and then watches it pass (the ideal test-driven development scenario).
But what we do all boils down to acceptance. A human user or stakeholder continuing to accept the code as correct equals a job well done.
Of course, this is itself a flawed check, because humans are flawed, miss things, and don't know what they want anyhow. The Agile Manifesto and Extreme Programming were all about organizing to make course corrections as cheap as possible to accommodate fickle humanity.
> Like, what are we even doing here?
What ARE we doing? A slapdash job on the whole. And AI is just making slapdash more acceptable and accepted, because it is so clever and the boards of directors are busy running this latest craze into the dirt. "Baffle 'em with bullsh*t" works in every sector of life and lets people get away with all manner of sins.
I think what we SHOULD be doing is plying our craft. We should be using AI as a thinking tool, and not treat it like a replacement for ourselves and our thinking.
So much so that many people who were doing good engineering before have opted to move to doing three times as much bad engineering instead of doing 10% more good engineering.
Good engineering requires that you still pay attention to the result produced by the agent(s).
Bad engineering might skip over that part.
Therefore, via Amdahl's law, LLM-based agents overall provide more acceleration to bad engineering than they do to good engineering.
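To put rough, made-up numbers on that:

// Amdahl's law: overall speedup when only the "writing code" fraction
// gets faster. The fractions and the 10x figure are illustrative, not measured.
const overallSpeedup = (unacceleratedFraction: number, codingSpeedup: number): number =>
  1 / (unacceleratedFraction + (1 - unacceleratedFraction) / codingSpeedup);

// Good engineering: ~60% of time in review/design the agent doesn't speed up.
console.log(overallSpeedup(0.6, 10).toFixed(2)); // "1.56", a modest gain
// Bad engineering: skip review, only ~10% unaccelerated.
console.log(overallSpeedup(0.1, 10).toFixed(2)); // "5.26", a much bigger gain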
But once the software becomes bigger and more complex, the LLM starts messing up, and the expert has to come in. That basically means your months-long project cannot be done in a week.
My personal prediction: plugins and systems that support plugins will become important. Because a plugin can be written at 10x speed. The system itself, not so much.
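A minimal sketch of why that might hold (the Plugin/Host interface is mine, purely illustrative): a narrow plugin contract bounds the blast radius of quickly generated code, while the host keeps control of the invariants.

// The host defines a small, stable contract; plugins only see this.
interface Plugin {
  name: string;
  // A pure transform: no access to the host's internal state.
  transform(input: string): string;
}

class Host {
  private plugins: Plugin[] = [];
  register(p: Plugin): void { this.plugins.push(p); }
  run(input: string): string {
    // The host owns ordering and error handling, so a 10x-speed,
    // LLM-written plugin has a bounded blast radius.
    return this.plugins.reduce((acc, p) => {
      try { return p.transform(acc); } catch { return acc; }
    }, input);
  }
}

// An LLM could crank these out quickly without touching the host.
const shout: Plugin = { name: "shout", transform: (s) => s.toUpperCase() };
const host = new Host();
host.register(shout);
console.log(host.run("hello")); // "HELLO"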
It's just a small CLI app in 3 TypeScript files.
Y'know "defensive programming" is a thing, yeah? Sorry mate, but that statement I'd expect from juniors, who are also often the ones claiming technical superiority over others.
I've seen legacy code bases during code review where someone will ask "should we have a null check there?" and often no-one knows the answer. The solution is to use nullability annotations IMO.
It's easy to just say "oh this is just something a junior would say", but come on, have an actual discussion about it rather than implying anyone who has that opinion is inexperienced.
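On the nullability-annotation point, a minimal TypeScript sketch (my example): with strictNullChecks on, "should we have a null check there?" becomes a question the compiler answers.

// The type says whether null is possible, so review never has to guess.
function greetKnown(name: string): string {
  return `Hello, ${name.toUpperCase()}`; // no check needed: null can't arrive here
}

function greetMaybe(name: string | null): string {
  // Without this guard the compiler rejects the line below:
  // "Object is possibly 'null'".
  if (name === null) return "Hello, stranger";
  return `Hello, ${name.toUpperCase()}`;
}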
If you gave an experienced house framer a hammer, hand saw, and box of nails, and a random person off the street a nail gun and powered saw, who is going to produce the better house?
A confident AI and an unskilled human are just a Dunning-Kruger multiplier.
There's this mistake engineers make when using LLMs and loudly proclaiming it's coming for their jobs / is magic... you have a lot of implicit knowledge, experience, and skill that allows you to get the LLM to produce what you want.
Without it... you produce crappy stuff that is inevitably going to get mangled and crushed. As we are seeing with vibe-coded projects created by people with no exposure to proper software engineering.
And I keep seeing products and projects banning AI: "My new house fell down because of the nail gun used, therefore I'm banning nail guns going forward." I understand and sympathize with maintainers and owners and the pressure they are under, but the limitation is going to look ridiculous as we see more progress with the tools.
There are whole categories of problems that we're creating that we have no solutions for at present: it isn't a crisis, it's an opportunity.
I feel I become more like a Product Engineer than a Software Engineer when I'm constantly reviewing AI code that satisfies my needs.
And the benefits provided by AI are too good to pass up. It lets you prototype nearly anything in a short time, which is superb. Like any tool, in the right hands it can make all the difference.
1. Programmers viewing programming through a career and job security lens
2. Programmers who love the experience of writing code themselves
3. People who love making stuff
4. People who don't understand AI very well and have knee-jerk cultural / mob reactions against it because that's what's "in" right now in certain circles
It is fun to read old issues of Popular Mechanics on archive.org from 100+ years ago because you can see a lot of the same personality types playing out.
At the end of the day, AI is not going anywhere, just like cars, electricity and airplanes never went anywhere. It will obviously be a huge part of how people interact with code and a number of other things going forward.
20-30 years from now the majority of the conversations happening this year will seem very quaint! (and a minority, primarily from the "people who love making stuff" quadrant, will seem ahead of their time)
Sure, 'writing code' is often not the difficult part, but when you have time constraints, 'writing code' becomes a limiting factor. And none of us has infinite time on our hands.
So AI not only enables things you just could not afford to do in the past, it also lets you spend more time on 'engineering', or even try multiple approaches, which would have been impossible before.
Give a woodcutter a chainsaw instead of an axe and he'll fell ten times more trees. He'll also likely cause more than ten times the collateral damage.
I think the author's post is far more nuanced than this one sentence that you apparently agree with fundamentally.
Feel like only people like this guy, with 4 decades of experience, understand the importance of this.
It only means job security for people with actual experience.
E.g. quite often a sound (e.g. a piece of music) brings back memories of the time when you listened to it.
Our brains need something to 'prompt' them (ironic, I know) for stuff to come to the front. But the human is (or should be) the final judge of what is wrong, what is good quality, and what is high quality. A taste element is necessary here too.
Maybe if they had "prompted the agent correctly", you'd get your infrastructure to at least five 9s.
If we continue down this path, not only will so-called "engineers" be unable to read or write code at all, but their agents will introduce seemingly correct code and cause outages like the ones we have already seen, such as this one [0].
AI has turned "senior engineers" into juniors, and juniors back into "interns" who cannot tell what maintainable code is, and who waste time, money, and tokens reinventing a worse wheel.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
I was hopeful that the title was written like LLM-output ironically, and dismayed to find the whole blog post is annoying LLM output.
Technology was never an equaliser. It just divides more, and yes, ultimately some developers will get paid a lot more because their skills will be in more demand, while other developers will be forced to seek other opportunities.
You can't satisfy every single paranoia, eventually you have to deem a risk acceptable and ship it. Which experiments you do run depends on what can be done in what limited time you have. Now that I can bootstrap a for-this-feature test harness in a day instead of a week, I'm catching much subtler bugs.
It's still on you to be a good engineer, and if you're careful, AI really helps with that.
Problem is... discipline is hard for humans. Especially when exposed to a thing that at face value seems really good and correct.
I think we're all in denial about how bad software engineering has gotten. When I look at what's required to publish a web page today vs in 1996, I'm appalled. When someone asks me how to get started, all I can do is look at them and say "I'm so sorry".
So "coding was always the hard part". All AI does is obfuscate how the sausage gets made. I don't see it fixing the underlying fallacies that turned academic computer science into for-profit software engineering.
Although I still (barely) hold onto hope that some of us may win the internet lottery someday and start fixing the fundamentals. Maybe get back to what we used to have with apps like HyperCard, FileMaker and Microsoft Access but for a modern world where we need more than rolodexes. Back to paradigms where computers work for users instead of the other way around.
Until then, at least we have AI to put lipstick on a pig.
The Visual Basic comparison is more salient. I've seen multiple rounds of "the end of programmers", including RAD tools, offshoring, various bubble-bursts, and now AI. Just because we've heard it before though, doesn't mean it's not true now. AI really is quite a transformative technology. But I do agree these tools have resulted in us having more software, and thus more software problems to manage.
The Alignment/Drift points are also interesting, but I think they appeal to SWEs' belief that taste/discernment stopped this from happening in pre-AI times.
I buy into the meta-point which is that the engineering role has shifted. Opening the floodgates on code will just reveal bottlenecks elsewhere (especially as AI's ability in coding is three steps ahead and accelerating). Rebuilding that delivery pipeline is the engineering challenge.
AI is an amplifier of existing behavior.
Apropos. I’m stealing that line.
So combine both facts here in context, with human nature, and you'll see where this will go.