
AI didn't simplify software engineering: It just made bad engineering easier

https://robenglander.com/writing/ai-did-not-simplify/
98•birdculture•2h ago

Comments

sshine•2h ago
It also made good engineering easier.

AI is an amplifier of existing behavior.

tinmandespot•2h ago
“AI is an amplifier of existing behavior”

Apropos. I’m stealing that line.

qsera•2h ago
The Internet enabled instant access to all human knowledge as well as instant chit-chat across the globe. Guess what humanity chose to drown itself in?

So combine both facts here in context, with human nature, and you'll see where this will go.

genghisjahn•1h ago
I love that I don’t have to trawl through SO looking for a problem that’s kind of like the one I’m having. I get a solution based on my exact code, taking my entire code base into account. And if I want a biting, sarcastic review of my many faults as a software developer, I can ask for that too.
staticassertion•1h ago
I think it's easier for good engineers to be good, perhaps. For example, I think property testing is "good" engineering. But I suspect that if you took a survey of developers the vast majority would not actually know what property testing is. So is AI really of any use then for those devs?
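For readers who don't know the term, property testing can be sketched in a few lines. This is a hand-rolled illustration (real projects would use a library like Hypothesis; `my_sort` is a stand-in for whatever code is under test):

```python
import random
from collections import Counter

def my_sort(xs):
    # stand-in for the implementation under test
    return sorted(xs)

def test_sort_properties(trials=200):
    # Property testing: generate many random inputs and assert
    # invariants that must hold for *any* input, rather than
    # checking a handful of hand-picked examples.
    for _ in range(trials):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        out = my_sort(xs)
        assert Counter(out) == Counter(xs)                 # same elements (a permutation)
        assert all(a <= b for a, b in zip(out, out[1:]))   # nondecreasing order

test_sort_properties()
```

The point is that the properties (permutation, ordering) encode what "sorted" means, independent of any particular example.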
keeda•1h ago
Yep, and recent reports from the likes of DORA and DX validate this with data: https://cloud.google.com/blog/products/ai-machine-learning/a...
maplethorpe•1h ago
I think it depends on your process. Problems that require creative solutions are often solved through the act of doing. For me, it's the act of writing code itself that ignites the pathways and neural connections that I've built up in my brain over the years. Using AI circumvents that process, and those brain circuits go unused.
sunir•2h ago
Simplify? It’s like saying a factory made chair-building… what?

It’s not simpler. It’s faster and cheaper and more consistent in quality. But way more complex.

yakattak•2h ago
Anecdotally I have not seen consistency in quality at all.
sunir•2h ago
My chairs resemble each other. Have you tried ikea?

If you are talking about code which isn’t what I said, then we aren’t there yet.

groundzeros2015•2h ago
It’s a bad analogy because the benefits of industrial machines were predictable processes done efficiently.
sunir•1h ago
That came later, not at the beginning. Workhouses came before the loom. You can see this in the progression of quality of things like dinner plates over time.

Making clay pottery can be simple. But to make “fine china” with increasingly sophisticated ornamentation and strength became more complex over time. Now you can go to ikea and buy plates that would be considered expensive luxuries hundreds of years ago.

j3k3•1h ago
Yeah... nah. As others have said, your analogy does not hold up to scrutiny.
groundzeros2015•1h ago
You’re not addressing any points I made.
Sharlin•2h ago
Compilers made programming faster, cheaper, and more consistent in quality. They are the proper analogy of machine tools and automation in physical industries. Reusable code libraries also made programming faster, cheaper, and more consistent in quality. They are the proper analogy of prefabricated, modular components in physical industries.
j3k3•1h ago
Consistent in quality… what?
groundzeros2015•2h ago
I’m an AI skeptic, and in no sense is it taking my peers’ jobs. But it does save me time: it researches much better than Google, explores a code base, spits out helper functions, and reviews for obvious mistakes.
coffeefirst•1h ago
Yep. And the more time I spend with the agents the more I’m convinced that your way is the endgame.
a_void_sky•2h ago
"Coding was never the hard part. Typing syntax into a machine has always been the least interesting part of building a system."

And I think these people are benefiting from it the most: people with expertise, who know their way around and knew what and how to build but did not want to do the grunt work.

embedding-shape•1h ago
Slight adjustment, but I'd say "maintaining code" is the same as before; what matters more is the people and their experience and knowledge of how to manage it. But I agree that the literal typing was never the difficult part. Knowing what code should and shouldn't be written was a hard part, and still remains one.

Right now, "what code shouldn't be written" seems to have become an even more important part, as it's so easy to spit out huge amounts of code. But that's the lazy and easy way, not the one that lets you slowly add features across a decade rather than getting stuck with a ball of spaghetti after a weekend of "agent go brrr".

a_void_sky•1h ago
This is where years of experience working with freshers and junior devs helps. AI is smart enough to do exactly what you want, if you clearly tell it what to do and how.

But unless you understand every inch of the system and can foresee what issues can be created by what kind of change, things will break when using AI.

j3k3•1h ago
I think that captures a lot of the LLM debate.

There are people who just want an object produced that allows for some outcome to be achieved closer to the present.

And there are other people who want to ensure that object that is produced, will be maintainable, not break other parts of the system etc.

Neither party is wrong in what they want. I think there should naturally be a split of roles - the former can prototype stuff so other individuals in the organisation can critique whether it is a thing of value / worth investing in for production.

a_void_sky•1h ago
My team and I have wasted so many hours (days) working on product features that were "definitely going viral", only to be forgotten after a few weeks.

I believe if we had something like this we could have gone to market early and understood user behaviour, then built a more scalable and robust system once we were sure it was even worth it.

j3k3•1h ago
Yeah, that's a good example.

The reality is humans are really bad at knowing what is worth investing in… until the object is there for all to see and critique.

Every idea sounds great until you spend resources getting into the subtleties and nuances.

heliumtera•1h ago
Hm. People who know what they are doing prefer to do it themselves. This is not a new thing, not because of LLMs; it has always been this way. People who know more have stronger opinions. Given the option of accepting new code and new functionality, people who know what they are doing will often reject code that functions fine but fails their expectations.

I think those benefiting the most are people who said syntax was the least interesting part but could not program for shit.

Typing into a machine is not the least interesting part. It is the only interesting part. Everything else is a fairy tale.

staticassertion•2h ago
> Code Was Never the Hard Part

I can't believe this has to be said, but yeah. Code took time, but it was never the hard part.

I also think that it is radically understated how much developers contribute to UX and product decisions. We are constantly having to ask "Would users really do that?" because it directly impacts how we design. Product people obviously do this more, but engineers do it as a natural part of their process as well. I can't believe how many people do not seem to know this.

Further, in my experience, even the latest models are terrible "experts". Expertise is niche, and niche simply is not represented in a model that has to pack massive amounts of data into a tiny, lossy format. I routinely find that models fail when given novel constraints, for example, and the constraints aren't even that novel - I was writing some lower level code where I needed to ensure things like "a lock is not taken" and "an allocation doesn't occur" because of reentrancy safety, and it ended up being the case that I was better off writing it myself because the model kept drifting over time. I had to move that code to a separate file and basically tell the model "Don't fucking touch that file" because it would often put something in there that wasn't safe. This is with aggressively tuning skills and using modern "make the AI behave" techniques. The model was Opus 4.5, I believe.

This isn't the only situation. I recently had a model evaluate the security of a system that I knew to be unsafe. To its credit, Opus 4.6 did much better than previous models I had tried, but it still utterly failed to identify the severity of the issues involved or the proper solutions and as soon as I barely pushed back on it ("I've heard that systems like this can be safe", essentially) it folded completely and told me to ship the completely unsafe version.

None of this should be surprising! AI is trained on massive amounts of data, it has to lossily encode all of this into a tiny space. Much of the expertise I've acquired is niche, borne of experience, undocumented, etc. It is unsurprising that a "repeat what I've seen before" machine can not state things it has not seen. It would be surprising if that were not the case.

I suppose engineers maybe have not managed to convey this historically? Again, I'm baffled that people don't seem to know how much time engineers spend on problems where the code is irrelevant. AI is an incredible accelerator for a number of things, but it is hardly "doing my job".

AI has mostly helped me ship trivial features that I'd normally have to backburner for the more important work. It has helped me in some security work by helping to write small html/js payloads to demonstrate attacks, but in every single case where I was performing attacks I was the one coming up with the attack path - the AI was useless there. edit: Actually, it wasn't useless, it just found bugs that I didn't really care about because they were sort of trivial. Finding XSS is awesome, I'm glad it would find really simple stuff like that, but I was going for "this feature is flawed" or "this boundary is flawed" and the model utterly failed there.

tonyedgecombe•21m ago
>I can't believe this has to be said, but yeah. Code took time, but it was never the hard part.

True for you and others here, but you only need to look at the number of people who can’t code FizzBuzz to realise there are many who struggle with it.

It’s easy to take your own knowledge for granted. I’ve met a lot of people who know their business inside out but couldn’t translate that knowledge into code.

staticassertion•6m ago
I mean, of course not everyone can code. I'm not saying that programming is trivial, or anyone can just naturally write code. What I'm saying is that if you're a full time engineer then some part of your day is spent programming but the difficult work is not encompassed by "how do I write the code to do this?" - sometimes it is, but mostly you think about lots of other things surrounding the code.
woeirua•2h ago
How many model releases are we away from people like this throwing in the towel? 2? 3?
agentultra•2h ago
100%.

There are cases where a unit test or a hundred aren’t sufficient to demonstrate a piece of code is correct. Most software developers don’t seem to know what is sufficient. Those heavily using vibe coding even get the machine to write their tests.

Then you get to systems design. What global safety and temporal invariants are necessary to ensure the design is correct? Most developers can’t do more than draw boxes and arrows and cite maxims and “best practices” in their reasoning.

Plus you have the Sussman effect: software is often more like a natural science than engineering. There are so many dependencies and layers involved that you spend more time making observations about behaviour than designing for correct behaviours.

There could be useful cases for using GenAI as a tool in some process for creating software systems… but I don’t think we should be taking off our thinking caps and letting these tools drive the entire process. They can’t tell you what to specify or what correct means.

carlosjobim•1h ago
I don't have any idea of what a unit test is, but with AI I can make programs that help me immensely in my real world job.

Snobby programmers would never even return an email offering money for their services.

staticassertion•1h ago
It's unclear what point you're even trying to make, other than that AI has been helpful to you. But surely you understand that if you don't know what a unit test is you're probably not in a position to comment on the value of unit testing.

> Snobby programmers would never even return an email offering money for their services.

Why would they? I don't respond to the vast majority of emails, and I'm already employed.

carlosjobim•31m ago
Helpful to me and millions of others. Soon to be billions even.

You are employed because somewhere in the pipeline there are paying customers. They don't care about unit tests, they care about having their problems solved. Beware of AI.

staticassertion•25m ago
Right... I mean, no engineer is going to tell you that customers care about unit tests, so I think you're arguing against a straw man here. What engineers will tell you is that bugs cost money, support costs money, etc, and that unit tests are one of the ways we cheaply reduce those costs in order to solve problems, which is what we're in the business of doing.

We are all very aware of the fact that customers pay us... it seems totally strange to me that you think we wouldn't be aware of this fact. I suspect this is where the disconnect comes in, much to the point of the article: you seem to think that engineers just write tests and code, but the article points out how silly that is. We spend most of our time thinking about our customers, features, user experience, and how to match that to the technology choices that will allow us to build, maintain, and support systems that meet customer expectations.

I think people outside of engineering might be very confused about this, strangely, but engineers do a ton of product work, support work, etc, and our job is to match that to the right technology choices.

xg15•1h ago
> Every few years a new tool appears and someone declares that the difficult parts of software engineering have finally been solved, or eliminated. To some it looks convincing. Productivity spikes. Demos look impressive. The industry congratulates itself on a breakthrough. Staff reductions kick in in the hopes that the market will respond positively.

As a software engineer, I'd love if the industry had an actual breakthrough, if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity.

But not if the only reward for this would be to be laid off.

So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?

staticassertion•1h ago
> if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity.

I don't really believe this is possible. Or, it's the sort of thing that gets solved at a "product" level. Reality is complicated. People are complicated. The idea that software can exist without complexity feels sort of absurd to me.

That said, to your larger point, I think the goal is basically to eliminate the middle class.

fao_•1h ago
> So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?

Capitalism is reliant on the underclass (the homeless, the below minimum-wage) to add pressure to the broader class of workers in a way that makes them take jobs that they ordinarily wouldn't (Because they may be e.g. physically/emotionally unsafe, unethical, demeaning), for less money than they deserve and for more hours than they should. This is done in order to ensure that the price of work for companies is low, and that they can always draw upon a needy desperate workforce if required. You either comply with company requirements, or you get fired and hope you have enough runway not to starve. This was written about over a hundred years ago and it's especially true today in the modern form of it. Programmers as a field have just been materially insulated from the modern realities of "your job is timing your bathroom breaks, tracking how many hours you spend looking at the internet, your boss verbally abuses you for being slow, and you aren't making enough money to eat properly".

This is also why many places do de-facto 'cleansings' of homeless people by exterminating their shelter or removing their ability to survive off donations, and why the support that is given for people without the means to survive is not only tedious but almost impossible to get. The majority of workers are supposed to look at that and go "well fuck, glad that's not me!" with a little part of their brain going "if i lost my job and things went badly, that could become me."

This is also why immigration enforcement is a thing — so many modern jobs that nobody else in the western world wants to do are taken by immigrants. The employer won't look too closely at the visa, and in return the person gets work. With the benefit being towards the employer — if the person refuses to do something dangerous to themselves or others, or refuses to produce enough output to sustain the exponential growth at great personal cost, well, then the company can just cut the immigrant loose with no recourse, or outright call the authorities on them so they get deported. Significantly less risky to get people to work in intolerable conditions for illegal wages if there is no hope of them suing you for this.

Back in the 1900s there were international conventions to remove passports. Now? Well, they're a convenient underclass for political manoeuvring. Why would you want people to have freedom of movement if your own citizens could just leave when things get bad, and when the benefits are a free workforce that you don't have to obey workers rights laws about?

neversupervised•1h ago
The goal has nothing to do with you being employed. Your job security is a consequence of the ultimate goal of building AGI, and software development salaries and employment will be affected before we get there. In my opinion, we are already past the SWE peak as far as yearly salary. Yes, there are super devs working on AI making a lot of dough, but I consider that a particular specialty. On average, the salary of a new grad SWE in the US is past its peak if you consider how many new grads can’t get a job.
carlosjobim•1h ago
You can work with something else if there's no longer any demand for your current skills. If you refuse you should starve.
BoredPositron•1h ago
A lot of times bad engineering is all you need.
polynomial•1h ago
This is the under-acknowledged "secret" of this reconfiguration.

It's like the Bill Joy point about mediocre technology taken to the next level.

tyleo•1h ago
I disagree with the premise. It made all engineering easier. Bad and good.

I believe vibe coding has always existed. I've known people at every company who add copious null checks rather than understanding things and fixing them properly. All we see now is copious null checks at scale. On the other hand, I've also seen excellent engineering amplified and features built by experts in days which would have taken weeks.

bensyverson•1h ago
In corporate app development, I would see tests to check that the mocks return the expected values. Like, what are we even doing here?
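A minimal sketch of the anti-pattern being described, using Python's stdlib `unittest.mock` (the gateway and `checkout` names are illustrative):

```python
from unittest.mock import Mock

# A hypothetical payment gateway, stubbed out for the test.
gateway = Mock()
gateway.charge.return_value = {"status": "ok"}

# The anti-pattern: this "test" only verifies the value we just
# configured on the mock. No production code runs at all.
assert gateway.charge(100) == {"status": "ok"}

# A meaningful test exercises real logic *around* the mock, e.g.
# that our code calls the gateway correctly and interprets the reply.
def checkout(gateway, amount):
    # stand-in for real application code
    return gateway.charge(amount)["status"] == "ok"

assert checkout(gateway, 100) is True
gateway.charge.assert_called_with(100)
```

The first assertion can never fail regardless of what the application does, which is presumably what prompts the "what are we even doing here?" reaction.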
anilakar•1h ago
Someone was asked to test untestable code so verifying mock contents was the best they could come up with.
mvpmvh•1h ago
No. Someone was asked to meet an arbitrary code coverage threshold. I'm dealing with this malicious compliance/weaponized incompetence at $current_job
dwoldrich•1h ago
How will you deal with it? I successfully convinced $big_important_group at $day_job to not implement a policy of failing their builds when code coverage dips below their target threshold > 90%. (Insane target, but that's a different conversation.)

I convinced them that if they wanted to treat uncovered lines of code as tech debt, they needed to add an epic of stories to their backlog to write tests. Artificially setting some high target coverage threshold would produce garbage, because developers would write do-nothing tests in order to get their work done without tripping the alarms. I argued that failing builds on code coverage would be unfair, because the tech debt created by past developers would unfairly hinder random current-day devs getting their work done.

Instead, I recommended they pick their current coverage percentage (it was < 10% at the time) and set the threshold to that simply to prevent backsliding as new code was added. Then, as their backlogged, legit tests were implemented, ratchet up the coverage threshold to the new high water mark. This meant all new code would get tests written for them.

And, instead of failing builds, I recommended email blasts to the whole team to indicate there had been some recent backsliding in the testing regime and the codebase had grown without accompanying tests. It was not a huge shame event, but a good motivator for the team to keep up the quality. SonarQube was great for long-term tracking of coverage stats.

Finally, I argued the coverage tool needed to have very liberal "ignore" rules that were agreed to by all members of the team (including managers). Anything that did not represent testable logic written by the team: generated code, configurations, tests themselves, should not count against their code coverage percentages.
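The ratchet described above fits in a few lines. A sketch of a hypothetical CI helper (the file name is an assumption, and a real setup would read the current percentage from a coverage tool such as coverage.py or SonarQube rather than take it as an argument):

```python
from pathlib import Path

# High-water mark committed to the repo alongside the code.
MARK_FILE = Path("coverage_high_water_mark.txt")

def check_ratchet(current_pct: float) -> None:
    """Fail only on backsliding; raise the mark when coverage improves."""
    mark = float(MARK_FILE.read_text()) if MARK_FILE.exists() else 0.0
    if current_pct < mark:
        raise SystemExit(
            f"Coverage {current_pct:.1f}% fell below the "
            f"high-water mark of {mark:.1f}%"
        )
    if current_pct > mark:
        # Ratchet up: new code came with tests, so lock in the gain.
        MARK_FILE.write_text(f"{current_pct:.4f}")
```

New code can only raise the mark, so coverage trends upward without ever punishing current devs for debt they didn't create.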

RobRivera•1h ago
I'm trying to wrap my head around here.

So there are tests that leverage mocks. Those mocks help validate software is performing as desired by enabling tests to see the software behaves as desired in varying contexts.

If the software fails, it is because the mocks exposed that under certain inputs, undesired behavior occurs, an assert fails, and a red line flags the test output.

Validating that the mocks return the desired output.... Maybe there is a desire that the mocks return a stream of random numbers, and the mock validation test asserts said stream adheres to a particular distribution?

Maybe someone in the past pushed a bad mock, that mock validated a test that would have failed given a better mock, and the post-mortem, once the bad software in prod was traced back to the bad mock, derived a requirement that all mocks must be validated?

bdangubic•1h ago
we use this https://github.com/auchenberg/volkswagen
steve_adams_86•1h ago

  # abstract internals for no reason
  def do_thing(x: bool) -> bool:
      if x:
          return True
      else:
          return False
  
  # make sure our logic works as expected
  assert do_thing(True)
  
  # ???
  
  # profit
It's excellent software engineering because there are tests
dwoldrich•1h ago
You could ask the same thing about tests themselves. And I'm not talking about tests that don't exercise the code in a meaningful manner, like your assertions on mocks (?!).

I'm saying you could make the same argument about useful tests themselves: what is testing that the tests are correct?

Uncle Bob would say the production code is testing the tests, but only in the limited, one-time acceptance case where the programmer watches the test fail, implements code, and then watches it pass (in the ideal test-driven development scenario).

But what we do all boils down to acceptance. A human user or stakeholder continues to accept the code as correct equals a job well done.

Of course, this is itself a flawed check, because humans are flawed, miss things, and don't know what they want anyhow. The Agile Manifesto and Extreme Programming were all about organizing to make course corrections as cheap as possible to accommodate fickle humanity.

> Like, what are we even doing here?

What ARE we doing? A slapdash job on the whole. And AI is just making slapdash more acceptable and accepted, because it is so clever and the boards of directors are busy running this latest craze into the dirt. "Baffle 'em with bullsh*t" works in every sector of life and lets people get away with all manner of sins.

I think what we SHOULD be doing is plying our craft. We should be using AI as a thinking tool, and not treat it like a replacement for ourselves and our thinking.

furyofantares•1h ago
Well, it's made bad engineering massively easier and good engineering a little easier.

So much so that many people who were doing good engineering before have opted to move to doing three times as much bad engineering instead of doing 10% more good engineering.

nhaehnle•1h ago
I believe the article exaggerates to make a point. Yes, good engineering can also be assisted with LLM-based agents, but there is a delta.

Good engineering requires that you still pay attention to the result produced by the agent(s).

Bad engineering might skip over that part.

Therefore, via Amdahl's law, LLM-based agents overall provide more acceleration to bad engineering than they do to good engineering.

0xcafefood•1h ago
The connection to Amdahl's law is totally on point. If you're just using LLMs as a faster way to get _your_ ideas down, but still want to ensure you validate and understand the output, you won't get the mythical 10x improvement so many seem to claim they're getting. And if you do want that 10x speedup, you have to forego the validation and understanding.
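The Amdahl's-law arithmetic here is easy to make concrete. Illustrative numbers only: suppose typing code is 30% of the job and the LLM makes that part 5x faster, while the remaining thinking, validating, and reviewing is untouched:

```python
def overall_speedup(accelerated_fraction: float, local_speedup: float) -> float:
    # Amdahl's law: the un-accelerated fraction bounds the total gain.
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / local_speedup)

# "Good engineering": typing is 30% of the work, made 5x faster.
print(overall_speedup(0.30, 5.0))   # ~1.32x overall, nowhere near 10x

# Skip validation and review, so 90% of the work is accelerated.
print(overall_speedup(0.90, 5.0))   # ~3.57x overall
```

Under these assumed numbers, the only way to approach the headline speedups is to enlarge the accelerated fraction, i.e. stop doing the slow, careful parts.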
koonsolo•1h ago
I do agree with you, but don't underestimate the projects where you can actually apply this 10x. For example, I wanted to get some analytics out of my database. What would have been a full weekend project was now done in an hour. So for such things there is a huge speed boost.

But once software becomes bigger and more complex, the LLM starts messing up and the expert has to come in. That basically means your months-long project cannot be done in a week.

My personal prediction: plugins and systems that support plugins will become important. Because a plugin can be written at 10x speed. The system itself, not so much.

onlyrealcuzzo•1h ago
Even if you agree with the OP, there's a large portion of applications where it simply doesn't matter if the quality of the software is good or terrible as long as it sufficiently works.
tyleo•1h ago
Yeah, I've seen this too. I like to call them "single-serving apps". I made a flashcard app to study for interviews and one-shot it with Claude Code. I've had it add some features here and there but haven't really looked at the code.

It's just a small CLI app in 3 TypeScript files.

ffsm8•1h ago
> I've known people at every company who add copious null checks rather than understanding things and fixing them properly.

You know "defensive programming" is a thing, yeah? Sorry mate, but that's a statement I'd expect from juniors, who are also often the ones claiming their own technical superiority over others.

retrodaredevil•38m ago
Adding null checks where they aren't needed means adding branching complexity. It means handling cases that may never need to be handled. Doing all that makes it harder to understand "could this variable ever be null?" If you can't answer that question, it is now harder to write code in the future, often leading to even more unnecessary null checks.

I've seen legacy code bases during code review where someone will ask "should we have a null check there?" and often no-one knows the answer. The solution is to use nullability annotations IMO.

It's easy to just say "oh this is just something a junior would say", but come on, have an actual discussion about it rather than implying anyone who has that opinion is inexperienced.
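The nullability-annotation point translates to any language with optional types. A sketch in Python's type-hint terms (the `User` type and lookup are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    name: str

USERS: dict[int, User] = {1: User("Ada")}

def find_user(user_id: int) -> Optional[User]:
    # The annotation makes "could this be None?" answerable from the
    # signature; a type checker then forces callers to handle it once.
    return USERS.get(user_id)

def greet(user: User) -> str:
    # Declared non-optional: None was ruled out at the boundary, so no
    # defensive check needs to be repeated here.
    return f"Hello, {user.name}"

user = find_user(1)
if user is not None:        # the single, necessary check
    print(greet(user))      # prints "Hello, Ada"
```

The contrast with the legacy-codebase situation is that the answer to "should we have a null check there?" is in the signature, not in someone's memory.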

roncesvalles•23m ago
I think it's easy to forget that the LLM is not a magic oracle. It doesn't give great answers. What you do with the LLM's output determines whether the engineering you produce is good or bad. There are places where you can plonk in the LLM's output as-is and places you can't, or times when you have to keep nudging for a better output, and times when nothing the LLM produces is worth keeping.

It makes bad engineering easier because it's easy to fall into the trap of "if the LLM said so, it must be right".

zer00eyz•1h ago
"AI" (and calling it that is a stretch) is nothing more than a nail gun.

If you gave an experienced house framer a hammer, hand saw and box of nails and a random person off the street a nail gun and powered saw who is going to produce the better house?

A confident AI and an unskilled human are just a Dunning-Kruger multiplier.

j3k3•1h ago
Nicely put.

There's this mistake engineers make when using LLMs and loudly proclaiming it's coming for their jobs / is magic... you have a lot of knowledge, experience, and skill implicitly that allows you to get the LLM to produce what you want.

Without it... you produce crappy stuff that is inevitably going to get mangled and crushed. As we are seeing with Vibe code projects created by people with no exposure to proper Software Engineering.

zer00eyz•1h ago
> As we are seeing with Vibe code projects created by people with no exposure to proper Software Engineering.

And I keep seeing products and projects banning AI: "My new house fell down because of the nail gun used, therefore I'm banning nail guns going forward." I understand and sympathize with maintainers and owners and the pressure they are under, but the limitation is going to look ridiculous as we see more progress with the tools.

There are whole categories of problems that we're creating that we have no solutions for at present: it isn't a crisis, it's an opportunity.

j3k3•51m ago
I personally think it's better to be cautious and wait for improvements in the tooling. It's not always necessary to be the one to take a risk when there are plenty of others willing to do so, whose outcomes can then be assessed.
arty_prof•1h ago
In terms of tech debt, AI obviously makes it easy to create a lot of it. But this is controllable if you analyse in depth what the AI is doing.

I feel I become more of a product engineer than a software engineer when constantly reviewing AI code to check it satisfies my needs.

And the benefits provided by AI are too good to pass up. It allows you to prototype nearly anything in a short time, which is superb. Like any tool, in the right hands it can be a game-changer.

dgxyz•1h ago
Not easier but faster. It’s really hard to catch shit now.
water_badger•1h ago
So somewhere here there is a 2x2 or something based on these factors:

1. Programmers viewing programming through a career and job security lens

2. Programmers who love the experience of writing code themselves

3. People who love making stuff

4. People who don't understand AI very well and have knee-jerk cultural / mob reactions against it because that's what's "in" right now in certain circles.

It is fun to read old issues of Popular Mechanics on archive.org from 100+ years ago because you can see a lot of the same personality types playing out.

At the end of the day, AI is not going anywhere, just like cars, electricity and airplanes never went anywhere. It will obviously be a huge part of how people interact with code and a number of other things going forward.

20-30 years from now the majority of the conversations happening this year will seem very quaint! (and a minority, primarily from the "people who love making stuff" quadrant, will seem ahead of their time)

sega_sai•1h ago
When I see this: "One of the longest-standing misconceptions about software development is that writing code is the difficult part of the job. It never was." I don't think I can take this seriously.

Sure, 'writing code' is often not the difficult part, but when you have time constraints, 'writing code' becomes a limiting factor. And none of us has infinite time on our hands.

So AI not only enables things you just could not afford to do in the past, it also allows you to spend more time on 'engineering', or even to try multiple approaches, which would have been impossible before.

anilakar•1h ago
Agree. Writing code has always been the most time-consuming part that distracts me from actual design. AI just emphasizes the fact that anyone can do the keyboard mashing while reading code is the actual skill that matters.

Give a woodcutter a chainsaw instead of an axe and he'll fell ten times more trees. He'll also likely cause more than ten times the collateral damage.

staticassertion•53m ago
It's hard to reconcile "I don't think I can take this seriously" followed by an immediate admission that you agree but that there's some nuance.

I think the author's post is far more nuanced than this one sentence that you apparently agree with fundamentally.

lowbloodsugar•1h ago
> and ensuring that the system remains understandable as it grows in complexity.

Feel like only people like this guy, with 4 decades of experience, understand the importance of this.

polynomial•1h ago
Understandable is, as always, a proxy for predictable.
jazz9k•1h ago
Juniors that are relying too heavily on AI now will pay the price down the line, when they don't even know the fundamentals, because they just copy and pasted it from a prompt.

It only means job security for people with actual experience.

hyperbovine•1h ago
I’m seeing a real distinction emerge between “software engineering” and “research”. AI is simply amazing for exploratory research — 10x ability to try new ideas, if not more. When I find something that has promise, then I go into SWE mode. That involves understanding all the code the AI wrote, fixing all the dumb mistakes, and using my decades of experience to make it better. AI’s role in this process is a lot more limited, though it can still be useful.
j3k3•1h ago
That's because an LLM can access breadth at any given moment that you cannot. That's the advantage it has.

E.g. quite often a sound (e.g. music) brings back memories of the time when it was being listened to, etc.

Our brains need something to 'prompt' (ironic, I know) for stuff to come to the front. But the human is the final judge (or should be) of what is wrong, and of good quality vs high quality. A taste element is necessary here too.

rvz•1h ago
Is that why there are so many outages across companies adopting AI, including GitHub, Amazon, Cloudflare and even Anthropic itself?

Maybe if they "prompted the agent correctly", they'd get their infrastructure to at least five nines.

If we continue down this path, not only will so-called "engineers" be unable to read or write code at all, but their agents will introduce seemingly correct code that causes outages like the ones we have already seen, such as this one [0].

AI has turned "senior engineers" into juniors, and juniors back into "interns" who cannot tell what maintainable code is, wasting time, money and tokens reinventing a worse wheel.

[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...

furyofantares•1h ago
AI Didn't Simplify Blogging: It Just Made Bad Blogging Easier

I was hopeful that the title was written like LLM-output ironically, and dismayed to find the whole blog post is annoying LLM output.

polynomial•1h ago
Robots making fun of us complaining about them.
_pdp_•1h ago
Put a bad driver in an F1 car and you won't make them a racer. You will just help them crash faster. Put a great driver in that same car, and they become unstoppable.

Technology was never an equaliser. It just divides more, and yes, ultimately some developers will get paid a lot more because their skills will be in more demand, while other developers will be forced to seek other opportunities.

__MatrixMan__•1h ago
Naw, I just yesterday caught something in test that would've made it to prod without AI. It happens all the time.

You can't satisfy every single paranoia, eventually you have to deem a risk acceptable and ship it. Which experiments you do run depends on what can be done in what limited time you have. Now that I can bootstrap a for-this-feature test harness in a day instead of a week, I'm catching much subtler bugs.

It's still on you to be a good engineer, and if you're careful, AI really helps with that.

j3k3•1h ago
Change 'good' for 'disciplined'.

Problem is, discipline is hard for humans. Especially when exposed to a thing that at face value seems like it is really good and correct.

__MatrixMan__•31m ago
We can cheat on discipline if we design our workflows with more careful thought about incentives.

I wound up in a role where I throw away 100% of the code that I write within a few months. My job is about discovering cases where people are operating under false assumptions (typically about how some code will or won't be surprising in context with some dataset), and inform them of the discrepancy. "proofs" would be too strong of a word, but I generate a lot of code that then generates evidence which I then use in an argument.

I do try to be disciplined about the code I rely on, but since I have no incentive to sneak through volumes of unreliable code before moving on to the next feature, it's easy to do. When I'm not diligent, the pain comes quickly, and I once again learn to be diligent. At the end of the day I end up looking at a dashboard I had an agent throw together and I decide if the argument I intend to make based on that dashboard is convincing.

Also, agent sycophancy isn't really a problem because the agents are only asked to collect and represent the data. They don't know what I'm hoping to see, so it's very uncommon that they end up generating something deceptive. Their incentives are also aligned.

I think we can structure much of our work this way (I just lucked into it) where there's no conflict of interest and therefore the need to be disciplined is not in opposition to anything else.

zackmorris•1h ago
A -> (expletive) -> B

I think we're all in denial about how bad software engineering has gotten. When I look at what's required to publish a web page today vs in 1996, I'm appalled. When someone asks me how to get started, all I can do is look at them and say "I'm so sorry":

https://xkcd.com/1168/

So "coding was always the hard part". All AI does is obfuscate how the sausage gets made. I don't see it fixing the underlying fallacies that turned academic computer science into for-profit software engineering.

Although I still (barely) hold onto hope that some of us may win the internet lottery someday and start fixing the fundamentals. Maybe get back to what we used to have with apps like HyperCard, FileMaker and Microsoft Access but for a modern world where we need more than rolodexes. Back to paradigms where computers work for users instead of the other way around.

Until then, at least we have AI to put lipstick on a pig.

tw-20260303-001•1h ago
Or it simply made the step beyond the draft stage faster. It all depends on how one uses it.
jwilliams•1h ago
There are some interesting points here, but I think this essay is a little too choppy - e.g. the Aircraft Mechanic comparison is a long bow to draw.

The Visual Basic comparison is more salient. I've seen multiple rounds of "the end of programmers", including RAD tools, offshoring, various bubble-bursts, and now AI. Just because we've heard it before though, doesn't mean it's not true now. AI really is quite a transformative technology. But I do agree these tools have resulted in us having more software, and thus more software problems to manage.

The Alignment/Drift points are also interesting, but I think they appeal to SWEs' belief that taste/discernment stopped this from happening in pre-AI times.

I buy into the meta-point which is that the engineering role has shifted. Opening the floodgates on code will just reveal bottlenecks elsewhere (especially as AI's ability in coding is three steps ahead and accelerating). Rebuilding that delivery pipeline is the engineering challenge.

yubainu•1h ago
Ultimately, I believe the most important thing is how we effectively utilize AI. We can't and shouldn't entrust everything to AI in any field, not even after AGI is perfected. Sometimes it's important to mass-produce low-quality code, and other times it's important to create beautifully crafted code.
heliumtera•1h ago
Exactly. A large portion of software development was rejecting code from an intellectually functional human being. AGI is not sufficient to achieve minimum quality code, because intelligence was never sufficient.
rednafi•52m ago
AI just lowered the cost of replication. Now you can replicate good or bad stuff but that doesn't automatically make AI the enabler of either.
jinko-niwashi•23m ago
Your "don't fucking touch that file" experience is the exact pattern I kept hitting. After 400+ sessions of full-time pair programming with Claude, I stopped trying to fix it with prompt instructions and started treating it as a permissions problem.

The model drifts because nothing structurally prevents it from drifting. Telling it "don't touch X" is negotiating behavior with a probabilistic system — it works until it doesn't. What actually worked: separating the workflow into phases where certain actions literally aren't available. Design phase? Read and propose only. Implementation phase? Edit, but only files in scope.

Your security example is even more telling — the model folding under minimal pushback isn't a knowledge gap, it's a sycophancy gradient. No amount of system prompting fixes that. You need the workflow to not ask the model for a judgment call it can't be trusted to hold.
