(Much the same as programmers will be replaced by "A.I." ... They won't as long as they educate themselves on proper use of the tools available to them. It'll likely someday be like the "Star Trek" computers, but even they still needed folks with the technical skills to use those computers well / properly.)
Accountants didn't stop increasing in numbers.
Episode 606: Spreadsheets! https://www.npr.org/sections/money/2015/02/25/389027988/epis...
The 1984 story it was inspired by (according to the episode description).
https://medium.com/backchannel/a-spreadsheet-way-of-knowledg...
There are of course still accountants.
That is why it's not a web app. (JavaScript being the most used language on the internet.)
This sort of software (a language runtime) has to be correct; there's no room for clumsiness.
Why do you think almost every AI copilot "demo" is on typical web apps and not on the Linux kernel or on the PyTorch compiler?
It would ruin the "AGI" coding narrative.
The AI boosters need to show that the hype train isn't slowing down. Tell it to replace the Linux kernel developers and watch it struggle in real time.
I sure don't, and no one I've asked so far knows anyone who's had that happen to them either.
I want to say it's a sign of the times to try and make technology a political issue, but my understanding is the same thing happened during the industrial revolution, so maybe it's just human nature?
Well the industrial revolution didn't make us all homeless beggars, so I doubt AI will, either.
Generally what actually seems to be happening is companies want to focus on AI so they close non-AI parts of their business and beef up the AI parts. Which isn’t the same as “replacing engineers with AI”
I think these two audiences - tech CEOs and random anons on the internet - tend to be full of shit and I give very little credence to what they say, but to your point, some people are at least claiming things of the sort.
The target market of these companies is software shops. You don't see them advertising the model capabilities in, e.g., Civil Engineering much. They are trying specifically to sell CEOs a cheaper replacement for software engineers. You have to read the news and announcements with this lens:
A vacuum cleaner manufacturer claims that the dirty, dusty house is a thing of the past. Of course they say that.
Additionally, I’m not hiring anymore which is kind of the same thing as firing. I did have a few roles open for junior assistant types but those are now filled by using AI. AI helps improve my productivity without the added overhead of dealing with staff / contractors. I even hired junior devs and put them on full time study mode to try to skill them up to be useful, one guy was in that mode for 6 months before I had to fire him for learning too slowly. Technically LLMs did learn faster than he did and it was his full time job. It’s easier for me to communicate with the AI, especially with the quick responses and being always available.
I figure the AI will eventually get to my level and eat my lunch but hopefully there are a few years before that happens.
And designers, OMG AI is far easier to deal with and produces far better results.
But if you are asking if I know of an organization that was successful after firing their people... Nope, I don't know any of those.
Look no further than the Microsoft layoff they announced earlier this week.
Of course, you are free to doubt the honesty of the people making those announcements.
It's a natural selection mechanism whereby poorly run companies will fail extremely quickly.
These are bad companies with horrible leadership that are wasting resources. Their existence is a net negative to society and our industry. Goodbye to the garbage Klarnas and Duolingos. No one will miss you.
Sure, but at the short term cost of engineers' livelihoods.
Or were you thinking C-suite would be held to account for the failure?
Poorly run companies can be an opportunity to direct development towards whatever I'm curious about, and that's usually beneficial to both parties.
You just need investors and consumers to allocate capital to their competitors. The flow of capital and consumer preference is a decentralized communication method. Price signals, supply and demand, and government regulations are a form of stigmergy.
Who gives a crap if bad companies fail or don't fail? And for that matter how is it good or bad for you and me if bad companies fail or succeed despite being poorly run?
I'm curious about the context of your judgment.
If a CEO is making a tech product, but knows so little about tech that he's replacing engineers with 2025's AIs, we're all better off if that CEO goes and does something else, and those engineers go work for technical people.
Temporary stability is not a substitute for long-term prosperity. Creative destruction.
If we accept that we should be productive, then it seems easy to justify that engineers should be working at good, well run companies producing real value for society.
In practice having worked in financial services and ad tech, at high salaries in each, it was absolutely the equivalent of intellectual digging of ditches for pay. The only job I ever had that felt productive was in academia, and the pay was off the charts low.
Watching AI drive Microsoft employees insane https://news.ycombinator.com/item?id=44050152 21-may-2025 544 comments
Those humans explaining why fixes aren't complete are providing data for the next training run - really training their replacements.
Then when you do have AIs fixing bugs you won’t need the more mediocre engineers.
Claude 4 Opus and Sonnet seem much better for me. The models needed alignment and feedback but worked fairly well. I know Copilot uses Claude but for whatever reason I don't get nearly the same quality as using Claude Code.
Claude is expensive, $10 to implement a feature, $2 to add some unit tests to my small personal project. I imagine large apps or apps without clear division of modules/code will burn through tokens.
It definitely works as an accelerator but I don't think it's going to replace humans yet, and I think that's still a very strong position for AI to be in.
I've tinkered with the pay-as-you-go plan, but I wonder if the higher cap on Max at $100/month would be worth it?
I have not tried the $100/month subscription. If it's net cheaper than buying credits I would consider it, since that's basically 10 features per month.
Only an AGI will ever "replace" developers. Current AI merely boosts us.
> Become the AI expert on your team. Don't fight the tools, master them. Be the person who knows when AI helps and when it hurts.
Is something I’ve been wondering about. I haven’t played with this AI stuff much at all, despite thinking it is probably going to be a basically interesting tool at some point. It just seems like it is currently a bit bad, and multiple companies have bet millions of dollars on the idea that it will eventually be quite good. I think that’s a self fulfilling prophecy, they’ll probably get around to making it useful. But, I wonder if it is really worthwhile to learn how to work around the limitations of the currently bad version?
Like, I get that we don’t want to become buggywhip manufacturers. But I also don’t want to specialize in hand-cranking cars and making sure the oil is topped off in my headlights…
I would bet about 90% of the people commenting on how useless LLMs are for their job are people who installed Copilot and tried it out for a few days at some point (not in the last month). They haven't even come close to exploring the ecosystem that's being built up right now by the community.
Like, you say these folks have clearly tried the tool out too long ago… but, I mean, at the time they tried it out they could have found other comments just like yours, advertising the fact that now the tool really is revolutionary right now. So, we can see where the skepticism comes from, right?
^^^ This is actually one of the currently "in-demand" skills in "The Industry" right now... ;)
My point of view, I guess, is that we might want to wait until the field is developed to the point where chauffeurs or C programmers (in this analogy) become a thing.
It's not even comparable to free tiers. I have no idea how big the machines or clusters running that are, but they must be huge.
I was very unimpressed with the local models I could run.
But this is the same for any tech that will span the industry. You have people who want to stay in their ways and those who are curious and moving forward, and at various times in people's careers they may be one or the other. This is one of those changes where I don't think you get the option to defer.
The prompt was "Please create a basic NSTextView wrapper in SwiftUI, that uses TextKit2 and SwiftUI best practices."
Claude 4 Sonnet produces something I can't type in and the font is invisible, Gemini 2.5 Pro produces a bunch of compiler errors.
I use this stuff for React at work, and while it's slightly better in that context, it still makes completely idiotic mistakes and it really is a toss up on whether I save any time at all IMO.
I think the positive experiences are some mix of people who:
- Lack the skill/experience to recognize bad code or issues
- Don't care about the quality at all (future upgrades or maintainability)
- Only care about the visual output
--------------------
The more I use these tools the more I think the raw generation aspect is a complete crapshoot. The only thing I've had success with is data marshaling or boilerplate code where I provide examples - and even then they'll do random things I specifically instruct them not to in the prompt. Even for small context windows. And to do anything useful I have to be a fucking surgeon with the context and spend a lot of time crafting a really good prompt, and they still mess it up frequently.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667... was posted in a comment here a few days ago, and I think it explains a lot of the issues. Programming works because it's a rigid language for communicating precise instructions to the machine. We're trying to circumvent that by using natural language, but natural language isn't precise. So the LLM has all this ambiguity to deal with, and the output isn't what we want.
To counteract this the new idea is to provide tools to the LLM so it can check that its output is rigid/valid to at least fix some of the issues, but it's never going to be 100% due to the fundamental way this all works (LLMs using probabilities/randomness and us instructing them with natural language).
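A minimal sketch of that tool-feedback loop, with a stubbed-out model call (the function names and the JSON-validity check here are illustrative assumptions, not any particular vendor's API):

```python
import json

def generate_valid_json(prompt, generate, max_attempts=3):
    """Ask a model for output, validate it rigidly, and feed errors back.

    `generate` is any callable (prompt, feedback) -> str; a real version
    would call an LLM API. Validation here is simply "parses as JSON".
    """
    feedback = None
    for _ in range(max_attempts):
        output = generate(prompt, feedback)
        try:
            return json.loads(output)  # the rigid check, immune to hand-waving
        except json.JSONDecodeError as err:
            feedback = f"Previous output was not valid JSON: {err}"
    raise ValueError("no valid output within the attempt budget")
```

Even with the retry loop, the randomness upstream means this bounds the failure rate rather than eliminating it, which is the "never going to be 100%" point above.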
--------------------
Not to be completely negative, I think there are really exciting use cases that we can leverage well today:
- Embeddings for semantic/similarity search is amazing and will have all kinds of applications
- Processing error messages/logs with LLMs is awesome. They can parse out the actual issues much faster than I can
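On the embeddings point, a toy sketch of similarity search; the 3-d vectors and document titles are made-up stand-ins for real embeddings (a real setup would get vectors from an embedding model, typically hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Hypothetical documents with made-up embedding vectors.
docs = {
    "reset your password":      [0.90, 0.10, 0.20],
    "quarterly revenue report": [0.10, 0.90, 0.40],
    "team offsite schedule":    [0.30, 0.20, 0.90],
}

query = [0.85, 0.15, 0.25]  # pretend embedding of "forgot my login"
best = max(docs, key=lambda d: cosine_similarity(query, docs[d]))
```

The query never shares a keyword with its match; the vectors carry the "semantic" part, which is what makes this useful beyond plain text search.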
https://en.wikipedia.org/wiki/AppleScript
But AppleScript does not let you steal other people's code.
So the point is, it is not about whether AI can fix a bug or do something useful. It is about reporting and staying competitive via claims - just like many other reports that serve no purpose other than the reporting itself.
A few years back, I asked an architect who was authoring an architecture document who the audience for the document was. She replied that the target audience was the reviewers. I asked, does anyone use it after the review? She said she wasn't sure. And not surprisingly, the project, which took 3 years and a large budget to develop, was shelved after being live in prod for an hour, because the whole thing was done only for a press release saying the company had gone live with a new tech. They didn't lie.
I asked it to check my SQL stored procedure for possible logical errors. It found a join error that I didn't remember including in my SQL. After double-checking, I found that it had hallucinated a join that wasn't there and reported it as a bug. When I asked for more information, it apologized for adding it.
I also asked for some C# code with a RegEx. It compiled, but it didn't work; it had swapped the order of two string parameters. I had to copy and paste it back to show why it didn't work, and only then did it realize it had changed the order of the parameters.
I asked for a command-line option to zip files in a certain way. It hallucinated a nonexistent option that would be crucial. In the end, it turned out that it was not possible to zip the files the way I wanted.
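The C# mix-up above (swapped string parameters) has an exact analogue in Python's `re.sub`, whose signature is `re.sub(pattern, repl, string)`; swapping the last two arguments compiles and runs but silently does the wrong thing:

```python
import re

# Correct argument order: pattern, replacement, input string.
masked = re.sub(r"\d{4}", "XXXX", "card 1234 5678")   # "card XXXX XXXX"

# Replacement and input string swapped: no error is raised, the pattern
# simply never matches "XXXX", and the "result" is the untouched input.
swapped = re.sub(r"\d{4}", "card 1234 5678", "XXXX")  # "XXXX"
```

This class of bug compiles and type-checks and only shows up when the output is inspected, which is exactly why it slips past a model that isn't running its own code.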
My manager plans to open our Git repository for AI code and pull request (PR) review. I already anticipate the pain of reviewing nonsensical bug reports.
csallen•3h ago
Just like any other tool, there are people who use it poorly, and people who use it well.
Yes, we're all tired of the endless parade of people who exaggerate the abilities of (current day) AI and claim it can do more than it can do.
But I'm also getting tired of people writing articles that showcase people using AI poorly as if that proves some sort of point about its inherent limitations.
Man hits thumb with hammer. Article: "Hammers can't even drive a simple nail!"
despera•3h ago
They call it a tool and so people leave reviews like any other tool.
OutOfHere•3h ago
Precisely. AI needs appropriate and sufficient guidance to be able to write code that does the job. I make sure my prompts have all of the necessary implementation detail that the AI will need. Without this guidance, the expected result is not a good one.
DragonStrength•3h ago
Oh, you want to fire your engineers? Easy, just perfectly specify exactly what you want and how it should work! Oh, that's what the engineers are for? Huh!
dinfinity•3h ago
Note that the example (shitty Microsoft) implementation was not able to properly run tests during its work, not even tests it had written itself.
If you have an existing codebase that already has plenty of tests, and you ask AI to refactor something whilst giving it the access it needs to run those tests, it can already sometimes do a great job all by itself.
Good specification and documentation also do a lot, of course, but the iterative approach with feedback on whether things are actually working as intended is a game changer. Not surprisingly, it's also a lot closer to how humans do things.
croes•3h ago
You mean the people who create and sell these AIs.
You would blame the hammer, or at least the manufacturer, if they claimed the hammer could do it all by itself.
This is more of a your-car-can-drive-without-supervision-but-it-hit-another-car case.
deadlydose•3h ago
I wouldn't because I'm not stupid and I know what a hammer is and isn't capable of despite any claims to the contrary.
itishappy•3h ago
I drove 200 screws in one weekend using this hammer!
With hammers like these, who needs nails?
Hammers are all you need
Hammers deemed harmful
benreesman•2h ago
AI coding stuff is a massive lever on some tasks when used by experts. But it's not self-driving, and the capabilities of the frontier vendor stuff might be trending down; they're certainly not skyrocketing.
Any other tool: a compiler, an editor, a shell, even a browser, but I'd say build tools are the best analogy: you have chosen to become proficient or even expert or you haven't and rely on colleagues or communities that provide that expertise. Pick a project or a company: you know if you should be messing around with the build or asking a build person.
AI is no different. Claude 4 Opus just went GA and it's in power-user tune still; they don't have the newb/cost-control defaults dialed in yet, so it's really useful and probably will be for a few days, until they get the PID controller wired up to whatever a control vector is these days, and then it will tank to useless slop just like 3.7.
For a week I'll get a little boost in my output and pay them a grand and be glad I did, and then it will go back to worse than useless.
These guys only know one business plan.
AstralStorm•34m ago
You cannot manually train it for your case.
You cannot tell it to not touch particular parts of your project either. It will stomp over all of the code.
You cannot even easily detect this tool has been used except for typical failures.
The tool may also leak your secrets to some central database. You cannot tell it to not do that.
(If you try either of those, it will lie to you that it complied while actually not doing so at all.)
When your networking fails, the tool does not work. It's fragile in all cases.
benreesman•1h ago
These are hooked up to control theory algorithms based on aggregate and regional KV and prompt cache load. This is true of both fixed and per-token billing. The agent will often be an asset at 4am but a liability at 2pm.
You get always-on experiment segmentation; you get behavior-scoped multi-armed bandits rotated into and out of multiple segment categories (an experiment universe will typically have not less than 10,000 segments; each engineer will need maybe 2 or 3, and maybe hundreds of arms per project/feature, so that's a lot of universes).
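For readers unfamiliar with the jargon: a multi-armed bandit is just an allocator that shifts traffic toward whichever variant ("arm") is paying off. A minimal epsilon-greedy sketch, purely illustrative and not any vendor's actual serving logic:

```python
import random

class EpsilonGreedyBandit:
    """Explore a random arm with probability eps; otherwise exploit the best."""

    def __init__(self, n_arms, eps=0.1):
        self.eps = eps
        self.counts = [0] * n_arms      # pulls per arm
        self.values = [0.0] * n_arms    # running mean reward per arm

    def select(self):
        if random.random() < self.eps:
            return random.randrange(len(self.counts))  # explore
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental update of the mean reward for this arm
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

Run at fleet scale against cost or engagement signals, this is how an agent can be quietly tuned to be "an asset at 4am but a liability at 2pm."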
At this stage of the consumer internet cycle it's about unit economics and regulatory capture and stock manipulation via the hype rollercoaster. And make no mistake about what kind of companies these are: they have research programs with heavy short-run applications in mind, and a few enclaves where they do AlphaFold or something. I'm sure they created an environment Carmack would tolerate at least for a while, but I give it a year or two; we saw that movie at Oculus, and Bosworth is a pretty good guy, he's like Jesus compared to the new boss.
In this extended analogy about users, owners, lenders, borrowers and hammers, I'd be asking what is the hammer and who is the nail.
bcyn•2h ago
Not to say that LLMs are at the same reliability of tractors vs. manual labor, but just think that your classification of what's a tool vs. not isn't a fair argument.
pempem•1h ago
Does what it says: When you swing a hammer and make contact, it provides greater and more focused force than your body at that same velocity. People who sell hammers make this claim and sometimes show you that the hammer can even pull out nails really well. The claims about what AI can do are noisy, incorrect and proffered by people who - I imagine OP thinks and would agree - know better. Essentially they are saying "Hammers are amazing. Swing them around everywhere"
Right to repair: Means an opportunity to understand the guts of a thing and fix it to do what you want. You cannot really do this to AI. You can prompt differently but it can be unclear why you're not getting what you want
benreesman•1h ago
People on HN love to bring up farm subsidies, and it's a real issue, but big agriculture has special deals and whatnot. They have redundancy and leverage.
The only time this stuff kicks in is when the person with the little plot needs the next harvest to get solvent, and the only outcome it ever achieves is to push one more family farm on the brink into receivership and directly into the hands of a conglomerate.
Software engineers commanded salaries that The Right People have found an affront to the order of things, long after they had gotten doctors and lawyers and other high-skill trades largely brought to heel via joint licensing and a pick-a-number tuition debt load. This isn't easy in software for a variety of reasons, but roughly because the history of computer science in academia is a unique one: it's research-oriented in universities (mostly; there are programs with an applied tilt), but almost everyone signs up, graduates, and heads to industry without a second thought, so back when the other skilled trades were getting organized into the class system it was kind of an oddity, regarded as almost an eccentric pursuit by deans and shit.
So while CS fundamentals are critical to good SWEs, schools don't teach them well as a rule, any more than a physics undergraduate is going to be an asset at CERN: it's prep for theory research most never do. Applied CS is just as serious a topic, but you mostly learn it via serious self-study or from coworkers at companies with chops. Even CS graduates who are legends almost always emphasize that if you're serious about hacking, then undergrad CS is remedial by the time you run into it (Coders at Work is full of this sentiment).
So to bring this back to tractors and AI, this is about a stubborn nail in what remains of the upwardly mobile skilled middle class that multiple illegal wage fixing schemes have yet to pound flat.
This one will fail too, but that's another mini blog post.
johnisgood•33m ago
> those aren't what the vast majority of folks are using or pushing.
Good for them.
If you do not want to use SaaS, use a local model.
AstralStorm•27m ago
More if you want to actually train it.
Jackson__•2h ago
Man hits thumb with hammer. Hammer companies proclaim hammers will be able to build entire houses on their own within the next few years [0].
[0] https://www.nytimes.com/2025/05/23/podcasts/google-ai-demis-...