Also, something being a liability and something having upkeep costs are not the same thing.
What would your definition of /liability/ be then? 'An ongoing commitment to pay future costs' is a pretty good one.
> And as we'll see, that's the one skill AI isn't close to replacing.
Yet we never 'see' this in the article. It just restates it a few times without providing any proof.
I'd argue the opposite: specifically asking AI for designing an architecture already yields better results than what a good 30% of 'architects' I've encountered could ever come up with. It's just that a lot of people using AI don't explicitly ask for these things.
I.e. writing the prompt, understanding the answers, pushing back, etc.
Unless AI is introduced as a regular coworker and stakeholders want to communicate with it regularly, I don't see this changing anytime soon.
Reverse engineering stuff when non-cooperative stakeholders dominate the project has its limits too, and requires "god mode" type access to internal infrastructure, which is not something anybody gets.
Architects are like managers, it's way harder than people imagine and very few people can actually do the work.
Also I hate that "architect" is used as a synonym of "cloud architect". There is much more to software architecture than cloud.
I don't know whether you were actually looking for an architect? There are different types of architects. For example, you have got enterprise architects that indeed will never touch yaml, you have got solution architects who have a more narrow focus, and you have got engineers with an overbearing plate of work and team responsibilities. The latter are better called lead engineers. In my experience, being a good (lead) engineer doesn't make one a good architect, but companies try to make their job postings sexier by titling them with "architect". One would imho do better by taking lead engineers seriously in their own right.
Architects in general need to be very skilled in abstract reasoning, information processing & conceptual thinking. However, the people hiring for an "architect" often look for a 2x/nx engineer, who is able to code x widgets per hour. That is a stupid mismatch.
I would agree however that someone without previous practical experience would be rather unsuitable, especially "below" enterprise architect level.
> Andreessen said that venture capital might be one of the few jobs that will survive the rise of AI automation. He said this was partly because the job required several “intangible” skills and was more of an art than a science.
https://fortune.com/article/mark-andreessen-venture-capitali...
I've seen a lot more ai-generated art than ai-generated science.
Ex: If you're a lazy typist like most, then a code assistant can speed you up significantly, when you use it as an autocomplete plus. But if you're a very practiced vim user and your fingers fly over the keyboard, or a wizard lisp hacker who uses structural editing, then a code assistant slows you down or distracts you even.
Ohhh, you mean power point writers. Sorry, lost you for a minute there.
On the other hand, they are pretty poor at reasoning from first principles to solve problems that are far outside their training corpus. In some domains, like performance-sensitive platforms, the midwit solution is usually the wrong one and you need highly skilled people to do the design work using context and knowledge that isn’t always available to LLMs. You could probably use an LLM to design a database kernel but it will be a relatively naive one because the training data isn’t available to do anything close to the state-of-the-art.
I'm honestly shocked by the number of upvotes this article has on Hacker News. It's extremely low quality. It's obviously written with ChatGPT. The tells are:
(1) Incorrect technology "hype cycle". It shows "Trigger, Disillusionment, Enlightenment, Productivity". It's missing the very important "Inflated Expectations".
(2) Too many pauses that disrupt the flow of ideas:
- Lots of em-dashes. ChatGPT loves to break up sentences with em-dashes.
- Lots of short sentences to sound pithy and profound. Example: "The executives get excited. The consultants circle like sharks. PowerPoint decks multiply. Budgets shift."
(3) "It isn't just X, it's X+1", where X is a normal descriptor, where X+1 is a more emphatic rephrasing of X. ChatGPT uses this construct a lot. Here are some from the article:
- "What actually happens isn't replacement, it's transformation"
- "For [...] disposable marketing sites, this doesn't matter. For systems that need to evolve over years, it's catastrophic."
Similarly, "It's not X, it's inverse-X", resulting in the same repetitive phrasing:
- "The NoCode movement didn't eliminate developers; it created NoCode specialists and backend integrators."
- "The cloud didn't eliminate system administrators; it transformed them into DevOps engineers"
- "The most valuable skill in software isn't writing code, it's architecting systems."
- "The result wasn't fewer developers—it was the birth of "NoCode specialists""
- "The sysadmins weren't eliminated; they were reborn as DevOps engineers"
- "the work didn't disappear; it evolved into infrastructure-as-code,"
- "the technology doesn't replace the skill, it elevates it to a higher level of abstraction."
- "code is not an asset—it's a liability."
---------
I wish people stopped using ChatGPT. Every article is written in the same wordy, try-too-hard-to-sound-profound ChatGPT mannerisms.
Nobody writes in their own voice anymore.
Nobody knows what the future will look like, but I would change that sentence slightly:
"It's architecting systems. And that's the one thing AI can't yet do."
And the disdain for marketing sites continues. I'd argue the thing that's in front of your customer's face isn't "disposable"! When the customer wants to tinker with their account, they might get there from the familiar "marketing site". Or when potential customers and users of your product are weighing up your payment plans, these are not trivial matters! Will you really trust Sloppy Jo's AI in the moment customers are reaching for their credit cards? The 'money shot' of UX. "Disposable"? "Doesn't matter"? Pffff!
I don't think "disposable" is being used here as a pejorative adjective for them. They are important, but they are built in a special way indeed.
Funny, because I did some freelancing work fixing disposable vibe-coded landing pages recently. And if there's one thing we can count on, it's that the biggest control-freaks will always have that one extra stupid requirement that completely befuddles the AI and pushes it into making an even bigger mess, and then I'll have to come fix it.
It doesn't matter how smart the AI becomes, the problems we face with software are rarely technical. The problem is always the people creating accidental complexity and pushing it to the next person as if it was "essential".
The biggest asset of a developer is saying "no" to people. Perhaps AIs will learn that, but with competing AIs I'm pretty sure we'll always get one or the other to say yes, just like we have with people.
It's just never been that hard to solve technical problems with code except for an infinitesimal percentage of bleeding edge cases.
"People problems" are problems mainly caused by lack of design consistency, bad communication, unclear vision, micromanagement.
A "people solution" would be to, instead of throwing crumbs to the developers, actually have a shared vision that allows the developers/designers/everyone to plan ahead, produce features without fear (causing over-engineering) or lack of care (causing under-engineering).
Even if there is no plan other than "go to market ASAP", everyone should be aware of it and everyone should be aware of the consequences of swerving the car 180 degrees at 100km/h.
Feedback both ways is important, because if you only have top-down communication, the only feedback will be customer complaints and developers getting burned out.
100% management-induced problems.
For example in chess AI is already far better than humans. Including on tasks like evaluating positions.
Admittedly, I use "AI" in a broad sense here, despite the article being mostly focused on LLMs.
Sometimes. I have often had to say "no" because the customer request is genuinely impossible. Then comes the fun bit of explaining why the thing they want simply cannot exist, because often they'll try "But what if you just ... ?" – "No! It doesn't work that way, and here's why..."
I had to explain recently that `a * x !== b * x` when `a !== b`... it is infuriating hearing "but the result is the same in this other competitor" coupled with the "maybe the problem here is you're not knowledgeable enough to understand".
We've definitely had our fair share of "IDK what to tell you, those guys are mathing wrong".
TBF, though, most customers are pretty tolerant of explainable differences in computed values. There's a bunch of "meh, close enough" in finance. We usually only run into the problem when someone (IMO) is looking for a reason not to buy our software. "It's not a perfect match, no way we can use this" sort of thing.
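For what it's worth, most of those "explainable differences" come down to where each system rounds or truncates. A minimal Python sketch with made-up numbers:

```python
# Two mathematically equivalent ways to compute the same fee can disagree
# by a few currency units depending on where rounding happens (values invented).
rate = 0.07
line_items = [19.99, 3.15, 104.49] * 1000

fee_rounded_per_item = sum(round(a * rate, 2) for a in line_items)  # round each line item
fee_rounded_on_total = round(sum(line_items) * rate, 2)             # round only the grand total

print(fee_rounded_per_item, fee_rounded_on_total)  # slightly different, both defensible
```

Neither answer is wrong; they just round at different points, which is usually all the "other competitor" is doing differently.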
At this particular job I used the plus, the star (multiplication) and once I even got to use the minus.
There's a legend going around that a friend of mine has used division, but he has a PhD.
What you have to do is dig into the REASONS they want X or Y or Z (all of which are either expensive, impossible, or both) - then show them a way to get to their destination or close to it.
In the business user's mind, negotiation means the developer can do X but the developer is lazy. Usually, it is that requirement X doesn't make any sense because a meeting was held where the business decided to pivot in a new direction and decided on the new technical solution. The product owner simply gives out the new requirement without the context. If an architect or senior developer had been involved in the meeting, they would have told the business: you just trashed six months of development and we will now start over.
But fundamentally it just means "the base of data", the same way "a codebase" doesn't just mean a Git repository. "Downloading the database" just means there's a way to download all the data, and CSV is a reasonable export format. Don't get confused into thinking it means a way to download the Postgres data folder.
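To make that concrete, here's a minimal sketch (schema and data invented) of what "download the database" usually means in practice: exporting the data, not copying the server's data folder.

```python
import csv, sqlite3

# Hypothetical sketch: "downloading the database" as a data export,
# not a copy of the server's data directory. Schema and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada', 'ada@example.com')")

with open("users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "email"])  # header row
    writer.writerows(conn.execute("SELECT id, name, email FROM users"))
```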
I think AI will get there when it comes to “you asked for a gif but they don’t support transparency”, but I am 100% sure people will continue to write “make the logo a square where every point is equidistant from the center” requirements.
EDIT: yes jpg, not gif, naughty typo + autocorrect
Why wouldn't the AI deal with that the same way human developers do? Follow up with questions, or iterate requirements?
[edit] In other words, llms that lie less will be more valuable for certain people, therefore llms that tell you when you are dumb will eventually win in those circles, regardless of how bruising it is to the user's ego.
Even experienced engineers can be surprisingly bad at this. Not everyone can tell their boss “That’s a stupid requirement and here’s why. Did you actually mean …” when their paycheck feels on the line.
The higher you get in your career, the more that conversation is the job.
They just want a yes-bot.
There's also no "ruling out" the Earth will get zapped by a gamma-ray burst tomorrow, either. You seem to be talking about something that, if done properly, would require AGI.
You can do anything with AI. Anything at all. The only limit is yourself.
The thing is, with enough magical thinking, of course they could do anything. That lets unscrupulous salesmen sell you something that is not actually possible. They let you do the extrapolation, or they do it for you, promising something that doesn't exist, and may never exist.
How many years has Musk been promising "full self driving", and how many times recently have we seen his cars driving off the road and crashing into a tree because it saw a shadow, or driving into a Wile E Coyote style fake painted tunnel?
While there is some value in evaluating what might come in the future when evaluating, for example, whether to invest in an AI company, you need to temper a lot of the hype around AI by doing most of your evaluation based on what the tools are currently capable of, not some hypothetical future that is quite far from where they are.
One of the things that's tricky is that we have had a significant increase in the capability of these tools in the past few years; modern LLMs are capable of something far better than two or three years ago. It's easy to think "well, what if that exponential curve continues? Anything could be possible."
But in most real life systems, you don't have an unlimited exponential growth, you have something closer to a logistic curve. Exponential at first, but it eventually slows down and approaches a maximum asymptotically.
Exactly where we are on that logistic curve is hard to say. If we still have several more years of exponential growth in capability, then sure, maybe anything is possible. But more likely, we've already hit that inflection point, and continued growth will go slower and slower as we approach the limits of this LLM based approach to AI.
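To put rough numbers on the shape (parameters entirely arbitrary, just to show the bend), a toy logistic curve looks exponential early on and then flattens toward its ceiling:

```python
import math

# Toy logistic curve, arbitrary parameters: roughly exponential early on,
# then it bends and approaches the ceiling asymptotically.
def logistic(t, ceiling=100.0, rate=0.8, midpoint=10.0):
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 21, 2):
    print(f"t={t:2d}  capability={logistic(t):6.2f}")
```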
This is what people never understand about no-code solutions. There is still a process that takes time to develop things, and you will inevitably have people become experts at that process who can be paid to do it much better and quicker than the average person.
The output was verbose, but it tried and then corrected me
> Actually, let me clarify something important: what you've described - "every point equidistant from the center" - is actually the definition of a circle, not a square!
here's the prompt
> use ascii art, can you make me an image of a square where every point is equidistant from the center?
If we're just looking for clever solutions, the set of equidistant points in the manhattan metric is a square. No clarifications needed until the client inevitably rejects the smart-ass approach.
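For anyone who wants the smart-ass answer in ASCII art, a quick sketch:

```python
# Points with |x| + |y| == r are all equidistant from the center in the
# Manhattan metric, and together they form a square rotated 45 degrees.
r = 5
for y in range(r, -r - 1, -1):
    print("".join("#" if abs(x) + abs(y) == r else "." for x in range(-r, r + 1)))
```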
Depends on what “transparent” means.
The task has been set; the soul weeps
> On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
From Passages from the Life of a Philosopher (1864), ch. 5 "Difference Engine No. 1"
You put in 2+2, and 4 comes out. That's the right answer.
If you put in 1+1, which are the wrong figures for the question of 2+2, will 4 still come out? It's easy to make a machine that always says 4.
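In code, the trivial (and useless) version of that machine:

```python
# The trivial machine that always gives the "right" answer to 2 + 2,
# no matter which figures you put in.
def difference_engine(a, b):
    return 4

print(difference_engine(2, 2))  # 4
print(difference_engine(1, 1))  # still 4, which is exactly the problem
```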
They are either asking: is the machine capable of genuine thought, and therefore capable of proactively spotting an error in the input and fixing it? Or they were asking: how sensitive is the output to errors in the input (i.e., how reliable is it)?
I sometimes take them to be asking the former question, as when someone asks, "Is the capital of France, paree?" and one responds, "Yes, it's spoken by the french like paree, but written Paris"
But they could equally mean, "is the output merely a probable consequence of the input, or is the machine deductively reliable"
Babbage, understanding the machine as a pure mechanism, is oblivious to either possibility, yet very much inclined to sell it as a kind of thinking engine -- which would require, at least, both capacities
Given those requirements, I would draw a square on the surface of a sphere, making each point of the square equidistant from the sphere's center.
We can argue that AI can do this or that, or that it can't do this or that. But what is the alternative that is better? There often isn't one. We have already been through this repeatedly in areas such as cloud computing. Running your own servers is leaner, but then you have to acquire servers, data centers and operations. Which is hard. While cloud computing has become easy.
In another story here, many are defending HN as being simple [0]. Then it is noted that it might be getting stale [1]. Unsurprisingly, as the simple nature of HN doesn't offer much over asking an LLM. There are things an LLM can't do, but HN doesn't do much of that.
For people to be better we actually need people. Who have housing, education and healthcare. And good technologies that can deliver performance, robustness and security. But HN is full of excuses why those things aren't needed, and that is something that AI can match. And it doesn't have to be that good to do it.
[0] https://news.ycombinator.com/item?id=44099357 [1] https://news.ycombinator.com/item?id=44101473
It's not just on HN; there's a lot of faith in the belief that eventually AI will enable enlightened individuals infinite leverage that doesn't hinge on pesky Other People. All they need to do is trust the AI, and embrace the exponentials.
Calls for the democratization of art also fall under this. Part of what develops one's artistic taste is the long march of building skills, constantly refining, and continually trying to outdo yourself. In other words: The Work. If you believe that only the output matters, then you're missing out on the journey that confers your artistic voice.
If people had felt they had sufficient leverage over their own lives, they wouldn't need to be praying to the machine gods for it.
That's a much harder problem for sure. But I don't see AI solving that.
In my experience this is always the hardest part of the job, but it's definitely not what a lot of developers enjoy (or even consider to be their responsibility).
I think it's true that there will always be room for developers-who-are-also-basically-product-managers, because success for a lot of projects will boil down to really understanding the stakeholders on a personal level.
There's also an art to it in how you frame the response, figuring out what the clients really want, and coming up with something that gets them 90% there w/o mushrooming app complexity. Good luck with AI on that.
In my personal experience there’s no substitute to building relationships if you’re an individual or small company looking for contract/freelance work.
It starts slow, but when you’re doing good work and maintain relationships you‘ll be swimming in work eventually.
Much better is some variation of "We could do that, but it would take X weeks and/or cost this much in expenses/performance."
This leads to a conversation that enlightens both parties. It may even result in you understanding why and how the request is good after all.
Onus is yours to explain the difficulty and ideally the other party decides their own request is unreasonable, once you’ve provided an unreasonable timeline to match.
Actually straight-up saying no is always more difficult because if you’re not actually a decision-maker then what you’re doing is probably nonsense. You’re either going to have to explain yourself anyways (and it’s best explained with an unreasonable timeline), or be removed from the process.
It’s also often the case that the requestor has tried to imagine himself in your shoes in an attempt to better explain his goals, and comes up with some overly complex solution — and describes that solution instead of the original goal. Your goal with absurd requests is to pierce that veil and reach the original problem, and then work back to construct a reasonable solution to it
Product wanted it done in 6 months, to which I countered that the timeframe was highly unlikely no matter how many devs could be onboarded. We then proceeded to do weekly scope reduction meetings. After a month we got to a place where we comfortably felt a team of 5 could knock it out... ended up cutting the number of bugs down only marginally as stability was a core need, but the features were reduced to only 5.
Never once did I push back and say something wasn't a good idea; much of what happened was giving high level estimates, and if something was considered important enough, spending a few hours to a few days doing preliminary design work for a feature to better hone in on the effort. It was all details regarding difficulty/scope/risk to engender trust that the estimates were correct, and to let product pick and choose what were the most important things to address.
Finally, a Microsoft FrontPage for the 2020's
"control-freak" not necessary. For any known sequence/set of feature requirements it is possible to choose an optimal abstraction.
It's also possible to order the requirements in such a way, that introduction of next requirement will entirely invalidate an abstraction, chosen for the previously introduced requirements.
Most of the humans have trouble recovering from such a case. Those who do succeed are called senior software engineers.
Until a level of absolutely massive scale, modern tooling and code and systems reasonably built can handle most things technically, so it usually comes down to minimizing complexity as the #1 thing you can do to optimize development. And that could be
- code design complexity
- design complexity
- product complexity
- communication complexity
- org complexity
And sometimes minimizing these ARE mutually exclusive (most times not, and in an ideal world never… but humans...) which is why much of our job is to all push back and trade off against complexity in order to minimize it. With the understanding that there are pieces of complexity so inherent in a person/company/code/whatever’s processes that in the short term you learn to work around/with it in order to move forward at all, but hopefully in the long term make strategic decisions along the way to phase it out
> It's architecting systems. And that's the one thing AI can't do.
Why do people insist on this? AI absolutely will be able to do that, because it increasingly can do that already, and we are now goalposting around what "architecting systems" means.
What it cannot do, even in theory, is decide for you to want to do something and decide for you what that should be. (It can certainly provide ideas, but the context space is so large that I don't see how it would realistically be better at seeing an issue that exists in your world, including what you can do, who you know, and what interests you.)
For the foreseeable future, we will need people who want to make something happen. Being a developer will mean something else, but that does not mean that you are not the person most equipped to handle that task and deal with the complexities involved.
In fact, I think this is the scary thing that people are ringing the alarm bells about. With enough surveillance, organizations will be able to identify you reliably out in the world enough to build significant amount of context, even if you aren’t wearing a pair of AI glasses.
And with all that context, it will become a reasonable task for AI to guess what you want. Perhaps even guess a string of events or actions or activities that would lead you towards an end state that is desirable by that organization.
This is primarily responding to that one assertion though and is perhaps tangential to your actual overall point.
I think that's the key. The only ones who can provide enough accurate context are software developers. No POs or managers can handle such levels of detail (or abstraction) to hand them over via prompts to a chatbot; engineers are doing this on a daily basis.
I laugh at the image of a non-technical person like my PO or the manager of my manager giving "orders" to an LLM to design a high-scalable tiny component for handling payments. There are dozens of details that can go wrong if not-enough details are provided: from security, to versioning, to resilience, to deployment, to maintainability...
It's also unlikely that context windows will become unbounded to the point where all that data can fit in context, and even if it can it's another question entirely whether the model can actually utilize all the information.
Many, many unknown unknowns would need to be overcome for this to even be in the realm of possibility. Right now it's difficult enough to get simple agents with relatively small context to be reliable and perform well, let alone something like what you're suggesting.
I don’t know if I really believe that it would be better than a human in every domain. But it definitely won’t have a cousin on the board of a competitor, reveal our plans to golfing buddies, make promotions based on handshake strength, or get canceled for hitting on employees.
To be CEO is to have opinions and convictions, even if they are incorrect. That's beyond LLMs.
More to the point, I was under the impression that current super-subservient LLMs were just a result of the fine-tuning process. Of course, the LLM doesn’t have an internal mental state so we can’t say it has an opinion. But, it could be fine-tuned to act like it does, right?
Who is fine-tuning the LLM? If you're having someone turns the dials and setting core concepts and policies so that they persist outside the context window it seems to me that they're the actual leader.
I’m speculating about a company run by an LLM (which doesn’t exist yet), so it seems plausible enough that all of the employees of the company could use it together (why not?).
Think about it: if, in 10 years, I create a company and my only employee is a highly capable LLM that can execute any command I give, who's going to be liable if something goes wrong? The LLM or me? It's gonna be me, so I better give the damn LLM explicit and unambiguous commands... but hey, I'm only the CEO of my own company, I don't know how to do that (otherwise, I would be an engineer).
See, "AI" don't even have to guess it, I make full public disclosure of it. If anything can help with such a goal, including automated inference (AI) devices, there is no major concern with such a tool per se.
The leviathan monopolizing the tool for its own benefit in a detrimental way for human beings is an orthogonal issue.
¹ this is a bit of an anthropocentric statement, but it's a good way to favor human agreement, and I believe it still actually implicitly requires living in harmony with the rest of our fellow earth inhabitants
For example, my mother calls and asks if I want to come over.
How is an AI ever going to have the context to decide that for me? Given the right amount and quality of sensors starting from birth or soon after – sure, it's not theoretically impossible.
But as a grown up person that has knowledge about the things we share, and don't share, the conflicts in our present and past, the things I never talked about to anyone and that I would find hard to verbalize if I wanted to, or admit to myself that I don't.
It can check my calendar. But it can't understand that I have been thinking about doing something for a while, and I just heard someone randomly talking about something else, that resurfaced that idea and now I would really rather do that. How would the AI know? (Again, not theoretically impossible given the right sensors, but it seems fairly far away.)
I could try and explain of course. But where to start? And how would I explain how to explain this to mum? It's really fucking complicated. I am not saying that LLMs would not be helpful here; they are generalization monsters. Actually, it's both insane and sobering how helpful they can be given the amount of context that they do not have about us.
Which means it cannot architect a software solution just by itself, unless it could read people's minds and know what they might want.
AIs can, on some time horizon, do anything that Is can do.
Just because one is organic-based doesn’t necessitate superior talent.
My guess though is that the lack of hiring is simply a result of the over saturation of the market. Just looking at the growth of CS degrees awarded you have to conclude that we'd be in such a situation eventually.
These things don't happen overnight though, it'll probably take a few years yet for the shock of whatever is going on right now to really play out.
We’ve been seeing layoffs for over 3 years…
That was before the pandemic and AI.
It was predictable that some layoffs would eventually happen, we just didn't know it would be so fast.
I wonder if it's AI, the market, both, or some other cause... :/
You think executives are gonna be saying, “yeah we’re laying off people because our revenue stinks and we have too high of costs!” They’re gonna tell people, “yeah, we definitely got AI. It’s AI, that’s our competitive edge and why we had to do layoffs. We have AI and our competitors don’t. That’s why we’re better. (Oh my god, I hope this works. Please get the stock back up, daddy needs a new yacht.)”
The layoffs started exactly as soon as the US government decided to stop giving free money to investment funds. A few days before they announced it.
A bit before that, it was clear it was going to happen. But I do agree that years earlier nobody could predict when it would stop.
Depends on where too. Was just talking to a friend yesterday who works for a military sub (so not just software) and they said their projects are basically bottlenecked by hiring engineers.
Fintech unicorn that has AI in its name, but still forbids usage of LLMs for coding (my previous job) --> no hiring of juniors since 2023.
YC startup funded in 2024 heavily invested in AI (my current job) --> half the staff is junior.
I think the demand for developers will similarly fluctuate wildly while LLMs are still being improved towards the point of being better programmers than most programmers. Then programmers will go and do other stuff.
Being able to make important decisions about what to build is one of those things that should increase in demand as the price of building stuff goes down. Then again, making important technical decisions and understanding their consequences has always been part of what developers do. So we should be good at that.
They were most certainly not! Which is why you had a solid 60+ years of sail+steam ships. And even longer for cargo! [0]
Parent picked a great metaphor for AI adoption: superior in some areas, inferior in others, with the balance changing with technological advancement.
[0] https://en.m.wikipedia.org/wiki/SS_Great_Western https://en.m.wikipedia.org/wiki/SS_Sirius_(1837) https://www.reddit.com/r/AskHistorians/comments/4ap0dn/when_...
So what are the areas where AI is superior to traditional programming? If your answer is suggestion followed by refinement with traditional tooling, then it's just a workflow add-on like contextual help, google search, and github code search. And I prefer the others because they are more reliable.
We have six major phases in the software development lifecycle: 1) Planning, 2) Analysis, 3) Design, 4) Implementation, 5) Testing, 6) Maintenance. I failed to see how LLM assistance is objectively better even in part than not having it at all. Everything I've read is mostly anecdote where the root cause is inexperience and lack of knowledge.
This is already happening. Over the past 4-5 years I've known more than 30 senior devs either transition into areas other than development, or in many case, completely leave development all together. Most have left because they're getting stuck in situations like you describe. Having to pick up more managerial stuff and AI isn't capable of even doing junior level work so many just gave up and left.
Yes, AI is helping in a lot of different ways to reduce development times, but the offloading of specific knowledge to these tools is hampering actual skill development.
We're in for a real bumpy ride over the next decade as the industry comes to grips with how to deal with a lot of bad things all happening at the same time.
Planning is the ability to map concerns to solutions and project solution delivery according to resources available. I am not convinced AI is anywhere near getting that right. It’s not straightforward even when your human assets are commodities.
Acting on plans is called task execution.
Architecture is the design and art of interrelated systems. This involves layers of competing and/or cooperative plans. AI absolutely cannot do this. A gross hallucination at one layer potentially destroys or displaces other layers and that is catastrophically expensive. That is why real people do this work and why they are constantly audited.
And yet I can ask it how to architect my database in a logical way, and it clearly has solid ideas that again, it doesn’t script itself.
So really it teaches us or instructs us one way to do things, it’s not executing in any realm…yet
If people still had to think for themselves, what would be the point?
I mean, we literally have an industry that just does that (Vercel, Netlify, etc.)
Not really. Programming means explaining to the machine what to do. How you do it has changed over the years. From writing machine language and punching cards to gluing frameworks and drawing boxes. But the core is always the same: take approximate and ambiguous requirements from someone who doesn't really know what he wants and turn it into something precise the machine can execute reliably, without supervision.
Over the years, programmers have figured out that the best way to do it is with code. GUIs are usually not expressive enough, and English is too ambiguous and/or too verbose, that's why we have programming languages. There are fields that had specialized languages before electronic computers existed, like maths, and for the same reason.
LLMs are just the current step in the evolution of programming, but the role of the programmer is still the same: getting the machine to do what people want, be it by prompting, drawing, or writing code, and I suspect code will still prevail. LLMs are quite good at repeating what has been done before, but having them write something original using natural language descriptions is quite a frustrating experience, and if you are programming, there is a good chance there is at least something original to it, otherwise, why not use an off-the-shelf product?
We are at the peak of the hype cycle now, but things will settle down. Some things will change for sure, as always when some new technology emerges.
I feel like a lot of people need to go re-read The Moon Is a Harsh Mistress.
On the bright side, the element of development that is LEAST represented in teaching and interviewing (how to structure large codebases) will be the new frontier and differentiator. But much as scripting language removed the focus on pointers and memory management, AI will abstract away discrete blocks of code.
It is kind of the dream of open source software, but advanced - don't rebuild standard functions. But also, don't bother searching for them or work out how to integrate them. Just request what you need and keep going.
"Your job is now to integrate all of this AI generated slop together smoothly" is a thought that is going to keep me up at night and probably remove years from my life from stress
I don't mean to sound flippant. What you are describing sounds like a nightmare. Plumbing libraries together is just such a boring, miserable chore. Have AI solve all the fun challenging parts and then personally do the gruntwork of wiring it all together?
I wish I were closer to retirement. Or death
At this clip it isn't very hard to imagine the developer layer becoming obsolete or reduced down to one architect directing many agents.
In fact, this is probably already somewhat possible. I don't really write code anymore, I direct claude code to make the edits. This is a much faster workflow than the old one.
I don't see LLMs as much different really, our jobs becoming easier just means there's more things we can do now and with more capabilities comes more demand. Not right away of course.
The hard part is to have a consistent system that can evolve without costing too much. And the bigger the system, the harder it is to get this right. We have principles like modularity, cohesion, information hiding,... to help us on that front, but not a clear guideline on how to achieve it. That's the design phase.
Once you have the two above done, coding is often quite easy. And if you have a good programming ecosystem and people that know it, it can be done quite fast.
Largely, because there were still upstream blockers that constrained throughput.
Typically imprecise business requirements (because someone hadn't thought sufficiently about the problem) or operation at scale issues (poorly generalizing architecture).
> our jobs becoming easier just means there's more things we can do now and with more capabilities comes more demand
This is the repeatedly forgotten lesson from the computing / digitization revolution!
The reason they changed the world wasn't because they were more capable (versus their manual precursors) but because they were economically cheaper.
Consequently, they enabled an entire class of problems to be worked on that were previously uneconomical.
E.g. there's no company on the planet that wouldn't be interested in more realtime detail of its financial operations... but that wasn't worth enough to pay bodies to continually tabulate it.
>> The NoCode movement didn't eliminate developers; it created NoCode specialists and backend integrators. The cloud didn't eliminate system administrators; it transformed them into DevOps engineers at double the salary.
Similarly, the article feels around the issue here but loses two important takeaways:
1) Technologies that revolutionize the world decrease total cost to deliver preexisting value.
2) Salary ~= value, for as many positions as demand supports.
Whether there are more or fewer backend integrators, DevOps engineers, etc. post-transformation isn't foretold.
In recent history, those who upskill their productivity reap larger salaries, while others' positions disappear. I.e. the cloud engineer supporting millions of users, instead of the many bodies that used to take to deliver less efficiently.
It remains to be seen whether AI coding will stimulate more demand or simply increase the value of the same / fewer positions.
PS: If I were career plotting today, there's no way in hell I'd be aiming for anything that didn't have a customer-interactive component. Those business solution formulation skills are going to be a key differentiator any way it goes. The "locked in a closet" coder, no matter how good, is going to be a valuable addition for fewer and fewer positions.
I am finding LLMs useful for coding in that it can do a lot of heavy lifting for me, and then I jump in and do some finishing touches.
It is also sort of decent at reviewing my code and suggesting improvements, writing unit tests etc.
Hidden in all that is I have to describe all of those things, in detail, for the LLM to do a decent job. I can of course do a "Write unit tests for me", but I notice it does a much better job if I describe what are the test cases, and even how I want things tested.
If anyone's moving the "architecture" goal posts it would be anyone who thinks that "architecture" so much as fits into the context window of a modern LLM, let alone that they are successfully doing it. They're terrible architects right now, like, worse than useless, worse than I'd expect from an intern. An intern may cargo cult design methodologies they don't understand yet but even that is better than what LLMs are producing.
Whatever the next generation AI is, though, who can tell. What an AI could do that could actually construct symbolic maps of a system, manipulate that map directly, then manifest that in code, could accomplish is difficult to say. However nobody knows how to do that right now. It's not for lack of trying, either.
This was the response by non-developers to make it obsolete to need to spell out your business details to an expensive programmer who, we presume, will just change them anyhow and make up their own numbers!
That didn't work for shit either, although to the authors point it did create a ton of jobs!
> Code is not an asset, it's a liability.
> Every line must be maintained, debugged, secured, and eventually replaced. The real asset is the business capability that code enables.
> The skill that survives and thrives isn't writing code. It's architecting systems. And that's the one thing AI can't do.
There's a pretty big herd of sacred cows in programming and this debate always surfaces them. But I remember similar arguments being made about Go once and its version of human beings' special something. We saw then that sacred cows don't live long when AI really arrives.
It can happen that AI will get good enough to help with the human aspect of software development but using playing Go as an analogy doesn't really work.
SQL, with all its warts, has not been dethroned. HTML is still king.
This is like saying your transportation fleet as a delivery company isn’t an asset but a liability, it makes no sense.
Almost all assets require varying amounts of maintenance.
1. The push for "software architects" to create plans and specifications for those pesky developers to simply follow. I remember around 2005, there was some hype around generating code from UML and having developers "just" fill in the blanks. The result I've observed was insanely over-engineered systems where even just adding a new field to be stored required touching something like 8 files across four different layers.
2. The "agile transformation" era that followed shortly after, where a (possibly deliberate) misunderstanding of agile principles led to lots of off-the-shelf processes, roles, and some degree of acceptance for micromanaging developers. From what I've seen, this mostly eroded trust, motivation and creativity. Best case scenario, it would create a functioning feature factory that efficiently builds the wrong thing. More often than not, it just made entire teams unproductive real fast.
What I've always liked to see is non-developers showing genuine interest in the work of developers, trying to participate or at least support, embracing the complexity and clarifying problems to solve. No matter what tools teams use and what processes they follow, I've always seen this result in success. Any effort around reducing the complexity inherent in software development, did not.
Is this bad news? It means that managers who think too much of their "great" ideas (without having a deep knowledge of the respective area) and want "obedient" subordinates will be in for a nasty surprise. :-)
It's entirely possible that in 5 or 10 years at least some developers will be fully replaced.
(And probably a lot of people in HR, finance, marketing, etc. too.)
I think you hit on something with finance as well. Give Microsoft a decade of improving AI's understanding of Excel and I'm thinking a whole lot of business analyst types would be unnecessary. Today, in an organization of 25 or 50 thousand employees, you may have dozens to hundreds depending on the industry. Ten years from now? Well, let's just say no one is gonna willingly carry hundreds of business analysts salaries on their books while paying the Microsoft 365AI license anyway. Only the best of those analysts will remain. And not many of them.
But also thousands of companies are going to be able to implement with a team of 1-10 people what before was only available to organizations of 25 or 50 thousand employees.
Maybe in 5 or 10 years things will change, but at this point I can't see myself being replaced without some sort of paradigm shift, which is not what the current brand of AI improvements is offering - it seems like they are offering iterations of the same thing over and over, each generation slightly more refined or with more ability to generate output based upon its own output - so I see no reason to assume my job is in jeopardy just because it might be at some later date.
Someone needs to tell me what exactly is going to change to cause this sudden shift in what AI can do, because right now I don't see it. It seems to have given people a licence to suggest science fiction be treated like a business plan.
And don't think that just because the crazy "everything will be AI in 6 months" predictions predictably haven't come to pass, it won't ever happen.
I'm old enough to remember the failure of online clothes shopping in the dot-com era. Sometimes things just take a while.
Sure, it not yet happening doesn't mean it won't ever happen, but it's also no evidence that it will. When the latest apocalypse cult fails to predict the end of the world, does that make you more or less convinced that the world will end the next time someone yells it? The longer this future developer apocalypse is delayed, the less credible it seems.
I mean... hopefully it is really obvious why those are very different!
But these are the precise improvements that require a fundamental change to how these systems work.
So far, no one has figured out how to make AI systems achieve this. And yet, we're supposed to believe that tinkering with LLMs will get us there Real Soon Now.
They can be a useful tool, but their current capabilities and (I personally believe) their ability to improve indefinitely are wildly overhyped. And the industry as a whole has some sort of blinders on, IMO, related to how progress made with them is lumpy and kind of goes in both directions in the sense that every time someone introduces their grand new model and I play around with it I'll find some things it is better at than the previous version and some things it is worse at than the previous version. But number go up, so progress... I guess?
On one hand I can laugh this all off as yet another management fad (and to be clear, I don't think LLM usage is a fad, just the idea that this is going to be world-changing technology rather than just another tool), but what scares me most about the current AI hype isn't whether LLMs will take all of our jobs, but rather the very real damage that is likely to be caused by the cadre of rich and now politically powerful people who are pushing for massive amounts of energy production to power all of this "AI".
Some of them are practically a religious cult in that they believe in human-caused climate change, but still want to drastically ramp up power production to absurd levels by any means necessary while handwaving away the obvious impact this will have by claiming that whatever damage is caused by the ramp up in power production will be solved when the benevolent godlike AI that comes out on the other side will fix it for us.
Yeah, I uh don't see it working out that way. At all.
Some roles were entirely replaced already, like landing page developers. But the number of AI/nocode developers is much bigger and growing fast, so the overall number of dev roles wasn't reduced. That is just more of the same in tech, keeping up with it.
This could explain the cycle by itself. Dynamic equations often tend to oscillate. Anything that temporarily accelerates the production of code imposes a maintenance cost later on.
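A toy model with made-up coefficients shows the oscillation: a burst of shipped code eats next quarter's capacity as maintenance, which suppresses output, which frees capacity again.

```python
# Toy feedback loop, coefficients invented: this quarter's maintenance load is
# proportional to last quarter's output, and leftover capacity goes to new code.
maintenance = 0.0
for quarter in range(10):
    new_code = max(100.0 - maintenance, 0.0)  # capacity left over for features
    maintenance = 0.9 * new_code              # the upkeep bill arrives next quarter
    print(f"Q{quarter}: new_code={new_code:5.1f}  maintenance={maintenance:5.1f}")
```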
And the most valuable skill in defending a stance is moving goal posts.
This time it really _is_ different, and we're looking at a world totally saturated with an abundance of bits. This will not be a simple restructuring of labor markets but something very significant and potentially quite severe.
https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-...
That's interesting, I remember talking to the CTO of a big American bank in the nineties who told me the opposite. They wanted to buy rather than build.
> The result wasn't fewer developers
Makes me wonder if the right thing to do is to get rid of the non-developers instead?
In other words, they are very economical replacements for many non-developers.
>What actually happens isn't replacement, it's transformation.
"Statement --/, negation" pattern is the clearest indicator of ChatGPT I currently know.
The current generation of generative AI based on LLMs simply won't be able to properly learn to code large code bases, and won't make the correct evaluative choices of products. Without being able to reason and evaluate objectively, you won't be a good "developer" replacement. Similar to how, when asked about (complex) integrals, LLMs will often end their answer with "solution proved by derivation", not because they have actually done it (they will also end with this on incorrect integrals), but because that's what their training data does.
The same day that a tutorial from a Capgemini consultant on how to write code using AI appeared here, I heard from a project manager who has AI write up code that is then reviewed by the human project team--because that is far easier.
I expect most offshoring to go the way of the horse and buggy because it may be easier to explain the requirements to cursor, and the turnaround time is much faster.
Visual programming (NoCode/LowCode) tools have been very successful in quite a few domains. Animation, signal processing, data wrangling etc. But they have not been successful for general purpose programming, and I don't think they ever will be. More on this perennial HN topic at:
https://successfulsoftware.net/2024/01/16/visual-vs-text-bas...
Writing code should almost be an afterthought to understanding the problem deeply and iteratively.
I don't quite agree. I see the skill in translating the real world with all its inconsistencies in to something a computer understands.
And this is where all the no/lo-code platforms fall apart. At some point that translation step needs to happen and most people absolutely hate it. And now you hire a dev anyway. As helpful as they may be, I haven't seen LLMs do this translation step any better.
Maybe there is a possibility that LLMs/AI remove the moron out of "extremely fast moron" that are computers in ways I haven't yet seen.
I suspect that if current trends continue, today's higher level languages will eventually become lower level languages in the not so distant future. It will be less important to know them, just like it is not critical to know assembly language to write a useful application today.
System architecture will remain critical.
Building a software was always about using those abstractions to solve a problem. But what clients give us are mostly wishes and wants. We turn those into a problem, then we solve that problem. It goes from "I want $this" (requirement) to "How can $this be done?" (analysis), then to "$this can be done that way" (design). We translate the last part into code. But there's still "Is $this done correctly?" (answered by testing) and "$this is no longer working" (maintenance).
So we're not moving to natural language, because the whole point of code is to ossify design. We're moving towards better representation of common design elements.
God I felt like I was the only one that noticed. People would say 'DevOps can code' as if that made DevOps a new thing, but being able to automate anything was a core principle of the SAGE-style systems admin in the 90s / early 2000s.
The author of this post is right, code is a liability, but AI leaders have somehow convinced the market that code generation on demand is a massive win. They're selling the industry on a future where companies can maintain "productivity" with a fraction of the headcount.
Surprisingly, no one seems to ask (or care) about how product quality fares in the vibe code era. Last month Satya Nadella famously claimed that 30% of Microsoft's code was written by AI. Is it a coincidence that Github has been averaging 20 incidents a month this year?[1] That's basically once a work day...
Nothing comes for free. My prediction is that companies over-prioritizing efficiency through LLMs will pay for it with quality. I'm not going to bet that this will bring down any giants, but not every company buying this snake oil is Microsoft. There are plenty of hungry entrepreneurs out there that will swarm if businesses fumble their core value prop.
I am in the other camp. Companies ignoring AI are in for a bad time.
My point was more of a response to the inflated expectations that people have about AI. The current generation of AI tech is rife with gotchas and pitfalls. Many companies seem to be making decisions with the hope that they will out-innovate any consequences.
[1] AI: Accelerated Incompetence. https://www.slater.dev/accelerated-incompetence/
But if it's false, there's no saying you can't eventually have an ai model that can read your entire aws/infra account, look at logs, financials, look at docs and have a coherent picture of an entire business. At that point the idea that it might be able to handle architecture and long term planning seems plausible.
Usually when I read about developer replacement, it's with the underlying assumption that the agents/models will just keep getting bigger, better and cheaper, not that today's models will do it.
When AIs are largely on their own, their practices will evolve as well, but without there being a population of software developers who participate and follow those changes in concepts and practices. There will still have to be a smaller number of specialists who follow and steer how AI is doing software development, so that the inevitable failure cases can be analyzed and fixed, and to keep the AI way of doing things on a track that is still intelligible to humans.
Assuming that AI will become that capable, this will be a long and complex transition.
I keep saying that - AI is the brick maker - you build the house. And it's your decision to build that house that only needs bricks in the right place ...
Seeing the difference in complexity between a distributed "monolith" and an actual one makes me wonder how serious some of us are about serving the customer. The speed with which you can build a rails or PHP app makes everything proposed since 2016 seem kind of pointless from a business standpoint. Many SaaS B2B products could be refactored into a single powershell/bash script.
It can take a very firm hand to guide a team away from the shiny distractions. There is no way in hell an obsequious AI contraption will be able to fill this role. I know for a fact the LLMs are guiding developers towards more complexity because I have to constantly prompt things like "do not use 3rd party dependencies" and "demonstrate using pseudocode first" to avoid getting sucked into npm Narnia.
However, if the tooling has improved 10x, then the product complexity has gone up 100x. Nowadays, you can one-shot a Tetris game using an LLM. Back in the day this would take weeks, if not months. But now, nobody is impressed by a Tetris level game.
The only people that value quality are engineers. Any predictions of the future by engineers that rely on other people suddenly valuing quality can safely be ignored.
If customers didn't value quality, then every startup would have succeeded, just by providing the most barely functioning product at the cheapest prices, and making enough revenue by volume.
You've just described hustle culture. And yes, it does lead to business success. Engineers don't like hustle.
> Hustles don't fail
Then by definition there should be few engineers with financial problems, right? Almost every engineer wants to succeed with their side hustle
You're still only thinking in terms of startups. I'm thinking about landscapers and ticket scalpers. No engineer is doing that. But if you were willing to, you'd make money.
It sounds a lot like you're saying "all engineers are lazy" and that's just obviously wrong.
> providing the most barely functioning product at the cheapest prices, and making enough revenue by volume
This is hustling.
Plenty of engineers are building barely-functional products as fast (cheap, because time is money) as can be and doing a ton of volume. The entire Bangalore contractor scene was built this way, as well as a ton of small Western contractor shops. You honestly think no engineers understand undercutting competition? Really?
I'm not sure though I'd call business folks with software products that are hustling engineers. Different mindset.
The only people that often value quality are engineers.
I might even add that the overwhelming majority of engineers are happy to sacrifice quality - and ethics generally - when the price is right. Not all, maybe.
It's a strange culture we have, one which readily produces engineer types capable of complex logic in their work, and at the same time, "the overarching concern of business is always profit" seems to sometimes cause difficulty.
We are in this weird twilight zone where everything is still relatively high quality and stuff sort of works, but in a few decades shit will start degrading faster than you can say “OpenAI”.
Weird things will start happening, like tax systems for the government not being able to be upgraded while consuming billions, infrastructure failing for unknown reasons, and simple non- or low-power devices that are now ubiquitous becoming rare. Everything will require subscriptions and internet access and nothing will work right. You will have to talk to LLMs all day.
But, then there will be a demand for "all-in-one" reliable mega apps to replace everything else. These apps will usher in the megacorp reality William Gibson described.
First of all, all the most successful software products have had very high quality. Google search won because it was good and fast. All the successful web browsers work incredibly well. Ditto the big operating systems. The iPhone is an amazing product. Facebook, Instagram, TikTok: whatever else you think of them, these are not buggy or sluggish products (especially in their prime). Stripe grew by making a great product. The successful B2B products are also very high quality. I don't have much love for Databricks, but it works well. I have found Okta to be extremely impressive. Notion works really well. (There are some counterexamples: I'm not too impressed by Rippling, for instance.)
Where are all these examples of products that have succeeded despite not valuing quality?
I think you're really close, with one nuance.
Business does not value CODE quality. Their primary goal is to ship product quickly enough that they can close customers. If you're in a fast moving or competitive space, quality matters more because you need to ship differentiating features. If the space is slow moving, not prone to migration, etc, then the shipping schedule can be slower and quality is less important.
That said, customers do care about "quality", but they likely define it very differently: primarily as "usability".
They don't care about the code behind the scenes, what framework you used, etc as long as the software a) does what they want and b) does it "quick enough" in their opinion.
Business folks love to say this, but a lot of the time it glosses over a pretty inherent coupling between code quality and doing what users want quickly enough. I've worked on a lot of projects with messy code, and that mess always translated into problems which users cared about. There isn't some magical case where the code is bad and the software is great for the users--that's not a thing that exists, at least not for very long.
If someone in the supply chain before you cares more about something being cheap, then that is all you get.
They build hardware-based amp/pedal modelers (e.g. virtual pedalboards + amp) for guitars that get a very steady stream of updates. From a feel and accuracy perspective, they outcompete pretty much everyone else, even much bigger companies such as Line 6 (part of Yamaha). Pretty small company AFAIK, maybe less than 20 people or so. Most of the improvements stem from the CEO's ever-improving understanding of how to model what are very analog systems accurately.
They do almost everything you shouldn't do as a startup:
* mostly a hardware company
* direct sales instead of going through somewhere like Sweetwater
* they don't pay artists to endorse them
* no subscriptions
* lots of free, sometimes substantial updates to the modeling algorithms
* didn't use AI to build their product quickly
Quality is how they differentiate themselves in a crowded market.
This isn't an edge case, either. This is how parts of the market function. Not every part of every market is trapped in a race to the bottom.
Which makes it really obvious that their aim is to get rid of (expensive) developers, not to free up our time so we can work on higher things.
Can it persist in times when borrowing money is not free (nonzero interest rates)?
Think about it this way: five years ago plenty of companies hired more SWEs to increase productivity, gladly accepting additional cost. So it’s not about cost imo.
I might be wrong, but perhaps a useful way to look at all of this is to ignore stated reasons for layoffs and look at the companies themselves.
I think this time there is a key difference: AI coding is fully embedded into a software dev's workflow, and it genuinely cuts loads of work for at least some projects and engineers. In contrast, few, if any, engineers would take the output of a No-Code/Low-Code tool and then maintain it in their repo.
The impact is that we will need fewer engineers as our productivity increases. That alone may not be enough to change the supply and demand curve. However, combined with the current market condition of weak business growth, the curve will change: the fewer new problems we have, the more repetitive the solutions we work on; the more repetitive the solutions, the more accurate AI-generated code becomes; and therefore the less code we need a human to write.
So, this time it will not be about AI replacing engineers, but about AI replacing enough repetitive work that we will need fewer engineers.
I agree with the core of the idea though, and I have written about it as well (https://www.linkedin.com/posts/brunocborges_ai-wont-eliminat...).
Now just for the heck of it I’ll attempt to craft the strongest rebuttal I can:
This blog misses the key difference between AI and all other technologies in software development. AI isn't merely good at writing code. It's good at thinking. It's not going to merely automate software development; it's going to automate knowledge work. You as a human have no place in a world where your brain is strictly less capable than machines in all realms of decision-making.
I think a better comparison is to Jevons Paradox. New technologies make developers more efficient and thus cheaper, which increases demand by more than the efficiency gains.
I don't see us anytime soon running out of things that are worth automating, especially if the cost for that continues to drop.
Weak conclusion as AI already does that quite well.
But AI can do some architecting. It's just not really the sort of thing where an unskilled person with a highly proficient LLM is going to be producing a distributed system that does anything useful.
It seems to me that the net effect of AI will be to increase the output of developers without increasing the cost per developer. Effectively, this will make software development cheaper. I suppose it's possible that there is some sort of peak demand for software that will require fewer developers over time to meet, but, generally, when something becomes cheaper, demand for it tends to increase.
I think the rumors of our demise are overblown.
Yes, this. 100% this. The goal is for a program to serve its goal/purpose with the least amount of code possible. AI does the exact opposite. Now that code generation is easy, there is no longer a natural constraint preventing too much liability.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD103...
What's the difference (from an LLM's point of view) between code generated a week ago and code generated now? How does the LLM know where or how to fix the bug? Why didn't the LLM generate the code without that particular bug to begin with?
> Because, until recently, it was very costly to replace the code. AI "programmers" will create completely new code in a few minutes so there's no need to maintain it. If there are new problems tomorrow, they'll generate the code again.
In order for the programmer to know what change needs to be made to fix the bug, the programmer needs to debug the code first. But if code is costly to replace (and in that case we'll use LLMs to regenerate it from scratch), it is also costly to debug: the reason code becomes costly to replace is that it has grown into an unmaintainable mess... and that's the very same reason debugging is also costly.
Also, it doesn't make sense to ask programmers to debug but tell LLMs to write the code. Why not tell the LLM to do the debugging as well?
So, your scenario of generating "new code" every time doesn't really sustain itself. Perhaps for very tiny applications it could work, but for the vast majority of projects where usually ~100 engineers work, it would lead to an unmaintainable mess. If it's unmaintainable, then no programmer can debug it efficiently, and if no programmer can debug it, no one can tell the LLM to fix it.
AIs can debug code too. And the "programmer" doesn't need to know how to fix it, only to describe what error is happening.
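That loop is easy enough to wire up already; here's a minimal sketch (the test command, file path, and model name are made up for illustration, assuming the OpenAI Python client):

    # Run the tests; if they fail, hand the error output plus the source
    # to an LLM and ask for a diagnosis and a minimal fix.
    import subprocess
    from openai import OpenAI

    client = OpenAI()

    def debug_with_llm(test_cmd: list[str], source_path: str) -> str:
        run = subprocess.run(test_cmd, capture_output=True, text=True)
        if run.returncode == 0:
            return "Tests pass; nothing to debug."
        with open(source_path) as f:
            source = f.read()
        prompt = (
            "This test run failed.\n\n"
            f"Output:\n{run.stdout}\n{run.stderr}\n\n"
            f"Source ({source_path}):\n{source}\n\n"
            "Explain the likely root cause and propose a minimal fix."
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(debug_with_llm(["pytest", "tests/test_checkout.py"], "checkout.py"))

Whether this scales beyond toy codebases is the open question, but the mechanics are straightforward.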
I guess the question is "replaced with what?" How can you be sure it's a 1:1 replacement?
Have you looked at much top-tier website code lately?