> If OpenAI believed GPT could replace software engineers, why wouldn’t they build their own VS Code fork for a fraction of that cost?
Because believing you can replace some or even most engineers leaves space for hiring the best. It increases the value of the best. And this all assumes the present moment: they could believe they'll have tools in two years that replace many more engineers, yet still hire them now.
> You sit in a meeting where someone describes a vague problem, and you’re the one who figures out what they actually need. You look at a codebase and decide which parts to change and which to leave alone. You push back on a feature request because you know it’ll create technical debt that’ll haunt the team for years. You review a colleague’s PR and catch a subtle bug that would’ve broken production. You make a call on whether to ship now or wait for more testing.
These are all things that LLMs are doing with varying degrees of success, though. They're reviewing code, they can push back on certain approaches (I know because I had this happen with 5.1), and they absolutely can decide which parts of a codebase to change.
And as for turning vague problems into clearer features? Is that not something they're unbelievably suited for?
I personally find LLMs to be fantastic for taking my thoughts to a more concrete state through robust debate.
I see AI turning many other folks' thoughts into garbage because it so easily heads in the wrong direction and they don't understand how to build self-checking into their thinking.
> “You got way more productive, so we’re letting you go” is not a sentence that makes a lot of sense.
Actually, this sentence makes perfect sense if you tweak it slightly:
> You and your teammate got way more productive, so we’re letting (just) you go
This literally happens all the time with automation. Does anyone think the number of people employed in the field of accounting would be the same or higher without the use of calculators or computers?
It can write some types of code. It is fascinating that it can bootstrap moderately complex projects from a single shot. It does a better job at writing unit tests (not perfect) than the fellow human programmer (few people like writing unit tests). It can even find bugs and point out and correct broken code. But apart from that, AI cannot, or at least not yet, write the code - the full code.
If it could write the code, I do not see why not deploy it more effectively to write new types of operating systems, experiment with new programming languages and programming paradigms. The $3B is better spent on coming up with truly novel technology that these companies could monopolise with their models. Well, they can't, not yet.
My gut feeling tells me that this might actually be possible at some point, but at enormous cost that will make it impractical for most intents and purposes. But even if it were possible tomorrow, you would still need people who understand the systems, because without them we are simply doomed.
In fact, I would go as far as saying that the demand for programmers will not plummet but skyrocket, requiring twice as many programmers as we have today. The world simply won't have enough programmers to supply. The reason I think this might actually happen is that the code produced by AI will be so vast over time that even if humans need to handle/understand 1% of it, that will require more than the 50M developers we have today.
> (few people like writing unit tests)
The TDD community loves tests and finds writing code without tests more painful than writing tests before code.
Is your point that the TDD community is a minority?
> It does a better job at writing unit tests (not perfect) than the fellow human programmer
I see a lot of very confused tests out of Cursor etc. that neither understand nor communicate intent. Far below the minimum for a decent human programmer.
AI can write good test boilerplate, but it cannot understand your project for you. If you just tell it to write tests for some code, it will likely fail you. If you use it to scaffold out mocks or test data or boilerplate code for tests which you already know need to exist, it's fantastic.
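A rough sketch of the split I mean (pytest, and names like place_order are made up): the fixture/mock boilerplate is the part an LLM scaffolds well, while the assertion that encodes intent is the part you already know needs to exist.

    from unittest.mock import Mock
    import pytest

    def place_order(cart_total, gateway):
        # Hypothetical stand-in for your own code: charge first, only then create an order.
        result = gateway.charge(cart_total)
        if result["status"] != "succeeded":
            return None
        return {"total": cart_total}

    @pytest.fixture
    def payment_gateway():
        # Boilerplate an LLM scaffolds well: a mock gateway with a canned response.
        gateway = Mock()
        gateway.charge.return_value = {"status": "declined"}
        return gateway

    def test_declined_charge_does_not_create_order(payment_gateway):
        # The intent only you know: a declined charge must not leave an order behind.
        assert place_order(cart_total=100, gateway=payment_gateway) is None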
Apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh water system, and public health ... what have the Romans ever done for us?
This is one of the harder parts of the job IMHO. What is missing from writing “the code” that is not required for bug fixes?
SWEs that do not have (or develop) this skill (filling in the 10% that doesn't work and fully understanding the 90% that works, very, very quickly) will be plumbers in a few years, if not earlier.
That's the problem I've been facing. The AI does 90% of the work, but the 10% that I have to do myself is 20x harder because I don't have as much knowledge of the codebase as I would if I had written all of it by myself, like in the old days (2 years ago).
Similar to "git clone bigproject@github.git"? There is nothing fascinating about recreating something that already existed in the training set. It is fascinating that the AI can make some variations on the original content, though.
> If it could write the code, I do not see why not deploy it more effectively to write new types of operating systems, experiment with new programming languages and programming paradigms.
This is where all the "vibe-coders" disappear. LLMs can write code fast, but so can copy-paste. Most of the "vibe-coded" stuff I see on the Internet is non-functional slop that is super-unoptimized and has open Supabase databases.
To be clear, I am not against LLMs or embracing new technologies. I also don't have this idea that we have some kind of "craft" when we have been replacing other people for the last couple decades.
I've been building a game (fully vibe-coded; the rule is that I don't write or read any lines of code) and it has reached a stage where any LLM is unable to make any change without fully breaking it (for the curious: https://qpingpong.codeinput.com). The end result is quite impressive, but it is far from replacing anyone who has been doing serious programming anytime soon.
Coding agents in particular can be very helpful for senior engineers as a way to carry out investigations, double-check assumptions, or automate the creation of some parts of the code.
One key point is to use their initial output as a draft, as a starting point that still needs to be checked and iterated, often through pair programming with the same tool.
The mid-term impact of this transition is hard to anticipate. We will probably get a wide range of cases, from hyper-productive small teams displacing larger but slower ones, to AI-enhanced developers in organisations with uneven adoption quietly enjoying a lot more free time while keeping the same productivity as before.
Related - how do you get that thing to stop writing comments? If asked not to do so, it will instead put that energy into docstrings, debug logs and what not, poisoning the code for any further "AI" processing.
Stuff like (this is an impression, not an actual output):
// Simplify process by removing redundant operations
int sz = 100;
// Optimized algorithm, removed mapping:
return lookup(load(sz));
Most stuff in comments is actively misleading. Also the horrible desire to write new code rather than read docs, either in-project or on the web...
For writing ffmpeg invocations or single-screen bash scripts, great thing! For writing programs? Actively harmful.
And these "oh, I understand, C is completely incorrect" moments, followed by proceeding to completely sabotage and invalidate everything.
Or assembling some nuclear Python script like MacGyver and running it, nuking even the repo itself if possible.
Best AAA comedy text adventure. Poor people who are forced to "work" like that. But the cleanup work will be glorious. If companies survive that long.
This matches my experience. It's not useful for producing something that you wouldn't have been able to produce yourself, because you still need to verify the output itself and not just the behavior of the output when executed.
I'd peg this as the most fundamental difference in use between LLMs and deterministic compilers/transpilers/codegens.
it's purely for myself, no one else.
I think this is what AI can do at the moment; for mass-market SaaS vibe-coded products, it will be harder. Happy to be proven wrong.
but guess who advises that architecture and implements it... the principal developer/architect.
You can use good security tools, badly.
I was already quite adept in the language and frameworks involved and the risk was very small, so it wasn't a big time sink to review the application PRs. Had I not been, it would have sucked.
For me the lesson learned wrt agentic coding is to adjust my expectations relative to the online rhetoric, and that it can sometimes be useful for small, isolated one-offs.
Also, it's only once in a blue moon that I can think of a program suitable for agentic coding, so I wouldn't ever consider purchasing a personal license.
We need regulations to prevent such large scale abuse of economic goods especially if the final output is mediocre.
It is the McDonald's version of programming, except that McDonald's does not steal the food they serve.
Waymo has been about to replace the need for human drivers for more than a decade and is just starting to get there in some places, but has had basically no impact on demand yet, and that is a task with much less skill expression.
The challenge is writing code in such a way that you end up with a system which solves all the problems it needs to solve in an efficient and intuitive way.
The difference between software engineering and programming is that software engineering is more like a discovery process; you are considering a lot of different requirements and constraints and trying to discover an optimal solution for now and for the foreseeable future... Programming is just churning out code without much regard for how everything fits together. There is little to no planning involved.
I remember at university, one of my math lecturers once said "Software engineering? They're not software engineers, they're programmers."
This is so wrong. IMO, software engineering is the essence of engineering. The complexity is insane and the rules of how to approach problems need to be adapted to different situations. A solution which might be optimal in one situation may be completely inappropriate for a slightly different situation due to a large number of reasons.
When I worked on electronics engineering team projects at university, everyone was saying that writing the microcontroller software was the hardest part. It's the part most teams struggled with, more so than PCB design... Yet software engineers are looked down upon as members of an inferior discipline... Often coerced into accepting the lesser title of 'developer'.
I'm certain there will be AIs which can design optimal PCBs, optimal buildings, optimal mechanical parts, long before we have AI which can design optimal software systems.
> Programming is just churning out code without much regard for how everything fits together
What you’re describing is a beginner, not a programmer
> There is little to no planning involved

> trying to discover an optimal solution for now and for the foreseeable future
I spend so much time fighting against plans that attempt to take too much into account and are unrealistic about how little is known before implementation. If the most Important Difference is that software engineers like planning, does that mean that being an SE makes you less effective?
I agree that you shouldn't plan too much, but my experience is that anticipating requirements is possible and highly valuable. It doesn't require planning but it requires intuition and an ability to anticipate problems and adhere to certain principles.
For example, in my current job, I noticed a pattern that was common to a lot of tasks early on. It was non-obvious. I implemented a module which everyone on my team ended up using for essentially every task thereafter. It could have been implemented in 100 different ways, it could have been implemented at a later time, but the way I implemented it meant that it saved everyone a huge amount of time since early on in the project.
Also, we didn't have to do any refactoring and were later able to add extra capabilities and implement complex requirement changes to all parts of the code retroactively thanks to this dependency.
One time we learned that we had to calculate the target date/time differently and our requirements engineer was very worried that this would require a large refactoring to all our processes. It didn't; we changed it in one place and didn't have to update even a single downstream process.
It was a relatively complex module which required some understanding of the business domain but it provided exactly the right amount of flexibility. Now, all my team members know how to update the config on their own. We haven't yet encountered a case it couldn't handle easily.
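For illustration only (the real module is domain-specific and all of these names are invented), the shape was roughly: a single configurable entry point that every process calls for its target date/time, so a rule change lands in one function instead of in every downstream process.

    from datetime import datetime, timedelta

    # Per-task configuration teammates can edit on their own (invented example rules).
    RULES = {
        "invoice": {"offset_days": 30, "skip_weekends": True},
        "reminder": {"offset_days": 7, "skip_weekends": False},
    }

    def target_datetime(kind: str, start: datetime) -> datetime:
        # The one place that knows how target dates are calculated.
        rule = RULES[kind]
        target = start + timedelta(days=rule["offset_days"])
        if rule["skip_weekends"]:
            while target.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
                target += timedelta(days=1)
        return target

In a shape like this, a change to how the target date is calculated only touches target_datetime, not the downstream processes.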
I have similar stories to tell about many companies I worked for. When AI can replace me, it will be able to replace most entrepreneurs and managers.
If you don't understand that language, code becomes a mystery, and you don't understand what the problem is we're trying to solve.
It becomes this entity, "the code". A fantasy.
Truth is: we know. We knew it way before you. Now, can you please stop stating the obvious? There are a lot of problems to solve and not enough time to waste.
The big issue is that I didn’t know the APIs very well, and I’m not much of a Python programmer. I could have done this by hand in around 5 days with a ramp up to get started, but it took less than a day just to tell the AI what I wanted and then to iterate with more features.
Thus, the root cause of the meetings' existence is BS mostly. That's why you have BS meetings.
The fastest way to drive AI adoption is thus by thinning out org layers.
Mechanical and later electrical calculators replaced human calculators. Accountants switched from having to delegate computation to owning a calculator.
I'm by no means a doomer, but it's obviously a huge change.
Generative coding models will never be 100% perfect. The speed of their convergence to acceptable solutions will decline in complex and novel systems, and at some point there will be diminishing returns to increasing investment in improving their performance.
The cost of software will fall precipitously, and it seems unlikely that the increase in the value of programmers/engineers as they currently practice will offset the decline in the price of software. However, following the law of supply and demand, the supply and the amount of software produced will surely grow, and I think someone has to use the models to build that software. I expect being trained in software engineering will be very helpful for making effective use of these tools, but such training may not be sufficient for a person to succeed in the new labor market.
The scope of problem that a valuable engineer is expected to manage will grow enormously, requiring not only new skills in using generative coding/language models, but also in reasoning about the systems they help create. I anticipate growth in crossover PM / engineering roles. I guess that people who generalize across the stack and current sub-disciplines will thrive and valuable specialties and side-disciplines will include software architecture, electrical engineering, robotics, communication, and business management.
Some people will thrive in this new field, but it may be a difficult transition for many. I suspect that confusion about model capabilities, about how to make the most of them, and about which people are doing valuable things will put a lot of friction and inefficiency into the transition period.
Last thought: given how great models are at coding compared to general knowledge, administrative, and bureaucratic work, I expect models will be widely used to build systems that act as supply shocks on such work. I don't think my argument above applies to such workers. I'm worried most about them.
Currently AI models are inconsistent and unpredictable programmers, but less so when applied to non-novel, small, and focused programming tasks. Maybe that will change, resulting in it being able to do your job. Are you just writing lines of code, organized into functions and modules, using a "hack it till it works" methodology? If so, I suggest being open to change.
Building a house is like a CRUD app: it was already a solved problem technically. AI is like prefabs or power tools. If your job and what you were interested in was building houses, AI is great. If you were a bricklayer, not so much.
Engineer is not an aggrandizing title, it’s the job. Being paid for the hobby of writing code was just an anomaly that AI will close in the majority of the industry IMO.
AI fundamentally changed the programming experience for me in a positive way, but I'm glad that it's not my full-time job. I think it can also have bad effects which cannot be easily avoided in full-time roles under market conditions.
Meanwhile it's depressing that AI is doing the fun part of the job (for me of course, some people quite like the other aspects of the job; and I do like some of them as well, but writing code is still the most fun I have)
Or a more meta point that “LLMs are capable of a lot”?