> If OpenAI believed GPT could replace software engineers, why wouldn’t they build their own VS Code fork for a fraction of that cost?
Because believing you can replace some, or even most, engineers still leaves room to hire the best. If anything, it increases the value of the best. And that's just the picture right now: they could believe tools are coming in two years that will replace many more engineers, and still hire them today.
> You sit in a meeting where someone describes a vague problem, and you’re the one who figures out what they actually need. You look at a codebase and decide which parts to change and which to leave alone. You push back on a feature request because you know it’ll create technical debt that’ll haunt the team for years. You review a colleague’s PR and catch a subtle bug that would’ve broken production. You make a call on whether to ship now or wait for more testing.
These are all things that LLMs are already doing, with varying degrees of success. They're reviewing code, they can push back on certain approaches (I know because I had this happen with 5.1), and they absolutely can decide which parts of a codebase to change.
And as for turning vague problems into clearer features? Is that not something they're unbelievably suited for?
I personally find LLMs to be fantastic for taking my thoughts to a more concrete state through robust debate.
I see AI turning many other folks' thoughts into garbage, because it so easily heads in the wrong direction and they don't understand how to build self-checking into their thinking.
It can write some types of code. It is fascinating that it can bootstrap moderately complex projects from a single shot. It does a better job at writing unit tests (not perfect) than a fellow human programmer (few people like writing unit tests). It can even find bugs and point out and correct broken code. But apart from that, AI cannot, or at least not yet, write the code - the full code.
If it could write the code, I do not see why it isn't deployed more ambitiously to write new types of operating systems, or to experiment with new programming languages and programming paradigms. The $3B would be better spent on coming up with truly novel technology that these companies could monopolise with their models. Well, they can't, not yet.
My gut feeling tells me that this might actually be possible at some point, but at an enormous cost that will make it impractical for most intents and purposes. But even if it were possible tomorrow, you would still need people who understand the systems, because without them we are simply doomed.
In fact, I would go as far as to say that demand for programmers will not plummet but skyrocket, requiring twice as many programmers as we have today. The world simply won't have enough programmers to meet the demand. The reason I think this might actually happen is that the volume of code produced by AI will become so vast over time that even if humans need to handle and understand just 1% of it, that will require more than the 50M developers we have today.
> (few people like writing unit tests)
The TDD community loves tests and finds writing code without tests more painful than writing tests before code.
Is your point that the TDD community is a minority?
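For anyone unfamiliar with the workflow, it looks something like this minimal pytest-style sketch (`slugify` is a made-up example, not from the thread): the test is written first and fails, then just enough code is added to make it pass.

```python
# Red: this test is written before the implementation exists, so it fails.
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"


# Green: the simplest implementation that makes the test pass.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")
```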
> It does a better job at writing unit tests (not perfect) than a fellow human programmer
I see a lot of very confused tests come out of Cursor etc. that neither understand nor communicate intent. Far below the minimum bar for a decent human programmer.
AI can write good test boilerplate, but it cannot understand your project for you. If you just tell it to write tests for some code, it will likely fail you. If you use it to scaffold out mocks or test data or boilerplate code for tests which you already know need to exist, it's fantastic.
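To make that concrete, here is a minimal pytest sketch of the kind of scaffolding that works well (`charge` and the gateway mock are invented for illustration): the fixture and parametrize plumbing is mechanical, but choosing which cases matter is the part that communicates intent.

```python
from unittest.mock import MagicMock

import pytest


# Inlined stand-in for the code under test, so the sketch is self-contained.
def charge(gateway, amount):
    if amount <= 0:
        return "rejected"
    return gateway.charge(amount)["status"]


@pytest.fixture
def gateway():
    # The kind of boilerplate an LLM scaffolds reliably: an external
    # dependency stubbed out with a canned response.
    mock = MagicMock()
    mock.charge.return_value = {"status": "ok"}
    return mock


@pytest.mark.parametrize(
    "amount, expected",
    [
        (100, "ok"),       # happy path
        (0, "rejected"),   # boundary cases: knowing these matter is the
        (-5, "rejected"),  # part that requires understanding the project
    ],
)
def test_charge(gateway, amount, expected):
    assert charge(gateway, amount) == expected
```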
Apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh water system, and public health ... what have the Romans ever done for us?
This is one of the harder parts of the job IMHO. What is missing from writing “the code” that is not required for bug fixes?
SWEs who do not have (or develop) this skill (filling in the 10% that doesn't work, and fully and very quickly understanding the 90% that does) will be plumbers in a few years, if not earlier.
it's purely for myself, no one else.
I think this is what AI can do at the moment; mass-market SaaS vibe coding will be harder. Happy to be proven wrong.
I was already quite adept in the language and frameworks involved, and the risk was very small, so it wasn't a big time sink to review the application PRs. Had I not been, it would have sucked.
For me the lesson learned wrt agentic coding is to adjust my expectations relative to the online rhetoric: it can sometimes be useful for small, isolated one-offs.
Also, it's once in a blue moon that I can think of a program suitable for agentic coding, so I wouldn't ever consider purchasing a personal license.
We need regulations to prevent such large-scale abuse of economic goods, especially if the final output is mediocre.
It is the McDonald's version of programming, except that McDonald's does not steal the food they serve.
The challenge is writing code in such a way that you end up with a system which solves all the problems it needs to solve in an efficient and intuitive way.
The difference between software engineering and programming is that software engineering is more like a discovery process; you are considering a lot of different requirements and constraints and trying to discover an optimal solution for now and for the foreseeable future... Programming is just churning out code without much regard for how everything fits together. There is little to no planning involved.
I remember at university, one of my math lecturers once said "Software engineering? They're not software engineers, they're programmers."
This is so wrong. IMO, software engineering is the essence of engineering. The complexity is insane and the rules of how to approach problems need to be adapted to different situations. A solution which might be optimal in one situation may be completely inappropriate for a slightly different situation due to a large number of reasons.
When I worked on electronics engineering team projects at university, everyone was saying that writing the microcontroller software was the hardest part. It's the part most teams struggled with, more so than PCB design... Yet software engineers are looked down upon as members of an inferior discipline... Often coerced into accepting the lesser title of 'developer'.
I'm certain there will be AIs which can design optimal PCBs, optimal buildings, optimal mechanical parts, long before we have AI which can design optimal software systems.
> Programming is just churning out code without much regard for how everything fits together
What you’re describing is a beginner, not a programmer
> There is little to no planning involved

> trying to discover an optimal solution for now and for the foreseeable future
I spend so much time fighting against plans that try to take too much into account and are unrealistic about how little is known before implementation. If the most important difference is that software engineers like planning, does that mean being an SE makes you less effective?
If you don't understand that language, code becomes a mystery, and you don't understand what problem we're trying to solve.
It becomes this entity, "the code". A fantasy.
Truth is: we know. We knew it way before you. Now, can you please stop stating the obvious? There are a lot of problems to solve and not enough time to waste.
The big issue is that I didn’t know the APIs very well, and I’m not much of a Python programmer. I could have done this by hand in around 5 days with a ramp up to get started, but it took less than a day just to tell the AI what I wanted and then to iterate with more features.
Thus, the root cause of the meetings' existence is mostly BS. That's why you have BS meetings.
The fastest way to drive AI adoption is thus by thinning out org layers.
Or a more meta point that “LLMs are capable of a lot”?