The amount of bugs and tech debt here boggles the mind
But in practice, now that I am working with it, what I needed from those tools already works, with no major bugs so far. I haven’t recreated the tools! Just the parts I need to be able to plug features in and out. Also, many of those features are already available in great libraries (like Tiptap).
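To give a rough idea of what I mean by pluggable, here's a minimal sketch of switching a feature in or out as a Tiptap extension (assuming the standard @tiptap/core, @tiptap/starter-kit and @tiptap/extension-highlight packages; the feature flag is just illustrative):

    // Toggle a feature by including or excluding its Tiptap extension.
    import { Editor } from '@tiptap/core'
    import StarterKit from '@tiptap/starter-kit'
    import Highlight from '@tiptap/extension-highlight'

    const enableHighlight = true // flip to false to "plug out" the feature

    const editor = new Editor({
      element: document.querySelector('#editor') as HTMLElement,
      extensions: [
        StarterKit,
        ...(enableHighlight ? [Highlight] : []),
      ],
      content: '<p>Hello world</p>',
    })

Dropping an extension this way leaves the rest of the editor untouched, which is exactly the plug-in/plug-out property I wanted.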
Is it a common tourist setup or do people stay there long term? What's the rent on that?
Thank god I had company lodging when visiting South Korea
1. https://substackcdn.com/image/fetch/$s_!kXTm!,f_auto,q_auto:...
The moment LLMs can replace engineers, do we even need VCs? Because anyone at any time can will any application they want into existence.
You don't need social connections if end consumers can just conjure up apps themselves with AI.
How ironic if that's how it ends.
Someone has to maintain that code, and the article doesn't mention that caveat even once after the software is built. 99.9999% of the most widely used software is maintained by humans; even if parts of it were vibe-coded, a human has to maintain it so that it keeps functioning correctly, which is a must if it is mission-critical software.
It is like we are celebrating mediocrity under the guise of AI and rebranding 'prototyping' as 'vibe-coding', but worse: software with fast-accumulating technical debt, bolted-on third-party risks, and close to no tests at all.
Eventually, someone has to maintain that software, and surely 9 times out of 10 a typical senior software engineer will look at that vibe-coded slop and either throw it all away or replace those third-party services with existing, robust open-source alternatives.
Vibe-coding just gets you to the same negatives of traditional software engineering faster, without you understanding what you are doing.
But I do believe we'll get to the point they will replace more and more engineers. Yes, I don't know how fast, and I don't know if LLMs will be able to reach that point. But eventually, all that money in research will get somewhere I believe.
Once you do that, then it is not "vibe coding".
> But I do believe we'll get to the point they will replace more and more engineers.
Well, some software engineers will get replaced. However, your claim was "all engineers", which isn't realistic.
Given the amount of safety- and mission-critical software that runs the internet, air traffic control for planes, cars, embedded devices, etc., we will always need human software engineers to review, test, and maintain all of that, including the Linux kernel itself, which runs almost everywhere.
Fully replacing all of them with LLMs would be outright irresponsible.
> But eventually, all that money in research will get somewhere I believe.
Of course, LLM security researchers and consultants breaking vibe-coded apps.
ok :)
All engineers? This doesn't match my hands-on experience at all.
If you give a chainsaw to everyone, it doesn't make everyone a lumberjack. And a chainsaw itself certainly isn't a lumberjack.
If you give Claude Code or the like to everyone, it doesn't make everyone a highly skilled software engineer. And Claude Code itself isn't a highly skilled software engineer.
I've come around to this view. When I first began using these things for building software (the moment ChatGPT dropped), I was immediately skeptical of the view that these things are merely glorified autocomplete. They felt so different than that. These computers would do what I _meant_, not what I _said_. That was a first and very unlike any autocomplete I'd ever seen.
Now, with experience using them to build software and feeling how they are improving, I believe they are nothing more or less than fantastically good autocomplete. So good that it was previously unimaginable outside of science fiction.
Why autocomplete and not highly skilled software engineer? They have no taste. And, at best, they only pretend to know the big picture, sometimes.
They do generate lots of code, of course. And you need something / someone that you can trust to review all that code. And THAT thing needs to have good taste and to know the big picture, so that it can reliably make good judgement calls.
LLMs can't. So using LLMs to write loads of code just shifts the bottleneck from writing code to reviewing code. Moreover, LLMs and their trajectory of improvements do not, to this experienced software engineer, feel like the kind of solution and the kind of improvements needed to get to an automated code review system so reliable that a company would bet its life on it.
Are we going to need fewer software engineers per line of code written? Yes. Are lines of code going to go way up? Yes. Is the need for human code review going to go way up? Yes, until something other than mere LLM improvements arrive.
Even if you assume 100% of code is written by LLMs, all engineers aren't going to be replaced by LLMs.