I don't even know what this means, but my take: we should stop listening to VCs (especially those like A16Z) who have an obvious vested interest that doesn't match the rest of society's. Granting these people an audience is totally unwarranted; nobody but other tech bros said "we will vibe code everything" in the first place. Best case scenario: they all go to the same exclusive conference, get the branded conference technical vest, and that's where the asteroid hits.
(1) https://philippdubach.com/posts/the-saaspocalypse-paradox/
(2) https://philippdubach.com/posts/the-impossible-backhand/
Acharya’s framing is different from mine (he’s talking his book on software stocks) but the conclusion is the same: the “innovation bazooka” pointed at rebuilding payroll is a bad allocation of resources. Benedict Evans called me out on LinkedIn for this take (https://philippdubach.com/posts/is-ai-really-eating-the-worl...), which I take as a sign the argument is landing.
We work in Software ENGINEERING. Engineering is all about which tools make sense for solving a specific problem. In some cases, AI tools show immediate business value (e.g. TTS for SDRs); in other cases this is less obvious.
This is all the more reason why learning AI/ML fundamentals is critical, in the same way that understanding computer architecture, systems programming, algorithms, and design principles is critical to being a SWE.
Given the number of throwaway accounts that commented, it clearly struck a nerve.
Or maybe they own the debt.
Listen to some of the Marc Andreessen interviews promoting cryptocurrency in 2021.
Do that and you will never listen to him or his associates again.
I really hate the expression "the new normal", because it sort of smuggles in the assumption that there exists such a thing as "normal". It always felt like one of those truisms people deploy to exploit emotions, like "in these trying times" or "no one wants to work anymore".
But I really do think that vibe coding is the "new normal". These tools are already extremely useful, to the point where I don't really think we'll be able to go back. They're getting good enough that you increasingly have to use them. This might sound like I'm supportive of this, and I guess I am to some extent, but I find it exceedingly disappointing, because writing software isn't fun anymore.
One of my most upvoted comments on HN talks about how I don't enjoy programming, but rather problem solving. That was written before I was aware of the vibe coding stuff, and I think I was wrong. I guess I actually did enjoy the process of writing the code, rather than delegating my work to a virtual intern and watching the AI do the fun stuff.
A very small part of me is kind of hoping that once AI has to be priced at "not losing money on every call" levels, I'll be forced to actually think about this stuff again.
I think I would still kind of ask the same questions, though maybe a bit more conceptually. For example, I might see if I could get someone to explain how to build something, and then ask them about data structures that might be useful (e.g. removing a lock by switching to an append-only structure). I find that Codex will generally generate something that "works", but without an understanding of data structures and algorithms its implementation will still be somewhat sub-optimal, meaning that understanding the fundamentals has value, at least for now.
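To make that concrete, here's a minimal sketch of the append-only idea (my own, not Codex output; it assumes CPython, where list.append is atomic under the GIL):

    class AppendOnlyLog:
        # One writer appends; readers take cheap snapshots.
        # Entries are never mutated in place, so readers stay
        # consistent without a mutex.
        def __init__(self):
            self._entries = []

        def append(self, event):
            # In CPython, list.append is atomic under the GIL,
            # so a single writer needs no lock here.
            self._entries.append(event)

        def snapshot(self):
            # Slicing copies the current prefix; later appends
            # can't change what a reader already holds.
            return self._entries[:]

Codex will happily wrap every access in a threading.Lock instead; that "works", but knowing the data structure is what tells you the lock was never needed.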
To illustrate, I'll share what I'm working on now. My company's ops guy vibe coded a bunch of scripts to manage deployments. On the surface, they appear to do the correct thing. Except they don't. The tag for the Docker image is hardcoded in a YAML file and doesn't get updated anywhere unless you do it manually. The docs don't even mention half of the necessary scripts/commands or the implicit setup required for any of it to work in the first place, much less the tags or how any of it actually works. There are two completely different deployment strategies (direct-to-VM with Docker + GCP, and a GKE-based K8s deploy). Neither fully works, and only one has any documentation at all (and that documentation is completely vibed, so it has very low information density). The only reason I'm able to use this pile of garbage at all is that I already know how all of the independent pieces function and can piece it together, but that's after wasting several hours of "why the fuck aren't my changes having an effect." There are very, very few lines of code that don't matter. We already have huge problems with overcomplicated crap made exclusively by humans; that's been hard enough to manage.
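For a flavor of the failure mode, a hypothetical reconstruction of the offending config (all names invented):

    # deploy.yaml (hypothetical; names invented)
    image: gcr.io/acme-prod/api:v1.4.2  # tag hardcoded here, updated nowhere
    # Every "successful" deploy re-ships v1.4.2 until a human bumps this
    # line by hand; hence the hours of "why aren't my changes live?"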
Vibe coding consistently gives the illusion of progress by fixing an immediate problem at the expense of piling on crap that obscures what's actually going on and often breaks existing functionality. It's frankly not sustainable.
That being said, I've gotten some utility out of vibe coding tools, but it mostly just saves me some mental effort of writing boring shit that isn't interesting, innovative, or enjoyable, which is like 20% of mental effort and 5% of my actual work. I'm not even going to get started on the context switching costs. It makes my ADHD feel happy but I'm confident I'm less productive because of the secondary effects.
> The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid.
- Dijkstra [0]
All of you have experienced the ambiguity and annoyances of natural language. Have you ever:

- Had a boss give you confusing instructions?
- Argued with someone only to find you agree?
- Talked with someone and one of you doesn't actually understand the other?
- Talked with someone and the other person seems batshit insane but they also seem to have avoided a mental asylum?
- Used different words to describe the same thing, even while standing next to someone and looking at the same thing?
- Adapted your message so you "talk to your audience"?
- Read or written something on the internet? (where "everyone" is the audience)
Congrats, you have experienced the frustrations and limitations of natural language. Natural language is incredibly powerful, and its ambiguity is both a feature and a flaw, just like the precision of formal languages is both a feature and a flaw. It can take an incredible amount of work to say even very simple and obvious things in a formal language[1], but the ambiguity disappears[2].

Vibe coding has its uses, and I'm sure those will expand, but the idea of it replacing domain experts is outright laughable. You can't get it to resolve ambiguity if you aren't aware of the ambiguity. If you've ever argued with the LLM, take a step back and ask yourself: is there ambiguity? It'll help you resolve the problem and recognize the limits. Just look at the legal system: it's probably one of the most serious efforts to formalize natural language, and we still need lawyers and judges to sit around and argue all day about all the ambiguity that remains.
I seriously can't comprehend how, on a site whose primary users are programmers, this is even an argument. If we somehow missed this in our education (formal or self-taught), how do we not intuit it from our everyday interactions?
[0] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
[1] https://en.wikipedia.org/wiki/Principia_Mathematica
[2] Most programming languages are some hybrid variant. E.g. Python uses duck typing: if it looks like a float, operates like a float, and works as a float, then it is probably a float. Another example is C, which used to be called a "high level programming language" (so is Python a celestial language?). They give up some precision, some lack of ambiguity, for ease.
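A tiny, hypothetical illustration of that trade-off (the function and inputs are mine):

    # Duck typing: mean() never declares what xs must be. Anything that
    # works with sum() and len() is accepted; the check happens at runtime.
    def mean(xs):
        return sum(xs) / len(xs)

    mean([1, 2, 3])     # 2.0: a list of ints quacks like numbers
    mean((0.5, 1.5))    # 1.0: so does a tuple of floats
    mean(["a", "b"])    # TypeError, but only once it actually runs

The flexibility and the ambiguity are the same property: you only find out what "looks like a float" meant when the code executes.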
I don't think that's the argument. The argument I'm seeing most often is that most of us SWEs will become obsolete once agentic tools get good enough to let domain experts iterate on solutions entirely on their own.
sanction8•1h ago
This is your regular reminder that
1) a16z is one of the largest backers of LLMs
2) They named one of the two authors of the Fascist Manifesto their patron saint
3) AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions (quoted from Professor Woodrow Hartzog, "How AI Destroys Institutions"). Or to put it another way: being plausible but slightly wrong and un-auditable, at scale, is the killer feature of LLMs, and this combination of properties makes it an essentially fascist technology, meaning it is well suited to centralizing authority, eliminating checks on that authority, and advancing an anti-science agenda (quoted from the post "A plausible, scalable and slightly wrong black box: why large language models are a fascist technology that cannot be redeemed").
arjie•59m ago
> ...
> For this WE WANT:
> On the political problem:
> Universal suffrage by regional list voting, with proportional representation, voting and eligibility for women.
> ...
> On the social problem:
> WE WANT:
> The prompt enactment of a state law enshrining the legal eight-hour workday for all jobs.
> ...
> On the military issue:
> WE WANT:
> The establishment of a national militia with brief educational services and exclusively defensive duty.
> ...
> On the financial problem:
> WE WANT:
> A strong extraordinary tax on capital of a progressive nature, having the form of true PARTIAL EXPROPRIATION of all wealth.
> ...
0: https://it.wikipedia.org/wiki/Programma_di_San_Sepolcro#Test...
1: https://en.wikipedia.org/wiki/Fascist_Manifesto#Text