It seems that when a project grows beyond a certain scope, LLMs lose the ability to keep the things they come across separate, so everything "blends into a global state" or something, and the output ends up indirectly inspired by unrelated things. All LLMs/agents seem to suffer from this to some degree, as far as I can tell.
It definitely seems to help the LLM retain focus and architectural integrity.
I think a lot of LLM speedups boil down to this (based on personal experience and what I've read). That's fine in some limited use cases, but it's also the antithesis of being a good developer: your job is to learn how things work if it's work you're putting your name behind.
Using LLMs in these scenarios to shortcut the blank page and get you learning faster is the way to go (though sometimes they'll initially send you in the absolute wrong direction).
> Since then, I have vibecoded every single feature … now, this has led to a number of hilarious failures
When I was a junior dev I wouldn’t read every line of a PR. Eventually you learn you should be reading lines that aren’t changed in a PR as well as every line that is. The author seems like a smart guy but more of a researcher than somebody I’d pay to build reliable software.
> 1. AI coding makes it absolutely trivial to add new features later on if you do need it.
When I program, I often think deeply about the features and abstractions I need in order to get them right.
If I need an additional feature, that thus often means I deeply misunderstood the problem domain (which does happen), so the changes needed to add the feature are often deep and amount to "seeing the world with new eyes (eyes that can also see infrared or ultraviolet light)". A little bit like going from the universal-algebra definition of a group to group objects [1] in a braided monoidal category (which, for example, show that there is a deep abstract relationship between groups and Hopf algebras; see for example [2]). A sketch of the group-object definition follows the links below.
I really cannot imagine an AI being capable of doing such deep transformations, which basically mean "rebuild the whole program so that the entire source code is now based on a completely different way of thinking about the problem domain, which goes like this: ...".
[1] https://en.wikipedia.org/wiki/Group_object
[2] https://en.wikipedia.org/wiki/Hopf_algebra#Analogy_with_grou...
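For readers who haven't met [1] before, here is a minimal sketch of the group-object definition in standard category-theory notation (textbook material, nothing specific to my point above):

    A group object in a category $\mathcal{C}$ with finite products and terminal
    object $1$ is an object $G$ together with morphisms
    \[
      m : G \times G \to G \quad (\text{multiplication}), \qquad
      e : 1 \to G \quad (\text{unit}), \qquad
      \iota : G \to G \quad (\text{inverse}),
    \]
    such that the usual group axioms hold as commuting diagrams, e.g. associativity
    \[
      m \circ (m \times \mathrm{id}_G) = m \circ (\mathrm{id}_G \times m)
    \]
    and the inverse law
    \[
      m \circ \langle \mathrm{id}_G, \iota \rangle = e \circ {!}, \qquad {!} : G \to 1 .
    \]
    Taking $\mathcal{C} = \mathbf{Set}$ recovers the ordinary definition of a group;
    replacing $(\times, 1)$ by a tensor $(\otimes, I)$ and adding a compatible
    comultiplication is what leads to Hopf algebras, as in [2].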
I have to refute this. It may make it easier or faster... but definitely not trivial. I had it add a new feature to my simple app, and the way it did it worked, but it fetched the entire list of entities for each entity in a list. Why it didn't create a new "get entity" endpoint, or just cache the whole list and do a local lookup, I don't know... but it absolutely wrecked performance.
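For what it's worth, what it generated is the classic N+1 fetch pattern. A minimal TypeScript sketch of the two approaches (all names and the /api/entities endpoint are made up for illustration, not from my actual app):

    type Entity = { id: string; name: string };

    // Assumed helper: one HTTP call that returns the full entity list.
    async function fetchAllEntities(): Promise<Entity[]> {
      const res = await fetch("/api/entities");
      return res.json();
    }

    // What the generated code effectively did: re-fetch the whole list
    // once per entity, i.e. O(n) network round trips for n ids.
    async function resolveNamesSlow(ids: string[]): Promise<string[]> {
      const names: string[] = [];
      for (const id of ids) {
        const all = await fetchAllEntities(); // full list fetched every iteration
        names.push(all.find((e) => e.id === id)?.name ?? "");
      }
      return names;
    }

    // The cheap alternative: fetch once, then do local lookups against a Map.
    async function resolveNamesCached(ids: string[]): Promise<string[]> {
      const all = await fetchAllEntities(); // single fetch
      const byId = new Map(all.map((e): [string, Entity] => [e.id, e]));
      return ids.map((id) => byId.get(id)?.name ?? "");
    }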