Another variant of "you're holding it wrong," which attempts to cover up a real and serious issue with AI codegen: it's simply not reliable at producing good code. That's completely understandable when we think about it as what it IS (a token predictor) rather than what we WISH it to be: an intelligent entity that understands the domain it's in and can contribute meaningfully to the codebase.
Yes, your AI codegen agent may occasionally produce "good enough" code, but it will always require validation by a designer who understands the domain, the tooling, and the requirements. At least, it will as long as we're using the current mechanisms for training and inference. This is why LLMs will never reach sentience, and why smarter people are quietly approaching AGI differently from the AI grifters who will, in all likelihood, simply precipitate the next bubble burst, as soon as the reacharounds between all these companies reach critical mass and can no longer be supported by the previously gullible masses.
We should also all be wary of companies aiming to shift the blame for their design failures onto the user.
davydm•3h ago