I'm not sure why he thinks current LLM technologies (with better training) won't be able to do more and more of this as time passes.
As for transparency in future incidents, I now expect post-mortems like this one [0] to go along the lines of: "An AI code generator was used, it passed all the tests, we checked everything, and we still got this error."
There is still one fundamental lesson in [0]: English as a 'programming language' cannot be formally verified, and probabilistic AI generators can still produce perfect-looking code that ends up causing an incident.
This time the engineers will have no understanding of the AI-generated code itself.
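To make that concrete, here is a made-up Go sketch (not the actual code from [0], the names are invented): the function reads cleanly, its only test passes, and the failure mode never shows up until production hands it an input the test suite never covered.

    // metrics.go
    package metrics

    // average returns the mean of xs.
    func average(xs []int) int {
        sum := 0
        for _, x := range xs {
            sum += x
        }
        // Panics with "integer divide by zero" when xs is empty,
        // an input the test below never exercises.
        return sum / len(xs)
    }

    // metrics_test.go
    package metrics

    import "testing"

    func TestAverage(t *testing.T) {
        if got := average([]int{2, 4, 6}); got != 4 {
            t.Fatalf("got %d, want 4", got)
        }
    }

Everything here "passes", and a reviewer who never wrote the code has little reason to ask about the empty-slice case.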
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
politelemon•1h ago
The irony here is that, although this points out quite well how people may have made incorrect judgment calls due to what ultimately comes down to personal experience, the observation itself is also down to personal experience.
An LLM can look these up and still get them wrong, or it can get them right but still pick the wrong conventions to use. More importantly, LLM code assistants will not always be performing lookups: you cannot assume the same IDE and tool configuration profile for everyone. You cannot even assume that everyone is using an IDE with an embedded chatbot.