"You did this with an AI and you do not understand what you're doing here" (2025) https://news.ycombinator.com/item?id=45330378
"Comprehension debt: A ticking time bomb of LLM-generated code" (2025) https://news.ycombinator.com/item?id=45423917
A test case to build understanding of safety, and of model, agent, and PEBKAC inadequacy:
Generate a stop light.
Generate a stop light with unit tests.
Also, test that there can never be multiple lights on at once; first in software, then in hardware.
(Never mind that nobody will understand a new, different stop light or its impact; the exercise is to try to code one that's sufficient: validatable per customer specifications, and ideally verifiable per a sufficient formal specification.)
Run the tests, and improve test coverage by parsing the exceptions and variables in the test output.
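The first two steps of the exercise could look something like this minimal Python sketch: a stop light state machine plus a unit test for the "never multiple lights on at once" invariant. Class, method, and test names here are illustrative assumptions, not from any particular generated output:

```python
import unittest


class StopLight:
    """Three-light traffic signal; exactly one light is on at a time."""

    SEQUENCE = ("green", "yellow", "red")

    def __init__(self):
        self._index = 2  # start at red

    @property
    def lights(self):
        """Mapping of light name -> on/off state."""
        return {name: i == self._index for i, name in enumerate(self.SEQUENCE)}

    def step(self):
        """Advance green -> yellow -> red -> green."""
        self._index = (self._index + 1) % len(self.SEQUENCE)


class TestStopLight(unittest.TestCase):
    def test_never_multiple_lights_on(self):
        # The safety invariant: at every step, exactly one light is lit.
        light = StopLight()
        for _ in range(10):  # cover several full cycles
            self.assertEqual(sum(light.lights.values()), 1)
            light.step()

    def test_sequence(self):
        light = StopLight()  # starts at red
        light.step()
        self.assertTrue(light.lights["green"])
        light.step()
        self.assertTrue(light.lights["yellow"])


if __name__ == "__main__":
    unittest.main()
```

Note that the unit test only samples the invariant over a finite number of steps; validating it for all reachable states is exactly where the "verifiable per a formal specification" part of the exercise comes in.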
What is AI slop, and why should projects do PR review on slop when the contributor could have asked an LLM to review their code? GitHub has optional auto-review of all PRs, IIUC.
As a senior engineer looking at a handful of vibe-coded prototypes (apparently sufficient, but with lurking technical debt), should I spend my time vibe-coding more on top, or should I step back and return to sound engineering and software development methods to increase the value, and reduce the risk, of these cool demos auto-generated from very short prompts?
Explain each layer of this stack, and then update the AGENTS.md.