This resonates with me for a couple of reasons. One is that despite a good AGENTS.md file and a detailed, specific prompt, I've seen LLM agents generate all sorts of questionable code. For example: making a mistake, running the tests, fixing the mistake, and in the process adding a comment that only makes sense if you watched it make that mistake. Anyone else reading the code later has no context, and the comment is just confusing or misleading...
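A contrived sketch of what I mean (the names and the scenario are made up, not from any real session):

    user_cache = {}                    # hypothetical cache keyed by user id
    user_id, session_id = "u1", "s9"

    # The agent first tried session_id here, saw the test fail, fixed it,
    # and left this note -- which only makes sense if you watched that happen:
    user = user_cache.get(user_id)     # NOTE: must be user_id, NOT session_id

To a fresh reader there was never any reason to suspect session_id, so the comment just raises questions.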
Another example: naming tests purely after the immediate task at hand, which is meaningless in the grand scheme of the codebase.
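Again a made-up illustration (apply_discount is a hypothetical function under test): suppose the task was "fix the discount bug", and the agent writes the first test below instead of the second.

    def apply_discount(total):
        return total if total < 10 else total * 0.9

    # Named after the immediate task -- meaningless once the ticket is closed:
    def test_fix_discount_bug():
        assert apply_discount(total=5) == 5

    # Named after the behaviour -- still reads sensibly a year later:
    def test_no_discount_below_minimum_order_total():
        assert apply_discount(total=5) == 5

The first name only means something while the task is fresh in someone's memory; the second documents what the code is supposed to do.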
Yesterday GitHub asked me to complete a survey on its Copilot coding agent, and it made me realize that some obvious things were missing from my AGENTS.md: notes that normally wouldn't need to be written down, because they align naturally with how human programmers work. When writing a new unit test in a file full of unit tests, I typically copy an existing test that has roughly what I need, paste it, and adapt it. Or at least look at existing tests when building a new one. I've seen LLM agents ignore private helper methods and write full integration-style tests for new test cases, because they don't work like that unless specifically instructed...
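So I've started adding notes along these lines to my AGENTS.md (a rough sketch; the wording is mine, not from any official template):

    ## Writing tests
    - Before adding a unit test, read the other tests in the same file.
      Copy the closest existing test and adapt it rather than inventing
      a new structure.
    - Reuse the file's private helper methods and fixtures; don't stand
      up a full integration-style setup for a unit-level case.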
So yes, I definitely feel that AI can increase tech debt big time unless managed carefully - paved roads are the way to go for human developers and AI agents alike. It does get tricky when you need to branch off the paved road and do something new or never considered before, though...