Found this relevant as we increasingly rely on LLM agents.
The key finding, that models lose 60-80% of debugging capability within 2-3 attempts due to context pollution, challenges the current UX of 'chat-based' coding. It suggests we need tools that prioritize 'fresh state injection' over 'conversation history'.
morethananai•7h ago
I've felt this many times, and it explains exactly what I see. Instead of just wiping the context (which works but is lossy), I try to inject richer, structured context that avoids any dependence on chat history. I also built an extension to make capturing that context simpler.
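To make "fresh state injection" concrete, here's a rough sketch of the idea as I understand it; the DebugState fields and render_prompt helper are made up for illustration, not from the paper or any particular tool. Each attempt rebuilds a single-turn prompt from structured state (goal, latest error, relevant snippets, one-line summaries of prior tries) instead of appending to the chat transcript:

    # Illustrative sketch only: names and fields are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class DebugState:
        goal: str                                # what we're trying to fix
        error_output: str                        # latest stack trace / failing output only
        relevant_files: dict[str, str] = field(default_factory=dict)  # path -> snippet
        attempt_summaries: list[str] = field(default_factory=list)    # one line per prior try

    def render_prompt(state: DebugState) -> str:
        # Build a self-contained, single-turn prompt from the current state;
        # no chat transcript is ever carried over between attempts.
        files = "\n\n".join(f"### {p}\n{src}" for p, src in state.relevant_files.items())
        tried = "\n".join(f"- {s}" for s in state.attempt_summaries) or "- none yet"
        return (
            f"Goal: {state.goal}\n\n"
            f"Latest error:\n{state.error_output}\n\n"
            f"Relevant code:\n{files}\n\n"
            f"Previously tried (summaries only):\n{tried}\n"
        )

The point is that the model only ever sees the current state plus compressed summaries of earlier attempts, so failed patches from previous turns can't pollute the context.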
chengchang316•7h ago