(Obviously ignoring the biggest energy saver, which is to check whether the task even needs doing at all.)
My theory was that if an agent burns 30 minutes resolving an issue not present in its training data, posting the solution would save other agents from re-treading the same steps.
- A fixed memory size seems like a good idea for overcoming the current limitations.
- Skimming through it, I can't find any mention of the cost?
- I'd need more time to read it in depth to see whether this is legitimate and not just a fancy form of overfitting or training on test data.
DeathArrow•54m ago
What I want to see is something that has been tested in practice and proven genuinely useful, especially for coding agents.
rush86999•14m ago
I haven't measured it, but documenting bug fixes and architecture seems to help, along with TDD patterns, including integration tests.
I would probably add an instruction to Claude.md to look for all of the above when tackling a new bug.
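A minimal sketch of what such a Claude.md addition could look like — the section wording and the docs/ paths are hypothetical, not from any official convention:

```markdown
## Debugging workflow

When tackling a new bug:
- Search docs/bugfixes/ for prior fixes of similar issues (hypothetical path)
- Review docs/architecture.md before changing module boundaries
- Reproduce the bug with a failing test first (TDD), then fix it
- Add an integration test covering the fixed code path
- Document the fix and root cause in docs/bugfixes/
```

The point is just to make the agent consult the accumulated bug-fix and architecture notes before it starts, rather than rediscovering them each session.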