I noticed with post-edit hooks that Claude would do things like import a Python module in one edit, then add the code that uses it in another. The lint hook would fire after the first edit and complain about the unused import, which seemed to confuse or slow Claude down. It got there in the end, but with some unnecessary drama polluting the context.
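One way to cut down on that churn might be to filter the linter's output inside the hook before it ever reaches Claude, dropping diagnostics that are usually transient mid-task, like unused imports. A minimal sketch, assuming ruff is run with `--output-format=json` (which emits a list of objects each carrying a `code` field); the set of "transient" codes here is my own guess, not anything official:

```python
import json

# Codes that are often temporarily true mid-task: F401 (unused import)
# and F811 (redefinition). Suppressing them avoids nagging Claude about
# an import it is about to use in the next edit.
TRANSIENT_CODES = {"F401", "F811"}

def filter_ruff_output(ruff_json: str) -> list[dict]:
    """Keep only the diagnostics worth interrupting the model for.

    Expects the JSON string produced by `ruff check --output-format=json`.
    """
    diagnostics = json.loads(ruff_json)
    return [d for d in diagnostics if d.get("code") not in TRANSIENT_CODES]
```

The hook script would run ruff, pass its output through this filter, and only report back to Claude if anything survives.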
Some advice suggests using "stop" hooks instead, so Claude lints only after it has finished its current task, but apparently by that point it can lose track of which files it changed. That's possibly not a problem if you just lint everything in a small project.
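For the small-project case, the stop-hook variant could be as simple as linting the whole tree, sidestepping the "which file changed" question entirely. A sketch of what that might look like in `.claude/settings.json`; the field names follow Claude Code's hook schema as I understand it, so treat them as illustrative rather than authoritative:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "ruff check ." }
        ]
      }
    ]
  }
}
```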
I also see potential for running automated fixes without bothering Claude about them at all. Though again there's potential for the LLM to get confused by things happening "behind its back"; I've seen edits get redone when the underlying file has changed.
One final idea: fixing lint issues feels like something a cheaper sub-agent could handle without bothering the main model for small things, perhaps even more so with this kind of prompt-based feedback.
ZeroGravitas•1h ago