If you have it write down every important piece of information and finding in a plan that it keeps updated, why would you even want compaction instead of just starting a blank session that reads that md (rough sketch below)?
I'm kind of surprised that anyone even thinks compaction is currently useful in any way at all. I'm working on something that tries to achieve lossless compaction, but that is incredibly expensive: the process needs around 5 to 10 times as many tokens as the conversation it is compacting.
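To make the first point concrete, by "that md" I mean something like this (hypothetical file name and contents, just to show the shape):

```markdown
# PLAN.md (hypothetical example; have the agent update it every few turns)

## Goal
Fix the flaky retry logic in the sync worker.

## Findings so far
- Retries fire even on 4xx responses (see sync/worker.py, handle_response).
- The flake only reproduces with the staging config.

## Decisions
- Treat 4xx as permanent failures; only retry 5xx and timeouts.

## Next steps
1. Add a regression test for the 4xx case.
2. Re-run the staging repro, then update this file.
```

A fresh session that reads this file starts with zero lossy summary baggage.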
Firstly, it's very useful to have your (or at least some of your) previous messages in context. There's often a lot of nuance it can pick up. This is probably the main benefit: there are often tiny tidbits in your prompts that don't get written to plans.
Secondly, it can keep e.g. long-running background bash commands "going" and know what they are. This is very useful when diagnosing problems with a lot of tedious log prepping/debugging (no real reason these couldn't be moved to a new session, though).
I think better models are much better at joining the dots after compaction. I'd have agreed with you a few months ago that compaction is nearly always useless, but lately I've actually found it pretty good (I'm sure harness changes have helped as well).
Obviously if you have a totally fresh task, start a new session. But I do find compaction helpful on a task that is just about finished but ran out of space, or preferable to a fresh session when you've got some hellish bug to find and it requires a bunch of detective work.
> there are often tiny tidbits in your prompts that don't get written to plans.
Then the prompt describing what should be written down isn't good enough. I don't see how those tidbits would survive any compaction attempt if the LLM won't even write them down when prompted.
> Secondly, it can keep e.g. long-running background bash commands "going" and know what they are. This is very useful when diagnosing problems with a lot of tedious log prepping/debugging (no real reason these couldn't be moved to a new session, though).
I can't really say anything about that, because I've never had to debug background commands that exhaust the context window when started in a fresh one.
I agree they are better now, probably because they have been trained on continuing after compaction, but I still wonder if I'm the only one who doesn't like compaction at all. It's just so much easier for an LLM to hallucinate when it has lossy information instead of no information at all.
It is pretty rare for me to compact, even if I let it run to 160k.
--
Just realized I wouldn't have thought about using ccstatusline based on a quick glance at its README's images. It looks like this for me:
Store things you care about on disk!
aciccarelli2•3h ago
The transcript with the original markup is still sitting at ~/.claude/projects/ as a .jsonl file — the compaction summary just has no pointer back to it.
I found 8+ open issues on the repo describing different symptoms of this same root cause. The proposal is to add line-range annotations to compaction summaries so Claude can surgically recover just the chunk it needs from the transcript on demand. Zero standing token overhead.
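To sketch what that recovery could look like (this is just my own illustration, not anything shipped in Claude Code; the `message`/`role`/`content` field names are what I see in my own transcripts and may vary between versions, so inspect your files first):

```python
#!/usr/bin/env python3
"""Sketch: recover user prompts from a Claude Code session transcript.

Assumes the ~/.claude/projects/ layout described above: one .jsonl file
per session, one JSON object per line.
"""
import json
import sys
from pathlib import Path

def user_prompts(transcript: Path):
    """Yield (line_number, text) for each user message in the transcript."""
    with transcript.open() as f:
        for lineno, raw in enumerate(f, start=1):
            try:
                entry = json.loads(raw)
            except json.JSONDecodeError:
                continue  # skip truncated or corrupt lines
            msg = entry.get("message") or {}
            if msg.get("role") != "user":
                continue
            content = msg.get("content")
            # content is either a plain string or a list of typed blocks
            if isinstance(content, list):
                content = " ".join(
                    block.get("text", "")
                    for block in content
                    if isinstance(block, dict)
                )
            if content:
                yield lineno, content

if __name__ == "__main__":
    # usage: python recover_prompts.py ~/.claude/projects/<project>/<session>.jsonl
    for lineno, text in user_prompts(Path(sys.argv[1])):
        print(f"line {lineno}: {text[:120]}")
```

The line numbers it prints are exactly what a line-range annotation in the compaction summary could point at, so recovery touches only the lines it needs.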
Curious if others have hit this in different scenarios or found workarounds that actually stick.
martinald•2h ago
Fwiw I built a little CLI that could help with this: https://github.com/martinalderson/claude-log-cli. It allows Claude to search its own logs very efficiently. So I'm sure you could add something like "if the session is continued from a previous one, use claude-log-cli to find the user's original prompt", which would pull it out cheaply. I built it to enable self-improving claude.md files (link to the blog post is in the GitHub), but it's so useful for many tasks.
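Something like this in CLAUDE.md works as a starting point (hypothetical wording on my part; check the repo README for the exact invocation):

```markdown
## Continued sessions

If this session was continued (compacted) from a previous one, use
claude-log-cli (https://github.com/martinalderson/claude-log-cli) to
search the prior transcript and recover the user's original prompt
before starting work.
```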