> With agentic coding, part of what makes the models work today is knowing the mistakes. If you steer it back to an earlier state, you want the tool to remember what went wrong. There is, for lack of a better word, value in failures. As humans we might also benefit from knowing the paths that did not lead us anywhere, but for machines this is critical information. You notice this when you are trying to compress the conversation history. Discarding the paths that led you astray means that the model will try the same mistakes again.
I've been trying to find the best ways to record and publish my coding agent sessions so I can link to them in commit messages, because increasingly the work I do IS those agent sessions.
Claude Code defaults to expiring those records after 30 days! Here's how to turn that off: https://simonwillison.net/2025/Oct/22/claude-code-logs/
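As I understand it, the fix described there amounts to raising the cleanup window via `cleanupPeriodDays` in `~/.claude/settings.json`; the exact value below is an arbitrary "effectively never" choice, so treat this as a sketch rather than the canonical config:

```json
{
  "cleanupPeriodDays": 999999
}
```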
I share most of my coding agent sessions by copying and pasting my terminal session, like this: https://gistpreview.github.io/?9b48fd3f8b99a204ba2180af785c8... - via this tool: https://simonwillison.net/2025/Oct/23/claude-code-for-web-vi...
I've recently been building new timeline sharing tools that render the session logs directly - here's my Codex CLI one (showing the transcript from when I built it): https://tools.simonwillison.net/codex-timeline?url=https%3A%...
And my similar tool for Claude Code: https://tools.simonwillison.net/claude-code-timeline?url=htt...
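A minimal sketch of what a timeline renderer like these does, assuming a simplified one-JSON-object-per-line schema with `type` and `text` fields (real Claude Code and Codex session logs use richer, different formats, so the field names here are illustrative only):

```python
import json

def render_timeline(jsonl_text: str) -> str:
    """Render a JSONL session log as a plain-text timeline.

    Assumes each line is a JSON object with "type" and "text" keys;
    this is a toy schema, not a parser for any real agent's log format.
    """
    lines = []
    for raw in jsonl_text.splitlines():
        if not raw.strip():
            continue  # skip blank lines
        entry = json.loads(raw)
        role = entry.get("type", "?")
        text = entry.get("text", "")
        lines.append(f"[{role}] {text}")
    return "\n".join(lines)

# Build a tiny fake session log and render it.
sample = "\n".join([
    json.dumps({"type": "user", "text": "fix the failing test"}),
    json.dumps({"type": "assistant", "text": "ran pytest; patched off-by-one"}),
])
print(render_timeline(sample))
```

The useful property is that the output is plain text, so the same transcript can be committed to a repo, diffed, and linked from commit messages.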
What I really want is first-class support for this from the coding agent tools themselves. Give me a "share a link to this session" button!
In turn, this could all be plain text and made accessible, whether through version control in a repo or via a central logging platform.
The limits seem to lie not just in GitHub's pull request model, but also in the conventions around how often, and with what context, AI commits to Git. We already have AGENTS.md (or CLAUDE.md, GEMINI.md, .github/copilot-instructions.md) for repository-level context. More frequent commits and commit-level context could help with reviewing AI-generated code properly.
bgwalter•1h ago
In many respects 2025 was a lost year for programming. People speak about tools, setups and prompts instead of algorithms, applications and architecture.
People who are not convinced are forced to speak against the new bureaucratic madness in the same way that they are forced to speak against EU ChatControl.
I think 2025 was less productive, certainly for open source, except that enthusiasts now pay the Anthropic tax (borrowing the term once used for Windows coming preinstalled on machines).
r2_pilot•25m ago
I think 2025 has been more productive for me by measurable metrics, such as code contributed to my projects and a better ability to ingest and act on information. And generally I appreciate the Anthropic tax, because Claude has genuinely been a step-change improvement in my life.
JimDabell•22m ago
I think the opposite. Natural language is the most significant new programming language in years, and this year has had a tremendous amount of progress in collectively figuring out how to use this new programming language effectively.
sixtyj•13m ago
"There’s an AI for that" lists 44,172 AI tools for 11,349 tasks. Most of them are probably just wrappers…
Just as Cory Doctorow uses enshittification for the internet, for AI/LLMs there should be something like dumbaification.
It reminds me of the late 90s, when everything was "World Wide Web". :)
Gold rush it is.