Now with coding agents, in a sense, we can read the "mind" of the system that helped build the feature: why it did what it did, what the gotchas are, and any follow-up action items.
Today I decided to paste my prompts and agent interactions into Linear issues instead of writing traditional notes. It felt clunky, but I stopped and thought, "is this valuable?" It's the closest thing to a record of why a feature ended up the way it did.
So I'm wondering:
- Is anyone intentionally treating agent prompts, traces, or plans as a new form of documentation?
- Are there tools that automatically capture and organize this into something more useful than raw logs?
- Or is this just noise that isn't actually useful in agentic dev?
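On the capture side, even a tiny script gets you most of the way before any product exists. A minimal sketch (function name, log path, and format are all hypothetical, not from any existing tool) that appends each prompt/response pair to a per-feature markdown log you could later link from an issue tracker:

```python
import datetime
from pathlib import Path


def log_agent_exchange(feature: str, prompt: str, response: str,
                       log_dir: str = "docs/agent-log") -> Path:
    """Append one prompt/response pair to a per-feature markdown log."""
    path = Path(log_dir) / f"{feature}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with path.open("a", encoding="utf-8") as f:
        f.write(f"## {stamp}\n\n"
                f"**Prompt**\n\n{prompt}\n\n"
                f"**Response**\n\n{response}\n\n")
    return path
```

The interesting (and unsolved) part is the organizing step this sketch skips: turning an append-only log into something searchable and tied to the code it explains.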
It feels like there's a new documentation pattern emerging around agent-native development, but I haven't seen it clearly defined or productized yet. Curious how others are approaching this.
sshadmand•1h ago
...on the other hand, since we still have humans using the features and interacting with them, knowing what is going on and why the agent made a decision (for better or worse) doesn't seem like something to let go of.