- Outputs are either noisy or missing important details.
- Trade-offs aren't fully listed when the agents provide options.
- Limited context length and the inability to share context between sessions mean I often have to repeat the same information while iterating on a problem.
So I defined some custom workflows that make the agent persist its context to markdown files, where I can review thoroughly, answer inline, cross-reference, or delegate to another agent for deeper analysis. The agent and I keep iterating on the discussion until we resolve every blocker, which builds an auditable trail of discussions and decision-making.
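As a sketch of what one of these persisted discussion files might look like (the file name, headings, and content below are my own illustration, not a format prescribed by any tool):

```markdown
<!-- discussions/db-migration.md (hypothetical example) -->
# Discussion: database migration strategy

## Agent: options
1. Online schema change — no downtime, but slower and operationally complex.
2. Maintenance-window migration — simple, but requires downtime.

## Me: inline answer
Downtime is acceptable for now, so option 2. What are the rollback steps?

## Agent: follow-up
Rollback: restore from the snapshot taken immediately before the migration.

## Decision
Option 2, with a pre-migration snapshot as the rollback path.
```

Because the discussion lives in a file rather than a chat transcript, every question, trade-off, and decision stays reviewable (and versionable) after the session ends.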
I doubt these problems and preferences are mine alone, and there are likely existing tools or skills that provide similar mechanics (e.g. Entire.io). However, I avoided them due to security concerns with third-party plugins and because I dislike the prompting UX: skills, where relevant, would just add more slash commands, while my custom workflows aim for a discussion-like UX.
The repo contains:
- Opinionated mechanics for grouping rules by layer: global + cross-machine, global + machine-specific, project + shared, and project + personal.
- A layout that splits language- and tech-specific rules into small files. Agents read only what's relevant to the current task, so they don't waste context.
- Some general practices that I find useful.
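To make the layering concrete, here is a hypothetical sketch of such a layout (the directory and file names are illustrative assumptions, not the repo's actual structure):

```text
~/rules/                  # global + cross-machine (synced, e.g. via dotfiles)
~/rules.local/            # global + machine-specific (left unsynced)
project/rules/            # project + shared (committed to the repo)
project/rules.local/      # project + personal (gitignored)

project/rules/
├── python.md             # small per-language/per-tech files; the agent
├── terraform.md          # reads only the ones relevant to the task,
└── git-workflow.md       # keeping the rest out of its context window
```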
Consider this repo a template or reference implementation to build upon. More details are in the project README. Enjoy.