I’ve been pairing with AI coding tools a lot, and kept running into the same
problems:
- Every time I switch tools/models, I have to re-explain the project.
- Specs live in my head or in random chat history.
- The AI happily writes code, but there’s no clear “this is the task, this is
how we verify it, this is how we log the change”.
SPEC-AGENTS.md is a tiny attempt to fix that.
You drop an `AGENTS.md` file (the one in this repo) plus a small `.phrase/`
folder into your project. That file tells the AI to:
- treat docs as the source of truth (`spec_`, `plan_`, `task_`, `change_`,
`issue_`, `adr_`)
- only tackle one atomic `taskNNN` per session
- always write back what happened (what changed, how it was verified)
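To make the conventions concrete, here's a hypothetical layout. Beyond the
prefixes themselves, none of these names are mandated by the repo; the phase
folder and file names below are purely illustrative:

```
your-project/
├── AGENTS.md                      # the conventions the AI follows
└── .phrase/
    └── phases/
        └── 01-dark-mode/          # hypothetical phase folder
            ├── spec_dark_mode.md  # what we want and why
            ├── plan_dark_mode.md  # how we intend to get there
            ├── task001_toggle.md  # one atomic task per file
            ├── change_001.md      # what actually changed, per task
            ├── issue_001.md       # problems discovered along the way
            └── adr_001.md         # decisions worth recording
```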
There’s no server, no binary, no tooling – it’s just conventions the AI is
expected to follow. Any tool that can read files (CLI, editor plugin, chat
with “read files” feature) can play along.
Rough loop:
1. You and the AI update `spec_` / `plan_` in `.phrase/phases/...` to
describe what you want.
2. You break that into small `taskNNN` items, each with a clear output +
verification step.
3. The AI implements one task, runs tests/manual checks, and tells you what
it did.
4. It writes back to `task_` and `change_`, and updates `spec_` /
`issue_` / `adr_` if needed.
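For steps 2-4, a single task file might look roughly like this. The section
names and paths are my own sketch of the "output + verification + write-back"
structure, not something the repo prescribes:

```
# task001 – dark mode toggle (hypothetical)

## Output
- A settings toggle that switches between light and dark themes.

## Verification
- `npm test` passes.
- Manual check: the chosen theme persists across reloads.

## Write-back (filled in by the AI after the session)
- Changed: `src/settings/ThemeToggle.tsx`, `src/theme.css`
- Verified: unit tests green; manual reload check done.
- Logged in: `change_001.md`
```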
The README has an ASCII diagram and a small “dark mode toggle” example
conversation to show what this looks like in practice. There’s also a Chinese
section because I originally wrote this for my own projects.
This is still an experiment. It adds a bit of ceremony, so it’s probably
overkill for one-off scripts, but it feels good for small projects where you
want more structure without bringing in a full PM tool.
I’d love to know:
- Does this doc-first, one-task-per-session style match how you work with AI,
or is it too much?
- If you already use specs (OpenSpec, your own templates, etc.), would you
keep this as a separate “AI contract”, or just integrate the ideas?
- What’s missing for this to be useful in your day-to-day?