It's a real problem! Every agent right now has its own weird filename. I love David Crawshaw's sketch.dev, but for reasons passing understanding they chose "dear_llm.md" for theirs.
It makes no sense and it really needs standardisation. I hope this catches on.
It would be nice if it was standardized. Right now I'm using ruler to automate generating these files for all the different standards, as a necessary evil, but I don't envision this problem being solved soon, especially because these coding agents also use different formats for consuming MCP configs.
I don't see a reference to a specific filename in Aider's documentation, can you link to it?
Anthropic appears to be the major holdout here.
> Are there required fields?
> No. AGENTS.md is just standard Markdown. Use any headings you like; the agent simply parses the text you provide.
The LLM's whole shtick is that it can read and comprehend our writing, so let's architect for it at that level.
This isn't guaranteed. Just like we will never have fully self-driving cars, we likely won't have fully human-quality coders.
Right now AI coders are going to be another tool in the tool bucket.
If we're otherwise assuming it reads and follows an AGENTS.md file, then following the README.md should be within reach.
I think our task is to ensure that our README.md is suitable for any developer to onboard into the codebase. We can then measure our LLMs (and perhaps our own documentation) by whether that guidance is followed.
A good example is autonomous driving and local laws / context. "No turn on red. School days 7am-9am".
So you need: where am I, when are school days for this specific school, and what the datetime is. You could attempt to gather that through search. More realistically, though, I think the municipality will make the laws require less context, or some machine-readable (e.g. QR code) transfer of information will be on the sign. If they don't, there's going to be a lot of rule breaking.
Amp used to have an "RFC 9999" article on their website for this as well but the link now appears to be broken.
You can symlink your Cursor / Windsurf / whatever rules to AGENTS.md for backwards compatibility.
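A minimal sketch of what that looks like, assuming the legacy filenames below (check each tool's docs for the names it actually reads):

```python
# Sketch: make the per-tool rule files plain symlinks to AGENTS.md.
# The names in LEGACY_NAMES are assumptions; adjust for the tools you use.
import os

LEGACY_NAMES = ["CLAUDE.md", ".cursorrules", "GEMINI.md"]

for name in LEGACY_NAMES:
    if not os.path.lexists(name):
        os.symlink("AGENTS.md", name)  # name now resolves to AGENTS.md
```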
For me, that gives a 404 with no obvious way to get to https://agents.md; I think either a hyperlink or a redirect would be nice to have as well.
This situation reminds me a bit of ergonomic handle design: designed for a few people, preferred by everyone.
With an agent, I know that if I write once to CLAUDE.md, it will be read by thousands of agents in a week.
With Claude Code and others, if I put a context file (AGENTS.md or whatever) in a project subfolder, e.g. something explaining my database model alongside the related code, it gets added to the root project context when the agent is working in that subfolder.
It sounds to me like this formulation doesn’t support that.
> Place another AGENTS.md inside each package. Agents automatically read the nearest file in the directory tree, so the closest one takes precedence and every subproject can ship tailored instructions. For example, at time of writing the main OpenAI repo has 88 AGENTS.md files.
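The "nearest file wins" lookup is simple enough to sketch. Assuming the agent starts from the directory it's working in, something like this (a hedged sketch, not any particular agent's implementation) captures the idea:

```python
# Sketch of "nearest AGENTS.md wins": walk up from the working directory
# and stop at the first AGENTS.md found. Real agents may merge files instead.
from pathlib import Path
from typing import Optional

def nearest_agents_md(start: Path) -> Optional[Path]:
    for directory in [start, *start.parents]:
        candidate = directory / "AGENTS.md"
        if candidate.is_file():
            return candidate
    return None

print(nearest_agents_md(Path("packages/api/src")))  # hypothetical subproject path
```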
For tiny, throwaway projects, a monolithic .md file is fine. A folder allows more complex projects to use "just enough hierarchy" to provide structure, with index.md as the entry point. Along with top-level universal guidance, it can include an organization guide (easily maintained with the help of LLMs).
index.md
├── auth.md
├── performance.md
├── code_quality
├── data_layer
├── testing
└── etc
In my experience, this works loads better than the "one giant file" method. It lets LLMs/agents add relevant context without wasting tokens on unrelated context, reduces noise/improves response accuracy, and is easier to maintain for both humans and LLMs alike.¹ Ideally with a better name than ".agents", like ".codebots" or ".context".
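For illustration, a hedged sketch of how an agent (or a thin wrapper around it) could pull in index.md plus only the topic files a task needs; the ".agents" directory name and the topic list are assumptions:

```python
# Sketch: always load the entry point, then only the topic docs relevant to
# the task, so unrelated context never enters the prompt.
from pathlib import Path

CONTEXT_DIR = Path(".agents")  # assumed name; ".codebots" or ".context" work the same

def load_context(topics: list[str]) -> str:
    parts = [(CONTEXT_DIR / "index.md").read_text()]
    for topic in topics:
        doc = CONTEXT_DIR / f"{topic}.md"
        if doc.is_file():
            parts.append(doc.read_text())
    return "\n\n".join(parts)

# e.g. a task touching login and tests:
print(load_context(["auth", "testing"]))
```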
I've been experimenting with having a rules.md file within each directory where I want a certain behavior. For example, say I have a directory with different kinds of services like realtime-service.ts and queue-service.ts; I then have a rules.md file at the same level as those files.
This lets me scaffold things pretty fast when prompting by just referencing that file. The name is probably not the best, though.
In any case, I increasingly question the use of an agents file. What's the point, when the agent forgets about it every few prompts and needs to be constantly reminded to go through the file again and again?
Another thought: are folks committing their AGENTS.md? If so, do you feel comfortable with the world knowing that a project was built with the help of AI? If not, how do you durably persist the file?
<Role> <instruction>
The agent only reads the file if its role is defined there.
Inside the project directory, we have a .&lt;coding agent name&gt; folder where the coding agent's state is stored.
Our process kicks off with an `/init` command, which triggers a deep analysis of the entire repository. Instead of just indexing the raw code, the agent generates a high-level summary of its architecture and logic. These summaries appear in the editor as toggleable "ghost comments." They're a metadata layer, not part of the source code, so they are never committed with the actual code. A sophisticated mapping system precisely links each summary annotation to the relevant lines of code.
This architecture is the solution to a problem we faced early on: running Retrieval-Augmented Generation (RAG) directly on source code never gave us the results we needed.
Our current system uses a hybrid search model. We use the AST for fast, literal lexical searches, while RAG is reserved for performing semantic searches on our high-level summaries. This makes all the difference. If you ask, "How does authentication work in this app?", a purely lexical search might only find functions containing the word `login` and functions/classes appearing in its call hierarchy. Our semantic search, however, queries the narrative-like summaries. It understands the entire authentication flow like it's reading a story, piecing together the plot points from different files to give you a complete picture.
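To make the split concrete, here's a rough sketch of the hybrid idea; `embed`, `code_index`, and `summary_index` are hypothetical stand-ins for illustration, not our actual API:

```python
# Hedged sketch: lexical hits over code identifiers plus vector similarity
# over per-file summaries; the two result sets are returned side by side.
from typing import Callable, Dict, List, Tuple

def hybrid_search(
    query: str,
    code_index: Dict[str, List[str]],               # identifier -> code locations
    summary_index: List[Tuple[List[float], str]],   # (embedding, summary text)
    embed: Callable[[str], List[float]],
) -> Dict[str, object]:
    # Lexical side: cheap substring match over identifiers (stand-in for AST search).
    lexical = {ident: locs for ident, locs in code_index.items()
               if query.lower() in ident.lower()}

    # Semantic side: cosine similarity between the query and summary embeddings.
    def cosine(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    q = embed(query)
    semantic = sorted(summary_index, key=lambda p: cosine(q, p[0]), reverse=True)[:5]
    return {"lexical": lexical, "semantic": [text for _, text in semantic]}
```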
It works like magic.
Why does it seem that the solution to no-code (which AI coding agents are) always comes back to "no-code, but actually there is some code behind the scenes, but if you squint enough it looks like no-code"?
Document how to install and use your tool in the README.
Document how to compile and test, along with architecture decisions, coding standards, repository structure, etc., in the agents doc.
They are separate for a good reason. My CLAUDE.md and README.md look very different.