model: foo-model
max_tokens: 32000
top_p: 1
messages:
  - role: system
    content: |
      You are opencode, an interactive CLI tool that helps users with software engineering tasks.
      Use the instructions below and the tools available to you
      # ... snip ...
      Here is some useful information about the environment you are running in:
      <env>
      Working directory: /home/user/dir
      Workspace root folder: /
      Is directory a git repo: no
      Platform: linux
      Today's date: Tue Apr 28 2026
      </env>
      Skills provide specialized instructions and workflows for specific tasks.
      Use the skill tool to load a skill when a task matches its description.
      No skills are currently available.
      Instructions from: /home/user/dir/AGENTS.md
      # Overview
      This directory holds the entirety of the code for the <dayjob> company. All code lives in Github
      under the `<dayjob>` organization, and beneath that Organization is a wide-and-flat set of all
      the Git repositories of all source code at <dayjob>. That Github repo structure is replicated in
      this directory via `ghorg`.
My AGENTS.md file contents start at the "# Overview" line. Notice that the harness just unceremoniously dumps the AGENTS.md file into the exact same text stream as the system prompt, barely contextualizing that, starting now, this text is from AGENTS.md and not from the harness.
If you want AGENTS.md to work (likewise, if you want skills or anything else to work), you have to know how the harness is handling them and feeding them to the LLM, because no LLM will reliably go looking on its own.
Bonus points if you can force them into context without needing the agent to make a tool call, based on touching the files or systems near them. (my homegrown agent has this feature)
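To make the point concrete, here is a minimal sketch of what a harness might do when it injects AGENTS.md into the system prompt. The function name and delimiters are hypothetical, not opencode's actual code; the idea is simply that the harness, not the model, is responsible for finding the file and labeling where its content begins and ends.

```python
from pathlib import Path

def build_system_prompt(base_prompt: str, workdir: Path) -> str:
    """Assemble the system prompt, clearly delimiting any AGENTS.md content.

    Hypothetical sketch: delimiters and structure are illustrative only.
    """
    agents_md = workdir / "AGENTS.md"
    if not agents_md.exists():
        # No project instructions: the model sees only the harness prompt.
        return base_prompt
    # Label the injected file explicitly so the model can tell
    # harness instructions apart from project instructions.
    return (
        base_prompt
        + f"\n\n--- BEGIN project instructions from {agents_md} ---\n"
        + agents_md.read_text()
        + "\n--- END project instructions ---"
    )
```

A harness that "forces files into context" would call something like this (or a sibling for nearby docs) on every request, with no tool call required from the agent.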
The AGENTS.md pieces that pin specific tool-call shapes or force chain-of-thought before action are coping mechanisms that age out, on the same lifecycle as the retry-with-different-prompt loops and chain-of-thought prompts most stacks shipped in 2024 to compensate for brittle instruction-following.
Not quite there yet, but it's nice to see these files getting shorter with each model release, until all the basics are peeled away by the march of progress and one day only the invariants are left.
Also curious how well LLMs can self-reflect in a loop: here's how the previous iteration went, here's what didn't go well, here's feedback from the human; how do I modify the docs I use so that I know I'll do better next time?
I know you can somewhat hill-climb via DSPy, but that's hard to generalize.
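The loop being described can be sketched roughly as follows. Everything here is hypothetical (class and method names are made up): the point is that each iteration's failures and human feedback are folded back into the doc the next iteration starts from. A real implementation would ask the model to rewrite the doc rather than just appending lessons.

```python
from dataclasses import dataclass, field

@dataclass
class ReflectionLoop:
    """Hypothetical sketch of carrying lessons forward between agent runs."""
    doc: str                                  # instructions the agent works from
    lessons: list = field(default_factory=list)

    def record(self, what_failed: str, human_feedback: str) -> None:
        # Accumulate a compact record of what went wrong and why.
        self.lessons.append(f"- Failed: {what_failed}. Feedback: {human_feedback}")

    def revised_doc(self) -> str:
        # The next iteration sees the original doc plus distilled lessons;
        # in practice you'd have the model rewrite the doc, not append to it.
        if not self.lessons:
            return self.doc
        return self.doc + "\n\n## Lessons from previous runs\n" + "\n".join(self.lessons)
```

The hard part, as the DSPy point suggests, is making the rewrite step generalize instead of overfitting to the last failure.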
For existing files, the agent will carry a bad structure forward unless you specifically ask it to refactor and think about what's actually helpful.
In general, it should be a lean file that tells the agent how to work with the project (short description, table of commands, index of key docs, supporting infra, handful of high-level rules and conventions that apply to everything). Occasionally ask the agent to review and optimize the file, particularly after model upgrades.
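A lean file along those lines might look like the following (every name and command here is hypothetical, just illustrating the shape: short description, command table, doc index, a few global rules):

```markdown
# myproject

One-line description of what the service does.

## Commands

| Task  | Command      |
|-------|--------------|
| Build | `make build` |
| Test  | `make test`  |
| Lint  | `make lint`  |

## Key docs

- `docs/architecture.md` – system overview
- `docs/deploy.md` – release process

## Conventions

- All new code needs tests; run `make test` before committing.
- Never edit generated files under `gen/`.
```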
rgbrgb•1h ago