stevendeluth•1h ago
But neither travels cleanly across everything I use, and packing too much into MD files eats context and tokens.
With Empirical, I keep my AGENTS.md lean and let Codex pull context dynamically when it actually needs it.
I can open ChatGPT on my phone, connected to Empirical, and it pulls the same memory context and writing tone I use in Codex or any other connected AI tool. That means:

* less repeated setup
* cleaner, cheaper prompts
* more consistent output across sessions
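For a rough idea of what "keeping AGENTS.md lean" can look like, here is a hypothetical sketch — the wording and the idea of deferring to a connected memory tool are my illustration, not Empirical's documented setup:

```markdown
# AGENTS.md (kept intentionally small)

- Hard, project-specific rules only: build commands, test commands, style gates.
- Do NOT duplicate writing tone, past decisions, or long conventions here —
  query the connected memory tool for those when a task actually needs them.
```

The point is that static instruction files carry only what must always be in context, while everything situational is fetched on demand.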
This is just the tip of the iceberg.