I ran into this building a spec/skill sync system [1] - the "sync once" model breaks down when you need to track whether downstream consumers are aware of upstream changes.
[1] https://github.com/anupamchugh/shadowbook

For transformed files (Cursor's .mdc frontmatter, GEMINI.md sub-directory rules), you re-run `lnai sync`. LNAI maintains a manifest tracking every generated file with content hashes, so it knows what changed and cleans up orphans automatically.
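To make the manifest idea concrete, here is a minimal sketch of hash-based orphan cleanup. The `ManifestEntry` shape and function names are my own guesses for illustration, not LNAI's actual format:

```typescript
import { createHash } from "node:crypto";
import { existsSync, unlinkSync } from "node:fs";

// Hypothetical manifest entry -- not LNAI's real schema.
interface ManifestEntry {
  source: string;    // path under .ai/
  generated: string; // path of the exported file
  hash: string;      // content hash of the generated output
}

const hashContent = (content: string): string =>
  createHash("sha256").update(content).digest("hex");

// A generated file whose entry exists in the old manifest but not the
// new one is an orphan: its source was deleted, so the export goes too.
function cleanOrphans(
  oldManifest: ManifestEntry[],
  newManifest: ManifestEntry[],
): string[] {
  const kept = new Set(newManifest.map((e) => e.generated));
  const orphans = oldManifest
    .map((e) => e.generated)
    .filter((path) => !kept.has(path));
  for (const path of orphans) {
    if (existsSync(path)) unlinkSync(path);
  }
  return orphans;
}
```

Comparing stored hashes against `hashContent` of the current output is also what lets a tool like this skip unchanged files on re-sync.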
So it's not really "sync once", it's "symlink for instant propagation, regenerate-on-demand for transforms." The manifest ensures LNAI always knows its downstream state.
This system can also break down if you create new skills/rules directly in the tool-specific directories (.claude, .codex, etc.), but that is against LNAI's philosophy. If you need per-tool overrides, you put them in `.ai/.{claude/codex/etc.}` sub-directories and LNAI manages them for you.
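Under that convention, an override layout might look something like this (the file names are illustrative, not taken from LNAI's docs):

```text
.ai/
├── AGENTS.md              # shared by all tools
├── rules/                 # shared rules
└── .claude/               # Claude-only overrides, still managed by lnai sync
    └── rules/
        └── claude-specific.md
```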
As for the application-specific documentation, I understand you'd want to share it, since it stays the same for all agents touching the same codebase. But that's easily solved by putting it in DESIGN.md (or wherever) and appending "Remember to check against DESIGN.md before changing the architecture" or similar.
However, given how many tools there are and how fast each one moves, I find myself jumping between them quite often, just to see which ones I like most or whether some tool has improved since I last checked it. In this case LNAI is very helpful.
Most prompts I write, I execute in all four at the same time and literally compare the git diffs from their work, so I totally understand :) But even for comparison, I think that by using the same identical config for all of them, you're not actually seeing and understanding the difference, because again, they need different system prompts. When you compare with identical configs, you're not accurately seeing the best of each model.
iamkrystian17•2h ago
    .ai/
    ├── AGENTS.md
    ├── rules/
    ├── skills/
    └── settings.json   # MCP servers, permissions
Run `lnai sync` and it exports to native formats for 7 tools: Claude Code, Cursor, GitHub Copilot, Gemini CLI, OpenCode, Windsurf, and Codex. The interesting part is it's not just copying files. Each tool has quirks:
- Cursor wants `.mdc` files with `globs` arrays in frontmatter
- Gemini reads rules at the directory level, so rules get grouped
- Permissions like `Bash(git:*)` become `Shell(git)` for Cursor
- Some tools don't support certain features (e.g., Cursor has no "ask" permission level); LNAI warns but doesn't fail
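A permission-translation step like the `Bash(git:*)` → `Shell(git)` one could be sketched as follows. The patterns and the warn-don't-fail behavior are inferred from the description above, not copied from LNAI's source:

```typescript
// Illustrative translation of a shared permission rule into a
// Cursor-flavored one; the mapping table is an assumption.
function toCursorPermission(rule: string): { rule: string; warning?: string } {
  // "Bash(git:*)" -> "Shell(git)"
  const bash = rule.match(/^Bash\((\w+):\*\)$/);
  if (bash) return { rule: `Shell(${bash[1]})` };
  // Features the target tool lacks produce a warning, not a failure.
  if (rule.startsWith("Ask(")) {
    return { rule: "", warning: `Cursor has no "ask" level; skipping ${rule}` };
  }
  return { rule };
}
```

The key design point is the return shape: a missing feature degrades into a warning the sync can surface, rather than aborting the whole export.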
Where possible, it uses symlinks. So `.claude/CLAUDE.md` → `../.ai/AGENTS.md`. Edit the source, all tools see the change immediately without re-syncing.
Usage:
    npm install -g lnai
    lnai init       # Creates .ai/ directory
    lnai validate   # Checks for config errors
    lnai sync       # Exports to all enabled tools
It's MIT licensed. The code is TypeScript with a plugin architecture: each tool is a plugin that implements import/export/validate.

GitHub: https://github.com/KrystianJonca/lnai
Docs: https://lnai.sh
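The plugin contract described above might look roughly like this. The interface and type names are guesses from the description, not LNAI's actual API:

```typescript
// Hypothetical plugin contract inferred from "each tool is a plugin
// that implements import/export/validate"; the real one may differ.
interface SharedConfig {
  agentsMd: string;
  rules: Record<string, string>;
  permissions: string[];
}

interface GeneratedFile {
  path: string;
  content: string;
}

interface ToolPlugin {
  name: string;                                   // e.g. "cursor"
  export(config: SharedConfig): GeneratedFile[];  // .ai/ -> native format
  import(files: GeneratedFile[]): SharedConfig;   // native format -> .ai/
  validate(config: SharedConfig): string[];       // returns warnings
}

// A minimal no-op plugin showing the shape:
const echoPlugin: ToolPlugin = {
  name: "echo",
  export: (c) => [{ path: "ECHO.md", content: c.agentsMd }],
  import: (files) => ({
    agentsMd: files[0]?.content ?? "",
    rules: {},
    permissions: [],
  }),
  validate: () => [],
};
```

This kind of interface is what makes adding an eighth tool cheap: the core only ever talks to the three methods.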
Would appreciate feedback, especially from anyone else dealing with this config hell problem.