Today, about 65% of our commits are produced by AI agents (Copilot, Cursor, Claude Code, and Kiro). The productivity gain is real, but so are the new classes of drift it introduces.
Each assistant ends up operating from a different context snapshot of our architecture, naming conventions, and ADRs. Some pull from stale instruction files, others from out-of-date wikis. The result: AI-generated code that's locally correct but globally inconsistent.
We built Packmind OSS to tackle this. It’s an open-source framework for Context Engineering — versioning, distributing, and enforcing organizational standards across repos and agents.
What it does:

- Normalizes scattered decisions (docs, reviews, ADRs) into structured "rules" and "prompts"
- Syncs them through MCP servers or the CLI across GitHub, Cursor, Claude, etc.
- Detects drift and repairs it automatically during PRs or CI

Repo: github.com/PackmindHub/packmind
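To make "drift detection" concrete, here is a minimal conceptual sketch of checking code against one versioned rule (a snake_case naming convention). The rule schema and function names are hypothetical illustrations, not Packmind's actual format or API:

```python
import re

# Hypothetical rule: organizational standard captured as data, not tribal knowledge.
# Packmind's real rule schema may differ -- this only illustrates the idea.
SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def find_drift(source: str) -> list[str]:
    """Return Python function names in `source` that violate snake_case.

    In a real setup this kind of check would run in CI or on a PR,
    flagging AI-generated code that is locally valid but off-standard.
    """
    drift = []
    for match in re.finditer(r"^\s*def\s+(\w+)\s*\(", source, re.MULTILINE):
        name = match.group(1)
        if not SNAKE_CASE.match(name):
            drift.append(name)
    return drift
```

The point is that once a convention is expressed as a machine-checkable rule, every agent can be held to the same standard regardless of which context snapshot it was prompted with.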
I’d love to hear how others are approaching this. How do you maintain context integrity when assistants are coding at scale?
(Apache-2.0 licensed, with CLI & MCP support.)