Built an MCP server that auto-detects your tech stack and installs relevant AI coding skills.
Problem: CLAUDE.md, copilot-instructions, cursor-rules – every tool has its own monolithic instruction format. They grow huge (10K+ tokens) and load on every request.
Solution: Composable skills (~500 tokens each) that sync from registries and load only when matched to your stack.
- 3-tier detection: GitHub SBOM → Specfy (700+ techs) → local fallback
- 25+ skills from Anthropic, OpenAI, GitHub
- Works with Claude, Copilot, Codex
kxbnb•1w ago
One thing I'd be curious about: how do you think about security when skills auto-provision based on stack detection? If a skill gets compromised upstream, the auto-sync could propagate it quickly.
We're working on policy enforcement for MCP at keypost.ai and thinking about similar trust questions - what should be allowed to load/execute vs what needs explicit approval.
DavidGraca•1w ago
How are you dealing with this topic at keypost.ai?