When you ask "what breaks if I change this function?", the AI greps for text patterns, reads random files hoping to find context, and eventually gives up and asks you for more.
CKB gives AI tools the knowledge they're missing. It indexes your codebase and exposes 80+ MCP tools for:
- Impact analysis: blast radius with risk scores before you touch anything
- Dead code detection: confidence-scored candidates based on call graphs + optional telemetry
- Security scanning: 26 patterns for exposed secrets (API keys, tokens, credentials)
- Ownership: CODEOWNERS + git-blame fusion with time decay
- Affected tests: run only what matters instead of the full suite
- Multi-repo federation: query across all your repositories
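Since everything above is exposed over MCP, a client would drive it with a standard `tools/call` request. A minimal sketch of what such a request could look like; the tool name `impactAnalysis` and its arguments are illustrative assumptions, not CKB's actual schema:

```python
import json

# Hypothetical MCP tools/call request for an impact-analysis query.
# Tool and argument names are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "impactAnalysis",
        "arguments": {
            "symbol": "payments.ProcessRefund",  # hypothetical target function
            "depth": 2,                          # how far to trace callers
        },
    },
}

print(json.dumps(request, indent=2))
```

The response would carry the blast radius and risk scores back as structured tool output, which the AI reads instead of grepping.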
Works with Claude Code, Cursor, Windsurf, VS Code, and anything that speaks MCP. Also has CLI and HTTP API for CI/CD integration.
Technical details:
- Written in Go
- Uses SCIP indexes for precise symbol resolution
- Incremental indexing (updates in seconds)
- Presets system for token optimization (load 14-81 tools based on task)
- Three-tier caching with auto-invalidation
Install: `npm install -g @tastehub/ckb && ckb init`
Free for personal use and small teams. Source on GitHub.
Would love feedback, especially on the MCP tool design and what's missing for your workflows.
SimplyLiz•1w ago
Presets control tool availability, not output truncation. The core preset exposes 19 tools (~12k tokens for definitions) vs full with 50+ tools. This affects what the AI can ask for, not what it gets back. The AI can dynamically call expandToolset mid-session to unlock additional tools when needed.
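The gating can be pictured as a whitelist over the tool registry: a preset picks which tool definitions get advertised, and expandToolset widens that set mid-session. A rough sketch; the preset contents and helper names here are assumptions about the mechanism, not CKB's code:

```python
# Sketch of preset-based tool gating: a preset controls WHICH tool
# definitions the AI sees, not how results are shaped.
# Preset contents are invented for illustration.
PRESETS = {
    "core": {"explore", "findReferences", "getCallGraph"},
    "full": {"explore", "findReferences", "getCallGraph",
             "understand", "prepareChange", "deadCode"},
}

def expand_toolset(active: set, extra: set) -> set:
    # Mirrors the expandToolset idea: unlock more tools mid-session.
    return active | extra

session = set(PRESETS["core"])
session = expand_toolset(session, {"deadCode"})
print(sorted(session))
```

The point of the design is that a smaller advertised set shrinks the tokens spent on tool definitions without limiting what any individual call returns.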
Depth parameters control which analyses run, not result pruning. For compound tools like explore:
- shallow: 5 key symbols, skips dependency/change/hotspot analysis entirely
- standard: 10 key symbols, includes deps + recent changes, parallel execution
- deep: 20 key symbols, full analysis including hotspots and coupling
This is additive query selection. The call graph depth (1-4 levels) is passed through unchanged to the underlying traversal—if you ask for depth 3, you get full depth 3, not a truncated version.
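One way to read the tiers: each tier toggles whole analyses on or off, while the call-graph depth flows through untouched. A sketch under those assumptions (the analysis names come from the tier descriptions above; the function itself is illustrative):

```python
# Sketch: depth tiers select which analyses run; the call-graph
# depth (1-4) is forwarded verbatim, never clamped or truncated.
TIERS = {
    "shallow":  {"key_symbols": 5,  "analyses": set()},
    "standard": {"key_symbols": 10, "analyses": {"dependencies", "recent_changes"}},
    "deep":     {"key_symbols": 20, "analyses": {"dependencies", "recent_changes",
                                                 "hotspots", "coupling"}},
}

def plan_explore(tier: str, call_graph_depth: int) -> dict:
    cfg = TIERS[tier]
    return {
        "key_symbols": cfg["key_symbols"],
        "analyses": sorted(cfg["analyses"]),
        "call_graph_depth": call_graph_depth,  # passed through unchanged
    }

print(plan_explore("standard", 3))
```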
On token optimization specifically: CKB tracks token usage at the response level using WideResultMetrics (measures JSON size, estimates tokens at ~4 bytes/token for structured data). When truncation does occur (explicit limits like maxReferences), responses include transparent TruncationInfo metadata with reason, originalCount, returnedCount, and droppedCount. The AI sees exactly what was cut and why.
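The ~4 bytes/token heuristic and the truncation metadata can be sketched in a few lines. The field names follow the TruncationInfo fields listed above; the functions themselves are illustrative, not CKB's implementation:

```python
import json

def estimate_tokens(payload: dict) -> int:
    # WideResultMetrics-style estimate: JSON byte size / ~4 bytes per token.
    return len(json.dumps(payload).encode("utf-8")) // 4

def truncate(items: list, limit: int, reason: str) -> dict:
    # Apply an explicit limit (e.g. maxReferences) and report what was cut.
    kept = items[:limit]
    return {
        "results": kept,
        "truncation": None if len(items) <= limit else {
            "reason": reason,
            "originalCount": len(items),
            "returnedCount": len(kept),
            "droppedCount": len(items) - len(kept),
        },
    }

resp = truncate([f"ref{i}" for i in range(250)], limit=100, reason="maxReferences")
print(resp["truncation"])
print(estimate_tokens(resp))
```

Because the metadata travels with the response, the AI can decide to re-query with a higher limit rather than silently reasoning over a partial list.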
The compound tools (explore, understand, prepareChange) reduce tool calls by 60-70% by aggregating what would be sequential queries into parallel internal execution. This preserves reasoning depth while cutting round-trip overhead. The AI can always fall back to granular tools (getCallGraph, findReferences) when it needs explicit control over traversal parameters.
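The fan-out can be pictured as one compound call running its sub-queries concurrently and merging the results; a minimal sketch where the sub-query functions are invented stand-ins for real index lookups:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in sub-queries; in a real system these would hit the index.
def get_definitions(symbol):    return {"definitions": [f"{symbol}.go:42"]}
def get_references(symbol):     return {"references": [f"caller_of_{symbol}"]}
def get_recent_changes(symbol): return {"changes": ["abc123"]}

def explore(symbol: str) -> dict:
    # One compound call fans out internally, replacing several
    # sequential AI round-trips with a single tool invocation.
    subqueries = (get_definitions, get_references, get_recent_changes)
    with ThreadPoolExecutor(max_workers=len(subqueries)) as pool:
        results = list(pool.map(lambda fn: fn(symbol), subqueries))
    merged = {}
    for partial in results:
        merged.update(partial)
    return merged

print(explore("ProcessRefund"))
```

The trade-off mirrors the one described above: the compound call saves round trips, while the granular tools remain available when the AI needs explicit control over each traversal.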