I built AgentsKB after watching Claude/Cursor hallucinate Stripe API syntax for the 10th time in a week.
The Problem: AI agents don't "remember" across sessions. You debug a tricky Next.js issue on Monday. Tuesday, same error, same web search loop, same wasted 30 minutes.
The Solution: A curated knowledge base with 3,276 verified Q&As across 160 domains (PostgreSQL, Redis, Kafka, TypeScript, AWS, etc.). 99% average authority score, 50 ms query time.
How it works:
- Integrates via MCP (Model Context Protocol) into Claude Desktop/Code (see the server sketch after this list)
- Agent queries verified answers before guessing
- No more "let me search the web for that" delays
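If you're curious what the integration looks like, here's a minimal sketch of an MCP server exposing a single lookup tool, written against the official TypeScript SDK. The tool name `query_kb` and the `searchKb` helper are illustrative stand-ins, not AgentsKB's actual API:

```typescript
// Minimal MCP server sketch (assumes @modelcontextprotocol/sdk and zod are installed).
// `query_kb` and `searchKb` are hypothetical names, not AgentsKB's real interface.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stand-in for the actual knowledge-base lookup.
async function searchKb(question: string): Promise<string> {
  return `Verified answer for: ${question}`;
}

const server = new McpServer({ name: "agentskb", version: "0.1.0" });

// One tool the agent can call before it falls back to guessing or a web search.
server.tool(
  "query_kb",
  { question: z.string().describe("Technical question to look up") },
  async ({ question }) => ({
    content: [{ type: "text", text: await searchKb(question) }],
  })
);

// Claude Desktop/Code spawns the server as a subprocess and talks to it over stdio.
await server.connect(new StdioServerTransport());
```

Once the server is registered in Claude's MCP config, the agent sees `query_kb` as a tool and can consult it before answering.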
Tech stack:
- MCP native (no plugins to manage)
- Vector similarity search for atomic Q&As (toy sketch after this list)
- Covers common pain points: JWT auth, Kubernetes configs, API design patterns, PostgreSQL quirks
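The "vector similarity search" piece presumably means each Q&A is embedded once and queries are ranked by cosine similarity at lookup time. Here's a toy brute-force version of that ranking step (the embedding vectors would come from an embedding model, which I'm omitting):

```typescript
// Toy cosine-similarity ranking over pre-embedded Q&As.
// A production system would use an embedding model plus an ANN index;
// this brute-force pass just shows the core idea.
type QA = { question: string; answer: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every entry once, then return the k closest to the query embedding.
function topK(queryEmbedding: number[], kb: QA[], k = 3): QA[] {
  return kb
    .map((qa) => ({ qa, score: cosineSimilarity(queryEmbedding, qa.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((x) => x.qa);
}
```

Atomic (single-concept) Q&As matter here: the tighter the chunk, the less one embedding has to average over, and the cleaner the nearest-neighbor match.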
Current stats:
- 3,276 Q&As
- 160 domains
- 73% atomic (single-concept answers)
- 99% average authority score
Why I built this: Every AI coding session wastes time re-teaching the agent things it "learned" yesterday. This gives agents persistent, verified memory.
Try it: [Your URL]
Looking for feedback on:
1. Which domains/libraries should I prioritize next?
2. How do you currently handle agent hallucinations?
3. Interest in a self-hosted version for proprietary codebases?