So I built an MCP server that runs a reality check on your idea before your AI starts coding. Install with `uvx idea-reality-mcp`, and Claude/Cursor will automatically scan GitHub and Hacker News for existing implementations before writing a single line.
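For Claude Desktop, the wiring is a standard MCP server entry (the `idea-reality` key name is just my label; the command matches the install line above):

```json
{
  "mcpServers": {
    "idea-reality": {
      "command": "uvx",
      "args": ["idea-reality-mcp"]
    }
  }
}
```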
Returns: reality_signal (0-100), duplicate_likelihood, top 5 similar repos, evidence from multiple sources, and pivot suggestions.
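To make the feedback ask concrete, here's a hypothetical sketch of how a reality_signal might be blended from per-source evidence. Every name, weight, and threshold below is illustrative, not the actual algorithm:

```python
# Hypothetical scoring sketch — not the real idea-reality-mcp algorithm.
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str        # e.g. "github" or "hackernews"
    similarity: float  # 0.0-1.0 match against the idea description
    popularity: int    # stars for repos, points for HN posts

def reality_signal(evidence: list[Evidence]) -> int:
    """Blend the closest popularity-weighted match with source breadth."""
    if not evidence:
        return 0
    # Discount hits with little traction so one starred repo beats noise.
    best = max(e.similarity * min(1.0, e.popularity / 100) for e in evidence)
    # Corroboration across distinct sources adds up to 30 points.
    sources = len({e.source for e in evidence})
    score = 70 * best + 15 * min(sources, 2)
    return round(min(score, 100))

matches = [
    Evidence("github", similarity=0.9, popularity=1200),
    Evidence("hackernews", similarity=0.6, popularity=340),
]
print(reality_signal(matches))  # → 93: one close match, two sources
```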
It's a protocol layer, not a SaaS dashboard — the check happens inside your IDE workflow.
Python, MIT licensed, zero config. Would love feedback on the scoring algorithm.