I've been building MCPSpec, an open-source CLI for MCP server reliability. Record sessions, generate mock servers, catch Tool Poisoning, and fail your CI build when something's wrong — without writing test code.
There are ways to validate MCP servers today — the MCP Inspector, ad-hoc SDK scripts, unit tests for server internals — but nothing that handles regression detection, security auditing, mock generation, and CI pass/fail checks in one tool. MCPSpec does that:
1. Record a session against your real server, replay it after changes to catch regressions
2. Generate a standalone .js mock from any recording — no API keys, no live server needed in CI
3. Security audit with 8 rules including Tool Poisoning (prompt injection hidden in tool descriptions)
4. 0-100 quality score across documentation, schema quality, error handling, responsiveness, and security
5. One command to generate GitHub Actions / GitLab CI configs
No LLMs in the loop. Deterministic and fast. Ships with 70 ready-to-run tests for 7 popular MCP servers.
GitHub: https://github.com/light-handle/mcpspec Docs: https://light-handle.github.io/mcpspec/
Would love feedback or feature ideas.
shalakasurya•1h ago
warmcat•1h ago
- MCP Inspector (Anthropic's official tool) — interactive debugging, not CI-oriented - MCP-Scan (Invariant Labs, now Snyk) — security scanning, focused on tool poisoning detection - Promptfoo — LLM red teaming tool that added MCP support recently - MCP Protocol Validator — spec compliance checking
MCPSpec tries to be the one tool that covers the full workflow: record, replay, mock, security audit, quality scoring, and CI setup. None of the above do recording/replay or mock generation.
As for standards — there aren't any yet. MCP itself moved under the Linux Foundation's Agentic AI Foundation in December 2025, and NIST launched an AI Agent Standards Initiative last month. But no conformance or testing standard exists. It's still early.