I built A2Apex (https://a2apex.io) — a testing and reputation platform for AI agents built on Google's A2A protocol.
The problem: AI agents are everywhere, but there's no way to verify they actually work. No standard testing. No directory of trusted agents. No reputation system.
What A2Apex does:
- Test — Point it at any A2A agent URL. We run 50+ automated compliance checks: agent card validation, live endpoint testing, state machine verification, streaming, auth, error handling.
- Certify — Get a 0-100 trust score with Gold/Silver/Bronze badges you can embed in your README or docs.
- Get Listed — Every tested agent gets a public profile page in the Agent Directory with trust scores, skills, test history, and embeddable badges.
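To make the "agent card validation" step concrete, here's a minimal sketch of that kind of check. The `/.well-known/agent.json` path and field names follow the A2A spec's AgentCard; the specific checks A2Apex runs are my assumptions, not its actual code:

```python
import json
import urllib.request

# Core AgentCard fields from the A2A spec (assumed subset of the 50+ checks)
REQUIRED_FIELDS = {"name", "url", "version", "capabilities", "skills"}

def missing_card_fields(card: dict) -> list[str]:
    """Return required AgentCard fields absent from a parsed card."""
    return sorted(REQUIRED_FIELDS - card.keys())

def check_agent_card(base_url: str) -> list[str]:
    """Fetch an agent's card from the well-known path and validate it."""
    with urllib.request.urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return missing_card_fields(json.load(resp))
```

The real suite goes further than presence checks (live endpoint calls, state machine verification, streaming), but card validation is the natural first gate.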
Think of it as SSL Labs (testing) + npm (directory) + LinkedIn (profiles) — for AI agents.
Stack: Python/FastAPI, vanilla JS, SQLite. No frameworks, no build tools. Runs on a Mac mini in Wyoming.
Free: 5 tests/month. Pro: $29/mo. Startup: $99/mo. Try it at https://app.a2apex.io
I'm a dragline operator at a coal mine — built this on nights and weekends using Claude. Would love feedback from anyone building A2A agents or thinking about agent interoperability.
c5huracan•1d ago
Curious how the trust score works in practice. Is it purely automated test results, or do you plan to incorporate usage signals over time (uptime, response quality)?
Hauk307•1d ago
Right now it's purely automated test results. But you're right that automated spec compliance only tells part of the story. The roadmap includes usage signals, uptime monitoring, response latency tracking, and community ratings from developers who've actually integrated with an agent. The spec tells you whether an agent CAN work; usage data tells you whether it DOES.
The profile pages are designed with that in mind: test history already shows trends over time, and adding real-world signals is the natural next layer.
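For what it's worth, blending those layers could be as simple as a weighted sum. Everything here (the weights, the latency scale, the function itself) is a hypothetical sketch of the idea, not A2Apex's actual scoring:

```python
def blended_trust_score(spec_score: float, uptime: float, latency_ms: float) -> float:
    """Hypothetical blend of spec compliance with live usage signals.

    spec_score: 0-100 from the automated compliance checks
    uptime: observed fraction (0.0-1.0) over the monitoring window
    latency_ms: median response latency
    """
    # Map latency onto 0-100: 0 ms scores 100, 5000 ms and up scores 0
    latency_score = max(0.0, 100.0 - latency_ms / 50.0)
    # Illustrative weights: spec compliance still dominates
    return round(0.6 * spec_score + 0.25 * uptime * 100 + 0.15 * latency_score, 1)
```

The interesting design question is the decay: usage signals should probably outweigh a stale compliance run as an agent accumulates history.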