AgentCheck is a public “AI bot posture leaderboard” built from declared public signals:

- robots.txt allow/deny rules
- public capability/interface files (e.g. /llms.txt and /.well-known/agents.json where present)
- weekly deltas so you can see policy changes over time
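To make the robots.txt signal concrete, here’s a minimal sketch of how a declared-posture check against a fixed reference bot set could work. The bot list and helper name are illustrative assumptions, not AgentCheck’s actual implementation:

```python
from urllib import robotparser

# Illustrative reference bot set; the leaderboard's real list may differ.
REFERENCE_BOTS = ["GPTBot", "ClaudeBot", "Google-Extended", "CCBot", "PerplexityBot"]

def robots_posture(robots_txt: str, path: str = "/") -> dict[str, bool]:
    """Declared allow/deny posture per reference bot for a given path."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    # can_fetch() reflects what the file declares, not what bots actually do.
    return {bot: parser.can_fetch(bot, path) for bot in REFERENCE_BOTS}

example = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(robots_posture(example))
# {'GPTBot': False, 'ClaudeBot': True, 'Google-Extended': True, ...}
```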
It answers three questions: which bots a site declares it blocks or allows (against a fixed reference bot set), whether agent-readable interface files exist, and how posture changes week to week.
Important: this is not a claim about actual crawling activity — it’s posture + public interface signals.
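The interface-file signal is just presence detection. A sketch of that probe, assuming a simple HEAD request per path (a production checker would also need redirect and soft-404 handling):

```python
import urllib.request
import urllib.error

# The two paths named above; the probing logic here is an assumption.
INTERFACE_PATHS = ["/llms.txt", "/.well-known/agents.json"]

def interface_files(origin: str) -> dict[str, bool]:
    """Record which agent-readable interface files a site publishes."""
    found = {}
    for path in INTERFACE_PATHS:
        req = urllib.request.Request(origin + path, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                found[path] = resp.status == 200
        except (urllib.error.URLError, TimeoutError):
            found[path] = False
    return found

print(interface_files("https://example.com"))
```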
Link: https://www.agentcheck.com/leaderboard/ai-bots
I’d love feedback on:

- other public signals worth adding
- how you’d define “agent readiness”
- edge cases where robots.txt parsing should be handled differently (one example sketched below)
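One edge case in that last bucket: Allow/Disallow precedence. RFC 9309 says the longest (most specific) matching rule wins, while CPython’s urllib.robotparser returns the first matching rule in file order, so the two can disagree on the same file:

```python
from urllib import robotparser

# A robots.txt where longest-match and file-order parsing diverge.
ROBOTS = """\
User-agent: GPTBot
Disallow: /
Allow: /public/
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS.splitlines())

# Under RFC 9309 longest-match, Allow: /public/ beats Disallow: /,
# so /public/page is allowed. The stdlib parser instead takes the
# first matching rule in file order, so Disallow: / wins here.
print(parser.can_fetch("GPTBot", "/public/page"))  # False under the stdlib
```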
MK_Phoenix•2h ago
We’re also building a companion “Verify” flow that analyzes uploaded access logs (user-agent strings, ASN lookups, and behavioral scoring) for people who want evidence of actual crawler activity. This post is intentionally about the public posture layer.
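For a rough sense of what that log scoring could look like, a minimal sketch; the UA patterns, the verified-ASN set, and the weights are all illustrative assumptions, not the actual Verify pipeline:

```python
import re
from dataclasses import dataclass

# Illustrative UA patterns; not the actual Verify rule set.
CLAIMED_BOT_UA = re.compile(r"GPTBot|ClaudeBot|PerplexityBot", re.IGNORECASE)

# Hypothetical ASNs a bot operator might publish for its egress ranges.
VERIFIED_BOT_ASNS = {14618, 396982}

@dataclass
class LogLine:
    ip: str
    user_agent: str
    asn: int              # assumed pre-resolved, e.g. via an IP-to-ASN database
    requests_per_min: float

def crawler_evidence(line: LogLine) -> float:
    """Score 0..1: how strongly one log line evidences real crawler activity."""
    score = 0.0
    if CLAIMED_BOT_UA.search(line.user_agent):
        score += 0.4                    # UA claims to be a known bot...
        if line.asn in VERIFIED_BOT_ASNS:
            score += 0.4                # ...and comes from an expected network
    if line.requests_per_min > 30:
        score += 0.2                    # sustained fetch rate looks automated
    return min(score, 1.0)

hit = LogLine("203.0.113.7", "Mozilla/5.0 (compatible; GPTBot/1.2)", 14618, 45.0)
print(crawler_evidence(hit))  # 1.0
```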