I’ve been analyzing how different LLM agents (GPTBot, ClaudeBot, PerplexityBot) crawl modern marketing sites. I found that many sites that rank well on Google still score poorly on "LLM Readability": the agents struggle to parse the content because much of it only appears after client-side hydration, or because structured data (schema markup) is missing.
I built this tool to assign a "readability score" to your site by simulating those specific user-agents and evaluating the raw HTML response.
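Under the hood, the check boils down to something like the sketch below. This is a simplified illustration of the idea, not the tool's actual code; the user-agent strings are abbreviated placeholders. The point is to fetch the page as each crawler would and measure how much visible text is in the raw HTML before any JavaScript runs.

    # Rough sketch only: fetch the raw HTML as each crawler would see it and
    # count the visible text available before JavaScript hydration.
    # UA strings below are abbreviated placeholders, not the exact crawler strings.
    import requests
    from html.parser import HTMLParser

    CRAWLER_UAS = {
        "GPTBot": "Mozilla/5.0 (compatible; GPTBot/1.0)",
        "ClaudeBot": "Mozilla/5.0 (compatible; ClaudeBot/1.0)",
        "PerplexityBot": "Mozilla/5.0 (compatible; PerplexityBot/1.0)",
    }

    class VisibleText(HTMLParser):
        """Collects text nodes, skipping script and style contents."""
        def __init__(self):
            super().__init__()
            self.chunks, self._skip = [], False
        def handle_starttag(self, tag, attrs):
            self._skip = tag in ("script", "style")
        def handle_endtag(self, tag):
            if tag in ("script", "style"):
                self._skip = False
        def handle_data(self, data):
            if not self._skip and data.strip():
                self.chunks.append(data.strip())

    def raw_text_chars(url, ua):
        html = requests.get(url, headers={"User-Agent": ua}, timeout=10).text
        parser = VisibleText()
        parser.feed(html)
        return len(" ".join(parser.chunks))

    for name, ua in CRAWLER_UAS.items():
        print(name, raw_text_chars("https://example.com", ua))

A heavily client-rendered page typically returns a near-empty shell here, which is the hydration problem the score is trying to capture.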
Tech Disclaimer (v0.1): this is an early MVP. I’m actively developing a more polished version (and an API) to handle edge cases and deeper rendering issues.
Right now, I’m looking for feedback on the scoring logic:
Does the score feel accurate for your tech stack?
Are there signals you think should weigh heavier (e.g., Schema vs. raw text)?
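To make that second question concrete, the kind of weighting I mean looks roughly like this. The numbers are illustrative only, not the live scoring logic:

    # Hypothetical weighting, just to illustrate the trade-off being asked about.
    def readability_score(has_jsonld, raw_text_chars, rendered_text_chars):
        # Share of the fully rendered text that is already present in the raw HTML.
        text_ratio = min(raw_text_chars / max(rendered_text_chars, 1), 1.0)
        schema_signal = 1.0 if has_jsonld else 0.0
        # Illustrative split: raw text dominates, schema markup is a smaller bonus.
        return round(100 * (0.7 * text_ratio + 0.3 * schema_signal), 1)

    # Example: a heavily hydrated SPA that does ship JSON-LD.
    print(readability_score(True, 1200, 6000))  # 44.0

If you would flip that split, or weight other signals entirely, that is exactly the feedback I’m looking for.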
Feedback is welcome.
-- Hristo