I'm a long-time developer and SEO consultant. Over the years, I've seen clients suffer from a simple, costly mistake: accidentally blocking Googlebot or other important crawlers with a misplaced rule in robots.txt or a noindex tag.
Manually checking the robots.txt file, then the page's meta tags, then the X-Robots-Tag HTTP header is a tedious process. I wanted a tool that would do it all in one shot and give me a clear answer.
So, I built CrawlerCheck. You give it a URL, and it checks all three sources of crawler directives to tell you whether the page can be crawled and indexed.
The backend is written in Go, and the frontend is a lightweight Svelte app. The goal was to make it as fast and reliable as possible.
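To give a sense of what a check like this involves, here's a minimal Go sketch of inspecting the three sources for a single URL. This is illustrative only, not CrawlerCheck's actual code: the function and variable names are made up, the meta-tag check is a naive substring scan, and robots.txt is just fetched for inspection rather than matched per user agent the way a real checker would.

```go
// Minimal sketch of checking the three crawler-directive sources for one URL.
// All names here are illustrative, not CrawlerCheck's real API.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
)

// checkURL reports noindex signals from the X-Robots-Tag header and the
// page's meta robots tag, and prints robots.txt for manual review.
func checkURL(target string) error {
	resp, err := http.Get(target)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// 1. X-Robots-Tag HTTP header.
	for _, v := range resp.Header.Values("X-Robots-Tag") {
		if strings.Contains(strings.ToLower(v), "noindex") {
			fmt.Println("blocked: X-Robots-Tag contains noindex:", v)
		}
	}

	// 2. Meta robots tag (naive substring scan; a real checker parses the HTML).
	body, err := io.ReadAll(io.LimitReader(resp.Body, 1<<20))
	if err != nil {
		return err
	}
	lower := strings.ToLower(string(body))
	if strings.Contains(lower, `name="robots"`) && strings.Contains(lower, "noindex") {
		fmt.Println("blocked: meta robots tag appears to contain noindex")
	}

	// 3. robots.txt (fetched as-is; real matching needs a proper robots.txt parser).
	u, err := url.Parse(target)
	if err != nil {
		return err
	}
	robots, err := http.Get(u.Scheme + "://" + u.Host + "/robots.txt")
	if err != nil {
		return err
	}
	defer robots.Body.Close()
	txt, _ := io.ReadAll(io.LimitReader(robots.Body, 1<<20))
	fmt.Printf("robots.txt for %s:\n%s\n", u.Host, txt)
	return nil
}

func main() {
	if err := checkURL("https://example.com/"); err != nil {
		fmt.Println("error:", err)
	}
}
```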
It's a brand new project, and I'd love to get some honest feedback from the HN community. Thanks for taking a look.
8organicbits•5h ago
Minor suggestion. Consider sorting the checks by status, or adding a summary at the top. I needed to scroll to find out whether anything was blocked.
I don't know enough about the SEO space, but would an llms.txt check also help?