I’m Ralph. I have a degree in software development, and for the past 15+ years, I’ve been doing website development and SEO, with the last 6 running an agency.
A couple of years ago, it became clear that AI could be useful in automating a lot of our agency's junior SEO tasks and manual work. I laid out our processes in detail, mapped the areas where AI could be useful, and started integrating it all.
The result is an AI-powered SEO platform that automates keyword research, meta titles/descriptions, image alt text, and page-level content, with approval workflows and token-based usage. I'm also exploring automation for link building, full technical audits, and AI-generated recommended fixes.
One of the biggest struggles has been managing contextual relevance: giving the system enough information to understand a site holistically without overwhelming the model or diluting the relevance of its output.
The platform is live in beta, but I'm torn between continuing to engineer toward "perfect" and sharing it earlier so real users can guide what actually matters. That's why I'm here asking for feedback.
I'd really appreciate any insights, especially around where this would or wouldn't fit into your workflows, the quality of the responses it returns, and anything that might create friction for adoption.
To keep costs predictable during beta, users can test with a token-seeded workspace for sites of 100 pages or fewer.