Because web pages are served up without interactivity, there would be no way to _easily_ tell whether alternate content exists for different users, i.e. whether the access tech (browser, bot, crawler, etc.) is what causes the content to change.
The big issue is how to reliably identify Google crawlers and bots. The framework might need to go as far as using entire blocks of Google IPv4 and IPv6 addresses as a filter, since much of the more recent indexing tech looks at web pages the way a human would and may even present itself to the server much like a normal web browser, but this is a technical problem that can be overcome.
Hmmm… this sounds like a potential project. Something that can be used as a foundation for many websites, and possibly even a plugin for existing frameworks like WordPress.
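For the crawler-identification piece, a minimal sketch of the IP-range filter idea might look like the Python below (rather than a WordPress plugin). The JSON URL, its field names, and the sample address are assumptions based on Google's published crawler range lists, not something from this thread; check the current docs before relying on them.

```python
"""Sketch: treat a request as a Google crawler if its IP falls inside
Google's published crawler IP ranges (IPv4 and IPv6)."""
import ipaddress
import json
import urllib.request

# Assumed location/format of Google's published Googlebot ranges.
GOOGLEBOT_RANGES_URL = "https://developers.google.com/search/apis/ipranges/googlebot.json"


def load_googlebot_networks(url: str = GOOGLEBOT_RANGES_URL):
    """Fetch the published prefixes and parse them into network objects."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    networks = []
    for entry in data.get("prefixes", []):
        prefix = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        if prefix:
            networks.append(ipaddress.ip_network(prefix))
    return networks


def is_google_crawler(remote_ip: str, networks) -> bool:
    """True if the request IP is inside any published Google crawler range."""
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in networks)


if __name__ == "__main__":
    nets = load_googlebot_networks()
    # Example: decide whether this request gets the "crawler" variant of a page.
    print(is_google_crawler("66.249.66.1", nets))  # address chosen as an illustrative example
```

A plugin version would just wrap this check around the request handler and cache the range list, refreshing it periodically instead of fetching it per request.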
nickorlow•27m ago
I feel like search (even on non-Google search engines) has gotten pretty bad. Kagi seems to be the best, but I still see AI-slop or list-slop on it.