I'm developing an onboarding flow for my future OF for GH service. I think it would improve the quality of onboarding answers if users fleshed out their responses with the help of an AI chat. But AI chat isn't integrated into any other part of the webapp, which means I'd have to build out a whole chat interface just for onboarding - that seems like a massive waste of time.
Instead, I was thinking of letting users copy the agent/system prompt over to their LLM chat app of choice, and then complete the onboarding once the agent determines it's of sufficient quality.
At this point I was surprised to learn that there is no `mailto:`-style URI scheme for LLMs - or at least none that I know of.
I was thinking something like this:
llm://[min-model-generation]?subject=....&prompt=......&schema=[base64 encoded schema output]
e.g.
# full link example is in the linked gist [0]
llm://>2024?subject=Yocto.is%20Onboarding%20Helper%20Agent&prompt=...
The first part can also select a specific model or provider:
e.g.
# Specific model
llm://gemini-2.5-pro?...
# Specific provider, min model
llm://>=gemini-2.5-pro?...
# Specific provider, gt model
llm://>gemini-2.5-pro?...
etc.

Thanks
[0]: https://gist.github.com/smashah/f6192d7af114ca059b3a47b33ec1df18