So in the spirit of "doing things because you can", I have made a full-fledged LLM client for the Vita. You can even use the camera to take photos to send to models that support vision. I'm happy with how it all turned out. It isn't perfect: LLMs like to dress up their replies with TeX and Markdown formatting, and the client just shows all of that raw. The Vita can't even do emojis!
You can download the vpk in the releases section of the repo. Throw in an endpoint and try it yourself! (If you're using an API key, I hope you're very patient, because you'll be typing it out manually.)
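For the curious, "an endpoint" here just means a base URL the client can POST chat requests to. As a rough sketch, this is the shape of an OpenAI-style chat completions body that such endpoints typically accept; the model name is a placeholder, and I'm assuming the common OpenAI-compatible format rather than describing this client's exact wire format:

```python
import json

def build_chat_request(prompt, model="some-model"):
    """Build a minimal JSON body for an OpenAI-compatible
    /v1/chat/completions endpoint (hypothetical example values)."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

# Serialize the body as it would go over the wire.
body = json.dumps(build_chat_request("Hello from a PS Vita!"))
```

Any server that speaks this format (a local llama.cpp server, a hosted API, etc.) should work as the endpoint.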