Even with a VPN, the provider still sees your prompts, and usage is tied to an account and payment method that link back to your identity in some way.
I've been trying to find a way to access these models without creating provider-specific accounts tied to my identity. Ideally, through some kind of intermediary that abstracts identity and doesn't retain prompts.
From a technical and economic perspective, would that kind of setup meaningfully improve privacy, or does it just shift trust? Is meaningful privacy with hosted AI fundamentally unrealistic regardless of architecture?
collenjk•1h ago
The way I'd think about it: run a local model for anything it can handle. Keep that stuff on your machine entirely. Then for the stuff that actually needs a more powerful model, don't just dump your whole context in — strip it down to just the specific thing you need answered, like "given X, what's Y" with no surrounding story. The hosted model just sees a decontextualized task, not what's actually going on.
You basically become your own privacy layer. More friction, but the provider never gets the full picture.
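The "be your own privacy layer" idea above can be sketched in a few lines. Everything here is hypothetical glue code, not any real provider's API: `redact` is a deliberately crude scrub (real PII removal needs more than a couple of regexes), and `decontextualize` builds the kind of minimal "given X, what's Y" prompt the comment describes, so the hosted side never sees the surrounding story.

```python
import re

def redact(text: str) -> str:
    """Strip a couple of obvious identifiers before text leaves the machine.
    A crude sketch -- real scrubbing needs far more than two regexes."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.]?\d{3}[-.]?\d{4}\b", "[PHONE]", text)
    return text

def decontextualize(facts: list[str], question: str) -> str:
    """Reduce a full local context to a bare 'given X, what's Y' task.
    Only the listed facts and the question go out; no surrounding story."""
    given = "; ".join(redact(f) for f in facts)
    return f"Given: {given}\nQuestion: {redact(question)}"

# Example: the hosted model would receive only this stripped prompt,
# while the full context (names, history, why you're asking) stays local.
prompt = decontextualize(
    ["revenue dropped 12% the week after emailing alice@example.com"],
    "what are plausible causes",
)
```

Routing is the other half: a local model (e.g. via llama.cpp or Ollama) handles the default case, and only prompts like the one above ever hit a hosted endpoint. That split is where the friction lives, since you have to decide per task what's safe to send.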