The main deal breaker for me when I tried it was that I couldn't talk to multiple models at once, even when they were remote models on OpenRouter. If I ask a question in one chat, then switch to another chat and ask a question, the second request blocks until the first one finishes.
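To be clear, that's an app-level limitation, not an API one. A rough sketch of what I mean (assuming Rust with the tokio/reqwest/serde_json crates and their usual features; the model IDs and prompts are just examples), firing two OpenRouter chat requests concurrently:

```rust
use reqwest::Client;

async fn ask(client: &Client, key: &str, model: &str, prompt: &str) -> reqwest::Result<String> {
    let body = serde_json::json!({
        "model": model,
        "messages": [{ "role": "user", "content": prompt }],
    });
    client
        .post("https://openrouter.ai/api/v1/chat/completions")
        .bearer_auth(key)
        .json(&body)
        .send()
        .await?
        .text()
        .await
}

#[tokio::main]
async fn main() -> reqwest::Result<()> {
    let client = Client::new();
    let key = std::env::var("OPENROUTER_API_KEY").expect("OPENROUTER_API_KEY not set");

    // Both requests are in flight at the same time; the total wait is
    // roughly the slower of the two, not their sum.
    let (a, b) = tokio::join!(
        ask(&client, &key, "openai/gpt-4o-mini", "question for chat #1"),
        ask(&client, &key, "anthropic/claude-3.5-haiku", "question for chat #2"),
    );
    println!("{}\n---\n{}", a?, b?);
    Ok(())
}
```

Nothing about remote models forces the second chat to wait on the first.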
Also Tauri apps feel pretty clunky on Linux for me.
All of them, or this one specifically? I've developed a bunch of tiny apps for my own usage (on Linux) with Tauri (the largest is maybe 5-6K LoC) and they've always felt snappy to me, mostly doing all the data processing in Rust and the UI with ClojureScript+Reagent.
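The split is pretty simple. As a minimal sketch (assuming Tauri 1.x; the command name and logic here are made up for illustration), the Rust side is just a command the frontend invokes:

```rust
// Heavy lifting stays in Rust; only the result crosses the IPC boundary.
#[tauri::command]
fn mean(numbers: Vec<f64>) -> f64 {
    numbers.iter().sum::<f64>() / numbers.len().max(1) as f64
}

fn main() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![mean])
        .run(tauri::generate_context!())
        .expect("error while running tauri app");
}
```

On the ClojureScript side it's just a call to `invoke` from `@tauri-apps/api` with the argument map, so the webview never touches the raw data.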
roscas•4h ago
SilverRubicon•2h ago
- Cloud Integration: Connect to OpenAI, Anthropic, Mistral, Groq, and others
- Privacy First: Everything runs locally when you want it to
trilogic•2h ago
imiric•1h ago
It's not open source[1], has no license, runs on Windows only, and requires an activation code to use.
Also, the privacy policy on their website is missing[2].
Anyone remotely concerned about privacy wouldn't come near this thing.
Ah, you're the author, no wonder you're shilling for it.
[1]: https://github.com/Mainframework/HugstonOne
[2]: https://hugston.com/privacy
trilogic•1h ago
do_not_redeem•1h ago
Great to hear! Since you care so much about privacy, how can I get an activation code without sending any bytes over a network or revealing my email address?
riquito•24m ago
kgeist•1h ago
Llama.cpp's built-in web UI.
trilogic•1h ago
kgeist•1h ago
I tried downloading your app, and it's a whopping 500 MB. What takes up the most disk space? The llama-server binary with the built-in web UI is like a couple MBs.
trilogic•53m ago
rcakebread•33m ago
kgeist•13m ago
>the app is a bit heavy as is loading llm models using llama.cpp cli
So it adds the unnecessary overhead of reloading all the weights into VRAM on each message? On larger models that can take up to a minute. Or do you somehow stream input/output from an attached CLI process without restarting it?
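For comparison, the usual way to avoid the reload is to keep one server process attached and send each message over HTTP, so the weights load once. A hedged sketch (assuming Rust with tokio/reqwest/serde_json; the binary path, model file, and flags are illustrative):

```rust
use std::{process::Command, time::Duration};

#[tokio::main]
async fn main() -> reqwest::Result<()> {
    // Paid once at startup: the weights load into (V)RAM here and stay resident.
    let mut server = Command::new("./llama-server")
        .args(["-m", "model.gguf", "--port", "8080"])
        .spawn()
        .expect("failed to start llama-server");

    // Crude readiness wait; a real app would poll the server's /health endpoint.
    tokio::time::sleep(Duration::from_secs(10)).await;

    // Every subsequent message is just an HTTP round trip, no weight reload.
    let client = reqwest::Client::new();
    let text = client
        .post("http://127.0.0.1:8080/completion")
        .json(&serde_json::json!({ "prompt": "Hello", "n_predict": 64 }))
        .send()
        .await?
        .text()
        .await?;
    println!("{text}");

    server.kill().ok();
    Ok(())
}
```

Shelling out to the CLI per message gives you none of that; the process dies and the weights go with it.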
woadwarrior01•1h ago
That's a tall claim.
I've been selling a macOS and iOS private LLM app on the App Store for over two years now, one that:

a) is fully native (not Electron.js)

b) is not a llama.cpp / MLX wrapper

c) is fully sandboxed (none of Jan, Ollama, or LM Studio are)
I will not promote. Quite shameless of you to shill your Electron.js-based llama.cpp wrapper here.
trilogic•57m ago
rovr138•35m ago
> I accept every challenge to prove that HugstonOne is worth the claim.
I expect your review.
hoppp•1h ago