This is not a cloud chatbot or a hosted-LLM wrapper. The focus is on building an assistant that lives with the user — understands personal context, reasons over local data, and can act autonomously without exporting private information off the device.
I’m interested in connecting with senior engineers who enjoy building AI systems in Rust, think in terms of orchestration and state, and care deeply about correctness, performance, and privacy.
San Francisco Bay Area preferred, as I believe early in-person collaboration matters.
Posting anonymously for privacy reasons — happy to share more in 1:1 conversations.
— Founder, SF Bay Area
Contact: localai-founder@proton.me
repelsteeltje•13h ago
So something like Triton server backends, but using Rust rather than Python or C++?
cajazzer•13h ago
I’m less interested in building a generic inference backend and more focused on how the system behaves over time — things like agent lifecycles, state, memory, and coordination. Inference is part of it, but not the whole thing.
Correctness, performance, and privacy matter because the system is long-lived and stateful, not just because it’s pushing tokens around.
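To make that concrete, here's the flavor of thing I mean by lifecycle and state. Toy sketch in Rust, every name is made up, none of it is real project code:

    use std::time::SystemTime;

    // The agent's lifecycle is explicit data, not something implied by a prompt.
    #[derive(Debug)]
    enum AgentState {
        Idle,
        Planning { goal: String },
        Acting { goal: String, step: usize },
        WaitingOnUser { question: String },
    }

    struct Agent {
        state: AgentState,
        // Long-lived memory, persisted locally, never shipped off-device.
        memory: Vec<(SystemTime, String)>,
    }

    impl Agent {
        fn transition(&mut self, next: AgentState) {
            // Every transition is recorded so behavior can be inspected later.
            self.memory
                .push((SystemTime::now(), format!("{:?} -> {:?}", self.state, next)));
            self.state = next;
        }
    }

    fn main() {
        let mut a = Agent { state: AgentState::Idle, memory: Vec::new() };
        a.transition(AgentState::Planning { goal: "summarize today's notes".into() });
        a.transition(AgentState::Acting { goal: "summarize today's notes".into(), step: 0 });
        println!("{:?}", a.state);
    }

The point is that the agent's state and its transition history are first-class, inspectable data, which is what makes the system debuggable once it has been running for months.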
repelsteeltje•13h ago
cajazzer•12h ago
I don’t think you can support something like this if it’s a total black box. If behavior drifts, users need some way to see what state the system thinks it’s in and why it did what it did; otherwise support becomes guesswork.
On updates, I’m pretty cautious about changing behavior silently. I’d expect versioned behavior, explicit migrations, and care about what auto-updates versus what users opt into.
And if people are paying over time, updates need to be real improvements they can understand, not just maintenance or regressions.
This feels more like a design problem than a testing problem.
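To make the "versioned behavior, explicit migrations" part concrete, here's roughly the shape I have in mind. Again a toy sketch with made-up names, not something that exists:

    // Behavior changes are versioned data with an explicit, logged migration,
    // instead of a model that silently starts acting differently.
    #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
    struct BehaviorVersion(u32);

    #[derive(Debug)]
    struct AssistantProfile {
        version: BehaviorVersion,
        summary_tone: String, // e.g. "chatty" vs "terse"
    }

    // Runs once per version step and returns a user-visible changelog.
    fn migrate(profile: &mut AssistantProfile, target: BehaviorVersion) -> Vec<String> {
        let mut changelog = Vec::new();
        while profile.version < target {
            match profile.version {
                BehaviorVersion(1) => {
                    profile.summary_tone = "terse".to_string();
                    changelog.push("v1 -> v2: default summaries are now terse".to_string());
                    profile.version = BehaviorVersion(2);
                }
                _ => break, // unknown version: stop rather than guess
            }
        }
        changelog
    }

    fn main() {
        let mut profile = AssistantProfile {
            version: BehaviorVersion(1),
            summary_tone: "chatty".to_string(),
        };
        for line in migrate(&mut profile, BehaviorVersion(2)) {
            println!("{line}");
        }
        println!("now at {:?}, tone = {}", profile.version, profile.summary_tone);
    }

The changelog is the whole point: when behavior changes, there's a record the user (and support) can actually read, which is what keeps drift from turning into guesswork.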
repelsteeltje•11h ago
For plain old software development, supporting the installed base was always far more difficult than designing and building new features. Doable, but hard.
With LLMs it's much harder. For instance, we built a transcript summarization service that customers were happy with. Then we improved the LLM, but we could not just yank the old LLM and replace it with the new version. Some customers would see the improvements, but others would complain that the new tone was ... just different.
I can't imagine how much more complicated it would be if the LLM was not under our control and had evolved to better help the customer.
But maybe I'm just misunderstanding what you're intending to build?