The link goes to our GitHub Manifesto. It explains the 'Why' behind aurintex: I believe the future of 'always-on' AI companions must be built on a foundation of absolute trust.
The default "Cloud-First" model can't provide this. So I'm proposing and building an alternative based on two pillars: a *"Fully Functional Offline"* design and an *"Open Core"* trust model.
The `README` on GitHub explains this mission in detail. (The landing page, aurintex.com, is linked from there.)
I've cleared my entire afternoon and evening, and I'm here for your brutally honest feedback on this approach.
Is this a trust model you could actually get behind for an 'always-on' AI?
(P.S. This is the reboot of my 'Show HN' from Tuesday. My original launch failed because my 0-day-old account got rate-limited and I couldn't post this context. My mistake, and I appreciate you giving this a second look.)