Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
2•momciloo•19m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
2•Keyframe•23m ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
2•xeouz•45m ago•1 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
266•isitcontent•20h ago•33 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
365•vecti•22h ago•166 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
338•eljojo•23h ago•209 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
17•sandGorgon•2d ago•5 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
3•anipaleja•2h ago•0 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
80•phreda4•19h ago•15 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
94•antves•2d ago•70 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
3•sam256•4h ago•1 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
52•nwparker•1d ago•11 comments

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
27•dchu17•1d ago•12 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
154•bsgeraci•1d ago•64 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
18•denuoweb•2d ago•2 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
7•sakanakana00•5h ago•1 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•5h ago•1 comments

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
19•NathanFlurry•1d ago•9 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
3•nmfccodes•2h ago•1 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
2•melvinzammit•7h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•8h ago•2 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
173•vkazanov•2d ago•49 comments

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•1d ago•8 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
10•michaelchicory•9h ago•3 comments

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly

https://rahuljaguste.github.io/Nethack_Falcons_Eye/
6•rahuljaguste•19h ago•1 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
24•JoshPurtell•1d ago•5 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
17•keepamovin•10h ago•6 comments

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
25•Shubham_Amb•1d ago•2 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•13h ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
4•ambitious_potat•14h ago•4 comments

Show HN: Cactus – Ollama for Smartphones

https://github.com/cactus-compute/cactus
231•HenryNdubuaku•7mo ago
Hey HN, Henry and Roman here - we've been building a cross-platform framework for deploying LLMs, VLMs, Embedding Models and TTS models locally on smartphones.

Ollama enables deploying LLMs locally on laptops and edge servers; Cactus enables deploying them on phones. Deploying directly on phones makes it possible to build AI apps and agents capable of phone use without breaking privacy, supports real-time inference with no network latency, and we have already seen personalised RAG pipelines for users, among other things.

Apple and Google both moved into local AI models recently with the launch of Apple Foundation Frameworks and Google AI Edge respectively. However, both are platform-specific and only support specific models from each company. To this end, Cactus:

- Is available in Flutter, React-Native & Kotlin Multi-platform for cross-platform developers, since most apps are built with these today.

- Supports any GGUF model you can find on Huggingface: Qwen, Gemma, Llama, DeepSeek, Phi, Mistral, SmolLM, SmolVLM, InternVLM, Jan Nano, etc.

- Accommodates models from FP32 down to 2-bit quantization, for better efficiency and less device strain.

- Has MCP tool-calls to make models performant and truly helpful (set reminders, gallery search, reply to messages), and more.

- Falls back to big cloud models for complex, constrained, or large-context tasks, ensuring robustness and high availability.

It's completely open source. Would love to have more people try it out and tell us how to make it great!

Repo: https://github.com/cactus-compute/cactus
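
To give a sense of the developer experience described above, here is a rough Kotlin sketch. Every name in it (CactusLM, download, init, complete) is a hypothetical placeholder for illustration, not the actual Cactus API; the repo has the real interface.

    // Hypothetical API sketch: these names are illustrative placeholders,
    // not the actual Cactus interface (see the repo for the real one).
    interface CactusLM {
        suspend fun download(url: String, filename: String): Boolean
        suspend fun init(filename: String, contextSize: Int = 2048): Boolean
        suspend fun complete(prompt: String, maxTokens: Int = 200): String
    }

    // What an app's "local assistant" path could look like with such an API.
    suspend fun runLocalAssistant(lm: CactusLM, modelUrl: String, userPrompt: String): String {
        val file = "model.gguf"
        lm.download(url = modelUrl, filename = file)  // fetch the GGUF once, e.g. from Hugging Face
        lm.init(filename = file)                      // load it into the on-device runtime
        return lm.complete(userPrompt)                // fully local inference, no network round-trip
    }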

Comments

max-privatevoid•7mo ago
They literally vendored llama.cpp and they STILL called it "Ollama for *". Georgi cannot be vindicated hard enough.
rshemet•7mo ago
didn't Ollama vendor Llama cpp too?

Most projects typically start with llama.cpp and then move away to proprietary kernels

ttouch•7mo ago
very good project!

can you tell us more about the use cases that you have in mind? I saw that you're able to run 1-4B models (which is impressive!)

rshemet•7mo ago
Thank you! it goes without saying that the field is rapidly developing, so the use cases range from private AI assistant/companion apps to internet connectivity-independent copilots to powering private wearables, etc.

We're currently working with a few projects in the space.

For a demo of a familiar chat interface, download https://apps.apple.com/gb/app/cactus-chat/id6744444212 or https://play.google.com/store/apps/details?id=com.rshemetsub...

For other applications, join the discord and stay tuned! :)

xnx•7mo ago
Is there an .apk for Android?
rshemet•7mo ago
Cactus is a framework - not the app itself. If you're looking for an Android demo, you can go to

https://play.google.com/store/apps/details?id=com.rshemetsub...

Otherwise, it's easy to build any of the example apps from the repo:

cd react/example && yarn && npx expo run:android

or

cd flutter/example && flutter pub get && flutter run

nunobrito•7mo ago
Thanks, looking fantastic so far.
felarof•7mo ago
This is cool!

We are working on an agentic browser (also launched today https://news.ycombinator.com/item?id=44523409 :))

Right now we have a desktop version with ollama support, but we want to build a mobile chromium fork with local LLM support. Will check out cactus!

rshemet•7mo ago
great stuff. (good timing for a post given all the comet news too :) )

DM me on BF - let's talk!

politelemon•7mo ago
Very nice, good work. I think you should add the chat app links on the readme, so that visitors get a good idea of what the framework is capable of.

The performance is quite good, even on CPU.

However I'm now trying it on a pixel, and it's not using GPU if I enable it.

I do like this idea as I've been running models in termux until now.

Is the plan to make this app something similar to lmstudio for phones?

rshemet•7mo ago
appreciate the feedback! Made the demo links more prominent on the README.

Some Android models won't support GPU hardware; we'll be addressing that as we move to our own kernels.

The app itself is just a demonstration of Cactus performance. The underlying framework gives you the tools to build any local mobile AI experience you'd like.

yrcyrc•7mo ago
how do i add RAG / personal assistant features on iOS?
rshemet•7mo ago
you can plug in a vector DB and run Cactus embeddings for retrieval. Assuming you're using React Native, here's an example:

https://github.com/cactus-compute/cactus/tree/main/react#emb...

(Flutter works the same way)

What are you building?
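
For readers unfamiliar with the retrieval pattern described above, here is a minimal Kotlin sketch. The embed callback is a hypothetical stand-in for the Cactus embedding call (the linked README has the real API), and the brute-force in-memory index is purely illustrative; a real app would use a vector DB.

    import kotlin.math.sqrt

    // Hypothetical stand-in for the framework's embedding call.
    typealias Embed = suspend (text: String) -> FloatArray

    data class Doc(val text: String, val vector: FloatArray)

    fun cosine(a: FloatArray, b: FloatArray): Float {
        var dot = 0f; var na = 0f; var nb = 0f
        for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
        return dot / (sqrt(na) * sqrt(nb))
    }

    // Index: embed each note/message once and keep the vectors.
    suspend fun buildIndex(embed: Embed, texts: List<String>): List<Doc> =
        texts.map { Doc(it, embed(it)) }

    // Retrieve: embed the query, rank stored chunks by cosine similarity, and
    // prepend the top matches to the prompt before calling the local LLM.
    suspend fun retrieve(embed: Embed, index: List<Doc>, query: String, k: Int = 3): List<String> {
        val q = embed(query)
        return index.sortedByDescending { cosine(q, it.vector) }.take(k).map { it.text }
    }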

khalel•7mo ago
What do you think about security? I mean, a model with full (or partial) access to the smartphone and internet. Even if it runs locally, isn't there still a risk that these models could gain full access to the internet and the device?
rshemet•7mo ago
The models themselves live in an isolated sandbox. On top of that, each mobile app has its own sandbox - isolated from the phone's data or tools.

Both the model and the app only have access to the tools or data that you choose to give it. If you choose to give the model access to web search - sure, it'll have (read-only) access to internet data.

matthewolfe•7mo ago
For argument's sake, suppose we live in a world where many high-quality models can be run on-device. Is there any concern from companies/model developers about exposing their proprietary weights to the end user? It's generally not difficult to intercept traffic (weights) sent to an app, or just reverse the app itself.
rshemet•7mo ago
So far, our focus is on supporting models with fully open-sourced weights. Providers who are sensitive about their weights typically lock those weights up in their cloud and don't run their models locally on consumer devices anyway.

I believe there are some frameworks pioneering model encryption, but i think we're a few steps away from wide adoption.

bangaladore•7mo ago
Simple answer is they won't send the model to the end user if they don't want it used outside their app.

This isn't really anything novel to LLMs or AI models. Part of the reason many previously desktop applications moved to the cloud or require cloud access is to keep their sensitive IP off the end user's device.

deepdarkforest•7mo ago
[flagged]
rshemet•7mo ago
Thanks for the feedback. You're right to point out that Google AI Edge is cross-platform and more flexible than our phrasing suggested.

The core distinction is in the ecosystem: Google AI Edge runs tflite models, whereas Cactus is built for GGUF. This is a critical difference for developers who want to use the latest open-source models.

One major outcome of this is model availability. New open source models are released in GGUF format almost immediately. Finding or reliably converting them to tflite is often a pain. With Cactus, you can run new GGUF models on the day they drop on Huggingface.

Quantization level also plays a role. GGUF has mature support for quantization far below 8-bit. This is effectively essential for mobile. Sub-8-bit support in TFLite is still highly experimental and not broadly applicable.

Last, Cactus excels at CPU inference. While tflite is great, its peak performance often relies on specific hardware accelerators (GPUs, DSPs). GGUF is designed for exceptional performance on standard CPUs, offering a more consistent baseline across the wide variety of devices that app developers have to support.

deepdarkforest•7mo ago
No worries.

GGUF is more suitable for the latest open-source models, I agree there. Q2/Q4 quants will probably be critical as well, if we don't see a jump in RAM. But then again I wonder when/if MediaPipe will support GGUF as well.

PS, I see you are in the latest YC batch? (below you mentioned BF). Good luck and have fun!

blks•7mo ago
First paragraph reads like chat gpt response.
poly2it•7mo ago
Not just the first paragraph, the whole response reads like LLM output.
DarmokJalad1701•7mo ago
I would say that while Google's MediaPipe can technically run any tflite model, it turned out to be a lot more difficult to do in practice with third-party models compared to the "officially supported" models like Gemma-3n. I was trying to set up a VLM inference pipeline using a SmolVLM model. Even after converting it to a tflite-compatible binary, I struggled to get it working, and then once it did work, it was super slow and was obviously missing some hardware acceleration.

I have not looked at OP's work yet, but if it makes the task easier, I would opt for that instead of Google's "MediaPipe" API.

pj_mukh•7mo ago
Does Google AI Edge have React Native support? Doesn't seem like it? Cactus does though.
dang•7mo ago
> as you are for sure aware

> Why lie?

Whoa—that's way too aggressive for this forum and definitely against the site guidelines. Could you please review them (https://news.ycombinator.com/newsguidelines.html) and take the spirit of this site more to heart? We'd appreciate it. You can always make your substantive points while doing that.

Note this one: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

throw777373•7mo ago
Ollama runs on Android just fine via Termux. I use it with 5GB models. They even recently added an ollama package, so there is no longer a need to compile it from source code.
rshemet•7mo ago
True - but Cactus is not just an app.

We are a dev toolkit to run LLMs cross-platform locally in any app you like.

jadbox•7mo ago
How does it work? How does one model on the device get shared to many apps? Does each app have its own inference SDK running, or is there one inference engine shared by many apps (like ollama does)? If it's the latter, what's the communication protocol to the inference engine?
rshemet•7mo ago
Great question. Currently, each app is sandboxed - so each model file is downloaded inside each app's sandbox. We're working on enabling file sharing across multiple apps so you don't have to redownload the model.

With respect to the inference SDK, yes you'll need to install the (react native/flutter) framework inside each app you're building.

The SDK is very lightweight (our own iOS app is <30MB which includes the inference SDK and a ton of other stuff)

pogue•7mo ago
I would like to see it as an app, tbh! If I could run it as an APK with a nice GUI interface for picking different models to run, that would be a killer feature.
rshemet•7mo ago
https://play.google.com/store/apps/details?id=com.rshemetsub...
pogue•7mo ago
Ah ha!
v5v3•7mo ago
Didn't know that. Thanks
teaearlgraycold•7mo ago
Does this download models at runtime? I would have expected a different API for that. I understand that you don’t want to include a multi-gig model in your app. But the mobile flow is usually to block functionality with a progress bar on first run. Downloading inline doesn’t integrate well into that.

You’d want an API for downloading OR pulling from a cache. Return an identifier from that and plug it into the inference API.

rshemet•7mo ago
Very good point - we've heard this before.

We're restructuring the model initialization API to point to a local file & exposing a separate abstracted download function that takes in a URL.

Re: downloading post-install: based on the feedback we've received, this is indeed the preferred pattern (as opposed to bundling in large files).

We'll update the download API, thanks again.

teaearlgraycold•7mo ago
Sounds good!
Uehreka•7mo ago
This is one hell of an Emperor’s New Groove reference, well played: https://x.com/filmeastereggs/status/1637412071137759235
rshemet•7mo ago
love this. So many layers deep, we just had a good laugh.
refulgentis•7mo ago
[flagged]
rshemet•7mo ago
reminds me of

- "You are, undoubtedly, the worst pirate i have ever heard of" - "Ah, but you have heard of me"

Yes, we are indeed a young project. Not two weeks, but a couple of months. Welcome to AI, most projects are young :)

Yes, we are wrapping llama.cpp. For now. Ollama too began by wrapping llama.cpp. That is the mission of open-source software - to enable the community to build on each other's progress.

We're enabling the first cross-platform in-app inference experience for GGUF models and we're soon shipping our own inference kernels fully optimized for mobile to speed up the performance. Stay tuned.

PS - we're up to good (source: trust us)

HenryNdubuaku•7mo ago
Thanks for the comment, but:

1) The commit history goes back to April.

2) The llama.cpp licence is included in the repo where necessary, as Ollama does, until it is deprecated.

3) Flutter isolates behave like servers, and the Cactus code uses that.

refulgentis•7mo ago
[flagged]
HenryNdubuaku•7mo ago
We are following Ollama's design, but not verbatim due to apps being sandboxed.

Phones are resource-constrained; we saw significant battery overhead with in-process HTTP listeners, so we stuck with simple stateful isolates in Flutter, and we are exploring a standalone server app that others can talk to for React Native.

For model sharing with the current setup:

iOS - We are working towards writing the model into an App Group container; it's tricky, but we are working around it.

Android - We are working towards prompting the user once for a SAF directory (e.g., /Download/llm_models), saving the model there, then publishing a ContentProvider URI for zero-copy reads (see the sketch below).

We are already writing more mobile-friendly kernels and tensors, but GGML/GGUF is widely supported and porting it was an easy way to get started and collect feedback; we will move away from it completely in under 2 months.

Anything else you would like to know?
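
For the Android path described above, a rough sketch of the SAF flow using standard Android APIs; the filename and directory handling are illustrative, and this only covers shared storage for the model file, not exposing an API to other apps.

    import android.content.Intent
    import android.net.Uri
    import androidx.activity.ComponentActivity
    import androidx.activity.result.contract.ActivityResultContracts
    import androidx.documentfile.provider.DocumentFile

    class ModelDirPicker(private val activity: ComponentActivity) {

        // Ask the user once for a shared directory (e.g. Download/llm_models) via SAF.
        private val picker = activity.registerForActivityResult(
            ActivityResultContracts.OpenDocumentTree()
        ) { treeUri: Uri? ->
            treeUri ?: return@registerForActivityResult
            // Persist the grant so access survives app restarts.
            activity.contentResolver.takePersistableUriPermission(
                treeUri,
                Intent.FLAG_GRANT_READ_URI_PERMISSION or Intent.FLAG_GRANT_WRITE_URI_PERMISSION
            )
            // Save the model into that directory, or reuse it if it is already there.
            val dir = DocumentFile.fromTreeUri(activity, treeUri) ?: return@registerForActivityResult
            val model = dir.findFile("model.gguf")
                ?: dir.createFile("application/octet-stream", "model.gguf")
            // Another app that has been granted this URI can read the same bytes via
            // contentResolver.openFileDescriptor(model!!.uri, "r") without copying the file.
        }

        fun prompt() = picker.launch(null)
    }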

refulgentis•7mo ago
How does writing a model into an App Group container enable your framework to enable an app to enable a local LLM server that 3rd party apps can make calls to on iOS?[^1]

How does writing a model into a shared directory on Android enable a local LLM server that 3rd party apps can make calls to?[^2]

How does writing your own kernels get you off GGUF in 2 months? GGUF is a storage format. You use kernels to do things with the numbers you get from it.

I thought GGUF was an advantage? Now it's something you're basically done using?

I don't think you should continue this conversation. As easy as it is to get your work out there, it's just as easy to build a record of stretching the truth over and over again.

Best of luck, and I mean it. Just, memento mori: be honest and humble along the way. This is something you will look back on in a year and grimace.

[^1] App group containers only work between apps signed from the same Apple developer account. Additionally, that is shared storage, not a way to provide APIs to other apps.

[^2] SAF = Storage Access Framework, that is shared storage, not a way to provide APIs to other apps.

HenryNdubuaku•7mo ago
[flagged]
refulgentis•7mo ago
[flagged]
jeffhuys•7mo ago
The best way to go about this is realizing that there are more people reading this thread who will make their own assumptions.

Not staying professional and just answering the questions, and just doing "aight im outta here" when it gets a little bit harder is not a good look; it seems like you can't defend your own project.

Just FYI.

HenryNdubuaku•7mo ago
Please feel free to join our Discord: https://discord.com/invite/bNurx3AXTJ
pj_mukh•7mo ago
Amazing, this is so so useful.

Thank you especially for the phone model vs tok/s breakdown. Do you have such tables for more models? For models even leaner than Gemma3 1B. How low can you go? Say if I wanted to tweak out 45toks/s on an iPhone 13?

P.S: Also, I'm assuming the speeds stay consistent with react-native vs. flutter etc?

rshemet•7mo ago
thank you! We're continuing to add performance metrics as more data comes in.

A Qwen 2.5 500M will get you to ≈45 tok/sec on an iPhone 13. Inference speed is roughly inversely proportional to model size.

Yes, speeds are consistent across frameworks, although (and don't quote me on this), I believe React Native is slightly slower because it interfaces with the C++ engine through a set of bridges.
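
As a back-of-the-envelope illustration of that inverse-proportionality rule of thumb (illustrative only, not a benchmark claim):

    // Rough inverse-proportional scaling from the comment above; illustrative only.
    fun estimateTokPerSec(knownParamsB: Double, knownTokPerSec: Double, targetParamsB: Double): Double =
        knownTokPerSec * knownParamsB / targetParamsB

    fun main() {
        // If a 0.5B model does ~45 tok/s on an iPhone 13, a 1B model lands near ~22 tok/s.
        println(estimateTokPerSec(knownParamsB = 0.5, knownTokPerSec = 45.0, targetParamsB = 1.0))
    }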

Reebz•7mo ago
Looking at the current benchmarks table, I was curious: what do you think is wrong with Samsung S25 Ultra?

Most of the standard mobile CPU benchmarks (GeekBench, AnTuTu, et al) show a 20-40% performance gain over S23/S24 Ultra. Also, this bucks the trend where most other devices are ranked appropriately (i.e. newer devices perform better).

Thanks for sharing your project.

rshemet•7mo ago
great observation - this data is not from a controlled environment; these are metrics from our Cactus Chat use (we only collect tok/sec telemetry).

S25 is an outlier that surprised us too.

I got $10 on S25 climbing back up to the top of the rankings as more data comes in :)

pickettd•7mo ago
I also want to add on that I really appreciate the benchmarks.

When I was working with RAG and llama.cpp through RN early last year, I had pretty acceptable tok/sec results up through 7-8B quantized models (on phones like the S24+ and iPhone 15 Pro). MLC was definitely higher tok/sec, but it is really tough to beat the community support and availability in the GGUF ecosystem.

smcleod•7mo ago
FYI I see you have SmolLM2; this was replaced with SmolLM3 this week!

Would be great to have a few larger models to choose from too: Qwen 3 4B, 8B, etc.

rshemet•7mo ago
in the app you mean?

Adding shortly!

K0balt•7mo ago
Very cool. Looks like it might be practical to run 7B models at Q4 on my phone. That would make it truly useful!
Scene_Cast2•7mo ago
Does this support openrouter?
rshemet•7mo ago
hot off the press in our latest feature release :)

we support cloud fallback as an add-on feature. This lets us support vision and audio in addition to text.

ipsum2•7mo ago
GGUF is easy to implement, but you'd probably find better performance with tflite on mobile thanks to its custom XNNPACK kernels. Performance is pretty critical on low-power devices.
HenryNdubuaku•7mo ago
We are writing our own backend, but tflite (now called LiteRT) was not faster than GGML when we tested it, and GGML is already well supported. But we are moving away from it completely anyway.
tderflinger•7mo ago
Great project! I will try it out. :)
azinman2•7mo ago
“ Is available in Flutter, React-Native & Kotlin Multi-platform for cross-platform developers, since most apps are built with these today.”

Is this really true? Where are these stats coming from?

pzo•7mo ago
Probably they mean new apps. Since Kotlin Multiplatform on Android is just native Android, and Android's share is something like 70% of devices, that alone is at least 50% of the mobile app market. If you add Flutter and React Native there is not much left: only games built with Unity and Unreal. I see far fewer iOS jobs these days.
realjoe•7mo ago
exciting
HenryNdubuaku•7mo ago
Thanks!
pzo•7mo ago
Is this using only llama.cpp as the inference engine? How is the NPU and GPU support these days? Not sure if LLMs can run on the NPU, but many models like STT, TTS, and vision can often run much faster on the Apple NPU.
liuliu•7mo ago
You don't need to guess: https://github.com/cactus-compute/cactus/tree/main/cpp
neurostimulant•7mo ago
This is great!

It would be great if the local LLM had access to local tools you can enable/disable as needed (e.g. via customizable profiles). Simple tools such as fetching a URL, file access, messaging, calendar, etc. would be very useful, though I'm not sure if the input token limit is large enough to allow this. Even better if it can somehow do web search, but I understand it would be hard to do for free.

Also, how cool would it be if you could expose an OpenAI-compatible API that can be accessed from other devices on your local network? Imagine turning your old phones into local LLM servers. That would be very cool.

By the way, I can't figure out how to clear previous chats data. Is it hidden somewhere?

rshemet•7mo ago
no, good observation - not hidden; we don't have a "clear conversation" button.

to your previous point - Cactus fully supports tool calling (for models that have been instruction-trained accordingly, e.g. Qwen 1.7B)

for "turning your old phones into local llm servers", Cactus is likely not the best tool. We'd recommend something like actual Ollama or Exo

v5v3•7mo ago
Do the community tools in Ollama work in Cactus? (Just python scripts I think).
rshemet•7mo ago
say more about "community tools"?
nunobrito•7mo ago
I've installed the Android version from https://play.google.com/store/apps/details?id=com.rshemetsub...

It is fantastic. Compared to another program I had installed a year ago, the speed of processing and answering is really good and accurate. Was able to ask mathematical questions, basic translation between different languages and even trivia about movies released almost 30 years ago.

Things to improve: 1) sometimes the question would get stuck on the last phrase and keep repeating it without end. 2) The chat does not scroll the window to follow the answer and we have to scroll manually.

In either case, excellent start. It is without doubt the fastest offline LLM that I've seen working on this phone.

rshemet•7mo ago
thank you! Very kind feedback, and we'll add your feedback to our to-dos.

re: "question would get stuck on the last phrase and keep repeating it without end." - that's a limitation of the model i'm afraid. Smaller models tend to do that sometimes.

anupj•7mo ago
Running LLMs, VLMs, and TTS models locally on smartphones is quietly redefining what 'edge AI' means: suddenly, the edge is in your pocket, not just at the network boundary. The next wave of apps will be built by those who treat mobile as the new AI server.
rshemet•7mo ago
that's our mission! if you are passionate about the space, we look forward to your contributions!
ekianjo•7mo ago
appreciate it if you can provide an APK that does not require Google Play services to run...
awaseem•7mo ago
This is actually crazy. The API is so simple! I tried to do this on Swift using LLM.swift and it went okay, excited to try this on RN
rshemet•7mo ago
looking forward to your feedback!
pxc•7mo ago
This is a cool app! I'm happy to play with it. Some feedback:

1. The lack of a dark mode is an accessibility issue for me. I have a genetic condition that causes severe light sensitivity and special difficulty with contrast. The only way for me to achieve sufficient contrast without uncomfortable and blinding brightness is dark mode, so at present I can only use your app by disabling dark mode and inverting colors across my phone. This is of course not ideal because it ruins photos in other apps, and I end up with some unavoidable very bright white "hotspots" on my phone that I don't normally have when I can just use dark mode. Relatedly, the contrast for some of the text in the app is low to the point of being practically unreadable for me (getting enough contrast with it similarly requires cranking up the brightness). :(

2. I tried downloading a few other models, namely Jan Nano and SmolLM3, using the GGUF link download functionality, but every time I select them, the app just immediately crashes.

I understand that the chat app on the Play Store is basically just a demo for the framework, and if I were really using it I would be in charge of my own theming and downloading the required models and so on, but these still seem worth fixing to me.

wanderingmind•7mo ago
I understand this is targeted towards developers, but can someone explain why I should go through this complex install process instead of just using ChatterUI? It can handle the same GGUF format and works great with Gemma and Qwen. What kind of use cases am I missing?
mounir-portia•6mo ago
Love having this running on my Android phone. Reckon I am going to use it to automate some task reminders I can do during my commute!