
Postgres LISTEN/NOTIFY does not scale

https://www.recall.ai/blog/postgres-listen-notify-does-not-scale
295•davidgu•3d ago•109 comments

Show HN: Pangolin – Open source alternative to Cloudflare Tunnels

https://github.com/fosrl/pangolin
30•miloschwartz•4h ago•4 comments

What is Realtalk’s relationship to AI? (2024)

https://dynamicland.org/2024/FAQ/#What_is_Realtalks_relationship_to_AI
232•prathyvsh•11h ago•79 comments

Show HN: Open source alternative to Perplexity Comet

https://www.browseros.com/
160•felarof•9h ago•54 comments

Batch Mode in the Gemini API: Process More for Less

https://developers.googleblog.com/en/scale-your-ai-workloads-batch-mode-gemini-api/
21•xnx•3d ago•5 comments

FOKS: Federated Open Key Service

https://foks.pub/
177•ubj•13h ago•42 comments

Graphical Linear Algebra

https://graphicallinearalgebra.net/
180•hyperbrainer•10h ago•12 comments

Flix – A powerful effect-oriented programming language

https://flix.dev/
217•freilanzer•12h ago•89 comments

Measuring the impact of AI on experienced open-source developer productivity

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
514•dheerajvs•10h ago•330 comments

Belkin ending support for older Wemo products

https://www.belkin.com/support-article/?articleNum=335419
53•apparent•8h ago•47 comments

Red Hat Technical Writing Style Guide

https://stylepedia.net/style/
159•jumpocelot•11h ago•71 comments

Yamlfmt: An extensible command line tool or library to format YAML files

https://github.com/google/yamlfmt
25•zdw•3d ago•12 comments

Launch HN: Leaping (YC W25) – Self-Improving Voice AI

49•akyshnik•8h ago•25 comments

Turkey bans Grok over Erdoğan insults

https://www.politico.eu/article/turkey-ban-elon-musk-grok-recep-tayyip-erdogan-insult/
84•geox•3h ago•57 comments

How to prove false statements: Practical attacks on Fiat-Shamir

https://www.quantamagazine.org/computer-scientists-figure-out-how-to-prove-lies-20250709/
198•nsoonhui•16h ago•153 comments

eBPF: Connecting with Container Runtimes

https://h0x0er.github.io/blog/2025/06/29/ebpf-connecting-with-container-runtimes/
35•forxtrot•7h ago•0 comments

Regarding Prollyferation: Followup to "People Keep Inventing Prolly Trees"

https://www.dolthub.com/blog/2025-07-03-regarding-prollyferation/
40•ingve•3d ago•1 comments

Show HN: Cactus – Ollama for Smartphones

108•HenryNdubuaku•7h ago•45 comments

Grok 4

https://simonwillison.net/2025/Jul/10/grok-4/
178•coloneltcb•6h ago•148 comments

Analyzing database trends through 1.8M Hacker News headlines

https://camelai.com/blog/hn-database-hype/
117•vercantez•2d ago•61 comments

Not So Fast: AI Coding Tools Can Reduce Productivity

https://secondthoughts.ai/p/ai-coding-slowdown
57•gk1•2h ago•35 comments

Diffsitter – A Tree-sitter based AST difftool to get meaningful semantic diffs

https://github.com/afnanenayet/diffsitter
89•mihau•13h ago•26 comments

Matt Trout has died

https://www.shadowcat.co.uk/2025/07/09/ripples-they-cause-in-the-world/
142•todsacerdoti•19h ago•42 comments

Is Gemini 2.5 good at bounding boxes?

https://simedw.com/2025/07/10/gemini-bounding-boxes/
259•simedw•13h ago•58 comments

The ChompSaw: A Benchtop Power Tool That's Safe for Kids to Use

https://www.core77.com/posts/137602/The-ChompSaw-A-Benchtop-Power-Tool-Thats-Safe-for-Kids-to-Use
80•surprisetalk•3d ago•66 comments

Foundations of Search: A Perspective from Computer Science (2012) [pdf]

https://staffwww.dcs.shef.ac.uk/people/J.Marshall/publications/SFR09_16%20Marshall%20&%20Neumann_PP.pdf
4•mooreds•3d ago•0 comments

Show HN: Typeform was too expensive so I built my own forms

https://www.ikiform.com/
166•preetsuthar17•17h ago•86 comments

Final report on Alaska Airlines Flight 1282 in-flight exit door plug separation

https://www.ntsb.gov:443/investigations/Pages/DCA24MA063.aspx
131•starkparker•5h ago•142 comments

Radiocarbon dating reveals Rapa Nui not as isolated as previously thought

https://phys.org/news/2025-06-radiocarbon-dating-reveals-rapa-nui.html
17•pseudolus•3d ago•8 comments

Optimizing a Math Expression Parser in Rust

https://rpallas.xyz/math-parser/
127•serial_dev•17h ago•55 comments

Show HN: Cactus – Ollama for Smartphones

108•HenryNdubuaku•7h ago
Hey HN, Henry and Roman here - we've been building a cross-platform framework for deploying LLMs, VLMs, Embedding Models and TTS models locally on smartphones.

Ollama enables deploying LLMs locally on laptops and edge servers; Cactus enables deploying them on phones. Deploying directly on phones makes it possible to build AI apps and agents capable of phone use without breaking privacy, supports real-time inference with no network latency, and we have already seen personalised RAG pipelines for users, and more.

Apple and Google both moved into local AI models recently with the launches of the Apple Foundation Models framework and Google AI Edge respectively. However, both are platform-specific and only support specific models from each company. To this end, Cactus:

- Is available in Flutter, React Native & Kotlin Multiplatform for cross-platform developers, since most apps are built with these today.

- Supports any GGUF model you can find on Huggingface: Qwen, Gemma, Llama, DeepSeek, Phi, Mistral, SmolLM, SmolVLM, InternVLM, Jan Nano, etc.

- Accommodates everything from FP32 down to 2-bit quantized models, for better efficiency and less device strain.

- Supports MCP tool-calls to make models truly helpful (set reminders, search the gallery, reply to messages) and more.

- Falls back to big cloud models for complex, constrained or large-context tasks, ensuring robustness and high availability.

It's completely open source. Would love to have more people try it out and tell us how to make it great!

Repo: https://github.com/cactus-compute/cactus
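
For a flavour of what building on it looks like, here is a rough sketch from React Native. The package and function names below (cactus-react-native, initCactus, completion) are illustrative only, not the exact exports - see the repo for the real API:

  // Hypothetical sketch - names are illustrative, not the actual Cactus exports.
  import { initCactus } from 'cactus-react-native'; // assumed package name

  async function askLocally(prompt: string): Promise<string> {
    // Load a quantized GGUF model that was downloaded into the app's sandbox.
    const ctx = await initCactus({
      modelPath: 'models/qwen2.5-0.5b-instruct-q4_k_m.gguf', // any GGUF from Huggingface
      contextSize: 2048,
    });

    // Run a chat completion fully on-device - no network round trip.
    const result = await ctx.completion({
      messages: [{ role: 'user', content: prompt }],
      maxTokens: 256,
    });
    return result.text;
  }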

Comments

max-privatevoid•6h ago
They literally vendored llama.cpp and they STILL called it "Ollama for *". Georgi cannot be vindicated hard enough.
rshemet•6h ago
Didn't Ollama vendor llama.cpp too?

Most projects typically start with llama.cpp and then move on to their own proprietary kernels.

ttouch•6h ago
very good project!

can you tell us more about the use cases that you have in mind? I saw that you're able to run 1-4B models (which is impressive!)

rshemet•6h ago
Thank you! It goes without saying that the field is rapidly developing, so the use cases range from private AI assistant/companion apps, to internet-connectivity-independent copilots, to powering private wearables, etc.

We're currently working with a few projects in the space.

For a demo of a familiar chat interface, download https://apps.apple.com/gb/app/cactus-chat/id6744444212 or https://play.google.com/store/apps/details?id=com.rshemetsub...

For other applications, join the discord and stay tuned! :)

xnx•6h ago
Is there an .apk for Android?
rshemet•6h ago
Cactus is a framework - not the app itself. If you're looking for an Android demo, you can go to

https://play.google.com/store/apps/details?id=com.rshemetsub...

Otherwise, it's easy to build any of the example apps from the repo:

cd react/example && yarn && npx expo run:android

or

cd flutter/example && flutter pub get && flutter run

felarof•6h ago
This is cool!

We are working on an agentic browser (also launched today: https://news.ycombinator.com/item?id=44523409 :))

Right now we have a desktop version with ollama support, but we want to build a mobile chromium fork with local LLM support. Will check out cactus!

rshemet•6h ago
great stuff. (good timing for a post given all the comet news too :) )

DM me on BF - let's talk!

politelemon•6h ago
Very nice, good work. I think you should add the chat app links on the readme, so that visitors get a good idea of what the framework is capable of.

The performance is quite good, even on CPU.

However, I'm now trying it on a Pixel, and it's not using the GPU even when I enable it.

I do like this idea as I've been running models in termux until now.

Is the plan to make this app something similar to lmstudio for phones?

rshemet•6h ago
appreciate the feedback! Made the demo links more prominent on the README.

Some Android models don't support GPU acceleration; we'll be addressing that as we move to our own kernels.

The app itself is just a demonstration of Cactus performance. The underlying framework gives you the tools to build any local mobile AI experience you'd like.

yrcyrc•6h ago
how do i add RAG / personal assistant features on iOS?
rshemet•6h ago
you can plug in a vector DB and run Cactus embeddings for retrieval. Assuming you're using React Native, here's an example:

https://github.com/cactus-compute/cactus/tree/main/react#emb...

(Flutter works the same way)
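
Roughly, the retrieval side looks like this - an illustrative sketch only, where `embed` stands in for the Cactus embedding call linked above and the brute-force cosine search stands in for whatever vector DB you plug in:

  // Illustrative RAG sketch: embed documents, store vectors, retrieve by cosine similarity.
  type Doc = { text: string; vector: number[] };

  const cosine = (a: number[], b: number[]): number => {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
    return dot / (Math.sqrt(na) * Math.sqrt(nb));
  };

  async function buildIndex(texts: string[], embed: (t: string) => Promise<number[]>): Promise<Doc[]> {
    return Promise.all(texts.map(async (text) => ({ text, vector: await embed(text) })));
  }

  async function retrieve(query: string, index: Doc[], embed: (t: string) => Promise<number[]>, k = 3): Promise<string[]> {
    const q = await embed(query);
    return [...index]
      .sort((a, b) => cosine(b.vector, q) - cosine(a.vector, q))
      .slice(0, k)
      .map((d) => d.text); // prepend these snippets to the on-device LLM prompt
  }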

What are you building?

khalel•6h ago
What do you think about security? I mean, a model with full (or partial) access to the smartphone and internet. Even if it runs locally, isn't there still a risk that these models could gain full access to the internet and the device?
rshemet•5h ago
The models themselves live in an isolated sandbox. On top of that, each mobile app has its own sandbox - isolated from the phone's data or tools.

Both the model and the app only have access to the tools or data that you choose to give it. If you choose to give the model access to web search - sure, it'll have (read-only) access to internet data.

matthewolfe•5h ago
For argument's sake, suppose we live in a world where many high-quality models can be run on-device. Is there any concern from companies/model developers about exposing their proprietary weights to the end user? It's generally not difficult to intercept traffic (weights) sent to an app, or just reverse the app itself.
rshemet•5h ago
So far, our focus is on supporting models with fully open-sourced weights. Providers who are sensitive about their weights typically lock those weights up in their cloud and don't run their models locally on consumer devices anyway.

I believe there are some frameworks pioneering model encryption, but I think we're a few steps away from wide adoption.

bangaladore•4h ago
Simple answer is they won't send the model to the end user if they don't want it used outside their app.

This isn't really anything novel to LLMs or AI models. Part of the reason many previously-desktop applications moved to the cloud, or require cloud access, is to keep their sensitive IP off end users' devices.

deepdarkforest•5h ago
> However, both are platform-specific and only support specific models from the company

This is not true, as you are surely aware. Google AI Edge supports a lot of models, including any LiteRT model from Huggingface, PyTorch ones, etc. [0]. Additionally, it's not even platform-specific; it works on iOS too [1].

Why lie? I understand that your framework does more stuff like MCP, but I'm sure that's coming for Google's as well. I guess if the UX is really better it can work, but I would also say Ollama's use cases are quite different, because on desktop there's a big community of hobbyists who cook up their own little pipelines or just chat to LLMs with local models (apart from the desktop app devs). But on phones, imo, that segment is much smaller. App devs are more likely to use the first-party frameworks rather than third-party ones. I wouldn't even be surprised if Apple locks down some APIs at some point for safety/security reasons.

[0] https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inf...

[1] https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inf...

rshemet•5h ago
Thanks for the feedback. You're right to point out that Google AI Edge is cross-platform and more flexible than our phrasing suggested.

The core distinction is in the ecosystem: Google AI Edge runs tflite models, whereas Cactus is built for GGUF. This is a critical difference for developers who want to use the latest open-source models.

One major outcome of this is model availability. New open source models are released in GGUF format almost immediately. Finding or reliably converting them to tflite is often a pain. With Cactus, you can run new GGUF models on the day they drop on Huggingface.

Quantization level also plays a role. GGUF has mature support for quantization far below 8-bit. This is effectively essential for mobile. Sub-8-bit support in TFLite is still highly experimental and not broadly applicable.

Last, Cactus excels at CPU inference. While tflite is great, its peak performance often relies on specific hardware accelerators (GPUs, DSPs). GGUF is designed for exceptional performance on standard CPUs, offering a more consistent baseline across the wide variety of devices that app developers have to support.

deepdarkforest•5h ago
No worries.

GGUF is more suitable for the latest open-source models, I agree there. Q2/Q4 quantization will probably be critical as well, if we don't see a jump in RAM. But then again, I wonder when/if MediaPipe will support GGUF as well.

PS, I see you are in the latest YC batch? (below you mentioned BF). Good luck and have fun!

DarmokJalad1701•5h ago
I would say that while Google's MediaPipe can technically run any tflite model, it turned out to be a lot more difficult in practice with third-party models compared to the "officially supported" models like Gemma-3n. I was trying to set up a VLM inference pipeline using a SmolVLM model. Even after converting it to a tflite-compatible binary, I struggled to get it working, and then once it did work, it was super slow and was obviously missing some hardware acceleration.

I have not looked at OP's work yet, but if it makes the task easier, I would opt for that instead of Google's "MediaPipe" API.

throw777373•5h ago
Ollama runs on Android just fine via Termux. I use it with 5GB models. They even recently added an ollama package, so there is no longer any need to compile it from source.
rshemet•4h ago
True - but Cactus is not just an app.

We are a dev toolkit to run LLMs cross-platform locally in any app you like.

jadbox•4h ago
How does it work? How does one model on the device get shared across many apps? Does each app have its own inference SDK running, or is there one inference engine shared by many apps (like Ollama does)? If it's the latter, what's the communication protocol to the inference engine?
rshemet•4h ago
Great question. Currently, each app is sandboxed - so each model file is downloaded inside each app's sandbox. We're working on enabling file sharing across multiple apps so you don't have to redownload the model.

With respect to the inference SDK, yes you'll need to install the (react native/flutter) framework inside each app you're building.

The SDK is very lightweight (our own iOS app is <30MB which includes the inference SDK and a ton of other stuff)

teaearlgraycold•4h ago
Does this download models at runtime? I would have expected a different API for that. I understand that you don’t want to include a multi-gig model in your app. But the mobile flow is usually to block functionality with a progress bar on first run. Downloading inline doesn’t integrate well into that.

You’d want an API for downloading OR pulling from a cache. Return an identifier from that and plug it into the inference API.

rshemet•4h ago
Very good point - we've heard this before.

We're restructuring the model initialization API to point to a local file & exposing a separate abstracted download function that takes in a URL.

wrt downloading post-install: based on our feedback, this is indeed a preferred pattern (as opposed to bundling in large files).

We'll update the download API, thanks again.
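
The split we're moving toward looks roughly like this - placeholder names only, using expo-file-system for the download since the React Native example app is Expo-based; not the final API:

  // Step 1: download once (or reuse the cached copy), driving a first-run progress bar.
  import * as FileSystem from 'expo-file-system';

  const MODEL_URL = 'https://huggingface.co/<org>/<repo>/resolve/main/model-q4_k_m.gguf'; // hypothetical

  async function ensureModel(onProgress: (fraction: number) => void): Promise<string> {
    const dest = `${FileSystem.documentDirectory}model.gguf`;
    const info = await FileSystem.getInfoAsync(dest);
    if (info.exists) return dest; // cache hit - nothing to download

    const task = FileSystem.createDownloadResumable(MODEL_URL, dest, {}, (p) =>
      onProgress(p.totalBytesWritten / p.totalBytesExpectedToWrite)
    );
    await task.downloadAsync();
    return dest;
  }

  // Step 2: hand the local path to the (hypothetical) init call.
  // const ctx = await initCactus({ modelPath: await ensureModel(updateProgressBar) });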

teaearlgraycold•4h ago
Sounds good!
Uehreka•4h ago
This is one hell of an Emperor’s New Groove reference, well played: https://x.com/filmeastereggs/status/1637412071137759235
rshemet•4h ago
love this. So many layers deep, we just had a good laugh.
refulgentis•4h ago
Beware of this; it's a two-week-old project.

Idk who these people are and I am sure they have good intentions, but they're wrapping llama.cpp.

That's what "like Ollama" means when you're writing code. That's also why there's a ton of comments asking if it's a server or app or what (it's a framework that an app would be built to use, you can't have an app with a localhost server like ollama on Android & iOS)

There's plenty of projects much further ahead, and I don't appreciate the amount of times I've seen this project come up in conversation the past 24 hours, due to misleading assertions that looked LLM-written, and a rush to make marketing claims that are just stuff llama.cpp does for you.

rshemet•4h ago
reminds me of

- "You are, undoubtedly, the worst pirate i have ever heard of" - "Ah, but you have heard of me"

Yes, we are indeed a young project. Not two weeks, but a couple of months. Welcome to AI, most projects are young :)

Yes, we are wrapping llama.cpp. For now. Ollama, too, began by wrapping llama.cpp. That is the mission of open-source software: to enable the community to build on each other's progress.

We're enabling the first cross-platform in-app inference experience for GGUF models, and we're soon shipping our own inference kernels, fully optimized for mobile, to speed up performance. Stay tuned.

PS - we're up to good (source: trust us)

HenryNdubuaku•1h ago
Thanks for the comment, but:

1) The commit history goes back to April.

2) The llama.cpp licence is included in the repo where necessary, as Ollama does, until it is deprecated.

3) Flutter isolates behave like servers, and the Cactus code uses that.

refulgentis•48m ago
What does #3 mean?

Flutter isolates are like threads, and servers may use multithreading to handle requests, and Ollama is like a server in that it provides an API, and since we've shown both are servers, it's like Ollama?

Please do educate me on this, I'm fascinated.

When you're done there, let's say Flutter having isolates does mean you have a React Native and Flutter local LLM server.

What's your plan for your Android & iOS-only framework being a system server? Or alternatively, available at localhost for all apps to contact?

HenryNdubuaku•4h ago
Please feel free to join our Discord: https://discord.com/invite/bNurx3AXTJ
pj_mukh•3h ago
Amazing, this is so so useful.

Thank you especially for the phone model vs. tok/s breakdown. Do you have such tables for more models, even leaner than Gemma3 1B? How low can you go? Say, if I wanted to squeeze out 45 tok/s on an iPhone 13?

P.S: Also, I'm assuming the speeds stay consistent with react-native vs. flutter etc?

rshemet•3h ago
Thank you! We'll continue to add performance metrics as more data comes in.

A Qwen 2.5 500M will get you to ≈45 tok/sec on an iPhone 13. Inference speed is roughly inversely proportional to model size, so a model twice as large runs at about half the tok/sec.

Yes, speeds are consistent across frameworks, although (and don't quote me on this), I believe React Native is slightly slower because it interfaces with the C++ engine through a set of bridges.

Reebz•2h ago
Looking at the current benchmarks table, I was curious: what do you think is wrong with Samsung S25 Ultra?

Most of the standard mobile CPU benchmarks (GeekBench, AnTuTu, et al) show a 20-40% performance gain over S23/S24 Ultra. Also, this bucks the trend where most other devices are ranked appropriately (i.e. newer devices perform better).

Thanks for sharing your project.

rshemet•2h ago
Great observation - this data is not from a controlled environment; these are metrics from real-world Cactus Chat usage (we only collect tok/sec telemetry).

S25 is an outlier that surprised us too.

I got $10 on S25 climbing back up to the top of the rankings as more data comes in :)

pickettd•2h ago
I also want to add that I really appreciate the benchmarks.

When I was working with RAG via llama.cpp through RN early last year, I had pretty acceptable tok/sec results up through 7-8B quantized models (on phones like the S24+ and iPhone 15 Pro). MLC was definitely higher tok/sec, but it is really tough to beat the community support and availability of the GGUF ecosystem.

smcleod•2h ago
FYI, I see you have SmolLM2; it was superseded by SmolLM3 this week!

Would be great to have a few larger models to choose from too: Qwen 3 4B, 8B, etc.

K0balt•2h ago
Very cool. Looks like it might be practical to run 7B models at Q4 on my phone. That would make it truly useful!
Scene_Cast2•1h ago
Does this support openrouter?
rshemet•1h ago
hot off the press in our latest feature release :)

we support cloud fallback as an add-on feature. This lets us support vision and audio in addition to text.
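
In shape, the fallback is roughly this - placeholder types and names, not the actual Cactus API:

  // Local-first completion with cloud fallback; everything here is a placeholder.
  type CompletionFn = (prompt: string) => Promise<string>;

  async function completeWithFallback(
    prompt: string,
    local: CompletionFn,  // on-device GGUF inference
    cloud: CompletionFn,  // hosted model, e.g. for vision/audio or huge contexts
  ): Promise<string> {
    try {
      return await local(prompt); // private, offline-capable path first
    } catch {
      return await cloud(prompt); // fall back when the local model can't handle the request
    }
  }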

ipsum2•1h ago
GGUF is easy to implement, but you'd probably find better performance with tflite on mobile thanks to its custom XNNPACK kernels. Performance is pretty critical on low-power devices.