
Gemma 4 on iPhone

https://apps.apple.com/nl/app/google-ai-edge-gallery/id6749645337
254•janandonly•3h ago

Comments

hadrien01•2h ago
Is it me or does the App Store website look... fake? The text in the header ("Productiviteit", "Alleen voor iPhone") looks pixelated, like it was edited in Paint; the header background is flickering; the app icon and screenshots are very low quality; and the title of the website is incomplete ("App Store voor iPho...").
piperswe•2h ago
What browser are you using? I don't see any of this behavior on Firefox...
hadrien01•2h ago
Firefox on Windows, but it looks about the same in Edge

Screenshot of the header: https://i.imgur.com/4abfGYF.png

morpheuskafka•2h ago
It looks like there is some sort of glow effect on the text that isn't rendering right on your browser? It arguably doesn't have the best contrast, but seems to be as intended in Safari 26.3. Looks similar on Chrome macOS too: https://imgur.com/yq5PrKm.
t-sauer•2h ago
Renders equally weird for me on Firefox on Windows 11. Firefox on macOS looks good, though.

Edit: Seems like mix-blend-mode: plus-lighter is bugged in Firefox on Windows https://jsfiddle.net/bjg24hk9/

throwatdem12311•2h ago
Issues caused by a low-effort localization?

On my iPhone it opens in the App Store app, so it looks fine to me.

j0hax•2h ago
Everything renders crystal clear with Firefox on GrapheneOS.
giarc•2h ago
It's the Dutch version, see /nl/ in the URL.

If you just go to https://apps.apple.com/ it does look better, but I agree, still a bit "off".

ezfe•2h ago
Nothing weird on my side
lateforwork•54m ago
Here's the US version of the same page: https://apps.apple.com/us/app/google-ai-edge-gallery/id67496...

The design quality is still poor. But that's the new Apple. Design is no longer one of their core strengths.

pmarreck•2h ago
Impressive model, for sure. I've been running it on my Mac, and now I get to have it locally on my iPhone? I need to test this. Wait, it does agent skills and mobile actions, all local to the phone? Whaaaat? (Have to check it out later! Anyone have any tips yet?)

I don't normally do the whole "abliterated" thing (dealignment) but after discovering https://github.com/p-e-w/heretic , I was too tempted to try it with this model a couple days ago (made a repo to make it easier, actually) https://github.com/pmarreck/gemma4-heretical and... Wow. It worked. And... Not having a built-in nanny is fun!

It's also possible to make an MLX version of it, which runs a little faster on Macs, but won't work through Ollama unfortunately. (LM Studio maybe.)

Runs great on my M4 MacBook Pro w/128GB and likely also runs fine under 64GB... machines with less memory might require lower quantizations.
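A back-of-the-envelope sketch of why memory headroom tracks quantization, assuming the 31B dense parameter count mentioned elsewhere in the thread (weights only; KV cache and runtime overhead come on top):

```python
# Approximate weight memory for a 31B-parameter dense model at
# different quantization levels. Weights only -- real usage adds
# KV cache and runtime overhead on top of this.

def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory in (decimal) GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"{name}: ~{weight_gb(31, bits):.0f} GB")
# FP16: ~62 GB, Q8: ~31 GB, Q4: ~16 GB -- which is why 128 GB is
# comfortable, 64 GB is fine, and smaller machines want lower quants.
```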

I specifically like dealigned local models because if I have to get my thoughts policed when playing in someone else's playground, like hell am I going to be judged while messing around in my own local open-source one too. And there's a whole set of ethically-justifiable but rule-flagging conversations (loosely categorizable as things like "sensitive", "ethically-borderline-but-productive" or "violating sacred cows") that are now possible with this, and at a level never before possible until now.

Note: I tried to hook this one up to OpenClaw and ran into issues

To answer the obvious question: yes, this sort of thing enables bad actors more (as do many other tools). Fortunately, there are far more good actors out there, and bad actors don't listen to rules that good actors subject themselves to anyway.

c2k•2h ago
I run mlx models with omlx[1] on my mac and it works really well.

[1] https://github.com/jundot/omlx

magospietato•2h ago
Haven't built anything on the agent skills platform yet, but it's pretty cool imo.

On Android the sandbox loads an index.html into a WebView, with standardized string I/O to the harness via some window properties. You can even return a rendered HTML page.

Definitely hacked together, but feels like an indication of what an edge compute agentic sandbox might look like in future.

barbazoo•2h ago
> And there's a whole set of ethically-justifiable but rule-flagging conversations (loosely categorizable as things like "sensitive", "ethically-borderline-but-productive" or "violating sacred cows") that are now possible with this, and at a level never before possible until now.

I checked the abliterate script and I don't yet understand what it does or what the result is. What are the conversations this enables?

throwuxiytayq•1h ago
The in-ter-net is for porn
rav3ndust•1h ago
that song is going to be stuck in my head all day now. lol
spijdar•1h ago
Realistically, a lot of people do this for porn.

In my experience, though, it's necessary to do anything security related. Interestingly, the big models have fewer refusals for me when I ask e.g. "in <X> situation, how do you exploit <Y>?", but local models will frequently flat out refuse, unless the model has been abliterated.

tredre3•31m ago
From what I've seen, Gemma 4 doesn't refuse a lot regarding sex; it only needs a little nudging in the right direction sometimes.

But it does refuse to be critical of the usual topics: Israel, Islam, trans issues, or race.

So wanting to discuss one of those is the real reason people would use an uncensored model.

pmarreck•1h ago
1) Coming up with any valid criticism of Islam at all (for some reason, criticisms of Christianity or Judaism are perfectly allowed even with public models!).

2) Asking questions about sketchy things. Simply asking should not be censored.

3) I don't use it for this, but porn or foul language.

4) Imitating or representing a public figure is often blocked.

5) Asking security-related questions when you are trying to do security.

6) For those who have experienced it: using AI to deal with traumatic experiences that are illegal to even describe.

Many other instances.

peyton•7m ago
The manufacturing of biologics can be heavily censored to an absurd degree. I don’t know about Gemma 4 in particular.
eloisant•1h ago
I tried it on my mac, for coding, and I wasn't really impressed compared to Qwen.

I guess there are things it's better at?

nkohari•42m ago
You're comparing apples to oranges there. Qwen 3.5 is a much larger model at 397B parameters vs. Gemma's 31B. Gemma will be better at answering simple questions and doing basic automation, and codegen won't be its strong suit.
tredre3•34m ago
Gemma 4 31B is still not impressive at coding compared to even Qwen 3.5 27B. It's just not its strong suit.

So far Gemma 4 seems excellent at role playing and document analysis, and decent at making agentic decisions.

gigatexal•16m ago
This has been my experience as well, Qwen via Ollama locally has been very very impressive.
kgeist•34m ago
Qwen3.5 comes in various sizes (including 27B), and judging by the posts on HN, /LocalLlama, etc., it seems to be better at logic/reasoning/coding/tool calling compared to Gemma 4, while Gemma 4 is better at creative writing and world knowledge (basically nothing changed from the Qwen3 vs. Gemma3 era).
Mil0dV•20m ago
Does this also apply to Gemma's 26B-A4B vs., say, Qwen's 35B-A3B?

I'm not sure if I can make the 35B-A3B work with my 32GB machine

bossyTeacher•33m ago
>there's a whole set of ethically-justifiable but rule-flagging conversations (loosely categorizable as things like "sensitive", "ethically-borderline-but-productive" or "violating sacred cows") that are now possible with this, and at a level never before possible until now.

Mind giving us a few of the examples that you plan to run in your local LLM? I am curious.

PullJosh•2h ago
This is awesome!

1) I am able to run the model on my iPhone and get good results. Not as good as Gemini in the cloud, but good.

2) I love the “mobile actions” tool calls that allow the LLM to turn on the flashlight, open maps, etc. It would be fun if they added Siri Shortcuts support. I want the personal automation that Apple promised but never delivered.

3) I am so excited for local models to be normalized. I build little apps for teachers and there are stringent privacy laws involved that mean I strongly prefer writing code that runs fully client-side when possible. When I develop apps and websites, I want easy API access to on-device models for free. I know it sort of exists on iOS and Chrome right now, but as far as I’m aware it’s not particularly good yet.

buzzerbetrayed•52m ago
For me the hallucination and gaslighting are like taking a step back in time a couple of years. It even fails the “r’s in strawberry” question. How nostalgic.
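For context, the "r's in strawberry" probe is popular precisely because the ground truth is one line of code, while token-based models see word fragments rather than letters:

```python
# The ground truth behind the classic "r's in strawberry" probe:
# trivial in code, hard for models that tokenize words into chunks.
word = "strawberry"
print(word.count("r"))  # 3
```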

It’s very impressive that this can run locally. And I hope we will continue to be able to run couple-year-old-equivalent models locally going forward.

jeroenhd•2h ago
English version of the page: https://apps.apple.com/us/app/google-ai-edge-gallery/id67496...

Also on Android: https://play.google.com/store/apps/details?id=com.google.ai....

It's a demo app for Google's Edge project: https://ai.google.dev/edge

carbocation•2h ago
It would be very helpful if the chat logs could (optionally) be retained.
TGower•2h ago
These new models are very impressive. There should be a massive speedup coming as well: AI Edge Gallery is running on the GPU, but NPUs in recent high-end processors should be much faster. The A16 chip, for example (MacBook Neo and iPhone 16 series), has 35 TOPS of Neural Engine vs 7 TFLOPS of GPU. Similar story for Qualcomm.
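A naive upper bound on the NPU-vs-GPU gap from the figures in the comment, with the caveat that TOPS usually counts INT8 ops and TFLOPS counts FP16/FP32, so the real gain depends on quantization and memory bandwidth:

```python
# Naive theoretical headroom from moving inference to the NPU,
# using the comment's figures (35 TOPS Neural Engine, 7 TFLOPS GPU).
# Optimistic: TOPS is typically INT8 while TFLOPS is FP16/FP32.
npu_tops = 35.0
gpu_tflops = 7.0
print(f"~{npu_tops / gpu_tflops:.0f}x theoretical headroom")  # ~5x
```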
api•2h ago
That’s nuts actually for such a low power chip. Can’t wait to see the M series version of that.

I’m sure very fast TPUs in desktops and phones are coming.

zozbot234•1h ago
The Apple Silicon in the MacBook Neo is effectively a slimmed-down version of the M4, which is already out and has a very similar NPU (similar TFLOPS rating). It's worth noting, however, that the TFLOPS rating for the Apple Neural Engine is somewhat artificial, since e.g. the "38 TFLOPS" in the M4 ANE are really 19 TFLOPS for FP16-only operation.
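The halving the comment describes, in numbers (assuming the headline figure counts ops at a lower precision than FP16):

```python
# If an NPU's headline figure counts lower-precision ops, the usable
# FP16 rate is roughly half, per the comment's M4 ANE example.
advertised_tflops = 38.0  # headline figure from the comment
fp16_tflops = advertised_tflops / 2
print(fp16_tflops)  # 19.0
```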
janandonly•1h ago
OP here. It is my firm belief that the only realistic use of AI in the future is either locally on-device for almost free, or in the cloud but way more expensive than it is today.

The latter option will only be used for tasks at which humans are more expensive or much slower.

This Gemma 4 model gives me hope for a future Siri (or similar) with iPhone and macOS integration, "Her"-style (as in the movie).

kennywinker•1h ago
Did you really watch “Her” and think this is a future that should happen??

Seriously????

jfreds•1h ago
I don’t think OP’s point has anything to do with AI companions.

The big benefit of moving compute to edge devices is to distribute the inference load on the grid. Powering and cooling phones is a lot easier than powering and cooling a datacenter

sambapa•1h ago
Torment Nexus sounds fun
aninteger•56m ago
Having Scarlett Johansson's voice, or even something less robotic, might not be so bad.
0dayman•1h ago
this is not the first step towards your dream
crazygringo•1h ago
> or in the cloud but way more expensive than it is today.

Why? It's widely understood that the big players are making profit on inference. The only reason they still have losses is because training is so expensive, but you need to do that no matter whether the models are running in the cloud or on your device.

If you think about it, it's always going to be cheaper and more energy-efficient to have dedicated cloud hardware to run models. Running them on your phone, even if possible, is just going to suck up your battery life.

nothinkjustai•1h ago
> It's widely understood that the big players are making profit on inference.

Are they? Or are they just saying that to make their offerings more attractive to investors?

Plus I think most people using agents for coding are on subscriptions, which are definitely not profitable.

Locally running models that are snappy and mostly as capable as current sota models would be a dream. No internet connection required, no payment plans or relying on a third party provider to do your job. No privacy concerns. Etc etc.

zozbot234•1h ago
You can pick models that are snappy, or models that are as capable as SOTA. You don't really get both unless you spend extremely unreasonable amounts of money on what is essentially a datacenter-scale inference platform of your own, meant to service hundreds of users at once. (I don't care how many agent harnesses you spin up at once, you aren't going to get the same utilization as hundreds of concurrent users.)

This assessment might change if local AI frameworks start working seriously on support for tensor-parallel distributed inference, then you might get away with cheaper homelab-class hardware and only mildly unreasonable amounts of money.

zozbot234•1h ago
The big players are plausibly making profits on raw API calls, not subscriptions. These are quite costly compared to third-party inference from open models, but even setting that up is a hassle, and you as an end user aren't getting any subsidy. Running inference locally will make a lot of sense for most light and casual users once the subsidies for subscription access cease.

Also while datacenter-based scaleout of a model over multiple GPUs running large batches is more energy efficient, it ultimately creates a single point of failure you may wish to avoid.

mbesto•1h ago
> It's widely understood that the big players are making profit on inference.

This is most definitely not widely understood. We still don't know yet. There's tons of discussions about people disagreeing on whether it really is profitable. Unless you have proof, don't say "this is widely understood".

huijzer•1h ago
Laptop/desktop could work. Most systems are on the charger most of the time anyway.
jrflowers•43m ago
> It's widely understood that the big players are making profit on inference.

I love the whole “they are making money if you ignore training costs” bit. It is always great to see somebody say something like “if you look at the amount of money that they’re spending it looks bad, but if you look away it looks pretty good” like it’s the money version of a solar eclipse

amelius•1h ago
A local model running on a phone owned and controlled by the vendor is still not really exciting, imho.

It may be physically "local" but not in spirit.

_pdp_•28m ago
If you can run free models on consumer devices, why do you think cloud providers cannot do the same, except better and bundled with a ton of value worth paying for?
dwa3592•1h ago
I think with this Google starts a new race: best local model that runs on phones.
dwa3592•1h ago
I wonder why the cutoff date for 3n-E4B-it is Oct 2023. That's really far in the past.
burnto•1h ago
My iPhone 13 can’t run most of these models. A decent local LLM is one of the few reasons I can imagine actually upgrading earlier than typically necessary.
deckar01•1h ago
It doesn’t render Markdown or LaTeX. The scrolling is unusable during generation. E4B failed to correctly account for convection and conduction when reasoning about the effects of thermal radiation (31B was very good). After 3 questions in a session (with thinking), E4B went off the rails and started emitting nonsense fragments before the stated token limit was hit (unless it isn’t actually checking).
__natty__•1h ago
That's a great project! I just wondered whether Google would have a problem with you using their trademark.
tech234a•1h ago
This is an app published by Google itself
rickdg•1h ago
How do these compare to Apple's Foundation Models, btw?
simonw•1h ago
So much better. Hard to quantify, but even the small Gemma 4 models have that feels-like-ChatGPT magic that Apple's models are lacking.
snarkyturtle•1h ago
AFM had a 4096 token context window and this can be configured to have a 32k+ token context window, for one.
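A sketch of why that context window difference matters on-device: KV cache memory grows linearly with sequence length. All architecture numbers below are illustrative assumptions, not the real configs of Gemma 4 or Apple's Foundation Models:

```python
# Illustrative KV-cache sizing for a small on-device transformer.
# layers/kv_heads/head_dim are assumed values, not any real model's.

def kv_cache_mb(seq_len, layers=32, kv_heads=8, head_dim=128, bytes_per_elem=2):
    # 2x for keys and values; FP16 = 2 bytes per element
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem / 1e6

print(f"4096 ctx:  ~{kv_cache_mb(4096):.0f} MB")
print(f"32768 ctx: ~{kv_cache_mb(32768):.0f} MB")  # 8x larger
```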
karimf•1h ago
This app is cool and it showcases some use cases, but it still undersells what the E2B model can do.

I just made a real-time AI (audio/video in, voice out) on an M3 Pro with Gemma E2B. I posted it on /r/LocalLLaMA a few hours ago and it's gaining some traction [0]. Here's the repo [1]

I'm running it on a Macbook instead of an iPhone, but based on the benchmark here [2], you should be able to run the same thing on an iPhone 17 Pro.

[0] https://www.reddit.com/r/LocalLLaMA/comments/1sda3r6/realtim...

[1] https://github.com/fikrikarim/parlor

[2] https://huggingface.co/litert-community/gemma-4-E2B-it-liter...

nothinkjustai•1h ago
Parlor is so cool, especially since you’re offering it for free. And a great use case for local LLMs.
karimf•1h ago
Thanks! Although, I can't claim any credit for it. I just spent a day gluing together what other people have built. Huge props to the Gemma team for building an amazing model and also an inference engine that's focused on edge devices [0]

[0] https://github.com/google-ai-edge/LiteRT-LM

beeflet•1h ago
Isn't this already possible in a much more open-ended way with PocketPal?

https://github.com/a-ghorbani/pocketpal-ai

https://apps.apple.com/us/app/pocketpal-ai/id6502579498

https://play.google.com/store/apps/details?id=com.pocketpala...

dzhiurgis•1h ago
I recently got to a first practical use of it. I was on a plane, filling in a landing card (what a silly thing these are). I looked up my hotel address using a Qwen model on my iPhone 16 Pro. It was accurate. I was quite impressed.

After some back and forth the chat app started to crash tho, so YMMV.

allpratik•55m ago
Nice! Tried on an iPhone 16 Pro and got 30 TPS from the Gemma-4-E2B-it model.

The phone got considerably hot while inferencing, though. It's quite impressive performance, and I cannot wait to try it myself in one of my personal apps.
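For a feel of what 30 tokens/s of decode speed means in practice (prompt processing time excluded):

```python
# Time to stream a response of a given length at a fixed decode rate.
def decode_seconds(output_tokens: int, tps: float = 30.0) -> float:
    return output_tokens / tps

for n in (60, 300, 1000):
    print(f"{n} tokens: {decode_seconds(n):.1f}s")
# 60 tokens: 2.0s, 300 tokens: 10.0s, 1000 tokens: 33.3s
```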

garff•46m ago
How new of an iPhone model is needed?
XCSme•27m ago
Gemma 4 is great: https://aibenchy.com/compare/google-gemma-4-31b-it-medium/go...

I assume it is the 26B A4B one, if it runs locally?

dhbradshaw•17m ago
My son just started using the 2B on his Android. I mentioned that it was an impressively compact model, and the next thing I knew he had figured out how to use it on his inexpensive 2024 Motorola and was using it to practice reading and writing in foreign languages.

