The E2B/E4B models also support voice input, which is rare.
The sizes are E2B and E4B (following the gemma3n architecture, with a focus on mobile), plus a 26B-A4B MoE and a 31B dense model. The mobile ones take audio input (so I can see some local, privacy-focused translation apps), and the 31B seems strong at agentic tasks. The 26B-A4B sits somewhere in between: a similar VRAM footprint to the 31B, but much faster inference.
I agree it's misleading for them to hyper-focus on one metric, but public benchmarks are far from the only thing that matters. I place more weight on LMArena scores and private benchmarks.
We are at least one year, and at most two, away from open models surpassing closed models for everyday tasks that can be done locally to save on token spend.
Until they pass what closed models today can do.
By that time, closed models will be 4 years ahead.
Google would not be giving this away if they believed local open models could win.
Google is doing this to slow down Anthropic, OpenAI, and the Chinese, knowing that in the fullness of time they can be the leader. They'll stop being so generous once the dust settles.
Google, at least, is likely interested in such a scenario, given their broad smartphone market. And if their local Gemma/Gemini-nano LLMs perform better with Gemini in the cloud, that would naturally be a significant advantage.
At least, as of this post
It's a good balance between accuracy and memory, though in my experience it's slower than older model architectures such as LLaVA. Just be aware that Qwen-VL tends to be a bit verbose [2], and you can't really control that reliably with token limits: the output just cuts off abruptly. You can ask it to be more concise, but it's hit or miss.
What I often end up doing (and I admit it's a bit ridiculous) is letting Qwen-VL generate its full detailed output, then passing that to a different LLM to summarize.
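For anyone curious, that two-stage pipeline is simple to wire up. Here's a minimal sketch, assuming both models sit behind OpenAI-compatible `/v1/chat/completions` endpoints (as llama-server provides); the endpoint URL and model name are placeholders, not real defaults.

```python
# Two-stage pipeline: verbose Qwen-VL caption -> second LLM for summarization.
# Endpoint URL and model name below are placeholder assumptions.
import json
import urllib.request


def build_summary_prompt(detailed_caption: str, max_sentences: int = 3) -> list:
    """Wrap the verbose caption in a summarization request for the second LLM."""
    return [
        {"role": "system",
         "content": f"Condense the image description to at most "
                    f"{max_sentences} sentences. Keep only concrete visual facts."},
        {"role": "user", "content": detailed_caption},
    ]


def chat(endpoint: str, model: str, messages: list) -> str:
    """Minimal OpenAI-compatible chat completion call, no SDK required."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Stage 1 (not shown): Qwen-VL produces a long `detailed` caption.
    detailed = "..."  # verbose Qwen-VL output goes here
    summary = chat("http://localhost:8080/v1/chat/completions",
                   "summarizer", build_summary_prompt(detailed))
    print(summary)
```

The summarizing model can be much smaller than the captioner, since condensing text is an easier task than describing an image.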
| Model | MMLUP | GPQA | LCB | ELO | TAU2 | MMMLU | HLE-n | HLE-t |
|----------------|-------|-------|-------|------|-------|-------|-------|-------|
| G4 31B | 85.2% | 84.3% | 80.0% | 2150 | 76.9% | 88.4% | 19.5% | 26.5% |
| G4 26B A4B | 82.6% | 82.3% | 77.1% | 1718 | 68.2% | 86.3% | 8.7% | 17.2% |
| G4 E4B         | 69.4% | 58.6% | 52.0% | 940  | 42.2% | 76.6% | --    | --    |
| G4 E2B         | 60.0% | 43.4% | 44.0% | 633  | 24.5% | 67.4% | --    | --    |
| G3 27B no-T    | 67.6% | 42.4% | 29.1% | 110  | 16.2% | 70.7% | --    | --    |
| GPT-5-mini | 83.7% | 82.8% | 80.5% | 2160 | 69.8% | 86.2% | 19.4% | 35.8% |
| GPT-OSS-120B | 80.8% | 80.1% | 82.7% | 2157 | -- | 78.2% | 14.9% | 19.0% |
| Q3-235B-A22B | 84.4% | 81.1% | 75.1% | 2146 | 58.5% | 83.4% | 18.2% | -- |
| Q3.5-122B-A10B | 86.7% | 86.6% | 78.9% | 2100 | 79.5% | 86.7% | 25.3% | 47.5% |
| Q3.5-27B | 86.1% | 85.5% | 80.7% | 1899 | 79.0% | 85.9% | 24.3% | 48.5% |
| Q3.5-35B-A3B | 85.3% | 84.2% | 74.6% | 2028 | 81.2% | 85.2% | 22.4% | 47.4% |
MMLUP: MMLU-Pro
GPQA: GPQA Diamond
LCB: LiveCodeBench v6
ELO: Codeforces ELO
TAU2: TAU2-Bench
MMMLU: MMMLU
HLE-n: Humanity's Last Exam (no tools / CoT)
HLE-t: Humanity's Last Exam (with search / tool)
no-T: no thinking

(Comparing Q3.5-27B to G4 26B A4B and G4 31B specifically)
I'd assume Q3.5-35B-A3B would perform worse than the dense Q3.5-27B model, but the cards you pasted above somehow show that for ELO and TAU2 it's the other way around...
Very impressed by the Unsloth team releasing the GGUFs so quickly. If this goes like Qwen 3.5, I'll wait a few more days in case they push a major update.
Overall, great news if it's at parity with or slightly better than the Qwen 3.5 open weights; I hope to see both of these evolve in the sub-32GB-RAM space. Disappointed that Mistral/Ministral are so far behind these US and Chinese models.
Because those are two different, completely independent Elos... the one you linked is for LMArena, not Codeforces.
Same here. I can't wait until mlx-community releases MLX optimized versions of these models as well, but happily running the GGUFs in the meantime!
One more thing about Google is that they have everything that others do not:
1. Huge data: audio, video, geospatial. 2. Tons of expertise ("Attention Is All You Need" was born there). 3. Libraries that they wrote. 4. Their own data centers and cloud. 5. Most of all, their own TPU hardware that no one else has.
Therefore, once the bubble bursts, the only player left standing tall above the rest would be Google.
Others have merely borrowed data, money, and hardware, and they would surely run out of resources.
Maybe the model is good but the product is so shitty that I can't perceive its virtues while using it. I would characterize it as pretty much unusable (including as the "Google Assistant" on my phone).
It's extremely frustrating every way that I've used it but it seems like Gemini and Gemma get nothing but praise here.
Really eager to test this version with all the extra capabilities provided.
https://unsloth.ai/docs/models/gemma-4 > Gemma 4 GGUFs > "Use this model" > llama.cpp > llama-server -hf unsloth/gemma-4-31B-it-GGUF:Q8_0
If you already have llama.cpp you might need to update it to support Gemma 4.
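If you're building from source, updating and relaunching looks roughly like this; a sketch assuming a git checkout of llama.cpp and the quant tag from the guide above (adjust paths and quant to taste):

```shell
# Pull the latest llama.cpp (needed for Gemma 4 support) and rebuild.
git -C llama.cpp pull
cmake -B llama.cpp/build llama.cpp
cmake --build llama.cpp/build --config Release -j

# -hf downloads the GGUF from Hugging Face on first run and caches it locally.
llama.cpp/build/bin/llama-server -hf unsloth/gemma-4-31B-it-GGUF:Q8_0
```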
I am only a casual AI chatbot user; I use whatever gives me the most generous free limits and versions.
Although I'm not sure whether Gemma will be available even in AI Studio - they took the last one down after people got it to say/do questionable stuff. It's very much intended for self-hosting.
What's the business case for releasing Gemma and not just focusing on Gemini + cloud only?
With the caveat that I'm not on the Pixel team and I'm not building _all_ the models used on it, it's evident there are many models supporting the Android experience, from keyboard autocomplete to image editing.
https://store.google.com/us/magazine/magic-editor?hl=en-US&p...
Where can I download the full model? I have 128GB Mac Studio
The decision is always a mix of how good we can make the models from a technical standpoint and how good they need to be to make all of you super excited to use them. And it's a bit of a challenge in what is an ever-changing ecosystem.
I'm personally curious: is there a certain parameter size you're looking for?
Was it too good or not good enough? (blink twice if you can't answer lol)
Isn't that dictated more by the competition you're facing from Llama and Qwen?
Also, as I understand it, the 26B is the MoE and the 31B is dense - why is the larger one dense and the smaller one MoE?
https://developers.googleblog.com/en/gemma-3-quantized-aware...
Having 4 bit QAT versions of the larger models would be great for people who only have 16 or 24 GB of VRAM.
Thank you for the release.
-Chris Lattner (yes, affiliated with Modular :-)
https://gist.github.com/simonw/12ae4711288637a722fd6bd4b4b56...
The gemma-4-31b model is completely broken for me - it just spits out "---\n" no matter what prompt I feed it.
# with uvx
uvx litert-lm run \
--from-huggingface-repo=litert-community/gemma-4-E2B-it-litert-lm \
  gemma-4-E2B-it.litertlm

First message:
https://i.postimg.cc/yNZzmGMM/Screenshot-2026-04-03-at-12-44...
Not sure if I'm doing something wrong?
This more or less reflects my experience with most local models over the last couple years (although admittedly most aren't anywhere near this bad). People keep saying they're useful and yet I can't get them to be consistently useful at all.
I had a similar experience running Qwen 3.5 35B-A3B directly through llama.cpp. It would massively overthink every request. Somehow in OpenCode it just worked.
I think it comes down to temperature and such (see Daniel's post), but I haven't messed with it enough to be sure.
danielhanchen•1h ago
We made some quants at https://huggingface.co/collections/unsloth/gemma-4 for folks to run them - they work really well!
Guide for those interested: https://unsloth.ai/docs/models/gemma-4
Also note: use temperature = 1.0, top_p = 0.95, top_k = 64, and the EOS token is "<turn|>". "<|channel>thought\n" is also used for the thinking trace!
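Those sampling settings map directly onto standard llama.cpp flags, so a sketch of a server launch with them applied as defaults (model tag taken from the quants above) would look like:

```shell
# Serve the Gemma 4 GGUF with the recommended sampling settings as
# server-wide defaults; clients can still override them per request.
llama-server -hf unsloth/gemma-4-31B-it-GGUF:Q8_0 \
  --temp 1.0 \
  --top-p 0.95 \
  --top-k 64
```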
l2dy•1h ago
danielhanchen•1h ago
Imustaskforhelp•1h ago
I am not sure if someone might have asked this already to you, but I have a question (out of curiosity) as to which open source model you find best and also, which AI training team (Qwen/Gemini/Kimi/GLM) has cooperated the most with the Unsloth team and is friendly to work with from such perspective?
danielhanchen•1h ago
Tbh Gemma-4 haha - it's sooooo good!!!
For teams - Google, haha, definitely hands down; then Qwen; Meta through PyTorch and Llama; and Mistral - tbh all labs are great!
Imustaskforhelp•1h ago
danielhanchen•1h ago
evilelectron•58m ago
I set up a pipeline for inference with OCR, full-text search, embedding, and summarization of land records dating back to the 1800s, all powered by the GGUFs you generate and llama.cpp. People are so excited that they can now search the records in multiple languages that a one-minute wait to process a document seems like nothing. Thank you!
danielhanchen•57m ago
Oh nice! That sounds fantastic! I hope Gemma-4 will make it even better! The small ones 2B and 4B are shockingly good haha!
zaat•26m ago
You have an answer on your page for "Should I pick 26B-A4B or 31B?", but can you clarify: assuming 24GB VRAM, should I pick the smaller model at full precision or the larger model at 4-bit?
danielhanchen•7m ago
I presume the 26B-A4B is somewhat faster since only 4B parameters are activated - the 31B is quite a large dense model, so more accurate!