
CPUs Aren't Dead. Gemma2B Out Scored GPT-3.5 Turbo on Test That Made It Famous

https://seqpu.com/CPUsArentDead/
52•fredmendoza•1h ago

Comments

fredmendoza•1h ago
we found something interesting and wanted to share it with this community.

we wanted to know how google's gemma 4 e2b-it — 2 billion parameters, bfloat16, apache 2.0 — stacks up against gpt-3.5 turbo. not in vibes. on the same test. mt-bench: 80 questions, 160 turns, graded 1-10 — what the field used to grade gpt-3.5 turbo, gpt-4, and every major model of the last three years. we ran gemma through all of it on a cpu. 169-line python wrapper. no fine-tuning, no chain-of-thought, no tool use.

gpt-3.5 turbo scored 7.94. gemma scored ~8.0. 87x fewer parameters, on a cpu — the kind already in your laptop.

but the score isn't what we want to talk about. what's interesting is what we found when we read the tape.

we graded all 160 turns by hand. (when we used ai graders on the coding questions, they scored responses as gpt-4o-level.) the failures aren't random. they're specific, nameable patterns at concrete moments in generation. seven classes.

cleanest example: benjamin buys 5 books at $20, 3 at $30, 2 at $45. total is $280. the model writes "$245" first, then shows its work — 100 + 90 + 90 = 280 — and self-corrects. the math was right. the output token fired before the computation finished. we saw this on three separate math questions — not a fluke, a pattern.

the fix: we gave it a calculator. model writes a python expression, subprocess evaluates it, result comes back deterministic. ~80 lines. arithmetic errors gone. six of seven classes follow the same shape — capability is there, commit flinches, classical tool catches the flinch. z3 for logic, regex for structural drift, ~60 lines each. projected score with guardrails: ~8.2. the seventh is a genuine knowledge gap we documented as a limitation.
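a minimal sketch of that calculator shape, assuming a whitelist-then-subprocess design (the function name and the exact whitelist here are illustrative, not the article's code):

```python
import re
import subprocess
import sys

def eval_arithmetic(expr: str) -> str:
    """Evaluate a model-emitted arithmetic expression in a subprocess.

    Only digits, whitespace, and basic arithmetic operators pass the
    whitelist, so the model cannot smuggle arbitrary code into eval.
    """
    if not re.fullmatch(r"[\d\s+\-*/().%]+", expr):
        raise ValueError(f"refusing to evaluate: {expr!r}")
    result = subprocess.run(
        [sys.executable, "-c", f"print({expr})"],
        capture_output=True, text=True, timeout=5,
    )
    if result.returncode != 0:
        raise ValueError(result.stderr.strip())
    return result.stdout.strip()

# the book-total question from the post: 5 books at $20, 3 at $30, 2 at $45
print(eval_arithmetic("5*20 + 3*30 + 2*45"))  # -> 280
```

the deterministic result replaces the model's committed number, which is why the "output token fired before the computation finished" class disappears: the commit step no longer depends on the model's own arithmetic.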

one model, one benchmark, one weekend. but it points at something underexplored.

this model is natively multimodal — text, images, audio in one set of weights. quantized to Q4_K_M it's 1.3GB. google co-optimized it with arm and qualcomm for mobile silicon. what runs it now:

phones: iphone 14 pro+ (A16), mid-range android 2023+ with 6GB+ ram

tablets: ipads m-series, galaxy tab s8+, pixel tablet — anything 6GB+

single-board: raspberry pi

laptops: anything from the last 5-7 years, 8GB+ ram

edge/cloud: cloudflare containers, $5/month — scales to zero, wakes on request

google says e2b is the foundation for gemini nano 4, already on 140 million android devices. the same model that matched gpt-3.5 turbo. on phones in people's pockets.

think about what that means: a pi in a conference room listening to meetings, extracting action items with sentiment, saving notes locally — no cloud, no data leaving the building. an old thinkpad routing emails. a mini-pc running overnight batch jobs on docs that can't leave the network. a phone doing translation offline.

google designed e2b for edge from the start — per-layer embeddings, hybrid sliding-window/global attention to keep memory low. if a model designed for phones scores higher than turbo on the field's standard benchmark, cpu-first model design is a real direction, not a compromise.

the gpu isn't the enemy. it's a premium tool. what we're questioning is whether it should be the default — because what we observed looks more like a software engineering problem than a compute problem. cs already has years of tools that map onto these failure modes. the models may have just gotten good enough to use them.

the article has everything: every score, every error class with tape examples, every fix, the full benchmark harness with all 80 questions, and the complete telegram bot code. run it yourself, swap in a different model, or just talk to the live bot — raw model, no fixes, warts and all.

we don't know how far this extends beyond mt-bench or whether the "correct reasoning, wrong commit" pattern has a name. we're sharing because we think more people should be looking at it. everything is open. the code is in the article. tear it apart.

ComputerGuru•2m ago
Grading by hand was done fully blinded?

(Also this comment is ai generated so I’m not sure who I’m even asking.)

100ms•39m ago
Tiny model overfit on benchmark published 3 years prior to its training. News at 10
bigyabai•38m ago
But GPT-3.5 was benchmaxxing too.
100ms•38m ago
GPT 3.5 Turbo knowledge cutoff was circa 2021. MT-Bench is from 2023. Not suggesting improvements on small models aren't possible (or forthcoming, the 1.85 bit etc models look exciting), but this almost certainly isn't that.
svnt•34m ago
> The model does not need to be retrained. It needs surgical guardrails at the exact moments where its output layer flinches.

> With those guardrails — a calculator for arithmetic, a logic solver for formal puzzles, a per-requirement verifier for structural constraints, and a handful of regex post-passes — the projected score climbs to ~8.2.

Surgical guardrails? Tools, those are just tools.

polotics•31m ago
"Surgical" is the kind of wordage that LLMs seem to love to output. I have had to put in my .md file the explicit statement that the word "surgical" should only be used when referring to an actual operation at the block...
fredmendoza•26m ago
you're right, they are tools. that's kind of the point. PAL is a subprocess that runs a python expression. Z3 is a constraint solver. regex is regex. calling them "surgical" is just about when they fire, not what they are. the model generates correctly 90%+ of the time. the guardrails only trigger on the 7 specific patterns we found in the tape. to be clear, the ~8.0 score is the raw model with zero augmentation. no tools, no tricks. just the naive wrapper. the guardrail projections are documented separately. all the code is in the article for anyone who wants to review it.
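to make "regex is regex" concrete, a toy sketch of what a structural post-pass can look like (the constraints checked here are illustrative, not the article's actual verifier):

```python
import re

def check_structure(text: str, required_sections: list[str], max_words: int) -> list[str]:
    """Return the list of violated structural constraints for a response.

    An empty list means the response passes; otherwise the caller can
    re-prompt or repair the output before committing it.
    """
    violations = []
    for heading in required_sections:
        # require the heading at the start of a line, case-insensitive
        if not re.search(rf"(?mi)^{re.escape(heading)}\b", text):
            violations.append(f"missing section: {heading}")
    if len(text.split()) > max_words:
        violations.append(f"over {max_words} words")
    return violations

reply = "Pros\n- fast, runs on cpu\nCons\n- small context window"
print(check_structure(reply, ["Pros", "Cons"], max_words=50))  # -> []
```

the point is the trigger condition: a check like this costs nothing per turn and only fires when the output has already drifted from the prompt's stated requirements.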
mrtesthah•12m ago
The core issue is that the LLM is using rhetoric to try to convince or persuade you. That's what you need to tell it not to do.
operatingthetan•10m ago
>It needs surgical guardrails at the exact moments where its output layer flinches.

This article is very clearly shitty LLM output. Abstract noun and verb combos are the tipoff.

It's actually quite horrible; it repeats lines from paragraph to paragraph.

roschdal•34m ago
I yearn for the days when I can program on my PC with a programming llm running on the CPU locally.
fredmendoza•22m ago
you're honestly not that far off. the coding block on this model scored 8.44 with zero help. it caught a None-init TypeError on a code review question that most people would miss. one question asked for O(n) and it just went ahead and shipped O(log(min(m,n))) on its own. it's not copilot but it's free, it's offline, and it runs on whatever you have. there's a 30-line chat.py in the article you can copy and run tonight.
trgn•18m ago
we need sqlite for llms
philipkglass•3m ago
I think that we're getting there. I put together a workstation in early 2023 with a single 4090 GPU. I did it to run things like BERT and YOLO image classifiers. At that point the only "open weights" LLM was the original Llama from Meta, and even that was open-weights only because it was leaked. It was a very weak model by today's standards.

With the same hardware I now get genuine utility out of models like Qwen 3.5 for document processing, extraction, and categorization. I don't use local models for coding since commercial ones are so much stronger, but if I had to go back to local models for coding too they would be more useful than anything commercially available as recently as 4 years ago.

luxuryballs•6m ago
You can do it on a laptop today, faster with gpu/npu, it’s not going to one shot something complex but you can def pump out models/functions/services, scaffold projects, write bash/powershell scripts in seconds.
yazaddaruvala•4m ago
I’ve been using Google AI Edge Gallery on my M1 MacBook with Gemma4B with very good results for random python scripts.

Unfortunately still need to copy paste the code into a file+terminal command. Which is annoying but works.

FergusArgyll•30m ago
Poster's comment is dead. It may be llm-assisted but should prob be vouched for anyway as long as the story isn't flagged.
fredmendoza•18m ago
appreciate the vouch but come on lol. we ran 80 questions, graded 160 turns by hand, documented 7 error classes, open sourced all the code, and put a live bot up for people to test. to write this post up took me hours. everyone is a critic lol.
drivebyhooting•26m ago
That was prolix and repetitive. I wish the purported simple fixes were shown on the page.
stavros•2m ago
I wish the page were just the prompt they used to generate the article. I like LLMs as much as the next person, but we don't really need two intermediate LLM layers (expand and summarise) between your brain and mine.
fb03•10m ago
Can you run the same tests on Qwen3.5:9b? that's also a model that runs very well locally, and I believe it's even stronger than Gemma2B
MarsIronPI•6m ago
It's almost like Qwen 3.5 9B is 4 times larger.
MarsIronPI•7m ago
> A weekend of focused work, Claude as pair programmer, no ML degree required

It's not caught up if you're using Claude as your pair programmer instead of the model you're touting. Gemma 4 may be equivalent to GPT-3.5 Turbo, but GPT-3.5 isn't SOTA anymore. Opus 4.5 and 4.6 are in a different league.

semiquaver•6m ago
This really shows the power of distillation. One thing I find amusing: download the Google Edge Gallery app and one of the chat models, then go into airplane mode and ask it about where it’s deployed. gemma-4-e2b-it is quite confident that it is deployed in a Google datacenter and that deploying it on a phone is completely impossible. The larger 4B model is much subtler: it’s skeptical about the claim but does seem to accept it and sound genuinely impressed and excited after a few turns.

I don’t know how any AI company can be worth trillions when you can fit a model only 12-18 months behind the frontier on your dang phone. Thought will be too cheap to meter in 10 years.

declan_roberts•5m ago
I'm very surprised at the quality of the new Gemma 4 models. On my 32 gig Mac mini I can be very productive with it. Not close to replacing paid AI by a long shot, but if I had to tighten the belt I could do it as someone who already knows how to program.
j-bos•52s ago
What's your setup/usecase? Enhanced intellisense?
ComputerGuru•4m ago
Seems to be llm written article and the tooling around the model is undeniably influenced by knowledge of the tests.

In any case, GPT-3.5 isn't a good benchmark for most serious uses and was considered to be pretty stupid, though I understand that isn't the point of the article.

Startups Are Context Arbitrages

https://www.alessiofanelli.com/posts/startups-are-context-arbitrages
1•FanaHOVA•47s ago•0 comments

Coq theorem prover is now called Rocq

https://rocq-prover.org/about
1•rwmj•47s ago•0 comments

Printing real headline news on the Commodore 64 with The Newsroom's Wire Service

http://oldvcr.blogspot.com/2023/03/printing-real-headline-news-on.html
1•superultra•1m ago•0 comments

Space Force looks at moving "significant number" of launches from ULA to SpaceX

https://arstechnica.com/space/2026/04/space-force-looks-at-moving-significant-number-of-launches-...
2•Bender•1m ago•0 comments

Opting out of cookies no guarantee

https://globalprivacyaudit.org/2026/california
1•HelloUsername•2m ago•0 comments

How Accurate Are Google's A.I. Overviews?

https://www.nytimes.com/2026/04/07/technology/google-ai-overviews-accuracy.html
1•bookofjoe•3m ago•1 comments

Lowdefy v5: The Config Webstack

https://lowdefy.com/articles/lowdefy-5-whats-new/
1•gervwyk•3m ago•0 comments

I made Agentation for vanilla JavaScript

https://github.com/mearnest-dev/agentation-vanilla
1•mearnest•4m ago•1 comments

Project Glasswing Has a Blind Spot. It's You

https://quodeq.ai/blog/glasswing-blind-spot/
3•vikDPG•4m ago•0 comments

Users lose $9.5M to fake Ledger wallet app on the Apple App Store

https://www.web3isgoinggreat.com/?id=fake-ledger-app
1•CharlesW•5m ago•0 comments

Jane Street Signs $6B AI Cloud Agreement with CoreWeave

https://www.coreweave.com/news/jane-street-signs-6-billion-ai-cloud-agreement-with-coreweave
1•moelf•5m ago•0 comments

Linux 7.1 Is a Big Win for Intel Panther Lake with Fred Now Enabled by Default

https://www.phoronix.com/news/Linux-7.1-Enabled-Intel-FRED
1•mikece•7m ago•0 comments

Keyword Scout

https://keywordscout.app
1•DailyGeo•7m ago•0 comments

ChatGPT, Is This Real?

https://arxiv.org/abs/2604.09316
2•runningmike•9m ago•0 comments

Show HN: EmbedIQ – Claude Code Compliance Config for HIPAA/PCI-DSS/SOC2

https://github.com/asq-sheriff/embediq
1•asqpl•10m ago•0 comments

We Built Hanker in 14 Days with Claude

https://hanker.app/blog/we-built-hanker-in-14-days-with-claude-heres-the-slightly-unhinged-techni...
1•whatsupdog•11m ago•0 comments

Fiverr Denies Report of Data Leak

https://www.pymnts.com/cybersecurity/2026/fiverr-denies-report-of-data-leak/
1•shooker435•12m ago•1 comments

AI papers published in 2026 worth reading

https://www.chapterpal.com/curriculum/a0/papers-published-in-2026-worth-reading
1•roody_wurlitzer•14m ago•1 comments

Claude Cowork found me a flat to rent in London in just 5 days

https://old.reddit.com/r/ClaudeAI/comments/1smay7l/claude_cowork_found_me_a_flat_to_rent_in_london/
1•mikepapadim•14m ago•1 comments

Project Maven Put A.I. Into the Kill Chain

https://www.newyorker.com/books/under-review/how-project-maven-put-ai-into-the-kill-chain
1•littlexsparkee•15m ago•0 comments

How China is wooing Paraguay's political class away from longtime ally Taiwan

https://www.japantimes.co.jp/news/2026/03/14/asia-pacific/politics/china-wooing-paraguay/
1•PaulHoule•16m ago•0 comments

The Courage to Stop

https://zeldman.com/2026/04/15/the-courage-to-stop/
1•speckx•16m ago•0 comments

Anthropic's rise is giving some OpenAI investors second thoughts

https://techcrunch.com/2026/04/14/anthropics-rise-is-giving-some-openai-investors-second-thoughts/
1•Brajeshwar•17m ago•0 comments

Study of the cosmos proves we still can't explain how the universe is expanding

https://www.livescience.com/space/somethings-missing-most-thorough-ever-study-of-the-cosmos-prove...
2•geox•18m ago•0 comments

AAUP does not want you to share your syllabus

https://www.aaupnc.org/projects/guidance-for-syllabi
1•apwheele•18m ago•0 comments

Show HN: Horizontally Scale Localhost

https://coasts.dev/blog/introducing-remote-coasts
1•jsunderland323•19m ago•0 comments

Before he wrote AI 2027, he predicted the world in 2026. How did he do?

https://asteriskmag.substack.com/p/before-he-wrote-ai-2027-he-predicted
2•gmays•20m ago•0 comments

Shoe brand Allbirds says it will become an AI company, sending shares soaring

https://www.sfchronicle.com/tech/article/allbirds-stock-ai-pivot-22208030.php?link_source=ta_blue...
3•jaredwiener•20m ago•0 comments

Generating a Color Spectrum for an Image

https://amandahinton.com/blog/generating-a-color-spectrum-for-an-image
1•evakhoury•20m ago•0 comments

Show HN: The Simpsons Hit and Run Running in the Browser (WASM/WebGL)

https://shar-wasm.cjoseph.workers.dev/?skipmovie
1•calebj0seph•21m ago•1 comments