frontpage.

Made with ♥ by @iamnishanth


Ask HN: What are you working on? (February 2026)

238•david927•1d ago•813 comments

Ask HN: Do provisional patents matter for early-stage startups?

17•gdad•6h ago•13 comments

OrthoRay – A native, lightweight DICOM viewer written in Rust/wgpu by a surgeon

3•DrMeric•3h ago•3 comments

I Built a Browser Flight Simulator Using Three.js and CesiumJS

7•dimartarmizi•4h ago•1 comment

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

54•UmYeahNo•3d ago•35 comments

Ask HN: Open Models are 9 months behind SOTA, how far behind are Local Models?

9•myk-e•11h ago•10 comments

What Is Genspark?

4•powera•18h ago•0 comments

What do you use for your customer facing analytics?

3•arbiternoir•18h ago•4 comments

Ask HN: What made VLIW a good fit for DSPs compared to GPUs?

6•rishabhaiover•1d ago•3 comments

Ask HN: Ideas for small ways to make the world a better place

34•jlmcgraw•3d ago•38 comments

Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?

50•Invictus0•3d ago•12 comments

The $5.5T Paradox: Structural displacement in the GPU/AI infra labor demand?

2•y2236li•1d ago•1 comment

Ask HN: Non AI-obsessed tech forums

45•nanocat•2d ago•34 comments

Tell HN: Another round of Zendesk email spam

105•Philpax•5d ago•54 comments

Ask HN: Has your whole engineering team gone big into AI coding? How's it going?

22•jchung•4d ago•17 comments

The string " +#+#+#+#+#+ " breaks Codex 5.3

8•kachapopopow•1d ago•5 comments

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

3•Chance-Device•2d ago•2 comments

AI Regex Scientist: A self-improving regex solver

7•PranoyP•3d ago•2 comments

Ask HN: Mem0 stores memories, but doesn't learn user patterns

9•fliellerjulian•5d ago•6 comments

Ask HN: Is it just me or are most businesses insane?

14•justenough•4d ago•7 comments

LLMs are powerful, but enterprises are deterministic by nature

6•prateekdalal•2d ago•14 comments

Ask HN: How does ChatGPT decide which websites to recommend?

5•nworley•3d ago•12 comments

Ask HN: Non-profit, volunteer-run org needs CRM. Is Odoo Community a good sol.?

4•netfortius•2d ago•4 comments

Ask HN: Does a good "read it later" app exist?

9•buchanae•5d ago•20 comments

Ask HN: Cheap laptop for Linux without GUI (for writing)

17•locusofself•5d ago•16 comments



Ask HN: Open Models are 9 months behind SOTA, how far behind are Local Models?

9•myk-e•11h ago

Comments

softwaredoug•9h ago
A local model is a smaller open model, so as a base assumption I'd expect it to be 9 months behind a small (i.e. nano) closed model.
myk-e•4h ago
Yes: a small open model that can run on today's hardware, compared against a historic SOTA closed model with everything included. What time difference do we think that is?
magicalhippo•8h ago
A local model is an open model you run locally, so I'm not entirely sure the distinction in the question makes sense.

That said, if you're talking about models you can actually use on a single regular computer that costs less than a new home, the current crop of open models are very capable but also have noticeable limitations.

Small models will always have limitations in terms of capability and especially knowledge. Improved training data and training regimen can squeeze more out of the same number of weights, but there is a limit.

So with that in mind, I think such a question only makes sense when talking about specific tasks, like creative writing, data extraction from text, answering knowledge questions, refactoring code, writing greenfield code, etc.

In some of these areas the smaller open models are very good and not that far behind. In other areas they are lagging much more, due to their inherent limitations.

myk-e•4h ago
Yes, I meant ordinary hardware you'd find at home, like a current MacBook Air or an equivalent Windows desktop. There must be a point in time when early SOTA LLMs were at a level comparable to the open models that can run on ordinary hardware today. But it's more like years than months. My rough guess would be 2-3 years. Which would still be amazing if we could get Opus 4.5 quality on an ordinary computer within 2-3 years.
karmakaze•2h ago
I don't know if you'd consider this ordinary, but a single Mac Studio M5 Ultra 512GB (or even 256GB) V/RAM seems pretty sweet.
myk-e•18m ago
I love the spec, but that's like 5x or 10x a MacBook Air. I mean really ordinary: a personal computer in the broad sense, not dedicated LLM kit.
hasperdi•8h ago
Well, it depends on the hardware you have. If you have hardware locally that can run the best open models, then your local models are as capable as the open models.

That said, open models are not far behind SOTA; the gap is less than 9 months.

If what you're asking about is models you can run on retail GPUs, then they're a couple of years behind. They're "hobby" grade.

myk-e•4h ago
Thanks, yes, I meant ordinary retail PCs, not specialized GPUs. At some point in history, SOTA closed models were at a level comparable to today's open models that can run on ordinary hardware.
hasperdi•3h ago
Retail PCs will probably never catch up to even the open‑weight models (the full, non‑quantized versions). Unless there’s a breakthrough, they just don’t have enough parameters to hold all the information we expect SOTA models to contain.

That’s the conventional view. I think there’s another angle: train a local model to act as an information agent. It could “realize” that, yeah, it’s a small model with limited knowledge, but it knows how to fetch the right data. Then you hook it up to a database and let it do the heavy lifting.

myk-e•16m ago
Maybe the industry adapts too and the future PC is AI-ready out of the box, because people will demand that.