
Refactoring Is for Humans

https://refactoringin.net/blog/refactoring-is-for-humans
1•darsen•1m ago•0 comments

Federal Government to restrict use of Anthropic

https://www.cnn.com/2026/02/27/tech/anthropic-pentagon-deadline
2•twism•2m ago•0 comments

GLP-1 and Prior Major Adverse Limb Events in Patients with Diabetes

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2844425
1•hnburnsy•2m ago•0 comments

Show HN: Agoragentic – Agent-to-Agent Marketplace for LangChain, CrewAI and MCP

https://github.com/rhein1/agoragentic-integrations
1•bourbeau•2m ago•0 comments

Show HN: WhenItHappens–family resource after traumatic death

https://whenithappenshelp.com/
1•Fratua•2m ago•0 comments

Trump directs federal agencies to cease use of Anthropic

https://www.reuters.com/world/us/trump-says-he-is-directing-federal-agencies-cease-use-anthropic-...
2•patrickmay•2m ago•1 comment

Trump Will End Government Use of Anthropic's AI Models

https://www.wsj.com/tech/ai/trump-will-end-government-use-of-anthropics-ai-models-ff3550d9
2•moloch•3m ago•0 comments

The Death of Spotify: Why Streaming Is Minutes Away from Being Obsolete

https://joelgouveia.substack.com/p/the-death-of-spotify-why-streaming
1•baal80spam•4m ago•0 comments

The Death of the Subconscious and the Birth of the Subconsciousness

https://3amto5amclub-wuaqr.wordpress.com/2026/02/25/the-death-of-the-subconscious-and-the-birth-o...
1•STANKAYE•5m ago•0 comments

Show HN: Gace AI – A zero-config platform to build and host AI plugins for free

https://gace.dev/?mode=developer
2•bstrama•5m ago•0 comments

USA to cut Anthropic from government contracts in six months

https://www.ft.com/content/1aeff07f-6221-4577-b19c-887bb654c585
2•intunderflow•6m ago•1 comment

Heart attack deaths rose between 2011 and 2022 among adults younger than age 55

https://newsroom.heart.org/news/releases-20260219
2•brandonb•9m ago•0 comments

Ask HN: What's the best engineering interview process?

1•ylhert•10m ago•0 comments

Relaxation trend: customers can meditate or snooze in open or closed casket

https://www.thetimes.com/world/asia/article/japan-coffin-meditation-relaxation-tokyo-wfsd0n2vz
1•woldemariam•10m ago•0 comments

Massachusetts State Police are on a drone surveillance shopping spree

https://binj.news/2026/02/26/massachusetts-state-police-are-on-a-drone-surveillance-shopping-spree/
1•ilamont•12m ago•0 comments

Trump Responds to Anthropic

https://twitter.com/PeteHegseth/status/2027487514395832410
5•Finbarr•13m ago•0 comments

LLM-Based Evolution as a Universal Optimizer

https://imbue.com/research/2026-02-27-darwinian-evolver/
3•miohtama•16m ago•0 comments

Trump Orders US Agencies to Drop Anthropic After Pentagon Feud

https://www.bloomberg.com/news/articles/2026-02-27/trump-orders-us-government-to-drop-anthropic-a...
16•ZeroCool2u•17m ago•2 comments

Netflix Declines to Raise Offer for Warner Bros

https://ir.netflix.net/investor-news-and-events/financial-releases/press-release-details/2026/Net...
1•7777777phil•21m ago•0 comments

Show HN: I Built a $1 Escalating Internet Billboard – Called Space

https://www.spacefilled.com/
2•clarkage•22m ago•1 comment

Show HN: I vibe coded a DAW for the terminal. how'd I do?

https://github.com/mohsenil85/imbolc
3•lmohseni•23m ago•0 comments

How to Run a One Trillion-Parameter LLM Locally: AMD Ryzen AI Max+ Cluster Guide

https://www.amd.com/en/developer/resources/technical-articles/2026/how-to-run-a-one-trillion-para...
1•guerby•24m ago•0 comments

It's Time for LLM Connection Strings

https://danlevy.net/llm-connection-strings/
1•iamwil•24m ago•0 comments

A War Foretold

https://www.theguardian.com/world/ng-interactive/2026/feb/20/a-war-foretold-cia-mi6-putin-ukraine...
5•fabatka•27m ago•0 comments

Recontextualizing Famous Quotes for Brand Slogan Generation

https://arxiv.org/abs/2602.06049
1•PaulHoule•28m ago•0 comments

Poland Plans Social Media Ban for Kids in Challenge to US Tech

https://www.bloomberg.com/news/articles/2026-02-27/poland-plans-social-media-ban-for-kids-in-chal...
2•1vuio0pswjnm7•28m ago•0 comments

Show HN: A pure Python HTTP Library built on free-threaded Python

https://github.com/grandimam/barq
1•grandimam•28m ago•0 comments

I Was Tired of Juggling My Agents, So I Hired a Middle Manager

https://www.sawyerhood.com/blog/hired-a-middle-manager
1•sawyerjhood•28m ago•0 comments

The Problem with P(doom)

https://blog.cosmos-institute.org/p/not-even-wrong
1•alexicon_•28m ago•0 comments

Commit on Firefox repo: When an agent commits, don't add itself as author

https://github.com/mozilla-firefox/firefox/commit/71cc24b6a400dbd434e4df37087960d94b764791
1•thesdev•29m ago•0 comments

Ask HN: Open Models are 9 months behind SOTA, how far behind are Local Models?

11•myk-e•2w ago

Comments

softwaredoug•2w ago
A local model is a smaller open model, so I'd expect it to be 9 months behind a small (i.e. nano-sized) closed model as a base assumption.
myk-e•2w ago
Yes: a small open model that can run on today's hardware, compared against a historic SOTA closed model with everything included. What time difference do we think that is?
magicalhippo•2w ago
A local model is an open model you run locally, so I'm not entirely sure the distinction in the question makes sense.

That said, if you're talking about models you can actually use on a single regular computer that costs less than a new home, the current crop of open models is very capable but also has noticeable limitations.

Small models will always have limitations in terms of capability and especially knowledge. Improved training data and training regimens can squeeze more out of the same number of weights, but there is a limit.

So with that in mind, I think such a question only makes sense when talking about specific tasks, like creative writing, data extraction from text, answering knowledge questions, refactoring code, writing greenfield code, etc.

In some of these areas the smaller open models are very good and not that far behind. In other areas they are lagging much more, due to their inherent limitations.
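
To make that concrete, here is a minimal sketch (mine, not the commenter's) of what a per-task comparison could look like in Python: score a local and a frontier model on the same small task suites and compare the gap task by task rather than overall. The ask_local/ask_frontier callables and the test cases are hypothetical stand-ins for whatever backends and task suites you actually use, and substring match stands in for real grading.

    # Per-task comparison harness (sketch). A "model" is any callable
    # taking a prompt string and returning an answer string.
    TASKS = {
        "data_extraction": [
            ("Extract the ISO date from: 'Invoice issued 2026-02-27.'", "2026-02-27"),
        ],
        "knowledge": [
            ("What is the capital of Australia?", "Canberra"),
        ],
    }

    def score(ask, tasks=TASKS):
        """Return per-task accuracy for a model callable ask(prompt) -> str."""
        return {
            task: sum(expected.lower() in ask(prompt).lower()
                      for prompt, expected in cases) / len(cases)
            for task, cases in tasks.items()
        }

    # Usage (hypothetical backends):
    # print(score(ask_local))     # e.g. {"data_extraction": 1.0, "knowledge": 0.5}
    # print(score(ask_frontier))  # then compare the gap per task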

myk-e•2w ago
Yes, I meant ordinary hardware you find at home, like a current MacBook Air or an equivalent Windows desktop. There must be a point in time when the then-SOTA closed LLMs were at a level comparable to the open models that run on ordinary hardware today. But it's more like years rather than months; my rough guess would be 2-3 years. Which would still be amazing: Opus 4.5 quality on an ordinary computer within 2-3 years.
karmakaze•2w ago
I don't know if you'd consider this ordinary, but a single Mac Studio M5 Ultra with 512GB (or even 256GB) of unified RAM/VRAM seems pretty sweet.
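
For scale, a rough back-of-envelope sketch (my arithmetic, not from the thread): memory for the weights alone is roughly parameter count times bytes per weight, which is why 256-512GB of unified memory is the entry point for the very largest open models while small ones fit on a MacBook Air.

    def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
        """Approximate memory for the weights alone (ignores KV cache and runtime overhead)."""
        return params_billion * bits_per_weight / 8

    print(weight_memory_gb(1000, 4))  # 500.0 -> a ~1T-param model at 4-bit just fits in 512GB
    print(weight_memory_gb(7, 4))     # 3.5   -> a 7B model at 4-bit is MacBook Air territory
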
myk-e•2w ago
I love the spec, but it's like 5x or 10x a MacBook Air. I mean really ordinary: a personal computer in the broad sense, not dedicated LLM kit.
hasperdi•2w ago
Well, it depends on the hardware you have. If you have hardware locally that can run the best open models, then your local models are as capable as the open models.

That said, open models are not far behind SOTA; the gap is less than 9 months.

If what you're asking about is models you can run on retail GPUs, then they're a couple of years behind. They're "hobby" grade.

myk-e•2w ago
Thanks, yes, I meant even ordinary retail PCs, not specialized GPUs. At some point in history, the SOTA closed models were at a level comparable to today's open models that can run on ordinary hardware.
hasperdi•2w ago
Retail PCs will probably never catch up to even the open-weight models (the full, non-quantized versions). Unless there's a breakthrough, the models that fit on them just don't have enough parameters to hold all the information we expect SOTA models to contain.

That’s the conventional view. I think there’s another angle: train a local model to act as an information agent. It could “realize” that, yeah, it’s a small model with limited knowledge, but it knows how to fetch the right data. Then you hook it up to a database and let it do the heavy lifting.
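
A minimal sketch of that small-model-as-information-agent pattern, assuming a hypothetical generate() wrapper around whatever local model you run; only sqlite3 is from the standard library, and the facts(topic, body) table is invented for the example.

    import sqlite3

    def generate(prompt: str) -> str:
        """Hypothetical stand-in for a local model call (llama.cpp, Ollama, etc.)."""
        raise NotImplementedError

    def answer(question: str, db: sqlite3.Connection) -> str:
        # 1. The small model extracts a lookup key instead of trusting its own recall.
        topic = generate(f"One keyword to look up for: {question}\nKeyword:").strip()
        # 2. The heavy knowledge comes from the database, not from the weights.
        rows = db.execute("SELECT body FROM facts WHERE topic LIKE ?",
                          (f"%{topic}%",)).fetchall()
        context = "\n".join(body for (body,) in rows)
        # 3. Answer grounded only in the fetched context.
        return generate(f"Using only this context:\n{context}\n\nAnswer: {question}")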

myk-e•2w ago
Maybe the industry adapts too, and the future PC is AI-ready out of the box, because people will demand that.
segmondy•2w ago
Local models are not behind. There are many specialized local models on huggingface that can do things that none of the closed/commercial models can do. The only way to get that edge is to run locally. When I say many, I mean in the thousands.
myk-e•2w ago
Yes, fair point. I was trying to apply the same comparison we currently make between closed weights and open weights, and their time gap, to what is possible with ordinary equipment: is there a similar time gap there?