
Ask HN: Non AI-obsessed tech forums

16•nanocat•4h ago•10 comments

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

43•UmYeahNo•1d ago•26 comments

Ask HN: Ideas for small ways to make the world a better place

8•jlmcgraw•6h ago•16 comments

Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?

42•Invictus0•22h ago•11 comments

AI Regex Scientist: A self-improving regex solver

5•PranoyP•8h ago•1 comment

Ask HN: Who wants to be hired? (February 2026)

139•whoishiring•4d ago•510 comments

Ask HN: Who is hiring? (February 2026)

312•whoishiring•4d ago•511 comments

Ask HN: Any International Job Boards for International Workers?

2•15charslong•4h ago•0 comments

Ask HN: Why LLM providers sell access instead of consulting services?

4•pera•14h ago•13 comments

Tell HN: Another round of Zendesk email spam

104•Philpax•2d ago•54 comments

Ask HN: Is Connecting via SSH Risky?

19•atrevbot•1d ago•37 comments

Ask HN: What is the most complicated Algorithm you came up with yourself?

3•meffmadd•16h ago•7 comments

Ask HN: Has your whole engineering team gone big into AI coding? How's it going?

17•jchung•1d ago•12 comments

Ask HN: How does ChatGPT decide which websites to recommend?

5•nworley•1d ago•11 comments

Ask HN: Is it just me or are most businesses insane?

7•justenough•1d ago•5 comments

Ask HN: Mem0 stores memories, but doesn't learn user patterns

9•fliellerjulian•2d ago•6 comments

Ask HN: Anyone Seeing YT ads related to chats on ChatGPT?

2•guhsnamih•1d ago•4 comments

Ask HN: Does global decoupling from the USA signal comeback of the desktop app?

5•wewewedxfgdf•1d ago•2 comments

Ask HN: Is there anyone here who still uses slide rules?

123•blenderob•3d ago•122 comments

Kernighan on Programming

170•chrisjj•4d ago•61 comments

We built a serverless GPU inference platform with predictable latency

5•QubridAI•1d ago•1 comment

Ask HN: How Did You Validate?

4•haute_cuisine•1d ago•4 comments

Ask HN: Cheap laptop for Linux without GUI (for writing)

15•locusofself•3d ago•16 comments

Ask HN: Have you been fired because of AI?

17•s-stude•3d ago•15 comments

Test management tools for automation heavy teams

2•Divyakurian•1d ago•2 comments

Ask HN: Does a good "read it later" app exist?

7•buchanae•3d ago•18 comments

Ask HN: OpenClaw users, what is your token spend?

14•8cvor6j844qw_d6•4d ago•6 comments

Ask HN: Anyone have a "sovereign" solution for phone calls?

11•kldg•3d ago•1 comment

Ask HN: Has anybody moved their local community off of Facebook groups?

23•madsohm•4d ago•17 comments

How do you deal with SEO nowadays?

5•jackota•1d ago•8 comments

Ask HN: What happened to self-hosted models?

3•curiousaboutml•3w ago
Hi HN, sorry for using a burner account.

It seems to me that up until the beginning of last year, we saw a couple of new "open" model release announcements almost every week. They'd set a new state of the art for what an enthusiast could run on their laptop or home server.

Meta, DeepSeek, Mistral, Qwen, and even Google were publishing new models left and right. There were new formats, quantizations, inference engines, and, most importantly, a lot of discourse and excitement around them.

Quietly and suddenly, this changed. Since the release of gpt-oss (August 2025), the discourse has been heavily dominated by hosted models. I don't think I've seen any mention of Ollama in any discussion that reached HN's front page in the last 6 months.

What gives? Is this a proxy signal that we've hit a barrier in LLM efficiency?
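For anyone unfamiliar, "self-hosted" here means pulling the weights onto your own machine and querying them over a local API instead of a provider's endpoint. A rough sketch of what that looks like against Ollama's HTTP API (the model name is just an example; use whatever you have pulled locally):

```python
# Rough sketch: query a locally hosted model through Ollama's HTTP API.
# Assumes Ollama is running on its default port and a model has already
# been pulled; the model name below is only an example.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",   # any locally pulled model
        "prompt": "In one sentence, why do people self-host LLMs?",
        "stream": False,       # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```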

Comments

al_borland•3w ago
My wildly uneducated guess is that they are getting to the point where they need to figure out how to profit off all this investment, and releasing self-hosted open-source models isn’t going to help them do that.
curiousaboutml•3w ago
Possibly, but it's not just the release of new models. It seems the community itself has lost interest in self-hosted models.
nacozarina•3w ago
Investors need everyone to avoid self-hosted models and pay premium subscriptions for large centralized models, else they will never earn the profits they want. Self-hosted models spoil their revenue forecasts.
electroglyph•3w ago
There are still tons of models being released. Even some non-Qwen ones!
bityard•3w ago
HN only covers a very small slice of interesting things that happen in tech every day. If it's your only source of tech news and information, you are missing out on a LOT.

There are plenty of self-hosted models being released all the time; they just don't make it to HN. For that, you need to find a community that is passionate about testing and tinkering with self-hosted models. A very popular one is /r/localllama on Reddit, but there are a few others scattered around.

doublerabbit•3w ago
Could you recommend other sites? I use HN exclusively but would be keen on decent tech news sites without having to sift through the sludge of Google.

The Register, Slashdot, and Hackaday I know of.

gnosis67•3w ago
Ollama has changed. Early versions were raw, and then they were optimized (I’m on a laptop with 64GB RAM), and then they fell to shit. Optimized for someone else’s home rig I suppose.

And my old favorite models broke, so I have to link different versions. nous-hermes2-mixtral, I miss your sage banter.

Now everything runs with excessive lag.

softwaredoug•3w ago
One thing that happened was that providers got better at hosting smaller and cheaper models. So you could self-host, or just get your work done with GPT-5 nano.
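In practice the two options look the same from the calling code's side: with an OpenAI-compatible client, switching between a hosted small model and a local server is mostly a matter of changing the base URL. A minimal sketch, where the model names and the local endpoint are assumptions/examples:

```python
# Minimal sketch: the same OpenAI-compatible chat call can target either a
# hosted small model or a local server, so switching is mostly configuration.
# The model names and the local base_url below are assumptions/examples.
from openai import OpenAI

USE_LOCAL = False

if USE_LOCAL:
    # e.g. an Ollama / llama.cpp / vLLM server exposing an OpenAI-compatible API
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")
    model = "llama3.2"      # whatever model the local server is serving
else:
    client = OpenAI()       # reads OPENAI_API_KEY from the environment
    model = "gpt-5-nano"    # example name for a small hosted model

reply = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Summarize this ticket in one line: ..."}],
)
print(reply.choices[0].message.content)
```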
potsandpans•3w ago
They're still going. I just bought a 5090 for myself this Christmas to do more interesting things.

I mostly use them for game assets.

TRELLIS.2 is very cool. I've managed to put together an SDXL -> TRELLIS -> UniRig pipeline to generate 3D characters with Mixamo skeletons that's working pretty well (rough sketch after the repo links below).

On the LLM front, DeepSeek and Qwen are still cranking away. Qwen3 A22B Instruct, IMHO, does a better job than Gemini in some cases with OCR and translation of handwritten documents.

The problem with these frontier open weight models is that running them locally is not exactly tenable. You either have to get a cloud GPU instance, or go through a provider.

- https://github.com/microsoft/TRELLIS.2
- https://github.com/VAST-AI-Research/UniRig
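The orchestration is roughly the following. The SDXL step uses the real diffusers API; image_to_mesh() and auto_rig() are hypothetical wrappers standing in for the TRELLIS.2 and UniRig stages, which are normally driven through the scripts in the repos above:

```python
# Rough sketch of an SDXL -> TRELLIS -> UniRig asset pipeline.
# The SDXL call uses the diffusers library; image_to_mesh() and auto_rig()
# are hypothetical wrappers standing in for the TRELLIS.2 and UniRig steps,
# which are normally run via the scripts in their respective repos.
import torch
from diffusers import StableDiffusionXLPipeline

def generate_concept(prompt: str, out_path: str = "concept.png") -> str:
    # Render a single concept image of the character with SDXL.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(prompt).images[0]
    image.save(out_path)
    return out_path

def image_to_mesh(image_path: str) -> str:
    # Hypothetical wrapper: in practice, run TRELLIS.2's image-to-3D scripts
    # on the concept image and export a mesh (e.g. a .glb file).
    raise NotImplementedError

def auto_rig(mesh_path: str) -> str:
    # Hypothetical wrapper: in practice, run UniRig to predict a skeleton and
    # skinning weights, then retarget to a Mixamo-compatible rig.
    raise NotImplementedError

if __name__ == "__main__":
    concept = generate_concept("full-body concept art of a sci-fi ranger, T-pose")
    mesh = image_to_mesh(concept)
    rigged = auto_rig(mesh)
    print("rigged character at:", rigged)
```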

jaggs•3w ago
There are a lot of local models being released every week. You really need to log into /r/localllama to stay up to date.
lioeters•3w ago
A recent local model I tried is Ministral 3 from a month ago. https://mistral.ai/news/mistral-3

    Vision: Enables the model to analyze images and provide insights based on visual content, in addition to text.
    Multilingual: Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, Arabic.
    ...
    Agentic: Offers best-in-class agentic capabilities with native function calling and JSON outputting.
    Edge-Optimized: Delivers best-in-class performance at a small scale, deployable anywhere.
    Apache 2.0 License: Open-source license allowing usage and modification for both commercial and non-commercial purposes.
    Large Context Window: Supports a 256k context window.
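To make the "Agentic" bullet concrete: served locally behind an OpenAI-compatible endpoint (vLLM, llama.cpp's server, etc.), native function calling looks roughly like the sketch below. The base URL, model name, and tool definition are assumptions for illustration.

```python
# Illustrative sketch of function calling against a locally served model via an
# OpenAI-compatible endpoint. The base_url, model name, and tool definition are
# assumptions; adjust for whatever server (vLLM, llama.cpp, etc.) you run.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                      # hypothetical example tool
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="mistral-3",                              # placeholder local model name
    messages=[{"role": "user", "content": "What's the weather in Lyon?"}],
    tools=tools,
)

# If the model decides to call the tool, the structured call shows up here.
print(resp.choices[0].message.tool_calls)
```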
journal•3w ago
no one cares about second best