It seems to me that up until early last year, we saw a couple of new "open" model release announcements almost every week, each setting a new state of the art for what an enthusiast could run on a laptop or home server.
Meta, DeepSeek, Mistral, Qwen, and even Google were publishing new models left and right. There were new formats, quantizations, inference engines, and, most importantly, a lot of discourse and excitement around them.
Quietly and suddenly, this changed. Since the release of gpt-oss (August 2025), the discourse has been heavily dominated by hosted models. I don't think I've seen Ollama mentioned in any discussion that reached HN's front page in the last six months.
What gives? Is this a proxy signal that we've hit a barrier in LLM efficiency?