In SimpleBench, gpt-oss (120B) flopped hard, so it doesn't appear particularly good at logical puzzles either.
So presumably, this comes down to...
- training technique or data
- dimension
- a small number of large experts vs. a large number of small experts
Edit: found this analysis, it's on the HN frontpage right now
> this thing is clearly trained via RL to think and solve tasks for specific reasoning benchmarks. nothing else.
“What is the word that starts with S, ends with E, and contains A? → SAEA”
Then when I said that’s not a word and you gave me the answer already, no fun, it said
“I do not have access to confirm that word.”
"I need to find a word that starts with S, ends with E, and contains A.
Let me think through this step by step:
Starts with S
Ends with E
Contains A somewhere in the middle
One word that fits this pattern is "SAGE" - but that only has 4 letters and would be S-A-G-E, which doesn't work because it doesn't contain A in the middle. Wait, let me reconsider. The word "SAGE" actually works if I read it correctly:
Starts with S
Ends with E
Contains A
Actually, "SAGE" is S-A-G-E, so it starts with S and ends with E, but it contains A in the second position.Let me think of other possibilities:
"SQUARE" - starts with S, ends with E, contains A (in the middle)
"SALE" - starts with S, ends with E, contains A
"SAME" - starts with S, ends with E, contains A
Wait, "SAME" is S-A-M-E - that works! It starts with S, ends with E, and contains A.The word is SAME. "
But, do reasoning models usually do this poorly?
It comes up with a valid solution, SAGE, then disqualifies it for incomprehensible reasons.
Then it discovers that SAGE works if it "reads it carefully." But then it seems to disqualify it (?), or at least goes on to list other words for some reason.
Then it comes up with SAME, a word… with exactly the same shape as SAGE, just swapped out the irrelevant letter.
What is going on here? Is it programmed to constantly second-guess itself to make it better at finding weaknesses in its answers to harder riddles? But since it doesn’t know how to accept a good answer, it seems like it is just rolling the dice and then stopping at a random point.
I guess it is technically right, but the logic is a total mess.
E.g. if the model can generate multiple candidate solutions that are all equally likely (or unlikely) to be correct, it doesn't matter whether you stop at the first one or keep going until a random later one. But if the model can pick the correct solution from multiple candidates better than choosing uniformly at random, generating more candidates becomes an advantage, even if it sometimes results in discarding a correct solution in favor of another one.
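A toy Monte Carlo makes this concrete (hypothetical numbers: each candidate is correct with probability 0.3, and the "skilled" selector recognizes a correct candidate half the time one exists):

```python
import random

def trial(n_candidates: int, p_correct: float, selector_skill: float) -> bool:
    # Each candidate answer is independently correct with probability p_correct.
    candidates = [random.random() < p_correct for _ in range(n_candidates)]
    # A better-than-chance selector recognizes a correct candidate with
    # probability selector_skill (when one exists); otherwise it falls back
    # to picking uniformly at random among all candidates.
    if any(candidates) and random.random() < selector_skill:
        return True
    return random.choice(candidates)

def accuracy(n: int, p_correct: float, selector_skill: float, trials: int = 100_000) -> float:
    return sum(trial(n, p_correct, selector_skill) for _ in range(trials)) / trials

for n in (1, 2, 4, 8):
    uniform = accuracy(n, 0.3, selector_skill=0.0)  # no ability to judge its own answers
    skilled = accuracy(n, 0.3, selector_skill=0.5)  # better than chance at judging
    print(f"{n} candidates: uniform ~{uniform:.2f}, better-than-chance ~{skilled:.2f}")
```

With uniform selection, accuracy stays flat at ~0.30 no matter how many candidates are generated; with even a weakly better-than-chance selector, it climbs with every extra candidate.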
EDIT: I now have also questioned the smaller gpt-oss-20b (free) 10 times via OpenRouter (default settings, provider was AtlasCloud) and the answers were: sage, sane, sane, space, sane, sane, sane, sane, space, sane.
You are either very unlucky, your configuration is suboptimal (weird system prompt perhaps?) or there is some bug in whichever system you are using for inference.
For example, I was using the DeepSeek web UI and getting decent, on-point answers, but it simply does not have the latest data.
So, while DeepSeek R1 might be a better model than Grok 3 or even Grok 4, not having access to "Twitter data" basically puts it behind.
The same is the case with OpenAI: if OpenAI has access to fresh data from GitHub, it can help with bugfixes that Claude/Gemini 2.5 Pro can't.
A model can be smarter, but if it does not have the data to base its inference on, it's useless.
sqrt(120*5) ~= 24
GPT-OSS 120B is effectively a 24B parameter model with the speed of a much smaller model
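For reference, the rule of thumb being applied is the geometric mean of total and active parameters; a quick sketch (using 5.1B active, the figure cited further down), treating it as a heuristic rather than a measured equivalence:

```python
import math

def effective_dense_size(total_b: float, active_b: float) -> float:
    # Rule-of-thumb geometric mean of total and active parameters;
    # a rough heuristic, not a measured equivalence.
    return math.sqrt(total_b * active_b)

print(effective_dense_size(120, 5.1))  # gpt-oss-120B   -> ~24.7
print(effective_dense_size(30, 3))     # Qwen3-30B-A3B  -> ~9.5
```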
Which model, inference software and hardware are you running it on?
The 30B-A3B variant flies on any GPU.
gpt-oss 120B - 37 tok/sec (with CPU offloading, doesn't fit in the GPU entirely)
Qwen3 32B - 65 tok/sec
Qwen3 30B-A3B - 150 tok/sec
(all at 4-bit)
In practice the fairest comparison would be to a dense ~8B model. Qwen Coder 30B A3B is a good sparse comparison point as well.
They compared it to GPT OSS 120B, which activates 5.1B parameters per token. Given the size of the model it's more than fair to compare it to Qwen3 32B.
Only if 120B fits entirely in the GPU. Otherwise, for me, with a consumer GPU that only has 32 GB VRAM, gpt-oss 120B is actually 2 times slower than Qwen3 32B (37 tok/sec vs. 65 tok/sec)
In the case of gpt-oss 120B that would mean sqrt(5*120) ≈ 24B.
That's actually in line with what I had (unscientifically) expected. Claude Sonnet 4 seems to agree:
> The most accurate approach for your specific 120B MoE (5.1B active) would be to test it empirically against dense models in the 10-30B range.
1) Performance-constrained, like an NVIDIA Spark with 128 GB or an AGX with 64 GB.
2) Memory-constrained, like consumer GPUs.
In the first case MoE is a clear win: the models fit and run faster. In the second case dense models will produce better results, and if the performance in tokens/sec is acceptable, they are the better choice.
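A back-of-envelope sketch of that trade-off, assuming (roughly) that memory footprint scales with total parameters while per-token compute and bandwidth scale with active parameters:

```python
# Rough approximation: memory footprint tracks *total* parameters,
# per-token compute/bandwidth tracks *active* parameters.
moe   = {"name": "gpt-oss-120B", "total_b": 120, "active_b": 5.1}
dense = {"name": "Qwen3-32B",    "total_b": 32,  "active_b": 32}

mem_ratio  = moe["total_b"] / dense["total_b"]    # ~3.8x more memory needed
work_ratio = moe["active_b"] / dense["active_b"]  # ~0.16x the work per token

print(f"{moe['name']} needs ~{mem_ratio:.1f}x the memory of {dense['name']}")
print(f"but does only ~{work_ratio:.2f}x the work per generated token")
```

If the MoE fits entirely in VRAM, its much smaller per-token work wins; once it spills into system RAM, the memory side dominates, which matches the 37 vs. 65 tok/sec numbers reported elsewhere in the thread.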
When people talk about sparse or dense models, are they sparse or dense matrices in the conventional numerical linear algebra sense? (Something like a CSR matrix?)
Gemini Pro 2.5 with the diff-fenced edit format rarely fails, so I don't see this Qwen3 hype unless I am using the wrong edit format. Can anyone tell me which edit format will work better with Qwen3?
If I give it really simple, straight forward tasks it works quite nice though.
So I just use Qwen3. Fast and great output. If for some reason I don't get what I need, I might use search engines or Perplexity.
I have a 10GB 3080 and Ryzen 3600x with 32gb of RAM.
Qwen3-coder is amazing. Best I used so far.
Maybe ollama has some defaults it applies to models? I start testing models at 0 temp and tweak from there depending how they behave.
diff is failing for me; do you guys use whole?
I’d like to know how far the frontier models are from the local for agentic coding.
Asking because I'm looking for a good model that fits in 12GB VRAM.
This is contrary to what I've seen in a large ML shop, where architectural tuning was king.
I use the gpt-oss and Qwen3 models a lot (smaller models locally using Ollama and LM Studio) and commercial APIs for the full-size models.
For local model use, I get very good results with gpt-oss when I "over prompt," that is, when I specify a larger amount of context information than I usually do. Qwen3 is simply awesome.
Until about three years ago, I had always understood neural network models (starting in the 1980s), GANs, recurrent networks, LSTMs, etc. well enough to write implementations. I really miss the feeling that I could develop at least simpler LLMs on my own. I am slowly working through Sebastian Raschka's excellent book https://www.manning.com/books/build-a-large-language-model-f... but I will probably never finish it (to be honest).
Tencent's hunyuan-turbos, another hybrid, is currently ranked at 22. https://arxiv.org/abs/2505.15431
Wait, is this true? That seems like a wild statement to make, relatively unsubstantiated?
tldr; I'll save you a lot of time trying things out for yourself. If you are on a >=32 GB Mac download LMStudio and then the `qwen3-coder-30b-a3b-instruct-mlx@5bit` model. It uses ~20 GB of RAM so a 32GB machine is plenty. Set it up with opencode [1] and you're off to the races! It has great tool calling ability. The tool calling ability of gpt-oss doesn't even come close in my observations.
…I struggle to comprehend how an odd quantization like 5-bit, which doesn't align with 8-bit boundaries, would not slow down inference: on one hand the hardware doing the multiplications doesn't support vectors of 5-bit values and needs repacking to 8-bit before multiplication, and on the other hand the weights can't be bulk-repacked to 8-bit once and for all in advance (otherwise they wouldn't fit in RAM, and in that case one would just use an 8-bit quantization anyway).
It would require quite a lot of instructions per multiplication (way more than for 4-bit quantization, where the alignment match simplifies things) to repack the 5-bit values into 8-bit vectors on the fly. So I kind of wonder how much (percentage-wise) that impacts inference performance.
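A rough NumPy sketch of the bit-twiddling involved (an illustration only, not llama.cpp's actual kernels): 4-bit values sit two to a byte and come out with a single mask and shift, while 5-bit values straddle byte boundaries and need per-value bit reassembly.

```python
import numpy as np

def unpack_4bit(packed: np.ndarray) -> np.ndarray:
    # Two 4-bit values per byte: one mask and one shift, no cross-byte reads.
    lo = packed & 0x0F
    hi = packed >> 4
    return np.stack([lo, hi], axis=-1).reshape(-1)

def unpack_5bit(packed: np.ndarray, n_values: int) -> np.ndarray:
    # 5-bit fields straddle byte boundaries, so each value needs bits from up
    # to two adjacent bytes: noticeably more shifting/masking per weight.
    bits = np.unpackbits(packed)[: n_values * 5].reshape(-1, 5)
    weights = np.array([16, 8, 4, 2, 1], dtype=np.uint16)
    return (bits.astype(np.uint16) * weights).sum(axis=1)
```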
Who says it doesn’t :)?
At least in my tests there is a big penalty to using an “odd” bit stride.
Testing 4-bit quantization vs 5-bit in llama.cpp, I see quite a bit more than the "naively expected" 25% slowdown from 4 to 5 bits.
It was able to create a sample page, tried starting a server, recognised a leftover server was running, killed it (and forced a prompt for my permission), retried, and found out its IP for me to open in the browser.
This isn't a demo anymore. That's actually very useful help for interns/juniors already.
Chrome latest on Ubuntu.
The MXFP4 quantization detail might be the sleeper feature here. Getting 20B running on a 16 GB consumer card, or 120B on a single H100/MI300X without multi-GPU orchestration headaches, could be a bigger enabler for indie devs and researchers than raw benchmark deltas. A lot of experimentation never happens simply because the friction of getting the model loaded is too high.
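As a rough sanity check, assuming MXFP4's layout of 4-bit elements with one shared 8-bit scale per 32-element block (about 4.25 bits per weight) and ignoring KV cache, activations, and any unquantized layers:

```python
def mxfp4_weight_gb(params_b: float, block_size: int = 32, scale_bits: int = 8) -> float:
    # 4-bit elements plus one shared scale per block -> ~4.25 bits per weight.
    bits_per_weight = 4 + scale_bits / block_size
    return params_b * bits_per_weight / 8  # params in billions -> GB of weights

print(mxfp4_weight_gb(20))   # gpt-oss-20b (nominal 20B)   -> ~10.6 GB, fits a 16 GB card
print(mxfp4_weight_gb(120))  # gpt-oss-120b (nominal 120B) -> ~64 GB, fits a single 80 GB H100/MI300X
```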
One open question I’m curious about: given gpt-oss’s design bias toward reasoning (and away from encyclopedic recall), will we start seeing a formal split in open-weight model development—specialized “reasoners” that rely on tool use for facts, and “knowledge bases” tuned for retrieval-heavy work? That separation could change how we architect systems that wrap these models.
I would say this isn't exclusive to the smaller OSS models, but rather a trait of OpenAI's models altogether now.
This becomes especially apparent with the introduction of GPT-5 in ChatGPT. Their focus on routing your request to different modes and searching the web automatically (relying on agentic workflows in the background) is probably key to the overall quality of the output.
So far, it's quite easy to get their OSS models to follow instructions reliably. Qwen models have been pretty decent at this for some time now too.
I think if we give it another generation or two, we'll be at the point of having competent enough models to start running more advanced agentic workflows on modest hardware. We're almost there now, but not quite yet.
They basically cloned Qwen3 on that, before adding the few tweaks you mention afterwards.
Oh, come on! GPT4 was rumoured to be an MoE well before Qwen even started releasing models. oAI didn't have to "clone" anything.
Second, I don't claim OpenAI have to clone anything, and I have no reason to believe that their proprietary models are copying other people's ones. But for this particular open weight models, they clearly have an incentive to use exactly the same architectural base as another actor's, in order to avoid leaking too much information about their own secret sauce.
And finally, though GPT-4 was a MoE it was most likely what TFA calls “early MoE” with a few very big experts, not many small ones.
My bet's on the former winning outright. It's very hard to outrun a good search engine, LLMs are inherently lossy so internal recall will never be perfect, and if you don't have to spend your parameter budget encoding information then you get to either spend that budget on being a much better reasoner, or you shrink the model and make it cheaper to run for the same capability. The trade-off is a more complex architecture, but that's happening anyway.
The code circled as "4 x emb_dim" doesn't seem to apply a 4x multiplier anywhere. Actually, the layer definitions of fc1 and fc2 in the SwiGLU variant appear to be identical to the code in the regular feed forward block. What is making the two layers in the second code snippet different sizes to fc1 in the first?
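For comparison, a minimal sketch of how the two patterns typically look (an illustration, not the article's exact code): in the classic feed-forward block the 4x expansion is written explicitly, while in the SwiGLU variant fc1 and fc2 look identical because both project emb_dim to a gate/value hidden_dim, and the expansion factor is carried by whatever hidden_dim the model config specifies.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedForward(nn.Module):
    # Classic transformer FFN: expand to 4 * emb_dim, then project back down.
    def __init__(self, emb_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(emb_dim, 4 * emb_dim)
        self.fc2 = nn.Linear(4 * emb_dim, emb_dim)

    def forward(self, x):
        return self.fc2(F.gelu(self.fc1(x)))

class SwiGLUFeedForward(nn.Module):
    # SwiGLU variant: fc1 and fc2 both map emb_dim -> hidden_dim (gate and
    # value); the expansion factor lives in hidden_dim, which is passed in
    # from the config rather than hard-coded as 4 * emb_dim.
    def __init__(self, emb_dim: int, hidden_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(emb_dim, hidden_dim)
        self.fc2 = nn.Linear(emb_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, emb_dim)

    def forward(self, x):
        return self.fc3(F.silu(self.fc1(x)) * self.fc2(x))
```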