Used their recommended temperature, top_k, top_p, and other sampling settings
Overall it still seems extremely good for its size and I wouldn't expect anything below 30B to behave like that. I mean, it flies with 100 tok/sec even on a 1650 :D
"What is the meaning of the following excerpt?" from
```
excerpt of text or code from some application or site
```
Just doesn't seem to work at a usable level. Coding questions get code that runs, but it almost always misses so many things that finding out what it missed and fixing it takes a lot more time than handwriting the code.
>Overall it still seems extremely good for its size and I wouldn't expect anything below 30B to behave like that. I mean, it flies with 100 tok/sec even on a 1650 :D
For its size, absolutely. I've not seen 1.5B models that even form sentences right most of the time, so this is miles ahead of most small models, just not at the levels the benchmarks would have you believe.
Overall, if you're memory constrained, it's probably still worth fiddling around with it if you can get it to work. Speed-wise, if you have the memory, a 5090 can get ~50-100 tok/s for a single query with 32B-AWQ, and way more if you run something parallel like open-webui.
I gave it two tasks: "Create a new and original story in 500 words" and "Write a Python console game". Both resulted in an endless loop, with the model repeating itself.
To be honest: given that a 1B Granite Nano model has only minor problems (word count) with such tasks, and given that VibeThinker is announced as a programming model, it's disappointing to see a 1.5B model fail multiple times.
And it fails at one of the simplest coding tasks where a Granite model at nearly half the size has no problems.
It's probably an important discovery but seemingly only usable in an academic context.
So during the final training stage they try to ensure the model doesn't get the right answer every time, but only about 50% of the time, so as to avoid killing all variability, which is very sensible. Then they compute a measure of how far each question's success rate is from that 50% mark, take the negative exponential of that measure, and scale the advantage by it.
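If I'm reading it right, the mechanics work out to something like the sketch below. This is my own minimal Python reconstruction, not the paper's exact formula: it assumes binary per-rollout rewards, a plain group-mean baseline for the advantage, and a made-up scale constant `lam`, so the actual normalization may differ.
```
import numpy as np

def entropy_weighted_advantages(rewards_per_question, lam=1.0):
    """Sketch of the described reweighting (assumptions noted above).

    rewards_per_question: one list/array of 0/1 rollout rewards per question.
    lam: hypothetical scale for the exponential, not taken from the paper.
    """
    out = []
    for rewards in rewards_per_question:
        r = np.asarray(rewards, dtype=float)
        p_hat = r.mean()              # empirical success rate on this question
        adv = r - p_hat               # group-relative advantage (unnormalized)
        dist = abs(p_hat - 0.5)       # distance from the 50% (max-variability) point
        weight = np.exp(-lam * dist)  # near-50% questions keep full weight
        out.append(weight * adv)
    return out

# A question solved half the time keeps its full advantage signal;
# one solved every time is down-weighted (and is all zeros here anyway).
print(entropy_weighted_advantages([[1, 0, 1, 0], [1, 1, 1, 1]]))
```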
So a question matters in proportion to the variability of the answers. Isn't this more curriculum learning stuff than actually suppressing things that don't vary enough?
Basically focusing on questions that are still hard, instead of trying to push problems it can already usually solve to a 99.99% success rate?
Also very reasonable, but this isn't how they describe it. Instead, from their description I would think they're sort of forcing entropy to be high somehow.
I think the way I'd have titled it would be something like "Dynamic curricula to preserve model entropy".
Alifatisk•2mo ago
Is this hosted online somewhere so I can try it out?
viraptor•2mo ago
Balinares•2mo ago
On math questions, though, besides a marked tendency towards rambling thinking, it's just plain implausibly good for a 1.5B model. That's probably just rote learning. Otherwise this might well be a breakthrough.