While machine learning is not my field, I've tried fine-tuning Mistral 7B (following their official guide and toolset) and the results did not satisfy me. There were a few very specific questions from the dataset that, no matter how much I fine-tuned and tweaked the process, it could not answer with correct information.
A mix of vector search + keyword search is still better at building the right context for a question than expecting the model to learn all the information.
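A minimal sketch of that hybrid retrieval idea, assuming sentence-transformers for the dense side and rank_bm25 for the keyword side; the model name and the 50/50 score weighting are illustrative placeholders, not anything from an actual setup:

# Hybrid retrieval sketch: combine dense (vector) and sparse (keyword) scores.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = [
    "SmolLM3 is a 3B parameter model with a hybrid reasoning mode.",
    "Fine-tuning changes model weights; retrieval injects facts at query time.",
    "llama.cpp runs GGUF-quantized models on CPUs and GPUs.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = embedder.encode(docs, convert_to_tensor=True)
bm25 = BM25Okapi([d.lower().split() for d in docs])

def retrieve(query: str, k: int = 2):
    # Dense scores: cosine similarity between query and document embeddings.
    dense = util.cos_sim(embedder.encode(query, convert_to_tensor=True), doc_emb)[0]
    # Sparse scores: BM25 over whitespace-tokenized text.
    sparse = bm25.get_scores(query.lower().split())
    # Naive fusion: divide each score list by its max, then average the two.
    dmax = max(float(dense.max()), 1e-6)
    smax = max(float(sparse.max()), 1e-6)
    fused = [0.5 * float(d) / dmax + 0.5 * float(s) / smax for d, s in zip(dense, sparse)]
    return [doc for _, doc in sorted(zip(fused, docs), reverse=True)[:k]]

print(retrieve("how do I run a GGUF model?"))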
I used the pretraining-style dataset approach. Maybe building synthetic questions and answers around the dataset yields better results, but I didn't have time to experiment with that approach.
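If anyone wants to try the synthetic Q&A route, here's a rough sketch against an OpenAI-compatible endpoint (e.g. a local llama.cpp or vLLM server); the base URL, model name, and prompt are assumptions for illustration only:

# Sketch: generate synthetic Q&A pairs from document chunks for fine-tuning data.
import json
from openai import OpenAI

# Any OpenAI-compatible server works here (llama.cpp server, vLLM, etc.).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

chunks = [
    "The warranty covers manufacturing defects for 24 months from purchase.",
    "Returns must be initiated within 30 days and include the original receipt.",
]

pairs = []
for chunk in chunks:
    resp = client.chat.completions.create(
        model="local-model",  # placeholder model name
        messages=[
            {"role": "system", "content": "Write exactly 3 question/answer pairs about the given text, as a JSON list of {\"question\": ..., \"answer\": ...} objects."},
            {"role": "user", "content": chunk},
        ],
    )
    pairs.extend(json.loads(resp.choices[0].message.content))

# The pairs can then be formatted into a chat dataset for fine-tuning.
print(json.dumps(pairs, indent=2))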
I also tried SmolVLM 256M and 500M; they load faster and you can embed them in the app's assets. They work if you know what you're doing.
Just keep in mind that smaller models don't perform as well because of their limited parameter counts.
Also, on Android you can't ship files larger than 2 GB due to Java compression issues, so you need to download models separately. You also can't load the model directly from the download folder; you have to copy it into the app's own folder first. Because the downloaded file and the app-private copy exist at the same time, a Gemma 3N 2B model that's 3.14 GB needs at least 7 GB of free space on the user's phone.
Looks like ballpark a million dollars of GPU time if you want to train one up for yourself (384 H100s for 24 days).
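Rough arithmetic behind that ballpark, assuming the 384-GPU / 24-day figure and an on-demand H100 rate somewhere around $4–5 per GPU-hour (the rate is an assumption, not something from the post):

\[
384 \text{ GPUs} \times 24 \text{ days} \times 24\,\tfrac{\text{h}}{\text{day}} \approx 2.2 \times 10^{5}\ \text{GPU-hours}, \qquad 2.2 \times 10^{5}\ \text{GPU-hours} \times \$4.5/\text{GPU-hour} \approx \$1.0\text{M}.
\]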
Very nice write-up that's generous in sharing their learnings.
This is a solid and positive contribution.
Like, if I really, really just wanted to build it from scratch, could I do so? (Not that I have that money, but just curious.)
To be honest, I might argue that this is one of the best truly open-source models we have got.
There is AllenAI's OLMo, and there is also the one that does distributed training, but this looks a lot like SOTA for 3B parameters to me.
Thanks for telling me. Not going to lie, I am going to try testing it now! (I'll try a GGUF, since Ollama is convenient.)
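In case it saves someone a search: Ollama can pull a GGUF straight from Hugging Face with the hf.co/ prefix, and its Python client works the same way. The repo/quant tag below is just the one mentioned elsewhere in the thread, used as an example:

# Sketch: run a SmolLM3 GGUF through Ollama's Python client.
import ollama

# Pull it once first, e.g.: ollama pull hf.co/unsloth/SmolLM3-3B-GGUF:Q4_K_XL
model = "hf.co/unsloth/SmolLM3-3B-GGUF:Q4_K_XL"

response = ollama.chat(
    model=model,
    messages=[{"role": "user", "content": "Summarize what makes a 3B model useful locally."}],
)
print(response["message"]["content"])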
AFAIK, they were the first open everything model.
For context, I develop an LLM client, and a core tenet is keeping local models as close to cloud parity as possible (via llama.cpp).
Companies aren't taking local AI seriously on a sustained basis outside Microsoft.
Overall, I would usually bite my tongue. HF is a great citizen, and I doubt this'll be a one-off. However, when I see superlatives affirmed while leaving out what has been the local SoTA for many, many moons, and a godsend in this sector, I think it is good to stand up and say this rather than shy away.
"So it's a small large language model?"
"Oh yes, very small."
"How can it be small and large at the same time?"
"Well, it's small by the standards of a large language model."
"So it's large."
"Oh yes, very large."
"Large compared to what?"
"Small language models."
"And so something like ChatGPT, what would that be exactly? A large large language model?"
"Yes, precisely. An LLLM."
I've been using the SmolLM base models for my own finetunes just because they're so high quality, and it looks like I might be using them to drive local agents/code completion in the near future too.
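For anyone curious what a finetune on one of the SmolLM base checkpoints can look like, here's a minimal sketch with TRL's SFTTrainer; the model id, dataset, and hyperparameters are placeholders rather than an actual recipe:

# Minimal supervised fine-tuning sketch for a SmolLM base checkpoint using TRL.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any chat-style dataset with a "messages" column works; smoltalk is just an example.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train[:1%]")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-360M",  # placeholder: pick whichever base size fits
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="smollm-finetune",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-5,
        logging_steps=10,
    ),
)
trainer.train()
trainer.save_model("smollm-finetune")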
Their RL algorithm looks interesting. I'm still using OpenAI's algorithm for my stuff; I've been meaning to check on the SoTA, since I know my code is pretty outdated. (It's crazy how fast that happens with this stuff.)
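For reference, assuming "OpenAI's algorithm" here means PPO, the clipped surrogate objective in question is:

\[
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\big(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\big)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}.
\]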
I hope you continue the 50-100M parameter models.
I think there is a case for models that can finish fast on CPUs in solve-by-LLM test cases.
./llama.cpp/llama-cli -hf unsloth/SmolLM3-3B-GGUF:Q4_K_XL --jinja -ngl 99
> We're releasing SmolLM3 with our engineering blueprint. It includes architecture details, exact data mixtures showing how we progressively boost performance across domains in a three-stage pretraining approach, and the methodology for building a hybrid reasoning model. Usually, achieving these results would require months of reverse engineering. Instead, we're providing the full methodology.