While machine learning is not my field, I've tried to fine-tune Mistral 7B (following their official guide and toolset) and the results did not satisfy. There were a few very specific questions from the dataset that, no matter how much I fine-tuned and tweaked the process, it could not answer with correct information.
A mix of vector search + keyword search is still better at building the right context for a question than expecting the model to learn all the information.
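A minimal sketch of that hybrid idea, blending a vector-similarity score with a keyword-overlap score. The embed() placeholder, the overlap scorer, and the alpha weight are all assumptions for illustration, not any particular library's API; in practice you'd plug in a real embedding model and something like BM25.

```python
import math
from collections import Counter

def embed(text):
    # Placeholder "embedding": bag-of-words counts. Swap in a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    # Crude keyword overlap; in practice use BM25 or your search engine's scorer.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query, docs, alpha=0.5, k=3):
    # Blend the two signals; alpha is a tunable weight, not a canonical value.
    q_emb = embed(query)
    scored = [
        (alpha * cosine(q_emb, embed(d)) + (1 - alpha) * keyword_score(query, d), d)
        for d in docs
    ]
    return [d for _, d in sorted(scored, key=lambda s: s[0], reverse=True)[:k]]

docs = [
    "Mistral 7B fine-tuning guide and toolset",
    "Vector databases store embeddings for similarity search",
    "Keyword search matches exact terms like product codes",
]
print(hybrid_search("fine-tuning Mistral 7B", docs, k=2))
```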
I used the pretrained dataset approach. Maybe building synthetic questions and answers around the dataset would yield better results, but I didn't have time to experiment with that approach.
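For reference, the synthetic-Q&A approach usually looks roughly like the sketch below: prompt a model to write question/answer pairs from each passage, then fine-tune on those pairs instead of the raw text. The prompt, the generate() stand-in, and the chat-style JSONL format are assumptions about a typical setup, not a prescribed recipe.

```python
import json

PROMPT = (
    "Read the passage below and write 3 question/answer pairs that can be "
    "answered only from the passage. Return one 'Q: ... A: ...' pair per line.\n\n"
    "Passage:\n{passage}\n"
)

def generate(prompt: str) -> str:
    # Stand-in for whatever model endpoint you use (local or hosted).
    raise NotImplementedError("call your LLM of choice here")

def synthesize(passages, out_path="synthetic_qa.jsonl"):
    with open(out_path, "w") as f:
        for passage in passages:
            raw = generate(PROMPT.format(passage=passage))
            for line in raw.splitlines():
                # Expect lines like "Q: ... A: ..."; skip anything else.
                if line.startswith("Q:") and " A: " in line:
                    q, a = line[2:].split(" A: ", 1)
                    record = {"messages": [
                        {"role": "user", "content": q.strip()},
                        {"role": "assistant", "content": a.strip()},
                    ]}
                    f.write(json.dumps(record) + "\n")
```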
Looks like a ballpark of a million dollars of GPU time if you want to train one up for yourself (4000 GPUs / 24 days).
Very nice write-up that's generous in sharing their learnings.
This is a solid and positive contribution.
"So it's a small large language model?"
"Oh yes, very small."
"How can it be small and large at the same time?"
"Well, it's small by the standards of a large language model."
"So it's large."
"Oh yes, very large."
"Large compared to what?"
"Small language models."
"And so something like ChatGPT, what would that be exactly? A large large language model?"
"Yes, precisely. An LLLM."
I've been using the SmolLM base models for my own finetunes just because they're so high quality; it looks like I might be using them to drive local agents/code completion in the near future too.
Their RL algorithm looks interesting. I'm still using OpenAI's algorithm for my stuff; I've been meaning to check on the SoTA since I know my code is pretty outdated. (It's crazy how fast that happens with this stuff.)
I hope you continue the 50-100M parameter models.
I think there is a case for models that finish fast on CPUs in solve-by-LLM test cases.
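As a rough illustration of that use case, here is a CPU-only inference sketch using transformers; the SmolLM2-135M-Instruct checkpoint named below is just one example of a small model in that size range, and the prompt is arbitrary.

```python
from transformers import pipeline

# Force CPU with device=-1; a ~135M-parameter model generates quickly even there.
generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-135M-Instruct",
    device=-1,
)

out = generator(
    "Summarize why small models can be useful on CPUs:",
    max_new_tokens=64,
    do_sample=False,
)
print(out[0]["generated_text"])
```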
> We're releasing SmolLM3 with our engineering blueprint. It includes architecture details, exact data mixtures showing how we progressively boost performance across domains in a three-stage pretraining approach, and the methodology for building a hybrid reasoning model. Usually, achieving these results would require months of reverse engineering. Instead, we're providing the full methodology.