tripplyons•5h ago
Nice to see how open it is! However, if you are just looking for the best model, Mistral Small 3.2 appears to be stronger with fewer parameters than OLMo 2 32B. It would be interesting to see how close these "fully open" models can get to their "open weight" counterparts.
real0mar•4h ago
The inconvenient truth might be that the other models score higher than OLMo because they aren't restricted to purely "open and accessible" training data. Who knows what private or ethically dubious data went into training Mistral or Llama, for example.
erlend_sh•55m ago
Exactly. If we really want to benchmark the various models on the merits of their individual implementations, we should compare them all on the same open dataset.