Can't seem to see it on the arxiv site.
Everything past GPT-5 has been ... fine. It's better at chat (sort of, depending on your tone preference) and way better at coding/tool use. In our product (planning out a migration with AI), the models have gotten worse, because they want to chat or code. I'd have expected the coding knowledge to generalize, but no! Claude especially really wants to change our code or explain the existing plan back to me.
We're getting around it with examples and dynamic prompts, but it's pretty clear that fine-tuning is in our future. I suspect most broad-based AI success is going to look like that over the next couple of years.
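For anyone hitting the same wall, here's a minimal sketch of what I mean by dynamic prompts; the task name, examples, and wording are all hypothetical, not our actual product:

    # Hypothetical sketch: steer a general model back to planning by
    # injecting task-matched few-shot examples at request time.
    FEW_SHOT = {
        "migration_plan": [
            {"input": "Move auth from sessions to JWT",
             "output": "1. Inventory session call sites\n"
                       "2. Add JWT issuance\n"
                       "3. Dual-run both schemes, then cut over"},
        ],
    }

    def build_prompt(task: str, user_request: str) -> str:
        examples = FEW_SHOT.get(task, [])
        shots = "\n\n".join(
            f"Request: {e['input']}\nPlan: {e['output']}" for e in examples
        )
        # Explicitly forbid the failure modes we keep seeing.
        return (
            "You produce migration plans. Do NOT write or edit code, and do "
            "NOT re-explain the existing plan.\n\n"
            f"{shots}\n\nRequest: {user_request}\nPlan:"
        )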
Like, I've wanted this for a year or two now: a model that is, let's say, genuinely really, really good at SvelteKit, as an example, instead of a model that's sort of good at a lot of different things, y'know?
A model for SvelteKit, a model for React, and one for general-purpose coding too, and preferably a website that makes it easy to find and run these models. Ollama comes to mind right now, but it has enshittified a little bit since the time I was first thinking about this, so maybe a little competition on that side wouldn't hurt, I suppose.
But it's informative for the engineers who need something right now, because it means taking the best general-purpose tool and specializing it will outperform the general tool, and you can sustain that edge if you're willing to keep hopping tools and respecializing. As we may.
https://github.com/herniqeu/extract0
To quote Mulder: I want to believe.
Open-source-style small players will actually solve problems with AI.
And the big-money-backed efforts are going to do stupid, pointless, bubbly things at best, or enshittify other good things at worst.
Govern yourselves accordingly.
I guess this is a small step forward, if nothing else, toward the day when I can actually teach a model something in situ on my personal machine (notice I said machine, not "machines") in a very short amount of time. I feel that until then, LLMs and similar technologies won't be maximally helpful. They're very useful, but not maximally helpful.
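For reference, single-machine teaching is already sort of possible with LoRA adapters; here's a minimal sketch, assuming Hugging Face transformers + peft, with the model name and hyperparameters as placeholders:

    # Sketch: attach a small LoRA adapter so fine-tuning fits on one machine.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-3.2-1B"  # placeholder small model
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    config = LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # adapt attention projections only
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically <1% of weights are trainable

Whether that counts as "a very short amount of time" still depends heavily on the machine.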
There is so much research showing you can beat frontier models on narrow tasks with very little investment. It's confusing that the industry at large hasn't caught up with that.
This model is trained on a custom dataset of 280k examples, then tested on 1k very similar examples from the same dataset. Of course it's specialized to outperform general models on this specific task, in this specific domain, with this specific JSON format for output.
This is a reasonable hobby project and an interesting approach to synthetic data generation, but it isn't impressive research.
At minimum, you should test the model on other benchmarks with similar tasks, e.g. DocBench.
They trained on synthetic extractions like "extract equations from arXiv papers" and "extract regulatory information from FDA documents," then tested on more synthetic extractions from the same sources. Essentially, "model trained on synthetic arXiv/PubMed/FDA extractions performs better on more synthetic arXiv/PubMed/FDA extractions than a model that never saw this distribution."
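To make the overlap concrete, a hypothetical train/test pair from a pipeline like that might look as follows; every field here is invented for illustration:

    # Hypothetical records: both sides come from the same generator, so the
    # "test" item matches the training distribution in structure and style.
    train_example = {
        "source": "arxiv",
        "instruction": "Extract all equations as JSON.",
        "document": "...LaTeX body of paper A...",
        "target": {"equations": ["E = mc^2"]},
    }
    test_example = {
        "source": "arxiv",
        "instruction": "Extract all equations as JSON.",
        "document": "...LaTeX body of paper B...",
        "target": {"equations": ["F = ma"]},
    }
    # A model fit to thousands of items like train_example will score well on
    # test_example without demonstrating out-of-distribution extraction.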
I'd like to see how it handles extraction from a real contract, or from a low-quality scan of a financial document, or how it processes a format it didn't see in training. o3 very likely handles these variations better, but we don't have the data to compare.
We need the model weights, or tests on standard benchmarks, to verify whether this generalizes beyond documents that look like the training distribution.