[Edit: serves me right for not reading tfa. My points are well-covered]
Fun fact: LLMs were once envisioned by Steve Jobs in one of his interviews [1].
Essentially, one of his main wishes in life was to meet and interact with Aristotle, which, according to him at the time, computers of the future could make possible.
[1] In 1985, Steve Jobs described a machine that would help people get answers from Aristotle — a modern LLM [video]:
> A language model trained from scratch exclusively on data from certain places and time periods to reduce modern bias and emulate the voice, vocabulary, and worldview of the era.
Discussed here: https://news.ycombinator.com/item?id=46590280
The blog post defines a "vintage model" as one that is trained only on data before a particular cutoff point:
> Vintage LMs are contamination-free by construction, enabling unique generalization experiments [...] The most important objective when training vintage language models is that no data leaks into the training corpus from after the intended knowledge cutoff
But as they acknowledge later, there are multiple major data leakage issues in their training pipeline, and their model does in fact have quite a bit of anachronistic knowledge. So it fails at what they call the most important objective. It's fair to say that they are working toward something that meets their definition of "vintage", but they're not there yet.
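The "contamination-free by construction" idea boils down to a hard date filter on the training corpus. A minimal sketch of that filter, assuming (hypothetically) each document carries a parseable date field — the hard part in practice is that real corpora often don't, which is exactly where the leakage they acknowledge creeps in:

```python
from datetime import date

def filter_by_cutoff(docs, cutoff):
    """Keep only documents dated strictly before the knowledge cutoff.

    Each doc is assumed to be a dict with a 'date' (datetime.date)
    and a 'text' field -- an illustrative schema, not the post's actual pipeline.
    """
    return [d for d in docs if d["date"] < cutoff]

corpus = [
    {"date": date(1899, 5, 1), "text": "old article"},
    {"date": date(1950, 3, 2), "text": "postwar article"},
]

# With a 1900-01-01 cutoff, only the 1899 document survives.
vintage = filter_by_cutoff(corpus, date(1900, 1, 1))
```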
It's going to be more like corresponding with someone from the past. We don't have much in the way of recorded speech from that era, so this will be built from written records. Much more than now, the written records are going to be formal and edited, reflecting a different pattern than casual speech or writing.
Having said that, this is cool. I recently had to OCR a two-hundred year old book with the usual garish fonts from that era. It was remarkably easy to do, and accurate.
Post-World War 2, some people put the odds per year at 10%. Some of that is probably a mix of recency bias and not understanding how to use the new weapons, etc., but as Silver points out, the odds were much lower.
I mention this only b/c the "could an LLM trained on data of the time predict the future" question always makes me think of it.
MerrimanInd•37m ago
I'm currently shopping for a local LLM setup and between something like the Framework Desktop with 64-128GB of shared RAM or just adding a 3090 or 4090 to my homelab so I'm very curious what hardware is working well for others.
zamadatix•7m ago
Parameters are like Hertz - they don't really tell you much until you know the rest anyway. In this case, a parameter is a bfloat16 (2 bytes). I'm sure someone will bother to make quants at some point.
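The back-of-the-envelope math here is just parameter count times bytes per parameter. A quick sketch (weights only — ignoring KV cache, activations, and runtime overhead, which add real memory on top):

```python
def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB: billions of params * bytes each."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# bfloat16 weights: 2 bytes per parameter
print(model_memory_gb(70, 2.0))   # 140.0 GB for a 70B model
# a 4-bit quant: 0.5 bytes per parameter
print(model_memory_gb(70, 0.5))   # 35.0 GB
```

This is why quants matter for fitting models into 24 GB of VRAM on a 3090/4090.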
> I'm currently shopping for a local LLM setup and between something like the Framework Desktop with 64-128GB of shared RAM or just adding a 3090 or 4090 to my homelab so I'm very curious what hardware is working well for others.
I grabbed a 395 laptop w/ 128 GB to be a personal travel workstation. Great for that purpose. It's not exactly a speed demon with LLMs, but it can load large ones (which run even slower as a result), and speed wasn't really my intent. I've found GPUs make for more usable local LLMs, particularly in the speed department, but that depends on how you actually use them and how much you're willing to pay for enough total VRAM.
It's next to impossible to make your money back on local (regardless of what you buy), so I'd just say "go for the best setup you're willing to put money down for" and enjoy it.