It takes about a month for the features from llama.cpp to trickle in. Also, figuring out the best mix of context length, VRAM size, and desired speed takes a while before it becomes intuitive.
I’ve gotten carried away - I meant to say that using the cloud as a fallback for local models is something I absolutely don’t want or need, because privacy is the whole and only point of local models.
- You're running a personally hosted instance. Good for experimentation and personal use, though there's a tradeoff versus renting a cloud server.
- You want to run LLM inference on client machines (i.e., you aren't directly supervising it while it is running).
I'd say that the article is mostly talking about the second one. Doing the first one will get you familiar enough with the ecosystem to handle some of the issues he ran into when attempting the second (e.g., exactly which model to use). But the second has a bunch of unique constraints--you want things to just work for your users, after all.
I've done in-browser neural network stuff in the past (back when using TensorFlow.js was a reasonable default choice), and based on the way LLM trends are going, I'd guess that edge-device LLMs will be relatively reasonable soon; I'm not quite sure I'd deploy one in production this month, but ask me again in a few.
Relatively tightly constrained applications are going to benefit more than general-purpose chatbots: pick a small model that's relatively good at your task, train it on enough of your data (sketched below), and you can get a 1B or 3B model with acceptable performance, let alone the 7B ones being discussed here. It absolutely won't replace ChatGPT (though we're getting closer to replacing ChatGPT 3.5 with small models). But if you've got a specific use case that will hold still enough to deploy a model, it can definitely give you an edge over relying on the APIs.
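To make that concrete, here's a minimal sketch of what "train it on enough of your data" can look like in practice: LoRA fine-tuning of a small base model with Hugging Face transformers/peft/datasets. The base model, dataset file, and hyperparameters are illustrative placeholders, not recommendations.

    # Minimal LoRA fine-tuning sketch for a small (1B-3B) causal LM.
    # Assumes a JSONL file of {"text": ...} examples; all names are placeholders.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    base = "Qwen/Qwen2.5-1.5B"                      # any ~1B-3B base model
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.pad_token or tok.eos_token  # some models ship no pad token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Wrap the base model with low-rank adapters so only a few million
    # parameters are actually trained.
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                             task_type="CAUSAL_LM"))

    data = load_dataset("json", data_files="task_data.jsonl")["train"]
    data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                    remove_columns=data.column_names)

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="out",
                               per_device_train_batch_size=4,
                               num_train_epochs=3,
                               learning_rate=2e-4),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()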
I expect games to be one of the first to try this: per-player-action API costs murder per-user revenue, most gaming devices already have some form of GPU, and most games are shipped as apps, so bundling a few more GB in there is, if not reasonable, at least not unprecedented.
I also agree the goal should not be to replace ChatGPT. I think ChatGPT is way overkill for a lot of the workloads it is handling. A good solution should probably use the cloud LLM outputs to train a smaller model to deploy in the background.
Games, on the other hand, are mostly funded via up-front purchase (so you get the money once and then have to keep the servers running) or free to play, which very carefully tracks user acquisition costs versus revenue. Most F2P games make a tiny amount per player; they make up the difference via volume (and whales). So even a handful of queries per day per player can bankrupt you if you have a million players and no way to recoup the inference cost.
Now, you can obviously add a subscription or ongoing charge to offset it, but that's not how the industry is mostly set up at the moment. I expect that the funding model will change, but meanwhile having a model on the edge device is the only currently realistic way to afford adding an LLM to a big single player RPG, for example.
Now of course "non-technical" here is still a PC gamer who's had to fix drivers once or twice and messaged me to ask "hey how do i into LLM, Mr. AI knower", but I don't think twice these days about showing any PC owner how to use ollama, because I know I probably won't be on the hook for much technical support. My sysadmin friends are easily writing clever scripts against ollama's JSON output to do log analysis and other stuff.
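As a sketch of that kind of script (not their actual code): post a log chunk to ollama's local HTTP API, assumed to be running on its default port 11434, and ask for JSON back. The model name and prompt are placeholders.

    # Sketch: summarize a log chunk via a local ollama server's /api/generate.
    import json
    import requests

    log_chunk = open("/var/log/syslog").read()[-4000:]   # last ~4 KB of the log

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.1:8b",        # whatever model is pulled locally
            "prompt": "Summarize anomalies in this log as JSON with keys "
                      "'severity' and 'summary':\n" + log_chunk,
            "format": "json",              # ask ollama to constrain output to JSON
            "stream": False,
        },
        timeout=300,
    )

    print(json.loads(resp.json()["response"]))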
As a developer, the amount of effort I'm likely to spend on the infra side of getting the model onto the user's computer and getting it running is now FAR FAR below the amount of time I'll spend developing the app itself, getting together a dataset to tune the model I want, etc. Inference is solved enough. "Getting the correct small enough model" is something I would spend a day or two thinking about/testing when building something regardless. It's not hard to check how much VRAM someone has and pick the right model; the decision tree for that will have like 4 branches. It's just so little effort compared to everything else you're going to have to do to deliver something of value to someone, especially in the set of users that have a good reason to run locally.
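That decision tree really is small. A rough sketch, assuming an NVIDIA GPU with nvidia-smi on the PATH; the thresholds and model names are made up for illustration:

    # Sketch: read total VRAM and map it to a model/quant choice.
    import subprocess

    def total_vram_mb() -> int:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        # One line per GPU; use the largest card.
        return max(int(line) for line in out.splitlines() if line.strip())

    def pick_model(vram_mb: int) -> str:
        if vram_mb >= 24_000:
            return "some-14b-q8"       # plenty of headroom
        if vram_mb >= 12_000:
            return "some-12b-q4"
        if vram_mb >= 8_000:
            return "some-7b-q4"
        return "some-3b-q4"            # small GPU / partial CPU offload

    print(pick_model(total_vram_mb()))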
Some models even have a 0.5B draft model. The speed increase is incredible.
I'm sure someone is watching their internet traffic, but I don't.
I take the risk now, but I ask questions about myself, relationships, conversations, etc... Stuff I don't exactly want Microsoft/ChatGPT to have.
Clippy is coming back guys, and we have to be ready for it.
Basically you let a very small model speculate on the next few tokens, and the large model then blesses/rejects those predictions. Depending on how well the small model performs, you get massive speedups that way.
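A toy sketch of that loop (greedy variant; `draft` and `target` here are hypothetical callables that return a model's next greedy token, and a real engine verifies the draft tokens in one batched forward pass with a probabilistic accept rule rather than this simple comparison):

    # Toy greedy speculative decoding: the draft model proposes k tokens, the
    # target model checks them, and we keep the longest agreeing prefix plus
    # the target's own token at the first disagreement.
    def speculative_step(draft, target, ctx, k=4):
        # 1. Cheap: the draft model speculates k tokens autoregressively.
        proposed = []
        for _ in range(k):
            proposed.append(draft(ctx + proposed))

        # 2. Target's greedy choice at each of the k positions (written as
        #    separate calls here; a real engine does this in one forward pass).
        verified = [target(ctx + proposed[:i]) for i in range(k)]

        # 3. Accept while draft and target agree; on the first mismatch, keep
        #    the target's token and stop. Every accepted draft token is a
        #    target-quality token obtained at draft-model cost.
        out = list(ctx)
        for p, v in zip(proposed, verified):
            out.append(v)
            if p != v:
                break
        return out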
The small model has to be as close to the big model as possible - I tried this with models from different vendors and it slowed generation down by 3x or so. So you need to use a small Qwen 2.5 with a big Qwen 2.5, etc.
It needs a pretty large difference in size to result in a speedup. 0.5B vs 27B is the only pairing where I’ve seen a speed bump.
If you know, you know. CPU for LLMs is bad. No amount of Apple marketing can change that.
Even my $700 laptop with a 3050 produces near instant results with 12B models.
I'm not sure what to tell you... Look at the corporations that are doing local LLMs and see what they're buying. They aren't buying Apple, they're buying Nvidia.
- First of all, local inference can never beat cloud inference, for the very simple reason that costs go down with batching. It took me two years to actually understand what batching is: the tensors flowing through the transformer layers have a dimension designed specifically for processing data in parallel, so whether you process 1 sequence or 128 sequences, the cost is essentially the same (see the toy sketch after this list). I've read very few articles that even state this, so bear it in mind - this is the primary blocker to local inference competing with cloud inference.
- Second, and this is not a light one to take: LLM-assisted text2sql is not trivial, not at all. You may think it is, you may expect cutting-edge models to do it right, but there are plenty of reasons models fail so badly at this seemingly trivial task. You may start with an arbitrary article such as https://arxiv.org/pdf/2408.14717 and dig through the references; sooner or later you will stumble on one of dozens of overview papers, mostly by Chinese researchers (such as https://arxiv.org/abs/2407.10956), where the approaches are summarized. Caution: you may either feel inspired that AI will not take over your job, or feel miserable about how much effort is spent on this task and how badly everything fails in real-world scenarios.
- Finally, something I agreed on with a professor advising a doctoral candidate whose thesis, surprisingly, was on the same topic: LLMs do much better with GraphQL and other structured formats such as JSON than with the complex grammar of SQL, which is not a regular grammar but a context-free one, and therefore takes more complex machinery to parse and very often involves recursion.
- Which brings us to the most important question: why do commercial GPTs fare so much better at this than local models? Well, it is presumed that the top players not only use MoEs but also employ beam search, perhaps speculative inference, and all sorts of optimizations at the hardware level. While all of this is not beyond the comprehension of a casual researcher at a casual university (like myself), you don't get to easily run it all locally. I have not written an inference engine myself, but I imagine MoE plus beam search is super complex, as beam search basically means you fork the whole LLM execution state and go back and forth. I'm not sure how this even works together with batching.
So basically, this is too expensive. Besides, at the moment (to my knowledge) only vLLM has some sort of reasonably working local beam search. I would've loved to see llama.cpp's beam search get a rewrite, but it stalled. Trying to get beam search working with current Python libs is nearly impossible on commodity hardware, even if you have 48 gigs of RAM, which already means a very powerful GPU.
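A toy illustration of the batch dimension mentioned in the first point above (PyTorch; the shapes are arbitrary): the same weights serve 1 or 128 sequences in one matrix multiply, which is why a provider can amortize the cost of the weights over many concurrent users while a single local user cannot.

    import torch

    hidden = 4096
    layer = torch.nn.Linear(hidden, hidden)   # stand-in for one transformer sublayer

    one  = torch.randn(1,   512, hidden)      # (batch=1,   seq=512, hidden)
    many = torch.randn(128, 512, hidden)      # (batch=128, seq=512, hidden)

    # Same weights, same code path; the batch is just the leading dimension.
    out_one  = layer(one)                     # -> (1,   512, 4096)
    out_many = layer(many)                    # -> (128, 512, 4096)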
https://docs.google.com/document/d/e/2PACX-1vSyWbtX700kYJgqe...
Since it's in Cyrillic, you should perhaps use a translation service. There are some screenshots showing results, though as I was really on a tight deadline, and it's not a PhD but a master's thesis, I decided not to go into an in-depth evaluation of the proposed methodology against SPIDER (https://yale-lily.github.io/spider). You can still find the simplified GBNF grammar and some of the outputs. The grammar, interestingly, benefits from/exploits a bug in llama.cpp which allows some sort of recursively-chained rules. The bibliography is in English, but really, there is so much written on the topic that it is by no means comprehensive.
Sadly, no open inference engine (at the time of writing) was good enough at both beam search and grammars, so this whole thing perhaps needs to be redone in PyTorch.
If I find myself in a position to do this for commercial purposes, I'd also explore the possibility of having human-curated SQL queries against the particular schema, in order to guide the model better, and then do RAG on the DB for more context. Note: I'm already doing E/R-model reduction to the minimal connected graph that includes all entities of particular interest to the present query (sketched below).
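A sketch of that reduction step, using networkx's approximate Steiner tree; the schema graph and table names here are invented for illustration, not taken from the thesis:

    # Sketch: reduce an E/R (foreign-key) graph to the minimal connected
    # subgraph spanning the tables relevant to the current question.
    import networkx as nx
    from networkx.algorithms.approximation import steiner_tree

    schema = nx.Graph()
    schema.add_edges_from([
        ("orders", "customers"),
        ("orders", "order_items"),
        ("order_items", "products"),
        ("products", "suppliers"),
        ("customers", "addresses"),
    ])

    relevant = ["customers", "products"]      # entities mentioned in the query
    reduced = steiner_tree(schema, relevant)  # minimal connected subgraph

    print(list(reduced.edges()))              # only the joins the prompt needs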
And finally, since you got this far: the real problem with restricting LLM output with grammars is tokenization. Parsers work by reading one character at a time, while tokens are very often several characters, so the parser in a way needs to be able to "look ahead", which it normally does not (toy sketch below). I believe OpenAI wrote that they realized this too, but I can't find the article at the moment.
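A toy illustration of that mismatch - the "grammar" here is faked as a prefix check on a single SQL string, but it shows why the constraint has to be evaluated per candidate token (often several characters) rather than per character:

    # Which tokens may the model emit next, under grammar-constrained decoding?
    def is_valid_prefix(text: str) -> bool:
        # Stand-in for a real CFG parser: accept prefixes of one fixed query.
        return "SELECT name FROM users;".startswith(text)

    def allowed_tokens(generated: str, vocab: list[str]) -> list[str]:
        # The parser has to be consulted once per candidate token, effectively
        # looking ahead over the token's whole character span.
        return [tok for tok in vocab if is_valid_prefix(generated + tok)]

    vocab = ["SEL", "ECT ", "name", " FROM", " users", ";", "DROP", " TABLE"]
    print(allowed_tokens("SEL", vocab))   # ['ECT '] - multi-char lookahead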
Better local beamsearch would be really nice to have, though.
* I say perhaps, because PROLOG engines normally don't rewrite strings like crazy while doing inference, so my statement may be somewhat off.
Plus, there's a mountain of free tokens out there, like Gemini's free tier.
TLDR -- What these frameworks can do on off-the-shelf laptops is astounding. However, it is very difficult to find and deploy a task-specific model, and the models themselves (even with quantization) are so large that the download would kill the UX for most applications.