I'm not able to get my agentic system to use this model though, as it just says "I don't have the tools to do this". I tried modifying various agent prompts to explicitly say "Use foo tool to do bar", without any luck yet. All of the ToolSpecs I use are annotated Pydantic objects, and every other model has figured out how to use these tools.
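For reference, this is roughly the shape of tool definition I mean (a minimal sketch; the tool name and fields here are made up for illustration, not my actual ToolSpec):

    from pydantic import BaseModel, Field

    # The field descriptions end up in the JSON schema the model sees,
    # which is usually what makes tool calls work for other models.
    class SearchDocsArgs(BaseModel):
        """Search the local documentation index and return matching snippets."""
        query: str = Field(..., description="Free-text search query")
        max_results: int = Field(5, description="Maximum number of snippets to return")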
I find that on my M2 Mac, that number is a rough approximation of how much memory the model needs (usually plus about 10%) - which matters because I want to know how much RAM I will have left for running other applications.
Anything below 20GB tends not to interfere with the other stuff I'm running too much. This model looks promising!
I'll give it a try with aider to test the large context as well.
There's context length, but then, how does that relate to input length and output length? Should I just make the numbers match? 32k is 32k? Any pointers?
Just for ollama, see: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-c...
I’m using llama.cpp though, so I can’t confirm these methods.
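If it helps, the ollama REST API takes it as a per-request option (minimal sketch, assuming a local ollama server on the default port and a model tag named "devstral" - adjust to whatever `ollama list` shows). On the llama.cpp side, the rough equivalent is the -c / --ctx-size flag on llama-server.

    import requests

    # num_ctx overrides the default context window for this one request.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "devstral",          # assumed model tag
            "prompt": "Say hello.",
            "stream": False,
            "options": {"num_ctx": 32768},
        },
    )
    print(resp.json()["response"])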
I’ve been using Cursor and I’m kind of disappointed. I get better results just going back and forth between the editor and ChatGPT
I tried localforge and aider, but they are kinda slow with local models
Try hooking aider up to gemini and see how the speed is. I have noticed that people in the localllama scene do not like to talk about their TPS.
However, I've also run into two things: 1) most models don't support tools, and it's sometimes hard to find a version of a model that uses tools correctly; 2) even with good TPS, since the agents are usually doing chain-of-thought and running multiple chained prompts, the experience feels slow. This is true even with Cursor using their own models/APIs.
P.S. I am not a lawyer.
"Apple Intelligence" isn't it but it would be nice to know without churning through tests whether I should bother keeping around 2-3 models for specific tasks in ollama or if their performance is marginal there's a more stable all-rounder model.
To determine how much space a model needs, look at the size of the quantized (lower-precision) model on HuggingFace or wherever it's hosted; Q4_K_M is a good default. As a rough rule of thumb, the quantized file will be a little over half the parameter count, read as gigabytes. For Devstral, that's 14.3GB. You will also need another 1-8GB on top of that to store the context.
For example: a 32GB MacBook Air could use Devstral at 14.3GB + 4GB, leaving ~14GB for the system and applications. A 16GB MacBook Air could use Gemma 3 12B at 7.3GB + 2GB, leaving ~7GB for everything else. An 8GB MacBook could use Gemma 3 4B at 2.5GB + 1GB, but this is probably not worth doing.
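Putting that rule of thumb into a tiny script (the 0.6 GB per billion parameters for Q4_K_M and the flat context overhead are rough assumptions, not measurements):

    # Back-of-the-envelope estimate: Q4_K_M weights are a bit over half the
    # parameter count in GB, plus a few GB for the KV cache / context.
    def estimate_ram_gb(params_billions: float, context_gb: float = 2.0) -> float:
        return params_billions * 0.6 + context_gb

    for name, params in [("Devstral 24B", 24), ("Gemma 3 12B", 12), ("Gemma 3 4B", 4)]:
        print(f"{name}: ~{estimate_ram_gb(params):.1f} GB")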
It's kind-of like asking, for which kind of road-trip would you use a Corolla hatchback instead of a Jeep Grand Wagoneer? For me the answer would be "almost all of them", but for others that might not be the case.
This is still too much; a single 4090 costs $3k.
What a ripoff, considering that a 5090 with 32GB of VRAM also currently costs $3k ;)
(Source: I just received the one I ordered from Newegg a week ago for $2919. I used hotstocks.io to alert me that it was available, but I wasn’t super fast at clicking and still managed to get it. Things have cooled down a lot from the craziness of early February.)
I hope not. Mine was $1700 almost 2 years ago, and the 5090 is out now...
I am hopeful that the prices will drop a bit more with Intel's recently announced Arc Pro B60 with 24GB VRAM, which unfortunately has only half the memory bandwidth of the RTX 3090.
Not sure why other hardware makers are so slow to catch up. Apple really was years ahead of the competition with the M1 Ultra and its 800 GB/s of memory bandwidth.
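A rough way to see why bandwidth is the number that matters: single-stream decoding is mostly memory-bound, so bandwidth divided by the size of the quantized weights gives an approximate ceiling on tokens per second. The bandwidth figures below are approximate, and real throughput will be lower.

    # Crude upper bound: every generated token has to stream the weights
    # through memory once, so tok/s <= bandwidth / weights size.
    def max_tokens_per_sec(bandwidth_gb_s: float, weights_gb: float) -> float:
        return bandwidth_gb_s / weights_gb

    weights_gb = 14.3  # e.g. Devstral at Q4_K_M
    for name, bw in [("RTX 3090", 936), ("Arc Pro B60", 456), ("M1 Ultra", 800)]:
        print(f"{name}: ~{max_tokens_per_sec(bw, weights_gb):.0f} tok/s ceiling")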
Interesting. I've never heard this.
Also, Mistral has been killing it with their most recent models. I pay for Le Chat Pro, it's really good. Mistral Small is really good. Also building a startup with Mistral integration.
I haven't tried it out yet, but every model I've tested from Mistral has been towards the bottom of my benchmarks, in a similar place to Llama.
I'd be very surprised if the real-life performance is anything like they're claiming.
My general impression so far is that they aren't quite up to Claude 3.7 Sonnet, but they're quite good. More than adequate for an "AI pair coding assistant", and suitable for larger architectural work as long as you break things into steps for it.
Wouldn't mind some of my taxpayer money flowing towards Apache/MIT-licensed models.
Even if just to maintain a baseline alternative & keep everyone honest. Seems important that we don't have some large megacorps run away with this.