I built this because I kept seeing the same question everywhere: "what model should I run on my 16GB Mac?" or "will this fit on my 4060?"
LocalClaw matches your exact hardware (OS, RAM, GPU VRAM) to the right model + quantization out of 125+ options. Takes about 30 seconds.
Recommendations are tailored to LM Studio specifically; it shows file sizes, speed/quality scores, and explains WHY a model fits your setup (RAM usage %, VRAM fit, context window overhead).
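To make the fit logic concrete, here's a minimal TypeScript sketch of the kind of check involved. This is my illustration, not LocalClaw's actual code; the interface shapes, the 25% RAM headroom, and the KV-cache overhead figure are all assumptions.

```typescript
// Illustrative sketch (assumed logic, not LocalClaw's implementation):
// a rough fit check combining model file size, VRAM, and a
// context-window overhead estimate.

interface Hardware {
  ramGB: number;  // total system RAM
  vramGB: number; // dedicated GPU VRAM (0 for CPU-only / unified memory)
}

interface ModelQuant {
  name: string;              // e.g. "8B Q4_K_M" (hypothetical entry)
  fileSizeGB: number;        // size of the quantized file on disk
  contextOverheadGB: number; // assumed KV-cache cost at default context
}

function fitReport(hw: Hardware, model: ModelQuant): string {
  const totalNeeded = model.fileSizeGB + model.contextOverheadGB;
  if (totalNeeded <= hw.vramGB) {
    return `${model.name}: fits entirely in VRAM`;
  }
  // Assumption: leave ~25% of RAM free for the OS and other apps.
  const ramBudget = hw.ramGB * 0.75;
  const ramUsagePct = Math.round((totalNeeded / hw.ramGB) * 100);
  if (totalNeeded <= ramBudget) {
    return `${model.name}: CPU/partial offload, ~${ramUsagePct}% of RAM`;
  }
  return `${model.name}: too large for this machine`;
}

// Example: a 16GB Mac (unified memory, so treated as RAM-only here)
console.log(fitReport(
  { ramGB: 16, vramGB: 0 },
  { name: "8B Q4_K_M", fileSizeGB: 4.9, contextOverheadGB: 1.0 }
));
```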
Everything runs in the browser. No data collected, no account needed.
Built this in Switzerland. Feedback welcome, especially if a recommendation feels off for your hardware.