Is anyone here using a Linux machine with 256 GB or 512 GB of RAM to run the latest models locally?
I am considering buying a new laptop/desktop to run models locally. Most benchmarks I see are for Mac M-series chips with MLX, and even then, for big models (>300B params) people are running quantized versions (3-bit, 4-bit), which causes a drop in quality.
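For context on why those quantization levels come up, here is a back-of-envelope sketch (my own illustration, not from the post) of the RAM a model's weights alone need at a given bit width. It ignores KV cache and runtime overhead, so treat it as a lower bound:

```python
def approx_ram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate GB needed to hold the weights of a model.

    params_billions: parameter count in billions (e.g. 300 for a 300B model).
    bits_per_weight: quantization width (16 = fp16, 4 = 4-bit, 3 = 3-bit).
    Weights only; KV cache and framework overhead add more on top.
    """
    # params (in billions) * bits / 8 bits-per-byte = gigabytes of weights
    return params_billions * bits_per_weight / 8


for bits in (16, 8, 4, 3):
    print(f"300B model at {bits:>2}-bit: ~{approx_ram_gb(300, bits):.0f} GB")
```

A 300B model at fp16 needs roughly 600 GB for weights alone, which is why even 512 GB boxes end up running 4-bit (~150 GB) or 3-bit (~113 GB) quants.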
If anyone has used Linux with more than 256 GB of RAM and no dedicated GPU, how is your experience?