https://nvdam.widen.net/s/tlzm8smqjx/workstation-datasheet-d...
Framework's AMD AI Max PCs also come with LPDDR5x-8000 memory: https://frame.work/desktop?tab=specs
At the lower price points you have the AMD machines, which are significantly cheaper even though they're slower and have worse support. Then there's Apple's hardware with higher memory bandwidth, and even the Nvidia AGX Thor, which is faster in GPU compute at the cost of a worse CPU and networking. At the $3-4K price point a Threadripper system also becomes viable, and it can take significantly more memory.
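For a rough sense of why memory bandwidth dominates these comparisons: generating one token with a dense LLM reads every weight once, so peak bandwidth divided by model size gives a ceiling on decode speed. A back-of-the-envelope sketch, assuming a 256-bit bus for the AMD AI Max parts (swap in other bus widths and transfer rates for other machines):

    # Decode-speed ceiling for a dense LLM: tokens/s is roughly bounded
    # by memory bandwidth divided by model size in bytes.

    def peak_bandwidth_gbs(mt_per_s: int, bus_bits: int) -> float:
        """Theoretical peak bandwidth in GB/s from transfer rate and bus width."""
        return mt_per_s * (bus_bits / 8) / 1000  # MT/s * bytes/transfer -> GB/s

    def est_tokens_per_s(bandwidth_gbs: float, params_b: float,
                         bytes_per_param: float) -> float:
        """Upper bound on decode tokens/s (ignores KV cache and compute)."""
        model_gb = params_b * bytes_per_param
        return bandwidth_gbs / model_gb

    # LPDDR5X-8000 on an assumed 256-bit bus:
    bw = peak_bandwidth_gbs(8000, 256)            # ~256 GB/s
    print(f"{bw:.0f} GB/s peak")
    # 70B model quantized to ~4 bits (~0.5 bytes/param):
    print(f"~{est_tokens_per_s(bw, 70, 0.5):.1f} tok/s ceiling")

By the same arithmetic, an Apple chip at ~800 GB/s would have roughly a 3x higher ceiling on the same model, which is why bandwidth rather than raw compute tends to decide these price-tier tradeoffs.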
A bit expensive for 128 GB of RAM. What can the CPU do? Can it flawlessly run all the svchost.exe instances in Windows 11? For this money, does it have a headphone output?
antinomicus•19h ago
This is why, in the long run, I believe we should all aspire to do LLM inference locally. But unfortunately we're just not anywhere close to par with the SoTA cloud models available. Something like the DGX Spark would be a decent step in this direction, but that platform appears to be mostly for prototyping and training models meant to eventually run on data-center Nvidia hardware.
Personally, I think I'll spec out an M5 Max/Ultra Mac Studio once that's a thing and start trying to do this more seriously. The tools are getting better every day, and "this is the worst it'll ever be" is much more applicable to locally run models.
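For anyone who wants to start experimenting now, here's a minimal sketch of local inference through Ollama's REST API, assuming Ollama is running locally and a model has been pulled ("llama3" below is a placeholder for whatever model you have):

    # Query a locally running Ollama server; no cloud involved.
    import requests

    def generate(prompt: str, model: str = "llama3") -> str:
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        print(generate("Explain memory-bandwidth-bound inference in one sentence."))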