Referring to this section?
I don't see a problem with that. This isn't an article about a design intended for 10,000 systems. Just one person's follow through on an interesting project. With disclosure of methodology.
I imagine companies will have first dibs via the likes of agreements with suppliers like CDW, etc, but if Intel had enough of these Battlemage dies accumulated, it could also drastically change the local AI enthusiast/hobbyist landscape; for starters this could drive down the price of workstation cards that are ideal for inference, at the very least. I’m cautiously excited.
On the AMD front (really, a sort of open compute front), Vulkan Kompute is picking up steam and it would be really cool to have a standard that mostly(?) ships with Linux, with older ports available for FreeBSD, so that we can actually run free-as-in-freedom inference locally.
I will note though, 12GB of VRAM and 32GB of system RAM is a ceiling you’re going to hit pretty quickly if you’re into messing with LLMs. There’s basically no way to do a better job at the budget you’re working with though.
One thing I hear about a lot is people using things like RunPod to briefly get access to powerful GPUs/servers when they need one. If you spend $2/hr you can get access to an H100. If you have a budget of $1300 that could get you about 650 hours of compute time, which (unless you’re doing training runs) should last you several months.
In several months time the specs required to run good models will be different again in ways that are hard to predict, so this approach can help save on the heartbreak of buying an RTX 5090 only to find that even that doesn’t help much with LLM inference and we’re all gonna need the cheaper-but-more-VRAM Intel Arc B60s.
When going for more VRAM, with an RTX 5090 currently sitting at $3000 for 32GB, I'm curious why people aren't trying to get the Dell C4140s. Those seem to go for $3000-$4000 for the whole server with 4x V100 16GB, so 64GB total VRAM.
Maybe it's just because they produce heat and noise like a small turbojet.
For inference, no. For training, only slightly.
All this to say some people do in fact do this ;)
(They probably still are, or at least pretty close to it.)
That informed my decision shortly after, when I built something similar - that video card model was widely panned by gamers (or more accurately, gamer 'influencers'), but it was an excellent choice if you wanted 16GB of VRAM with relatively low power draw (150W peak).
TFA doesn't say where they are, or what currency they're using (which implies the hubris of a North American) - at which point that pricing for a second hand, smaller-capacity, higher-power-drawing 4070 just seems weird.
Appreciate the 'on a budget' aspect, it just seems like an objectively worse path, as upgrades are going to require replacement, rather than augment.
As per other comments here, 32 / 12 is going to be really limiting. Yes - lower parameter / smaller-quant models are becoming more capable, but at the same time we're seeing increasing interest in larger context for these at home use cases, and that chews up memory real fast.
No need for that.
But for those of us outside the USA bubble, it's incredibly tiring to have to intuit geo information (when geo information would add to the understanding).
As others noted in sibling comments, TFA had in fact mentioned their location in passing ('San Francisco, CA' in the quoted prompt to ChatGPT, and 'California' at the very end of the third supporting point for the decision to go for an Nvidia 4070). I confess that I skimmed over both those paragraphs.
Now, sure, CA is also a country code, but I stand corrected on my claim that the author completely hid their location. Had I spotted those clues I'd not have had to make any assumptions around wall power capabilities & costs, new & second hand market availability / costs, etc.
I think I mostly catered for those considerations in the rest of my original comment though - asserted power sensitivity makes it surprising that a higher-power-requiring, smaller-RAM-capacity, more-expensive-than-a-sibling-generation-16GB card was selected.
"the 1,440W limit on wall outlets in California" is a pretty good hint.
"I prompted ChatGPT to give me recommendations. Prompt: ... The final build will be located at my residence in San Francisco, CA, ..."
They say California, and I'm seeing the dollar amount in the title and metadata as $1.3k; was that an edit?
As someone who built a period-equivalent rig (with a 12GB 3060 and 128GB RAM) a few years ago, I am not overly optimistic that local models will keep being a cheap alternative (never mind the geopolitics). And yeah, there are very cheap ways to run inference, but they become pointless - I can run Qwen and Phi4 locally on an ARM chip like the RK3588, but it is still dog slow.
And in general, if on a budget then why not buy used and not new? And more so as the author himself talks about the resale value for when he sells it on.
The trick is that memory bandwidth - not just the amount of VRAM - is what matters for LLM inference. For example, the B50 specs list a memory bandwidth of 224 GB/s [1], whereas the Nvidia RTX 3090 has over 900GB/s [2]. The 4070's bandwidth is "just" 500GB/s [3].
More VRAM helps run larger models, but with lower bandwidth, tokens can be generated so slowly that it's not really practical for day-to-day use or experimenting (rough arithmetic sketched below the links).
[1]: https://www.intel.com/content/www/us/en/products/sku/242615/...
[2]: https://www.techpowerup.com/gpu-specs/geforce-rtx-3090.c3622
[3]: https://www.thefpsreview.com/gpu-family/nvidia-geforce-rtx-4...
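A rough back-of-the-envelope sketch of why bandwidth caps decode speed (my own illustrative numbers, not from the article; it assumes single-user decoding has to stream essentially all of the model's weights from VRAM once per generated token):

    # Ceiling on decode speed: tokens/s <= memory bandwidth / bytes of weights read per token.
    # The 8 GB figure is an assumed quantized model size; bandwidths are the rough specs cited above.
    def max_tokens_per_sec(model_gb: float, bandwidth_gbs: float) -> float:
        return bandwidth_gbs / model_gb

    model_gb = 8.0  # e.g. a ~14B model at 4-bit quantization
    for name, bw_gbs in [("Arc B50", 224), ("RTX 4070", 500), ("RTX 3090", 900)]:
        print(f"{name}: <= {max_tokens_per_sec(model_gb, bw_gbs):.0f} tokens/s")

So the same model that tops out around ~28 tokens/s on a 224 GB/s card could, in principle, hit ~110 tokens/s on a 3090-class card before compute even becomes a factor.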
I'm not really knowledgeable about this space, so maybe I'm missing something:
Why does the bus performance affect token generation? I would expect it to cause a slow startup when loading the model, but once the model is loaded, just how much bandwidth can the token generation possibly use?
Token generation is completely on the card using the memory on the card, without any bus IO at all, no?
IOW, I'm trying to think of what IO the card is going to need for token generation, and I can't think of any other than returning the tokens (which, even on a slow 100MB/s transfer, is still going to be about 100x the rate at which tokens are being generated).
This means bandwidth requirements grow as context sizes grow.
For datacenter workloads, batching can be used to make efficient use of this memory bandwidth and make things compute-bound instead.
It seems to me that even if you pass in a long context on every prompt, that context is still tiny compared to the execution time on the processor/GPU/tensorcore/etc.
Let's say I load up a 12GB model on my 12GB VRAM GPU. I pass in a prompt with 1MB of context which causes a response of 500KB after 1s. That's still only 1.5MB of IO transferred in 1s, which kept the GPU busy for 1s. Increasing the prompt is going to increase the duration to a response accordingly.
Unless the GPU is not fully utilised on each prompt-response cycle, I feel that the GPU is still the bottleneck here, not the bus performance.
For reference, llama 3.2 8B used to take 4 KiB per token per layer. At 32 layers that is 128 KiB per token, or 8 tokens per MiB of KV cache (context). If your context holds 8,000 tokens including responses, then you need around 1GB.
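To make that arithmetic explicit (treating the 4 KiB/token/layer figure above as a ballpark; the real number varies with the model, quantization and attention scheme):

    KIB = 1024
    # Ballpark KV-cache budget using the per-token figure quoted above.
    bytes_per_token_per_layer = 4 * KIB
    layers = 32
    context_tokens = 8000

    kv_cache_bytes = bytes_per_token_per_layer * layers * context_tokens
    print(f"KV cache for {context_tokens} tokens: ~{kv_cache_bytes / KIB**3:.2f} GiB")
    # -> about 0.98 GiB, i.e. roughly 1 GB on top of the model weights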
>Unless the GPU is not fully utilised on each prompt-response cycle, I feel that the GPU is still the bottleneck here, not the bus performance.
Matrix vector multiplication implies a single floating point multiplication and addition (2 flops) per parameter. Your GPU can do way more flops than that without using tensor cores at all. In fact, this workload bores your GPU to death.
PCIe bus performance is basically irrelevant.
> Token generation is completely on the card using the memory on the card, without any bus IO at all, no?
Right. But the GPU can't instantaneously access data in VRAM. It has to be copied from VRAM into the GPU's registers first. For every generated token, essentially all of the model weights (i.e. most of what's sitting in VRAM) have to be streamed through the GPU to be computed on. It's a memory-bound process.
Right now there's about an 8x difference in memory bandwidth between low-end and high-end consumer cards (e.g., 4060 Ti vs 5090). Moving up to a B200 more than doubles that performance again.
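Putting the flops argument and the bandwidth argument together, a crude roofline-style comparison (assumed numbers: an ~8B-parameter model quantized to roughly 1 byte per weight, a card with ~500 GB/s of memory bandwidth and ~30 TFLOP/s of non-tensor-core throughput):

    # Per decoded token at batch size 1:
    #   compute: ~2 FLOPs per parameter (one multiply + one add)
    #   memory:  every weight byte streamed from VRAM once
    params = 8e9
    weight_bytes = 8e9            # assumed ~1 byte/param after quantization
    flops_per_token = 2 * params

    bandwidth = 500e9             # bytes/s, roughly RTX 4070 class
    compute = 30e12               # FLOP/s, assumed shader throughput

    t_mem = weight_bytes / bandwidth        # ~16 ms
    t_compute = flops_per_token / compute   # ~0.5 ms
    print(f"memory-bound time/token:  {t_mem * 1e3:.1f} ms")
    print(f"compute-bound time/token: {t_compute * 1e3:.2f} ms")

With these (made-up but plausible) numbers the memory traffic dominates by ~30x, which is why single-stream decode speed tracks memory bandwidth rather than FLOPs.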
The caveat is that sometimes a library might be expecting an older version of CUDA.
The VRAM on the GPU does make a difference, so it would at some point be worth looking at another GPU or increasing your system RAM if you start running into limits.
However I wouldn't worry too much right away; it's more important to get started, get an understanding of how these local LLMs operate, and take advantage of the optimisations the community is making to make it all more accessible. Not everyone has a 5090, and if local LLMs only ever remain the realm of high-end hardware, they won't be worth most people's time anyway.
I gave up.
I'll be that guy™ that says if you're going to do any computing half-way reliably, only use ECC RAM. Silent bit flips suck.
7x RTX 3060 12GB, which results in 84GB of VRAM
AMD Ryzen 5 5500GT with 32GB of RAM
All in a 19-inch rack with a nice cooling solution and a beefy power supply.
My costs? 1300 Euro, but yeah, I sourced my parts on ebay / second hand.
(Added some 3d printed parts into the mix: https://www.printables.com/model/1142963-inter-tech-and-gene... https://www.printables.com/model/1142973-120mm-5mm-rised-noc... https://www.printables.com/model/1142962-cable-management-fu... if you think about building something similar)
My power consumption is below 500 Watt at the wall when using LLMs, since I did some optimizations:
* Worked on power optimizations; after many weeks of benchmarking, the sweet spot on the RTX 3060 12GB cards is a 105 Watt limit (a small sketch of applying this is at the end of this comment)
* Created patches for Ollama (https://github.com/ollama/ollama/pull/10678) to group a model onto exactly the GPUs it needs instead of spreading it over all available GPUs (this also reduces the VRAM overhead)
* Ensured that ASPM is used on all relevant PCIe components (Powertop is your friend)
It's not all shiny:
* I still use PCIe 3.0 x1 for most of the cards, which limits their capability, but everything I've found so far (PCIe Gen4 x4 extenders and bifurcation/special PCIe routers) is just too expensive to be worth using on such low-powered cards
* Due to the slow PCIe bandwidth, the performance drops significantly
* Max VRAM per GPU is king. If you split a model over several cards, the RAM allocation overhead is huge (see the examples in my Ollama patch above). I would rather use 3x 48GB instead of 7x 12GB.
* Some RTX 3060 12GB cards idle at 11-15 Watt, which is unacceptable. Good BIOSes, like the one from Gigabyte (Windforce xxx), idle at 3 Watt, which is a huge difference when you use 7 or more cards. These BIOSes can be patched, but this can be risky
All in all, this server currently idles at 90-100 Watt, which is perfect as a central service for my tinkerings and my family's usage.
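For anyone wanting to reproduce the 105 W limit mentioned above, a minimal sketch of the idea (assumes nvidia-smi is on the PATH, root privileges, and cards that accept that limit; adjust the count and wattage to your setup):

    import subprocess

    GPU_COUNT = 7
    POWER_LIMIT_W = 105  # the sweet spot found above; your cards may differ

    # Persistence mode keeps the driver loaded so the setting sticks between jobs.
    subprocess.run(["nvidia-smi", "-pm", "1"], check=True)
    for gpu in range(GPU_COUNT):
        subprocess.run(["nvidia-smi", "-i", str(gpu), "-pl", str(POWER_LIMIT_W)], check=True)

Note the limit does not survive a reboot, so something like this needs to run from a startup service.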
I know it would increase the idle power consumption, but have you considered a server platform instead of Ryzen to get more lanes?
Even so, you could probably get at least x4 links for 4 cards without getting too crazy: two M.2 -> PCIe adapters, the main GPU slot, and the fairly common x4-wired secondary slot.
Splitting the main 16x GPU slot is possible but whenever I looked into this I kind of found the same thing you did. In addition to being a cabling/mounting nightmare the necessary hardware started to eat up enough total system cost that just ponying up for a 3090 started to make more sense.
I think Radeon RX 7900 XT - 20 GB has been the best bang for your buck. Enables full gpu 32B?
Looking at what other people have been doing lately, they aren't doing this.
They are getting 64+ core CPUs and 512GB of RAM, keeping it on the CPU and enabling massive models. This setup lets you do DeepSeek 671B.
It makes me wonder, how much better is 671B vs 32B?
Cheap too, compared to a lot of what I’m seeing.
32B has improved leaps and bounds in the past year. But Deepseek 671B is still a night and day comparison. 671B just knows so much more stuff.
The main issue with RAM-only builds is that prompt ingestion is incredibly slow. If you're going to be feeding in any context at all, it's horrendous. Most people quote their tokens/s with basically non-existent context (a few hundred tokens). Figure out if you're going to be using context, and how much patience you have. Research the speed you'll be getting for prompt processing / token generation at your desired context length in each instance, and make your decision based on that.
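As a concrete illustration of why prompt ingestion matters (the speeds below are assumptions for the sake of arithmetic, not benchmarks; real prefill rates vary wildly with hardware, model and settings):

    prompt_tokens = 16_000  # e.g. a few source files plus chat history

    # Assumed prefill rates in tokens/s
    rates = {"RAM-only build": 30, "model fully in VRAM": 2000}

    for name, rate in rates.items():
        print(f"{name}: ~{prompt_tokens / rate / 60:.1f} min before the first output token")
    # RAM-only: roughly 9 minutes of waiting; GPU: a few seconds

Token/s quotes taken with a few hundred tokens of context simply never hit this wall.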
It does make me wonder whether we'll start to see more and more computers with unified memory architecture (like the Mac) - I know nvidia have the Digits thing which has been renamed to something else
So there’s a fundamental tradeoff between cost, inference speed, and hostable model size for the foreseeable future.
I followed in those footsteps to create my own [1] (photo [2]).
I picked up a 24GB M40 for around $300 off eBay. I 3D printed a "cowl" for the GPU that I found online and picked up two small fans from Amazon that go in the cowl. Attached, the cowl + fans keep the GPU cool. (These Tesla server GPUs have no fan since they're expected to live in one of those wind tunnels called a server rack.)
I bought the same cheap DELL server PS that the original person had used and I also had to get a break-out board (and power-supply cables and adapters) for the GPU.
Thanks to LLMs, I was able to successfully install Rocky Linux as well as CUDA and NVIDIA drivers. I SSH into it and run ollama commands.
My own hurdle at this point is: I have a 2nd 24 GB M40 TESLA but when installed on the motherboard, Linux will not boot. LLMs are helping me try to set up BIOS correctly or otherwise determine what the issue is. (We'll see.) I would love to get to 48 GB.
[0] https://www.aliexpress.us/item/3256806580127486.html
[1] https://bsky.app/profile/engineersneedart.com/post/3lmg4kiz4...
[2] https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:oxjqlam...
[1] https://www.tomshardware.com/pc-components/gpus/crazed-modde...
waste of effort, why would you go through the trouble of building + blogging for this?
brought to you by carl's jr.
I am gonna push it this week and launch some LLM models to see how they perform!
How efficient are they to run locally, in terms of the electric bill?
I have Dual E5-2699A v4 w/1.5 TB DDR4-2933 spread across 2 sockets.
The full Deepseek-R1 671B (~1.4 TB) with llama.cpp seems to have a problem in that local engines that run the LLMs don't do NUMA-aware allocation, so cores will often have to pull the weights in from another socket's memory controllers through the inter-socket links (QPI/UPI/Hypertransport) and bottleneck there.
For my platform that's 2x QPI links @ ~39.2GB/s/link that get saturated.
I give it a prompt, go to work and check back on it at lunch and sometimes it's still going.
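To put rough numbers on that ceiling (my assumptions, not measurements: DeepSeek-R1 is an MoE that activates roughly 37B of its 671B parameters per token, and the worst case is that all of those weights cross the inter-socket links at about 1 byte per parameter):

    qpi_links = 2
    gbs_per_link = 39.2                         # from the figure above
    cross_socket_bw = qpi_links * gbs_per_link  # ~78.4 GB/s

    active_params = 37e9    # ~37B active params per token (MoE), assumption
    bytes_per_param = 1.0   # assumed ~8-bit weights on average

    gb_per_token = active_params * bytes_per_param / 1e9
    print(f"worst case: ~{cross_socket_bw / gb_per_token:.1f} tokens/s "
          f"if every active weight crosses the socket link")

That works out to a ceiling of roughly 2 tokens/s before any compute or NUMA-miss overhead, which fits the check-back-at-lunch experience.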
If you want it to feel interactive I'd aim for 7-10 tokens/s, so realistically that means you'll run one of the 8b models on a GPU (~30 tokens/s) or maybe a 70b model on an M4 Max (~8 tokens/s).
https://www.bosgamepc.com/products/bosgame-m5-ai-mini-deskto...
Last I saw data on this, it wasn't true. In a like-for-like comparison (same model and quant) the API is cheaper than the electricity alone, so you never make back the hardware cost. That was a year ago and API costs have plummeted, so I'd imagine it's even worse now.
Datacenters have cheaper electricity, can do batch inference at scale, and run more efficient cards. And that's before we consider the huge free allowances by Google etc.
Own AI gear is cool…but not due to economics
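A quick way to sanity-check the economics yourself (every number below is a placeholder to show the formula, not a quote; plug in your own electricity rate, your measured tokens/s, and whatever the API you'd compare against currently charges per million tokens):

    # Electricity cost per million output tokens when self-hosting.
    watts = 350            # wall draw while generating (assumption)
    tokens_per_sec = 30    # measured decode speed (assumption)
    usd_per_kwh = 0.30     # residential electricity rate (assumption)

    kwh_per_mtok = watts / 1000 * (1e6 / tokens_per_sec) / 3600
    print(f"home electricity: ~${kwh_per_mtok * usd_per_kwh:.2f} per 1M output tokens")
    # With these numbers, roughly $0.97 per 1M tokens, ignoring hardware cost entirely.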
The comparison I saw was a small llama 8B model. ie something you can actually get usable numbers on both home and api. So something pretty commoditized
> When ran 24/7, CC would possibly incur more API fees than residential electricity would cost when running on your own gear?
Claude is pretty damn expensive so plausible that you can undercut it with another model. That implies you throw out the like for like assumptions out the door though. Valid play practically, but kinda undermines the buy own rig to save argument
I'm not sure if right now is the best timing for building an LLM rig, as Intel Arc B60(24GBx2) is about to go on sale. Or maybe it is to secure multiples of 16GB cards hastily offloaded before its launch?
I didn't buy second hand parts since I wasn't sure of the quality, so it was a little pricey, but we have the entire thing working now and over the last week we added the LLM server to the mix. Haven't released it yet though.
I wrote about some "fun" we had getting it together here but it's not as technically detailed as the original article.
https://blog.hpcinfra.com/when-linkedin-met-reality-our-bang...
It would be nice to see best-value home AI setups under different budgets or RAM tiers, e.g. the best-value configuration for 128 GB of GPU VRAM, etc.
My 48GB GPU VRAM "Home AI Server" cost ~$3100 from all parts on eBay running 3x A4000's in a Supermicro 128GB RAM, 32/64 core Xeon 1U rack server. Nothing amazing but wanted the most GPU VRAM before paying the premium Nvidia tax on their larger GPUs.
This works well for Ollama/llama-server, which can make use of all the GPU VRAM. Unfortunately ComfyUI can't make use of all the GPU VRAM to run larger models, so I'm on the lookout for a lot more RAM in my next GPU server.
Really hoping Intel can deliver with its upcoming Arc Pro B60 Dual GPU for a great value 48GB option which can be run 4x in an affordable 192GB VRAM workstation [1]. If it runs Ollama and ComfyUI efficiently I'm sold.
[1] https://www.servethehome.com/maxsun-intel-arc-pro-b60-dual-g...
source code: https://github.com/KevinColemanInc/NSFW-FLASK
The dataset seems to be images of high production value (e.g. limited races, staged poses, etc). If I have time, I will compare it with Bumble's model, but I think the images I'm trying to identify are closer to Bumble's training set.
Admittedly with that amount of VRAM the models I can run are fairly useless for stuff like controlling lights via Home Assistant, occasionally does what I tell it to do but usually not. It is pretty okay for telling me information, like temperature or value of some sensors I have connected to HA. For generating AI paintings it's enough. My server also hosts tons of virtual machines, docker containers and is used for remote gameplay, so the AI thing is just an extra.
https://www.amazon.sg/NVIDIA-Jetson-Orin-64GB-Developer/dp/B...
Those DGX machines are still in a "right around the corner" state.
I'm curious why OP didn't go for the more recent Nvidia RTX 4060 Ti with 16 GB VRAM, which costs less (~USD 500) brand new and has lower power consumption at 165W [1].
[1] RTX 5060 Ti 16GB sucks for gaming, but seems like a diamond in the rough for AI:
You can however solder on double-capacity memory chips to get 22GB:
https://forums.overclockers.com.au/threads/double-your-gpu-m...
I hoped the article would be more along these lines than calling an unremarkable second-hand last-gen gaming pc an "AI Server".
--
Using LLM via api: Starbucks.
Inference at home: Nespresso capsules.
Fine-tune a small model at home: Owning a grinder and an italian espresso machine.
Pre-training a model: Owning a moderate coffee plantation.
Most of the recommendations for this budget AI system are on point - the only thing I'd recommend is more RAM. 32GB is not a lot - particularly if you start to load larger models through formats such as GGUF and want to take advantage of system ram to split the layers at the cost of inference speed. I'd recommend at least 2 x 32GB or even 4 x 32GB if you can swing it budget-wise.
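For context on what splitting layers looks like in practice, a minimal sketch using llama-cpp-python (the model path and layer count are placeholders; n_gpu_layers controls how many transformer layers sit in VRAM, with the remainder kept in system RAM at the cost of inference speed):

    from llama_cpp import Llama  # pip install llama-cpp-python (built with GPU support)

    llm = Llama(
        model_path="models/some-70b-q4_k_m.gguf",  # placeholder GGUF file
        n_gpu_layers=40,   # layers offloaded to the GPU; the rest stay in system RAM
        n_ctx=8192,        # larger contexts eat both VRAM and system RAM
    )
    out = llm("Explain KV caches in one paragraph.", max_tokens=200)
    print(out["choices"][0]["text"])

This is exactly the scenario where 32GB of system RAM runs out fast: the CPU-resident layers, the OS, and anything else you're running all compete for it.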
Author mentioned using Claude for recommendations, but another great resource for building machines is PC Part Picker. They'll even show warnings if you try pairing incompatible parts or try to use a PSU that won't supply the minimum recommended power.
https://pcpartpicker.com