
Building an AI server on a budget

https://www.informationga.in/blog/building-an-ai-server-on-a-budget
71•mful•2d ago

Comments

vunderba•2d ago
The RTX market is particularly irritating right now; even second-hand 4090s are still going for MSRP, if you can find them at all.

Most of the recommendations for this budget AI system are on point - the only thing I'd recommend is more RAM. 32GB is not a lot - particularly if you start to load larger models through formats such as GGUF and want to take advantage of system RAM to split the layers at the cost of inference speed. I'd recommend at least 2 x 32GB, or even 4 x 32GB if you can swing it budget-wise.
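
As a rough sketch of what that split looks like in practice (using the llama-cpp-python bindings; the model path and layer count here are placeholders, not recommendations):

    from llama_cpp import Llama

    # Hypothetical GGUF too large for 12GB of VRAM: offload only some layers to the
    # GPU and run the rest from system RAM - slower, but it fits.
    llm = Llama(
        model_path="models/some-large-model-q4_k_m.gguf",  # placeholder path
        n_gpu_layers=30,  # layers kept on the GPU; the remainder use system RAM
        n_ctx=8192,       # context also consumes memory on top of the weights
    )

    print(llm("Why does layer offloading trade speed for capacity?", max_tokens=64))

The more layers you have to keep off the GPU, the more system RAM matters - hence the 2 x 32GB recommendation.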

Author mentioned using Claude for recommendations, but another great resource for building machines is PC Part Picker. They'll even show warnings if you try pairing incompatible parts or try to use a PSU that won't supply the minimum recommended power.

https://pcpartpicker.com

Aeolun•1h ago
I thought those 4090s were weird. You pay more for them than for a brand-new 5090. And then there's AMD, which everyone loves to hate, but which has similar GPUs that cost a quarter of what a comparable Nvidia GPU costs.
uniposterz•2d ago
I had a similar setup for a local LLM, 32GB was not enough. I recommend going for 64GB.
golly_ned•2d ago
Whenever I get to a section that was clearly autogenerated by an LLM I lose interest in the entire article. Suddenly the entire thing is suspect and I feel like I'm wasting my time, since I'm no longer encountering the mind of another person, just interacting with a system.
bravesoul2•2d ago
I didn't see anything like that here. Yeah they used bullets.
golly_ned•1d ago
There's a section that lists the parts of a PC and explains what each part is.
Nevermark•4h ago
> I used the AI-generated recommendations as a starting point, and refined the options with my own research.

Referring to this section?

I don't see a problem with that. This isn't an article about a design intended for 10,000 systems. Just one person's follow through on an interesting project. With disclosure of methodology.

throwaway314155•3h ago
Eh, yeah - the article starts off pretty specific but then gets into the weeds of stuff like how to put your PC together, which is far from novel information and certainly not on-topic in my opinion.
7speter•2d ago
I dunno about everyone else, but I think Intel has something big on their hands with their announced workstation GPUs. The B50 is a low-profile card that doesn't have a power-supply hookup because it only uses something like 60 watts, and it comes with 16GB of VRAM at an MSRP of $300.

I imagine companies will have first dibs via supply agreements with the likes of CDW, etc., but if Intel has enough of these Battlemage dies accumulated, it could also drastically change the local AI enthusiast/hobbyist landscape; for starters, it could drive down the price of workstation cards that are ideal for inference, at the very least. I'm cautiously excited.

On the AMD front (really, a sort of open-compute front), Vulkan Kompute is picking up steam, and it would be really cool to have a standard that mostly(?) ships with Linux, with older ports available for FreeBSD, so that we can actually run free-as-in-freedom inference locally.

Uehreka•2d ago
Love the attention to detail, I can tell this was a lot of work to put together and I hope it helps people new to PC building.

I will note though, 12GB of VRAM and 32GB of system RAM is a ceiling you’re going to hit pretty quickly if you’re into messing with LLMs. There’s basically no way to do a better job at the budget you’re working with though.

One thing I hear about a lot is people using things like RunPod to briefly get access to powerful GPUs/servers when they need one. If you spend $2/hr you can get access to an H100. A budget of $1300 would get you about 650 hours of compute time, which (unless you're doing training runs) should last you several months.

In several months' time the specs required to run good models will be different again, in ways that are hard to predict, so this approach can help save on the heartbreak of buying an RTX 5090 only to find that even that doesn't help much with LLM inference and we're all gonna need the cheaper-but-more-VRAM Intel Arc B60s.

semi-extrinsic•2h ago
> save on the heartbreak of buying an RTX 5090 only to find that even that doesn’t help much with LLM inference and we’re all gonna need the cheaper-but-more-VRAM Intel Arc B60s

When going for more VRAM, with an RTX 5090 currently sitting at $3000 for 32GB, I'm curious why people aren't trying to get the Dell C4140s. Those seem to go for $3000-$4000 for the whole server with 4x V100 16GB, so 64GB total VRAM.

Maybe it's just because they produce heat and noise like a small turbojet.

Jedd•2d ago
In January 2024 there was a similar post ( https://news.ycombinator.com/item?id=38985152 ) wherein the author selected dual Nvidia 4060 Tis for an at-home-LLM-with-voice-control build -- because they were the cheapest cost per GB of well-supported VRAM at the time.

(They probably still are, or at least pretty close to it.)

That informed my decision shortly after, when I built something similar - that video card model was widely panned by gamers (or more accurately, gamer 'influencers'), but it was an excellent choice if you wanted 16GB of VRAM with relatively low power draw (150W peak).

TFA doesn't say where they are, or what currency they're using (which implies the hubris of a North American) - at which point that pricing for a second-hand, smaller-capacity, higher-power-draw 4070 just seems weird.

Appreciate the 'on a budget' aspect; it just seems like an objectively worse path, as upgrades are going to require replacement rather than augmentation.

As per other comments here, 32 / 12 is going to be really limiting. Yes - lower parameter / smaller-quant models are becoming more capable, but at the same time we're seeing increasing interest in larger context for these at home use cases, and that chews up memory real fast.

throwaway314155•4h ago
> which implies the hubris of a North American

No need for that.

topato•2h ago
True, though
topato•2h ago
He did soften the blow by saying North American, rather than the more apropos 'American'.
dfc•1m ago
The author also refers to Californian power limits. So it seems the criticism is misplaced.
T-A•1h ago
> TFA doesn't say where they are

"the 1,440W limit on wall outlets in California" is a pretty good hint.

rcarmo•2d ago
The trouble with these things is that "on a budget" doesn't deliver much when most interesting and truly useful models are creeping beyond the 16GB VRAM limit and/or require a lot of wattage. Even a Mac mini with enough RAM is starting to look like an expensive proposition, and the AMD Strix Halo APUs (the SKUs that matter, like the Framework Desktop at 128GB) are around $2K.

As someone who built a period-equivalent rig (with a 12GB 3060 and 128GB RAM) a few years ago, I am not overly optimistic that local models will keep being a cheap alternative (never mind the geopolitics). And yeah, there are very cheap ways to run inference, but they become pointless - I can run Qwen and Phi-4 locally on an ARM chip like the RK3588, but it is still dog slow.

v5v3•2d ago
I thought prevailing wisdom was that a used 3090, with its larger VRAM, was the best budget GPU choice?

And in general, if on a budget, why not buy used rather than new? More so as the author himself talks about the resale value for when he sells it on.

olowe•2d ago
> I thought prevailing wisdom was that a used 3090, with its larger VRAM, was the best budget GPU choice?

The trick is that memory bandwidth - not just the amount of VRAM - is important for LLM inference. For example, the B50 specs list a memory bandwidth of 224 GB/s [1], whereas the Nvidia RTX 3090 has over 900GB/s [2]. The 4070's bandwidth is "just" 500GB/s [3].

More VRAM helps run larger models, but with lower bandwidth, tokens may be generated so slowly that it's not really practical for day-to-day use or experimenting.

[1]: https://www.intel.com/content/www/us/en/products/sku/242615/...

[2]: https://www.techpowerup.com/gpu-specs/geforce-rtx-3090.c3622

[3]: https://www.thefpsreview.com/gpu-family/nvidia-geforce-rtx-4...
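
As a back-of-the-envelope sketch of why that matters (illustrative numbers only, assuming a ~8 GB 4-bit quant; real-world speeds land below this ceiling):

    def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        # Decode is memory-bound: each generated token re-reads roughly all the weights
        return bandwidth_gb_s / model_size_gb

    for name, bw in [("Arc B50", 224), ("RTX 4070", 500), ("RTX 3090", 900)]:
        print(f"{name}: ~{max_tokens_per_sec(bw, 8):.0f} tok/s theoretical ceiling")

That works out to roughly 28 vs 62 vs 112 tokens per second - the gap you feel in day-to-day use.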

lelanthran•3h ago
> The trick is memory bandwidth - not just the amount of VRAM - is important for LLM inference.

I'm not really knowledgeable about this space, so maybe I'm missing something:

Why does the bus performance affect token generation? I would expect it to cause a slow startup when loading the model, but once the model is loaded, just how much bandwidth can the token generation possibly use?

Token generation is completely on the card using the memory on the card, without any bus IO at all, no?

IOW, I'm trying to think of what IO the card is going to need for token generation, and I can't think of anything other than returning the tokens (which, even on a slow 100MB/s transfer, is still going to be about 100x the rate at which tokens are being generated).

retinaros•4h ago
yes it is
politelemon•2d ago
If the author is reading this, I'll point out that the CUDA toolkit you find in the repositories is generally older. You can find the latest versions straight from Nvidia: https://developer.nvidia.com/cuda-downloads?target_os=Linux&...

The caveat is that sometimes a library might be expecting an older version of CUDA.
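
A quick way to catch that mismatch (a sketch assuming PyTorch is installed; it reports the CUDA version the framework was built against, which is what usually has to line up with the toolkit and driver):

    import torch

    # CUDA version PyTorch was compiled against vs. what the driver exposes
    print("Built against CUDA:", torch.version.cuda)
    print("GPU visible:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))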

The VRAM on the GPU does make a difference, so it would at some point be worth looking at another GPU or increasing your system RAM if you start running into limits.

However, I wouldn't worry too much right away; it's more important to get started, get an understanding of how these local LLMs operate, and take advantage of the optimisations the community is making to make them more accessible. Not everyone has a 5090, and if LLMs remain in the realm of high-end hardware, it's not worth the time.

throwaway314155•3h ago
The other main caveat is that installing from custom sources using apt is a massive pain in the ass.
burnt-resistor•2d ago
Reminds me of https://cr.yp.to/hardware/build-20090123.html

I'll be that guy™ that says if you're going to do any computing half-way reliably, only use ECC RAM. Silent bit flips suck.

DogRunner•2d ago
I used a similar budget and built something like this:

7x RTX 3060 12GB, which results in 84GB of VRAM
AMD Ryzen 5 5500GT with 32GB of RAM

All in a 19-inch rack with a nice cooling solution and a beefy power supply.

My costs? 1300 Euro, but yeah, I sourced my parts on ebay / second hand.

(Added some 3d printed parts into the mix: https://www.printables.com/model/1142963-inter-tech-and-gene... https://www.printables.com/model/1142973-120mm-5mm-rised-noc... https://www.printables.com/model/1142962-cable-management-fu... if you think about building something similar)

My power consumption is below 500 watts at the wall when using LLMs, since I did some optimizations:

* Worked on power optimizations; after many weeks of benchmarking, the sweet spot on the RTX 3060 12GB cards is a 105 watt limit (see the sketch below this list)

* Created patches for Ollama ( https://github.com/ollama/ollama/pull/10678) to group a model onto exactly the GPUs its memory allocation needs instead of spreading it over all available GPUs (this also reduces the VRAM overhead)

* Ensured that ASPM is used on all relevant PCIe components (Powertop is your friend)
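
(A minimal sketch of that power cap, assuming the nvidia-ml-py / pynvml bindings; it needs root, and the one-liner equivalent is "nvidia-smi -pl 105":)

    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            before_w = pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000
            # 105 W was the sweet spot for my 3060s; tune for your own cards
            pynvml.nvmlDeviceSetPowerManagementLimit(handle, 105_000)  # milliwatts
            print(f"GPU {i}: {before_w:.0f} W -> 105 W")
    finally:
        pynvml.nvmlShutdown()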

It's not all shiny:

* I still use PCIe 3.0 x1 for most of the cards, which limits their capability, but everything I've found so far (PCIe Gen4 x4 extenders and bifurcation/special PCIe routers) is just too expensive to be used on such low-powered cards

* Due to the slow PCIe bandwidth, the performance drops significantly

* Max VRAM per GPU is king. If you split a model over several cards, the RAM allocation overhead is huge! (See the examples in my Ollama patch above.) I would rather use 3x 48GB instead of 7x 12GB.

* Some RTX 3060 12GB cards idle at 11-15 watts, which is unacceptable. Good BIOSes, like the one from Gigabyte (Windforce xxx), idle at 3 watts, which is a huge difference when you use 7 or more cards. These BIOSes can be patched, but that can be risky

All in all, this server idles at 90-100 watts currently, which is perfect as a central service for my tinkering and my family's usage.

incomingpain•2d ago
I've been dreaming on pcpartpicker.

I think the Radeon RX 7900 XT (20 GB) has been the best bang for your buck. Enough to run a 32B model fully on the GPU?

Looking at what other people have been doing lately, they aren't doing this.

They're getting 64+ core CPUs and 512GB of RAM, keeping everything on the CPU and enabling massive models. That setup lets you run DeepSeek 671B.

It makes me wonder, how much better is 671B vs 32B?
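
Rough napkin math on whether those fit (weights only, assuming ~4.5 bits per weight for a typical Q4 quant; KV cache and runtime overhead push the real numbers up):

    def weight_size_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
        # Size of the quantized weights alone, ignoring KV cache and overhead
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    print(f"32B  ~ {weight_size_gb(32):.0f} GB")   # ~18 GB: just fits a 20 GB card
    print(f"671B ~ {weight_size_gb(671):.0f} GB")  # ~377 GB: CPU-plus-big-RAM territory

So a 32B Q4 quant just squeezes into 20 GB with little room left for context, while 671B is firmly in the 512GB-of-system-RAM camp.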

Aeolun•1h ago
I bought an RX 7900 XTX with 24GB, and it’s everything I expected of it. It’s absolutely massive though. I thought I could add one extra for more memory, but that’s a pipe dream in my little desktop box.

Cheap too, compared to a lot of what I’m seeing.

djhworld•4h ago
With system builds like this I always feel the VRAM is the limiting factor when it comes to what models you can run, and consumer-grade stuff tends to max out at 16GB or (sometimes) 24GB for more expensive models.

It does make me wonder whether we'll start to see more and more computers with a unified memory architecture (like the Mac) - I know Nvidia has the Digits thing, which has been renamed to something else.

JKCalhoun•3h ago
Go the server-GPU (Tesla) route and 24 GB is not unusual. (And also about $300 used on eBay.)
atentaten•3h ago
Enjoyed the article as I am interested in the same. I would like to have seen more about the specific use cases and how they performed on the rig.
ww520•3h ago
I use a 10-year-old laptop to run a local LLM. The time between prompts is 10-30 seconds. Not for speedy interactive usage.
JKCalhoun•3h ago
Someone posted that they had used a "mining rig" [0] from AliExpress for less than $100. It even has RAM and a CPU. He picked up a 2000W (!) Dell server PSU for cheap off eBay. The GPUs were Nvidia Teslas (the M40, for example) since they often have a lot of RAM and are less expensive.

I followed in those footsteps to create my own [1] (photo [2]).

I picked up a 24GB M40 for around $300 off eBay. I 3D printed a "cowl" for the GPU that I found online and picked up two small fans from Amazon that fit in the cowl. Attached, the cowl + fans keep the GPU cool. (These Tesla server GPUs have no fan since they're expected to live in one of those wind tunnels called a server rack.)

I bought the same cheap Dell server PSU that the original person had used, and I also had to get a breakout board (and power-supply cables and adapters) for the GPU.

Thanks to LLMs, I was able to successfully install Rocky Linux as well as CUDA and NVIDIA drivers. I SSH into it and run ollama commands.

My remaining hurdle at this point: I have a second 24 GB M40 Tesla, but when it's installed on the motherboard, Linux will not boot. LLMs are helping me try to set up the BIOS correctly or otherwise determine what the issue is. (We'll see.) I would love to get to 48 GB.

[0] https://www.aliexpress.us/item/3256806580127486.html

[1] https://bsky.app/profile/engineersneedart.com/post/3lmg4kiz4...

[2] https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:oxjqlam...

rjsw•1h ago
There was an article on Tom's Hardware recently where someone was using a CPU cooler with a GPU [1].

[1] https://www.tomshardware.com/pc-components/gpus/crazed-modde...

iJohnDoe•3h ago
Any details about the ML or AI software?
jacekm•2h ago
For $100 more you could get a used 3090 with twice as much VRAM. You could also get a 4060 Ti, which is cheaper than the 4070 and has 16 GB of VRAM (although it's less powerful too, so I guess it depends on the use case).
msp26•2h ago
> 12GB VRAM

Waste of effort - why would you go through the trouble of building + blogging for this?

pshirshov•1h ago
A 3090 for ~1000 is a much more solid choice. Also, these old mining mobos play very well with multi-GPU Ollama.
usercvapp•1h ago
I have a server at home with 2 TB of RAM and 4 CPUs that has been sitting idle for the last 2 years.

I am gonna push it this week and launch some LLM models to see how they perform!

How efficient are they to run locally, in terms of the electric bill?

T-A•1h ago
I would consider adding $400 for something like this instead:

https://www.bosgamepc.com/products/bosgame-m5-ai-mini-deskto...

whalesalad•53m ago
I would rather spend $1,300 on openai/anthropic credits. The performance from that 4070 cannot be worth the squeeze.
Havoc•17m ago
> You pay a lot upfront for the hardware, but if your usage of the GPU is heavy, then you save a lot of money in the long run.

Last I saw data on this, it wasn't true. In a like-for-like comparison (same model and quant), the API is cheaper than the electricity, so you never make back the hardware cost. That was a year ago, and API costs have plummeted, so I'd imagine it's even worse now.

Datacenters have cheaper electricity, can do batch inference at scale, and run more efficient cards. And that's before we consider the huge free allowances from Google etc.

Owning your own AI gear is cool… but not because of the economics.
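
A back-of-the-envelope version of that comparison (every number below is an assumption, just to show the shape of it):

    def local_cost_per_million_tokens(watts: float, tok_per_s: float, usd_per_kwh: float) -> float:
        hours = 1_000_000 / tok_per_s / 3600       # wall-clock hours to generate 1M tokens
        return watts / 1000 * hours * usd_per_kwh  # electricity only; hardware not amortized

    # e.g. a 4070-class box drawing ~300 W at ~30 tok/s, paying $0.30/kWh:
    print(local_cost_per_million_tokens(300, 30, 0.30))  # ~$0.83 per 1M tokens

Hosted APIs for comparable open-weight models are often priced at or below that per 1M output tokens - and that's before the $1,300 of hardware is counted.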

Why Android can't use CDC Ethernet (2023)

https://jordemort.dev/blog/why-android-cant-use-cdc-ethernet/
118•goodburb•3h ago•51 comments

Poison everywhere: No output from your MCP server is safe

https://www.cyberark.com/resources/threat-research-blog/poison-everywhere-no-output-from-your-mcp-server-is-safe
49•Bogdanp•2h ago•20 comments

Omnimax

https://computer.rip/2025-06-08-Omnimax.html
51•aberoham•3h ago•14 comments

Building supercomputers for autocrats probably isn't good for democracy

https://helentoner.substack.com/p/supercomputers-for-autocrats
43•rbanffy•3h ago•18 comments

Panjandrum: The 'giant firework' built to break Hitler's Atlantic Wall

https://www.bbc.com/future/article/20250603-the-giant-firework-built-to-break-hitlers-atlantic-wall
77•rmason•3d ago•59 comments

OpenBSD IO Benchmarking: How Many Jobs Are Worth It?

https://rsadowski.de/posts/2025/fio_simple_benckmarking/
13•PaulHoule•1h ago•0 comments

Administering immunotherapy in the morning seems to matter. Why?

https://www.owlposting.com/p/the-time-of-day-that-immunotherapy
107•abhishaike•8h ago•73 comments

My first attempt at iOS app development

https://mgx.me/my-first-attempt-at-ios-app-development
62•surprisetalk•3d ago•28 comments

Show HN: Let’s Bend – Open-Source Harmonica Bending Trainer

https://letsbend.de
74•egdels•8h ago•12 comments

The Wire That Transforms Much of Manhattan into One Big, Symbolic Home

https://www.atlasobscura.com/articles/eruv-manhattan-invisible-wire-jewish-symbolic-religious-home
19•rmason•4h ago•8 comments

Startup Equity 101

https://quarter--mile.com/Startup-Equity-101
91•surprisetalk•3d ago•39 comments

Cheap yet ultrapure titanium might enable widespread use in industry (2024)

https://phys.org/news/2024-06-cheap-ultrapure-titanium-metal-enable.amp
65•westurner•4d ago•37 comments

Tracking Copilot vs. Codex vs. Cursor vs. Devin PR Performance

https://aavetis.github.io/ai-pr-watcher/
11•HiPHInch•3d ago•2 comments

Gaussian integration is cool

https://rohangautam.github.io/blog/chebyshev_gauss/
135•beansbeansbeans•15h ago•28 comments

How Compiler Explorer Works in 2025

https://xania.org/202506/how-compiler-explorer-works
97•vitaut•4d ago•19 comments

I Used AI-Powered Calorie Counting Apps, and They Were Even Worse Than Expected

https://lifehacker.com/health/ai-powered-calorie-counting-apps-worse-than-expected
17•gnabgib•1h ago•3 comments

Endangered classic Mac plastic color returns as 3D-printer filament

https://arstechnica.com/apple/2025/06/new-filament-lets-you-3d-print-parts-in-authentic-1980s-apple-computer-color/
37•CobaltFire•3d ago•0 comments

Binfmtc – binfmt_misc C scripting interface

https://www.netfort.gr.jp/~dancer/software/binfmtc.html.en
76•todsacerdoti•11h ago•19 comments

The last six months in LLMs, illustrated by pelicans on bicycles

https://simonwillison.net/2025/Jun/6/six-months-in-llms/
704•swyx•16h ago•186 comments

Generating Pixels One by One

https://tunahansalih.github.io/blog/autoregressive-vision-generation-part-1/
11•cyruseption•3d ago•0 comments

Efficient mRNA delivery to resting T cells to reverse HIV latency

https://www.nature.com/articles/s41467-025-60001-2
71•matthewmacleod•3d ago•13 comments

Joining Apple Computer (2018)

https://www.folklore.org/Joining_Apple_Computer.html
389•tosh•1d ago•112 comments

Launching the BeOS on Hitachi Flora Prius Systems (1999)

http://testou.free.fr/www.beatjapan.org/mirror/www.be.com/support/guides/hitachi_boot.html
37•doener•9h ago•13 comments

Self-Host and Tech Independence: The Joy of Building Your Own

https://www.ssp.sh/blog/self-host-self-independence/
409•articsputnik•1d ago•196 comments

<Blink> and <Marquee> (2020)

https://danq.me/2020/11/11/blink-and-marquee/
193•ghssds•20h ago•156 comments

Coventry Very Light Rail

https://www.coventry.gov.uk/coventry-light-rail
183•Kaibeezy•1d ago•246 comments

Focus and Context and LLMs

https://taras.glek.net/posts/focus-and-context-and-llms/
65•tarasglek•15h ago•27 comments

Building an AI server on a budget

https://www.informationga.in/blog/building-an-ai-server-on-a-budget
71•mful•2d ago•42 comments

Field Notes from Shipping Real Code with Claude

https://diwank.space/field-notes-from-shipping-real-code-with-claude
273•diwank•1d ago•79 comments

Ask HN: How to learn CUDA to professional level

191•upmind•13h ago•66 comments