
Show HN: Llama 3.1 70B on a single RTX 3090 via NVMe-to-GPU bypassing the CPU

https://github.com/xaskasdf/ntransformer
67•xaskasdf•3h ago
Hi everyone, I'm kind of involved in retrogaming, and during some experiments I ran into the following question: "Would it be possible to run transformer models bypassing the CPU/RAM, connecting the GPU directly to the NVMe?"

This is the result of that question and some weekend vibecoding (the linked library repository is in the README as well). It seems to work, even on consumer GPUs, though it should work better on professional ones.
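
For anyone who wants to poke at the general NVMe-to-GPU idea without the custom driver, NVIDIA's GPUDirect Storage exposes a similar path through the kvikio Python bindings; the sketch below is only an analogous illustration, not this repo's code, and the file name and tensor shape are made up.

    # Sketch of the general NVMe -> GPU idea using NVIDIA GPUDirect Storage via
    # the kvikio bindings. This is NOT the code path in this repo (ntransformer
    # ships its own NVMe driver); the file name and tensor shape are made up.
    import cupy as cp
    import kvikio

    shard = cp.empty((4096, 4096), dtype=cp.float16)  # destination buffer in VRAM

    f = kvikio.CuFile("layer_00_weights.bin", "r")    # hypothetical weight shard on NVMe
    nbytes = f.read(shard)                            # DMA toward device memory; kvikio falls back
    f.close()                                         # to a host bounce buffer if GDS is unavailable

    print(f"read {nbytes / 1e6:.1f} MB into GPU memory")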

Comments

randomtoast•1h ago
0.2 tok/s is fine for experimentation, but it is not interactive in any meaningful sense. For many use cases, a well-quantized 8B or 13B that stays resident will simply deliver a better latency-quality tradeoff.
tyfon•1h ago
I didn't really understand the performance table until I saw the top ones were 8B models.

But 5 seconds/token is quite slow, yeah. I guess this is for low-RAM machines? I'm pretty sure my 5950X with 128 GB RAM can run this faster on the CPU with some layers/prefill on the 3060 GPU I have.

I also see that they claim the process is compute-bound at 2 seconds/token, but that doesn't seem correct with a 3090?

tgrowazay•41m ago
LLM speed is roughly <memory_bandwidth> / <model_size> tok/s.

DDR4 tops out at about 27 GB/s.

DDR5 can do around 40 GB/s.

So for a 70B model at 8-bit quant, you will get around 0.3-0.5 tokens per second using RAM alone.
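
Worked through in a few lines (a rough sketch using the bandwidth figures quoted above, with the 3090's ~936 GB/s VRAM added for comparison), the formula lands in the same ballpark:

    # Rough decode-speed estimate: each generated token has to stream the
    # (active) weights through once, so tok/s ~= memory_bandwidth / weight_bytes.
    def tokens_per_second(bandwidth_gb_s, params_b, bytes_per_param):
        model_gb = params_b * bytes_per_param      # weight bytes read per token
        return bandwidth_gb_s / model_gb

    for name, bw in [("DDR4 ~27 GB/s", 27), ("DDR5 ~40 GB/s", 40),
                     ("RTX 3090 VRAM ~936 GB/s", 936)]:
        print(f"{name}: {tokens_per_second(bw, 70, 1.0):.2f} tok/s for 70B at 8-bit")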

vlovich123•28m ago
Faster than the 0.2 tok/s this approach manages.
someguy2026•23m ago
DRAM speed is one thing, but you should also account for the data rate of the PCIe bus (and/or VRAM speed). But yes, holding it "lukewarm" in DRAM rather than on NVMe storage is obviously faster.
zozbot234•12m ago
Should be active param size, not model size.
Wuzado•43m ago
I can imagine a couple of scenarios in which a high-quality, large model would be much preferred over lower-latency models, primarily when you need the quality.
throwaway2027•1h ago
Didn't DirectX add an API for loading assets directly to GPU memory? Would that work?
someguy2026•1h ago
My impression is that that is limited to assets and really needs to fit into the DirectX framework. From what I can tell, the gpu-nvme-direct library is mostly similar to https://github.com/enfiskutensykkel/ssd-gpu-dma and https://github.com/ZaidQureshi/bam
jauntywundrkind•1h ago
Could be neat to see what giving the 8B like 6 GB instead of 10 GB would do. Something in between, where you still need NVMe, but not like the 3x ratio of the 70B model on 23 GB.

Nice work. PCIe P2P (GPUDirect (tm)) is such great stuff. Cool to see!

rl3•58m ago
Nice. I've been looking at doing something similar, more on the order of running a 1T model with less than half the available VRAM.

One workup indicated it was theoretically possible to modify a piece of SGLang's routing layer to support JIT predict-ahead expert swaps from Gen5 NVMe storage straight into GPU memory.

I'm hoping that proves true. The setup relies on NVIDIA Dynamo, so NIXL primitives are available to support that.

Curious if anyone's tried this already.
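
To make the predict-ahead idea concrete, here is a hedged sketch of the overlap pattern; none of it is SGLang, Dynamo, or NIXL API, and every callable it takes (predict_next_experts, load_expert_to_gpu, run_layer) is a hypothetical placeholder:

    # Sketch of predict-ahead expert prefetching: while layer i computes, start
    # loading the experts we *guess* layer i+1 will route to, so NVMe reads
    # overlap with GPU compute. All callables here are hypothetical placeholders.
    from concurrent.futures import ThreadPoolExecutor

    def run_model(hidden, layers, predict_next_experts, load_expert_to_gpu, run_layer):
        pool = ThreadPoolExecutor(max_workers=2)
        pending = []                              # in-flight loads for the current layer
        for i, layer in enumerate(layers):
            for fut in pending:                   # experts prefetched for *this* layer
                fut.result()                      # must be resident before we compute
            pending = []
            if i + 1 < len(layers):               # kick off loads for the next layer's guesses
                guesses = predict_next_experts(hidden, i + 1)
                pending = [pool.submit(load_expert_to_gpu, i + 1, e) for e in guesses]
            hidden = run_layer(layer, hidden)     # compute overlaps with the prefetch above
        pool.shutdown()
        return hidden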

Wuzado•33m ago
I wonder: could this be used for a multi-tier MoE? E.g. active + most-used experts in VRAM, often-used ones in RAM, and less-used ones on NVMe?
rao-v•29m ago
Yeah, I’ve often wondered why folks aren’t training two-tier MoEs for VRAM + RAM. We already have designs for shared experts, so it cannot be hard to implement a router that allocates 10x or 100x as often to “core” experts vs the “nice to have” experts. I suppose balancing during training is tricky, but some sort of custom loss on the router layers should work.

I’ve also wondered why the routers aren’t trained to be serially consistent, so you can predict layers to swap into VRAM a few layers ahead to maximize available bandwidth.
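
A crude version of such a router doesn't even need a custom loss; a fixed logit bonus for the “core” tier already skews selection roughly 10x. A minimal sketch, with made-up sizes and bonus value, not taken from any released model:

    # Two-tier MoE router sketch: a fixed logit bonus on the first n_core experts
    # makes them win top-k selection roughly 10x more often, so the "core" tier
    # can stay resident in VRAM while the long tail lives in RAM/NVMe.
    import math
    import torch
    import torch.nn as nn

    class TwoTierRouter(nn.Module):
        def __init__(self, d_model=1024, n_core=8, n_tail=56, top_k=2,
                     core_bonus=math.log(10.0)):
            super().__init__()
            self.gate = nn.Linear(d_model, n_core + n_tail, bias=False)
            bias = torch.zeros(n_core + n_tail)
            bias[:n_core] = core_bonus        # +log(10) logits ~= 10x higher selection odds
            self.register_buffer("tier_bias", bias)
            self.top_k = top_k

        def forward(self, x):                 # x: (tokens, d_model)
            probs = torch.softmax(self.gate(x) + self.tier_bias, dim=-1)
            weights, experts = torch.topk(probs, self.top_k, dim=-1)
            return weights, experts

    router = TwoTierRouter()
    _, experts = router(torch.randn(4, 1024))
    print(experts)                            # mostly ids < 8, i.e. the "core" tier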

hedgehog•19m ago
I don't have links handy but there is active research in this area.
svnt•18m ago
Maybe I am misunderstanding something but:

1) This is basically the intention of several recent MoE models: keep particular generally useful experts hot in VRAM.

2) Unless you can swap layers in faster than you consume them, there is no point to predicting layers (what does this even really mean? Did you mean predicting experts?).

It seems at the moment the best you can do is keep experts and layers more likely to be used for a given query in VRAM and offload the rest, but this is workload-dependent.

reitzensteinm•14m ago
I think part of the issue is that in production deployments, you're batching high enough that you'll be paging in those long tail experts constantly.

Unless you're handling that in some kind of fancy way, you'll be holding up the batch while waiting for host memory, which will kill your throughput.

It makes much more sense for non-batched local inference, especially if you can keep the MoE routing stable like you say, but most folks aren't optimising for that.

zozbot234•8m ago
Ideally, you should rearrange batches so that inference steps that rely on the same experts get batched together, then inferences that would "hold up" a batch simply wait for that one "long tail" expert to be loaded, whereupon they can progress. This might require checkpointing partial inference steps more often, but that ought to be doable.
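
Roughly what that rearrangement could look like as a scheduling step (a sketch; the request tuples and resident-expert set are made-up stand-ins for a real scheduler's state):

    # Sketch of expert-affinity batching: bucket the pending inference steps by
    # the expert they were routed to, run the buckets whose expert is already in
    # VRAM, and let the rest wait on a single long-tail load instead of stalling
    # everyone. Inputs are made-up stand-ins for real scheduler state.
    from collections import defaultdict

    def schedule(requests, resident_experts):
        # requests: list of (request_id, expert_id) for each sequence's next step
        buckets = defaultdict(list)
        for req_id, expert in requests:
            buckets[expert].append(req_id)
        ready = {e: ids for e, ids in buckets.items() if e in resident_experts}
        waiting = {e: ids for e, ids in buckets.items() if e not in resident_experts}
        return ready, waiting                 # run `ready` now, prefetch `waiting` experts

    ready, waiting = schedule([(0, 3), (1, 3), (2, 17), (3, 3)], resident_experts={3})
    print(ready)    # {3: [0, 1, 3]} -- batched together on the hot expert
    print(waiting)  # {17: [2]}      -- waits for one cold-expert load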

EDuke32 – Duke Nukem 3D (Open-Source)

https://www.eduke32.com/
134•reconnecting•4h ago•51 comments

Why is Claude an Electron app?

https://www.dbreunig.com/2026/02/21/why-is-claude-an-electron-app.html
291•dbreunig•3h ago•227 comments

Show HN: Llama 3.1 70B on a single RTX 3090 via NVMe-to-GPU bypassing the CPU

https://github.com/xaskasdf/ntransformer
67•xaskasdf•3h ago•17 comments

Evidence of the bouba-kiki effect in naïve baby chicks

https://www.science.org/doi/10.1126/science.adq7188
49•suddenlybananas•2h ago•12 comments

Parse, Don't Validate and Type-Driven Design in Rust

https://www.harudagondi.space/blog/parse-dont-validate-and-type-driven-design-in-rust/
110•todsacerdoti•5h ago•36 comments

Are compilers deterministic?

https://blog.onepatchdown.net/2026/02/22/are-compilers-deterministic-nerd-version/
4•fragmede•20m ago•0 comments

I verified my LinkedIn identity. Here's what I handed over

https://thelocalstack.eu/posts/linkedin-identity-verification-privacy/
1146•ColinWright•17h ago•405 comments

zclaw: personal AI assistant in under 888 KB, running on an ESP32

https://github.com/tnm/zclaw
80•tosh•12h ago•47 comments

How far back in time can you understand English?

https://www.deadlanguagesociety.com/p/how-far-back-in-time-understand-english
330•spzb•3d ago•198 comments

Happy Zelda's 40th: First LLM running on N64 hardware (4MB RAM, 93MHz)

https://github.com/sophiaeagent-beep/n64llm-legend-of-Elya
22•AutoJanitor•3h ago•5 comments

Who's liable when your AI agent burns down production?

https://reading.sh/whos-liable-when-your-ai-agent-burns-down-production-039193d82746?sk=4921ed2db...
5•zenoware•49m ago•1 comment

Toyota Mirai hydrogen car depreciation: 65% value loss in a year

https://carbuzz.com/toyota-mirai-massive-depreciation-one-year/
83•iancmceachern•6h ago•204 comments

CXMT has been offering DDR4 chips at about half the prevailing market rate

https://www.koreaherald.com/article/10679206
143•phront•10h ago•109 comments

Canvas_ity: A tiny, single-header <canvas>-like 2D rasterizer for C++

https://github.com/a-e-k/canvas_ity
49•PaulHoule•5h ago•19 comments

Claws are now a new layer on top of LLM agents

https://twitter.com/karpathy/status/2024987174077432126
168•Cyphase•23h ago•611 comments

What not to write on your security clearance form (1988)

https://milk.com/wall-o-shame/security_clearance.html
362•wizardforhire•7h ago•155 comments

Finding forall-exists Hyperbugs using Symbolic Execution

https://dl.acm.org/doi/full/10.1145/3689761
10•todsacerdoti•4d ago•0 comments

Inputlag.science – Repository of knowledge about input lag in gaming

https://inputlag.science
55•akyuu•4h ago•11 comments

I Don't Like Magic

https://adactio.com/journal/22399
99•edent•3d ago•83 comments

Acme Weather

https://acmeweather.com/blog/introducing-acme-weather
179•cryptoz•17h ago•116 comments

Online Pebble Development

https://cloudpebble.repebble.com/
12•teekert•3h ago•6 comments

Declarative, Inquisitive, then Imperative (2017) [pdf]

https://www.forth.org/svfig/kk/11-2017-Falvo.pdf
3•tosh•4d ago•0 comments

Permacomputing

https://wiki.xxiivv.com/site/permacomputing.html
83•tosh•4d ago•21 comments

Be wary of Bluesky

https://kevinak.se/blog/be-wary-of-bluesky
230•kevinak•1d ago•166 comments

Personal Statement of a CIA Analyst

https://antipolygraph.org/statements/statement-038.shtml
132•grubbs•6h ago•75 comments

Cloudflare outage on February 20, 2026

https://blog.cloudflare.com/cloudflare-outage-february-20-2026/
146•nomaxx117•5h ago•97 comments

Padlet (YC W13) Is Hiring in San Francisco and Singapore

https://padlet.jobs
1•coffeebite•12h ago

MeshTNC is a tool for turning consumer grade LoRa radios into KISS TNC compatible

https://github.com/datapartyjs/MeshTNC
19•todsacerdoti•4h ago•5 comments

Uncovering insiders and alpha on Polymarket with AI

https://twitter.com/peterjliu/status/2024901585806225723
121•somerandomness•1d ago•117 comments

AI uBlock Blacklist

https://github.com/alvi-se/ai-ublock-blacklist
215•rdmuser•16h ago•95 comments