
The Future of Systems

https://novlabs.ai/mission/
1•tekbog•21s ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•4m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
1•throwaw12•6m ago•1 comments

MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•6m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•7m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•9m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•12m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
1•andreabat•15m ago•0 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
1•mgh2•21m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•23m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•28m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•29m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•30m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•32m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•34m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•36m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•37m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•40m ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•41m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•44m ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•45m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•45m ago•2 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•47m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•50m ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•55m ago•1 comments

Internationalization and Localization in the Age of Agents

https://myblog.ru/internationalization-and-localization-in-the-age-of-agents
1•xenator•55m ago•0 comments

Building a Custom Clawdbot Workflow to Automate Website Creation

https://seedance2api.org/
1•pekingzcc•58m ago•1 comments

Why the "Taiwan Dome" won't survive a Chinese attack

https://www.lowyinstitute.org/the-interpreter/why-taiwan-dome-won-t-survive-chinese-attack
2•ryan_j_naughton•58m ago•0 comments

Xkcd: Game AIs

https://xkcd.com/1002/
2•ravenical•1h ago•0 comments

Windows 11 is finally killing off legacy printer drivers in 2026

https://www.windowscentral.com/microsoft/windows-11/windows-11-finally-pulls-the-plug-on-legacy-p...
2•ValdikSS•1h ago•0 comments

Virtualizing Nvidia HGX B200 GPUs with Open Source

https://www.ubicloud.com/blog/virtualizing-nvidia-hgx-b200-gpus-with-open-source
116•ben_s•1mo ago

Comments

ben_s•1mo ago
(author of the blog post here)

For me, the hardest part was virtualizing GPUs with NVLink in the mix: it complicates isolation while you're trying to preserve performance.

AMA if you want to dig into any of the details.

checker659•1mo ago
Isn't SR-IOV a thing with these big GPUs? Or, is it that you're not concerned with fractional granularity?
ben_s•1mo ago
In this article, we're primarily concerned with whole-GPU or multi-GPU partitions that preserve NVLink bandwidth, rather than finer-grained fractional sharing of a single GPU.
spwa4•1mo ago
Would it be possible to implement "virtual memory" for a GPU this way? Let's say you have GPUs at 30% utilization, but memory limited. Could you run 2 workloads by offloading the GPU memory when not in use?
ben_s•1mo ago
Once you oversubscribe GPU memory, performance usually collapses. Frameworks like vLLM can explicitly offload things like the KV cache to CPU memory, but that's an application-level tradeoff, not transparent GPU virtual memory.
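To make the application-level option concrete, a minimal sketch, assuming a recent vLLM build where LLM() accepts swap_space (CPU swap for preempted KV-cache blocks) and cpu_offload_gb (weights spilled to CPU memory); parameter names and behavior vary across vLLM versions, and the model name is only a placeholder:

    # Hedged sketch: explicit CPU offload in vLLM, an application-level tradeoff,
    # not transparent GPU virtual memory. Check your vLLM version for these options.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
        gpu_memory_utilization=0.45,  # leave headroom so two such workloads can share a GPU
        swap_space=16,                # GiB of CPU RAM for preempted KV-cache blocks
        cpu_offload_gb=8,             # GiB of weights kept in CPU memory, paged in on demand
    )

    print(llm.generate(["Hello"], SamplingParams(max_tokens=8))[0].outputs[0].text)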
girfan•1mo ago
Cool post. Have you looked at slicing a single GPU up for multiple VMs? Is there anything other than MIG that you have come across to partition SMs and memory bandwidth within a single GPU?
ben_s•1mo ago
Thanks! I haven't looked deeply into slicing up a single GPU. My understanding is that vGPU (which we briefly mention in the post) can partition memory but time-shares compute, while MIG is the only mechanism that provides partitioning of both SMs and memory bandwidth within a single GPU.
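For concreteness, a hedged sketch (not from the post) of what MIG partitioning looks like operationally, driven through the nvidia-smi CLI; the profile IDs are illustrative and depend on the GPU, so list the supported ones first:

    # Hedged sketch: carve one GPU into MIG instances that partition both SMs and
    # memory, via nvidia-smi. Profile IDs below are illustrative examples.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["nvidia-smi", "-i", "0", "-mig", "1"])                 # enable MIG mode on GPU 0 (may need a reset)
    run(["nvidia-smi", "mig", "-lgip"])                         # list GPU instance profiles this GPU supports
    run(["nvidia-smi", "mig", "-i", "0", "-cgi", "9,9", "-C"])  # create two instances (IDs are examples) + compute instances
    run(["nvidia-smi", "mig", "-lgi"])                          # confirm the resulting GPU instances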
namibj•1mo ago
Last I checked, MIG was the only one that made hard promises, especially about memory bandwidth; as long as your memory access patterns aren't secret and you trust the other guests not to be highly unfriendly with their cache usage, you should be able to get away with much less strict isolation. Think Docker vs. VMs with dedicated cores.

But I thought MIG did the job of chopping a GPU that's too big for most individual users into something that behaves very close to a literal array of smaller GPUs stuffed into the same PCIe card form factor? Think of how a Tesla K80 was pretty much just two GK210 "GPUs" on a PLX "PCIe switch" connecting them to each other and to the host. Obviously trivial to give one to each of two VMs (at least if the PLX didn't interfere with IOMMU separation or such); for mere performance isolation it certainly sufficed, once you block a heavy user from power-budget throttling the sibling.

tptacek•1mo ago
Can you pass a MIG device into a KVM VM? The team we worked with didn't believe it was possible (they suggested we switch to VMWare); the MIG system interface gives you a UUID, not a PCI BDF.
moondev•1mo ago
KubeVirt has some examples of passing a vGPU into KVM

https://kubevirt.io/user-guide/compute/host-devices/

tptacek•1mo ago
Right, vGPUs are explicitly set up to generate BDF addresses that can be passed through (but require host driver support; they're essentially paravirtualized). I'm asking about MIG.
namibj•1mo ago
https://docs.nvidia.com/datacenter/tesla/mig-user-guide/supp... says GPU passthrough is supported on MIG...
my123•1mo ago
There's a MIG vGPU mode usable for this
tptacek•1mo ago
Have you used it? How does it work? How do you drive it? We tried a lot of different things. Is it not paravirtualized, the way vGPUs are?
my123•1mo ago
It works with SR-IOV instead of mdev afaik

Still needs some host SW to drive it but actually does static partitioning

IIRC it's usable through the MIG-marked vGPU types

otterley•1mo ago
Are Nvidia’s Fabric Manager and other control plane software Open Source? If so, that’s news to me. It’s not clear that anything in this article relates to Open Source at all; publishing how to do VM management doesn’t qualify. Maybe “open kimono.”

Also, how strong are the security boundaries among multiple tenants when configured in this way? I know, for example, that AWS is extremely careful about how hardware resources are shared across tenants of a physical host to prevent cross-tenant data leakage.

ben_s•1mo ago
Fabric Manager itself is not open source. It's NVIDIA-provided software, and today it's required to bring up and manage the NVLink/NVSwitch fabric on HGX systems. What we meant by "open" is that everything around it - the hypervisor, our control plane logic, partition selection, host configuration, etc. - is implemented in the open and available in our repos. You're right that this isn't a fully open GPU stack.

On isolation: in Shared NVSwitch Multitenancy mode, isolation is enforced at multiple layers. Fabric Manager programs the NVSwitch routing tables so GPUs in different partitions cannot exchange NVLink traffic, and each VM receives exclusive ownership of its assigned GPUs via VFIO passthrough. Large providers apply additional hardening and operational controls beyond what we describe here. We're not claiming this is equivalent to AWS's internal threat model, but it does rely on NVIDIA's documented isolation mechanisms.
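For readers unfamiliar with the VFIO side, a minimal sketch (not the article's actual invocation) of handing one GPU, already bound to vfio-pci on the host, to a QEMU/KVM guest; the BDF and disk image are placeholders:

    # Hedged sketch: give a guest exclusive ownership of one GPU via VFIO passthrough.
    # Assumes the host has the IOMMU enabled and the device already bound to vfio-pci.
    import subprocess

    gpu_bdf = "0000:17:00.0"   # placeholder BDF for a GPU in the tenant's partition

    subprocess.run([
        "qemu-system-x86_64",
        "-machine", "q35,accel=kvm",
        "-cpu", "host",
        "-m", "64G",
        "-device", f"vfio-pci,host={gpu_bdf}",   # the GPU is owned by this VM alone
        "-drive", "file=guest.img,format=qcow2,if=virtio",  # placeholder disk image
        "-nographic",
    ], check=True)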

mindcrash•1mo ago
In case all of this sounds interesting:

After skimming the article I noticed that a large chunk of it (specifically the bits on detaching/attaching drivers, qemu and vfio) applies more or less to general GPU virtualization under Linux too!

1) Replace any "nvidia" with "amdgpu" for Team Red based setups when needed

2) The PCI ids are all different, so you'll have to look them up with lspci yourself

3) Note that with consumer GPUs you need to detach and attach a pair of devices (GPU video and GPU audio); otherwise things might get a bit wonky (a sketch of (2) and (3) follows below)
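A hedged sketch of points (2) and (3) on Linux, using the sysfs driver_override mechanism; the BDFs are examples, not values from the article:

    # Hedged sketch: detach a consumer GPU's video + audio functions from their host
    # drivers and rebind both to vfio-pci. BDFs are examples; find yours with
    # `lspci -nn | grep -iE 'vga|audio'`.
    from pathlib import Path

    PAIR = ["0000:0b:00.0", "0000:0b:00.1"]   # example GPU video function + its audio function

    def rebind_to_vfio(bdf: str) -> None:
        dev = Path("/sys/bus/pci/devices") / bdf
        # Force the next probe to pick vfio-pci regardless of vendor/device ID.
        (dev / "driver_override").write_text("vfio-pci")
        # Detach from the current driver (amdgpu/nvidia/snd_hda_intel), if bound.
        if (dev / "driver").exists():
            (dev / "driver" / "unbind").write_text(bdf)
        # Ask the PCI core to reprobe; vfio-pci now claims the device.
        Path("/sys/bus/pci/drivers_probe").write_text(bdf)

    for bdf in PAIR:   # always move the pair together to avoid a half-detached card
        rebind_to_vfio(bdf)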

ben_s•1mo ago
Thanks for the comment! You're right that a lot of the mechanics apply more generally. On point (3) specifically: we handle this by allocating at the IOMMU-group level rather than individual devices. Our allocator selects an IOMMU group and passes through all devices in that group (e.g., GPU video + audio), which avoids the partial-passthrough wonkiness you mentioned. For reference: https://github.com/ubicloud/ubicloud/blob/main/scheduling/al...
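For illustration only (this is not Ubicloud's allocator, which lives at the linked path), a small sketch of enumerating every device in one IOMMU group from sysfs, so they can be passed through together:

    # Hedged sketch: IOMMU-group-level allocation. Given one BDF, return every device
    # that must be passed through along with it (its whole IOMMU group).
    from pathlib import Path

    def iommu_group_devices(bdf: str) -> list[str]:
        group = Path("/sys/bus/pci/devices") / bdf / "iommu_group" / "devices"
        return sorted(p.name for p in group.iterdir())

    # Example BDF: a GPU whose group also contains its audio function.
    print(iommu_group_devices("0000:0b:00.0"))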
moondev•1mo ago
In Shared NVSwitch Multitenancy Mode, are there any considerations for leveraging InfiniBand devices inside each VM at full performance?
ben_s•1mo ago
We haven't looked deeply at inter-machine communication yet. NVLink/NVSwitch (which this post focuses on) are intra-node, so InfiniBand is mostly orthogonal I think and comes down to NIC passthrough, NUMA/PCIe placement, and validating RDMA inside the VM.
tptacek•1mo ago
Did you ever manage to get vGPUs working in any other hardware configuration? I know it's not what Hx00 customers want. I bloodied my forehead on that for a month or two with Cloud Hypervisor --- I got to the "light reverse engineering of drivers" stage before walking away.
ben_s•1mo ago
We didn't focus on vGPU and largely avoided it on purpose. Instead, we focused on whole-GPU and NVSwitch-partitioned passthrough (Shared NVSwitch Multitenancy Mode), which is a better fit for the workloads we care about.
tryauuum•1mo ago
can someone explain to me like I'm 10 what a BAR is?

Like, it says something about mmapping 256 GB per GPU. But wouldn't that waste 2T of RAM? or do I fail in my understanding of what "mmap" is as well..

EDIT: yes, seems like my understanding of mmap wasn't good; it wastes not RAM but address space

convolvatron•1mo ago
Base Address Register

this term can be used at a couple different points (including mappings from physical addresses to physical hardware in the memory network), but a PCI BAR is a register in the configuration space that tells the card what PCI host addresses map to internal memory regions in the card. one BAR per region.

the PCI BARs are usually configured by the driver after allocating some address space from the kernel.

DRAM BARs in the switching network are generally configured by something running at the BIOS level, based on probes of memory controllers and I2C reads from the DIMMs to find out capacity.
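To tie the BAR and mmap points together: each line of a PCI device's sysfs resource file describes one region (start, end, flags), and mapping a huge BAR into a process consumes virtual address space, not RAM, until pages are actually touched. A small sketch, with an example BDF:

    # Hedged sketch: print the size of each PCI BAR of a device from sysfs.
    # A 256 GB region here means 256 GB of address space decoded by the card,
    # not 256 GB of host RAM.
    from pathlib import Path

    bdf = "0000:17:00.0"   # example BDF, substitute your GPU's
    resource = Path(f"/sys/bus/pci/devices/{bdf}/resource").read_text().splitlines()
    for i, line in enumerate(resource):
        start, end, flags = (int(x, 16) for x in line.split())
        if end:   # unused regions are all zeros
            print(f"region {i}: {(end - start + 1) / 2**30:.1f} GiB, flags {flags:#x}")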

ckastner•1mo ago
A lot of this coincides with my own experiments passing consumer AMD GPUs through into VMs [1], a setup the Debian ROCm Team uses in their CI.

The Debian package rocm-qemu-support ships scripts that facilitate most of this. I've since generalized this by adding NVIDIA support, but I haven't uploaded the new gpuisol-qemu package [2] to the official Archive yet. It still needs some polishing.

Just dumping this here to add more references (especially the further reading section; the Gentoo and Arch wikis had a lot of helpful data).

[1]: https://salsa.debian.org/rocm-team/community/team-project/-/...

[2]: https://salsa.debian.org/ckk/gpu-isolation-tools

latchkey•1mo ago
A couple of relevant open issues here:

https://github.com/amd/MxGPU-Virtualization/issues/6

https://github.com/amd/MxGPU-Virtualization/issues/16

ckastner•1mo ago
Coincidentally, the first issue (referencing Navi 21) was the one I started these experiments with, and this turned out to be pretty informative.

Our Navi 21 would almost always go AWOL after a test run had been completed, requiring a full reboot. At some point, I noticed that this only happened when our test runner was driving the test; I never had an issue when testing interactively. I eventually realized that our test driver was simply killing the VM when the test was done, which is fine for a CPU-based test, but this messed with the GPU's state. When working interactively, I was always shutting down the host cleanly, which apparently resolved this. A patch to our test runner to cleanly shut down VMs fixed this.

And I've had no luck with iGPUs, as referenced by the second issue.

From what I understand, I don't think that consumer AMD GPUs can/will ever be fully supported, because the GPU reset mechanisms of older cards are so complex. That's why things like vendor-reset [3] exist, which apparently duplicate a lot of the in-kernel driver code but ultimately only twiddle some bits.

[3]: https://github.com/gnif/vendor-reset