
Git commands I run before reading any code

https://piechowski.io/post/git-commands-before-reading-code/
1002•grepsedawk•7h ago•213 comments

MegaTrain: Full Precision Training of 100B+ Parameter LLMs on a Single GPU

https://arxiv.org/abs/2604.05091
137•chrsw•3h ago•30 comments

They're Made Out of Meat (1991)

http://www.terrybisson.com/theyre-made-out-of-meat-2/
132•surprisetalk•4h ago•47 comments

Veracrypt project update

https://sourceforge.net/p/veracrypt/discussion/general/thread/9620d7a4b3/
756•super256•8h ago•265 comments

Škoda DuoBell: A bicycle bell that penetrates noise-cancelling headphones

https://www.skoda-storyboard.com/en/skoda-world/skoda-duobell-a-bicycle-bell-that-outsmarts-even-...
300•ra•7h ago•387 comments

Show HN: Explore the Silk Roads through an interactive map

https://www.intofarlands.com/silk-roads-map
20•intofarlands•1h ago•1 comments

The Future of Everything Is Lies, I Guess

https://aphyr.com/posts/411-the-future-of-everything-is-lies-i-guess
48•pabs3•2h ago•13 comments

US cities are axing Flock Safety surveillance technology

https://www.cnet.com/home/security/when-flock-comes-to-town-why-cities-are-axing-the-controversia...
303•giuliomagnifico•3h ago•145 comments

Audio Reactive LED Strips Are Diabolically Hard

https://scottlawsonbc.com/post/audio-led
117•surprisetalk•1d ago•34 comments

I Ported Mac OS X to the Nintendo Wii

https://bryankeller.github.io/2026/04/08/porting-mac-os-x-nintendo-wii.html
10•blkhp19•24m ago•0 comments

Show HN: Go-Bt: Minimalist Behavior Trees for Go

https://github.com/rvitorper/go-bt
13•rvitorper•1h ago•1 comments

Project Glasswing: Securing critical software for the AI era

https://www.anthropic.com/glasswing
1431•Ryan5453•21h ago•742 comments

Revision Demoparty 2026: Razor1911 [video]

https://www.youtube.com/watch?v=Lw4W9V57SKs&t=5716s
281•tetrisgm•10h ago•94 comments

Microsoft Abruptly Terminates VeraCrypt Account, Halting Windows Updates

https://www.404media.co/microsoft-abruptly-terminates-veracrypt-account-halting-windows-updates/
63•donohoe•1h ago•11 comments

Lunar Flyby

https://www.nasa.gov/gallery/lunar-flyby/
881•kipi•1d ago•215 comments

Teardown of unreleased LG Rollable shows why rollable phones aren't a thing

https://arstechnica.com/gadgets/2026/04/teardown-of-unreleased-lg-rollable-shows-why-rollable-pho...
14•DamnInteresting•1d ago•9 comments

Your File System Is Already A Graph Database

https://rumproarious.com/2026/04/04/your-file-system-is-already-a-graph-database/
97•alxndr•2d ago•46 comments

Show HN: We built a camera only robot vacuum for less than 300$ (Well almost)

https://indraneelpatil.github.io/blog/2026/robot-vacuum/
76•indraneelpatil•2d ago•33 comments

Protect your shed

https://dylanbutler.dev/blog/protect-your-shed/
249•baely•13h ago•66 comments

System Card: Claude Mythos Preview [pdf]

https://www-cdn.anthropic.com/53566bf5440a10affd749724787c8913a2ae0841.pdf
783•be7a•21h ago•583 comments

LLM plays an 8-bit Commander X16 game using structured "smart senses"

https://pvp-ai.russell-harper.com
10•russellharper•3h ago•0 comments

Virtual Mars Traverse: Every inch of Curiosity rover's path since 2012 landing

https://www.rovers.land/
10•bookofjoe•3d ago•1 comments

Mario and Earendil

https://lucumr.pocoo.org/2026/4/8/mario-and-earendil/
45•doppp•6h ago•19 comments

Show HN: I pipe free sports streams into Jellyfin – no ads, just HLS

https://github.com/pcruz1905/hls-restream-proxy
40•pruz•3h ago•8 comments

GLM-5.1: Towards Long-Horizon Tasks

https://z.ai/blog/glm-5.1
585•zixuanlimit•23h ago•239 comments

Show HN: BAREmail ʕ·ᴥ·ʔ – minimalist Gmail client for bad WiFi

https://github.com/matt-virgo/baremail
6•Virgo_matt•1h ago•1 comments

How to get better at guitar

https://www.jakeworth.com/posts/how-to-get-better-at-guitar/
427•jwworth•2d ago•218 comments

Cambodia unveils statue to honour famous landmine-sniffing rat

https://www.bbc.com/news/articles/c0rx7xzd10xo
461•speckx•22h ago•107 comments

Native Americans had dice 12k years ago

https://www.nbcnews.com/science/science-news/native-americans-dice-games-probability-study-rcna26...
115•delichon•4d ago•53 comments

Slightly safer vibecoding by adopting old hacker habits

http://addxorrol.blogspot.com/2026/03/slightly-safer-vibecoding-by-adopting.html
152•transpute•5d ago•82 comments

LLM-D: Kubernetes-Native Distributed Inference

https://llm-d.ai/blog/llm-d-announce
120•smarterclayton•10mo ago

Comments

anttiharju•10mo ago
I wonder if this is preferable to kServe
smarterclayton•10mo ago
llm-d would make sense if you are running a very large production LLM serving setup - say 5+ full H100 hosts. The aim is to be much more focused than kserve is on exactly the needs of serving LLMs. It would of course be possible to run alongside kserve, but the user we are targeting is not typically a kserve deployer today.
anttiharju•10mo ago
Do you think https://github.com/openai/CLIP can be run on it? LLM makes me think of chatbots, but I suppose because it's inference-based it would work. I'm somewhat unclear on the difference between LLMs and inference; I think inference is the type of compute LLMs use.

I wonder if inference-d would be a fitting name.

smarterclayton•10mo ago
Inference is the process of evaluating a model ("inferring" a response to the inputs). LLMs are uniquely difficult to serve because they push the limits on the hardware.

The models we support come from the model server vLLM https://docs.vllm.ai/en/latest/models/supported_models.html, which has a focus on large generative models. I don't see CLIP in the list.
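
To make that concrete, here is a minimal offline-inference sketch using vLLM directly (the model name and prompt are just illustrative, nothing llm-d specific): loading a model and sampling tokens from it is the "inference" being discussed.

    from vllm import LLM, SamplingParams

    # Load a (small, illustrative) generative model onto the local GPU.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=64)

    # Inference: feed prompts in, sample tokens out.
    for out in llm.generate(["Why is the sky blue?"], params):
        print(out.outputs[0].text)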

dzr0001•10mo ago
I did a quick scan of the repo and didn't see any reference to Ray. Would this indicate that llm-d lacks support for pipeline parallelism?
qntty•10mo ago
I believe this is a question you should ask about vLLM, not llm-d. It looks like vLLM does support pipeline parallelism via Ray: https://docs.vllm.ai/en/latest/serving/distributed_serving.h...

This project appears to make use of both vLLM and Inference Gateway (an official Kubernetes extension to the Gateway resource). The contribution of llm-d itself seems to mostly be a scheduling algorithm for load balancing across vLLM instances.
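
For reference, parallelism is configured at the vLLM layer rather than in llm-d. A rough sketch (the model name and sizes are illustrative, and multi-node pipeline parallelism additionally needs a Ray cluster per the vLLM distributed-serving docs):

    from vllm import LLM

    llm = LLM(
        model="meta-llama/Llama-3.1-70B-Instruct",
        tensor_parallel_size=4,    # shard each layer across 4 GPUs
        pipeline_parallel_size=2,  # split the layer stack into 2 stages
    )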

smarterclayton•10mo ago
We inherit any multi-host support from vLLM, so https://docs.vllm.ai/en/latest/serving/distributed_serving.h... would be the expected path.

We plan to publish examples of multi-host inference that leverage LeaderWorkerSets - https://github.com/kubernetes-sigs/lws - which helps run ranked serving workloads across hosts. LeaderWorkerSet is how Google supports both TPU and GPU multi-host deployments - see https://github.com/kubernetes-sigs/lws/blob/main/config/samp... for an example.

Edit: Here is an example Kubernetes configuration running DeepSeek-R1 on vLLM multi-host using LeaderWorkerSet https://github.com/kubernetes-sigs/wg-serving/blob/main/serv.... This work would be integrated into llm-d.
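
As a rough sketch of what the LeaderWorkerSet shape looks like when created from the official Kubernetes Python client (field names follow the lws project, but the spec here is heavily abbreviated and illustrative - see the linked samples for a real vLLM multi-host manifest):

    from kubernetes import client, config

    config.load_kube_config()

    lws = {
        "apiVersion": "leaderworkerset.x-k8s.io/v1",
        "kind": "LeaderWorkerSet",
        "metadata": {"name": "vllm-multihost"},
        "spec": {
            "replicas": 1,                  # one model replica...
            "leaderWorkerTemplate": {
                "size": 2,                  # ...spanning 2 ranked pods
                "workerTemplate": {
                    "spec": {
                        "containers": [
                            {"name": "vllm", "image": "vllm/vllm-openai:latest"}
                        ]
                    }
                },
            },
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="leaderworkerset.x-k8s.io",
        version="v1",
        namespace="default",
        plural="leaderworkersets",
        body=lws,
    )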

rdli•10mo ago
This is really interesting. For SOTA inference systems, I've seen two general approaches:

* The "stack-centric" approach such as vLLM production stack, AIBrix, etc. These set up an entire inference stack for you including KV cache, routing, etc.

* The "pipeline-centric" approach such as NVidia Dynamo, Ray, BentoML. These give you more of an SDK so you can define inference pipelines that you can then deploy on your specific hardware.

It seems like LLM-d is the former. Is that right? What prompted you to go down that direction, instead of the direction of Dynamo?

qntty•10mo ago
It sounds like you might be confusing different parts of the stack. NVIDIA Dynamo, for example, supports vLLM as the inference engine. I think you should think of something like vLLM as more akin to Gunicorn, and llm-d as an application load balancer. And I guess something like NVIDIA Dynamo would be like Django.
smarterclayton•10mo ago
llm-d is intended to be three clean layers:

1. Balance / schedule incoming requests to the right backend

2. Model server replicas that can run on multiple hardware topologies

3. Prefix caching hierarchy with well-tested variants for different use cases

So it's a 3-tier architecture. The biggest difference with Dynamo is that llm-d is using the inference gateway extension - https://github.com/kubernetes-sigs/gateway-api-inference-ext... - which brings Kubernetes owned APIs for managing model routing, request priority and flow control, LoRA support etc.
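
From a client's point of view, tier 1 is just an HTTP endpoint in front of OpenAI-compatible model servers; the gateway picks a vLLM replica and the request looks like any other completion call (the URL and model name below are placeholders, not llm-d defaults):

    import requests

    resp = requests.post(
        "http://inference-gateway.example.com/v1/completions",
        json={
            "model": "meta-llama/Llama-3.1-8B-Instruct",
            "prompt": "Hello",
            "max_tokens": 32,
        },
        timeout=30,
    )
    print(resp.json()["choices"][0]["text"])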

rdli•10mo ago
I would think that the NVidia Dynamo SDK (pipelines) is a big difference as well (https://github.com/ai-dynamo/dynamo/tree/main/deploy/sdk/doc...), or am I missing something?
smarterclayton•10mo ago
That's a good example - I can at least answer about why it's a difference: different target user.

As I understand the Dynamo SDK it is about simplifying and helping someone get started with Dynamo on Kubernetes.

From the user set we work with (large inference deployers) that is not a high priority - they already have mature deployment opinions or a set of tools that would not compose well with something like the Dynamo SDK. Their comfort level with Kubernetes is moderate to high - either they use Kubernetes for high scale training and batch, or they are deploying to many different providers in order to get enough capacity and need a standard orchestration solution.

llm-d focuses on helping achieve efficiency dynamically at runtime, based on changing traffic or workload on Kubernetes - some of the things the Dynamo SDK encodes are static and upfront and would conflict with that objective. Also, large serving deployers typically have significant batch and training workloads as well, and they are looking to maximize capacity use without impacting their prod serving. That requires the orchestrator to know about both workloads at some level - which the Dynamo SDK would make more difficult.

rdli•10mo ago
In this analogy, Dynamo is most definitely not like Django. It includes inference aware routing, KV caching, etc. -- all the stuff you would need to run a modern SOTA inference stack.
qntty•10mo ago
You're right, I was confusing TensorRT with Dynamo. It looks like the relationship between Dynamo and vLLM is actually the opposite of what I was thinking -- Dynamo can use vLLM as a backend rather than vice versa.
Kemschumam•10mo ago
What would be the benefit of this project over hosting VLLM in Ray?