frontpage.

Another GitHub outage in the same day

https://www.githubstatus.com/incidents/lcw3tg2f6zsd
91•Nezteb•1h ago•34 comments

Converting a $3.88 analog clock from Walmart into an ESP8266-based Wi-Fi clock

https://github.com/jim11662418/ESP8266_WiFi_Analog_Clock
265•tokyobreakfast•3h ago•95 comments

Discord will require a face scan or ID for full access next month

https://www.theverge.com/tech/875309/discord-age-verification-global-roll-out
495•x01•5h ago•482 comments

Luce: First Electric Ferrari. Designed by LoveFrom

https://www.ferrari.com/en-US/auto/ferrari-luce
36•kaizenb•58m ago•16 comments

Why is the sky blue?

https://explainers.blog/posts/why-is-the-sky-blue/
235•udit99•4h ago•78 comments

Hard-braking events as indicators of road segment crash risk

https://research.google/blog/hard-braking-events-as-indicators-of-road-segment-crash-risk/
92•aleyan•3h ago•135 comments

Game Boy Advance Audio Interpolation

https://jsgroth.dev/blog/posts/gba-audio-interpolation/
32•ibobev•2h ago•4 comments

UEFI Bindings for JavaScript

https://codeberg.org/smnx/promethee
158•ananas-dev•6h ago•79 comments

Sleeper Shells: Attackers Are Planting Dormant Backdoors in Ivanti EPMM

https://defusedcyber.com/ivanti-epmm-sleeper-shells-403jsp
95•waihtis•5h ago•33 comments

The Markets of Old London

https://spitalfieldslife.com/2024/06/20/the-markets-of-old-london-i/
26•zeristor•1h ago•4 comments

Information Is Beautiful

https://informationisbeautiful.net/
57•surprisetalk•5d ago•3 comments

MIT Living Wage Calculator

https://livingwage.mit.edu/
5•bear_with_me•26m ago•0 comments

Thoughts on Generating C

https://wingolog.org/archives/2026/02/09/six-thoughts-on-generating-c
159•ingve•6h ago•42 comments

The Traffic Mimes of Bogotá

https://www.atlasobscura.com/articles/traffic-mimes-of-colombia
59•IgorPartola•4d ago•13 comments

What's the Entropy of a Random Integer?

https://quomodocumque.wordpress.com/2026/02/03/whats-the-entropy-of-a-random-integer/
14•sebg•4d ago•0 comments

Show HN: Algorithmically finding the longest line of sight on Earth

https://alltheviews.world
318•tombh•10h ago•130 comments

Testing Ads in ChatGPT

https://openai.com/index/testing-ads-in-chatgpt/
111•davidbarker•1h ago•130 comments

Medieval Monks Wrote over Ancient Star Catalog – Particle Accel Reveals Original

https://www.smithsonianmag.com/smart-news/medieval-monks-wrote-over-a-copy-of-an-ancient-star-cat...
59•bookofjoe•5d ago•38 comments

Sandboxels

https://neal.fun/sandboxels/
42•2sf5•4h ago•10 comments

An articulated archer automaton [video]

https://www.youtube.com/watch?v=Bc0bIpDVEa8
3•Teever•45m ago•0 comments

Like Game-of-Life, but on Growing Graphs, with WASM and WebGL

https://znah.net/graphs/
124•znah•1d ago•17 comments

Art of Roads in Games

https://sandboxspirit.com/blog/art-of-roads-in-games/
553•linolevan•23h ago•181 comments

Ask HN: What are you working on? (February 2026)

231•david927•1d ago•790 comments

GitHub is down again

https://www.githubstatus.com/incidents/54hndjxft5bx
397•MattIPv4•4h ago•360 comments

Eddie Bauer, venerable outdoor apparel retailer, declares bankruptcy

https://www.cbsnews.com/news/eddie-bauer-bankrupt-outdoor-apparel/
44•mgh2•2h ago•24 comments

Pg-dev-container is a ready-to-run VS Code development container for PostgreSQL

https://github.com/jnidzwetzki/pg-dev-container
6•mariuz•4d ago•1 comment

AT&T, Verizon blocking release of Salt Typhoon security assessment reports

https://www.reuters.com/business/media-telecom/senator-says-att-verizon-blocking-release-salt-typ...
217•redman25•5h ago•54 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
56•ibobev•5h ago•12 comments

From watchdogs to mouthpieces: Washington Post and the wreckage of legacy media

https://www.thejournal.ie/readme/bezos-washington-post-trump-6950317-Feb2026/
59•DyslexicAtheist•2h ago•28 comments

Nobody knows how the whole system works

https://surfingcomplexity.blog/2026/02/08/nobody-knows-how-the-whole-system-works/
231•azhenley•14h ago•161 comments

LLM-D: Kubernetes-Native Distributed Inference

https://llm-d.ai/blog/llm-d-announce
120•smarterclayton•8mo ago

Comments

anttiharju•8mo ago
I wonder if this is preferable to kServe
smarterclayton•8mo ago
llm-d would make sense if you are running a very large production LLM serving setup - say 5+ full H100 hosts. The aim is to be much more focused than kserve is on exactly the needs of serving LLMs. It would of course be possible to run alongside kserve, but the user we are targeting is not typically a kserve deployer today.
anttiharju•8mo ago
Do you think https://github.com/openai/CLIP can be run on it? LLM makes me think of chatbots, but I suppose since it's inference-based it would work. I'm somewhat unclear on the difference between LLMs and inference; I think inference is the type of compute LLMs use.

I wonder if inference-d would be a fitting name.

smarterclayton•8mo ago
Inference is the process of evaluating a model ("inferring" a response to the inputs). LLMs are uniquely difficult to serve because they push the limits on the hardware.

The models we support come from the model server vLLM https://docs.vllm.ai/en/latest/models/supported_models.html, which has a focus on large generative models. I don't see CLIP in the list.
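
A minimal sketch of that inference step, using vLLM's offline Python API (the model name here is just a small placeholder, not one of the large generative models llm-d targets):

    # Load a model with vLLM and "infer" a response to a prompt.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")  # small placeholder model
    params = SamplingParams(temperature=0.8, max_tokens=32)
    outputs = llm.generate(["Why are LLMs hard to serve?"], params)
    print(outputs[0].outputs[0].text)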

dzr0001•8mo ago
I did a quick scan of the repo and didn't see any reference to Ray. Would this indicate that llm-d lacks support for pipeline parallelism?
qntty•8mo ago
I believe this is a question you should ask about vLLM, not llm-d. It looks like vLLM does support pipeline parallelism via Ray: https://docs.vllm.ai/en/latest/serving/distributed_serving.h...

This project appears to make use of both vLLM and Inference Gateway (an official Kubernetes extension to the Gateway resource). The contribution of llm-d itself seems to mostly be a scheduling algorithm for load balancing across vLLM instances.
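
For concreteness, a rough sketch of requesting pipeline parallelism from vLLM with the Ray backend; the argument names follow vLLM's distributed-serving docs, but treat the exact parameters as something to verify against your vLLM version:

    # Sketch: a single vLLM engine spread across GPUs/hosts.
    from vllm import LLM

    llm = LLM(
        model="meta-llama/Llama-3.1-70B-Instruct",  # placeholder large model
        tensor_parallel_size=4,                     # shard each layer across 4 GPUs
        pipeline_parallel_size=2,                   # split the layer stack into 2 stages
        distributed_executor_backend="ray",         # Ray coordinates the workers
    )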

smarterclayton•8mo ago
We inherit any multi-host support from vLLM, so https://docs.vllm.ai/en/latest/serving/distributed_serving.h... would be the expected path.

We plan to publish examples of multi-host inference that leverage LeaderWorkerSets - https://github.com/kubernetes-sigs/lws - which helps run ranked serving workloads across hosts. LeaderWorkerSet is how Google supports both TPU and GPU multi-host deployments - see https://github.com/kubernetes-sigs/lws/blob/main/config/samp... for an example.

Edit: Here is an example Kubernetes configuration running DeepSeek-R1 on vLLM multi-host using LeaderWorkerSet https://github.com/kubernetes-sigs/wg-serving/blob/main/serv.... This work would be integrated into llm-d.
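
A sketch of what such a LeaderWorkerSet might look like when created through the Kubernetes Python client; the group/version and field names below are a reading of the lws samples linked above and should be verified against that repo:

    # Sketch: one ranked serving group (leader + one worker) for multi-host vLLM.
    from kubernetes import client, config

    config.load_kube_config()
    lws = {
        "apiVersion": "leaderworkerset.x-k8s.io/v1",   # assumed group/version
        "kind": "LeaderWorkerSet",
        "metadata": {"name": "vllm-multihost"},
        "spec": {
            "replicas": 1,                  # one serving group
            "leaderWorkerTemplate": {
                "size": 2,                  # leader plus one worker host
                "workerTemplate": {         # pod template for each host (trimmed)
                    "spec": {"containers": [
                        {"name": "vllm", "image": "vllm/vllm-openai:latest"},
                    ]},
                },
            },
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="leaderworkerset.x-k8s.io", version="v1",
        namespace="default", plural="leaderworkersets", body=lws,
    )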

rdli•8mo ago
This is really interesting. For SOTA inference systems, I've seen two general approaches:

* The "stack-centric" approach such as vLLM production stack, AIBrix, etc. These set up an entire inference stack for you including KV cache, routing, etc.

* The "pipeline-centric" approach such as NVidia Dynamo, Ray, BentoML. These give you more of an SDK so you can define inference pipelines that you can then deploy on your specific hardware.

It seems like LLM-d is the former. Is that right? What prompted you to go in that direction, instead of the direction of Dynamo?

qntty•8mo ago
It sounds like you might be confusing different parts of the stack. NVIDIA Dynamo, for example, supports vLLM as the inference engine. I think you should think of something like vLLM as more akin to Gunicorn, and llm-d as an application load balancer. And I guess something like NVIDIA Dynamo would be like Django.
smarterclayton•8mo ago
llm-d is intended to be three clean layers:

1. Balance / schedule incoming requests to the right backend

2. Model server replicas that can run on multiple hardware topologies

3. Prefix caching hierarchy with well-tested variants for different use cases

So it's a 3-tier architecture. The biggest difference with Dynamo is that llm-d is using the inference gateway extension - https://github.com/kubernetes-sigs/gateway-api-inference-ext... - which brings Kubernetes owned APIs for managing model routing, request priority and flow control, LoRA support etc.
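
As a toy illustration of what layer 1 does (not llm-d's actual scoring logic, just the general idea of prefix-cache-aware balancing across vLLM replicas):

    # Toy balancer: prefer the replica that has already seen the longest
    # shared prompt prefix; break ties by current load.
    from collections import defaultdict

    class PrefixAwareBalancer:
        def __init__(self, replicas):
            self.replicas = list(replicas)       # e.g. vLLM pod addresses
            self.inflight = defaultdict(int)     # crude load signal
            self.seen = defaultdict(list)        # replica -> prompts routed there

        def _overlap(self, replica, prompt):
            # Longest shared character prefix with anything previously routed here.
            best = 0
            for prev in self.seen[replica]:
                n = 0
                for a, b in zip(prev, prompt):
                    if a != b:
                        break
                    n += 1
                best = max(best, n)
            return best

        def pick(self, prompt):
            best = max(self.replicas,
                       key=lambda r: (self._overlap(r, prompt), -self.inflight[r]))
            self.inflight[best] += 1
            self.seen[best].append(prompt)
            return best

    lb = PrefixAwareBalancer(["pod-a:8000", "pod-b:8000"])
    print(lb.pick("You are a helpful assistant. Summarize the following:"))
    print(lb.pick("You are a helpful assistant. Translate the following:"))  # same pod, shared prefix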

rdli•8mo ago
I would think that the NVidia Dynamo SDK (pipelines) is a big difference as well (https://github.com/ai-dynamo/dynamo/tree/main/deploy/sdk/doc...), or am I missing something?
smarterclayton•8mo ago
That's a good example - I can at least answer about why it's a difference: different target user.

As I understand the Dynamo SDK it is about simplifying and helping someone get started with Dynamo on Kubernetes.

From the user set we work with (large inference deployers) that is not a high priority - they already have mature deployment opinions or a set of tools that would not compose well with something like the Dynamo SDK. Their comfort level with Kubernetes is moderate to high - either they use Kubernetes for high scale training and batch, or they are deploying to many different providers in order to get enough capacity and need a standard orchestration solution.

llm-d focuses on helping achieve efficiency dynamically at runtime based on changing traffic or workload on Kubernetes - some of the things the Dynamo SDK encodes are static and upfront and would conflict with that objective. Also, large serving deployers typically have significant batch and training workloads as well, and they are looking to maximize capacity use without impacting their prod serving. That requires the orchestrator to know about both workloads at some level - which the Dynamo SDK would make more difficult.

rdli•8mo ago
In this analogy, Dynamo is most definitely not like Django. It includes inference aware routing, KV caching, etc. -- all the stuff you would need to run a modern SOTA inference stack.
qntty•8mo ago
You're right, I was confusing TensorRT with Dynamo. It looks like the relationship between Dynamo and vLLM is actually the opposite of what I was thinking -- Dynamo can use vLLM as a backend rather than vice versa.
Kemschumam•8mo ago
What would be the benefit of this project over hosting vLLM in Ray?