frontpage.

O(x)Caml in Space

https://gazagnaire.org/blog/2026-05-14-borealis.html
67•yminsky•1h ago•2 comments

Show HN: Find the best local LLM for your hardware, ranked by benchmarks

https://github.com/Andyyyy64/whichllm
140•andyyyy64•3h ago•15 comments

Explore Wikipedia Like a Windows XP Desktop

https://explorer.samismith.com/
198•smusamashah•3h ago•48 comments

Steve Jobs' NeXT Computer: His Forgotten Exile Years

https://spectrum.ieee.org/steve-jobs-next-computer
38•rbanffy•1h ago•25 comments

Welcome to the Strip Mining Era of OSS Security

https://www.metabase.com/blog/strip-mining-era-of-open-source-security
14•salsakran•43m ago•2 comments

Removing the modem and GPS from my 2024 RAV4 hybrid

https://arkadiyt.com/2026/05/13/removing-the-modem-and-gps-from-my-rav4/
922•arkadiyt•19h ago•478 comments

SigNoz (YC W21, open source Datadog) Is hiring for growth and engineering roles

https://signoz.io/careers
1•pranay01•20m ago

UK government replaces Palantir software with internally-built refugee system

https://www.bbc.com/news/articles/c2l2j1lxdk5o
308•cdrnsf•13h ago•105 comments

Show HN: GlycemicGPT – Open-source AI-powered diabetes management

https://github.com/GlycemicGPT/GlycemicGPT
47•jlengelbrecht•7h ago•30 comments

Where's Ed: Anthropic Told Court $5B but Public $19B

https://www.flyingpenguin.com/wheres-ed-anthropic-told-court-5-billion-but-public-19-billion/
23•jorisw•4h ago•19 comments

A few words on DS4

https://antirez.com/news/165
347•caust1c•13h ago•143 comments

Building ML framework with Rust and Category Theory

https://hghalebi.github.io/category_theory_transformer_rs/
52•adamnemecek•19h ago•14 comments

Details of the Daring Airdrop at Tristan da Cunha

https://www.tristandc.com/government/news-2026-05-11-airdrop.php
174•kspacewalk2•8h ago•55 comments

RTX 5090 and M4 MacBook Air: Can It Game?

https://scottjg.com/posts/2026-05-05-egpu-mac-gaming/
621•allenleee•20h ago•145 comments

First public macOS kernel memory corruption exploit on Apple M5

https://blog.calif.io/p/first-public-kernel-memory-corruption
385•quadrige•17h ago•98 comments

Gyroflow: Video stabilization using gyroscope data

https://github.com/gyroflow/gyroflow
106•nateb2022•2d ago•18 comments

New Nginx Exploit

https://github.com/DepthFirstDisclosures/Nginx-Rift
398•hetsaraiya•19h ago•88 comments

Codex is now in the ChatGPT mobile app

https://openai.com/index/work-with-codex-from-anywhere/
362•mikeevans•16h ago•178 comments

UK sovereign LLM inference

https://relax.ai/docs
85•benjamintnorris•2h ago•84 comments

Mullvad exit IPs are surprisingly identifying

https://tmctmt.com/posts/mullvad-exit-ips-as-a-fingerprinting-vector/
438•RGBCube•9h ago•252 comments

Tesla Wall Connector bootloader bypasses the firmware downgrade ratchet

https://www.synacktiv.com/en/publications/exploiting-the-tesla-wall-connector-from-its-charge-por...
106•p_stuart82•15h ago•48 comments

Solar-based sleep patterns compared to modern norms

https://dylan.gr/1775146616
84•James72689•8h ago•73 comments

Claude for Legal

https://github.com/anthropics/claude-for-legal
112•Einenlum•15h ago•102 comments

HDD Firmware Hacking

https://icode4.coffee/?p=1465
197•jsploit•20h ago•26 comments

RISC-V Router

https://router.start9.com/
127•janandonly•16h ago•75 comments

Access to frontier AI will soon be limited by economic and security constraints

https://writing.antonleicht.me/p/cut-off
177•thoughtpeddler•11h ago•169 comments

Porting 3D Movie Maker to Linux

https://benstoneonline.com/posts/porting-3d-movie-maker-to-linux/
139•speckx•3d ago•32 comments

What's in a GGUF, besides the weights – and what's still missing?

https://nobodywho.ooo/posts/whats-in-a-gguf/
158•bashbjorn•18h ago•46 comments

New arXiv policy: 1-year ban for hallucinated references

https://twitter.com/tdietterich/status/2055000956144935055
543•gjuggler•15h ago•191 comments

Overseas fakers using AI videos to push a narrative of UK decline, BBC finds

https://www.bbc.co.uk/news/articles/ckgpyn30dp3o
38•dijksterhuis•2h ago•31 comments

LLM-D: Kubernetes-Native Distributed Inference

https://llm-d.ai/blog/llm-d-announce
120•smarterclayton•11mo ago

Comments

anttiharju•11mo ago
I wonder if this is preferable to KServe
smarterclayton•11mo ago
llm-d would make sense if you are running a very large production LLM serving setup - say 5+ full H100 hosts. The aim is to be much more focused than KServe on exactly the needs of serving LLMs. It would of course be possible to run alongside KServe, but the user we are targeting is not typically a KServe deployer today.
anttiharju•11mo ago
Do you think https://github.com/openai/CLIP could be run on it? LLM makes me think of chatbots, but I suppose since it's inference-based it would work. I'm somewhat unclear on the difference between LLMs and inference - I think inference is the type of compute LLMs use.

I wonder if inference-d would be a fitting name.

smarterclayton•11mo ago
Inference is the process of evaluating a model ("inferring" a response to the inputs). LLMs are uniquely difficult to serve because they push the limits of the hardware.

The models we support come from the model server vLLM https://docs.vllm.ai/en/latest/models/supported_models.html, which has a focus on large generative models. I don't see CLIP in the list.
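
For readers unfamiliar with vLLM, here is a minimal offline-inference sketch using its Python API; the model name is only a placeholder and should be swapped for any generative model from the supported-models list linked above:

    # Minimal sketch of offline inference with vLLM (the model server llm-d builds on).
    # Assumes vLLM is installed and the placeholder model fits on the local GPU.
    from vllm import LLM, SamplingParams

    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # placeholder model name
    params = SamplingParams(temperature=0.7, max_tokens=128)

    outputs = llm.generate(["Explain what model inference means."], params)
    for out in outputs:
        print(out.outputs[0].text)  # the generated completion for each prompt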

dzr0001•11mo ago
I did a quick scan of the repo and didn't see any reference to Ray. Would this indicate that llm-d lacks support for pipeline parallelism?
qntty•11mo ago
I believe this is a question you should ask about vLLM, not llm-d. It looks like vLLM does support pipeline parallelism via Ray: https://docs.vllm.ai/en/latest/serving/distributed_serving.h...

This project appears to make use of both vLLM and Inference Gateway (an official Kubernetes extension to the Gateway resource). The contribution of llm-d itself seems to mostly be a scheduling algorithm for load balancing across vLLM instances.
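
To make the "parallelism comes from vLLM, not llm-d" point concrete, here is a hedged sketch of vLLM's multi-GPU settings; the parallel sizes and model name are illustrative, and the Ray backend assumes a running Ray cluster (see the distributed-serving docs linked above):

    # Sketch: tensor and pipeline parallelism are configured on the vLLM engine itself.
    # Sizes and model are illustrative; a Ray cluster must already be available.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="meta-llama/Llama-3.1-70B-Instruct",  # placeholder large model
        tensor_parallel_size=4,                     # shard each layer across 4 GPUs
        pipeline_parallel_size=2,                   # split layers into 2 pipeline stages
        distributed_executor_backend="ray",         # run the workers on Ray
    )

    out = llm.generate(["hello"], SamplingParams(max_tokens=16))
    print(out[0].outputs[0].text)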

smarterclayton•11mo ago
We inherit any multi-host support from vLLM, so https://docs.vllm.ai/en/latest/serving/distributed_serving.h... would be the expected path.

We plan to publish examples of multi-host inference that leverage LeaderWorkerSet - https://github.com/kubernetes-sigs/lws - which helps run ranked serving workloads across hosts. LeaderWorkerSet is how Google supports both TPU and GPU multi-host deployments - see https://github.com/kubernetes-sigs/lws/blob/main/config/samp... for an example.

Edit: Here is an example Kubernetes configuration running DeepSeek-R1 on vLLM multi-host using LeaderWorkerSet https://github.com/kubernetes-sigs/wg-serving/blob/main/serv.... This work would be integrated into llm-d.
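
As a rough illustration of what a LeaderWorkerSet object looks like, here is a hedged sketch that creates one with the Kubernetes Python client; the group/version/plural and spec field names follow a reading of the kubernetes-sigs/lws samples and should be checked against the linked repo before use:

    # Hedged sketch: create a LeaderWorkerSet (ranked leader + worker pods) via the
    # Kubernetes Python client. Field names are assumptions based on the LWS samples.
    from kubernetes import client, config

    config.load_kube_config()

    lws = {
        "apiVersion": "leaderworkerset.x-k8s.io/v1",
        "kind": "LeaderWorkerSet",
        "metadata": {"name": "vllm-multihost"},
        "spec": {
            "replicas": 1,                  # one leader/worker group
            "leaderWorkerTemplate": {
                "size": 2,                  # leader pod plus one worker pod
                "leaderTemplate": {         # pod template for the rank-0 host
                    "spec": {"containers": [
                        {"name": "vllm-leader", "image": "vllm/vllm-openai:latest"}]}
                },
                "workerTemplate": {         # pod template for the remaining ranks
                    "spec": {"containers": [
                        {"name": "vllm-worker", "image": "vllm/vllm-openai:latest"}]}
                },
            },
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="leaderworkerset.x-k8s.io", version="v1",
        namespace="default", plural="leaderworkersets", body=lws,
    )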

rdli•11mo ago
This is really interesting. For SOTA inference systems, I've seen two general approaches:

* The "stack-centric" approach such as vLLM production stack, AIBrix, etc. These set up an entire inference stack for you including KV cache, routing, etc.

* The "pipeline-centric" approach such as NVidia Dynamo, Ray, BentoML. These give you more of an SDK so you can define inference pipelines that you can then deploy on your specific hardware.

It seems like llm-d is the former. Is that right? What prompted you to go in that direction, rather than the direction of Dynamo?

qntty•11mo ago
It sounds like you might be confusing different parts of the stack. NVIDIA Dynamo, for example, supports vLLM as the inference engine. I think you should think of something like vLLM as more akin to Gunicorn, and llm-d as an application load balancer. And I guess something like NVIDIA Dynamo would be like Django.
smarterclayton•11mo ago
llm-d is intended to be three clean layers:

1. Balance / schedule incoming requests to the right backend

2. Model server replicas that can run on multiple hardware topologies

3. Prefix caching hierarchy with well-tested variants for different use cases

So it's a 3-tier architecture. The biggest difference with Dynamo is that llm-d is using the inference gateway extension - https://github.com/kubernetes-sigs/gateway-api-inference-ext... - which brings Kubernetes owned APIs for managing model routing, request priority and flow control, LoRA support etc.
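
As an illustration of what layer 1 does - this is not llm-d's actual scheduler, just a toy sketch of an inference-aware routing decision - a router might score vLLM replicas by prefix-cache overlap and current load, then send the request to the best one:

    # Toy sketch of an inference-aware routing decision: prefer replicas that already
    # hold the prompt prefix in KV cache, penalize queue depth. Weights are made up;
    # llm-d's real scheduler is more involved.
    from dataclasses import dataclass

    @dataclass
    class Replica:
        name: str
        queued_requests: int       # current load on this backend
        cached_prefixes: set[str]  # prefixes this replica has warm in KV cache

    def score(replica: Replica, prompt_prefix: str) -> float:
        cache_hit = 1.0 if prompt_prefix in replica.cached_prefixes else 0.0
        return 2.0 * cache_hit - 0.1 * replica.queued_requests

    def pick_replica(replicas: list[Replica], prompt_prefix: str) -> Replica:
        return max(replicas, key=lambda r: score(r, prompt_prefix))

    replicas = [
        Replica("vllm-0", queued_requests=3, cached_prefixes={"system-prompt-a"}),
        Replica("vllm-1", queued_requests=1, cached_prefixes=set()),
    ]
    print(pick_replica(replicas, "system-prompt-a").name)  # -> vllm-0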

rdli•11mo ago
I would think that the NVIDIA Dynamo SDK (pipelines) is a big difference as well (https://github.com/ai-dynamo/dynamo/tree/main/deploy/sdk/doc...), or am I missing something?
smarterclayton•11mo ago
That's a good example - I can at least answer about why it's a difference: different target user.

As I understand it, the Dynamo SDK is about simplifying and helping someone get started with Dynamo on Kubernetes.

For the user set we work with (large inference deployers) that is not a high priority - they already have mature deployment opinions or a set of tools that would not compose well with something like the Dynamo SDK. Their comfort level with Kubernetes is moderate to high - either they use Kubernetes for high-scale training and batch, or they are deploying to many different providers in order to get enough capacity and need a standard orchestration solution.

llm-d focuses on helping achieve efficiency dynamically at runtime, based on changing traffic or workload on Kubernetes - some of the things the Dynamo SDK encodes are static and upfront, which would conflict with that objective. Also, large serving deployers typically have significant batch and training workloads as well, and they are looking to maximize capacity use without impacting their production serving. That requires the orchestrator to know about both workloads at some level - which the Dynamo SDK would make more difficult.

rdli•11mo ago
In this analogy, Dynamo is most definitely not like Django. It includes inference-aware routing, KV caching, etc. -- all the stuff you would need to run a modern SOTA inference stack.
qntty•11mo ago
You're right, I was confusing TensorRT with Dynamo. It looks like the relationship between Dynamo and vLLM is actually the opposite of what I was thinking -- Dynamo can use vLLM as a backend rather than vice versa.
Kemschumam•11mo ago
What would be the benefit of this project over hosting vLLM on Ray?