
SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
124•valyala•4h ago•22 comments

Tiny C Compiler

https://bellard.org/tcc/
9•guerrilla•47m ago•2 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
57•zdw•3d ago•21 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
29•gnufx•3h ago•24 comments

FDA Intends to Take Action Against Non-FDA-Approved GLP-1 Drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
3•randycupertino•8m ago•1 comment

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
65•surprisetalk•4h ago•79 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
104•mellosouls•7h ago•198 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
147•AlexeyBrin•10h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
107•vinhnx•7h ago•14 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
856•klaussilveira•1d ago•262 comments

You Are Here

https://brooker.co.za/blog/2026/02/07/you-are-here.html
5•mltvc•43m ago•1 comment

Italy Railways Sabotaged

https://www.bbc.co.uk/news/articles/czr4rx04xjpo
23•vedantnair•49m ago•14 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1101•xnx•1d ago•619 comments

First Proof

https://arxiv.org/abs/2602.05192
71•samasblack•7h ago•51 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
246•jesperordrup•14h ago•82 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
67•thelok•6h ago•12 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
12•mbitsnbites•3d ago•0 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
146•valyala•4h ago•122 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
524•theblazehen•3d ago•195 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
34•momciloo•4h ago•5 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
95•onurkanbkrc•9h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
15•languid-photic•3d ago•5 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
39•marklit•5d ago•6 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
198•1vuio0pswjnm7•11h ago•289 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
51•rbanffy•4d ago•11 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
627•nar001•8h ago•277 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
263•alainrk•9h ago•437 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
126•videotopia•4d ago•40 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
103•speckx•4d ago•129 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
37•sandGorgon•2d ago•17 comments

Nano-Vllm: Lightweight vLLM implementation built from scratch

https://github.com/GeeeekExplorer/nano-vllm
125•simonpure•7mo ago

Comments

unwind•7mo ago
Meta: the Title Casing in the title is pretty obnoxious, "Vllm" is exactly the inverse, casing-wise, of how the project spells its name.
msephton•7mo ago
FWIW, the OP has a small window of time to correct the casing after posting.
futurecliff•7mo ago
How did you do it? Which part of the vLLM refactoring allowed you to get such gains?
zackify•7mo ago
Will this end up getting an OpenAI-compatible web server, or is that out of scope?
jimmySixDOF•7mo ago
A little sparse on the documentation side; I can't tell at a glance if there is 1:1 hyperparameter tunability or if this is an opinionated, single-path, locked soft-FPGA, eval-hacking kind of thing.

EDIT: OK, it's legit. Here is an example of it put to use by the makers of the Dolphin open-source series of fine-tunes:

> Here I implement in nano-vllm, efficient sample-K logit extraction, as described in "Sparse Logit Sampling: Accelerating Knowledge Distillation in LLMs" by Anshumann et. al. Sampling occurs on the GPU, the non-sampled logits do not get copied out of GPU space. I tried to implement this in @vllm_project, but it was a bit too heavy for me to figure out.

https://github.com/GeeeekExplorer/nano-vllm/pull/34
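The sample-K idea from that PR can be sketched in plain Python. This is a toy of the concept only (not the PR's actual GPU implementation; the function name and parameters are made up for illustration): sample K token ids from the softmax distribution where the logits live, and copy out only those (id, logit) pairs instead of the full vocab-sized vector.

```python
import math
import random

def sample_k_logits(logits, k, temperature=1.0, seed=0):
    """Toy sketch of sample-K logit extraction: keep only K sampled
    logits (plus their token ids) for distillation, instead of copying
    the full vocab-sized logit vector off the device."""
    rng = random.Random(seed)
    # Softmax over the full logits (conceptually done where they live, e.g. on GPU).
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample K distinct token ids according to the distribution.
    ids = set()
    while len(ids) < min(k, len(logits)):
        r = rng.random()
        acc = 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                ids.add(i)
                break
        else:
            # Guard against floating-point rounding in the cumulative sum.
            ids.add(len(logits) - 1)
    # Only these (token id, logit) pairs leave the device.
    return sorted((i, logits[i]) for i in ids)
```

The point of the technique is bandwidth: the non-sampled logits never need to be materialized on the host at all.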

baalimago•7mo ago
So... It's a language model..? As in, not "large"? I'm a bit unsure of the magnitudes here, but surely "nano" and "large" cancel out
IanCal•7mo ago
No, vLLM is a thing for serving language models: https://github.com/vllm-project/vllm
barrenko•7mo ago
Is it more like llama.cpp then? I don't have access to the good hardware.
jasonjmcghee•7mo ago
llama.cpp is optimized to serve one request at a time.

vLLM is optimized to serve many requests at once.

If you were to fine-tune a model and wanted to serve it to many users, you would use vLLM, not llama.cpp.
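The intuition can be shown with a toy cost model (the function and all numbers here are made up purely to illustrate the argument, not measured from either project): each decode step pays a fixed overhead for streaming the model weights from memory, plus a small per-sequence compute cost, so batching amortizes the fixed cost across concurrent requests.

```python
import math

def serving_time(num_requests, tokens_per_request, batch_size,
                 step_overhead=1.0, per_token_cost=0.1):
    """Toy cost model: each forward step costs a fixed overhead
    (weight streaming) plus a small per-sequence cost. Batching
    amortizes the fixed overhead across concurrent requests."""
    num_batches = math.ceil(num_requests / batch_size)
    # One step generates one token for every sequence in the batch.
    steps = num_batches * tokens_per_request
    return steps * (step_overhead + per_token_cost * batch_size)

one_by_one = serving_time(32, 100, batch_size=1)   # sequential, llama.cpp-style
batched    = serving_time(32, 100, batch_size=32)  # concurrent, vLLM-style
```

Under these made-up constants, serving 32 users of 100 tokens each is roughly 8x faster batched (420 vs. 3520 cost units), which is the gap continuous batching is designed to exploit.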

jasonjmcghee•7mo ago
Here's a super relevant comment from another post https://news.ycombinator.com/item?id=44366418
barrenko•7mo ago
Appreciate it!
fractorial•7mo ago
Did anyone else click in excitedly after misreading "Vllm" as "LLVM"?
omneity•7mo ago
This is an incredible achievement for a solo developer. The dev is from the DeepSeek team, by the way.
Imustaskforhelp•7mo ago
That is crazy! This is so cool ngl.
tt726259•7mo ago
After seeing the Docker image for vLLM jump +5 GB (to 10 GB!) over the past five months, I grew suspicious of vLLM's development practices [1]. It's not easy, for sure, to deal with all those flaky Python modules [2].

But having the CUDA packages four times in different layers is questionable! [3]

Yet again, as a college mate of mine used to say, "Don't change it. It works."

--

[1]: https://hub.docker.com/r/vllm/vllm-openai/tags

[2]: https://github.com/vllm-project/vllm/issues/13306

[3]: These kinds of workarounds tend to accumulate and never get revisited:

- https://github.com/vllm-project/vllm/commit/b07d741661570ef1...

- https://github.com/vllm-project/vllm/commit/68d37809b9b52f4d... (this one in particular probably accounts for +3 GB)

mountainriver•7mo ago
Love this project, we need more simplifications like this in the current ML environment