
Tokasaurus: An LLM inference engine for high-throughput workloads

https://scalingintelligence.stanford.edu/blogs/tokasaurus/
218•rsehrlich•8mo ago

Comments

behnamoh•8mo ago
While Tokasaurus’s Async-TP shows impressive throughput gains, it seems over-engineered for common use cases. The CPU overhead from async tensor parallelism only pays off at 6k+ token batches, and you need NVLink-connected GPUs to see real benefits. Most prod deployments don’t need this complexity — you’re better off with simpler approaches unless you’re specifically optimizing for massive batch throughput. The adaptive manager skipping “optional” tasks under load also feels concerning from a reliability perspective.
bjt12345•8mo ago
But surely next year's production deployments will be very different from today's, with different use cases, etc.
jdiff•8mo ago
Sure. Things change over time. Is there a reason to believe they'd be different in such a way that this would be more useful than in today's landscape? I haven't seen such a forecast myself.
YetAnotherNick•8mo ago
Depends on what production means for you. This is useful for batch production jobs.

Also, this seems very useful for generating synthetic data or labelling a bunch of data. 6k batch size is small for data labelling.

cpard•8mo ago
How big of a use case is synthetic data generation? I’m curious as I see a lot about it coming from academic projects but I haven’t seen much related to commercial use cases
electroglyph•8mo ago
Tiny NNs distilled from LLMs can produce some amazing results; I'm surprised it's not more common, tbh.
cpard•8mo ago
I agree, there are impressive results. This just came out from Berkeley https://arxiv.org/abs/2506.04178

But still, I mainly see work on this direction in academia.

nabakin•8mo ago
> On throughput-focused benchmarks, Tokasaurus can outperform vLLM and SGLang by up to 3x+.

Looks like they don't compare to TensorRT-LLM throughput numbers which, last I checked, are SOTA in open source.

andersa•8mo ago
TensorRT-LLM being open source is a lie, all the important kernels are loaded from cubins.
nabakin•8mo ago
Yeah you're right (although, they started to open source some of that recently iirc). I meant SOTA for inference engines we can actually download and use ourselves.
qeternity•8mo ago
It also appears that this was a sampling benchmark...which is not representative.

Generation benchmark was 5% faster than SGLang.

symbolicAGI•8mo ago
Given chat and API needs for low latency, llama.cpp is probably still the best choice for self-hosted models, with or without GPU support. And Ollama is the leader for wrapping llama.cpp.

Because Tokasaurus was mentioned as better than Ollama for running Darwin Gödel Machine operations (self-improvement), I looked for the linked repo on GitHub and it was 404. So glad it is back: https://github.com/ScalingIntelligence/tokasaurus.

radq•8mo ago
Cool project! The codebase is simple and well documented, a good starting point for anyone interested in how to implement a high-performance inference engine. The prefix sharing is very relevant for anyone running batch inference to generate RL rollouts.
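The prefix-sharing idea mentioned above can be sketched in a few lines of plain Python. This is a toy model, not Tokasaurus's actual implementation: the point is just that when many prompts share a prefix (a system prompt, an RL environment description), the prefill work for that prefix can be done once and reused.

```python
# Toy sketch of prefix sharing for batch inference (illustrative only;
# the real engine caches KV tensors on the GPU, not Python lists).

class PrefixCache:
    def __init__(self):
        self.cache = {}          # tuple of prefix tokens -> cached "KV" state
        self.prefill_tokens = 0  # counts simulated prefill work

    def _compute_kv(self, tokens):
        self.prefill_tokens += len(tokens)  # stand-in for attention prefill cost
        return list(tokens)                 # stand-in for real KV tensors

    def prefill(self, tokens):
        # Reuse the longest cached prefix; compute KV only for the suffix.
        kv = None
        for cut in range(len(tokens), 0, -1):
            if tuple(tokens[:cut]) in self.cache:
                kv = self.cache[tuple(tokens[:cut])] + self._compute_kv(tokens[cut:])
                break
        if kv is None:
            kv = self._compute_kv(tokens)
        # Cache every prefix so later prompts can share it.
        for cut in range(1, len(tokens) + 1):
            self.cache[tuple(tokens[:cut])] = kv[:cut]
        return kv

cache = PrefixCache()
shared = [1, 2, 3, 4]             # e.g. a system prompt shared by all rollouts
cache.prefill(shared + [10, 11])  # computes KV for all 6 tokens
cache.prefill(shared + [20, 21])  # computes KV only for the last 2
print(cache.prefill_tokens)       # 8 instead of 12 without sharing
```

With thousands of RL rollouts per prompt, the shared prefix dominates, which is why this matters so much for batch workloads.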
refibrillator•8mo ago
The code has few comments but gotta love when you can tell someone was having fun!

https://github.com/ScalingIntelligence/tokasaurus/blob/65efb...

I’m honestly impressed that a pure python implementation can beat out vLLM and SGLang. Granted they lean on FlashInfer, and of course torch.compile has gotten incredibly powerful in the last few years. Though dynamic shapes have still been a huge thorn in my side, I’ll need to look closer at how they pulled it off…

bobrenjc93•8mo ago
Hi! I work on dynamic shapes in pytorch and would love to hear more about the challenges you’ve run into. We’re always looking to improve the experience, so if you’re open to chatting, feel free to DM me on Twitter (@bobrenjc93) or email me at bobren@meta.com.
gricardo99•8mo ago
since you work on pytorch, what would you say is the best place to ask questions about general usage, trouble shooting? I’ve been struggling with a, what I would consider, a simple torchrun elastic training example, and haven’t found any good resources online. I’ve been spelunking through pytorch but have a feeling a little back and forth with someone familiar with these features would immensely clear things up.
bobrenjc93•8mo ago
PyTorch Dev Discuss is a fantastic forum where many core devs actively participate and answer questions: https://dev-discuss.pytorch.org

In addition to Dev Discuss, a number of core contributors are also active on Twitter. Two particularly helpful and prolific voices are @ezyang and @cHHillee.

Finally, don’t overlook GitHub issues—they’re a surprisingly effective way to start conversations. If you’ve found a bug or have ideas on how to improve the APIs, opening an issue is always welcome.

almostgotcaught•8mo ago
There's also the slack but you gotta know someone to get on that ;)
chillee•8mo ago
I mean, vllm and sglang are both "pure python" essentially as well. But yeah, in ML you rarely require C++ to get good performance for most of the systems people are writing.
AStonesThrow•8mo ago
Stanford was edgy enough to reefer to “toking” in the moniker, but exercises restraint by depicting the titular thunder lizard smoking a putatively conventional tobacco cigarette.

I am hoping to use this “Tokasaurus” nickname with affection for my neighbors. If Stanford is ok with informal usage.

Success with Meta AI / Llama 4:

Hey Meta, I would like to see an image of a Tyrannosaurus Rex, who is clad in a leather jacket, sunglasses, and fedora. He is so cool looking, and smoking a joint of marijuana, and his image is superimposed against a skyline of Phoenix in the golden glow of sunset.

Can you light up the joint with a glowing tip?

Art9681•8mo ago
Proof that attention is not only highly desired by Stanford tech bros, but HN keyboard warriors equipped with LLM tech. Everyone is clever all of the time.
catlifeonmars•8mo ago
I appreciate the double entendre
DiabloD3•8mo ago
Shame this is written in Python, looks very interesting, but I'm no expert in this field.

If there is anything here worth using, it's entirely possible that the llama.cpp crew can save it from vanishing into obscurity.

Szpadel•8mo ago
I'm curious how big the latency tradeoff is. I know the assumption here is that it doesn't matter for these use cases, but what order of magnitude is it? 10x? 100x?

This is important for use in "soft realtime" applications, where you don't need an instant response but someone is still waiting.

If the latency is really high, then it can basically only be used for background processes.
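For a rough sense of the tradeoff, here is a back-of-envelope decode-step model. All numbers are made-up assumptions for illustration, not measurements of Tokasaurus or any engine: each decode step serves the whole batch, so aggregate throughput scales with batch size, but every request pays the full (and growing) step time for each of its output tokens.

```python
# Toy latency/throughput model for batched decoding.
# base_step_ms and per_seq_ms are assumed constants, not real measurements.

def decode_stats(batch_size, base_step_ms=10.0, per_seq_ms=0.05, out_tokens=256):
    step_ms = base_step_ms + per_seq_ms * batch_size  # step time grows with batch
    throughput = batch_size * 1000.0 / step_ms        # tokens/s across the batch
    latency_s = out_tokens * step_ms / 1000.0         # time to finish one request
    return step_ms, throughput, latency_s

for b in (1, 64, 6144):
    step, tput, lat = decode_stats(b)
    print(f"batch={b:5d}  step={step:6.1f} ms  tokens/s={tput:9.0f}  latency={lat:6.1f} s")
```

Under these assumed constants, going from batch 1 to batch 6144 buys roughly two orders of magnitude in aggregate throughput while inflating per-request completion time by tens of times (and that's before counting queueing delay before a request is even scheduled), which is consistent with the "background jobs, not interactive chat" framing.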
