Defeating Nondeterminism in LLM Inference

https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/
88•jxmorris12•2h ago

Comments

measurablefunc•2h ago
I think this means that the results might also be non-deterministic across hardware revisions, because I don't think they verified that the kernels behave identically on different GPU & TPU versions. How do they know the compiler will not re-order the operations behind their back?
TimorousBestie•2h ago
Ensuring the same floating-point algorithm workload behaves exactly the same on two distinct workstations is a heck of a lot of work that almost no one is willing to pay for.
measurablefunc•1h ago
Not only that, but heterogeneous clusters (inevitable at a large enough scale) will also have non-deterministic outputs. So it's great that they wrote kernels to make the forward pass deterministic, but getting rid of nondeterminism entirely at data-center scale would mean doing this type of work across cluster nodes as well, to maintain "cluster" invariance and not just batch invariance.
reliabilityguy•1h ago
> will not re-order the operations behind their back?

Valid point. Floating-point summation is not associative, so a reordering can change the result.
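
A quick illustration of the non-associativity in Python:

    a, b, c = 1e16, -1e16, 1.0

    print((a + b) + c)  # 1.0 -- the large values cancel first
    print(a + (b + c))  # 0.0 -- c is absorbed by b's rounding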

AlotOfReading•1h ago
You can prevent reordering with sufficient amounts of compiler abuse.

Across hardware revisions, you're trying to ensure a consistent floating-point environment where the operations used are deterministic and applied in the same order with the same inputs. The best way to do that is to use operations that adhere to a mostly deterministic standard like IEEE-754.

saagarjha•1h ago
Yes, there’s usually no guarantee on how different hardware does operations (for example, even if the hardware is correctly rounding intermediate results, different hardware may use different tile sizes). The reproducibility here is for runs on the same machine.

Compilers can also reorder operations, but in practice this is rarely an issue because kernels typically synchronize frequently, which limits the compiler's ability to reorder things. This isn't to say it never happens, but when it does it's usually because the compiler itself changed; the code a given compiler generates is generally run-to-run identical.
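
A sketch of the tile-size effect in plain NumPy (the chunk sizes are arbitrary):

    import numpy as np

    x = np.random.RandomState(0).standard_normal(1_000_000).astype(np.float32)

    def chunked_sum(v, chunk):
        # Reduce in fixed-size tiles, then combine the partials. Different
        # "tile sizes" accumulate rounding error in different orders.
        partials = [v[i:i + chunk].sum() for i in range(0, len(v), chunk)]
        return np.float32(sum(partials))

    print(chunked_sum(x, 256))
    print(chunked_sum(x, 4096))  # usually differs from the above in the low bits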

lrvick•1h ago
Job one is to have every bit of software involved be deterministic as well, which stagex takes care of.

I had no problem getting deterministic LLM outputs when I experimented with this 6 months ago.

Run two of these with the same prompts and same seed and you get the same results.

Obviously in GPU clusters with different hardware things get more complicated.

https://git.distrust.co/public/llmshell
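
A minimal sketch of that kind of check with llama-cpp-python (the model path is a placeholder; llmshell wires this up differently):

    from llama_cpp import Llama

    llm = Llama(model_path="model.gguf", seed=42)  # fixed seed

    out1 = llm("Write a haiku about GPUs.", temperature=0.0, max_tokens=64)
    out2 = llm("Write a haiku about GPUs.", temperature=0.0, max_tokens=64)

    # Same machine, same seed, same prompt -> same tokens.
    assert out1["choices"][0]["text"] == out2["choices"][0]["text"]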

saagarjha•1h ago
What’s stagex?
spindump8930•1h ago
That's not what this is about.

"I had no problem getting deterministic LLM outputs when I experimented with this 6 months ago" looks like you're using llama-cpp in that repo. This is about vllm serving many requests at once, at long sequence lengths.

> As it turns out, our request’s output does depend on the parallel user requests. Not because we’re somehow leaking information across batches — instead, it’s because our forward pass lacks “batch invariance”, causing our request’s output to depend on the batch size of our forward pass.

Your situation isn't really comparable.
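
The batch-size dependence is easy to reproduce in PyTorch. A sketch (needs a GPU; whether the difference shows up depends on the hardware and which kernels get selected):

    import torch

    torch.manual_seed(0)
    A = torch.randn(2048, 2048, dtype=torch.bfloat16, device="cuda")
    B = torch.randn(2048, 2048, dtype=torch.bfloat16, device="cuda")

    # Row 0 multiplied alone vs. as part of the full batch:
    alone = torch.mm(A[:1], B)
    batched = torch.mm(A, B)[:1]

    # Often nonzero: a different batch size can select a different kernel,
    # which reduces in a different order.
    print((alone - batched).abs().max())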

lsy•1h ago
Fixing "theoretical" nondeterminism for a totally closed individual input-output pair doesn't solve the two "practical" nondeterminism problems, where the exact same input gives different results given different preceding context, and where a slightly transformed input doesn't give a correctly transformed result.

Until those are addressed, closed-system nondeterminism doesn't really help except in cases where a lookup table would do just as well. You can't use "correct" unit tests or evaluation sets to prove anything about inputs you haven't tested.

saagarjha•1h ago
This is really useful for reproducing bugs.
mg•1h ago
I really hope we will get deterministic LLMs in the future, even if it means slightly slower response times.

Nondeterminism is what currently keeps me from working with other developers.

As I wrote in "Prompt Coding" [1], these days I am not looking for good code. I am looking for prompts that create good code. But how do you share prompts among developers when they produce different code every time? You cannot simply state "Here, I found a prompt that makes gpt-5-2025-08-07 output a solution with all the desired attributes".

It's similar with images. At the moment, for most image models, you cannot outsource the task of writing prompts that create the desired images, because most image models will not create the same image when given the same prompt and parameters.

[1]: https://www.gibney.org/prompt_coding
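
For what it's worth, some APIs expose a best-effort seed. A sketch with the openai Python client (model name taken from above); the fingerprint is the catch, since when the backend changes, the same seed no longer guarantees the same output:

    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-5-2025-08-07",
        messages=[{"role": "user", "content": "Implement an LRU cache."}],
        seed=42,        # best-effort determinism, not a guarantee
        temperature=0,
    )
    # If system_fingerprint differs between calls, identical seeds
    # may still produce different output.
    print(resp.system_fingerprint)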

khimaros•12m ago
I tried to create a Makefile-driven workflow based on this idea and ended up with https://github.com/khimaros/enc -- it suffers from the issues you raised.

I'm hoping that it becomes more useful as models improve and become more reliable at producing working code (though determinism would be great for improving prompts).

TNDnow•1h ago
Who needs a working product when you can spend all day designing the most WeWork-looking website and slapping some pseud slop on it? It's like crypto "startups", but it's not even fun.
cubefox•1h ago
His solution still relies on greedy (temperature 0) sampling, which is probably not optimal for model performance on various tasks. For example, Gemini 2.5 uses temperature 1 by default. But deterministic inference with temperature >0 can still be achieved by using pseudorandom sampling with a fixed seed.
mynameismon•32m ago
The point of the blog is that non-determinism creeps in even with supposedly deterministic (greedy) sampling. This in turn has disastrous effects on very real experiments.
cubefox•27m ago
My point is that greedy sampling is neither sufficient nor necessary for deterministic inference.
red2awn•27m ago
Conceptually, setting temperature >0 doesn't actually introduce any non-determinism: if your sampler is seeded, it will always choose the same next token. Higher temperature only flattens the logit distribution.
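
A sketch of that in PyTorch: temperature rescales the logits before the softmax, and a seeded generator makes the draw repeatable (assuming, per the article, that the logits themselves are bitwise identical run to run):

    import torch

    def sample(logits: torch.Tensor, temperature: float, seed: int) -> int:
        probs = torch.softmax(logits / temperature, dim=-1)  # >1 flattens, <1 sharpens
        gen = torch.Generator().manual_seed(seed)
        return torch.multinomial(probs, num_samples=1, generator=gen).item()

    logits = torch.tensor([2.0, 1.0, 0.5])
    assert sample(logits, 1.0, seed=7) == sample(logits, 1.0, seed=7)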
jll29•1h ago
Sometimes the reason for non-determinism is implementation-specific. For instance, in GPT-2's source code (I haven't checked other model versions), setting the temperature to 0 in the GUI does not actually use a value of 0 but "epsilon" (a very small value larger than 0), to avoid a division-by-zero error in the code, which makes sense.
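
A minimal sketch of that kind of guard (the epsilon value here is illustrative):

    def apply_temperature(logits, temperature, eps=1e-6):
        # Clamping avoids division by zero at temperature 0; as the
        # temperature approaches 0 this converges to greedy (argmax) sampling.
        return [l / max(temperature, eps) for l in logits]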

For many applications, non-determinism implies "useless". This has been a long-standing issue with LDA topic models. In particular, in the legal, financial and regulatory domains, if a method is not deterministic it may be illegal to use, or it may lead to follow-on requirements that one does not want (e.g. all screens shown to humans must be preserved so that one can go back and reconstruct what exactly happened to a particular user in a particular second).

eldenring•59m ago
Very impressive! I guess this still wouldn't affect their original example

> For example, you might observe that asking ChatGPT the same question multiple times provides different results.

even with 0.0 temperature, because MoE models route at the batch level and you're very unlikely to get a deterministic batch.

> Not because we’re somehow leaking information across batches — instead, it’s because our forward pass lacks “batch invariance”, causing our request’s output to depend on the batch size of our forward pass.

The router also leaks batch-level information across sequences.

boroboro4•28m ago
> even with 0.0 temperature, because MoE models route at the batch level and you're very unlikely to get a deterministic batch

I don’t think this is correct: MoE routing happens on a per-token basis. It can be non-deterministic and batch-dependent if you try to balance expert load within a batch, but that's a performance optimization (just like everything in the blog post), not the way the models are trained to work.
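
A sketch of standard top-k routing, which picks experts independently per token (shapes illustrative):

    import torch

    def route(router_logits: torch.Tensor, k: int = 2):
        # router_logits: (num_tokens, num_experts). Each token picks its own
        # top-k experts; nothing depends on which batch the token arrived in.
        probs = torch.softmax(router_logits, dim=-1)
        weights, experts = torch.topk(probs, k, dim=-1)
        return weights, experts

    w, e = route(torch.randn(5, 8))
    print(e)  # shape (5, 2): expert indices chosen per token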

syntaxing•55m ago
Super interesting. For those unaware, this is the company Mira Murati (OpenAI's former CTO) started.
jasonjmcghee•41m ago
I love high-quality, blog-post-style research discussion. Anthropic has been leading the charge on this recently, and it's great to see it spreading. OpenAI was also doing this back in the RL research days.
nowittyusername•40m ago
I am baffled that I still run into these statements years after LLMs have been around. LLMs are deterministic and always have been. The reason people have issues with them is that they base their assumptions on API-based experiments. How can you make these statements without doing the due diligence of running the LLM on your own hardware, with all of the variables locked down and accounted for? If you do just that, it becomes clear that they are deterministic, and most of the time the non-deterministic behavior you see is because you have not controlled for a variable, usually prompt caching, batch processing, or some other obvious one.

This concerns within-same-system deterministic behavior. You might get different answers when running on a different GPU, but at least on the same system the behavior is 100% identical if you account for all server startup flags and properly control for things like prompt caching, slot contamination, etc.
tossandthrow•38m ago
The article literally justifies this in the second paragraph.
nowittyusername•27m ago
I suppose I have issues with the way "determinism" is used in the title of this article. It can mean different things to different people, and to my mind "Defeating Nondeterminism in LLM Inference" frames it as an issue inherent to LLM inference. But it's not: it's an issue with large-scale inference built from more complex parts, such as multi-GPU systems and batching, and it is not an issue when using an LLM without those parts. Stating it this way muddies the signal and gives a false sense that this is a fundamental issue with the architecture, when it's really an issue of systems at scale.
golol•36m ago
Hold on a second. A transformer deterministically produces a probability distribution over the token alphabet from the context. Then one samples from this distribution, and that sampling is random and meant to be random.
oasisaimlessly•34m ago
It's possible to deterministically sample from a probability distribution. For example, just seed your RNG with a constant, or with the SHA256 hash of the context.
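
A sketch of the latter (the hash-derived seed; function names are illustrative):

    import hashlib
    import torch

    def sample_with_context_seed(probs: torch.Tensor, context: str) -> int:
        # Derive the seed from the context itself: same context, same draw.
        digest = hashlib.sha256(context.encode()).digest()
        seed = int.from_bytes(digest[:8], "big") % (2**63 - 1)
        gen = torch.Generator().manual_seed(seed)
        return torch.multinomial(probs, num_samples=1, generator=gen).item()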
Voloskaya•34m ago
I suggest you look up the name of the main author of TFA before assuming they don’t know what they are talking about.

This is literally one of the most knowledgeable people on the topic. I think you're the one who hasn't peeled back enough layers to connect with what they're saying.
