Empirical study on LLM output consistency in regulated financial tasks (RAG, JSON, SQL). Governance focus: smaller models (Qwen2.5-7B, Granite-3-8B) hit 100% determinism at T=0.0, passing audits (FSB/BIS/CFTC), vs. larger models like GPT-OSS-120B at 12.5%. The gap is huge (87.5 percentage points, p<0.0001, n=16) and survives multiple-testing corrections.
Caveat: Measures reproducibility (edit distance), not full accuracy—determinism is necessary for compliance but needs semantic checks (e.g., embeddings to ground truth). Includes harness, invariants (±5%), and attestation.
Thoughts on the inverse relationship between model size and reliability? Planning a follow-up with accuracy metrics, not just reproducibility.
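For the curious, a minimal sketch of what a repro check like this can look like. The placeholder call_model client, the 16-run loop, and difflib as the edit-distance-style score are illustrative assumptions, not the paper's actual harness:

    import difflib

    def call_model(prompt: str) -> str:
        # Placeholder: plug in your inference client here, with temperature=0.0
        # and every other sampling knob pinned.
        raise NotImplementedError

    def reproducibility(prompt: str, runs: int = 16) -> tuple[float, float]:
        outputs = [call_model(prompt) for _ in range(runs)]
        reference = outputs[0]
        # Exact-match rate: fraction of runs byte-for-byte identical to the first.
        exact = sum(o == reference for o in outputs) / runs
        # Softer similarity score: 1.0 means identical, lower means drift.
        worst_similarity = min(
            difflib.SequenceMatcher(None, reference, o).ratio() for o in outputs
        )
        return exact, worst_similarity

Byte-for-byte equality is the strict bar; the similarity ratio is only there to see how far the non-deterministic runs drift when they do.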
colechristensen•1h ago
Outputs not being deterministic at temperature = 0 doesn't match my understanding of what "temperature" means; I thought the definition of T=0 was determinism.
Is this perhaps inference implementation details somehow introducing randomness?
kakugawa•50m ago
https://news.ycombinator.com/item?id=45200925
https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
> As it turns out, our request’s output does depend on the parallel user requests. Not because we’re somehow leaking information across batches — instead, it’s because our forward pass lacks “batch invariance”, causing our request’s output to depend on the batch size of our forward pass.
tl;dr: the way inference is batched introduces non-determinism.
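Rough intuition for the batch-invariance point, as a toy example. This is pure numpy, nothing GPU-specific; it only shows that float32 reductions are order-dependent, which is the root cause the blog describes:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(10_000).astype(np.float32)

    # Naive left-to-right sum.
    sequential = np.float32(0.0)
    for v in x:
        sequential += v

    # Same values, reduced in 100-element blocks first, the way a kernel might
    # re-tile a reduction when the batch size (and thus work partitioning) changes.
    blocked = x.reshape(100, 100).sum(axis=1).sum()

    # float32 addition isn't associative, so the two results usually differ
    # in the low bits; downstream of an argmax, that can be enough to flip a token.
    print(sequential, blocked, bool(sequential == blocked))

The real kernels are more involved, but the mechanism is the same: change the batch, change the work partitioning, change the reduction order, change the low bits.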
doctorpangloss•48m ago
“Determinism is necessary for compliance”
Says who?
The stuff you comply with changes in real time. How’s that for determinism?
raffisk•8m ago
Author here. Fair point, regs are a moving target. But FSB/BIS/CFTC explicitly require reproducible outputs for audits (no random drift in financial reports). At the very least, determinism buys traceability, even when the rules themselves update.
Most groups I work with stick to traditional automation/rules systems, but top-down mandates are pushing them toward frontier models for general tasks—which then get plugged into these workflows. A lot stays in sandbox, but you'd be surprised what's already live in fin services.
The authorities I cited (FSB/BIS/CFTC) literally just said last month that AI monitoring is "still at early stage" cc https://www.fsb.org/2024/11/the-financial-stability-implicat...
Curious how you'd tackle that real-time changing reg?
throwdbaaway•20m ago
It's the reasoning. During the reasoning process, the top few tokens have very similar or even identical logprobs. With gpt-oss-120b, you should be able to get deterministic output by turning off reasoning, e.g. by appending:
Of course, the model will be less capable without reasoning.
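A toy illustration of the near-tie effect, with made-up logits and an illustrative noise magnitude rather than anything taken from the model:

    import numpy as np

    # Two candidate tokens with (near-)identical logits, as can happen mid-reasoning.
    logits = np.array([12.3456, 12.3456], dtype=np.float32)

    # A perturbation on the order of float32 rounding error, e.g. from a
    # batch-dependent reduction order elsewhere in the forward pass.
    noise = np.array([0.0, 1e-5], dtype=np.float32)

    print(int(np.argmax(logits)))          # 0: ties break toward the first index
    print(int(np.argmax(logits + noise)))  # 1: the tiny perturbation flips the pick

Once one token flips, every token after it diverges too, which is why long reasoning traces amplify the effect.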
measurablefunc•1h ago
This is b/c these things are Markov chains. You cannot expect consistent results & outputs.
SrslyJosh•31m ago
Using an LLM for a "financial workflow" makes as much sense as integrating one with Excel. But who needs correct results when you're just working with money, right? ¯\_(ツ)_/¯
mirekrusin•28m ago
Humans are non-deterministic, yet they use Excel, work with financial workflows, and deal with money.
ACCount37•30m ago
Did you actually read what the paper was about before leaving a low quality comment?