
LLMs Are Not a Higher Level of Abstraction

https://www.lelanthran.com/chap15/content.html
35•lelanthran•7h ago

Comments

jqpabc123•6h ago
In other words, LLMs are probabilistic, not deterministic.
sscaryterry•6h ago
Dare I say, so are humans?
jqpabc123•5h ago
This used to be a big reason why we used computers --- to help eliminate the probability of error.

But apparently, not so much any more.

somewhereoutth•3h ago
Right, it was the perfect match: Humans for fuzzy touchy feely stuff, computers for hard edged correct calculations. How have we managed to screw this up so badly?
irishcoffee•1h ago
I think the big unmentioned elephant in the room is the gambling/dopamine aspect of using an LLM. It’s to the point where people at $dayjob joke about it… but they’re not joking. That’s how it got screwed up so badly.

We have a bunch of engineers paying money to open loot boxes and they get visibly upset when they run out of tokens.

LLM companies have done an absolutely brilliant job of figuring out how to burn more tokens quickly, couch it as “more advanced” and people throw money at them.

I realize this wasn’t the thrust of your point, but tangentially, we fucked it up so badly because people desperately want to ignore this bit, and instead of looking at these tools analytically, there are the ardent defenders and the staunchly opposed… much like every other topic under the sun these days.

I use the free stuff work pays for, and I’ve never hit any token limit or anything like that. But I’m also trying extremely hard to ensure my skillsets don’t atrophy. I just use the web interface and ask questions. I have no interest in tying my development experience directly into an LLM, not after what I’ve seen at work over the last few weeks.

mpyne•8m ago
Digital computers were named after the humans whose jobs they automated out of existence.

They were invented to reduce the cost of computation, not to eliminate the probability of error per se. Ask a Windows 11 user; they'll tell you computers still make errors.

cyanydeez•3h ago
This makes sense, but note that once you're past the machine-code level you're ignoring the compiler, which isn't an abstraction; it's the root. Setting that part of the missive aside: going from C to Python, different compilers do emit different machine code.

C and Python each have multiple compilers and implementations, so the same source code can yield different output. There's determinism within the same compiler, but add in different architectures and the machine-code output is definitely more varied than presented.

That's still manageable; but add in all the dependencies and you get a more florid complexity.

So really, it's a shitty abstraction rather than an inaccurate analogy. If you lined them up in levels, there could be some universe where they'd be a valid abstraction. But it's not the current universe, because we know the models function non-deterministically.

I'd posit that if there were a 'turtles all the way down' abstraction for the LLM, it simply comes from the other end, the one where the human mind starts entering the picture.

bigstrat2003•2h ago
You're right, but the reality is that the people who are excited about LLMs don't care about determinism. They are happy to hand off the thinking to a third party, even if it will give wrong answers they don't notice.
conorbergin•2h ago
LLMs are deterministic, the same model under the same conditions will produce the same output, unless some randomness is purposefully injected. Neural networks in general can be thought of as universal function approximators.
0-_-0•1h ago
You're being downvoted, but you're right. Determinism is a different concept and doesn't characterise LLMs well. You can have deterministic random number generators for example.
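
A seeded PRNG is exactly such a deterministic random number generator. A minimal Python sketch (the seed value 42 and the `randint` range are arbitrary choices for illustration):

```python
import random

# Two generators seeded identically produce the same "random" stream:
# random in distribution, but fully deterministic in execution.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 9) for _ in range(5)]
seq_b = [b.randint(0, 9) for _ in range(5)]
assert seq_a == seq_b  # identical on every run
```
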
alansaber•1h ago
Yes, there's a good Thinking Machines Lab blog post about this
2ndorderthought•1h ago
That's not really true. If you turn a few knobs you can make them deterministic. Namely setting temperature to zero, and turning off all history. But none of the cloud providers do this. Because it's not a product as far as they are concerned. So in practice - not so much.
maplethorpe•1h ago
Can someone explain why this is? Do LLMs somehow contain a true random number generator? Why wouldn't they produce the same outputs given the same inputs (even temperature)?

edit: I'm not talking about an LLM as accessed through a provider. I'm just talking about using a model directly. Why wouldn't that be deterministic?

2ndorderthought•1h ago
Yea sure. So temperature is baked into these LLMs, and when it isn't zero it increases the probability of taking a different path when decoding tokens, whether at a provider or downloaded on your own machine.

Technically even when the temperature is 0 it's not deterministic but it's more likely to be... You can have ties in probabilities for generating the next words. And floating point noise is real.

All these models are doing is guesstimating the next token to say.
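
A toy illustration of what temperature does to a next-token distribution (the three logit values here are made up; a real model produces one score per vocabulary entry):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before normalizing: low temperature
    # sharpens the distribution toward the top token, high temperature
    # flattens it so unlikely tokens get picked more often.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy next-token scores
cool = softmax_with_temperature(logits, 0.1)
warm = softmax_with_temperature(logits, 2.0)
# At T=0.1 nearly all mass sits on the top token; at T=2.0 it spreads out.
assert cool[0] > 0.99
assert warm[0] < 0.5
```
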

evrydayhustling•58m ago
An LLM model itself -- that is, the weights and the mathematical functions linking them -- does not tell you exactly how to train from data, nor how to generate an output. Instead, it describes a function providing relative likelihood(output | input).

Deciding how to pick a particular output given that likelihood function is left as an exercise for the user, which we call inference.

One obvious choice is to keep picking the highest-likelihood token, feed it into the model, and get another -- on repeat. This is what most algorithms call "temperature=0". But doing this token after token can lead to boring output, or steer you into pathological low-probability sequences like a set of endless repeats.

So, the current SOTA is to intentionally introduce a random factor (temperature>0) to the sampling process -- along with other hacks, like explicit suppression of repeats.

anon373839•46m ago
The model outputs a probability distribution for the next token, given the sequence of all previous tokens in the context window. It’s just a list of floats in the same order as the list of tokens that the tokenizer uses.

After that, a piece of software that is NOT the LLM chooses the next token. This is called the sampler. There are different sampling parameters and strategies available, but if you want repeatable* outputs, just take the token with the highest probability number.

* Perfect determinism in this sense is difficult to achieve because GPU calculations naturally have a minor bit of nondeterminism. But you can get very close.
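
The model/sampler split described above can be sketched in a few lines. Everything here is a hypothetical stand-in: `fake_model` plays the role of the LLM's forward pass, and the two samplers show the repeatable (greedy) versus stochastic choice:

```python
import random

def fake_model(context):
    # Stand-in for the LLM forward pass: maps a context to a
    # next-token probability distribution. A real model returns
    # thousands of floats, one per vocabulary entry.
    return {"the": 0.5, "a": 0.3, "cat": 0.2}

def greedy_sample(probs):
    # Repeatable: always pick the highest-probability token.
    return max(probs, key=probs.get)

def random_sample(probs, rng):
    # Stochastic: draw a token proportionally to its probability.
    return rng.choices(list(probs), weights=list(probs.values()))[0]

probs = fake_model("some prompt")
assert greedy_sample(probs) == "the"  # same answer on every call
token = random_sample(probs, random.Random())  # may differ run to run
assert token in probs
```
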

2ndorderthought•41m ago
I'm not sold that an LLM is an LLM without a sampler, but it's not worth quibbling over. It's part of the statistical model anyway.
slashdave•12m ago
Eh, conceptually true, but in practice, it is rather hard to get any decent performance out of a GPU and still produce a deterministic answer.

And in any case, setting the temperature to zero will not produce a useful result, unless you don't mind your LLM constantly running into infinite loops.

mrob•1h ago
Whenever somebody calls LLMs "non-deterministic", assume they meant "chaotic", in the informal sense of being a system where small changes of input can cause large changes to output, and the only way to find out if it will happen is by running the full calculation.

For many applications, this is equally troublesome as true non-determinism.

legerdemain•1h ago
This is absurd. The author misrepresents the type of "abstraction" that people mean. This abstraction ladder goes as follows:

  - contributing individually
  - contributing as a tech lead
  - contributing as a technical manager
  - leaving the occupation to open a vanity business, such as a gastropub or horse shoeing service
maplethorpe•1h ago
Abstraction has a specific meaning in computer programming. I don't think he's misrepresenting it.

https://en.wikipedia.org/wiki/Abstraction_(computer_science)

LeCompteSftware•1h ago
OP is being a bit tongue-in-cheek, I believe they mean that some vibe coders really want to be abstracted away from their own jobs, and are very much not interested in computer-scientific abstraction.
maplethorpe•1h ago
Oh.
dimtion•1h ago
I'm not sure why people struggle with the fact that an abstraction can be built on top of a non-deterministic and stochastic system. Many such abstractions already exist in the world we live in.

Take sending a packet over a noisy, low SNR cell network. A high number of packets may be lost. This doesn't prevent me, as a software developer, from building an abstraction on top of a "mostly-reliable" TCP connection to deliver my website.

There are times when the service doesn't work, particularly when the packet loss rate is too high. I can still incorporate these failures into my mental model of the abstraction (e.g through TIMEOUTs, CONN_ERRs…).

Much of engineering and reliability history revolves around building mathematical models on top of an unpredictable world. We are far from solving this problem with LLMs, but this doesn't prevent me from thinking of LLMs as a new level of abstraction that can edit and transform code.
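
The TCP analogy above can be sketched as a retry loop over a simulated lossy channel. This is not real networking code; `unreliable_send` and the 30% loss rate are made-up stand-ins for the noisy layer:

```python
import random

rng = random.Random(0)  # seeded so the sketch is reproducible

class TransientError(Exception):
    """Stand-in for a dropped packet."""

def unreliable_send(payload, loss_rate=0.3):
    # Simulates a lossy channel: delivery fails with probability loss_rate.
    if rng.random() < loss_rate:
        raise TransientError("packet lost")
    return f"delivered: {payload}"

def reliable_send(payload, retries=10):
    # The abstraction over the lossy layer: retry transient failures,
    # and surface a timeout only once the retry budget is exhausted --
    # the "known worst case" the reply below talks about.
    for _ in range(retries):
        try:
            return unreliable_send(payload)
        except TransientError:
            continue
    raise TimeoutError("connection failed")

assert reliable_send("hello") == "delivered: hello"
```
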

distalx•1h ago
A transmission error has a strictly contained, predictable blast radius. If a packet drops, the system knows exactly how to handle it: it throws a timeout, drops a connection, or asks for a retry. The worst-case scenario is known.

A reasoning error has an infinite, unpredictable blast radius. When an LLM hallucinates, it doesn't fail safely; it writes perfectly compiling code that does the wrong thing. That "wrong thing" might just render a button incorrectly, or it might silently delete your production database, or open a security backdoor.

You can build reliable abstractions over failures that are predictable and contained. You cannot abstract away unpredictable destruction.

yunwal•30m ago
> A reasoning error has an infinite, unpredictable blast radius.

Says who? It’s quite easy to limit the blast radius of a reasoning error.

td2•23m ago
I mean, if you're talking about packets, you're already one abstraction over the real data transmission, which is noisy. So bits can randomly flip, noise can be interpreted as bits, and bits can get lost. A much larger blast radius.
evrydayhustling•51m ago
Besides deeply unpredictable factors (like signal transmission), most users of higher-level abstractions do so without certainty about how the translation will be executed. For example, one of the main selling points of C when I was growing up was that you could write code independent of architecture, and leave the architecture-specific translation to assembly to the compiler!

Abstractions often embrace nondeterministic translation because lower-level details are unknown at time of expression -- which is the motivation for many LLM queries.

dominotw•27m ago
That would make sense if the AI said "fail, I don't know". Its active deception is what makes it difficult.
zadikian•14m ago
I'm fine with that. The part that makes it not really an abstraction is, you still deliver code in the end. It'd be different if your deliverable were prompt+conversation, and the code were merely an intermediate build artifact. Usually people throw away the convo. Some have tried making markdown files the deliverable instead, so far that doesn't really work.

It makes even less sense when people compare an LLM to a compiler. Imagine making a pull request that's just adding a binary because you threw the source code away.

mpyne•10m ago
The whole field of reproducible builds is only a field because compilers have historically had trouble producing binary artifacts with guaranteed provenance and binary compatibility, even when built from the same source code.

If I assign a bug fix ticket to a human developer on my team, I won't be able to precisely replicate how they go about solving the bug but for many bugs I can at least be assured that the bug will get solved, and that I understand the basic approach the assigned dev would use to troubleshoot and resolve the ticket.

This is an organizational abstraction but it's an abstraction just the same, leaky as it is.

yongjik•1h ago
It's orthogonal to whether LLMs can be a useful abstraction layer, but ...

I have a feeling that if LLMs were built on a deterministic technology, a lot of the current AI-is-not-intelligent crowd would be saying "These LLMs can only generate one answer given a question, which means they lack human creativity and they'll never be intelligent!"

madisonmay•1h ago
LLMs are not inherently non-deterministic during inference. I don't believe non-determinism implies lack of abstraction. Abstraction is simply hiding detail to manage complexity.
danpalmer•22m ago
Non-determinism is configurable at the level of the mathematical model, but current production systems do not support deterministic evaluation of LLMs.
calf•1h ago
There are a few things being confused here, because people are having to learn, re-learn, or re-discover basic computer science. Both formal specifications and informal ones, such as pseudocode (I balk imagining how many AI users might not know this term) or natural-language documentation, are forms of abstraction. Programming languages and underlying models of computation all enable varying degrees of hiding details or emphasizing important ideas and information. Human thought and language, and mathematics, are already examples of abstraction in general. LLMs thus also purport to provide a higher kind of abstraction (via a computational model alternative to Turing machines); the debate is whether it is a good one, whether its hallucinations make it unreliable, and so on.
Legend2440•34m ago
I don't agree with this take. Determinism is a nice property for abstractions to have, but it isn't necessary to be an abstraction.

And LLMs can handle very abstract concepts that could not possibly be encoded in C++, like the user's goal in using software.

The text mode lie: why modern TUIs are a nightmare for accessibility

https://xogium.me/the-text-mode-lie-why-modern-tuis-are-a-nightmare-for-accessibility
63•SpyCoder77•1h ago•19 comments

Agentic Coding Is a Trap

https://larsfaye.com/articles/agentic-coding-is-a-trap
86•ayoisaiah•2h ago•54 comments

BYOMesh – New LoRa mesh radio offers 100x the bandwidth

https://partyon.xyz/@nullagent/116499715071759135
246•nullagent•7h ago•79 comments

Let's Buy Spirit Air

https://letsbuyspiritair.com/
62•bjhess•1h ago•36 comments

DeepClaude – Claude Code agent loop with DeepSeek V4 Pro, 17x cheaper

https://github.com/aattaran/deepclaude
147•alattaran•2h ago•62 comments

The 'Hidden' Costs of Great Abstractions

https://jdgr.net/the-hidden-costs-of-great-abstractions
41•jdgr•1h ago•10 comments

Southwest Headquarters Tour

https://katherinemichel.github.io/blog/travel/southwest-headquarters-tour-2026.html
178•KatiMichel•8h ago•53 comments

US–Indian space mission maps extreme subsidence in Mexico City

https://phys.org/news/2026-04-usindian-space-mission-extreme-subsidence.html
84•leopoldj•2d ago•38 comments

A desktop made for one

https://isene.org/2026/05/Audience-of-One.html
227•xngbuilds•9h ago•92 comments

OpenAI's o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors

https://www.theguardian.com/technology/2026/apr/30/ai-outperforms-doctors-in-harvard-trial-of-eme...
269•donsupreme•1d ago•227 comments

Tar files made in macOS generate "xattr" errors when expanded in Linux

https://aruljohn.com/blog/macos-created-tar-files-linux-errors/
29•heresie-dabord•3d ago•19 comments

Introduction to Atom

https://validator.w3.org/feed/docs/atom.html
21•susam•2h ago•6 comments

New statue in London, attributed to Banksy, of a suited man, blinded by a flag

https://www.smithsonianmag.com/smart-news/attributed-to-banksy-a-new-statue-of-a-suited-man-blind...
257•dryadin•6h ago•259 comments

Using "underdrawings" for accurate text and numbers

https://samcollins.blog/underdrawings/
16•samcollins•2d ago•2 comments

Mercedes-Benz commits to bringing back physical buttons

https://www.drive.com.au/news/mercedes-benz-commits-to-bringing-back-phycial-buttons/
598•teleforce•10h ago•345 comments

Bad Connection: Global telecom exploitation by covert surveillance actors

https://citizenlab.ca/research/uncovering-global-telecom-exploitation-by-covert-surveillance-actors/
91•miohtama•8h ago•7 comments

Text-to-CAD

https://github.com/earthtojake/text-to-cad
73•softservo•2d ago•24 comments

I recreated the Apple Lisa computer inside an FPGA [video]

https://www.youtube.com/watch?v=8jNQDcpHc68
68•cyrc•7h ago•10 comments

Security through obscurity is not bad

https://mobeigi.com/blog/security/security-through-obscurity-is-not-bad/
114•mobeigi•10h ago•130 comments

Denuvo has been cracked in all single-player games it previously protected

https://www.tomshardware.com/video-games/pc-gaming/denuvo-has-been-bypassed-in-all-single-player-...
220•oceansky•5d ago•137 comments

Make your own microforest (2025)

https://ambrook.com/offrange/environment/a-forest-in-your-pocket
59•bookofjoe•5h ago•14 comments

I built my own hair electrolysis machine

https://www.scd31.com/posts/diy-hair-electrolysis-machine
174•y1n0•4d ago•45 comments

Lost in translation: The linguistic challenges facing N. Korean defectors (2025)

https://www.dailynk.com/english/lost-in-translation-the-linguistic-challenges-facing-n-korean-def...
30•spzb•2d ago•20 comments

Why TUIs Are Back

https://wiki.alcidesfonseca.com/blog/why-tuis-are-back/
248•rickcarlino•6h ago•280 comments

Automatic Brightness in Plasma

https://zamundaaa.github.io/wayland,display/2026/04/24/automatic-brightness.html
15•speckx•2d ago•4 comments

What is Z-Angle Memory and why is Intel developing it?

https://www.hpcwire.com/2026/02/05/what-is-z-angle-memory-and-why-is-intel-developing-it/
81•rbanffy•2d ago•35 comments

Metal Gear Solid 2's source code has been leaked on 4chan

https://www.thegamer.com/mgs2-hd-edition-source-code-massive-leak/
222•rishabhd•8h ago•91 comments

Show HN: Apple's SHARP running in the browser via ONNX runtime web

https://github.com/bring-shrubbery/ml-sharp-web
157•bring-shrubbery•15h ago•39 comments

How far behind is each major Chromium browser?

https://chromium-drift.pages.dev/
160•skaul•8h ago•57 comments