
LLMs Bring New Nature of Abstraction

https://martinfowler.com/articles/2025-nature-abstraction.html
49•hasheddan•3d ago

Comments

dingnuts•6h ago
If LLMs are the new compilers, enabling software to be built with natural language, why can't LLMs just generate bytecode directly? Why generate HLL code at all?
Uehreka•6h ago
Why would the ability to generate source code imply the ability to generate bytecode? Also you wouldn’t want that, humans can’t review bytecode. I think you may be taking the metaphor too literally.
pixl97•5h ago
I don't think they are... LLMs can learn from anything that's been tokenized. Feed in enough decompiled and labeled data with the bytecode and it's likely the machine will be able to dump out an executable. I wouldn't be surprised if an LLM could output a valid ELF right now, except that the relevant tokens may have been stripped in pretraining.
bird0861•4h ago
https://ai.meta.com/research/publications/meta-large-languag...
skydhash•2h ago
Because the semantics of each term in a programming language map pretty much 1:1 onto a sequential, logic-based ordering of terms in bytecode (which is still code).

> Also you wouldn’t want that, humans can’t review bytecode

The one great thing about automation (and formalism) is that you don't have to continuously review it. You vet it once, then you add another mechanism that monitors for wrong output/behavior. And now, the human is free for something else.
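
As a minimal illustration of that 1:1 mapping, CPython's standard dis module will show the bytecode each source construct compiles to (exact opcode names vary slightly between CPython versions):

    import dis

    def clamp(x, lo, hi):
        # Each source-level construct below compiles to a short,
        # regular run of bytecode instructions.
        if x < lo:
            return lo
        if x > hi:
            return hi
        return x

    # `x < lo` becomes a LOAD_FAST/LOAD_FAST/COMPARE_OP sequence,
    # and each `return` ends in RETURN_VALUE.
    dis.dis(clamp)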

VinLucero•6h ago
I agree here. English (human language) to bytecode is the future.

With reverse translation as needed.

thfuran•5h ago
English is a pretty terrible language for describing the precise behavior of a program.
demirbey05•4h ago
How will you detect or fix hallucinated assembly code?
imiric•12m ago
The vibe coders would tell you: you don't. You test the program, or ask the LLM to write tests for you, and if there are any issues, you ask it to fix them. And you do that in a loop until there are no more issues.

I imagine that at some point they must wonder what their role is, and why the LLM couldn't do all of that independently.
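
A minimal sketch of that loop, assuming pytest as the test runner and a file name of program.py; generate_fix is a hypothetical stand-in for the actual LLM call:

    import subprocess

    def generate_fix(source: str, error: str) -> str:
        # Hypothetical stand-in for the LLM call; returns revised source.
        raise NotImplementedError("wire up a model here")

    def vibe_loop(source: str, max_iterations: int = 5) -> str:
        # Write the code, run the tests, feed any failure back, repeat.
        for _ in range(max_iterations):
            with open("program.py", "w") as f:
                f.write(source)
            result = subprocess.run(["python", "-m", "pytest", "-q"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return source  # tests pass: declare victory
            source = generate_fix(source, result.stdout + result.stderr)
        raise RuntimeError("still failing after max_iterations")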

akavi•3h ago
Same reason humans use high-level languages: limited context windows.

Both humans and LLMs benefit from non-leaky abstractions: they offload low-level details and free up mental or computational bandwidth for higher-order concerns. When, say, implementing a permissioning system for a web app, I can't simultaneously track memory allocation and how my data model choices align with product goals. Abstractions let me ignore the former to "spend" my limited intelligence on the latter; same with LLMs and their context limits.

Yes, more intelligence (at least in part) means being able to handle larger contexts, and maybe superintelligent systems could keep everything "in mind." But even then, abstraction likely remains useful in trading depth for surface area. Chris Sawyer was brilliant enough to write RollerCoaster Tycoon in assembly, but probably wouldn't be able to do the same for Elden Ring.

(Also, at least until LLMs are so transcendentally intelligent they outstrip our ability to understand their actions, HLLs are much more verifiable by humans than assembly is. Admittedly, this might be a time-limited concern)

ptx•6h ago
> As we learn to use LLMs in our work, we have to figure out how to live with this non-determinism [...] but there will also be things we'll gain that few of us understand yet.

No thanks. Let's not give up determinism for vague promises of benefits "few of us understand yet".

aradox66•5h ago
Determinism isn't always ideal. Determinism may trade off with things like accuracy, performance, etc. There are situations where the tradeoff is well worth it.
pixl97•5h ago
Yep, there are plenty of things that aren't computable without burning all the entropy in the visible universe, yet if you swap in a heuristic you can get a good-enough answer in polynomial time.

Weather forecasts are a good example of this.
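
A minimal sketch of that tradeoff, using TSP rather than weather: exhaustive search is exact but factorial, while the greedy nearest-neighbor heuristic is polynomial, usually good enough, and (as the reply below notes) still deterministic:

    import math
    from itertools import permutations

    def tour_length(points, order):
        return sum(
            math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
            for i in range(len(order))
        )

    def exact_tsp(points):
        # Exhaustive search: O(n!) tours, hopeless beyond ~12 cities.
        return min(permutations(range(len(points))),
                   key=lambda o: tour_length(points, o))

    def nearest_neighbor_tsp(points):
        # Greedy heuristic: O(n^2), not optimal, usually good enough.
        unvisited = set(range(1, len(points)))
        tour = [0]
        while unvisited:
            last = tour[-1]
            nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2), (4, 5)]
    print(tour_length(cities, exact_tsp(cities)))             # optimal tour
    print(tour_length(cities, nearest_neighbor_tsp(cities)))  # near-optimal, fast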

josefx•4h ago
Most heuristics are still deterministic.
betenoire•4h ago
I understand there are probabilities and shortcuts in weather forecasts.... but what part is non-deterministic?
aradox66•5h ago
Also, at temperature 0 LLMs can behave deterministically! Non-determinism isn't necessarily quite the right word for the kind of abstraction LLMs provide.
josefx•4h ago
That runs into the issue that nobody runs LLMs with a temperature of zero.
bird0861•4h ago
Not true. Perhaps very few do, but some do in fact run them at 0. I've done it myself. There are many small models that will gladly perform well in QA with temp 0. Of course there are few situations where this is the recommended setup -- we all know RAG takes less than a billion parameters now to do effectively. But nevertheless there are people who do this, and there are plausibly some use cases for it.
bird0861•4h ago
Quite pleased you mentioned this. I would like to add that transformer LLMs can be Turing complete; see the work of Franz Nowak and his colleagues (I think there were at least one or two other papers by other teams, but I read Nowak's the closest, as it was the latest one when I became aware of this).
gpm•3h ago
Even at temperature != 0 it's trivial to just use a fixed seed in the RNG... it's just a computer being used in a naive way, not even multi-threaded (i.e. with race conditions).

I wouldn't be surprised to find out different stacks multiply fp16s slightly differently or something, so getting determinism across machines might take some work... but there's really nothing magic going on here.
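
A minimal sketch of that point: temperature-scaled softmax sampling, the usual last step of LLM decoding, is fully reproducible once the RNG is seeded (toy logits, pure stdlib):

    import math
    import random

    def sample_token(logits, temperature, rng):
        # Temperature-scaled softmax sampling over a toy vocabulary.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]
        return rng.choices(range(len(logits)), weights=weights)[0]

    logits = [2.0, 1.0, 0.5, -1.0]
    rng_a = random.Random(42)
    rng_b = random.Random(42)
    a = [sample_token(logits, 0.8, rng_a) for _ in range(10)]
    b = [sample_token(logits, 0.8, rng_b) for _ in range(10)]
    print(a == b)  # True: temperature > 0, yet fully reproducible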

billyp-rva•4h ago
Nobody was stopping anyone from making compilers that introduced randomly different behavior every time you ran them. I think it's telling that this didn't catch on.
gpm•4h ago
I think there was actually a very big push to stop people from doing that - https://en.wikipedia.org/wiki/Reproducible_builds

There were definitely compilers that used things like data structures with an unstable iteration order, resulting in non-determinism, and people worked to stop other people from doing that. This behavior would result in non-deterministic performance everywhere, and, combined with race conditions or just undefined behavior, other random non-deterministic behaviors too.

At least in part this was achieved with techniques that can be used to make LLMs deterministic too, like seeding the RNGs in hash tables deterministically. LLMs are in that sense no less deterministic than iterating over a hash table (they are just a bunch of matrix multiplications with a sampling procedure at the end, after all).
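
A minimal sketch of the hash-seeding point in CPython, where string hashing is randomized per process unless PYTHONHASHSEED pins it:

    import os
    import subprocess
    import sys

    # Set iteration order can differ between runs unless the seed is pinned.
    code = "print(list({'alpha', 'beta', 'gamma', 'delta'}))"

    for seed in ("0", "0", "random"):
        out = subprocess.run(
            [sys.executable, "-c", code],
            env={**os.environ, "PYTHONHASHSEED": seed},
            capture_output=True, text=True,
        )
        print(f"PYTHONHASHSEED={seed}: {out.stdout.strip()}")
    # The two seed=0 runs always match each other; the "random" run may not.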

danenania•3h ago
I think this gets at a major hurdle that needs to be overcome for truly human-level AGI.

Because the human brain is also non-deterministic. If you ask a software engineer the same question on different days, you can easily get different answers.

So I think what we want from LLMs is not determinism, just as that's not really what you'd want from a human. It's more about convergence. Non-determinism is ok, but it shouldn't be all over the map. If you ask the engineer to talk through the best way to solve some problem on Tuesday, then you ask again on Wednesday, you might expect a marginally different answer considering they've had time to think on it, but you'd also expect quite a lot of consistency. If the second answer went in a completely different direction, and there was no clear explanation for why, you'd probably raise an eyebrow.

Similarly, if there really is a single "right" answer to a question, like something fact-based or where best practices are extremely well established, you want convergence around that single answer every time, to the point that you effectively do have determinism in that narrow scope.

LLMs struggle with this. If you ask an LLM to solve the same problem multiple times in code, you're likely to get wildly different approaches each time. Adding more detail and constraints to the prompt helps, but it's definitely an area where LLMs are still far behind humans.
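
A minimal sketch of measuring that convergence; fake_llm simulates a model's skewed answer distribution, where a real harness would call an actual LLM API:

    import random
    from collections import Counter

    def fake_llm(prompt: str, rng: random.Random) -> str:
        # Simulated model with a skewed answer distribution; a real
        # harness would call an actual LLM API here instead.
        answers = ["use a hash map", "use a hash map", "use a hash map",
                   "sort then scan", "use a trie"]
        return rng.choice(answers)

    def convergence(prompt: str, n: int = 100) -> float:
        # Fraction of samples agreeing with the modal answer:
        # 1.0 is effectively deterministic; near 1/k is "all over the map".
        rng = random.Random(0)
        counts = Counter(fake_llm(prompt, rng) for _ in range(n))
        return counts.most_common(1)[0][1] / n

    print(convergence("fastest way to deduplicate a list?"))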

sgt101•5h ago
LLMs are deterministic.

If you run an LLM with optimization turned on, on an NVIDIA GPU, then you can get non-deterministic results.

But this is a choice.
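
A sketch of the knobs PyTorch exposes for exactly this choice, per its reproducibility notes (the CUBLAS variable is needed for some CUDA ops):

    import os
    import torch

    # Some CuBLAS ops need this set before CUDA initialization
    # to be deterministic.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

    torch.manual_seed(0)  # pin framework-controlled randomness

    # Fail loudly if any op would fall back to a non-deterministic kernel.
    torch.use_deterministic_algorithms(True)

    # Stop cuDNN from benchmarking for the fastest (and possibly
    # different) convolution algorithm on each run.
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True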

bwfan123•5h ago
Can authors of such articles at least cite Dijkstra's "On the foolishness of 'natural language programming'" [1], which appeared eons ago and presents an argument against the "English is a programming language" hype?

[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

awb•4h ago
Interesting read and thanks for sharing.

Two observations:

1. Natural language appears to be the starting point of any endeavor.

2.

> It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable, and, in view of the history of mankind, it may not be overly pessimistic to guess that to do the job well enough would require again a few thousand years.

LLMs are trying to replicate all of the intellect in the world.

I’m curious if the author would consider that these lofty caveats may be more plausible today than they were when the text was written.

bwfan123•3h ago
> I’m curious if the author would consider that these lofty caveats may be more plausible today than they were when the text was written.

What is missed by many, and highlighted in the article, is the following: there is no way to be "precise" with natural languages. The "operational definition" of precision involves formalism. For example, I could describe to you in English how an algorithm works, and maybe you understand it. But for you to precisely run that algorithm requires some formal definition of a machine model and the steps involved to program it.

The machine model for English is undefined! And this could be considered a feature and not a bug: it allows a rich world of human meaning to be communicated, whereas formalism limits what can be done and communicated within that framework.
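
A minimal sketch of the gap: an English description of Euclid's algorithm next to the formal version that forecloses every ambiguity:

    # English: "keep replacing the larger number with the remainder of
    # dividing it by the smaller, until one of them is zero."  That
    # sentence leaves open: what about negatives? zero inputs? in what
    # order do we compare? The formal version settles every question:

    def gcd(a: int, b: int) -> int:
        while b:
            a, b = b, a % b
        return a

    print(gcd(48, 18))  # 6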

skydhash•3h ago
I forgot where I read it, but the reason natural languages work so well for communication is that their terms are labels for categories instead of identifiers. You can concatenate enough of them to refer to a singleton, but for the person in front of you, the result can still pick out many items or an empty set. Some labels may not even exist in their context.

So when we want a deterministic process, we invent a set of labels where each refers to a singleton, alongside a set of rules that specify how to describe their transformations. Then we invent machines that can interpret those instructions. The main advantage is that we know the possible outputs (assuming good reliability) before we even have to act.

LLMs don't work so well in that regard: while they have a perfect embedding of textual grammar rules, they don't have a good representation of what those labels refer to. All they have are relations between labels and how likely they are to be used together. But not what sets those labels refer to, or how the items in those sets interact.

akavi•2h ago
> All they have are relations between labels and how likely they are to be used together. But not what sets those labels refer to, or how the items in those sets interact.

Why would "membership in a set" not show up as a relationship between the items and the set?

In fact, it's not obvious to me that there's any semantic meaning not contained in the relationship between labels.

skydhash•2h ago
Because the training data does not have it. We have the label "rock", which intersects with other labels like "hard" and "earth". But the item itself has more attributes that we don't bother assigning labels to; instead, we just experience them. So the label gets linked to some qualia. We can assume that there's a collective intersection of the qualia that the label "rock" refers to.

LLMs don't have access to these hidden attributes (think of how to describe "blue" to someone born blind). They may understand that color is a property of objects, or that "black" is the color you wear to funerals in some locations. But ask them to describe the color of a specific object and the output is almost guaranteed to be wrong. Unless they're at a funeral in one of those locations, so they can predict that most people wear black. But that's a guess, not an informed answer.

akavi•2h ago
But for most human endeavors, "operational precision" is a useful implementation detail, not a fundamental requirement.

We want software to be operationally precise because it allows us to build up towers of abstractions without needing to worry about leaks (even the leakiest software abstraction is far more watertight than any physical "abstraction").

But, at the level of the team or organization that's _building_ the software, there's no such operational precision. Individuals communicating with each other drop down to such precision when useful, but at any endeavor larger than 2-3 people, the _vast_ majority of communication occurs in purely natural language. And yet, this still generates useful software.

The phase change of LLMs is that they're computers that finally are "smart" enough to engage at this level. This is fundamentally different from the world Dijkstra was living in.

3abiton•1h ago
On that note, I wonder if having LLM agents communicate with each other in a human language rather than in latent space is a big limitation.
moregrist•4h ago
This would require Martin Fowler to know about this article and to appreciate that it might be important.

He might, but I am not encouraged by his prior work or that of his contemporary Agile boosters. Regardless of how you feel about the Agile Manifesto (my feelings are quite mixed), these boosters have, over the last 25-ish years, tended to love citing Agile and OOP things while rarely looking beyond them to historical or fundamental CS.

They found a lucrative niche telling management why their software process is broken and how to fix it. And why the previous attempt at Agile wasn't really Agile, and so things are still broken.

Perhaps now they have to ride the AI hype train, too. I can only guess that whatever AI-driven lucrative consulting / talk-circuit may emerge from this will also be able to explain why the last AI attempt wasn't really AI and that's why things are still broken.

bgwalter•4h ago
Dupe of https://news.ycombinator.com/item?id=44366904 , which was abruptly sinking from the front page.
w10-1•3h ago
> I've not had the opportunity to do more than dabble with the best Gen-AI tools, but I'm fascinated as I listen to friends and colleagues share their experiences. I'm convinced that this is another fundamental change

So: impressions of impressions is the foundation for a declaration of fundamental change?

What exactly is this abstraction? why nature? why new?

RESULT: unfounded, ill-formed expression

agentultra•3h ago
Abstraction? Hardly.

What are the new semantics and how are the lower levels precisely implemented?

Spoken language isn’t precise enough for programming.

I’m starting to suspect what people are excited about is the automation.

skydhash•2h ago
But that's not really automation.

It's more like search, then acting on the first output. You don't know what's going to come out; you just hope it will be good. The issue is that the query is fed to the output function, so what you get is a mixture of what you told it and what was stored. Great if you can separate the two afterwards, not so much if the output is tainted by the query.

With automation, what you seek is predictability. Not an echo chamber.

ADDENDUM

If we continue with the echo chamber analogy:

Prompt Engineering: Altering your voice so that the echo you get back is more pleasant

System Prompt: The echo chamber's builders altering the configuration to get the above effects

RAG: Sound effects

Agent: Replacing yourself in front of the echo chamber with someone/something that acts based on the echo.

abeppu•2h ago
I think we should shift the focus from adapting LLMs to our purposes (e.g. external tool use) and adapting how we think about software, toward getting models that internally understand compilation and execution. Rather than merely building around next-token prediction, the industry should take advantage of the fact that software in particular provides a cheap path to learning a domain-specific "world model".

Currently I sometimes get predictions where a variable that doesn't exist gets used or a method call doesn't match the signature. The text of the code might look pretty plausible but it's only relatively late that a tool invocation flags that something is wrong.

Imagine if, instead of just code text, we trained a model on (code text, IR, bytecode) tuples, (bytecode, fuzzer inputs, execution trace) examples, and (trace, natural language description) annotations. The model would need to understand not just which token sequences seem likely but (a) what the code compiles to, (b) what the code _does_, and (c) how a human would describe this behavior. Bonus points for some path to tying in pre/post conditions, invariants, etc.

"People need to adapt to weaker abstractions in the LLM era" is a short term coping strategy. Making models that can reason about abstractions in a much tighter loop and higher fidelity loop may get us code generation we can trust.

yuvadam•1h ago
These types of errors are not only rare in one-shots, but also very easy to fix in subsequent iterations - e.g. Claude Code with Sonnet rarely makes these errors.

JavaScript Trademark Update

https://deno.com/blog/deno-v-oracle4
342•thebeardisred•3h ago•118 comments

MCP: An (Accidentally) Universal Plugin System

https://worksonmymachine.substack.com/p/mcp-an-accidentally-universal-plugin
429•Stwerner•8h ago•188 comments

BusyBeaver(6) Is Quite Large

https://scottaaronson.blog/?p=8972
156•bdr•5h ago•110 comments

Life of an inference request (vLLM V1): How LLMs are served efficiently at scale

https://www.ubicloud.com/blog/life-of-an-inference-request-vllm-v1
50•samaysharma•4h ago•3 comments

2025 ARRL Field Day

https://www.arrl.org/field-day
55•rookderby•3h ago•16 comments

Addictions Are Being Engineered

https://masonyarbrough.substack.com/p/engineered-addictions
290•echollama•7h ago•175 comments

We ran a Unix-like OS Xv6 on our home-built CPU with a home-built C compiler

https://fuel.edby.coffee/posts/how-we-ported-xv6-os-to-a-home-built-cpu-with-a-home-built-c-compiler/
198•AlexeyBrin•10h ago•17 comments

Schizophrenia Is the Price We Pay for Minds Poised Near the Edge of a Cliff

https://www.psychiatrymargins.com/p/schizophrenia-is-the-price-we-pay
30•Anon84•1h ago•19 comments

Show HN: Vet – A tool for safely running remote shell scripts

https://getvet.sh
29•a10r•2h ago•8 comments

Unheard works by Erik Satie to premiere 100 years after his death

https://www.theguardian.com/music/2025/jun/26/unheard-works-by-erik-satie-to-premiere-100-years-after-his-death
165•gripewater•12h ago•44 comments

Show HN: AGL a toy language that compiles to Go

https://github.com/alaingilbert/agl
20•alain_gilbert•3d ago•3 comments

Memory Safe Languages: Reducing Vulnerabilities in Modern Software Development [pdf]

https://media.defense.gov/2025/Jun/23/2003742198/-1/-1/0/CSI_MEMORY_SAFE_LANGUAGES_REDUCING_VULNERABILITIES_IN_MODERN_SOFTWARE_DEVELOPMENT.PDF
26•todsacerdoti•4h ago•0 comments

NovaCustom – Framework Laptop alternative focusing on privacy

https://novacustom.com/
23•CHEF-KOCH•4h ago•26 comments

Show HN: I'm an airline pilot – I built interactive graphs/globes of my flights

https://jameshard.ing/pilot
1407•jamesharding•1d ago•189 comments

Sirius: A GPU-native SQL engine

https://github.com/sirius-db/sirius
62•qianli_cs•8h ago•8 comments

Finding Peter Putnam

https://nautil.us/finding-peter-putnam-1218035/
60•dnetesn•12h ago•58 comments

Parsing JSON in Forty Lines of Awk

https://akr.am/blog/posts/parsing-json-in-forty-lines-of-awk
66•thefilmore•7h ago•24 comments

The Book Cover Trend of Text on Old Paintings

https://www.nytimes.com/2025/06/21/books/review/book-cover-trends.html
7•zdw•3d ago•1 comment

ZeQLplus: Terminal SQLite Database Browser

https://github.com/ZetloStudio/ZeQLplus
46•amadeuspagel•10h ago•10 comments

Evaluating Long-Context Question and Answer Systems

https://eugeneyan.com/writing/qa-evals/
7•swyx•3d ago•0 comments

Reproducible Builds

https://en.wikipedia.org/wiki/Reproducible_builds
4•optimalsolver•34m ago•0 comments

The Great Illusion: When We Believed BeOS Would Save the World

https://www.desktoponfire.com/haiku_inc/782/the-great-illusion-when-we-believed-beos-would-save-the-world-and-maybe-it-was-right/
10•naves•2h ago•12 comments

IDF officers ordered to fire at unarmed crowds near Gaza food distribution sites

https://www.haaretz.com/israel-news/2025-06-27/ty-article-magazine/.premium/idf-soldiers-ordered-to-shoot-deliberately-at-unarmed-gazans-waiting-for-humanitarian-aid/00000197-ad8e-de01-a39f-ffbe33780000
966•ahmetcadirci25•15h ago•698 comments

Lago (Open-Source Usage Based Billing) is hiring for ten roles

https://www.ycombinator.com/companies/lago/jobs
1•AnhTho_FR•10h ago

Why the moon shimmers with shiny glass beads

https://phys.org/news/2025-06-moon-shimmers-shiny-glass-beads.html
11•PaulHoule•3d ago•2 comments

Lossless LLM 3x Throughput Increase by LMCache

https://github.com/LMCache/LMCache
129•lihanc111•4d ago•37 comments

No One Is in Charge at the US Copyright Office

https://www.wired.com/story/us-copyright-office-chaos-doge/
104•rntn•5h ago•66 comments

After successfully entering Earth's atmosphere, a European spacecraft is lost

https://arstechnica.com/space/2025/06/a-european-spacecraft-company-flies-its-vehicle-then-loses-it-after-reentry/
48•rbanffy•3d ago•20 comments

The Death of the Middle-Class Musician

https://thewalrus.ca/the-death-of-the-middle-class-musician/
4•pseudolus•36m ago•0 comments