
LLMs could be, but shouldn't be compilers

https://alperenkeles.com/posts/llms-could-be-but-shouldnt-be-compilers/
65•alpaylan•2h ago

Comments

codingdave•2h ago
The discussions around this point are taking it too seriously, even when they are 100% correct. LLMs are not deterministic, so they are not compilers. Sure, if you specify everything - every tiny detail - you can often get them to mostly match. But not 100%. Even if you do fix that, at that point you are coding in English, which is an inefficient language for that level of detail in a specification. And even if you accept that problem, you have still gone to a ton of work just to fight the fundamental non-deterministic nature of LLMs.

It all feels to me like the guys who make videos of using electric drills to hammer in a nail - sure, you can do that, but it is the wrong tool for the job. Everyone knows the phrase: "When all you have is a hammer, everything looks like a nail." But we need to also keep in mind the other side of that coin: "When all you have is nails, all you need is a hammer." LLMs are not a replacement for everything that happens to be digital.

alpaylan•2h ago
I think the point I wanted to make was that even if it were deterministic (which you can technically make it to be, I guess?) you still shouldn’t live in a world where you’re guided by the “guesses” the model makes when solidifying your intent into concrete code. Discounting hallucinations (I know this is a big concession; I’m trying to make the argument from a disadvantaged point again), I think you need a stronger argument than determinism against someone who claims they can write in English and there’s no reason for code anymore; which is what I tried to make here. I get your point that I might be taking the discussion too seriously, though.
liveoneggs•1h ago
The future is about embracing absolute chaos. The great reveal of LLMs is that, for the most part, nothing actually mattered except the most shallow approximation of a thing.
wizzwizz4•1h ago
The great reveal of LLMs is that our systems of checks and balances don't really work, and allow grifters to thrive, but despite that most people were actually trying to do their jobs properly. Perhaps nothing matters to you except the most shallow approximation of a thing, but there are usually people harmed by such negligence.
skydhash•1h ago
Imagine if the amount of a bank transfer did not matter and could only be an approximation, and you could approximate the selected account too. Or the system monitoring the temperature of blood storage for transfusion…

Often it seems like tech maximalists are the most against tech reliability.

snovv_crash•52m ago
No need to be so practical.

I suggest that when they dereference a pointer, it can land a bit forward or backward in memory, as long as it is mostly correct.

SecretDreams•51m ago
Let's give people a choice. My banking will be deterministic, others can have probabilistic banking. Every so often, they transfer me some money by random chance, but at least they can say their banking is run by LLMs. Totally fair trade.
liveoneggs•3m ago
I'm just as upset as you are about it, believe me. Unfortunately I have to live in the world as I see it and what I've observed in the last 18-ish months is a complete breakdown of prior assumptions.
blazinglyfast•1h ago
> even if it was deterministic (which you can technically make it to be I guess?)

No. LLMs are undefined behavior.

xixixao•1h ago
OP means “given the same input, produce the same output” determinism. This isn’t really much different from normal compilers, you might have a language spec, but at the end of the day the results are determined by the concrete compiler’s implementation.

But most LLM services on purpose introduce randomness, so you don’t get the same result for the same input you control as a user.

WithinReason•1h ago
LLMs are deterministic at minimal temperature. Talking about determinism completely misses the point. The human brain is also non-deterministic and I don't see anybody dismiss human written code based on that. If you remove randomness and choose tokens deterministically, that doesn't magically solve the problems of LLMs.
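The temperature claim is easy to see in a toy sketch (hypothetical token scores, not a real model): at temperature 0, decoding collapses to an argmax, so repeated runs agree regardless of the seed.

```python
import math
import random

def next_token(logits, temperature, rng):
    """Pick a token from a {token: logit} map."""
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token
        # (deterministic, given a stable tie-break via sorting).
        return max(sorted(logits), key=lambda t: logits[t])
    # Temperature sampling: soften/sharpen the distribution, then draw.
    weights = {t: math.exp(s / temperature) for t, s in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for t in sorted(weights):
        r -= weights[t]
        if r <= 0:
            return t
    return t  # guard against floating-point rounding

logits = {"a": 2.0, "b": 1.5, "c": 0.1}  # hypothetical scores
greedy = {next_token(logits, 0, random.Random(i)) for i in range(100)}
sampled = {next_token(logits, 1.0, random.Random(i)) for i in range(100)}
print(greedy)   # {'a'}: temperature 0 collapses to one choice
print(sampled)  # typically several tokens appear across seeds
```

Which is the commenter's point: removing the sampling step is trivial, so non-determinism cannot be the fundamental objection.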
SecretDreams•49m ago
> The human brain is also non-deterministic and I don't see anybody dismiss human written code based on that.

Humans, in all their non deterministic brain glory, long ago realized they don't want their software to behave like their coworkers after a couple of margaritas.

WithinReason•48m ago
You seem to be under the impression that I'm promoting LLMs, not sure where you got that idea. The argument is that non-determinism has nothing to do with the issues of LLMs.
CGMthrowaway•1h ago
>LLMs are not deterministic, so they are not compilers.

"Deterministic" is not the right constraint to introduce here. Plenty of software is non-deterministic (such as LLMs! But also consensus protocols, request-routing architecture, GPU kernels, etc.), so why not compilers?

What a compiler needs is not determinism, but semantic closure. A system is semantically closed if the meanings of its outputs are fully defined within the system, correctness can be evaluated internally, and errors are decidable. LLMs are semantically open. A semantically closed compiler will never output nonsense, even if its output is nondeterministic. But two runs of a (semantically closed) nondeterministic compiler may produce two correct programs, one being faster on one CPU and the other faster on another. Or such a compiler can be useful for enhancing security, e.g. producing programs that behave identically but resist fingerprinting.

Nondeterminism simply means the compiler selects any element of an equivalence class. Semantic closure ensures the equivalence class is well‑defined.
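As a toy illustration of that equivalence-class framing (invented example, not from the article): two "backends" may lower the same operation differently, and a check internal to the system can still decide that they agree.

```python
# Two hypothetical "backends" lower the same doubling operation differently.
def backend_mul(x: int) -> int:
    return x * 2       # one legal choice

def backend_shift(x: int) -> int:
    return x << 1      # another legal choice from the same equivalence class

def observationally_equal(f, g, domain) -> bool:
    """A semantically closed check: correctness is decided inside the system,
    by comparing observable behaviour over the domain."""
    return all(f(x) == g(x) for x in domain)

print(observationally_equal(backend_mul, backend_shift, range(1000)))  # True
```

Here the equivalence class ("programs that double a non-negative integer") is well-defined, so picking either member is harmless; the comment's claim is that LLM output has no such well-defined class.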

moregrist•59m ago
Perhaps you're comfortable with a compiler that generates different code every time you run it on the same source with the same libraries (and versions) and the same OS.

I am not. To me that describes a debugging fiasco. I don't want "semantic closure," I want correctness and exact repeatability.

SecretDreams•54m ago
Agree. I'm not sure what circle of software hell the OP is advocating for. We need consistent outputs from our most basic building blocks, not performance probability functions. Many pieces of software run identically across multiple nodes. What a nightmare it would be if you had to balance that even for identical hardware.
candiddevmike•53m ago
I wish these folks would tell me how you would do a reproducible build, or reproducible anything really, with LLMs. Even monkeying with temperature, different runs will still introduce subtle changes that would change the hash.
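The reproducibility bar described here is concrete: hash the build output and require bit-identical results across runs. A minimal sketch, with a stand-in pure function playing the compiler (the transformation is invented; a real compiler would need its whole environment pinned as well):

```python
import hashlib

def build(source: str) -> bytes:
    """Stand-in for a deterministic compiler: a pure function of its input,
    with no clocks, paths, or randomness leaking into the output."""
    return source.strip().encode()

artifact_1 = hashlib.sha256(build("int main() { return 0; }")).hexdigest()
artifact_2 = hashlib.sha256(build("int main() { return 0; }")).hexdigest()
print(artifact_1 == artifact_2)  # True: the property reproducible builds demand
```

Any temperature-induced token change in an LLM's output would flip the hash, which is the objection being made.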
cjbgkagh•48m ago
There is nothing intrinsic to LLMs that prevents reproducibility. You can run them deterministically without adding noise; it would just be a lot slower to have a deterministic order of operations, which takes an already bad idea and makes it worse.
candiddevmike•46m ago
Please tell me how to do this with any of the inference providers or a tool like llama.cpp, and make it work across machines/GPUs. I think you could maybe get close to deterministic output, but you'll always risk having some level of randomness in the output.
cjbgkagh•12m ago
Just because you can’t do it with your chosen tools does not mean it cannot be done. I’ve already granted the premise that it is impractical. Unless there is a framework that already guarantees determinism you’ll have to roll your own, which honestly isn’t that hard to do. You won’t get competitive performance, but that’s already being sacrificed for determinism, so you wouldn’t get it anyway.
mvr123456•34m ago
This reminds me of how you can create fair coins from biased ones and vice versa. You toss your coin repeatedly, and then get the singular "result" in some way by encoding/decoding the sequence. Different sequences might map to the same result, and so comparing results is not the same as comparing the sequences.

Meanwhile, you press the "shuffle" button, and code-gen creates different code. But this isn't necessarily the part that's supposed to be reproducible, and isn't how you actually go about comparing the output. Instead, maybe two different rounds of code-generation are "equal" if the test-suite passes for both. Not precisely the equivalence-class stuff parent is talking about, but it's simple way of thinking about it that might be helpful
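The fair-coin-from-biased-coin trick mentioned above is the classic von Neumann extractor; a small sketch (the 90/10 bias is an arbitrary choice for illustration):

```python
import random

def biased_coin(rng) -> int:
    """A heavily biased 'coin': heads (1) with probability 0.9."""
    return 1 if rng.random() < 0.9 else 0

def fair_bit(coin, rng) -> int:
    """von Neumann extractor: toss in pairs; keep 10 -> 1 and 01 -> 0,
    discard 11/00. P(10) == P(01) for any fixed bias, so the kept bit is fair."""
    while True:
        a, b = coin(rng), coin(rng)
        if a != b:
            return a

rng = random.Random(0)
bits = [fair_bit(biased_coin, rng) for _ in range(10_000)]
print(sum(bits) / len(bits))  # close to 0.5 despite the 90/10 input bias
```

Different toss sequences map to the same extracted result, which is the comparing-results-not-sequences point being made.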

tjr•47m ago
Sometimes determinism is exactly what one wants. For avionics software, being able to claim complete equivalence between two builds (minus an expected, manually-inspected timestamp) is used to show that the same software was used / present in both cases, which helps avoid redundant testing, and ensure known-repeatable system setups.
cv5005•39m ago
Bitwise identical output from a compiler is important for verification to protect against tampering, supply chain attacks, etc.
bigstrat2003•32m ago
> What a compiler needs is not determinism, but semantic closure.

No, a compiler needs determinism. The article is quite correct on this point: if you can't trust that the output of a tool will be consistent, you can't use it as a building block. A stochastic compiler is simply not fit for purpose.

hackinthebochs•14m ago
Compiler output can be inconsistent and correct. For any source code there is an infinite number of machine code sequences that maintain the semantic constraints of the source code. Correctness is defined semantically, not by consistency.
thwarted•21m ago
No, deterministic means that given the same inputs—source code, target architecture, optimization level, memory and runtime limits (because if the optimizer has more space/time it might find better optimizations), etc—a compiler will produce the same exact output. This is what reproducible builds is about: tightly controlling the inputs so the same output is produced.

That a compiler might pick among different specific implementations in the same equivalency class is exactly what you want a multi-architecture optimizing compiler to do. You don't want it choosing randomly between different optimization choices within an optimization level, that would be non-deterministic at compile time and largely useless assuming that there is at most one most optimized equivalent. I always want the compiler to choose to xor a register with itself to clear it if that's faster than explicitly setting it to zero if that makes the most sense to do given the inputs/constraints.

sureglymop•10m ago
Don't LLMs create the same outputs based on the same inputs if the temperature is 0? Maybe I'm just misunderstanding.
bee_rider•1h ago
Are conventional compilers actually deterministic, with all the bells and whistles enabled? PGO seems like it ought to have a random element.
123malware321•1h ago
well considering you use components like DFAs to build compilers, yes, they are deterministic. you also have reproducible builds etc.

or does your binary always come out differently each time you compile the same file??

You can try it. try to compile the same file 10 times and diff the resultant binaries.

Now try to prompt a bunch of LLMs 10 times and diff the returned rubbish.

sigbottle•44m ago
I think one of the best ways to understand the "nice property" of compilers we like isn't necessarily determinacy, but "programming models".

There's this really good blog post about how autovectorization is not a programming model https://pharr.org/matt/blog/2018/04/18/ispc-origins

The point is that you want to reliably express semantics in the top level language, tool, API etc. because that's the only way you can build a stable mental model on top of that. Needing to worry about if something actually did something under the hood is awful.

Now of course, that depends on the level of granularity YOU want. When writing plain code, even if it's expressively rich in the logic and semantics (e.g. c++ template metaprogramming), sometimes I don't necessarily care about the specific linker and assembly details (but sometimes I do!)

The issue I think is that building a reliable mental model of an LLM is hard. Note that "reliable" is the key word - consistent. Be it consistently good or bad. The frustrating thing is that it can sometimes deliver great value and sometimes brick horribly and we don't have a good idea for the mental model yet.

To constrain said possibility space, we tether to absolute memes (LLMs are fully stupid or LLMs are a superset of humans).

Idk where I'm going with this

vlovich123•55m ago
No, modulo bugs generally the same set of inputs to a compiler are guaranteed to produce the same output bit for bit which is the definition of determinism.

There’s even efforts to guarantee this for many packages on Linux - it’s a core property of security because it lets you validate that the compilation process or environment wasn’t tampered with illicitly by being able to verify by building from scratch.

Now actually managing to fix all inputs and get deterministic output can be challenging, but that’s less to do with the compiler and more to do with the challenge of completely controlling the entire environment: the profile you are using for PGO, paths on the build machine being injected into the binary, programs that have something non-deterministic in their source or build system (e.g. incorporating the build time into the binary).

candiddevmike•50m ago
Yes, they will output the same file hash every time, short of some build time mutation. Thus we can have nice things like reproducible builds and integrity checks.
9rx•1h ago
> LLMs are not deterministic

They are designed to be where temperature=0. Some hardware configurations are known to defy that assumption, but when running on perfect hardware they most definitely are.

What you call compilers are also nondeterministic on 'faulty' hardware, so...

vlovich123•54m ago
Even with nonzero temperature, a batch size of 1 and a fixed seed should make LLMs deterministic. Of course, a batch size of 1 is not economical.
troupo•20m ago
with temperature=0 and no context. That is, a clean run with t=0, pk=0 etc. etc. will produce the same output for the same question. However if you ask the same question in the same session, output will be different.

To say the least, this is garbage compared to compilers

9rx•15m ago
> However if you ask the same question in the same session, output will be different.

When isn't that true?

    #include <stdio.h>

    int main() {
        printf("Continue?\n");
    }
and

    #include <stdio.h>

    int main() {
        printf("Continue?\n");
        printf("Continue?\n");
    }
do not see the compiler produce equivalent outputs and I am not sure how they ever could. They are not equivalent programs. Adding additional instructions to a program is expected to see a change in what the compiler does with the program.
behnamoh•1h ago
> Specifying systems is hard; and we are lazy.

The more I use LLMs, the more I find this true. Haskell made me think for minutes before writing one line of code. Result? I stopped using Haskell and went back to Python because with Py I can "think while I code". The separation of thinking|coding phases in Haskell is what my lazy mind didn't want to tolerate.

Same goes with LLMs. I want the model to "get" what I mean, but often (esp. with Codex) I must be very specific about the project scope and spec. Codex doesn't let me "think while I vibe", because every change is costly and you'd better have a good recovery plan (git?) for when Codex goes astray.

rvz•1h ago
Anyone who knows 0.1% about LLMs should know that they are not deterministic systems and are totally unpredictable in their outputs, meaning that they cannot become compilers at all.

The obvious has been stated.

WithinReason•1h ago
Anyone who knows 0.2% about LLMs should know that they can be sampled deterministically, and yet that doesn't change the argument.
rvz•1h ago
We do not trust LLMs 100% to reliably emit correct assembled code (why would anyone?), compared with a compiler: the latter is deterministic while the former is fundamentally stochastic, no matter how you sample them.

LLMs are not designed for that.

hackinthebochs•30m ago
There's almost a good point here, but you're misusing concepts that obfuscate the point you're trying to make. Determinism is about producing the same output given the same input. In this sense, LLMs are fundamentally deterministic. Inference produces scores for every word in their vocabulary. This score map is then sampled from according to the temperature to produce the next token. But this non-determinism is artificially injected.

But the determinism/non-determinism axis isn't the core issue here. The issue is that they are trained by gradient descent, which produces instability/unpredictability in their output. I can give one a set of rules and a broad collection of examples in its context window; how often it will correctly apply the supplied rules to the input stream is entirely unpredictable. LLMs are fundamentally unpredictable as a computing paradigm. Their training process is stochastic, though I hesitate to call them "fundamentally stochastic".

mvr123456•1h ago
Looking at LLMs as a less-than-completely-reliable compiler is a good idea, but it's misleading to think of them as a natural-language-to-implementation compiler, because they are actually an anything-to-anything compiler.

If you don't like the results or the process, you have to switch targets or add new intermediates. For example instead of doing description -> implementation, do description -> spec -> plan -> implementation

jerf•1h ago
A lot of people are mentally modeling the idea that LLMs are either now or will eventually be infinitely capable. They are and will stubbornly persist in being finite, no matter how much capacity that "finite" entails. For the same reason that higher level languages allow humans to worry less about certain details and more about others, higher level languages will allow LLMs to use more of their finite resources on solving the hard problems as well.

Using LLMs to do something like what a compiler can already do is also modelling LLMs as infinite rather than finite. In fact in this particular situation not only are they finite, they're grotesquely finite, in particular, they are expensive. For example, there is no world where we just replace our entire infrastructure from top to bottom with LLMs. To see that, compare the computational effort of adding 10 8-digit numbers with an LLM versus a CPU. Or, if you prefer something a bit less slanted, the computational costs of serving a single simple HTTP request with modern systems versus an LLM. The numbers run something like LLMs being trillions of times more expensive, as an opening bid, and if the AIs continue to get more expensive it can get even worse than that.

For similar reasons, using LLMs as a compiler is very unlikely to ever produce anything even remotely resembling a payback versus the cost of doing so. Let the AI improve the compiler instead. (In another couple of years. I suspect today's AIs would find it virtually impossible to significantly improve an already-optimized compiler today.)

Moreover, remember, oh, maybe two years back when it was all the rage to have AIs be able to explain why they gave the answer they did? Yeah, I know, in the frenzied greed to be the one to grab the money on the table, this has sort of fallen by the wayside, but code is already the ultimate example of that. We ask the LLM to do things, it produces code we can examine, and the LLM session then dies away leaving only the code. This is a good thing. This means we can still examine what the resulting system is doing. In a lot of ways we hardly even care what the LLM was "thinking" or "intending", we end up with a fantastically auditable artifact. Even if you are not convinced of the utility of a human examining it, it is also an artifact that the next AI will spend less of its finite resources simply trying to understand and have more left over to actually do the work.

We may find that we want different programming languages for AIs. Personally I think we should always try to retain that ability for humans to follow it, even if we build something like that. We've already put the effort into building AIs that produce human-legible code and I think it's probably not that great a penalty in the long run to retain that. At the moment it is hard to even guess what such a thing would look like, though, as the AIs are advancing far faster than anyone (or any AI) could produce, test, prove out, and deploy such a language, against the advantage of other AIs simply getting better at working with the existing coding systems.

skybrian•1h ago
Here's an experiment that might be worth trying: temporarily delete a source file, ask your coding agent to regenerate it, and examine the diffs to see what it did differently.

This could be a good way to learn how robust your tests are, and also what accidental complexity could be removed by doing a rewrite. But I doubt that the results would be so good that you could ask a coding agent to regenerate the source code all the time, like we do for compilers and object code.
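The diff step of that experiment is cheap to automate; a sketch using Python's difflib, with invented before/after file contents standing in for the original and agent-regenerated source:

```python
import difflib

# Hypothetical contents of the deleted file and the agent's regeneration.
original = ["def add(a, b):", "    return a + b"]
regenerated = ["def add(x, y):", '    """Add two numbers."""', "    return x + y"]

# unified_diff yields header lines, hunk markers, and -/+ change lines.
diff = list(difflib.unified_diff(original, regenerated,
                                 fromfile="before", tofile="after", lineterm=""))
print("\n".join(diff))
```

The interesting signal is which changes survive the test suite unchanged (accidental complexity) versus which break it (load-bearing detail).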

explosion-s•1h ago
This is an interesting problem, one I've thought a lot about myself. On one hand, LLMs have the capacity to greatly help people, and I think, especially in the realm of gradually learning how to program, on the other hand, the non-determinism is such a difficult problem to work around.

One current idea of mine, is to iteratively make things more and more specific, this is the approach I take with psuedocode-expander ([0]) and has proven generally useful. I think there's a lot of value in the LLM instead of one shot generating something linearly, building from the top down with human feedback, for instance. I give a lot more examples on the repo for this project, and encourage any feedback or thoughts on LLM driven code generation in a more sustainable then vibe-coding way.

[0]: https://github.com/explosion-Scratch/psuedocode-expander/

Tade0•56m ago
> on the other hand, the non-determinism is such a difficult problem to work around.

Well, you can always set temperature to 0, but that doesn't remove hallucinations.

aethrum•1h ago
Can't we just turn the temp down to 0?
abm53•1h ago
More to the point: is randomness of representation or implementation an inherent issue if the desired semantics of a program are still obeyed?

This is not really a point about whether LLMs can currently be used as English compilers, but more questioning whether determinism of the final machine code output is a critical property of a build system.

kibwen•1h ago
That doesn't make a difference here. Even with a nonzero temperature, an LLM could still be deterministic as long as you have control of its random seed. As the article says:

"This gets to my core point. What changes with LLMs isn’t primarily nondeterminism, unpredictability, or hallucination. It’s that the programming interface is functionally underspecified by default."

helloplanets•31m ago
Even if you turn the temperature down to 0, it's not deterministic. Floating points are messy. If there is even a tiny difference when it comes to the order of operations on the actual GPU that's running the billions of parallelized floating point operations over and over, it's very possible to end up with changing top probability logits.
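The order-of-operations point is easy to demonstrate even without a GPU: floating-point addition is not associative, so a different reduction order can change a sum (and, summed over billions of operations, which logit ends up on top).

```python
# Floating-point addition is not associative: the same three values summed
# in a different order give different results.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one reduction order
right = a + (b + c)  # another reduction order

print(left == right)  # False
print(left, right)
```

A parallel reduction on a GPU does not guarantee which of these orders (or many others) it uses, which is the source of the nondeterminism described above.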
mickdarling•1h ago
This is where the desire to NOT anthropomorphize LLMs actually gets in the way.

We have mechanisms for ensuring output from humans, and those are nothing like ensuring the output from a compiler. We have checks on people, we have whole industries of people whose whole careers are managing people, to manage other people, to manage other people.

With regard to predictability, LLMs essentially behave like people in this manner. The same kind of checks that we use for people are needed for them, not the same kind of checks we use for software.

skydhash•1h ago
> The same kind of checks that we use for people are needed for them

Those checks work for people because humans and most living beings respond well to reward/punishment mechanisms. It’s the whole basis of society.

> not the same kind of checks we use for software.

We do have systems that are non-deterministic (computer vision, various forecasting models…). We judge those by their accuracy and the likelihood of false positives or false negatives (when it’s a classifier). Why not use those metrics?
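The metrics named here are easy to state; a small sketch of precision and recall from a binary confusion matrix (the labels are made up for illustration):

```python
def confusion_metrics(y_true, y_pred):
    """Precision and recall from binary labels: the yardsticks already used
    for other non-deterministic systems such as vision or forecasting models."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)  # of the things flagged, how many were right
    recall = tp / (tp + fn)     # of the real positives, how many were found
    return precision, recall

y_true = [1, 1, 0, 0, 1, 0]  # made-up ground truth
y_pred = [1, 0, 0, 1, 1, 0]  # made-up model output
precision, recall = confusion_metrics(y_true, y_pred)
print(precision, recall)
```

Applying the same framing to an LLM task would mean defining what counts as a positive (e.g. a correct rule application) and measuring, rather than arguing about determinism.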

wizzwizz4•29m ago
Because by those metrics, LLMs aren't very good.

LLM code completion compares unfavourably to the (heuristic, nigh-instant) picklist implementations we used to use, both at the low-level (how often does it autocomplete the right thing?) and at the high-level (despite many believing they're more effective, the average programmer is less effective when using AI tools). We need reasons to believe that LLMs are great and do all things, therefore we look for measurements that paint it in a good light (e.g. lines of code written, time to first working prototype, inclination to output Doom source code verbatim).

The reason we're all using (or pretending to use) LLMs now is not because they're good. It's almost entirely unrelated.

bigstrat2003•27m ago
> The same kind of checks that we use for people are needed for them...

The whole benefit of computers is that they don't make stupid mistakes like humans do. If you give a computer the ability to make random mistakes all you have done is made the computer shitty. We don't need checks, we need to not deliberately make our computers worse.

plastic-enjoyer•1h ago
> It’s that the programming interface is functionally underspecified by default. Natural language leaves gaps; many distinct programs can satisfy the same prompt. The LLM must fill those gaps.

I think this is an interesting development, because we (linguists and logicians in particular) have spent a long time developing a highly specified language that leaves no room for ambiguity. One could say that natural language was considered deficient – and now we are moving in the exact opposite direction.

fragmede•1h ago
In the comparison to compilers, it's relevant to point out that work began on them in the 1950s. That they were basically solid by the time most people here used them should be looked at with that time frame in mind. ChatGPT came out in 2022, 3-4 years ago. Compilers have had around three quarters of a century to get where they are today. I'll probably be dead in seventy years, never mind have any idea what AI (or society) is going to look like then!

But for reference, we don't (usually) care which register the compiler uses for which variable; we just care that it works, with no bugs. If the non-determinism of LLMs means the variable is called file, filename, fileName, or file_name, breaking with convention, why do we care? At the level Claude lets us work with code now, it's immaterial.

Compilation isn't stable. If you clear caches and recompile, you don't get a bit-for-bit exact copy, especially on today's multi-core processors, without doing extra work to get there.

SpicyLemonZest•1h ago
But the reason we don't care which register the compiler uses is that compilers, even without strict stability, reliably enforce abstractions that free us from having to care. If your compiler decided on 5% of inputs that it just doesn't feel like using more than two data registers, you'd have to think about it on 100% of inputs.
ryanschneider•1h ago
I’m kind of surprised no one has mentioned this one yet: https://www.lesswrong.com/posts/gQyphPbaLHBMJoghD/comp-sci-i...
dpweb•1h ago
Compilation is transforming one computing model to another. LLMs aren't great at everything, but seem particularly well suited for this purpose.

One of the first things I tried to have an llm do is transpile. These days that works really well. You find an interesting project in python, i'm a js guy, boom js version. Very helpful.

echelon•1h ago
Forest for the trees.

You see a business you like, boom competing business.

These are going to turn into business factories.

Anthropic has a business factory. They can make new businesses. Why do they need to sell that at all once it works?

We're focusing on a compiler implementation. Classic engineering mindset. We focus on the neat things that entertain us. But the real story is what these models will actually be doing to create value.

lunarboy•1h ago
Are LLMs not already compilers? They translate human natural language to code pretty well now. But yeah, they probably don't fit the bill of English based code to machine code
rvz•53m ago
> Are LLMs not already compilers? They translate human natural language to code pretty well now.

Can you formally verify prose?

> But yeah, they probably don't fit the bill of English based code to machine code

Which is why LLMs cannot be compilers that transform code to machine code.

kittikitti•1h ago
"LlMs HAlLuCinATE"

Stop this. This is such a stupid way of describing mistakes from AI. Please try to use the confusion matrix or any other framing. If you're going to make arguments, it's hard to take them seriously if you keep regurgitating that LLMs hallucinate. It's not a well-defined term, so if you continually make it your core argument, it becomes disingenuous.

dgxyz•1h ago
How about "expected poor ratio of corn to shit".?
jtrn•1h ago
That was a painful read for me. It reminds me of a specific annoyance I had at university with a professor who loved to make sweeping, abstract claims that sounded incredibly profound in the lecture hall but evaporated the moment you tried to apply them. It was always a hidden 'I-am-very-smart' attempt that fell apart if you actually deconstructed the meaning, the logic, or the claimed results. This article is the exact same breed of intellectualizing. It feels deep, but nothing actually holds up logically once you break the claims and deductive steps apart.

You can see it clearly if you just translate the article's expensive vocabulary into plain English. When the author writes, 'When you hand-build, the space of possibilities is explored through design decisions you’re forced to confront,' they are just saying, 'When you write code yourself, you have to choose how to write it.' When they claim, 'contextuality is dominated by functional correctness,' they just mean, 'Usually, we just care if the code works.' When they warn about 'inviting us to outsource functional precision itself,' they really mean, 'LLMs let you be lazy.' And finally, 'strengthening the will to specify' is just a dramatic way of saying, 'We need to write better requirements.' It is obscurantism, plain and simple: using complexity to hide the fact that the insight is trivial.

But that is just an aesthetic problem to me. Worse, the argument collapses entirely when you look at the logical leap between the premises.

The author basically argues that because Natural Language is vague, engineers will inevitably stop caring about the details and just accept whatever reasonable output the AI gives. This is pure armchair psychology. It assumes that just because the tool allows for vagueness, professionals will suddenly abandon the concept of truth or functional requirements. That is a massive, unsubstantiated jump.

We use fuzzy matching to find contacts on our phones all the time. Just because the search algorithm is imprecise doesn't mean we stop caring if we call the right person. We don't say, 'Well, the fuzzy match gave me Bob instead of Bill, I guess I'll just talk to Bob now.' The hard constraint, the functional requirement of reaching the specific person you need, remains absolute. Similarly, in software, the code either compiles and passes the tests, or it doesn't. The medium of creation might be fuzzy, but the execution environment is binary. We aren't going to drift into accepting broken banking software just because the prompt was in English.
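The fuzzy-match analogy can be made concrete with a small sketch (the contact names and the use of Python's difflib are my own illustration, not from the comment): the lookup is imprecise, but the functional requirement is still checked exactly before acting on it.

```python
import difflib

contacts = ["Bob Smith", "Bill Smith", "Alice Jones"]

# The search itself is fuzzy: a misspelled query still finds candidates.
matches = difflib.get_close_matches("Bil Smith", contacts, n=1, cutoff=0.6)
candidate = matches[0] if matches else None

# ...but the hard constraint is enforced afterwards: we confirm the match
# before "calling", rather than quietly accepting Bob in place of Bill.
assert candidate == "Bill Smith"
```

The imprecise medium (the query) and the binary acceptance check (the assert) coexist, which is the point of the analogy.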

This entire essay feels like the work of those social psychology types who have now been thoroughly discredited by the replication crisis in psychology. The ones who were more concerned with dazzling people with verbal skills than with being right. It is unnecessarily complex, relying on projection of dreamt-up concepts and behavior rather than observation. It tries to sound profound by turning a technical discussion into a philosophical crisis, but underneath the word salad it is not just shallow, it is wrong.

Daviey•56m ago
If you have decent unit and functional tests, why do you care how the code is written?

This feels like the same debate assembly programmers had about C in the 70s. "You don’t understand what the compiler is doing, therefore it’s dangerous." Eventually we realised the important thing isn’t how the code was authored but whether the behaviour is correct, testable, and maintainable.

If code generated by an LLM:

  - passes a real test suite (not toy tests),
  - meets performance/security constraints,
  - goes through review like any other change,
then the acceptance criteria haven’t changed. The test suite is part of the spec. If the spec is enforced in CI, the authoring tool is secondary.

The real risk isn’t "LLMs as compilers", it’s letting changes bypass verification and ownership. We solved that with C, with large dependency trees, with codegen tools. Same playbook applies here.

If you give expected input and get expected output, why does it matter how the code was written?
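A minimal sketch of "the test suite is part of the spec" (the `slugify` function and its assertions are a hypothetical example of mine, not from the comment): the spec constrains observable behaviour, so any implementation that satisfies it is acceptable, whether hand-written or LLM-generated.

```python
def slugify(title: str) -> str:
    # One possible implementation; the spec below doesn't care how
    # this body was authored, only what it does.
    return "-".join(title.lower().split())

# The behavioural spec, enforced in CI regardless of authoring tool:
assert slugify("Hello World") == "hello-world"
assert slugify("  Leading and   trailing  ") == "leading-and-trailing"
assert slugify("ALL CAPS") == "all-caps"
```

Swap the body for any other implementation and the acceptance criteria stay identical, which is the comment's claim about authoring tools being secondary to verification.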

hollowturtle•49m ago
They're giant pattern regurgitators, impressive for sure, but they can only be as good as their training data, which is why they seem more effective for TypeScript, Python, etc. Nothing less, nothing more. No AGI, no "Job X is done." Hallucinations are a feature; otherwise they would just spit out training data. The whole discussion around these tools is so miserable that I'm pondering the idea of canceling from every corner of the internet. The fatigue is real, and pushing back against the hype feels so exhausting, worse than crypto, NFTs, and web3. I'm a user of these tools; I push back against the hype because its ripple effects reach my day job, and I'm exhausted of people handing you generated shit just to try making a point and saying, "see? like that."
lfsss•12m ago
You want to fly on an AI-developed airplane. I don't (just kidding haha).