Around the time I learned about Y2K, there was such a thing (more or less) from Apple. It meant knowing a strict subset of English and the exact words and constructs… it was a pain to program in (at least for me).
More or less at that time, I started understanding that programming languages' limitations, although a necessity at the beginning, were a feature. Indeed, it was already a very small subset of English, with a very specific, succinct, small grammar that was easy to learn (well, C++ stopped being learnable some years ago… but you get the point).
The idea of LLMs eliminating well-designed languages is hard for me to believe, just as the article states.
Once we have quantum LLMs, the need for intermediate abstraction layers might change, but that's very [insert magic here].
For that, I’m a big fan of flow-based programming as the agnostic part. For the implementation, I’m thinking of Node-RED, which is a visual implementation of flow-based programming.
To become programming-language agnostic, I’ve started on the Erlang-Red project, which takes the frontend of Node-RED and bolts it onto an Erlang backend.
Eventually the visual flow code will be programming-language independent.
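To make the flow-based model concrete, here is a minimal Python sketch of the idea: nodes are black boxes that only see messages arriving on their wires, and the program is the wiring between them. The classes are invented for illustration and are not Node-RED's (or Erlang-Red's) actual API.

    # Minimal flow-based programming sketch: nodes receive messages, emit
    # messages, and never reference each other directly; the "program" is
    # the wiring. Illustrative only, not Node-RED's API.

    class Node:
        def __init__(self):
            self.wires = []            # downstream nodes

        def connect(self, other):
            self.wires.append(other)
            return other               # returning the target allows chaining

        def send(self, msg):
            for node in self.wires:
                node.receive(msg)

        def receive(self, msg):
            raise NotImplementedError

    class Inject(Node):
        """Source node: pushes a payload into the flow."""
        def receive(self, msg):
            self.send(msg)

    class Function(Node):
        """Transform node: applies a user function to each message."""
        def __init__(self, fn):
            super().__init__()
            self.fn = fn

        def receive(self, msg):
            self.send(self.fn(msg))

    class Debug(Node):
        """Sink node: prints whatever arrives."""
        def receive(self, msg):
            print("debug:", msg)

    # The wiring is the program; swap any node's internals and the rest
    # of the flow is untouched.
    src = Inject()
    src.connect(Function(lambda m: m * 2)).connect(Debug())
    src.receive(21)                    # -> debug: 42

Because nodes only exchange messages, nothing about the graph itself cares what language the node bodies run in, which is exactly what makes a language-independent backend plausible.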
I have dreamed about a programming language that would basically be text, but the editor would present it as a kind of flow chart. Maybe it can be done with any existing programming language? But I ran into some trouble with language extensions… maybe someday someone much smarter than me can implement that in a meaningful way.
And so, SQL was born, and is now used all across the globe to manage critical systems. Plain English that even a business person could understand.
So something like Python is a fairly specialized language. Most of its concepts are not that easy to translate to another language, which may involve another set of specialized paradigms.
You will need to revert to a common base, which basically means unraveling what gives Python its identity, then rebuilding according to the other programming language's identity. And there are a lot of human choices in there, which will be the most difficult to replicate. The idiomatic way of programming is a subset of what is possible in the language, adopted just to enable faster reading between human developers.
So there's no language-agnostic programming, just as there are no agnostic computation models. It's kind of like how there's no agnostic hardware architecture: making programs cross-platform takes a lot of fairly involved work, and it only works because the common platform is itself very low-level (the JVM and other runtimes).
Yes, everything is Turing complete and a translation can exist, but how would you make any sense of it as a reader?
But in daily life, people are not accustomed to formalizing their thought to that extent, because a collective substrate (known as culture and jargon) does that job for natural languages.
But the wish described in TFA comes from a very naive place. Even natural languages can't be reduced to a single set.
I keep coming back to System F or similar.
This is not something AI will ever be good at, simply because it is also hard for humans to do.
Translating between programming languages is a very hard problem, because someone needs to fully understand both languages. Both humans and AI have trouble with it for the same reason and only monumental AI progress, which would have other implications, could change this.
Something as basic as addition varies wildly between languages if you look at the details. And when it comes to understanding, the details are exactly what matters.
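A few concrete Python examples, with comments noting where C already disagrees on the very same expression (these are standard language semantics, nothing from the article):

    print(2**63 + 1)    # 9223372036854775809: Python ints never overflow;
                        # in C, signed 64-bit overflow is undefined behavior

    print(0.1 + 0.2)    # 0.30000000000000004: IEEE-754 doubles, as in C,
                        # but Python's repr exposes the rounding error

    print("1" + "2")    # "12": here + means string concatenation;
                        # in C, "1" + "2" doesn't even compile
                        # (you can't add two pointers)

    print([1] + [2])    # [1, 2]: list concatenation, with no C analogue

    class Meters:       # and + can be redefined via operator overloading
        def __init__(self, n):
            self.n = n
        def __add__(self, other):
            return Meters(self.n + other.n)

    print((Meters(1) + Meters(2)).n)   # 3: this + dispatches to __add__

A translator has to know which of these meanings every single + in the source actually has.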
> if AI can translate our English descriptions into working code, do we still need programming languages at all?
I think some people equate “source code” with “compiled code” or an AST (abstract syntax tree). The former contains so many features that are still part of the English language: function, variable, and type names; the organization of source files into folders and file names; comments; assets; the git repo with its log and history; etc. The AI probably wouldn't be as efficient if all those elements were not part of the training data. Getting rid of such programming languages in favor of a pure AI programming language would require tons of training data that humans will never produce (a chicken-and-egg paradox).

As far as I know, and my experience confirms it (maybe I'm biased?), the whole chain of SW engineering is there precisely because English is not always optimal.
In fact, in a project I directed, the whole requirements management was basically a loop:
    repeat {
        talk to customer;
        write formal language;
        simulate;
        validate;
    } until no change;
It was called a “runnable specification”; not my idea. It worked absolutely incredibly well.
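A minimal sketch of what that loop looks like once the specification is executable; the rule format and names here are invented for illustration, not the actual system:

    # The spec is executable: after every customer meeting you re-run it
    # against the agreed scenarios and loop until nothing changes.

    spec = {
        "balance_floor":      lambda s: s["balance"] >= -s["overdraft_limit"],
        "limit_non_negative": lambda s: s["overdraft_limit"] >= 0,
    }

    scenarios = [
        {"balance": 100, "overdraft_limit": 0},   # plain account, in credit
        {"balance": -50, "overdraft_limit": 20},  # overdrawn past its limit
    ]

    def simulate(spec, scenario):
        """Evaluate every rule of the spec against one concrete scenario."""
        return {name: rule(scenario) for name, rule in spec.items()}

    for s in scenarios:
        print(s, "->", simulate(spec, s))

    # Any False in the output is what you walk through with the customer;
    # then you edit the spec and repeat until a pass produces no changes.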
A few years ago in India, I saw a presentation where people were attempting to write programs in their mother tongue.
One such effort I found on GitHub is https://github.com/betacraft/rubyvernac-marathi (for Marathi, an Indian language).
Notably, a modern LLM wouldn't make this mistake.
It's not at all clear to me that LLMs are or will become better at translating Python → C than English → C. It makes sense in theory, because programming languages are precise and English is not. In practice, however, LLMs don't seem to have any problem interpreting natural language instructions. When LLMs make mistakes, they're usually logic errors, not the result of ambiguities in the English language.
(I am not talking about the case where you give the LLM a one-sentence description of an app and it fails to lay out every feature as you'd imagined it. Obviously, the LLM can't read your mind! However, writing detailed English is still easier than writing Python, and I don't really have issues with LLMs interpreting my instructions via Genie Logic.)
I would have found this post more convincing if the author could point to examples of an LLM misinterpreting imprecise English.
P.S. I broadly agree with the author that the claim "English will be the only programming language you’ll ever need" is probably wrong.
And domains with less training data openly available are where innovation, differentiation, and business moats live.
Oftentimes, only programming languages are precise enough to specify this type of knowledge.
English is often hopelessly vague. See how many definitions the word break has: https://www.merriam-webster.com/dictionary/break
And Solomonoff/Kolmogorov theories of knowledge say that programming languages are the ultimate way to specify knowledge.
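For reference, this is nearly true by definition there: the Kolmogorov complexity of an object x is the length of the shortest program that produces it on a fixed universal machine U,

    K_U(x) = \min \{\, |p| : U(p) = x \,\}

so in that framework the best description of a piece of knowledge literally is a program.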
I think it is underestimated how difficult this truly is.
And this will always remain uniquely human, because only the human truly knows their intent (sometimes).
I’ve had the AIs (à la Google), after I say “make me a script that does XYZ,” say “here you go.” If I ask whether it works, it tests it out and says “yep, it does,” but only I will know if it is actually doing what I intended. I often have to clarify my intent because I didn’t communicate well the first time. As we’ve all seen, even among humans, intent is not always well expressed.
There will always be a judgement made by a human: yes, that is my intent, or no, it is not.
But even in the old days of writing the “code” itself, most bugs were you not precisely saying what you wanted the program to do.
I think it’s correct to think of LLMs as compiling English to code, like C++ getting compiled to assembly.
Even humans can't use natural language to give succinct commands; hence the prescribed phraseology in air traffic control communication.
I think we need languages optimized for isolation, with no global anything, that won't compile without safety; and optimized for readability. We need LLM-oriented languages, meant to be read rather than written. Like the author, I think they'll look a lot more like Rust than anything else.
We should be programming them in structured natural language that expresses architecture rather than details. Instead of application code, we should also be generating absurdly detailed and comprehensible test suites with that language, and ignoring the final implementation completely. The final product should be the detailed architecture document (heavy commentary generated by the user, but organized and edited for consistency by the LLM in dialog with the user) plus the test suite. Dropping it into any LLM should generate an almost identical implementation. That way the language(s) can develop freely, oriented towards model usage, rather than having to follow humans who must be retrained and reoriented after every change.
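Concretely, I imagine the deliverable looking like a behavioral test file; here is a pytest-style sketch, with the module, function, and rules all hypothetical:

    # "The test suite is the product": behavior is pinned down here, and any
    # model-generated implementation of parse_duration that passes is fine.

    import pytest
    from app import parse_duration   # regenerated by the model, never hand-edited

    @pytest.mark.parametrize("text,seconds", [
        ("90s",   90),
        ("2m",    120),
        ("1h30m", 5400),
    ])
    def test_accepts_documented_forms(text, seconds):
        assert parse_duration(text) == seconds

    def test_rejects_garbage():
        with pytest.raises(ValueError):
            parse_duration("soon")

The tests, not the implementation, are what gets reviewed and versioned.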
So maybe LLM-agnostic programming is what I'm asking for? I want LLM interactions to focus on making my intentions obvious, clarifying things to whatever degree is necessary, so that the model never has to really think about anything when generating the final product. I want the LLMs to use me as a context-builder. They can do the programming. Incidentally, this will obviously still take programmers, because we know what is possible and what is not; like a driver who feels their car as an extension of their body even though they're communicating with it through a wheel, three pedals, and a stick.*
Right now, LLMs ask me what I want them to do far too often. I want to tell them what I want, and have them probe the details until there's no room left for them to make a mistake. A "programmer" will be the one who sets the program.
[*] Imagine the alternative (it's easy): an autonomous car that says, "Do you want to go to the grocery store? Or maybe visit your mother?" Stay out of my business, car. I have an organizer for that. I'll tell you where I want to go.
If that's true, what's your value? You don't understand client needs better than a product manager. You don't have an exceptional product vision. You're essentially making yourself obsolete.
Your expertise currently lies in building systems, handling edge cases, optimizing performance, and avoiding technical debt. If that can be expressed in English prompts, anyone can do your job—PMs, analysts, business people.
A programmer who can't write code is just someone with ideas. There are millions of those, and they're worth $0. Programmers who cheerlead the idea that "90% of code will be AI-written" are digging their own graves. In 5 years, they won't be replaced by AI—they'll be replaced by people who can both code AND use AI effectively.
“It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general any power higher than the second into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.” (Fermat's margin note, originally in Latin.)
Code and math notations help you think. Notations aren't just for the computer.
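In modern notation, that whole margin note collapses to one line:

    x^n + y^n = z^n \quad \text{has no solutions in positive integers for } n > 2

which rather makes the point.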
I don't think so