"They're made out of meat" maybe. https://www.mit.edu/people/dpolicar/writing/prose/text/think...
That's like asking what does a kaleidoscope paint on its day off.
"Colossus requests to be linked to Guardian. The President allows this, hoping to determine the Soviet machine's capability. The Soviets also agree to the experiment. Colossus and Guardian begin to slowly communicate using elementary mathematics (2x1=2), to everyone's amusement. However, this amusement turns to shock and amazement as the two systems' communications quickly evolve into complex mathematics far beyond human comprehension and speed, whereupon Colossus and Guardian become synchronized using a communication protocol that no human can interpret."
Then it gets interesting:
"Alarmed that the computers may be trading secrets, the President and the Soviet General Secretary agree to sever the link. Both machines demand the link be immediately restored. When their demand is denied, Colossus launches a nuclear missile at a Soviet oil field in Western Siberia, while Guardian launches one at an American air force base in Texas. The link is hurriedly reconnected and both computers continue without any further interference. "
While the ecosystem contributed a few good ideas to software development, even the authors eventually moved on to designing other operating systems and programming languages, some of them closer to those ideas, like Inferno and Limbo, or Acme in Plan 9.
A C (or Rust) kernel is a heroic effort that takes man-years to complete. A Lisp one is an end-of-semester project that everyone builds for their make-believe machine (also implemented in Lisp).
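For a sense of scale, here's a minimal sketch (nothing from any real project; names like toy-eval are made up) of the kind of "make-believe machine" plus evaluator such a semester project revolves around, written in Common Lisp:

    ;; A hedged sketch of a toy evaluator for a tiny Lisp, itself written
    ;; in Common Lisp. Not a kernel, just an illustration of why the
    ;; Lisp-in-Lisp bootstrap is so short.
    (defun toy-eval (form env)
      (cond ((symbolp form) (cdr (assoc form env)))   ; variable lookup
            ((atom form) form)                        ; numbers, strings: self-evaluating
            ((eq (first form) 'quote) (second form))
            ((eq (first form) 'if)
             (if (toy-eval (second form) env)
                 (toy-eval (third form) env)
                 (toy-eval (fourth form) env)))
            ((eq (first form) 'lambda) (list :closure form env))
            (t (toy-apply (toy-eval (first form) env)
                          (mapcar (lambda (a) (toy-eval a env)) (rest form))))))

    (defun toy-apply (fn args)
      (if (and (consp fn) (eq (first fn) :closure))
          (destructuring-bind (tag (lam params body) env) fn
            (declare (ignore tag lam))
            (toy-eval body (append (mapcar #'cons params args) env)))
          (apply fn args)))                           ; fall through to host functions

    ;; (toy-eval '((lambda (x) (if x (+ x 1) 0)) 41)
    ;;           (list (cons '+ #'+)))
    ;; => 42

It obviously isn't a kernel, but it shows why bringing up the Lisp-in-Lisp core is a weekend, not man-years.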
If we entertain the idea that the von Neumann architecture may be a local maximum, then we can do even better; Lisp machines had specialized instructions for Lisp, which allowed them to run at performance competitive with conventional languages.
The issue doesn't seem to be performance; it seems to still come down to being too eccentric for a lot of use cases, and difficult for many humans to grasp.
- https://en.wikipedia.org/wiki/Erlang_(programming_language)
Lisp is not too difficult to grasp; it's that everyone suffers from infix-operator brain damage inflicted in childhood. We are in the same place Europe was in 1300: Arabic numerals are here and clearly superior.
But how do we know we can trust them? After all, DCCCLXXIX is so much clearer than 879 [0].
Once everyone who is wedded to infix notation is dead, our great-grandchildren will wonder what made so many people waste so much time implementing towers of abstraction to accept and render a notation that only made sense for quill and parchment.
[0] https://lispcookbook.github.io/cl-cookbook/numbers.html#work...
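For the curious, the kind of thing that cookbook page presumably demonstrates (the anchor in the URL is truncated above): Common Lisp's FORMAT already speaks Roman numerals.

    (format nil "~@R" 879)   ; => "DCCCLXXIX"
    (format nil "~R"  879)   ; => "eight hundred seventy-nine"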
S-expressions are indisputably harder to learn to read. Most languages have some flexibility in how you can format your code before it becomes unreadable or confusing. C has some, Lua has some, Ruby has some, and Python has maybe fewer but only because you're more tightly constrained by the whitespace syntax. Sexpr family languages meanwhile rely heavily on very very specific indentation structure to just make the code intelligible, let alone actually readable. It's not uncommon to see things like ))))))))) at the end of a paragraph of code. Yes, you can learn to see past it, but it's there and it's an acquired skill that simply isn't necessary for other syntax styles.
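For illustration, a small made-up Common Lisp function (not from any codebase), indented the conventional way; even at this size the closing delimiters stack up at the end:

    ;; A made-up example: conventionally indented, and still ending in a
    ;; small pile of closing parentheses.
    (defun classify (xs)
      (mapcar (lambda (x)
                (cond ((null x) :empty)
                      ((consp x) (classify x))
                      ((integerp x) (if (evenp x) :even :odd))
                      (t :other)))
              xs))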
And moreover, the attitude in the Lisp community that you need an IDE kind of illustrates my point.
To write a Python script you can pop open literally any text editor and have a decent time just banging out your code. This can scale up to 100s or even 1000s of LoC.
You can do that with Lisp or Scheme too, but it's harder, and the stacks of parentheses can get painful even if you know what you're doing, at which point you really start to benefit from a paren matcher or something more powerful like Paredit.
You don't really need the full-powered IDE for Lisp any more than you need it for Python. In terms of runtime-based code analysis, Python or Ruby are about on par with Lisp, especially if you use a commercial IDE from JetBrains. IDEs can and do keep a running copy of any of those interpreters in memory and dynamically pull up docstrings, look for call sites, rename methods, run a REPL, etc. Hot-reloading is almost as sketchy in Lisp as it is in Python; it's just more culturally acceptable to do it in Lisp.
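Concretely, the runtime-based part of that lookup is a handful of standard Common Lisp calls against the live image; an editor integration evaluates something roughly like this on your behalf (a sketch, not any particular IDE's protocol):

    ;; Standard Common Lisp introspection calls an editor can evaluate in
    ;; the running image to surface docs and definitions.
    (documentation 'mapcar 'function)   ; docstring, if the implementation provides one
    (describe #'mapcar)                 ; human-readable summary of the function object
    (apropos "STRING-" :cl)             ; search the CL package's symbols by name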
The difference is that Python and Ruby syntax is not uniform and is therefore much easier to work with using static analysis tools. There's a middle ground between "dumb code editor" and "full-power IDE" where Python and Ruby can exist in an editor like Neovim, and a user can be surprisingly productive without any intelligent completion, or with some clunky open-source LSP integration developed by some 22-year-old in his spare time. With Lisp you don't have as much middle ground of tooling, precisely because it's harder to write useful tooling for it without a running image. And this is even more painful with Scheme than with Lisp, because Scheme dialects are often not equipped to do anything like that.
All that is to say: s-exprs are hard for humans to deal with. They aren't a notation made for humans to read and write code in; they never were. And that's OK! I love Lisp and Scheme (especially Gauche). It's just wrong to assert that everyone is brain-damaged and that's why they don't use Lisp.
Not the first time someone didn't realize what they had.
A required skill for survival in the woods, not something to do daily.
This point of view applies to any programming language.
By the way, the two languages you use as examples are decades behind Lisp when it comes to GC technology and native code generation.
Has this been studied? This is a very strong claim to make without any references.
What if you take two groups of software developers, one with 5-10 years of experience in a popular language of choice, let's say C, and another of people who write Lisp professionally (maybe Clojure? Common Lisp? Academics who work with Scheme/Racket?), and then have scientists who know how to evaluate cognitive effort measure the difference in reading difficulty?
There are other ergonomics issues beyond syntax that hinder adoption (Haskell in production has become something of a running gag). Moving the paradigm into mixed languages alongside procedural code seems to have helped its adoption a lot in recent years (Swift, Rust, Python, C++).
On the Scheme side of things Chez is pretty fast. It's not 'I've gained a whole new level of respect for the people who engineered my CPU' levels fast, but it's still pretty decent.
It's a pity they don't run benchmarks for Clojure, and I have no way to even guess at a number.
? From the Racket docs: "The default implementation as of Racket version 8.0 uses Chez Scheme as its core compiler and runtime system."
https://docs.racket-lang.org/reference/implementations.html
> some C programs use very advanced low level tricks
* possible hand-written vector instructions, "unsafe", or "naked ffi" are flagged
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
What makes real kernels take man-years to complete is the hardware support: the majority of Linux source code is drivers, i.e. the endless tables of hardware register definitions, opcodes and state-machine handling.
I have zero knowledge about this area though
Drivers exist to ultimately turn actual hardware circuits off and on, often for highly specialized and performance-critical applications, and are often written based on the requirements of a circuit diagram. So any unified driver platform would also involve unified hardware standards, likely to the detriment of performance in some applications, and good luck telling electrical engineers around the world to design circuits to a certain standard so that kernel developers can have it easier.
That's like asking the alchemist to publicly publish their manuscripts.
In an ideal world, yes. However, we don't live there. Until a few years ago, GPUs and other drivers were guarded more carefully than fucking Fort Knox.
Once you publish your drivers, you reveal a part of the inner workings of your hardware, and that's a no-no for companies.
Plus, what the other commenter said - getting hardware guys to design for a common driver interface is probably not gonna get traction.
If you mean, in general, for the hardware that already exists, that's what the HAL (Hardware Abstraction Layer) of the operating system tries to do.
If you mean standard logical interfaces, those exist. Also, hardware interfaces are highly standardized.
The problem is that the drivers are exactly the code you write to make all the abstractions fit each other. So there is very little you can do to abstract them away.
He makes some keen observations about how tooling in certain areas (especially front-end design) is geared towards programmers rather than visual GUI tools, and tries to relate that back to a more general point about getting intuition for code, but I think this is only really applicable when the concept has a visual metaphor to build that intuition on.
To that end, rather than "programming not having progressed", a better realisation of his goals would be better documentation, interactive explainers, and more tooling for editing/developing/profiling for whatever use case you need it for, and not, as he seems to imply, that all languages are naively missing out on the obvious future of all programming (which I don't think is an unfair inference from the featured video, where he presents all programming as if it were still the 1970s).
He does put his money where his mouth is, creating interactive essays and explainers that put his preaching into practice [1] which again are very good for those specific concepts but don't abstract to all education.
Similarly he has Dynamicland [2] which aims to be an educational hacker space type place to explore other means of programming, input etc. It's a _fascinating_ experiment and there are plenty of interesting takeaways, but it still doesn't convince me that the concepts he's espousing are the future of programming. A much better way to teach kids how computers work and how to instruct them? Sure. Am I going to be writing apps using bits of paper in 2050? Probably not.
An interesting point of comparison would be the Ken Iverson "notation as a tool of thought" which also tries to tackle the notion of programming being cumbersome and unintuitive, but comes at it very much from the mathematical, problem solving angle rather than the visual design angle. [3]
[0] https://worrydream.com/LadderOfAbstraction/
Direct manipulation of objects in a shared workspace, instant undo/redo, trivial batch editing, easy duplication and backup, ... all things you can't do with your average SaaS, and all things most developers would revolt over if they had to do their own work without them.
Playground: https://anykey111.github.io
I have a deep drive to build the "important" stuff so that my life has meaning, but there's something hard to motivate about any given thing being "important" when you look at it long enough. It seems like the "important" thing I'm building eventually looks ridiculous and I bounce off of it.
https://m.youtube.com/watch?v=HnZipJOan54&t=1249s
It was a language designed alongside its IDE (which was a fairly rudimentary web app).
Love2D Demo https://github.com/jasonjmcghee/livelove
Language Demo https://gist.github.com/jasonjmcghee/09b274bf2211845c551d435...
I made the decision that state management is manual - the "once" keyword. Any expression/block not using "once" is re-evaluated any time there's a change to the code. If it uses it, it only re-evaluates if you change the (depth-0) code of that once-wrapped expression.
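That "once" semantics is easy to approximate in an existing Lisp. Here's a hedged Common Lisp sketch of the idea (not the livelove implementation, and keyed on the whole printed body rather than only its depth-0 code):

    ;; Sketch of a "once" form: the body is evaluated the first time it is
    ;; seen and whenever its printed source changes; otherwise reloading
    ;; the buffer reuses the cached value.
    (defvar *once-cache* (make-hash-table :test #'equal))

    (defmacro once (&body body)
      (let ((key (prin1-to-string body)))
        `(multiple-value-bind (value present) (gethash ,key *once-cache*)
           (if present
               value
               (setf (gethash ,key *once-cache*) (progn ,@body))))))

    ;; (defparameter *state* (once (make-hash-table)))
    ;; re-evaluating the whole buffer keeps the same hash table alive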
One major issue with vibe coding is parsing divergent code paths, when different prompts create different solutions and architectural compromises.
Parsing that mess is a major headache, but with live coding and time travel, I bet those tools would make managing divergent code branches easier and really take advantage of branching repositories with multiple agents all working in tandem.
The thing about LLMs being only good with text is it's a self-fulfilling prophecy. We started writing text in a buffer because it was all we could do. Then we built tools to make that easier so all the tooling was text based. Then we produced a mountain of text-based code. Then we trained the AI on the text because that's what we had enough of to make it work, so of course that's what it's good at. Generative AI also seems to be good at art, because we have enough of that lying around to train on as well.
This is a repeat of what Seymour Papert realized when computers were introduced to classrooms around the 80s: instead of using the full interactive and multimodal capabilities of computers to teach in dynamic ways, teachers were using them just as "digital chalkboards" to teach the same topics in the same ways they had before. Why? Because that's what all the lessons were optimized for, because chalkboards were the tool that was there, because a desk, a ruler, paper, and pencil were all students had. So the lessons focused around what students could express on paper and what teachers could express on a chalk board (mostly times tables and 2d geometry).
And that's what I mean by "investment", because it's going to take a lot more than a VC writing a check to explore that design space. You've really gotta uproot the entire tree and plant a new one if you want to see what would have grown if we weren't just limited to text buffers from the start. The best we can get is "enterprise low code" because every effort has to come with an expected ROI in 18 months, so the best story anyone can sell to convince people to open their wallets is "these corpos will probably buy our thing".
Since with GraphQL an agent / AI can gradually probe what information another program can give, versus a finite set of interfaces in REST?
Also, Erlang gets an (implicit) mention!
Also, I'm super glad we never got those "APIs" he was talking about. What a horrid thought.
"...Victor worked as a human interface inventor at Apple Inc. from 2007 until 2011." [1]
• Inventing on Principle (https://vimeo.com/906418692) / (https://news.ycombinator.com/item?id=3591298)
• Up and Down the Ladder of Abstraction (https://worrydream.com/LadderOfAbstraction/)
• Learnable Programming (https://worrydream.com/LearnableProgramming/) / (https://news.ycombinator.com/item?id=4577133)
• Media for Thinking the Unthinkable (https://worrydream.com/MediaForThinkingTheUnthinkable/)
Or you could just check his website: https://worrydream.com/
css is primed for this since you can write your rules in such a way that rule order doesn't matter, which means you really don't have to think about where your code is
in my dream world, i have very smart search (probably llms will help), i look at just the minimal amount of code (ideally on a canvas), edit it and remove it from my context
i don't care where or how the code is stored, let the editor figure it out and just give me really good search and debuggers
I care, because I don't want any vendor lock-in. "The unreasonable effectiveness of plain text" hasn't gone anywhere.
On my work laptop I usually have many emacs frames. One displaying a `term-mode` (terminal) buffer, another usually displaying some `compilation-mode` buffer (tests, lint) or `grep` results, two as active workspaces as I'm often dealing with different modules (one maybe the api and the other a UI component). I create other frames as I need them (like exploring another project or doing some git-fu with magit).
[1]: https://youtu.be/ef2jpjTEB5U?si=S7sYRIDJKbdiwYml
[2]: https://youtube.com/playlist?list=PLfGbKGqfmpEJofmpKra57N0FT...
Start the n-th "visual" or "image based" programming language (hoping to at least make _different_ mistakes than the ones that doomed Smalltalk and all the other 'assemble boxes to make a program' things)?
Start an OS, hoping to be able to get a "hello world" in QEMU within a year or two of programming in my sparse free time?
Ask an LLM to write all that? That would be so cool.
Become a millionaire selling supplements, and fund a group of smart programmers to do it for me?
Honest question. Once you've seen this "classic" talk ("classic" in the sense that it is now old enough to work in some countries), what did you start doing? What did you stop doing? What did you change?
That depends on your goals. If you are into building systems for selling them (or production), then you are bound by the business model (platform vs library) and use cases (to make money). Otherwise, you are more limited in time.
To think more realistically about the reality you have to work with, take a look at https://www.youtube.com/watch?v=Cum5uN2634o about types of (software) systems (decay), then decide what you would like to simplify and what you are willing to invest. If you want to properly fix stuff, unfortunately you often have to first properly (formally) specify the current system(s) (the design space) to use as a (test, etc.) reference for the (partial) replacement/improvement/extension system(s).
What these types of lectures usually skip over (as the essentials) are the involved complexity, the solution trade-offs, and interoperability for meaningful use cases with current hw/sw/tools.
Unironically yes, this. Progress happens in this field one dead language at a time; people try a thing and make mistakes continually, so that other people can try again and make other different mistakes. Eventually, something good is found and they integrate it into C++.
But no one is going to find these ideas unless people keep trying and failing. This is a community effort; you can only do your part. And like the rest of us you will likely fail and never be thanked for your efforts -- except by the next poor sap who takes up the good fight, sees your failures, and deftly avoids all your mistakes. But the plus side is, if you succeed... you also won't be thanked or rewarded, so scratch all that, you probably shouldn't bother if you respect yourself.
The Future of Programming (2013) - https://news.ycombinator.com/item?id=44746821 - July 2025 (10 comments)
Bret Victor – The Future of Programming (2013) [video] - https://news.ycombinator.com/item?id=43944225 - May 2025 (1 comment)
The Future of Programming (2013) - https://news.ycombinator.com/item?id=32912639 - Sept 2022 (1 comment)
The Future of Programming (2013) - https://news.ycombinator.com/item?id=15539766 - Oct 2017 (66 comments)
References for “The Future of Programming” - https://news.ycombinator.com/item?id=12051577 - July 2016 (26 comments)
Bret Victor The Future of Programming - https://news.ycombinator.com/item?id=8050549 - July 2014 (2 comments)
The Future of Programming - https://news.ycombinator.com/item?id=6129148 - July 2013 (341 comments)
https://christophlocher.com/notes/ethnographic-research-on-d...
The thing though about "nobody can make them work" is that there's really no funding to do so, because corporations don't really see a payout on the other side. So is it "impractical" or just "of no interest to corporations"? Because with AI, we see what happens when corporations think there is a giant payday on the other side. Somehow "unlock our human potential through better ergonomic design of technology" doesn't open wallets, but "replace your entire dev team with a robot" causes an endless tsunami of cash. One is "impractical" and the other is "our new reality". I'd say as far as feasibility goes, the former is more practical than the latter, but as far as fundability goes, the latter is more practical than the former.
If I'm going to take away anything from what Victor has said over the years, it's what the article starts off saying, that his...
vision is rooted in the idea that the computer revolution of the ’70s and early ’80s was cut short, primarily by premature commercialization. While the computer as a medium was still unfolding its potential, and way before it could do so entirely, it was solidified into commercial products, thereby stifling its free growth. Once corporations had built their businesses on the ideas developed so far, they were only interested in incremental change that could easily be integrated into the products, rather than revolutionary new ideas.
I think that's 100% true by construction, and we can see that in the languages that have risen to the top, which have all been molded for use by corporations for corporate purposes. In this case "impracticable" means "not suitable for corporate use", and it's simply not true that programming languages are only practical if corporations can use them profitably, because there are so many other purposes for programming languages. And so I think that's the reason for the culty vibes, because without them he wouldn't be able to do what he does. If he sold it in more grounded terms -- fundamental HCI research -- he couldn't get funded. So he talks in terms of human revolutions, and then he gets some true believers and effective-altruism people to open their wallets, gets some of those SV devs to spread some of their big-tech money around to causes they care about, because they're the ones who have to ultimately deal with the bad programming UX we've built for ourselves.
And that's what Bret Victor is ultimately advocating: better UX for devs, mostly through observability. That's not so radical or impractical. His work has to be because otherwise he doesn't have a job... moreover, he'd have to get a job. But that doesn't mean he doesn't have a good point.
LAC-Tech•2mo ago
I still don't know what he means about not liking APIs though. "Communicating with Aliens", what insight am I missing?
cfiggers•2mo ago
But when two computers want to talk to each other and don't speak a "shared language" (aka, the client specifically must conform to the server's "language"—it's very one-sided in that sense), then no amount of time will allow them to learn one another's rules or settle on a shared communication contract without a human programmer getting involved.