They showed signs that some people there understood that their development environment was it, but it obviously never fully got through to decision-makers: They had CLOE, a 386 PC deployment story in partnership with Gold Hill, but they’d have been far better served by acquiring Gold Hill and porting Genera to the 386 PC architecture.
https://www.digiater.nl/openvms/decus/vmslt05a/vu/alpha_hist...
> Although Alpha was declared an "open architecture" right from the start, there was no consortium to develop it. All R&D actions were handled by DEC itself, and sometimes in cooperation with Mitsubishi. In fact, though the architecture was free de jure, most important hardware designs of it were pretty much closed de facto, and had to be paid-licensed (if possible at all). So, it wasn't that thing helping to promote the architecture. To mention, soon after introduction of EV4, DEC's high management offered to license manufacturing rights to Intel, Motorola, NEC, and Texas Instruments. But all these companies were involved in different projects and were of very little to no interest in EV4, so they refused. Perhaps, the conditions could be also unacceptable, or something else. Mistake #5.
Networking was the initial impetus, but the phrase came to include programming interfaces, which is why POSIX was considered such a big deal. The idea was to promote interoperability and portability, as opposed to manufacturer-specific islands like those from IBM and DEC.
What I’m suggesting is that they could have done a full port to the hardware; OpenGenera is still an Ivory CPU emulator. In 1986-7 you could get an AT-compatible 80386 system running at 16-25MHz that supported 8-32MB of RAM for 10-20% the price of a Symbolics workstation, and while it might not run Lisp quite as fast as a 3600 series system, it would still be fast enough for both deployment and development—and the next generation would run Lisp at comparable performance.
This would be tied to the bit width of the system.
What? They’re awesome. They present a vision of the future that never happened. And I don’t think anyone serious expects lisp machines to come back btw.
ARM also used to have opcodes for Java: https://en.wikipedia.org/wiki/Jazelle
Hauntology strikes again
Amiga romantics.
8-bit romantics.
PDP-10 romantics.
Let them stay. Let them romanticize. <glasses tint="rose">
It's fair enough to say that lisp machines had this or that hardware limitation, or that they weren't really compatible with market needs, but to criticize 'lisp machine romantics' like this article does is to fail to understand what really motivates that romanticism. Maybe you have to be a romantic to really get it. Romanticism is abstract; it's about chasing feelings and inspirations that you don't really understand yet. It's about unrealized promises more than it's about the actual concrete thing that inspires them.
(I'm also an Amiga romantic, and I think what inspires me about that machine is equally abstract and equally points to a human attitude towards making and using software that seems sadly in decline today)
A novice was trying to fix a broken Lisp machine by turning the power off and on.
Knight, seeing what the student was doing, spoke sternly: “You cannot fix a machine by just power-cycling it with no understanding of what is going wrong.”
Knight turned the machine off and on.
The machine worked.
Here's another Moon story from the humor directory:
https://github.com/PDP-10/its/blob/master/doc/humor/moon's.g...
Moon's I.T.S. CRASH PROCEDURE document from his home directory, which goes into much more detail than just turning it off and on:
https://github.com/PDP-10/its/blob/master/doc/moon/klproc.11
And some cool Emacs lore:
https://github.com/PDP-10/its/blob/master/doc/eak/emacs.lore
Reposting this from the 2014 HN discussion of "Ergonomics of the Symbolics Lisp Machine":
https://news.ycombinator.com/item?id=7878679
http://lispm.de/symbolics-lisp-machine-ergonomics
https://news.ycombinator.com/item?id=7879364
eudox on June 11, 2014
Related: A huge collection of images showing Symbolics UI and the software written for it:
http://lispm.de/symbolics-ui-examples/symbolics-ui-examples
agumonkey on June 11, 2014
Nice, but I wouldn't confuse static images with the underlying semantic graph of live objects that's not visible in pictures.
DonHopkins on June 14, 2014
Precisely! When Lisp Machine programmers look at a screen dump, they see a lot more going on behind the scenes than meets the eye.
I'll attempt to explain the deep implications of what the article said about "Everything on the screen is an object, mouse-sensitive and reusable":
There's a legendary story about Gyro hacking away on a Lisp Machine, when he accidentally trashed the function cell of an important primitive like AREF (or something like that -- I can't remember the details -- do you, Scott? Or does Devon just make this stuff up? ;), and that totally crashed the operating system.
It dumped him into a "cold load stream" where he could poke around at the memory image, so he clambered around the display list, a graph of live objects (currently in suspended animation) behind the windows on the screen, and found an instance where the original value of the function pointer had been printed out in hex (which of course was a numeric object that let you click up a menu to change its presentation, etc).
He grabbed the value of the function pointer out of that numeric object, poked it back into the function cell where it belonged, pressed the "Please proceed, Governor" button, and was immediately back up and running where he left off before the crash, like nothing had ever happened!
Here's another example of someone pulling themselves back up by their bootstraps without actually cold rebooting, thanks to the real time help of the networked Lisp Machine user community:
ftp://ftp.ai.sri.com/pub/mailing-lists/slug/900531/msg00339.html
Also eudox posted this link:
Related: A huge collection of images showing Symbolics UI and the software written for it:
http://lispm.de/symbolics-ui-examples/symbolics-ui-examples....
https://github.com/portacle/portacle/issues/182
In 2020 they went full-time on developing the game Kandria.
They're still active:
The hardware was not very good. Too much wire wrap and slow, arrogant maintenance.
I once had a discussion with the developers of Franz LISP. The way it worked was that it compiled LISP source files and produced .obj files. But instead of linking them into an executable, you had to load them into a run-time environment. So I asked, "could you put the run time environment in another .obj file, so you just link the entire program and get a standalone executable"? "Why would you want to do that?" "So we could ship a product." This was an alien concept to them.
So was managing LISP files with source control, like everything else. LISP gurus were supposed to hack.
And, in the end, 1980s "AI" technology didn't do enough to justify that hardware.
This mentality seems to have carried over to (most) modern FP stacks.
Most of them still require a very specific, very special, very fragile environment to run, and require multiple tools and carefully run steps just to do the same thing you can do with a compiled executable linked against the OS.
They weren't made for having libraries, or for being packaged to run on multiple machines, or for being distributed to customers to run on their own computers. Perhaps JS was the exception, but only for the last part.
Sure, it mostly works today, but a lot of people put in a lot of effort so we can keep shoving square pegs into round holes.
For a while Conda seemed to have cracked this, but there too I now get unresolvable conflicts. It really boggles the mind how you could get this so incredibly wrong and still have the kind of adoption that python has.
With the exception of Gstreamer. I use some awful hacks to break out of virtual environments and use the system Gstreamer, because it's not on PyPi....
Where I see Python used is in places where you do not need it packaged as executables:
1. Linux - where the package manager solves the problem. I use multiple GUI apps written in python
2. On servers - e.g. Django web apps, where the environment is set up per application
3. Code written for specific environments - even for specific hardware
4. One off installs - again, you have a specified target environment.
In none of the above cases do I find the environment to be fragile. On the other hand, if you are trying to distribute a Windows app to a large number of users I would expect it to be problematic.
Which is significantly more than was needed for different technologies to achieve similar results.
Quick start guide: works on my machine.
I'll grant that there are plenty of languages that seemed designed for research and playing around with cool concepts rather than for shipping code, but the FP languages that I see getting the most buzz are all ones that can ship working code to users, so the end users can just run a standard .exe without needing to know how to set up a runtime.
I feel that is the biggest barrier to their adoption nowadays (and also silly things like requiring ;; at the end of the line)
Pure functions are a good theoretical exercise but they can't exist in practice.
Well, they can, just not all the way up to the top level of your program. But the longer you can hold off on your functions having side effects, the more predictable and stable your codebase will be, with fewer bugs and less chance of runtime issues as an added benefit.
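A tiny Common Lisp sketch of that shape (the function names are mine, purely illustrative): the calculation is a pure function, and the only side effects live in a thin wrapper at the top.

    ;; Pure core: no I/O, no mutation -- the result depends only on the arguments.
    (defun price-with-tax (net rate)
      (* net (+ 1 rate)))

    ;; Impure shell: all the side effects (reading, printing) stay up here.
    (defun main ()
      (let ((net (read)))                   ; side effect: reads a number
        (format t "Total: ~,2F~%"           ; side effect: prints the result
                (price-with-tax net 0.2))))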
Q: How many Prolog programmers does it take to change a lightbulb?
A: Yes.
Also have you managed to eliminate the side effect of your IP register changing when your program is running? ;)
I find that both Python and Javascript allow you to use functional code when appropriate, without forcing you to use it when it isn’t.
Pure functions often exist in practice and are useful for preventing many bugs. Sure, they may not be suitable for some situations but they can prevent a lot of foot guns.
Here's a Haskell example with all of the above:

    import System.Random (randomRIO)

    main :: IO ()
    main = do
      num <- randomRIO (1, 100)
      print $ pureFunction num

    pureFunction :: Int -> Int
    pureFunction x = x * x + 2 * x + 1

Also: https://hanshuebner.github.io/lmman/pathnm.xml
It is worth mentioning that while it is not versioning per se, APFS and ZFS support instantaneous snapshots and clones as well.
Btrfs supports snapshots, too.
HAMMER2 in DragonFlyBSD has the ability to store revisions in the filesystem.
(SCCS handled collaborative development and merges a lot worse than anything current, but... versioning file systems were worse there, too; one war story I heard involved an overenthusiastic developer "revising" someone else's file with enough new versions that by the time the original author came back to it, their last version of the code was unrecoverable.)
The hardware was never very interesting to me. It was the "lisp all the way down" that I found interesting, and the tight integration with editing-as-you-use. There's nothing preventing that from working on modern risc hardware (or intel, though please shoot me if I'm ever forced back onto it).
The ".obj" file was a binary file that contain machine instructions and data. It was "fast loaded" and the file format was called "fasl" and it worked well.
The issue of building an application wasn't an issue because we had "dumplisp" which took the image in memory and wrote it to disk. The resulting image could be executed to create a new instance of the program, at the time dumplisp was run. Emacs called this "unexec" and it did approximately the same thing.
Maybe your discussions with my group predated me and predated some of the above features, I don't know. I was Fateman's group from '81-84.
I assume your source control comments were about the Lisp Machine and not Franz Lisp. RCS and SCCS were a thing in the early 80's, but they didn't really gain steam until after I arrived at UCB. I was the one (I think... it was a long time ago) that put Franz Lisp under RCS control.
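For anyone curious what that fasl/dumplisp workflow looks like in a present-day Common Lisp, here is a rough SBCL-flavored sketch (the file name and the MAIN entry point are made up for illustration; Franz Lisp's own commands differed):

    ;; Compile a source file to a fasl and load it -- the descendant of the
    ;; "fast load" format described above.
    (compile-file "app.lisp")                 ; writes app.fasl
    (load "app.fasl")

    ;; Dump the running image, dumplisp/unexec style.  With :EXECUTABLE T,
    ;; SBCL writes a standalone binary that starts by calling MAIN.
    (sb-ext:save-lisp-and-die "app"
                              :toplevel #'main
                              :executable t)

That is more or less the "ship a product" story being asked about further up the thread.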
This is the original Oppen-Nelson simplifier, an ancestor of today's SMT solvers. It was modified by them under contract for the Pascal-F Verifier, a very early program verifier.
We kept all the code under SCCS and built with make, because the LISP part was only part of the whole system.
Were the macros originally from another dialect of Lisp?
I talked to Fateman at some point. Too long ago to remember about what.
Source: I was a mainframe compiler developer at IBM during this era.
It's hard to find where to draw the line when it comes to specialized hardware, and the line moves back and forth all the time. From personal experience it went from something like "multiple input boards, but handle the real-time Very Fast interrupts on the minicomputer" - and spending six months shaving off half a millisecond so that it worked (we're in the eighties here). Next step: shift those boards into a dedicated box, let it handle the interrupts and DMA and all that, and just do the data demuxing on the computer. Next step (and I wasn't involved in that): do all the demuxing in the box, let the computer sit back and just shove all of it to disk. And that's the step which went too far; the box got slow. Next step: make the box simpler again, do all of the heavy demuxing and assembling on the computer, computers are fast after all.
And so on and so forth.
- Naylor and Runciman (2007) ”The Reduceron: Widening the von Neumann Bottleneck for Graph Reduction using an FPGA”: https://mn416.github.io/reduceron-project/reduceron.pdf
- Burrows (2009) “A combinator processor”: https://q4.github.io/dissertations/eb379.pdf
- Ramsay and Stewart (2023) “Heron: Modern Hardware Graph Reduction”: https://dl.acm.org/doi/10.1145/3652561.3652564
- Nicklisch-Franken and Feizerakhmanov (2024) “Massimult: A Novel Parallel CPU Architecture Based on Combinator Reduction”: https://arxiv.org/abs/2412.02765v1
- Xie, Ramsay, Stewart, and Loidl (2025) “From Haskell to a New Structured Combinator Processor” (KappaMutor): https://link.springer.com/chapter/10.1007/978-3-031-99751-8_...
jesus christ, don't say that around here - you'll be swamped by fanatical emacs users describing various bits of lisp they've written over the years and what they each do. it will send you insane
Author falls into the same trap he talks about in the article. AI is not going away, we are not going back to the pre-AI world.
I can't predict when the shakeout will be, but I can predict that not every AI company is going to survive when it happens. The ones that do survive will be the ones that found a viable niche people are willing to pay for, just as the dot-com bubble bursting didn't kill Paypal, eBay, and so on. But there are definitely going to be some companies going bankrupt, that's pretty clear even at this point.
I'm juuust about old enough to remember the end of the Lisp Machine bubble (we had one or two at uni in the early 90s, and they were archaic by then). But obviously Lisp machines were the wrong way to go, even if they were a necessary step - obviously, hardware-mediated permanent object storage is the way forwards! POP! Ah, maybe not. Okay but can't you see we need to run all this on a massive transputer plane? POP! Oh. Okay how about this, we actually treat the microcode as the machine language, so the user-facing opcodes are like 256 bits long, and then we translate other instruction sets into that on the fly, like this - the Transmeta Crusoe! It's going to revolutionise everything! POP! Ah, what? Okay well how about...
And we're only up to the early 2000s.
It's bubbles, all the way back. Many of these things were indeed necessary steps - if only so We Learned Not To Do That Again - but ultimately are a footnote in history.
In 30 years' time people will have blog posts about how in the mid-2020s people had this thing where they used huge sheds full of graphics cards to run not-working-properly Boolean algebra to generate page after page after page of pictures of wonky-looking dogs and Santa Clauses, and we'll look at that with the same bemused nostalgia as we do with the line printer Snoopy calendars today.
The culture was nerdy, and the product promises were too abstract to make sense outside of Nerdania.
They were fundamentally different to the dot com bubble, which was hype-driven, back when "You can shop online!" was a novelty.
The current AI bubble is an interesting hybrid. The tech is wobbly research-grade, but it's been hyped by a cut-throat marketing engine aimed at very specific pain points - addictive social contact for younger proles, "auto-marketing team" for marketers, and "cut staffing and make more money" promises for management.
No, as it turns out.
But it was fun while it lasted.
You know what they say about when the taxi driver is giving you strong financial opinions.
The author is saying that those special purpose machines will age out quickly when the task of advanced computing shifts (again).
You seem to be making the assumption that "the huge neural networks they were designed for" are the only way to build AI. Things could shift under our feet again.
The author (and I) have seen too many people say that, only to be proved very wrong shortly thereafter. That means the assertion that this time we have the approach right doesn't carry quite the logical force one might think (ignore the previous 7 times somebody else said just the same thing).
I feel it would be cool to sometime run code on a radiation hardened Forth chip, or some obscure Lisp hardware, but would it be life changing? I doubt it.
Making something like that has turned into a lifetime project for me. Implemented a freestanding lisp on top of Linux's stable system call interface. It's gotten to the point it has delimited continuations.
https://github.com/lone-lang/lone/
It's a lisp interpreter with zero dependencies targeting Linux exclusively.
I've written about a few of its development milestones:
https://www.matheusmoreira.com/articles/self-contained-lone-...
https://www.matheusmoreira.com/articles/delimited-continuati...
I'm particularly proud of my ELF hack to allow the interpreter to introspect into a lisp code section at runtime without any /proc/self/exe shenanigans. Wish other languages would adopt it.
Top comment and its replies talk about linking the lisp code into a self-contained, easily distributable application:
https://news.ycombinator.com/item?id=45989721
I think I addressed that problem adequately. I can create applications by copying the interpreter and patching in some special ELF segments containing lisp modules. The mold linker even added features to make it easy and optimal.
Since there is no libc nonsense, Linux compatibility depends only on the system calls used. Theoretically, applications could target kernels from the 90s.
My Linux system call philosophy:
https://www.matheusmoreira.com/articles/linux-system-calls
At some point I even tried adding a linux_system_call builtin to GCC itself but unfortunately that effort didn't pan out.
Emacs is incredibly stable. Most problems happen in custom-made packages. I don't even remember Emacs ever segfaulting for me on Linux. On Mac it can happen, but very rarely. I don't ever remember losing my data in Emacs - even when I deliberately kill the process, it recovers the unsaved changes.
You can go read about the real differences on sites like Chips and Cheese, but those aren't pop-sciencey and fun! It's mostly boring engineering details like the size of reorder buffers and the TSMC process node and it takes more than 5 minutes to learn. You can't just pick it up one day like a children's story with a clear conclusion and moral of the story. Just stop. If I can acquire all of your CPU microarchitecture knowledge from a Linus Tech tips video, you shouldn't have an opinion on it.
If you look at the finished product and you prefer the M series, that's great. But that doesn't mean you understand why it's different from the Zen series.
It isn't now... ;-)
It's interesting to look at how close old ARM2/ARM3 code was to 6502 machine code. It's not totally unfair to think of the original ARM chip as a 32-bit 6502 with scads of registers.
And, for fairly obvious reasons!
Stephen Furber has extended discussion of the trade-offs involved in those decisions in his "VLSI RISC Architecture and Organization" (and also pretty much admits that having PC as a GPR is a bad idea: hardware is noticeably complicated for rather small gains on the software side).
It's telling that ARM, Apple, and Qualcomm have all shipped designs that are physically smaller, faster, and consume way less power vs AMD and Intel. Even ARM's medium cores have had higher IPC than same-generation x86 big cores since at least the A78. SiFive's latest RISC-V cores are looking to match or exceed x86 IPC too. x86 is quickly becoming dead last, which shouldn't be possible if ISA doesn't matter at all, given AMD and Intel's budgets (AMD for example spends more on R&D than ARM's entire gross revenue).
ISA matters.
x86 is quite constrained by its decoders, with Intel's 6- and 8-wide cores being massive and sucking an unbelievable amount of power, and AMD choosing a hyper-complex 2x4 decoder implementation with a performance bottleneck in serial throughput. Meanwhile, ARM designs ship 6-wide and wider decoders without that complexity.
32-bit ARM is a lot simpler than x86, but ARM still claimed a massive 75% reduction in decoder size when switching to 64-bit-only in A715, while increasing throughput. Things like a uop cache aren't free. They take die area and power. Even worse, somebody has to spend a bunch of time designing and verifying these workarounds, which balloons costs and increases time to market.
Another way the ISA matters is memory models. ARM uses barriers/fences, which are only added where needed. x86 uses a much tighter memory model that implies a lot of things the developers and compiler didn't actually need or want, and that impacts performance. The solution (not sure if x86 actually does this) is doing deep analysis of which implicit barriers can be provably ignored and speculating on the rest. Once again though, wiring all these various proofs into the CPU is complicated and error-prone, which slows things down while bloating circuitry, using extra die area/power, and sucking up time/money that could be spent in more meaningful ways.
While the theoretical performance mountain is the same, taking the stairs with ARM or RISC-V is going to be much easier/faster than trying to climb up the cliff faces.
It seems to me that Apple is simply going to require native ARM versions of new software if you want it to be signed and verified by them (which seems pretty reasonable after 5+ years).
These companies target different workloads. ARM, Apple, and Qualcomm are all making processors primarily designed to be run in low power applications like cell phones or laptops, whereas Intel and AMD are designing processors for servers and desktops.
> x86 is quickly becoming dead last, which shouldn't be possible if ISA doesn't matter at all, given AMD and Intel's budgets (AMD for example spends more on R&D than ARM's entire gross revenue).
My napkin math is that Apple’s transistor volumes are roughly comparable to the entire PC market combined, and they’re doing most of that on TSMC’s latest node. So at this point, I think it’s actually the ARM ecosystem that has the larger R&D budget.
This hasn't been true for at least half of a decade.
The latest generation of phone chips run from 4.2GHz all the way up to 4.6GHz with even just a single core using 12-16 watts of power and multi-core hitting over 20w.
Those cores are designed for desktops and happen to work in phones, but the smaller, energy-efficient M-cores and E-cores still dominate in phones because they can't keep up with the P-cores.
ARM's Neoverse cores are mostly just their normal P-cores with more validation and certification. Nuvia (designers of Qualcomm's cores) was founded because the M-series designers wanted to make a server-specific chip and Apple wasn't interested. Apple themselves have made mind-blowingly huge chips for their Max/Ultra designs.
"x86 cores are worse because they are server-grade" just isn't a valid rebuttal. A phone is much more constrained than a watercooled server in a datacenter. ARM chips are faster and consume less power and use less die area.
> So at this point, I think it’s actually the ARM ecosystem that has the larger R&D budget.
Apple doesn't design ARM's chips and we know ARM's peak revenue and their R&D spending. ARM pumps out several times more cores per year along with every other thing you would need to make a chip (and they announced they are actually making their own server chips). ARM does this with an R&D budget that is a small fraction of AMD's budget to do the same thing.
What is AMD's excuse? Either everybody at AMD and Intel suck or all the extra work to make x86 fast (and validating all the weirdness around it) is a ball and chain slowing them down.
Neither is "simple" but the axis is similar.
I don't know a lot of Lisp. I did some at school as a teenager, on BBC Micros, and it was interesting, but I never did anything really serious with it. I do know about Forth though, so perhaps people with a sense of how both work can correct me here.
Sadly, Forth, much as I love it and have done since I got my hands on a Jupiter Ace when I was about 9 or 10 years old, has not been a success, and probably for the same reasons as Lisp.
It just looks plain weird.
It does. I mean I love how elegant Forth is: you can implement a basic inner interpreter and a few primitives in a couple of hundred lines of assembler, and then the rest is just written in Forth in terms of those primitives (okay, pages and pages of dw ADDRESS_OF_PRIMITIVE instructions rather than Forth proper). I'm told that you can do the same trick with Lisp, and maybe I'll look into that soon.
But the code itself looks weird.
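On the "same trick with Lisp" point: the usual demonstration is a tiny evaluator, with a handful of primitives supplied by the host and everything else written in the language itself. A toy Common Lisp sketch of the idea (the primitive set and the names are arbitrary, and it ignores error handling entirely):

    ;; EVAL* handles a few special forms; everything else is either a host
    ;; primitive or a user-defined lambda, applied by APPLY*.
    (defun eval* (form env)
      (cond ((symbolp form) (cdr (assoc form env)))        ; variable lookup
            ((atom form) form)                             ; self-evaluating
            ((eq (first form) 'quote) (second form))
            ((eq (first form) 'if)
             (if (eval* (second form) env)
                 (eval* (third form) env)
                 (eval* (fourth form) env)))
            ((eq (first form) 'lambda) (list 'closure form env))
            (t (apply* (eval* (first form) env)
                       (mapcar (lambda (a) (eval* a env)) (rest form))))))

    (defun apply* (fn args)
      (if (and (consp fn) (eq (first fn) 'closure))
          (destructuring-bind (tag (lam params body) closed-env) fn
            (declare (ignore tag lam))
            (eval* body (pairlis params args closed-env)))
          (apply fn args)))                                ; host primitive, e.g. #'+

    ;; ((lambda (x) (if (> x 3) (* x x) x)) 5)  =>  25
    (eval* '((lambda (x) (if (> x 3) (* x x) x)) 5)
           (list (cons '> #'>) (cons '* #'*)))

Everything above those primitives can then be bootstrapped in the interpreted language itself, which is the same move as Forth's inner interpreter plus colon definitions.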
Every language that's currently successful looks like ALGOL.
At uni, I learned Turbo Pascal. That gave way to Modula-2 in "real" programming, but by then I'd gotten my hands on an account on the Sun boxes and was writing stuff in C. C looked kind of like Pascal once you got round the idea that curly brackets weren't comments any more, so it wasn't a hard transition. I wrote lots of C, masses and masses, and eventually shifted to writing stuff in Python for doing webby stuff and C for DSP. Python... looks kind of like ALGOL, actually; you don't use "begin" and "end", you just indent properly, which you should be doing anyway. Then Go, much later, which looks kind of like Pascal to me, which in turn looks kind of like ALGOL.
And so on.
You write line after line after line of "this thing does this to that", and it works. It's like writing out a recipe, even more so if you declare your ingredients^W variables at the top.
I love Forth, I really want to love Lisp but I don't know enough about it, but everyone uses languages that look like ALGOL.
In the late 1960s Citroën developed a car where the steering and speed were controlled by a single joystick mounted roughly where the steering wheel would be. No throttle, no clutch, no gears, just a joystick with force feedback to increase the amount of force needed to steer as the car sped up. Very comfortable, very natural, even more so when the joystick was mounted in the centre console like in some aircraft. Buuuuut, everyone uses steering wheels and pedals. It was too weird for people.
I don't like when anything short of taking over the world counts as failure. Forth has been an enormous success! Forth has visited asteroids, run factories and booted millions of computers. It has done well, and if it's heading off into the sunset it should be remembered for what it did rather than what it didn't do. I would be beyond thrilled if my language did a tenth as well as Forth.
It fits a particular ecological niche, but these days there's almost no reason to do things that way. In the olden days of the early 90s when I needed to write embedded code to run on what was basically a Z80 SBC, it was easier to write a Forth for it and assemble it natively on a clunky old CP/M machine (I used a Kaypro of some sort at work, but an Osborne 1 at home) than it was to struggle on with the crappy (like, really crappy) MS-DOS cross-assembler on the PCs we had.
Now of course I could emulate every single computer in the entire company on a ten quid embedded board, all at the same time.
> No, it wasn’t.
I kind of think it was. The best argument, I think, is embodied in Kent Pitman's comments in this usenet thread [1], where he argues that for the Lisp Machine romantics (at least the subset that includes him) what they are really referring to is the total integration of the software, and he gives some pretty good examples of the benefits it brings. He freely admits there's no reason why the experience could not be reproduced on other systems; it's that it hasn't been that is the problem.
I found his two specific examples particularly interesting. Search for
* Tags Multiple Query Replace From Buffer
and * Source Compare
which are how he introduced them. He also describes "One of the most common ways to get a foothold in Genera for debugging", which I find pretty appealing and still not available in any modern systems.

[1] https://groups.google.com/g/comp.lang.lisp/c/XpvUwF2xKbk/m/X...
It's not like it's the only system that suffers this, but "working well with others" is a big key to success in almost every field.
I'm absolutely fascinated by what worked and was possible in that venue, just like I find rust code fascinating. These days lisp is much more workable, as implementations slowly get over the "must coexist with other software" hurdle. There are still things that are really hard to put into other computer languages.
C++ might be easier now; I don't know.
From looking at it a little, it seems rust has this pretty reliably - probably helped by sharing link environments with LLVM.
(I've only explored this a little from time to time). Mostly my work is all C and a bit of C++.
I dunno, as a Lisper I don't even have to think very hard - virtually any platform available to me, I can write almost anything in Lisp - for JVM and .Net - with Clojure; for Lua with Fennel; for Flutter with ClojureDart; Python - libpython-clj; C/C++ - Jade, CL, Carp and Jank; BEAM - Clojerl and LFE; Shell-scripting - babashka; For targeting js there are multiple options - clojurescript, nbb, squint.
Knowing some Lisp today is as practical as it gets. I really feel like a true polyglot coder - switching between different Lisps, even for drastically dissimilar platforms incurs virtually zero overhead while jumping even between JS and TS is always a headache.
i barely got to play with one for a few hours during an "ai" course, so i didn't really figure much of it out but ... oh yeah, it was "cool"! also way-way-way over my budget. i then kept an eye for a while on the atari transputer workstation but no luck, it never really took off.
anyway, i find this article quite out of place. what hordes of romantically spoiled lisp machine nostalgia fanatics harassed this poor guy to the extreme that he had to go on this (pretty pointless) disparaging spree?
https://userpages.umbc.edu/%7Evijay/mashey.on.risc.html
explains a lot of "what happened in the 1980s?" particularly why VAX and 68k were abandoned by their manufacturers. The last table shows how processors that had really baroque addressing modes, particularly involving indirection, did not survive. The old 360 architecture was by no means RISC but it had simple addressing modes and that helped it survive.
A Lisp-optimized processor would be likely to have indirection and generally complex ways in which instructions can fail, which gets in the way of efficient pipelined implementations. People like to talk about "separation of specification and implementation", but Common Lisp was designed with one eye on the problem of running it efficiently on the "32-bit" architectures of the 1980s. It did OK on the 68k, which was big then, and also on the various RISC architectures and on x86, which is simple enough that it is practical to rewrite the instruction stream into microinstructions that can be easily executed.
FWIW: Technology Connections did a teardown of why Betamax wasn't better than VHS: https://www.youtube.com/watch?v=_oJs8-I9WtA&list=PLv0jwu7G_D...
And the whole series if you actually enjoy watching these things: https://www.youtube.com/playlist?list=PLv0jwu7G_DFUrcyMYAkUP...
> The tapes were more compact and used less storage space.
That is generally considered the "death knell" of the format. People generally chose VHS because they could record 6-9 hours on a single tape (while on vacation, say), but the smaller size of the Betamax cassette limited it to shorter recordings.
It also impacted the quality of feature-length movies: they could use the fastest tape speed on VHS, but had to use a slower tape speed on Betamax, negating its supposed quality improvement.
> I also liked that you could use betamax with a Sony PCM F1 processor to record digital audio before the advent of the DAT format (digital audio tape)
Most people (consumers) never used their VCRs to record and play back digital audio, they used CDs and cassettes.
The PCM F1 was a professional / prosumer device, not a consumer device like a CD player. I assume that people who were using it were going to have a separate VCR for studio use than their living room (VHS), and weren't going to decide between VHS vs Betamax for pairing with their PCM F1.
It's probably worth reading this Alan Kay comment, which I excerpted from https://www.quora.com/Papers-about-the-Smalltalk-history-ref... on Quora before it started always blocking me as a robot:
> The idea of microcode was invented by Maurice Wilkes, a great pioneer who arguably made the earliest programmable computer — the EDSAC (pace Manchester Baby). The idea depends partly on the existence of a “large enough” memory that is much faster (3–10 times) than the 1st level RAM of the computer.
> A milestone happened when the fast memory for microcoding was made reloadable. Now programmable functions that worked as quickly as wired functions could be supplied to make a “parametric” meta-machine. This technique was used in all of the Parc computers, both mainframes and personal computers.
> Typical ratios of speed of microcode memory to RAM were about 5x or more, and e.g the first Altos had 4kbytes (1k microinstructions) that could be loaded on the fly. The Alto also had 16 program counters into the microcode and a shared set of registers for doing work. While running, conditions on the Alto — like a disk sector passing, or horizontal retrace pulse on the CRT — were tied to the program counters and these were concurrently scanned to determine the program counter that would be used for the next microinstruction. (We didn’t like or use “interrupts” … )
> This provided “zero-overhead tasking” at the lowest level of the machine, and allowed the Alto to emulate almost everything that used to be the province of wired hardware.
> This made the machine affordable enough that we were able to build almost 2000 of them, and fast enough to do the functionality of 10–15 years in the future.
> Key uses of the microcode were in making suitable “language machines” for the VHLLs we invented and used at Parc (including Smalltalk, Mesa, etc.), doing real time high quality graphical and auditory “animations/synthesis”, and to provide important systems functions (e.g. certain kinds of memory management) as they were invented.
> It’s worth looking at what could have been done with the early 16 bit VLSI CPUs such as the Intel 8086 or the Motorola 68K. These were CISC architectures and were fast enough internally to allow a kind of microcoding to support higher level language processing. This is particularly important to separate what is a kind of interpreter from having its code fetched from the same RAM it is trying to emulate in.
> The 68K in fact, used a kind of “nano-coding”, which could have been directed to reloadability and language processing.
> The big problem back then was that neither Intel nor Motorola knew anything about software, and they didn’t want to learn (and they didn’t).
> The nature of microcode is that architectures which can do it resemble (and anticipated) the RISC architectures. And some of the early supercomputers — like the CDC 6600 — were essentially RISC architectures as well. So there was quite a bit of experience with this way of thinking.
> In the 80s, the ratio between RAM and CPU cycles was closing, and Moore’s Law was starting to allow more transistors per chip. Accessing a faster memory off CPU chip started to pay off less (because going off chip costs in various ways, including speed).
> Meanwhile, it was well known that caching could help most kinds of architectures (a landmark study by Gordon Bell helped this understanding greatly), and that — if you are going to cache — you should have separate caches for instructions and for data.
> Up to a point, an instruction cache can act like a microcode memory for emulating VHLLs. The keys are for it (a) to be large enough to hold the inner loops of the interpreter, (b) to not be flushed spuriously, and (c) for the machine instructions to execute quickly compared to the cache memory cycle.
> Just to point the finger at Intel again, they did a terrible job with their cached architectures, in part because they didn’t understand what could be gained with VHLLs.
> A really interesting design was the first ARM — which was a pretty clean RISC and tidy in size. It could have been used as an emulator by wrapping it with fast instruction memory, but wasn’t. I think this was a “point of view” disconnect. It was a very good design for the purpose of its designers, and there wasn’t enough of a VHLL culture to see how it could be used at levels much higher than C.
> If we cut to today, and look at the systems that could be much better done, we find that the general architectures are still much too much single level ones, that ultimately think that it is good to have the lowest levels in a kind of old style machine code programmed in a language like C.
> A very different way to look at it might be to say: well, we really want zillions of concurrent and safe processes with very fast intermessaging programmed at the highest levels — what kind of architecture would facilitate that? We certainly don’t want either “interrupts” or long latency process switching (that seems crazy to “old Parc people”. We probably want to have “data” and “processing” be really close to each other rather than separated in the early von Neumann ways.
> And so forth. We won’t be able to be perfect in our hardware designs or to anticipate every future need, so we must have ways to restructure the lowest levels when required. One way to do this these days is with FPGAs. And given what it costs to go off chips, microcoding is far from dead as another way to help make the systems that we desire.
> The simple sum up here is that “hardware is just software crystallized early”, and a good systems designer should be able to design at all levels needed, and have the chops to make any of the levels if they can’t be purchased …
Boss: Hey there, you like learning new things right?
Him (sensing a trap): Errr, yes.
Boss: But you don’t program in lisp do you?
Him (relieved, thinking he’s getting out of something): No.
Boss: Good thing they sent these (gesturing at a literal bookshelf full of manuals that came with the symbolics).
So he had to write a TCP stack. He said it was really cool because it had time-travel debugging: the ability to hit a breakpoint, walk the execution backwards, change variables and resume, etc. This was in the 1980s. Way ahead of its time.