The Symbolics people refined this approach, while the LMI people kept the original design until nearly the end, when they tried to do a RISC+tags design:
The Lisp Machine macroinstructions aren't that complicated; it is basically a stack-based machine. The most complicated part was the handling of function arguments (the FEF, Function Entry Frame), which have complicated semantics when it comes to optional arguments, keyword arguments, and whatnot.
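To give a flavor of why argument handling is the hairy part: a Lisp lambda list can mix required, optional, rest, and keyword parameters, with defaults and supplied-p flags, and the calling convention has to sort all of that out at call time. A minimal Common Lisp sketch (the names here are purely illustrative, not from any actual Lisp Machine source):

    ;; One lambda list can mix required, optional (with default and
    ;; supplied-p flag), rest, and keyword parameters, all of which
    ;; the calling convention must decode.
    (defun draw (shape &optional (color :black color-p)
                 &rest extras
                 &key (scale 1.0) label
                 &allow-other-keys)
      (list shape color color-p extras scale label))

    ;; Each call binds the parameters differently, so the callee (or
    ;; the microcode) has to parse the argument list at call time:
    (draw 'circle)                       ; everything defaulted
    (draw 'circle :red)                  ; optional supplied
    (draw 'circle :red :scale 2.0 :x 1)  ; keys also land in &rest

This is what the microcode had to implement for every function call, rather than a fixed-arity register convention.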
Those of us who used early versions of Windows without protected memory don't consider a "single address space" to be a feature.
Windows is programmed with machine-code executables written in an unsafe language, which makes all the difference. That's what necessitates hardware protection.
(A single address space still benefits from virtual memory, whose advantages go beyond protection.)
PaulHoule•8mo ago
An alternate OS is an appealing idea in many ways today but runs into the problem of "where do you get your userspace?" Make it POSIX compatible and you can run all kinds of C code like the GNU tools and other things you find in a Linux distribution. Make something radical and new and you have to write everything from scratch and so do all your users.
amszmidt•8mo ago
Common Lisp was definitely not designed with the intention of better performance; the intention was literally to have a _common_ Lisp language that multiple implementations could use and where porting programs would be much easier. One needs to remember that when Common Lisp was first drafted, there were dozens of Lisp dialects, all of them different.
The list of CPUs spans a very large range, some predating Common Lisp (CLtL1 is from 1984, CLtL2 from 1990, and ANSI Common Lisp from 1996) by several years (the VAX dates from 1977).
But other than that, the idea of a not-Unix system does fall into those two buckets: make it Unix, or rewrite everything. One can see this in Oberon, Smalltalk-78, Mezzano, etc.