This is my favorite video by Newspeak's creator Gilad Bracha: https://youtu.be/BDwlEJGP3Mk?si=Z0ud1yRqIjVvT4oO
Were the users running, say, Windows, with the Smalltalk "OS" running on top of that, but in a sort of "kiosk mode" where its full OS-ness was suppressed and it was dedicated to showing a single interface?
Calling it "tree shaking" is web development term AFAIK.
When, as a dev, I use Smalltalk, it opens up what's effectively a virtual machine on my desktop. The whole Smalltalk GUI runs inside its own frame, none of the controls are native, etc. And it's a development environment - I have access to a class browser, a debugger, a REPL, and so on. I can drill down and read/modify the source code of everything. Which is great as a dev, but may be intimidating for an end user.
Is that what the end user experience is like as well? I think that's what OP is asking. I've never used a Smalltalk application as an end user to my knowledge, so I can't say myself.
What about this makes you think it's a rant? Is the author making an impassioned plea for people to use Smalltalk? Is he going off on a tirade about something?
I've experienced this a few different times: with Microsoft BASIC-80 (and GW-BASIC), with SBCL and SLIME, with LOGO, with GForth, with OpenFirmware, with MS-DOS DEBUG.COM, with Jupyter, and of course with Squeak. It really is nice.
It used to be the normal way of using computers; before memory protection, it was sort of the only way of using computers. There wasn't another memory space for the monitor to run in, and the monitor was what you used to do things like load programs and debug them. This approach continued as the default into many early systems like RT-11 and timesharing systems like TENEX: there might be one virtual machine (memory space) per user, but the virtual machine you typed system commands into was the same one that ran your application. TENEX offered the alternative of running DDT (the debugger) in a different memory space so bugs in the application couldn't corrupt it, and that was the approach taken in ITS as well, where DDT was your normal shell user interface rather than an enhanced alternative.
All this seems very weird from the Unix/VMS/Win32 perspective where obviously the shell is a different process from your text editor, and it's designed for launching black-box programs rather than inspecting their internal memory state, but evolutionarily it was sort of the natural progression from a computer operator single-stepping a computer (with no memory protection) through their program with a toggle switch as they attempted to figure out why it wasn't working.
One of the nicest things about this way of working is halt-and-continue. Current versions of Microsoft Visual Studio sometimes offer it; in MBASIC you could always halt and continue. ^C halted the program, at which point you could examine variables, make arbitrary changes to the program, GOTO a line number, or just CONT to continue where you'd interrupted it. Smalltalk, SLIME, and ITS all let you program in this way; if you like, you can refrain from defining each method (or function or subroutine) until the program tries to execute it, at which point it halts in the debugger, and you can write the code for the method and continue.
This is an extremely machine-efficient approach; you never waste cycles on restarting the program from the beginning unless you're going to debug program initialization. And in Smalltalk there isn't really a beginning at all, or rather, the beginning was something like 50 years ago.
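For anyone who hasn't used MBASIC or a Smalltalk debugger, here's a rough analogue of that halt / examine / change / continue loop using Python's pdb; the toy program is invented for illustration, and of course pdb still won't let you rewrite the surrounding code in place the way Smalltalk does.

    # Rough analogue of the MBASIC ^C workflow; the program itself is a
    # made-up example.
    import pdb

    def total(prices):
        subtotal = sum(prices)
        # Halt here: at the (Pdb) prompt you can examine variables
        # ("p subtotal"), reassign them ("!subtotal = 0"), jump to a
        # different line ("jump <lineno>"), or just continue ("c"),
        # much like examine / GOTO / CONT in MBASIC.
        pdb.set_trace()
        return subtotal * 1.08

    if __name__ == "__main__":
        print(total([9.99, 24.50, 3.25]))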
Myself, though, I feel that the hard part of programming is debugging, which requires the experimental method. And the hard part of the experimental method is reproducibility. So I'm much more enthusiastic about making my program's execution reproducible so that I can debug faster, which conflicts with "you're in the running environment". (As Rappin says, "Code could depend on the state of the image in ways that were hard to replicate in deploys." I experience this today in Jupyter. It's annoying to spend a bunch of time trying to track down a bug that doesn't exist when you restart from scratch; worse is when the program works fine until you restart it from scratch.) So I'm much more excited about things like Hypothesis (https://news.ycombinator.com/item?id=45818562) than I am about edit-and-continue.
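For anyone who hasn't seen Hypothesis, a minimal sketch of what it buys you for reproducibility (merge_sorted is an invented, deliberately buggy function, not anything real): it generates inputs, shrinks a failing case to a minimal one, and replays that same case on the next run, so the bug is still there when you restart from scratch.

    # Minimal Hypothesis sketch; merge_sorted is a made-up function with
    # a deliberate bug (it drops whatever remains in the longer input).
    from hypothesis import given, strategies as st

    def merge_sorted(xs, ys):
        out = []
        xs, ys = list(xs), list(ys)
        while xs and ys:
            out.append(xs.pop(0) if xs[0] <= ys[0] else ys.pop(0))
        return out  # bug: leftovers in xs or ys are silently discarded

    @given(st.lists(st.integers()), st.lists(st.integers()))
    def test_merge_keeps_every_element(xs, ys):
        merged = merge_sorted(sorted(xs), sorted(ys))
        assert sorted(merged) == sorted(xs + ys)
        # Hypothesis shrinks the failure to something like xs=[0], ys=[]
        # and replays it on every subsequent run until it's fixed.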
Paul Graham wrote somewhere (I can't find it now) about how in Viaweb's early days he would often fix a bug while still on the phone with the customer who was experiencing it, because he could just tweak the running CLisp process. But you can do the same thing in PHP or with CGI without sacrificing much reproducibility—your system's durable data lives in MariaDB or SQLite, which is much more inspectable and snapshottable than a soup of Smalltalk objects pointing to each other. (#CoddWasRight!) Especially since the broad adoption of the Rails model of building your database schema out of a sequence of "migrations".
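For concreteness, here's a tiny, hypothetical version of the migrations idea over SQLite (not Rails itself): schema changes are a numbered sequence that each copy of the database applies exactly once, so every environment converges on the same schema.

    # Hypothetical minimal migration runner over SQLite; table and column
    # names are invented for illustration.
    import sqlite3

    MIGRATIONS = [
        ("001_create_orders",
         "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)"),
        ("002_add_customer_email",
         "ALTER TABLE orders ADD COLUMN customer_email TEXT"),
    ]

    def migrate(conn):
        conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
        applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
        for name, sql in MIGRATIONS:
            if name not in applied:
                conn.execute(sql)       # apply the schema change once
                conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
        conn.commit()

    if __name__ == "__main__":
        migrate(sqlite3.connect("app.db"))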
PHP is similar, but not the same. You can't (or at least I can't) stop a request in progress and change its code, but you can rapidly change the code for the next request. "Make a change in the editor, hit reload in the browser" is a productive short loop, but "stop at a breakpoint, inspect the state, and change the code" is a more powerful one. Stopping at a breakpoint is challenging in systems that are communicating with other processes, though, and I've learned to live without it for the most part.
Database transactions bridge some of the gap: as long as your request handler code keeps failing, it will abort the transaction, leaving the database unchanged, so you can rerun the same request as many times as you like and get to the same point in execution, at least if your programming language is deterministic. By giving you a snapshot you can deterministically replay from, it lets you add log entries before the point where the problem occurred, which can be very useful.
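A sketch of that in Python with SQLite, with the handler and schema invented for illustration: while the handler keeps raising, the enclosing transaction rolls back, so you can replay the identical request and add log lines around the suspect code each time.

    # Sketch: a failing handler leaves the database untouched, so the same
    # request can be replayed deterministically while you add logging.
    # The accounts table and overdraft rule are invented examples.
    import sqlite3

    def handle_request(conn, account_id, amount):
        try:
            with conn:  # one transaction: commit on success, rollback on exception
                conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                             (amount, account_id))
                row = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                                   (account_id,)).fetchone()
                print("balance after debit:", row)  # log line added while debugging
                if row[0] < 0:
                    raise ValueError("overdraft")   # the "bug": aborts the transaction
        except ValueError:
            pass  # database unchanged; the same request can be replayed again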
Stopping at a breakpoint can be more productive, especially with edit-and-continue, but often it isn't. A breakpoint is a voltmeter, which you can use to see one value at every node in your circuit; logs are a digital storage oscilloscope with a spectrum analyzer, where you can analyze the history of millions or billions of values at a single node in your circuit.
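To make the analogy concrete, a toy sketch of the "oscilloscope" view (the computation is made up): record the value at one spot on every iteration, then analyze the whole history afterwards instead of eyeballing a single stopped frame.

    # Toy sketch: log one "node" of the computation on every iteration,
    # then analyze the whole history at once. The computation is invented.
    from collections import Counter

    history = []

    def simulate(n=1_000_000):
        x = 0
        for i in range(n):
            x = (x * 31 + i) % 1009   # stand-in for the value at one node
            history.append(x)

    simulate()
    print("samples recorded:", len(history))
    print("most common values:", Counter(history).most_common(3))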
I was thinking that supporting a Smalltalk application must be a nightmare because it is so malleable. Users can inspect and modify the entire system, no?
The image meant you basically got whatever state the developer ended up with, frozen in time, with no real indication of how they got there.
Think of today's modern systems and open source, with so many libraries that can easily be downloaded and incorporated into your system in a very reproducible way. Smalltalk folks derided this as a low-tech, lowest-common-denominator approach. But in fact it gave us reusable components from disparate vendors and sources.
The image concept was a huge strength of Smalltalk but, in my opinion, ultimately also one of the major things that held it back.
Java in particular surged right past Smalltalk despite many shortcomings compared to it, partly because of this. The other part, of course, was that Java was free at many levels; the rest of Smalltalk's problems, beyond the image, came down to the cost of both developer licenses ($$$$!) and runtime licenses (ugh!).