Some lesson must surely be drawn from this about incremental adoption.
There's even more under the "Updates archive" expando in that post.
It was a pretty compelling prototype. But after I played with Polyglot Notebooks[1], I pretty much abandoned that experiment. There's a _lot_ of UI that needs to be written to build a notebook-like experience, and Polyglot Notebooks took care of that by just converting the command-line backend into a Jupyter kernel.
I've been writing more and more script-like experiments in those ever since. It just seems so much more natural to have a big-ol' doc full of notes that also happens to have play buttons to Do The Thing.
[1]: https://marketplace.visualstudio.com/items?itemName=ms-dotne...
https://commons.wikimedia.org/wiki/File:DEC_VT100_terminal.j...
I may disappoint you with the fact that IBM PC-compatible computers replaced that class of devices. All that remains are terminal emulators in some operating systems. There have been many attempts to expand the functionality of these emulators, but most features beyond the capabilities of the VT100 have not caught on (UTF-8 support being the exception). I do not believe that anything will change in the foreseeable future.
- Emacs (inherited from Lisp machines?). A VM powered by Lisp, which makes it easy to redefine functions, and commands are just annotated functions. For output we have the buffer, which can be displayed in windows, which are arranged in a tiling manner within a frame. And you can have several frames. Since a buffer in a window has the same grid-like basis as a terminal emulator, you can use CLIs as-is, including through a terminal emulator (vterm, eat, ansi-term, ...). Or you can eschew the terminal flow and use the REPL flow instead (shell-mode, eshell, ...). There's support for graphics, but not a full 2D context.
- Acme: Kinda similar to Emacs, but the whole thing is mostly about interactive text, meaning any text can be a command. It also has the tiling and stacking windows to display that text.
I would add Smalltalk to that, though it's more of an IDE than a full computing environment. But extending it into the latter would still take less effort than what the article describes.
I open a poorly aligned, pixelated PDF scan of a 100+ year old Latin textbook in Emacs and mark a start page and end page. Emacs Lisp code shells out to qpdf to create a new, smaller PDF from my page range in /tmp, then adds the resulting PDF to my LLM context. My code then calls gptel-request with a custom prompt, and I get an async elisp callback with the OCR'd PDF now in Emacs' org-mode format, complete with italics, bold, nicely formatted tables, and all the right macrons over the vowels, which I toss into a scratch buffer. Now that the chapter from my textbook is in a markup format, I can select a word and immediately pop up a Latin-to-English dictionary entry, or select a whole sentence to hand to an LLM for a full grammatical breakdown while I'm doing my homework exercises. This 1970s vintage text editor is also a futuristic language learning platform. It blows my mind.
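The page-extraction step in that workflow can be sketched outside Emacs too. A minimal sketch in Python, assuming qpdf is on PATH and using its `--pages` page-selection syntax; the function names and the /tmp destination are illustrative:

```python
import os
import subprocess
import tempfile

def qpdf_page_range_cmd(src, first, last, dest):
    """Build the qpdf invocation that copies pages first..last of src into dest."""
    # qpdf's page-selection syntax: qpdf IN --pages IN RANGE -- OUT
    return ["qpdf", src, "--pages", src, f"{first}-{last}", "--", dest]

def extract_pages(src, first, last):
    """Shell out to qpdf and return the path of the smaller PDF in /tmp."""
    dest = os.path.join(tempfile.gettempdir(), f"excerpt-{first}-{last}.pdf")
    subprocess.run(qpdf_page_range_cmd(src, first, last, dest), check=True)
    return dest
```

The elisp version is the same idea: build the argument list, hand it to a subprocess, and pass the resulting file along to the LLM call.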
one of the strange things to me about the terminal landscape is how little knowledge sharing there is compared to other domains i'm familiar with. iTerm has a bunch of things no one else has; kitty influenced wezterm but otherwise no one else seems to have valued reflection; there's a whole bunch of extensions to ANSI escapes but most of them are non-standard and mutually incompatible. it's weird. if i compare to something like build systems, there's a lot more cross-pollination of ideas there.
The last thing a command-line terminal needs is a Jupyter Notebook-like UI. It doesn't need to render HTML; it doesn't need rerun and undo/redo; and it definitely doesn't need structured RPC. Many of the mentioned features are already supported by various tooling, yet the author dismisses them because... bugs?
Yes, terminal emulators and shells have a lot of historical baggage that we may consider weird or clunky by today's standards. But many design decisions made 40 years ago are directly related to why some software has stood the test of time, and why we still use it today.
"Modernizing" this usually comes with very high maintenance or compatibility costs. So, let's say you want structured data exchange between programs ala PowerShell, Nushell, etc. Great, now you just need to build and maintain shims for every tool in existence, force your users to use your own custom tools that support these features, and ensure that everything interoperates smoothly. So now instead of creating an open standard that everyone can build within and around of, you've built a closed ecosystem that has to be maintained centrally. And yet the "archaic" unstructured data approach is what allows me to write a script with tools written decades ago interoperating seamlessly with tools written today, without either tool needing to directly support the other, or the shell and terminal needing to be aware of this. It all just works.
I'm not saying that this ecosystem couldn't be improved. But it needs broad community discussion, planning, and support, and not a brain dump from someone who feels inspired by Jupyter Notebooks.
Yes, this is the work. https://becca.ooo/blog/vertical-integration/
With Lisp REPLs, one types in the IDE/editor with full highlighting, completion, and code intelligence, then sends the code to the REPL process for evaluation. Clojure, for example, has great REPL tooling.
A variation of REPL is the REBL (Read-Eval-Browse Loop) concept, where instead of the output being simply printed as text, it is treated as values that can be visualized and browsed using graphical viewers.
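The core move in a REBL is dispatching on the value's shape rather than flattening everything to printed text. A toy sketch of that dispatch; the viewer names are hypothetical stand-ins for real graphical browsers:

```python
def choose_viewer(value):
    """Pick a browser for a value instead of reducing it to print() output.

    A real REBL would open interactive viewers (tables, trees, charts)
    and let you navigate into sub-values; here we just name the choice.
    """
    if isinstance(value, dict):
        return ("tree-viewer", sorted(value))      # browse keys, descend into values
    if isinstance(value, (list, tuple)):
        return ("table-viewer", len(value))        # page through rows
    return ("text-viewer", repr(value))            # scalar fallback
```

The point is that evaluation results stay live values you can keep exploring, rather than a one-shot string dumped to the terminal.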
Existing editors can already cover the runbooks use case pretty well. Those can be plain markdown files with key bindings to send code blocks to a shell process for evaluation. It works great with instructions in markdown READMEs.
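The editor-side machinery for that is small: find the fenced blocks, then bind a key to pipe the one at point into the shell. A sketch of the extraction half, assuming standard triple-backtick fences:

```python
import re

# Matches a fenced block: optional language tag, then the body up to the
# closing fence. DOTALL lets the body span multiple lines.
FENCE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)

def shell_blocks(markdown):
    """Return the bodies of sh/bash fenced blocks, in document order."""
    return [body for lang, body in FENCE.findall(markdown)
            if lang in ("sh", "bash", "shell")]
```

Each returned body is ready to hand to a shell subprocess; the key binding and process plumbing are whatever the editor already provides.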
The main feature missing from an editor-centric command-line workflow, as far as I can imagine, is history search. It could be interesting to see whether adding shell history as a completion source would be enough. Or perhaps a shell LSP server could provide history and other completions that would work across editors?
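A history completion source is mostly just prefix matching over the history file, newest-first and deduplicated. A minimal sketch, assuming a bash-style `~/.bash_history` with one command per line, newest entries last:

```python
import os

def match_history(lines, prefix, limit=10):
    """Rank matching history entries newest-first, deduplicated."""
    seen, out = set(), []
    for line in reversed(lines):              # newest entries are last in the file
        line = line.rstrip("\n")
        if line.startswith(prefix) and line not in seen:
            seen.add(line)
            out.append(line)
            if len(out) == limit:
                break
    return out

def history_completions(prefix, histfile="~/.bash_history", limit=10):
    """Read the shell history file (path is an assumption) and complete prefix."""
    try:
        with open(os.path.expanduser(histfile), encoding="utf-8",
                  errors="replace") as f:
            return match_history(list(f), prefix, limit)
    except FileNotFoundError:
        return []
```

An LSP server would wrap the same lookup and return the matches as completion items, which is what would make it portable across editors.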
Atuin runbooks (mentioned in the article) do this! Pretty much anywhere we allow users to start typing a shell command we feed shell history into the editor
- https://arcan-fe.com/ which introduces a new protocol for TUI applications, which leads to better interactions across the different layers (hard to describe! but the website has nice videos and explanations of what is made possible)
- Shelter, a shell with reproducible operations and git-like branches of the filesystem https://patrick.sirref.org/shelter/index.xml
wredcoll•1h ago
Terminal emulators display grids of characters using all sorts of horrifying protocols.
Web browsers display html generated by other programs.
unixplumber•33m ago
What sort of "horrifying protocols"? The entire VT220 state machine diagram can be printed on a single letter- or A4-sized sheet of paper. That's the complete "protocol" of that particular terminal (and of any emulator of it). Implementing the VT220 with a few small extensions (e.g., 256 colors or 24-bit colors) wouldn't be too onerous. I implemented such a parser myself in probably a few hundred lines of code, plus a bit more to do all of the rendering (drawing glyphs directly to a bitmapped display, with no libraries) and handling user input from a keyboard. You'd have a difficult time properly parsing and rendering a significant subset of HTML in less than a few _thousand_ lines of code.
Edit to add: terminal emulators often implement other terminals like VT420, but the VT220 is enough for the vast majority of terminal needs.
shirro•1h ago
The web solves problems that are almost impossible to properly solve with a terminal, particularly with rendering of more complicated languages and display and interaction with sophisticated visualisations.
Pushing the terminal further while maintaining compatibility, performance and avoiding a terminal war with incompatible protocols is going to be a struggle.
mrandish•22m ago
Don't get me wrong, I'd be quite interested in a vintage computing discussion on the evolution of the VT-100/220 etc. terminal protocols. There were some interesting things done into the 90s. That's actually what I clicked in expecting. Of course, those were all supplanted by either the X Window System (which I never got to use much) or eventually HTML/CSS. And if we're talking more broadly about structured page description languages, there's no shortage of alternatives, from NAPLPS to Display PostScript.