It’s weird because memory use for the same sorts of programs is not much worse than in other languages. In Rust memory use seems comparable to C++. In Go there’s a bit more overhead, but resident memory is still smaller than the binary. So all that extra binary isn’t even being loaded.
I get the sense devs just don’t put a lot of effort into stripping dead code and data since “storage is cheap”, but it shows next to C or even C++ programs that are a fraction of the size.
I see nothing about Rust’s safety or type system that should result in chonky binaries. All that gets turned into LLVM IR just like C or C++.
Go ships a runtime so that explains some, but not all, of its bloat.
Well, you can non-portably skip kernel32 and use ntdll directly, but then your program won't work in the next Windows version (same as on any platform, really: you can bake the topmost API layers into your code, but they won't match the layers underneath in the next version).
But system DLLs are DLLs, so they don't bloat your .exe either.
On some systems this is just not a supported configuration (like what you're talking about with Windows), and on some they go further and actually try to prevent you from doing so, even in assembly.
Linux software is binary portable between distros as long as the binary was compiled using a Glibc version that is either the same or older than the distros you are trying to target. The lack of "portability" is because of symbol versioning so that the library can expose different versions of the same symbol, exactly so that it can preserve backwards compatibility without breaking working programs.
And this is not unique to Glibc, other libraries do the same thing too.
The solution is to build your software against the oldest library versions you intend to support. Nowadays with Docker you can set that up in a matter of minutes (and automate it with a Dockerfile) - e.g. you can use, say, Ubuntu 22.04 to build your program and it'll work on most modern Linux distros (or at least glibc won't be the problem if it doesn't).
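A minimal sketch of that setup; the base image, package list, and build command are placeholders for whatever your project actually needs:

```dockerfile
# Hypothetical build container: an older base image means an older glibc,
# so the resulting binary runs on that release and anything newer.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y build-essential
WORKDIR /src
COPY . .
RUN make release   # placeholder for your real build command
```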
Another advantage is that, at least for Rust, you can do whole-program optimization. The entire program is run through the optimizer as a unit, enabling all kinds of optimizations that are otherwise impossible.
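For what it's worth, in Rust this is opt-in per build profile; a typical release profile that turns it on looks something like this (how aggressive to be is a matter of taste):

```toml
# Cargo.toml -- enable "fat" link-time optimization for release builds
[profile.release]
lto = "fat"        # optimize across the whole crate graph at link time
codegen-units = 1  # a single codegen unit so LLVM sees everything at once
```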
The only other kinds of systems that can optimize this way are higher level JIT runtimes like the JVM and CLR. These can treat all code in the VM as a unit and optimize across everything.
I get why this might lead to big intermediate files, but why do the final binaries get so big?
The main issue is that Rust binaries typically only link to libc, whereas C++ binaries link to everything under the sun, making the actual executable look tiny because that's not where most of the code lives.
C++ has had whole program optimization since forever. And you can use static linking if you want, the same as Rust.
When I tried to compare Rust programs to their C(++) equivalents by adding the sizes of linked libraries recursively (at least on Linux; that's impossible on Windows), I still found Rust programs to have a rather large footprint. Especially considering Rust still links to glibc, which is a significant chunk of any other program as well.
I believe many of Rust's statically linked libraries do more than their equivalents in other languages, so I think some more optimisation in stripping unused code paths could significantly reduce the size of some Rust applications.
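As a sketch of what that stripping looks like at the toolchain level (the flags are the GCC/Clang ones; the function names here are made up):

```c
/* Compile with:
 *   cc -Os -ffunction-sections -fdata-sections file.c -Wl,--gc-sections
 * Each function gets its own section, and at link time the linker
 * discards sections that nothing references, so never_called() is
 * dropped from the final binary even though it is compiled. */

int used(int x) { return x * 2; }

int never_called(int x) { return x + 1; }  /* removed by --gc-sections */
```

Rust's linker invocation can do the same section-based garbage collection; the limiting factor is how much of a library's code the remaining call graph still (transitively) reaches.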
They finally got good enough in the late 90s. I think it helped that computers finally had enough memory to run both the editor and the program itself.
I first used emacs on terminals that were hooked to Sun workstations and you were either going to use a serial terminal which was very slow, or the terminal emulator on the Sun which was a GUI program that had to do a lot of work to draw the characters into the bitmap. So that’s your reason TUIs went away.
Remember, the video hardware rendered text mode full-screen, and it had to be reconfigured to change to a different number of lines and columns. Only specific sizes were supported.
Most games at that time used mode 13h which was 320x200 with 8-bits per pixel which therefore indexed into a 256-colour palette (which could itself be redefined on the fly via reading from and writing to a couple of special registers - allowing for easy colour-cycling effects that were popular at that time). Here's a list of the modes: https://www.minuszerodegrees.net/video/bios_video_modes.htm
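For reference, mode 13h's framebuffer was a flat 64000-byte region at segment A000h, one byte per pixel indexing the palette, so plotting a pixel was just an offset computation. A sketch (the function names are made up, and on a modern OS you obviously can't poke A000:0000, so the framebuffer here is an ordinary buffer):

```c
#include <stdint.h>
#include <stddef.h>

/* Mode 13h: 320x200, 8 bits per pixel, linear framebuffer at A000:0000. */
enum { VGA_WIDTH = 320, VGA_HEIGHT = 200 };

/* Byte offset of pixel (x, y) in the framebuffer: one byte per pixel. */
size_t vga_offset(int x, int y) {
    return (size_t)y * VGA_WIDTH + (size_t)x;
}

/* Store a palette index; on real hardware this byte lands in video memory. */
void put_pixel(uint8_t *framebuffer, int x, int y, uint8_t colour_index) {
    framebuffer[vga_offset(x, y)] = colour_index;
}
```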
If you made a custom font you could also have more diversity in the number of rows too but this was rarely done.
Eventually different text modes became available with higher resolution video cards and monitors. 132 columns of text were common but there were others.
I used Turbo Pascal 2 as late as 1991, if not later, because that was the version we had. It was really fast on a 386 40 MHz or whatever exact type of PC we had then. A bit limiting perhaps that it only came with a library for CGA graphics, but on the other hand it made everything simpler and it was good for learning.
A few years ago I wanted to run my old Turbo Pascal games and decided to port to Free Pascal. Sadly Free Pascal turned out to only ship with the graphics library introduced in Turbo Pascal 4, but on the other hand I got a few hours of fun figuring out how to implement the Turbo Pascal 1-3 graphics API using inline assembler to draw CGA graphics, and then my games worked (not very fun games to be honest; more fun to implement that API).
I used to have a copy of a Turbo Pascal graphics book with a blue-purple Porsche (not pg's hah) on the cover that included code for a raytracer. It would take about a minute to render one line at 320x200x256 colors, depending on the number of scene objects and light sources.
Zed has remote editing support and is open source. Complaining about resource consumption is a bizarre proposition, considering what abstractions the terminal has to be forced into to behave something like a normal window.
Really, TUIs are not very good. I get it, I use the terminal all the time and I will edit files with vim in it, but it is a pointless exercise to try to turn the terminal into something it was never meant to be and have it emulate something which would be trivial in a normal OS window. To be honest it makes me cringe when people talk about how many tasks they perform in the terminal that would be much more easily done in a graphical environment with proper tools.
TUIs are bizarre legacy technology, full of dirty hacks to somewhat emulate features every other desktop has. Why would any developer use them, when superior alternatives, not based on this legacy technology, exist and are freely available?
User experience is inconsistent, with features varying wildly between terminals. It also makes customization difficult - e.g. in a TUI IDE you cannot have font settings. Shortcuts are terminal-dependent too; an IDE can only use the shortcuts the terminal isn't using itself.
Something as basic as color is extremely hard to do right in a terminal. Where in a normal GUI you can give any element a simple RGB color, you cannot replicate that across TUIs. The same goes for text styling: the terminal decides what italic font it wants to use, and the IDE cannot change it.
They are also very limited in graphical ability. Many features users expect in a GUI cannot be replicated, or only poorly. E.g. modern data science IDEs feature inline graphics, such as plots, which is (almost) impossible to replicate in a terminal. If you are using a profiler, you might want plots, preferably with live data. Why arbitrarily limit what an IDE can do to some character grid?
The terminal is just a very poor graphical abstraction. It arbitrarily limits what an IDE can do. Can you tell me why anybody would seriously try to use a terminal as an IDE? Terminal UIs are more complex because they need to handle the bizarre underlying terminal, and they are often less responsive, since they rely on the terminal being responsive. There might be some very marginal improvement in resource usage - do you think that is even relevant compared to the much better dev experience of a normal GUI?
There absolutely is no real advantage of TUIs. And generally I have found people obsessing over them to be mostly less tech literate and wanting to "show off" how cool their computer skills are. All serious developers I have ever known used graphical dev tools.
>> need an enormous array of hacks to emulate basic features
What are those hacks? As far as I can remember, TUIs ran faster on ancient hardware than anything else does on today's modern computers.
People know perfectly well that I am talking about the way a terminal emulator can be used to display 2D graphics, by utilizing specific escape sequences to draw arbitrary glyphs on the terminal grid.
>What are those hacks?
Everything is a hack. TUIs work by sending escape sequences, which the terminal emulator then interprets in some way, and if everything goes right you get 2D glyph-based graphics. Literally everything is a hack to turn something which functions like a character printer into an arbitrary 2D glyph grid. Actually look at how bad this whole thing is: look at the ANSI escape sequences you need to make any of this work - does that look like a sane graphics API to you? Obviously not.
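To make that concrete, here is roughly what "drawing" over a terminal looks like. These are standard ANSI/VT100 sequences (CUP for cursor positioning, SGR for styling); they're formatted into a buffer here rather than printed so the raw bytes are visible. The function names are my own:

```c
#include <stdio.h>

/* Move the cursor to (row, col), 1-based: the CUP ("cursor position")
 * sequence, ESC [ row ; col H. Returns the number of bytes written. */
int cursor_position(char *buf, size_t n, int row, int col) {
    return snprintf(buf, n, "\x1b[%d;%dH", row, col);
}

/* Set the foreground to an RGB colour via the 24-bit SGR extension --
 * which only works on terminals that happen to support it. */
int fg_truecolor(char *buf, size_t n, int r, int g, int b) {
    return snprintf(buf, n, "\x1b[38;2;%d;%d;%dm", r, g, b);
}
```

A TUI "frame" is thousands of such sequences interleaved with the text, all pushed through what is conceptually a serial line.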
>As far as I can remember, TUIs ran faster on ancient hardware than anything else does on today's modern computers.
This is just delusional. Modern 2D graphics are extremely capable and deliver better performance in every metric.
>> This is just delusional.
That is a bit uncalled for.
We are not talking about DOS, we are talking about "modern" TUIs you would use on a modern Linux/Windows/MacOS system.
I even made that explicit in my first paragraph.
How is graphical vim even different from TUI vim? At least Emacs can render images.
We don't need to go back to the 66MHz era, but it's embarrassing that programs running on a dozen CPU cores all executing at several gigahertz feel less responsive than software written half a century ago. Sure, compiling half a gigabyte of source code now finishes before the end of the year, but I rarely compile more than a hundred or so new lines at a time, and the process of kickstarting the compiler takes much longer than the actual compilation.
A terminal is no more than a rendering environment. With some workarounds (a custom renderer and input loop, most likely), you could probably compile Zed to run under FreeDOS in the same environment you use to run Turbo Pascal. I doubt you'd get the same responsiveness, though.
Today, Python, R, PHP, Java, and Lisp bring these features. But not C. Oh the irony.
At least that's the theory; in reality make has a lot of warts, and implementing a good solid makefile is an art. Don't even get me started on the horrors of automake - perhaps I just need to use it in one of my own projects, but as someone who primarily ports others' code, I hate it with a passion. It is so much easier when a project just sticks with a hand-crafted makefile.
For completeness: The other half of make is to implement the rest of the build process.
And yes, efficient separate and incremental compilation is major advantage of C. I do not understand why people criticize this. It works beautifully. I also think it is good that the language and build system are separate.
Borland C++ had the compiler as part of the IDE (there was also a separate command-line version, but it was also compiled as part of the IDE). This allowed the IDE to not spawn separate processes for each file nor even need to hit the disk - the compiler (which was already in RAM as part of the IDE's process) would read the source code from the editor's buffer (instead of a file, so again, no hitting the disk) and would also keep a bunch of other stuff in memory between builds instead of reading it.
This approach allows the compiler to reuse data not only between builds but also between files of the same build. Meanwhile make is just a program launcher: the program - the compiler - needs to run for each file, loading and parsing everything it needs for every single source file it compiles, thus building and destroying its entire universe for each file separately. There is no reuse here - even when you use precompiled headers to speed things up (which Borland C++ also supported, and which did speed things up even more on an already fast system), the compiler still needs to build and destroy that universe.
It is not a coincidence that one of the ways nowadays to speed up compilation of large codebases is unity builds[0] which essentially combine multiple C/C++ files (the files need to be aware of it to avoid one file "polluting" the contents of another) to allow multiple compilation units reuse/share the compilation state (such as common header files) with a single compiler instance. E.g. it is a core feature of FASTbuild[1] which combines distributed builds, caching and unity builds.
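The shape of a unity build is trivial: the unity file is just a list of #includes of the .c files themselves, so they all land in one translation unit. Sketched below with made-up module names, and with the "would-be" files inlined so the example is self-contained:

```c
/* unity.c -- in a real unity build this file would just be:
 *     #include "renderer.c"
 *     #include "physics.c"
 * pulling both into a single translation unit, so headers are parsed
 * once and compiler state is shared between them. */

/* --- what would be renderer.c --- */
int shade(int x) { return x * 3; }

/* --- what would be physics.c --- */
/* step() sees shade() directly, with no header needed -- which is also
 * why the files must avoid "polluting" each other with clashing
 * file-scope names. */
int step(int x) { return shade(x) + 1; }
```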
Of course Borland C++'s approach wasn't perfect, as it had to run with limited memory too (so it still had to hit the disk at some point - note though that the Pascal compilers could do everything in memory, including even the final linking; even the compiled program could remain in memory). Also, bugs in the compiler could linger - e.g. I remember having to restart Borland C++ Builder every few hours because the compiler was confused about something it had cached in memory between builds. Also, Free Pascal's text mode IDE (shown in the article) has the Free Pascal compiler as part of the IDE itself, but in the last release (I think) there is a memory leak and the IDE's memory use keeps increasing little by little every time you build - something that wouldn't matter with a separate program (and most people use FPC as a separate program via Lazarus these days, which is most likely why nobody noticed the leak).
Why? Yes, VSCode is slow. But Zed and many neovim GUIs are extremely responsive. Why would achieving that be impossible, or even that hard? You "just" need software which is fast enough to render the correct output the frame after the input. In an age where gaming is already extremely latency sensitive, why would having a text editor with similar latency performance be so hard?
Do you have any actual evidence that zed or neovide are suffering from latency problems? And why would putting a terminal in the middle help in any way in reducing that latency?
The problem is the entire software stack between the keyboard and the display. From USB polling to driver loops and GPU callbacks, the entire software stack has become incredibly asynchronous, making it trivial for computers to miss a frame boundary. Compared to DOS or similar environments, where applications basically took control over the entire CPU and whatever peripherals it knew to access, there are millions of small points where inefficiencies can creep in. Compare that to the hardware interrupts and basic processor I/O earlier generations of computers used, where entered keys were in a CPU buffer before the operating system even knew what was happening.
VSCode isn't even that slow, really. I don't find it to be any slower than Zed, for instance. Given the technology stack underneath VSCode, that's an impressive feat by the Microsoft programmers. But the kind of performance TUI programs of yore got for free just isn't available to user space applications anymore without digging into low-level input APIs and writing custom GPU shaders.
In small part, CRTs running at 70Hz or 85Hz back in the mid-80s, as well as the much smoother display output of CRTs versus even modern LCDs, made for a much better typing experience.
I think what TUIs get right is that they are optimized for use by the keyboard.
I don’t care if they are a pain for devs to write vs OS APIs, they have the best keyboard control so I use them. I despise the mouse due to RSI issues in the past.
>I think what TUIs get right is that they are optimized for use by the keyboard.
Neovim is just as much a GUI as a TUI. You can even use it as a backend for VSCode. Nothing about the keyboard controls have anything to do with this.
I use neovim like that and the selling point for me is that it's 1 less program that I have to install and learn with the added (crucial) benefit that it doesn't update on its own, changing UI and setting that I was used to.
It ships with your OS?
This exact thing remains true though: you are using the exact same neovim, but instead of it being wrapped inside a totally bizarre piece of legacy software, it is rendered inside a modern graphical frontend. It looks mostly the same, except it handles fonts better, it is independent of weird terminal quirks, and it is likely faster. There is no downside.
And again, your point about using TUI stuff because of the input method or whatever is just false. Neovide has the exact same input method, yet it is a complete GUI. Using the terminal makes no sense at all; it is the worst neovim experience there is.
Heck, on modern terminals there's even pretty great mouse integration if you want.
Unless your window full of text is GPU-accelerated, tear-free and composited, with raytraced syntax highlighting and AI-powered antialiasing, what is even the point?
TUIs are great if you structure them around keyboard input. There's more of a learning curve, but people develop a muscle memory for them that lets them fly through operations. I think the utility of this is sorely underestimated, and it makes me think of my poor mom, whose career came to an end as she struggled with the new mouse-driven, web-enabled customer service software that replaced the old mainframe stuff.
The late 80s/early 90s trend of building GUI-like TUIs was really more to get users on board with the standard conventions of GUIs at a time when they weren't yet ubiquitous (among PC users). Unifying the UI paradigms across traditional DOS and Windows apps, with standard mouse interactions, standard pull-down menus, and standard keyboard shortcuts was a good thing at the time. Today it's less useful. Things like Free Pascal have UIs like this mainly for nostalgia and consistency with the thing they're substituting for (Turbo Pascal).
Neovim and its frontends prove that if you remove terminal emulators, the applications become better. The terminal emulator is just in the way.
There is absolutely no reason to build that keyboard focused interface around the terminal. Just drop the terminal and keep the interface, just like neovim did.
For me the best textual interface I've ever used remains Magit in Emacs: https://magit.vc/ I wish more of Emacs was like it.
I actually use emacs as my git clients even when I'm using a different IDE for whatever reason.
Some other packages also use it. Most notably for my personal usage is the gptel package.
The real neat thing about Emacs' text interface is that it is just text that you can consistently manipulate and interact with. It is precisely the fact that I can isearch, use occur, write out a region to a file, diff two buffers, use find-file-at-point, etc. that makes it so interesting, to me at least.
A far more interesting example than Magit is the compile buffer (from M-x compile): this is just a regular text buffer with a specific major mode that highlights compiler errors so that you can follow them to the referenced files (thereby relegating line numbers to an implementation detail that you don't have to show the user at all times). But you can also save the buffer, with the output from whatever the command was, onto disk.

If you then re-open the buffer at some later point, it still looks just as highlighted as before (the point being not that it uses color for its own sake, but that it semantically highlights what the different parts of the buffer signify), and you can even press "g" - the conventional "revert" key - to run the compile job again with the same command as last time. This works because all the state is syntactically present in the file (from the file-local variable that indicates the major mode to the error messages that Emacs can recognize), and doesn't have to be stored outside of the file in in-memory data structures that are lost when you close Emacs or reboot your system. The same applies to grepping, btw, as M-x grep uses a major mode that inherits from compile-mode.
For people who can look at a list of key bindings once and have them memorized, maybe. Turns out most people are not like that, and appreciate an interface that accounts for that.
You also completely ignore that the menus are used to set arguments to be used by the command subsequently invoked, and that the enabled/disabled arguments and their values can be remembered for future invocations.
> The fact that Transient hooks into the MVC and breaks elementary navigation such as using isearch
Not true. (Try it.) This was true for very early versions; it hasn't been true for years.
> or switching around buffers
Since you earlier said that transient menus could be replaced with regular prefix keys, it seems appropriate to point out that transient menus share this "defect" with regular prefix keys, see https://github.com/magit/transient/issues/17#issuecomment-46.... (Except that in the case of transient you actually can enable such buffer switching; it's just strongly discouraged because you are going to shoot yourself in the foot if you do that, but if you really want to you can, see https://github.com/magit/transient/issues/114#issuecomment-8....)
> has irritated me ever since Magit adopted the new interface.
I usually do not respond to posts like this (anymore), but sometimes the urge is just too strong.
I have grown increasingly irritated by your behavior over the last few weeks. Your suggestion to add my cond-let* to Emacs had a list of things "you are doing wrong" attached. You followed that up on Mastodon with (paraphrasing) "I'm gonna stop using Magit because it's got a sick new dependency". Not satisfied with throwing out my unconventional syntax suggestion, you are now actively working on making cond-let* as bad as possible. And now you are recycling some old misconceptions about Transient, which can at best be described as half-truths.
To clarify, the "custom buffer" can list the bindings. Think of Ediff and the control buffer at the bottom of the frame.
I am not saying that transient offers nothing over regular prefix keys, there is a common design pattern that has some definitive and useful value. My objection is that the implementation is more complex than it should be and this complexity affects UX issues.
> Not true. (Try it.) This was true for very early versions; it hasn't been true for years.
Then I was mistaken about the implementation, but on master C-s breaks transient buffers for me, and I cannot use C-h k as usual to find out what a key press executes. These are the annoyances I constantly run into that break what I tried to describe in my previous comment.
> Except that in the case of transient you actually can enable such buffer switching, it's just strongly discouraged because you are going to shoot yourself in the foot if you do that
I did not know about this, so thank you for the link. I will probably have to take a closer look, but from a quick glance over the issue, I believe that the problem that you are describing indicates that the fear I mentioned above w.r.t. the complexity of transient might be true.
> I usually do not respond to posts like this (anymore), but sometimes the urge is just too strong.
I understand your irritation and don't want to deny its validity. We do not have to discuss this publicly in a subthread about DOS IDEs, but I am ready to chat any time. I just want you to know that I am not saying anything to personally insult you. Comments I make on cond-let and Magit sound the way they do because I am also genuinely irritated and concerned about developments in the Emacs package space. To be honest, it often doesn't occur to me that you would read my remarks, and I say this without any malicious or ulterior motives: in my eyes you are still a much more influential big-shot in the Emacs space, while I see myself as just a junior janitor whose opinions nobody cares about. But these self-image and articulation problems are mine, as are their consequences, so I will do better to try to remember that the internet is a public space where anyone can see anything.
The `C-h` override is pretty cool there too, e.g. if from magit-status I do `C-h -D` (because I'm wondering what "-D Simplify by decoration" means), then it drops me straight into Man git-log with point at
--simplify-by-decoration
Commits that are referred by some branch or tag are selected.
(Ooh, I learnt a new trick from writing a comment; who says social media is a waste of time.)
- Search for something using C-s
- Exit isearch by moving the point (e.g. C-n)
- Is the transient buffer still usable for you? In my case it becomes just a text buffer and all the shortcuts get mapped to self-insert-command.
(I'm the author of Magit and Transient. (Though not the original author of Magit.))
The transient menus certainly play an important role but I think other characteristics are equally important.
A few years ago I tried to provide an abstract overview of Magit's "interface concepts": https://emacsair.me/2017/09/01/the-magical-git-interface/. (If it sounds a bit like a sales pitch, that's because it is; I wrote it for the Kickstarter campaign.)
...and everyone else, including everyone who is also using a GUI on Linux - even if they use the GUI version of Emacs.
Any non-trivial use of emacs ends up involving a pile of customizations.
Also, another user said it has a tutorial when opened, which should teach the basics in “10 to 15 min”, but I have a feeling I would need 0 minutes to learn the basics of Turbo C++.
I get that there are diehard Emacs and vim fans, and honestly I’m happy for them. But at the end of the day, scientifically speaking, ease of use is not JUST down to familiarity. You can objectively measure this stuff, and some things are just harder to use than others, even with preloaded info.
Well, Turbo C++ (at least the one in the article) does use common conventions but those were conventions of 1992 :-P. So Copy is Ctrl+Ins, Paste is Shift+Ins, save is F2, open is F3, etc. Some stuff are similar to modern editing like Shift+motion to select, F1 for help, F10 to activate the menu bar, etc. And all shortcut keys are displayed on the menu bar commands so it is easy to learn them (some of the more intricate editor shortcut keys are not displayed in the menus, but are mentioned in the help you get if you press F1 with an editor window active).
That and lack of a decent visual debugger situation.
So I have this weird thing where I use emacs for interactive git rebasing, writing commit messages, editing text files and munging text... and then RustRover for everything else.
It's sorta like the saying, "I wish I was the person my dogs think I am"... "I wish emacs was actually the thing that I think it is" ?
Since it has no dependencies, I wouldn't be surprised if it gets merged into Emacs core at some point.
I think that after 25+ years of usage, I'm "used to it" by now.
How did the magit guy or people even come up with the data model? Always had the feeling that it went beyond the git data model. And git porcelain is just a pile of shards.
I moved to NeoVim many years ago and have been using NeoGit (a supposed Magit clone) the entire time. It's good but I'm missing the "mind blowing" part. I'd love to learn more though! What features are you using that you consider amazing?
If you want to do some really advanced stuff, sure it's a little arcane, but the vast majority of stuff that people use in git is easy enough. Branching and committing and merging never seemed that hard to me.
the big thing i am missing from it is a branch history: a record, for every commit, of which branch it once belonged to. no improved interface can fix that; it would have to be added to the core of git.
It's not all that different from a typical TUI interface.
Magit isn't great because of the interface. It's great because the alternative (plain git) has such a crappy interface. Contrast principle and all.
I never really liked any of the typical late-MS-DOS era TUI applications and have no nostalgia for those. I think a small TUI like a OS installer is fine, but I realised it is the command-line I like. Launching into a TUI is not much different from opening a GUI, and both break out of the flow of just typing commands on a prompt. I use DOSbox and FreeDOS all the time, but I almost never spend time in any of the TUI applications.
Despite that, I am currently working on a DOS application running in 40x25 CGA text mode. I guess technically it is a TUI, but at least it does not look much like a typical TUI.
After IDEs finally started being a common thing in UNIX systems, I left Emacs behind back to IDEs.
Still, I have almost a decade where Emacs variants and vi were the only option, ignoring stuff like joe, nano, and ed, which were even more limited.
Like in the GUI analogy, you can then choose to remember and use the displayed keyboard shortcuts for frequently used operations, but you don’t have to.
You can even see the menu atop the screen shot in the article, with the familiar names etc.
Really, compared to what I see here, the chief difficulty with emacs is the sheer volume of possible commands, and the heterogeneity of their names and patterns, which I believe is all a result of its development history. But the basics are just as you describe.
Emacs has Elisp commands first, then keyboard shortcuts for them, then maybe (not as a rule) menu items, and rarely dialog boxes. The Turbo Vision approach, from its design philosophy, has menus and dialogs first, then keyboard shortcuts for them.
One approach isn’t strictly better than the other, nor are they mutually exclusive. Ideally you’d always have both. My disagreement is with the “I think Emacs still does all of this” above. Emacs is substantially different in its emphasis, presentation, and its use of dialogs.
Of course, I must say there is a trade off here: you can design for novices or for advanced users, but very often not both.
Also OP apparently has no knowledge of the far better IDEs we had 30-40 years ago including but not limited to:
- Apple MPW, 1986. GUI editor where every window is (potentially) a Unix-like shell, running commands if you hit Enter (or cmd-Return) instead of Return. Also the shell scripting has commands for manipulating windows, running editing actions inside them etc. Kind of like elisp but with shell syntax. There's an integrated source code management system called Projector. If you type a command name, with or without arguments and switches, and then hit option-Return then it pops up a "Commando" window with a GUI with checkboxes and menus etc for all options for that command, with anything you'd already typed already filled out. It was easy to set up Commando for your own programs too.
- Apple Dylan, 1992-1995. Incredible Lisp/Smalltalk-like IDE for Apple's Dylan language
- THINK Pascal and C, 1986. The Pascal version was originally an interpreter, I think written for Apple, but then became a lightning-fast compiler, similar to Borland on CP/M and MS-DOS but better (and GUI). The C IDE later became a Symantec product.
- Metrowerks Codewarrior, 1993. Ex THINK/Symantec people starting a Mac IDE from scratch, incorporating first Metrowerks' M68000 compilers for the Amiga, then a new PowerPC back end. Great IDE, great compilers -- the first anywhere to compile Stepanov's STL with zero overhead -- and with a groundbreaking application framework called PowerPlant that heavily leaned on new C++ features. It was THE PowerPC development environment, especially after Symantec's buggy PoS version 6.
- Macintosh Allegro Common Lisp (later dropped the "Allegro"), 1987. A great Mac IDE. A great Lisp compiler and environment. Combined in one place. It was expensive but allowed amazing productivity in custom native Mac application development, far ahead of the Pascal / C / C++ environments. Absolutely perfect for consultants.
Really, it is absolutely incredible how slick and sophisticated a lot of these were, developed on 8 MHz to 33 or 40 MHz M68000s with from 2-4 MB RAM up to maybe 16-32 MB. (A lot of the Mac II line (and SE/30) theoretically supported 128 MB RAM, but no one could afford that much even once big enough SIMMs were available.)
It just came up with conventions few others adopted later when they reinvented the wheel.
Unfortunately not true. I've fired up emacs once or twice, and couldn't even figure out how to save a document because it didn't show me how to do that. It might be more documented than vi (but that bar is on the floor; vi has one of the most discovery-hostile user interfaces ever made), but it's not self-documented enough to just pick up and use with no instruction.
GUI is different because there is no tool bar, but in Emacs 31 `xterm-mouse-mode' will be enabled by default so you can use the menu bar like a TUI.
Problem is, most people start it for the first time by opening a text file rather than launching it on its own and being greeted by the tutorial. I guess that is because they are blindly following another tutorial instead of trying to understand what they are doing.
My opinion is that "self documentation", "Getting started" pages, and "tutorials" are a disease. People would actually get up to speed quicker by reading real manuals instead. They are just lured into thinking they will learn faster with tutorials because they get their first concrete results quicker, but at that stage the harsh reality is that they still usually don't know anything.
First time I used vi, I just had my operating system manual on my desk and I quickly learned to open man pages in a separate tty.
You can rail against its defaults, but do not make misleading claims.
Interview with an Emacs Enthusiast [Colorized]
If you for whatever reason absolutely need to run it in the terminal, then you'll have to learn that F10 toggles the menu bar, which still looks like a real menu bar that you can navigate with the arrows and enter: https://i.imgur.com/ETA2Qhs.png (or you can `M-x xterm-mouse-mode` to use the mouse in the terminal).
(That said, I'm sure the out of the box experience with Borland was quite a bit better back in the day, if you only needed Pascal or C++ support. And emacs really could do with a better default-theme; e.g. simply changing to the built-in modus-vivendi-tinted and it looks like https://i.imgur.com/lRAWzJK.png instead. Doesn't help with the tool-bar icons from 1999 or whatever though)
It's not bloated at all; it's not an embedded IDE like VSCode and the rest.
Rather, SciTE is just a GUI frontend for CLI binaries; it uses the programming environments you have installed on your OS.
Just like the old days on cmd or sh, when we ran compilers like javac or cpp directly.
SciTE just makes it easy. It's exactly the same as Borland Turbo: you add the path to your compiler binaries in SciTE, and done; you click compile or run.
Plus it's lightweight and portable: carry it on a USB stick and run it on any computer just by setting the paths to the compiler and executor binaries.
Anyway, does anyone remember Metrowerks CodeWarrior? I see it still exists, but I mean back from the 90s. I got a T-shirt from them at MacWorld '99 and still had it until not too long ago. High quality merch.
Are you really saying that you don’t see any utility in modern IDEs? Even back in 1999 I thought Visual Studio was a breath of fresh air let alone R# with all of the built in refactors in 2008.
But going further back, to the Turbo days in college and my first few years working, breakpoints, conditional breakpoints, watches etc were a godsend
> But going further back, to the Turbo days in college and my first few years working, breakpoints, conditional breakpoints, watches etc were a godsend
gdb does all of that.
GDB does guaranteed safe refactors over large code bases?
Why do you need a terminal for if you can do all that with flipping switches and looking at LEDs?
didn't it have a cute little re-distributable header file that had a bunch of useful containers in it? (linked lists, hash tables, etc)
i didn't work with it much but once worked with a mac guy who added it to our project. sometimes i'd have to build his stuff, i remember lots of yellow road construction icons!
I also use Sam and Acme from Plan 9 (technically from the excellent plan9port), but let’s be honest: those aren’t IDEs. They’re editors. Tools that let me think instead of wrestle.
There’s a lot we could (and probably should) learn from the old TUIs. For example, it’s perfectly acceptable, even heroic, to spawn a shell from the File menu and run something before returning. Seems people are afraid of losing style points with such grievous actions.
And the keybindings! So many of those classic TUIs adopted WordStar’s sacred keystrokes. They’re burned into my muscle memory so thoroughly that using EMACS feels like trying to type with oven mitts. For years, joe (with the blessed jstar alias) was my editor of choice.
Anyway! Time to boot the Dr. DOS VM, spin the wheel of Advent of Code, and be nostalgically inefficient on purpose.
I've been working in my free time on a TUI code editor in the same vein, eventually with make and lldb built in.
You were also expected to learn it; which meant you became "one with the machine" in a way similar to an organ player.
I remember watching Fry's Electronics employees fly through their TUI, so fast that they'd walk away while it was still loading screens, and eventually a printout would come out for the cage.
This is why I like to use the full screen mode of my editors and IDEs.
It surprises a lot of people who see my screen. Full screen features are everywhere but rarely used.
The agency was happy to have something new and modern but more important to them was that new employees could be trained on the system far faster. Even though there were a small number of long term employees, they had high turnover with the frontline CSRs, which made training a major issue for them.
But I’ve also written larger applications and, frankly, a ridiculous amount of documentation in Acme. That 9P protocol was my backstage pass: every window, every label, was accessible and programmable. I could, for example, hook into a save event and automatically format, lint, and compile ten or fifteen years before most IDEs figured out how to fake that kind of integration.
Sure, the system demands precision. It doesn't coddle. But for me, that was the feature, not the bug. The rigor sharpened my thinking. It taught me to be exact or be silent, forcing me to pause when I usually would not.
What are the WordStar bindings and what do you like about them?
I have a general interest in the history of how these patterns emerge and what the benefits of them are relative to each other.
I highly recommend reading this:
You have to understand: my first DOS machine was a Tandy 1000, acquired before I had a driver’s license. It was upgraded over the years and not retired until the grunge was well underway and I had already been married and divorced.
MS-DOS’s edit had WordStar keybindings; Ctrl-S to move back, Ctrl-E to move up, and so on. My dad "brought" home a copy of WordStar from work, and oh, the things that trio, WordStar, me, and a dot matrix printer conspired to create.
Borland carried those keybindings into Turbo Pascal, which I learned in college, having finally escaped the Fortran 77 gulag that was my high school’s TRS-80 Model III/IV lab. The investment into the Apple II lab didn't happen until AFTER they gave me my exit papers at a spring awards ceremony.
Why do I still prefer these tools?
Because they’re what I know. They don’t get in my way. We have history, a better and longer history than I have with my first wife. Those keybinds helped me write my first sorting algorithms, my first papers on circuit design, and the cover letters that got me my first jobs. They’re not just efficient. They’re familiar. They’re home.
You had full control of the cursor without the need for dedicated arrow keys or page up and down keys. It worked on a normal terminal keyboard. I first used it on an Apple ][ with a Z80 add-on that ran CP/M.
Programmers are trying to bring them back because of nostalgia, I guess?
I floated the idea of TUIs to our data engineering team and got very negative responses. (My nostalgia for undergrad turbo pascal TUI I guess lol)
TUIs are like shovels. A perfectly rational tool for doing a little bit of digging. Visual Studio 2022 is like Bagger 293.
I suppose a lot of it is also relative. When I started with TUIs decades ago, we didn't have too many options. Turbo Pascal 5.5 or 6.0 was extremely nice to use back in the day.
Compared to what was available at that time?
Both of these IDE’s gave me a huge productivity boost, and it used to be a no-brainer to give customers a realistic estimate for getting the UI done, then wiring up logic, and getting things ready to ship, etc.
I really miss these IDE’s, because of the integration factor. It was fun to wire up a database table and generate a form and immediately have something that could be used for data input and validation on the project - then spend a few weeks refining the UI to be more robust and cater to the application use case in focus.
These days, it feels like a lot more careful planning is needed to make sure the UI/API/backend realms can play properly together.
It would be nice to see some more progress on this level of tooling. It’s one thing to have an AI generate UI code - but I still think there is room for painting the code and using a UI to build a UI.
(The moment someone produces a properly WYSIWYG tool for JUCE, the glory days will begin again ..)
The need for tui argument is vague outside of muscle memory. Lots of beautiful poetry though.
That age of computing the author is romanticizing was expensive and corporate fed stupid (RIP Mr Bollenbach my hs cs teacher who gave us weekly insider tech reports).
I feel like tui folk need their stack/os/integrated environment...oh wait. Nevermind.
"Is FreeDos the Moderate Libertarian TempleOS?"
It took many decades for me to get that kind of flow back for mainstream programming languages on modern computers. And modern IDEs still have higher latency than they should.
Finally! Someone who still remembers the best software ever written. I looooved Sidekick and we used it throughout our small company. It's so long ago. I remember only parts of it now but it was such a useful tool.
IDEs do all that in one huge program because if you exited your editor to run the compiler or run your program, when you went back to the editor it was starting up cold again.
TSR programs like Sidekick avoided some of this but were a poor substitute for real multitasking.
In real mode, it's possible to have a TSR that swaps the entire contents of RAM from disk. As long as such a hypothetical TSR is always loaded into a fixed location, it's possible to save and restore the entire DOS, program, and/or EMS/XMS session.
You could throw together a CRUD app in under an hour interactively.
I often say DataFlex is Ruby on Rails for the VT100.
Moved to Dev-C++
Nowadays just any editor and using GCC directly
Eternally grateful for open source; Microsoft charged thousands for Visual C++ back then.
IDEs we had 30 years ago - https://news.ycombinator.com/item?id=38792446 - Dec 2023 (603 comments)
Despite VB being a little bit shitty, I think a big loss happened in the GUI software development world when web apps became the norm.
Not many remember this world, where you could easily create your UIs graphically by placing components, and where reactive interfaces were a given without effort.
I really miss the original Delphi before things went DotNet shitty...
I'd love to see some modern environment replicate that somehow. Let us pretend everything is simple and synchronous even if it very much isn't.
I'm more of a GUI guy who is content with VSCode. I'm intrigued to learn Emacs but don't have the time for it.
Back in the 90s, however, Borland TUI was indeed the pinnacle. I remember I played with Turbo C for a while but did not learn anything, but it was fun just to use the IDE.
The actual "edit.com" is a tiny stub that launches QBasic in edit mode, equivalent to "qbasic /edit".
Can build native apps for Windows, Linux, macOS, iOS, and Android.
However, the quality and reliability of the Delphi experience together with mobile support overcome the VCL/FMX trade off in my books.
I just googled and Wikipedia seems to confirm my memory: https://en.wikipedia.org/wiki/Borland_Kylix#Features
LCL (Lazarus' equivalent of VCL) took another approach where the base stuff are very Windows-y (due to the VCL heritage) but the backends have to essentially implement not only the backend-specific (Gtk, Qt, etc) widget functionality but also a small subset of the Windows API.
While this makes porting harder for the Lazarus developers, it makes it easier to port stuff between OSes and even port stuff from Delphi to Lazarus (some developers can also use both Delphi and Lazarus - e.g. AFAIK Total Commander uses Delphi for the 32bit builds and Lazarus for the 64bit builds).
1. The DOS screenshots in the article are in any way reflective of a designer’s input
2. That Windows was a visually pleasing design?
Personally I didn’t find Windows visually pleasing before Windows 95, but much of that can again be attributed to the PC video hardware limitations of the time.
[0] https://en.wikipedia.org/wiki/Color_Graphics_Adapter#Color_p...
DOS: FoxPro 2.x, dBASE III Plus, dBASE IV, Turbo Pascal 5.5/6.0 was probably the pinnacle for me
OS/2: Watcom VX-REXX - extremely powerful and productive
Windows: Delphi before .NET
I'd like to be able to develop in other languages the way I do when I dabble in Pharo, i.e. mostly windows and widgets and dialogs that abstract away boilerplate, file and directory management, and allows me to relatively easily extend the environment when I feel like it.
Instead I tend to complement the editor or IDE with a rather large set of Linux and Unix programs in terminal emulators. It's not nice or easy to teach, but nicer than trying to figure out whatever module protocol used by the editor. Perhaps I could have stayed with Emacs and been content, but when I arrived at this methodology Emacs was still single threaded and quite sluggish in comparison.
I'm hoping Glamorous Toolkit might be the thing that eventually grows into what I'd like to have.
I wrote recently a bit about my conclusions after ten years of developing it: https://github.com/alefore/weblog/blob/master/edge/README.md
Emacs actually is friendly! apropos and all of the describe commands make it /discoverable/.
Literate configs and tangling?! I finally feel the end game.
Yes, you probably should read a book at the same time on the side to give you a higher perspective on fundamentals. Sure, some other tools are simpler to get started.
If I could drop everything I'd make a simple emacs config for kids with like a turtles mode and maybe a sound sequencer, then teach them functional programming first. Hah
Back then was it common to have a split or interleaved view of high level and assembly at the same time?
I'm aware that you could do something like the following, but did IDEs help visualize in a unified UI?:
$ cc -S program.c
$ cat program.s # look at the assembly
$ vi program.c # edit the C code
A quick search shows that Borland Turbo C (1987) had in-line assembly:

  myfunc()
  {
      int i;
      int x;
      if (i > 0)
          asm mov x,4
      else
          i = 7;
  }
From the 1987 Borland Turbo C User's Guide [0]: "This construct is a valid C if statement. Note that no semicolon was needed after the mov x, 4 instruction. asm statements are the only statements in C which depend upon the occurrence of a newline. OK, so this is not in keeping with the rest of the C language, but this is the convention adopted by several UNIX-based compilers."

[0]: http://bitsavers.informatik.uni-stuttgart.de/pdf/borland/tur...
You could "dump" your OBJ file for assembly.
Later C compilers got some better inline assembler support but this was towards the 32-bit era already.
Also Borland had its own compiler, linker and such as separate binaries you could run with a Makefile but you really never had to, as why would you when you can do that in the IDE in a single keypress.
I'd like to see a companion article about the IDEs from the 80's.
I remember 64FORTH had a multi-pane IDE, but I could only find this low-res picture of it: https://www.c64-wiki.de/images/thumb/2/24/Forth64-audiogenic...
There were others, though, including one I remember that was all text at the bottom half of the screen, and then graphic output at the top.
And, of course, the most famous one of all: the Atari 2600 BASIC Programming IDE which fit in just 4K.
Today's ragebait bloggers like to say how awful it was, but if you're patient and thoughtful, the way people were when it came out, you can do quite a lot.
An entire Pong game in six lines, from Wikipedia:
1 Hor2←2+Key
2 IfVer1>90ThenVer1←88
3 IfHitThenVer1←9
4 Ver1←Ver1+IfVer1Mod2Then8Else92
5 Hor1←Hor1+7
6 Goto1
The 2600 graphics were centered around 5 sprites dedicated to two players, two missiles, and a ball. Completely understandable; they were trying to make a toy computer affordable enough for everyone in 1975, but their design process was basically "what is the bare minimum video hardware required to make the games Combat and Pong?" Every single game found on the 2600 that is not a Combat or Pong clone is probably a masterwork example of making the hardware do something it was not intended for.
https://en.wikipedia.org/wiki/Television_Interface_Adaptor
Footnote: yes I know it was released in 1977, but it was designed in 1975.
The biggest loss in TUIs is the latest wave of asynchronous frameworks, which bring the joy of dropped keypresses to the terminal.
In any TUI released before the year 2000, if you press a key when the system wasn't ready, the key would just wait until the system was ready. Many TUIs today still do this, but increasingly frequently (with the modern "web-inspired" TUI frameworks), the system will be ready to take your keypress, and discard it because the async dialog box hasn't registered its event listener yet.
Other than that antipattern, TUIs are doing great these days. As for terminal IDEs, Neovim has never been more featureful, with LSPs and other plugins giving all the features this article discusses. I guess it isn't a mouse-driven TUI, so the author wouldn't be interested, but still.
so even 41 years seems to be in the scope.
I was expecting
- early projects that ended in Visual Studio 1.0 or NetBeans soon after, (2 to 9 years too early for them)
not
- "vim (1991) was not out yet" (not-a-quote, but my feeling upon looking at ncurses instead of floating windows)
A few years before, it was very different - VisualAge and Rational Application Developer were the big names in the early 90s in "professional" IDEs. Interface Builder for university spin-outs or funky startups (and SunWorks / Forte Studio for the less-funky ones). CodeWarrior on the Mac (perhaps with THINK! hanging on too). I think Softbench was popular for scientific software, but I never actually saw it myself.
And then just a few years later, the rise of Java turned things upside down again and we got JBuilder, Visual Cafe, and NetBeans as the beginning of yet another new wave. The Visual Studio suite really began to take off around then, too.
In short, the 90s were a time of huge change and the author seems to have missed most of it!
So disappointing to expect a GUI Smalltalk System Browser and seeing DOS TUIs.
And then delight recalling Turbo C/Pascal and MS C 4.0 with CodeView that even worked in 43 or 50 line modes.
Having said that, some old TUIs were clearer and faster even on weaker hardware. This should be a lesson for us today. Color transitions and animated icons flying over the desktop are NOT what I need, but speed, clarity, and discoverability of more rarely used functionality are vital.
"INTRODUCTION TO THE SMALLTALK/V 286 ENVIRONMENT"
http://stephane.ducasse.free.fr/FreeBooks/SmalltalkVTutorial...
So much better than the TUI Smalltalk/V
> As for terminal IDEs
The GNU/Linux terminal is the killer app. Multiple terminals in a tiling window manager is peak productivity for me. (Browser in a separate virtual workspace.)
And modern scaling for a big display is unbeatable for developer ergonomics.
I think you are wrong.
https://en.wikipedia.org/wiki/Muscle_memory
Being extremely good at something increases the gap between said something and everything else. That doesn't mean being extremely good at the first thing is "over-specialization to detriment". If someone is equally mediocre at everything, they have no such gap, so no "over-specialization to detriment"; but is that really worth desiring? I think not.
You're also potentially over-specializing at one level while at the same time neglecting other levels.
Musicians run into this problem when, for example, they rely solely on muscle memory to make it through a performance. Throw enough stress and complicated music at them and they quickly buckle.
Meanwhile, a more seasoned performer remembers the exact fingers they used when drilling the measure after their mistake, what pitch is in the bass, what chord they are playing, what inversion that chord is in, the context of that chord in the greater harmonic progression, what section of the piece that harmonic progression is in, and so forth.
A friend of mine was able to improvise a different chord progression after a small mistake. He could do this because he knew where he was in the piece/section/chord progression and where he needed to go in the next measure.
In short, I'm fairly certain OP is talking about these levels of comprehension in computer programming. It's fine if someone is immensely comfortable in one IDE and grumpy in another. But it's not so fine if changing a shortcut reveals that they don't understand what a header file is.
For example, Notepad versus the Turbo C++ described in the article.
We used text terminals because that is what we could afford, and I gladly only start a terminal window when I have to.
It is only cross platform as long as it pretends to be a VT100.
Costco Canada vision shops still use a terminal connected to an AS/400 machine as I snooped around last month.
As the network ebbed and flowed, email too-often became unreadable without a GUI, and what was once a good time of learning things on usenet became browsing web forums instead. It sucked. (It still sucks.)
In the greater world, I saw it happen first at auto parts stores.
One day, the person behind the counter would key in make/model/year/engine and requested part in a blur of familiar keystrokes on a dumb terminal. It was very, very fast for someone who was skilled -- and still pretty quick for those who hadn't yet gotten the rhythm of it.
But then, seemingly the next day: The terminals were replaced by PCs with a web browser and a mouse. Rather than a predictable (repeatable!) series of keystrokes to enter to get things done, it was all tedious pointing, clicking, and scrolling.
It was slow. (And it's still slow today.)
I have observed countless times how many people fill in a field, then move their hand to the mouse to move the focus to the next field or button, then move their hand back to the keyboard, instead of just pressing tab to move the focus. It's painful to watch. Knowing just a few keyboard shortcuts makes filling in forms so much faster.
Things are getting worse, unfortunately. Modern user interfaces, especially in web interfaces, are made by people who have no idea about those efficient ways of using them, and are starting to make it more and more difficult to use any other method than keyboard -> mouse -> keyboard -> mouse -> ... . Tab and shift-tab often don't work, or don't work right. You can't expand comboboxes with F4, only the mouse. You can't type dates, but have to painstakingly select all the parts in inefficient pickers. You can't toggle options with the spacebar. You can't commit with enter or cancel with esc.
https://en.wikipedia.org/wiki/IBM_i#Technology_Independent_M...
We probably still have a couple of PGM objects kicking around on our modern POWER hardware that were originally compiled on an old AS/400 system, but they run as native 64-bit POWER code like everything else on the machine.
The IBM midrange line gets a lot of undue disgust these days, it's not sexy by any means, sure, but just like anything running on modern day Z/OS you know that anything you write for it is going to continue to run decades down the line. Well, as long as you limit the amount of stuff you have running on 'modern' languages; because Java, Node, Python, Ruby, etc. are all going to need upgrades while anything written in 'native' languages (RPG, COBOL, C/C++, CL) compiles right down to MI and will keep working forever without changes.
The Machine Interface dates back to AS/400's predecessor, the System/38.
It's still updated by IBM and runs on POWER. It's just called "i" now.
I believe the naming went something like AS/400->iSeries->System i->System i5->i
Weird question, but I accidentally ended up with one of those in my hands that ran in probably non-blockbuster place from 1996 to 2000 :)
I remember.
This, unfortunately, killed people: Therac-25. Granted, the underlying cause was a race condition, but the trigger was the flying fingers of experts typing ahead, unknowingly having been trained to rely on the hardware interlock present in older models.
>As for terminal IDEs, Neovim has never been more featureful, with LSPs and other plugins giving all the features this article discusses. I guess it isn't a mouse-driven TUI, so the author wouldn't be interested, but still.
Neovim instantly becomes a better piece of software if you use a GUI frontend. The terminal imposes arbitrary limitations and inconsistencies. Why anyone would use an interface paradigm which cannot do color in a sane way is beyond me. Just use the GUI; it is better in every way.
Company I work for has a great browser based IDE but that’s something I would never setup and maintain for a personal project.
There's nothing wrong with graphical IDEs... or text user interfaces. Great developers use both. Low effort troll is low effort.
People on better systems like the Amiga and Atari were already past that.
Also, people should stop playing 2D games. It is pretty silly to base your entertainment on ancient technology when modern GPUs can render super-complex 3D scenes.
And don't make me start on people who still buy vinyl...
It's still quite easy to end up with a terminal you need to reset your way out of (e.g. with a misguided cat), not to mention annoying term mismatches when using tmux/screen over SSH, across OSes, or (and this is self inflicted) in containers.
For UI there exists a straight up superior alternative, which keeps all of the benefits of the old solution. Neovim is just straight up better when used outside of a terminal emulator.
What is true for TUI vs. GUI is not true for CLI vs. GUI (or TUI for that matter); pretending the argument I made applies to the latter is just dishonest. You cannot adequately replace CLI interfaces with GUI or TUI interfaces, but you can totally replace TUI interfaces with GUIs. See Neovim as an example: it is superior software when used outside of the terminal.
It is a total joke to call something which depends on how the underlying terminal emulator interprets specific ANSI escape sequences "multi platform".
TEXT EDITOR.
I hate it when sites do this. I don't want my window decorations to be restyled in the name of aesthetics. I need them to be usable.
[1]: https://github.com/microsoft/edit [2]: https://news.ycombinator.com/item?id=44031529
It's also a complete reimplementation, it shares only the name with original edit.com.
And yeah, it is a reimplementation. But it is a TUI and very minimal. Keeping it minimal, with no dependencies and no software bloat, seems to be one of the guidelines. So it very much supports the article's point.
But yeah, you are right on both counts.
This right here was the key to super flow state. Lightning fast help (F1), very terse and straightforward manuals. I have tried to replicate this with things like Dash (https://kapeli.com/dash), to some degree of success.
The closest thing I had to this in windows was probably Visual Studio 6 before the MSDN added everything that wasn't C/C++ to the help docs. After that, the docs got much harder to use due to their not being single purpose anymore. The IDE was a little more complex, but you at least felt like you got something for it. After that, too many languages, too many features, overall not great experience.
The keybindings were so simple and fast; the Borland IDE on DOS was a very nice tool. Yes, easier than vim and emacs. The reason is the mouse in the TUI: things like complex selection/blocks/text manipulation aren't keybindings in the same way, so the key combos are more "programming meta" (build, debug, etc.) rather than "text meta".
EDIT: also, I feel like this needs to be mentioned: compilers were not free (as in beer) at that time!
In order to develop on my own machine as a teen, I had to sneakily copy the floppy disks the teacher used to install this on the school computers so I could have more than 1h using it at home! COPY THAT FLOPPY
These early IDEs were fantastic at their job and worked so well given the constraints of the DOS environment of the time. It’s a shame that Borland the company eventually faded to black in 2015, but that’s how these things go. I wonder where all the geniuses behind the Borland IDEs ended up.
I would have appreciated a breakdown of what specific individual features those crummy old ides are offering.
I suspect the one the author wants most is a time machine to go be 12yo again, but software can’t do that. Yet.
2. there is no limit on resources.
The consequences can be left to the reader (re: the article in this thread), but these two postulates are the source of all ills in commodity and open-source software today.
I was wondering, is there a way to get VS Code to look like this? Maybe Neovim?
Borland C++ 3.1 & Application Frameworks for DOS and Windows 3.1 came with an entire library of paper books. It was probably the heaviest and largest boxed retail software package ever because 4.0 skimped on paper books and didn't include real mode versions of the IDE for DOS.
The Pascal almost equivalent was Borland Pascal 7.0 with Objects.
It was possible to link assembly, C++, and Pascal in the same executable assuming the memory model and function calling convention were set correctly.
Ba dum tss.
I don't remember Borland buying it
Do you think there’s anything like that out there today? The only ones I can think of that come close are the nano and micro editors, but I wouldn’t really call them IDEs.
As for modern IDEs, Intellij has been orders of magnitude better than any competition for more than 25 years (I think). I have stayed away from Microsoft products for a very long time, so I can't comment on VSCode and its predecessors. The main competition I remember was Eclipse, which I always found to be sluggish, unintuitive, and buggy. The fact that it wasn't even mentioned in this article is telling.
JetBrains, the company that created Intellij (and then PyCharm, CLion and many others) is one of those extremely rare companies that defined a mission, has stuck to it, and excelled at it for many years, and has not strayed from the path, or compromised, or sold out. It is so impressive to me that they maintain this high level of excellence as they support a vast and ever-growing collection of languages, coding standards and styles, and tools.
I chose it because I don't have access to Neovim on my cloud desktop, and IdeaVim is a superior solution to any vim-like plugin for VSCode. It is struggling with 4 cores and 16GB of RAM with only a few projects open at a time. Some of that is due to Windows 11 and the amount of security malware installed by my company, but still, VSCode doesn't seem to make it suffer that much.
(“real” deployments would use systemd-networkd and config files but for simple things…who cares)
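For reference, a "real" systemd-networkd deployment really is just a small config file. A minimal DHCP sketch (the interface name is an assumption; match it to your own hardware):

```ini
# /etc/systemd/network/10-wired.network -- minimal wired DHCP config
[Match]
Name=eth0

[Network]
DHCP=yes
```

Drop it in, `systemctl restart systemd-networkd`, and that's the whole setup.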
No matter how good computers and networking get, text-based tools always seem to win for remote administration. I’ve tried forwarding X servers, mounting remote file systems with sshfs, vscode’s remote features, VNC, and RDP, but I always seem to revert to just tmux and TUI tools.
To give a random example, I use Neovim with SuperCollider, a music programming language. This involves launching a runtime and sending text to the runtime, which in turn sends commands to a server. The server generates a log, which is piped back into a Neovim buffer. There are all sorts of quirks to getting this functional, and it's a somewhat different workflow from any traditional programming model.
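The shape of that pipeline (editor sends text to a runtime, the runtime's log streams back into a buffer) can be sketched with a plain subprocess. I don't have sclang to hand, so a Python interpreter stands in for the runtime here; the names are illustrative:

```python
# Generic sketch of the "send code to a runtime, pipe its log back" loop.
# A Python interpreter stands in for the SuperCollider runtime (sclang);
# an editor keybinding would write to proc.stdin and stream proc.stdout
# into a scratch buffer.
import subprocess
import sys

# Launch the "runtime" with pipes attached.
proc = subprocess.Popen(
    [sys.executable, "-u"],        # -u: unbuffered, so output arrives promptly
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL,
    text=True,
)

# "Send text to the runtime" -- what the editor keybinding would do.
out, _ = proc.communicate('print("log line from the runtime")\n')

# "Log piped back into a buffer."
buffer_lines = out.splitlines()
print(buffer_lines)
```

A real setup keeps the process alive and reads stdout asynchronously, but the moving parts are the same.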
I'm not sure there's an easy solution to keeping things simple while also supporting the unimaginable variety of personalities, skill-levels, environments, and tasks people get up to. I do, however, think it's worth continued imagination and effort.
It seems a little humorous now that professionals were stuck for several years doing their day-to-day word processing, spreadsheets and databases in text mode, where getting different sized text or different fonts was almost impossible. This also wasn't just in the 80s; it was still somewhat true in the early 90s, not very long before the beginning of the internet as we know it.
Still, I wonder if things are really any better now, as we're all using software interfaces built on something else that's not really appropriate for the job. HTML.
> have we advanced much in 30 years?
IDEs have changed a lot, especially with AI-assisted ones. The author kind of acknowledges it, but imho it's a paradigm shift, not just "a major difference".
> The only major difference that we are starting to see might be AI-assisted coding, but this is a feature mostly provided by a remote service, not even by the installed code!
Then I realized it’s a post from 2023. IDEs have changed a lot since then. Autocompletion has evolved from merely suggesting function names to completing 20 lines of code in the blink of an eye. It's great for productivity, but it also makes you lazy, to the point where you can't live without it.
In my opinion, software engineers should “disable the autopilot” from time to time, just like airline pilots must occasionally land without it. Otherwise, you end up becoming too dependent on it.
As for autocompletion, I'm not sure about every tool, but CLion and the other IDEs I have from JetBrains are genius. Yes, they can autocomplete multiple lines of code with a single keystroke, and no, I don't really want to write it myself, as it's mostly boilerplate code I've written many times and autocompletion just predicts it.
Maybe because I have 40+ years of programming under my belt, starting with machine code and every type of software one can imagine.
I like these programs, mostly for that sweet low latency which is just gone today, but I wouldn't romanticize them as dev experiences. To experience it you can download Free Pascal today and use its IDE, which is just like Turbo Pascal (maybe even based on it?). It's pretty clunky compared to what you get today, although the debugger works, which is more than you can say for the majority of languages today.
When I saw Visual Studio years later, or Visual Basic, these IDEs were doing so much more, but I'd lose the ability to fully control the bare text. These MS tools wouldn’t allow me to write my code in my favourite text editor and version it. So they were nice and a curse at the same time.
While it was truly amazing that Borland managed to stuff a full text editor into a TSR under MS-DOS, and every new version of Turbo Pascal was faster and had more features, it all culminated somewhere around Delphi for me, and Visual Basic 6 for almost everyone else.
Then the world ended... Anders Hejlsberg was lost to Microsoft, and everyone went collectively crazy in at least two orthogonal ways.
First there was the obsession with C++ as "higher level" than Pascal and the view that it was for "adults", which was delusional. C++ generated a f*ckton more boilerplate and was brittle for the same functionality, at least when generating a GUI program.
Then there was Microsoft's obsession with .NET, which they never recovered from. They crammed all the bloat of an interpreter into everything imaginable, even the operating system. You were always having to get the latest .NET libraries to make things work. They destroyed Visual Basic over this, and it never recovered.
It's not like these are particularly easier to write, either.
But since there's no remote GUI option, much less a portable remote GUI option, particularly one that's not just a video of an entire desktop, we're stuck with these.
Who wants to fire up an entire desktop just to open a simple utility app?
Obviously the Web satisfies much of the demand for this, but clearly not all.
Remote X is, essentially, dead. It's obviously "really hard", since "no one does that any more". Or, folks just don't miss having rootless windows peering into their remote server enclaves.
It's just too bad, full circle, here we are again. "Progress."
RDP works great and GUI tooling for Windows and macOS is quite comparable to using VB, Delphi, Smalltalk like experiences.
Requiring me to have a cloud account to format my machine (mac) and requiring me to have a cloud account on only pre-authorized hardware (Windows 11), only to open up Notepad and see they slapped AI inside of it; now that is quite comparable to me slapping Linux on it.
Just sayin'
ssh -X and waypipe both work perfectly fine.
And to your point about portability, if you're stuck on an OS other than Linux, VNC/RDP aren't pretty but they'll get the job done.
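Concretely, both are one-liners (host and application names here are just examples):

```shell
# X11 forwarding: run a single remote GUI app in a local window
ssh -X user@devbox xterm

# Wayland equivalent via waypipe (must be installed on both ends)
waypipe ssh user@devbox foot
```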
If you can get them working. Sorry, you can't connect if a user is logged into this computer. Whoops, an RDP session is active, so I'll show you this black screen after you type your username and password until the user disconnects (why not kick the user out?). VNC is an even bigger pain when you need to boot up the server from SSH and sometimes restart it when it gets stuck.
Meanwhile on Windows you can just install TightVNC and it works. No screwing with screens. On macOS you can just tick Remote Screen Sharing, put in your VNC password, and it just works. Even Android can do that with droidVNC-NG. But Linux is such a PITA to get VNC or RDP working.
And RDP also assumes that you are running X11 and not Wayland.
Right, that's why I said RDP isn't pretty if you aren't on Linux. Windows insists on creating a separate desktop for each session. IIRC it has something to do with licensing, they don't like simultaneous users using one Windows license.
> While on Windows you can just install TightVNC and it works.
If you're resorting to installing third-party apps, you can install TightVNC on Linux too, and it just works. Though I found krfb performs better on my network, ymmv.
> And RDP also assumes that you are running X11 and not Wayland.
RDP is just a protocol that describes the bytes going over the network. Why would it care about your display server? There are VNC and RDP servers for both X11 and Wayland. Just install one that's supported by your system.
Though if you're on Linux, you don't have to deal with the VNC/RDP jank at all. Just use ssh -X or waypipe and it's way snappier.
This is the biggest thing I miss in modern GUIs, especially Windows, macOS, or mobile.
Tabbing across every single possible input, with Alt or Ctrl keys for quick access, is insanely powerful compared to "click here, scroll this, click click".
But this brings up something I think about every now and again: resource bloat.
When Turbo Pascal was current, it'd be common for PCs to have 1MB of RAM. In fact, with the DOS segmented memory models you had to do weird stuff to use more memory than that (IIRC the models had names like "large" and "huge").
Obviously running in a graphical environment is going to use more memory but we had pretty capable Windows environments with Win 95/98/SE/NT3.5/NT4/XP with not much RAM (256MB to 1GB depending on year).
Now with modern windowing systems we have capabilities that didn't exist in early windowing OSs like scalable rather than bitmapped fonts, UI scaling, etc. But we've had those things for 20+ years now and the resource requirements still keep going up.
Yes we have Javascript UIs running in a browser now and that will never be as cheap as native apps but we've also had those for ~20 years now (GMail is ~20 years old).
In the 90s we had graphical X Windows systems on Linux with 4-16MB of RAM. I know. I ran them.
Why do resource requirements keep going up? Is there demand for a low resource OS that could be user-facing? I know hardware is particularly cheap with Raspberry Pis and similar. We have ARM CPUs for a few dollars now that would've cost millions in the 1990s. So maybe that's why there's no demand.
But this is really something I expected to top out at some point and it just hasn't.
You could sit someone down from 1991 (Visual Basic 1.0) in front of Visual Studio 2026 and they would immediately know where everything is. (it still has BASIC in there too)
- A RAD TUI like Microsoft Visual Basic 1.0 for MS-DOS
- DolDoc from TempleOS, with diagrams and sprites
- Clipper, DBase, FoxPro, etc
I understand that software requires people maintaining it, but my point is more that I still don't understand the why.
We have some functionality, a lot of which could be re-used in editors and IDEs. But people rarely share stuff. They like to re-implement things. Again and again and again. This is not logical to me.
I'd like to have one editor that does EVERYTHING, but in a modular way so people decide what that editor can do. People could then just maintain one cohesive, feature-rich code base rather than each one duplicating what is already available in another IDE/editor.
These days Far Manager (via far2l) or MC kind of scratch the itch for quick TUI edits.
I think such an IDE for Python would really be helpful for beginners. Not text-based, but more like Visual Basic. But everything integrated, everything easily discoverable (that's very important!). Maybe also with a GUI builder as in VB. And a built-in debugger. I think for the code editor, as long as it has some syntax highlighting and tab auto-complete, that would already be enough. But some easy code navigation would also be good. E.g. when you place some button on your window and double-click that button in the GUI editor, you get to the handler code of that button.
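As a sketch of that flow — double-clicking a button in the designer drops you into its handler — the code such an IDE might generate could be plain stdlib Tkinter. All names here are illustrative:

```python
import tkinter as tk

def on_button_click():
    # The "handler code" a designer double-click would jump you to.
    print("button clicked")

def build_window():
    # What a VB-style GUI builder would generate behind the scenes.
    root = tk.Tk()
    root.title("My First App")
    tk.Button(root, text="Click me", command=on_button_click).pack(padx=20, pady=20)
    return root

# To run the app: build_window().mainloop()
```

The point is that the generated code stays readable Python, so the beginner can graduate from the designer to the editor without switching worlds.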
Some time ago, a small group of people (me included) (I think from some similar HN post?) got together and we brainstormed a bit on how to implement this. But we got lost in the discussion around what GUI framework to base this on. I already forgot the options. I think we discussed about PySide, Dear PyGui, or similar. Or maybe also web-based. We couldn't really decide. And then we got distracted, and the discussion died off.
Note, Free Pascal was mentioned here. But then you should also mention Lazarus (https://www.lazarus-ide.org/), which is Free Pascal plus an IDE modeled on Delphi. Lazarus is really great. And it is actively developed. But Object Pascal is too little known nowadays, and maybe also a bit outdated.
The UI/UX of those tools is pretty close to the Borland IDEs; they have a learning curve, but it's at least 10x gentler than vi/emacs.
I am still shocked that no tool since has managed to come even close to VB. You could easily develop a moderately complex GUI application that felt snappy in an afternoon. C# with WinForms is the second closest to that. All other iterations since have not been designed with individual developers in mind, sadly.
A powerful developing alternative to this paradigm could be what I'm calling speech/voice driven development (SDD or VDD). It takes away some of the pain of typing so much and makes interactions with AI feel a bit more natural, like talking to a colleague. But for it to really work well, AI models will need to become even faster.
For actual work, though, I’ve been using VS Code exclusively since its inception. Electron might be a bloated mess, but spending time on alternatives doesn’t feel worth it. Maybe that’s because I didn’t grow up in the golden era of computing and can’t make the vim workflow stick no matter how hard I try.
I’m pretty sure twenty years from now, this generation of developers will get blurry-eyed reminiscing about how fast and feature-packed VS Code was, and how Microsoft built the best GUI text editor of its time.
As for TUI editors, I love micro because it has mouse support and doesn’t make you memorize a spellbook just to move around.
Yes, Visual Basic was indeed the pinnacle, and today it is Qt, for what it's worth. But no, let's go write HTML and CSS, and when the Stockholm syndrome gets us bad enough, why not some React or Angular to get the party of pain going again?
sph•7h ago
I know there's Emacs and vim, but they're far too programmable and bloated compared to the elegance of TC++, which did one job, and one job only, very well. Also, despite being an Emacs power user at this point, it's never going to be as ergonomic and well thought out with its arcane chords, while TC++ conveniently shows all possible keybinds throughout its UI.
coolcoder613•7h ago
[0] https://github.com/magiblot/tvision [1] https://github.com/magiblot/turbo
badsectoracula•7h ago
[0] https://i.imgur.com/Qvkt3W0.png
[1] https://www.gnu.org/software/texinfo/manual/texinfo/html_nod...
fithisux•6h ago
It works fine with Yori too, not only CMD.
Brian_K_White•6h ago
(That doesn't imply I went with VS or similar fat ide, just that I didn't end up using xwpe for real. I tried code::blocks for a while but mostly just use geany or a plain editor.)
Dwedit•6h ago
Linux terminal programs run in an emulated terminal and are bound by keyboard-input restrictions that DOS programs did not have.
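One such restriction, sketched below: a classic Unix terminal transmits Ctrl+letter as a single control byte (the letter's ASCII code masked with 0x1f), so several combinations are simply indistinguishable — unlike DOS, where a program could read raw scancodes, including key-up events, straight from the keyboard.

```python
# A terminal sends Ctrl+<letter> as one control byte, which collides with
# other keys -- there is no way for a TUI to tell them apart.
def ctrl(letter: str) -> str:
    """The byte a classic terminal transmits for Ctrl+<letter>."""
    return chr(ord(letter.upper()) & 0x1f)

assert ctrl("i") == "\t"    # Ctrl+I is literally Tab
assert ctrl("m") == "\r"    # Ctrl+M is literally Enter (carriage return)
assert ctrl("h") == "\x08"  # Ctrl+H is literally Backspace
```

(Modern terminal emulators have opt-in extensions for richer key reporting, but the default encoding is still this one.)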