WSL9x takes quite a different approach. Windows boots first, but once Linux starts both kernels are running side-by-side in ring 0 with full privileges. They are supposed to cooperate, but if either crashes then both go down.
It can work either way though.
OpenOffice XML [1] -> Office Open XML [2]
> Because we cannot name something leading with a trademark owned by someone else.
https://xcancel.com/richturn_ms/status/1245481405947076610?s...
And this WSL project is going to run into the same problem.
The "for Linux" is added because it's a subsystem for Linux applications (originally not leveraging a VM).
Microsoft also had the "Microsoft POSIX subsystem" (1993) and "Windows Services for UNIX" (1999) which were built on the "Subsystem for Unix-based Applications" (rather than "Unix-based Application Subsystem"). That chain of subsystems died at the end of Windows 8, though.
There are many reasons not to put "Linux" in front, but the naming is consistent with Microsoft's naming inconsistencies. It's not the first time they used "subsystem for" and it's not the first time they used "Windows x for y" either.
The naming is ambiguous, you could interpret the Windows subsystem for Linux as a subsystem of Linux (if it had such a thing) that runs Windows, or as a Windows subsystem for use with Linux. Swapping the order doesn't change that.
In other languages, the difference would be clearer.
I do agree it's an issue of English being an imprecise language.
And this is a poor example, because Microsoft wants to be Microsoft.
The name we shipped was even worse than Windows Subsystem for Linux, honestly. At least Microsoft spent some time on it.
I am going to run this in Windows 95 on a Sun PC card under Solaris 7.
from the same commenter who effused "jesus fucking christ this is an abomination of epic proportions that has no right to exist in a just universe and I love it so much"

I built a Win9x compatibility mode for BrowserBox that does exactly this (https://github.com/BrowserBox/BrowserBox/blob/main/readme-fi...). Your modern server does all the rendering, and it outputs a client link specifically designed for legacy browsers like IE5, IE6, and Netscape running on Windows 95/98/NT, streaming them the pixels. It's definitely an abomination, but there's something magical and retro that I like about viewing the 2026 internet through an IE6 window ;) ;p xx
- Retrozilla with some about:config flags disabling old SSL cyphers and new keys to enable newer ones
- Iron TCL maybe with KernelEx and BFGXP from https://luxferre.top reading gopher and gemini sites such as gemini://gemi.dev proxying all the web bloat and slimming it down like crazy
- Same Gemini URL, but thru http://portal.mozz.us/gemini . Double proxy in the end, but it will be readable.
https://github.com/wishstudio/flinux
flinux essentially had the architecture of WSL1, while CoLinux was more like WSL2 with a Linux kernel side-loaded.
Cygwin was technically the correct approach: native POSIX binaries on Windows rather than hacking in some foreign Linux plumbing. Since it was merely a lightweight DLL to link to (or a bunch of them), it also kept the cruft low without messing with ring 0.
However, it lacked the convenience of a CLI package manager back then, and I remember being hooked on CoLinux when I had to work on Windows.
The problem with Cygwin as I remember it was DLL hell. You'd have applications (such as an OpenSSH port for Windows) which would include their own cygwin1.dll, and then you'd have issues with different versions of said DLL.
Cygwin had less overhead which mattered in a world of limited RAM and heavy, limited swapping (x86-32, limited I/O, PATA, ...).
Those constraints also meant native applications instead of Web 2.0 NodeJS and whatnot. Java in particular had a bad name, and back then didn't even have a coherent UI toolkit.
As always: two steps forward, one step back.
The single biggest problem it has is slow forking. I learned to write my scripts in pure bash as much as possible, or as a composition of streaming executables, and avoid executing an executable per line of input or similar.
If you're curious, I believe the issue was discussed at length in the Go GitHub issues years ago. Also on the mailing lists of many other languages.
https://frippery.org/busybox/index.html
It implements a subset of bash on top of ash/dash. Arrays are not supported, but it is quite fast.
The forking problem is still present, though.
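Since ash/dash have no arrays, the usual workaround is to press the positional parameters into service as a single list per scope. A small sketch (POSIX sh, so it works in busybox ash too):

```sh
# `set --` rewrites $1, $2, ... in place, giving you one list to work with.
set -- apple banana cherry
echo "$#"           # element count
echo "$2"           # second element
set -- "$@" date    # append an element
for item in "$@"; do
    printf '%s\n' "$item"
done
```

The obvious limitation is that there is only one such "array" per function scope, but for many scripts that's enough to avoid bash.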
For example, someone might do something like this (completely ignoring the need to quote in the interests of illustrating the actual issue, forking):
for x in *; do
    new_name=$(echo $x | sed 's/old/new/')
    mv $x $new_name
done
Instead of something like this:

for x in *; do
    echo $x
done | sed -r 's|(.*)old(.*)|mv \1old\2 \1new\2|' | grep '^mv ' | bash
This avoids a sed invocation per loop and eliminates self-renames, but it's harder to work with. Of course, the code as written is completely unusable in the presence of spaces or other weird characters in filenames; do not use this.
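For the specific rename case there's a middle ground, assuming bash rather than POSIX sh: parameter expansion does the substitution in-process, so the loop forks nothing at all, and quoting everything makes it safe for weird filenames:

```bash
# `${var/pat/repl}` is a bash-ism: no sed, no fork per file.
# The *old* glob only matches names containing "old", so no self-renames.
for x in *old*; do
    mv -- "$x" "${x/old/new}"
done
```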
Dash has been benchmarked as 4x faster than bash. The bash manpage ends by stating that "bash is too big, and too slow."
To solve the problem or because you saw "slow" and "bash" and wanted to bring up something cool but unrelated?
If I go from 10 seconds of forking and 0.04 seconds of shell to 10 seconds of forking and 0.01 seconds of shell, I don't actually care about how cool and fast the shell is. And I've never had the speed of bash itself be a problem.
$ parameter='fisholdbits'
$ echo ${parameter/old/new}
fishnewbits

> slow forking
There isn't much that can be done about that: starting up and tearing down a process on Windows is a much more resource-intensive operation than on most other OSs, because a Windows process opts into a lot of machinery by default (GUI libraries and such) that processes on other OSs only touch if they need it. This is why threads were much more popular on Windows. Threads are faster than forking on other OSs too, especially when data needs to be shared between tasks, since IPC is a lot more expensive than sharing in-process memory; but the difference is not as stark as it is on Windows, so the potential difficulties of threaded development weren't always worth the effort.
Cygwin can't do anything about the cost of forking processes, unfortunately.
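A quick way to feel this on any platform is to time the same loop with and without an external process per iteration. A rough sketch (absolute numbers vary by machine, but on Cygwin the gap between the two lines is dramatically wider than on Linux, because each `/bin/true` costs a full Windows process setup and teardown):

```sh
# Builtins only: no processes created inside the loop.
time sh -c 'i=0; while [ $i -lt 500 ]; do i=$((i+1)); done'

# One external process per iteration: 500 fork+execs.
time sh -c 'i=0; while [ $i -lt 500 ]; do /bin/true; i=$((i+1)); done'
```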
As a dependency of a shipping Windows application that needs to cleanly coexist side-by-side with existing Cygwin installations and optionally support silent install/upgrade/uninstall through mechanisms like SCCM, Intune, and Group Policy?
Not so much.
I do use the setup program to build the self-contained Cygwin root that's ultimately bundled into my program's MSI package and installed as a subdirectory of its Program Files directory, however.
Maybe so, but my memory of Cygwin was waiting multiple seconds just for the Cygwin CLI prompt to load. It was very slow on my machines.
Java was ahead of its time, now nothing has a coherent UI toolkit.
You don’t get an app that looks the same across platforms. You do get apps that look like they belong on your platform, even though the code is cross-platform. It uses the native toolkit wherever you run it: Windows, GTK, Qt, Motif, macOS/Carbon, macOS/Cocoa, and X11 with generic widgets.
Older platforms are also supported, like OS/2, Irix, and OSF/1.
https://wiki.wxwidgets.org/Supported_Platforms
It’s a C++ project, but it has bindings for most of the languages you’d use to build an application. Ada? Go? Delphi? Ruby? Python? Rust? Yes, and more. https://wiki.wxwidgets.org/Bindings
So, if you use wxWidgets, you probably have to use either the C++ or the Python version; the others are unlikely to be well supported.
Among actively developed bindings, there is also wxRust at https://crates.io/crates/wxdragon
What I usually do is make sure my code builds with both Cygwin and MingW, and distribute the binaries built with MingW.
I'm sure they did the best they could ... it was just really painful to use.
I was a Cygwin user from about 1999 to 2022 or so, then spent a little time on WSL2 (it's what I still use on my laptop), but I've been fully Linux on the desktop since last year.
It doesn't help that the package simply named "gcc" is for the MSYS2 target.
Instead, you either want UCRT64 or CLANG64, depending on whether you want to build with the GNU or LLVM toolchains, as it uses the newer, fully-supported Universal C Runtime instead.
As for UTF-8 support, it's the manifest file that determines whether Windows sets the ANSI code page to UTF-8. (There's also an undocumented API function that resets the code page for GetACP and the Rtl functions that convert ANSI into Unicode. But this would run after all the other DLLs have finished loading.) Having the code page correct is enough to support Unicode filenames and Unicode text in the GUI.
It just won't provide UTF-8 locale support for the standard C library.
I can certainly relate to this: I'm currently sitting on a request for an enhancement to a product (currently running on a 32-bit Windows 10 VM) with a build system that has never been updated to support any Microsoft platform other than MS-DOS, or toolchain newer than Microsoft C 5.1.
This made me laugh. It reminded me of coursework I did in university that was clearly written many years before I took the course, as it recommended we manually enable the "new" C99 standard in our compiler. I guess that recommendation survived in the documentation up through when I took the course, at which point it was still relevant, since GCC was otherwise defaulting to C11 by the time I was using it.
So now even today, compiling any GNU package means probing every last feature from scratch and spitting out obscenely rococo scripts and Makefiles tens of thousands of lines long. We can do better, and have, but damn are there a lot of active codebases out there that still haven't caught up.
Interix was implemented as proper NT kernel "subsystem". It was just another build target for GNU automake, for example.
(Being that Interix was a real kernel subsystem I have this fever dream idea of a text-mode "distribution" of NT running w/o any Win32 subsystem.)
Requiring every single Linux app developer to recompile their app using Cygwin and account for quirks that it may have is not the correct approach. Having Microsoft handle all of the compatibility concerns scales much better.
Cygwin started in 1995. Microsoft wasn't cooperative with FOSS at all at that point. They were practicing EEE, and eating some expensive Unix/VMS machines with WNT.
Cygwin doesn't work at all in Windows AppContainer package isolation; too many voodoo hacks. MSYS2 uses it to this day, and as a result you can't run any MSYS2 binaries in an AppContainer. Had to take a completely different route for Claude Code sandboxing because of this: Claude Code wants Git for Windows, and Git for Windows distributes MSYS2-built binaries of bash.exe and friends. Truly native Windows builds don't do all the unusual compatibility hacks that cygwin1.dll requires; I found non-MSYS2-built binaries of the same programs all ran fine in AppContainer.
A lot of this is issues Microsoft could fix if they were sufficiently motivated
e.g. Windows lacks a fork() API so cygwin has to emulate it with all these hacks
Well, technically the NT API does have the equivalent of fork, but the Win32 layer (CSRSS.EXE) gets fatally confused by it. Which again is something Microsoft could potentially fix, but I don’t believe it has ever been a priority for them
Similarly, Windows lacks exec(), as in replace the current process with new executable. Windows only supports creating a brand new process, which means a brand new PID. So Cygwin hacks it by keeping its own PID numbers; exec() changes your Windows PID but not your Cygwin PID. Again, something Microsoft arguably could fix if they were motivated
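The POSIX semantics Cygwin has to fake here are easy to see from any Unix shell: exec replaces the current process image, so the PID survives. Windows has no primitive for this (CreateProcess always mints a new PID), which is exactly why Cygwin keeps its own PID table. A small demonstration:

```sh
# Print the shell's PID, then exec a fresh shell that prints its PID.
# On POSIX both lines show the same number, because exec reuses the process.
sh -c 'echo "before exec: $$"; exec sh -c "echo \"after exec: \$\$\""'
```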
They did fix it, in a sense, with WSL1 picoprocesses. Faster and more compatible than Cygwin. Real fork and exec on the Windows NT kernel. Sadly, WSL2 is even faster and more compatible while being much less interesting. WSL1 was pretty neat, at least, and is still available.
In any event, this diversion doesn't change my analysis of Cygwin. Cygwin still sucks regardless of whose fault it is. I intentionally left this stuff out of my post because I thought it was obvious that Cygwin is working around Windows limitations to hack in POSIX semantics; it's the whole point of the project. None of us can change Windows or Cygwin and they're both ossified from age and lack of attention. We have to live with the options we've actually got.
If you need a Windows build of a Linux tool in 2026 and can't use WSL, try just building it natively (UCRT64, CLANG64, MSVC, your choice) without a compatibility layer. Lots of tools from the Linux ecosystem actually have Windows source compatibility today. Things were different in the 90s when Cygwin was created.
Cygwin still lacks that to this day; you have to fire up the GUI installer to update packages.
MSYS2 is cygwin with pacman cli.
Back when I was still using Windows (probably the XP era), I used to run coLinux. It was kind of amazing: setting up something like a LAMP stack on the Linux side was a lot easier, and then using Windows editors for editing made for quite a nice local dev environment, I think! I could even try some of the X11 servers on Windows and use a Linux desktop on top of Windows.
When I noticed I kept inching toward a more and more unixy environment on Windows, I eventually switched to macOS.
Apart from the obvious hack value, I can't quite imagine even a pretend use-case; with some 486-era machine, you would be limited by memory quite quickly!
What I especially like about this Windows 9x Subsystem project is that it proves coLinux could have been written way earlier. Now imagine how much less dual booting we would have needed in 1996 if that had happened, and how it would have affected VMware, which only existed from 1998.
If you want to run your windows software in Linux, you could try Wine. Wine seems to have support for WNASPI so it's possible your software would just work. (You might have to run Wine as root I guess, to get access to the SCSI devices.)
If Wine doesn't work, Windows in QEMU with PCI passthrough to the SCSI controller might have better chances to work.
Wine's WNASPI32.dll is really just a facade: it doesn't provide actual SCSI services, it's just there so SCSI-using apps think they have ASPI onboard. So for my case I would need to write a shim to pass SCSI I/O requests through to a Linux service (or loopback file?) to actually process the requests. I've been meaning to do this for a long time, but if there is some way I can set up a loopback file under Linux to 'pretend' to be a SCSI block device for a Windows app, I'd sure like to know if it's possible.
I mean, it's like balancing a Cybertruck on 4 skateboards and flinging it over a hill. Cool.
This trickery is called binfmt_misc, a Linux kernel facility that associates arbitrary binary formats with custom userspace 'interpreters'.
I have had it working in the past. And while it is kinda neat I prefer manually running 'wine program.exe' to have a bit more control.
I have seen reports that a binfmt_misc setup + wine is good enough to get infected by certain windows viruses ;-P
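For reference, the registration itself is essentially a one-liner. A sketch, assuming Wine lives at /usr/bin/wine, binfmt_misc is mounted, and you have root; the rule name "DOSWin" is arbitrary. The M-type rule matches magic bytes, and every DOS/PE executable starts with "MZ":

```sh
name=DOSWin
interp=/usr/bin/wine
reg=":$name:M::MZ::$interp:"
echo "$reg"
# As root:    echo "$reg" > /proc/sys/fs/binfmt_misc/register
# Afterwards, ./program.exe runs through wine transparently -
# which is exactly how a Windows virus can get itself executed too.
# To remove:  echo -1 > /proc/sys/fs/binfmt_misc/$name
```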
To get around that, I recently added a legacy compatibility mode to BrowserBox (bbx win9x-run). It basically lets you run the server on your modern daily driver, and access it via IE 5, IE 6, or Netscape on the Pentium box. It strips away the modern TLS/JS rendering issues and lets you actually browse the modern web from Windows 9x. Highly recommend giving it a spin if you get that machine built!
> looks inside
> virtual 8086 mode
Per wongarsu's post, something like the OS/2 Subsystem is an OS/2 system with Windows beneath it, but the OS/2 Subsystem is much smaller and less consequential, thus subsidiary (in the auxiliary sense) to Windows as a whole.
Isn't marketing fun?
This is how we end up with hundreds of products that provide "solutions" to your business problems and "retain customers" and upwards of a dozen other similar phrases they all slather on their frontpages, even though one is a distributed database, one is a metrics analysis system, one handles usage-based billing, one is a consulting service, one is a hosted provider for authentication... so frustrating trying to figure out just what a product is sometimes with naming conventions that make "Windows Subsystem for Linux" look like a paragon of clarity. At least "Linux" was directly referenced and it wasn't Windows Subsystem for Alternate Binary Formats or something.
(Edited: mixed it up on the last sentence.)
Which is exactly what the post says this is. It's running Windows 9x on the Linux kernel. It's strangely worded, but from the follow-up comment and the readme in the repo, it seems clear that it's running on the Linux kernel.
Pretty sure it's the other way around. But I haven't had my coffee yet ;)
Microsoft's naming scheme confuses me every single time though: "Windows Subsystem for Linux" actually runs Linux on Windows...
I missed the part where it runs on the bloody Windows 9x kernel; I was too busy thinking about modern Windows.
These days especially, it's better to just use 86Box for those operating systems.
Unfortunately this is ambiguous, as there's an AI product called Zero AI.
Looks like it's been updated now to be more clear. Amazing though.
The Wikipedia page is not very informative and presents it as a regular VM (possibly mixing up 9x and the later versions that run the NT line of kernels). The manual is a bit more informative about the tech:
ilab.usc.edu/packages/special/Win4Lin-3.0.1-manual.pdf
I’m a bit surprised it hasn’t been mentioned a lot in the comments. Maybe it’s a bit too old for most people here (Linux in the late '90s/early '00s was a much smaller community)?
Maybe there's some detail I don't quite follow, like: it has support for the 486, but only those with a built-in FPU?
To me, this seems an impossible feat.
But I wonder how it seems to people who understand how it works?
I'm reminded of this joke:
Two mathematicians are talking. One says a theorem is trivial. After two hours of explanation, the other agrees that it is indeed trivial.
As someone who mostly understands what's going on - It does not seem like wizardry to me, but I am very impressed that the author figured out the long list of arcane details needed to make it work.
What makes you think so?
In the math space it's not even quite as silly as it sounds. Something can be both "obvious" and "true", but it can take some substantial analysis to make sure the obvious thing is true by hitting it with the corner cases and possibly exceptions. There is a long history of obvious-yet-false statements. It's also completely sensible for something to be trivially true, yet be worth some substantial analysis to be sure that it really is true, because there's also a history of trivial-yet-false statements.
I could analogize it in our space to "code so simple it is obviously bug free" [1]... even code that is so simple that it is obviously bug free could still stand to be analyzed for bugs. If it stands up to that analysis, it is still "so simple it is obviously bug free"... but that doesn't mean you couldn't spend hours carefully verifying that, especially if you were deeply dependent on it for some reason.
Heck I've got a non-trivial number of unit tests that arguably fit that classification, making sure that the code that is so simple it is bug free really is... because it's distressing how many times I've discovered I was wrong about that.
[1]: In reference to Tony Hoare's "There are two ways to write code: write code so simple there are obviously no bugs in it, or write code so complex that there are no obvious bugs in it."
That makes sense, I was just born yesterday.
In a literal sense, it very well may have been trivial, even if neither you nor the professor would have been able to easily show it.
The one I've always flown with is, trivial means (1) a special case of a more general theory (2) which flattens many of the extra frills and considerations of the general theory and (3) is intuitively clear ("easy") to appreciate and compute.
From this perspective, everything is trivial from the relative perspective of a god. I know of no absolute definition of trivial.
Have the model spit out example programs to study the API
There's special hardware in a processor, meant for the operating system to limit each program's access to memory and processing time, which Windows 9x leaves unused. This means that the Windows 9x Subsystem for Linux can say "look at me, I'm the operating system now" and take over that hardware to run a modern operating system.
What Windows 9x didn't have was security. A program could interfere with these mechanisms, but usually only if it was designed to do that, not as a result of a random bug (if the entire machine crashed, it was usually because of a buggy driver).
Windows 3.11 was a hypervisor running virtual machines. The 16-bit Windows virtual machine (within which everything was cooperatively multitasking), the 32-bit headless VM that ran 32-bit drivers, and any number of V86 DOS virtual machines.
Win9x was similar in the sense that it had the Windows virtual machine running 32-bit and 16-bit Windows software along with V86 DOS VMs. It did some bananas things by having KERNEL, USER, and GDI "thunk" between the environments to not just let 16-bit programs run but let them continue interacting with 32-bit programs. So no, Win9x was in fact 32-bit protected mode with pre-emptive multitasking.
What Win9x prioritized was compatibility. That meant it supported old 16-bit drivers and DOS TSRs among other things. It also did not have any of the modern notions of security or protection. Any program could read any other program's memory or inject code into it. As you might expect a combination of awful DOS drivers and constant 3rd party code injection was not a recipe for stability even absent bad intentions or incompetence.
Windows 2000/XP went further and degraded the original Windows NT design by pulling stuff into kernel mode for performance. GDI and the Window Manager were all kernel mode - see the many many security vulnerabilities resulting from that.
WSL9x uses the same Win9x memory protection APIs to set up the mappings for Linux processes, and the memory protection in this context is solid. The difference is simply that there is no need to subvert it for compatibility.
Windows 9x, by contrast, was DOS-derived. Running Linux inside it would require fundamentally different (and messier) hacks - which is probably why nobody did it at the time. The very fact that this works at all is a testament to how ahead-of-its-time NT's architecture was.
For a practical answer: you'd need something like this for legacy locked-in situations. Old medical or industrial software that only runs on Windows 98, or specialized hardware without modern drivers. That said, if you have a 486 handy in 2026, running Linux natively is almost certainly more useful than running it inside a 30-year-old DOS derivative.
Some interesting reading:
1. What was the role of MS-DOS in Windows 95? (https://devblogs.microsoft.com/oldnewthing/20071224-00/?p=24...)
2. Why doesn’t Windows 95 format floppy disks smoothly? (https://devblogs.microsoft.com/oldnewthing/20090102-00/?p=19...)
3. Running MS-DOS programs in a window alongside other Windows programs (https://devblogs.microsoft.com/oldnewthing/20231127-00/?p=10...)
I have not tested this yet, so I have no idea - but if he managed to pull this off then this may be one of the greatest achievements this year. Or perhaps there are some restrictions to it? Does compiling stuff work in it? So many questions ... who has the answers?
Note: I'm not saying the author didn't improve his skills overall, but the last '6 years' perhaps also means a fair amount of digging the web with search engines, which are... like AI 0.1.
https://github.com/jwilk/zygolophodon
I've been working on a WebExtension that calls out to zygolophodon and returns plain HTML to the browser. In the process of rebasing it over recent changes but here is the working webext-old branch:
https://github.com/jwilk/zygolophodon/compare/master...pabs3...
Edit: to the people who downvoted, I corrected my spelling mistake. I will perform due penance.
But CoLinux - IIRC - required the NT branch of Windows. I can only imagine the level of hackery it takes to make this happen on Windows 9x.
Part of me wants to weep at the sheer perversity, part of me wants to burst into manic laughter. It is indeed a world of endless wonders.
(To clarify: while the kernel is actually running at ring 0, to act as a driver it seems to use the usermode profile.)
Does it live there irrespective of this project? Or is that part of the patching?
Do any screen editors work in the command prompt windows? Try with "export TERM=ansi".
I wonder how similar this project is to "BSD on Windows": https://archive.org/details/bsd-on-windows
Also, I know about https://en.wikipedia.org/wiki/Architecture_of_Windows_9x, but it's not really meaty enough for my taste. :)
It's got lots of very thorough documentation and sample code to dig through
Other well known anecdotes and pieces:
https://jacobfilipp.com/msj-index/
https://web.archive.org/web/20240318233231/https://bytepoint...
ErroneousBosh•1d ago
Yes, I have weird problems. I get to look after some very weird shit.
defrost•1d ago
Still got those in this part of the world sharing space with state of the art autonomous 100+ tonne robo trucks.
ourmandave•1d ago
keepamovin•1d ago
I actually built a win9x compatibility mode into BrowserBox specifically for this kind of weirdness. You run the server on a modern system and launch it with bbx win9x-run, and it proxies the modern web to legacy clients. It works surprisingly well with IE5, IE6, and old Netscape on Windows 95/98/NT. Might be a fun addition to your retro utility belt!
anthk•1d ago
Or better, ditch the web completely and head to Gopher/Gemini.
keepamovin•1d ago
I get the degradation to Gopher as a way to solve many issues, but many things just don't work there. And it may have its own vulnerabilities.
ErroneousBosh•1d ago
Run it in Windows XP, in a VM.
Now here's the clever bit - qemu will allow you to expose the keyboard, mouse, and framebuffer as a VNC server. So you set up Apache Guacamole to point a VNC client at the VM, and then "normal people" can log in, operate the transmitter, and log out again.
You can do a lot of sneaky things with that, including setting up headless X, running VNC on it pointed at your qemu VM, and then streaming the headless X server's framebuffer out with ffmpeg.
Yes sometimes work can be a bit boring with not much to do, why do you ask?
keepamovin•1d ago
BrowserBox is basically the same pattern as this setup (streaming graphics from some browsing substrate somewhere to web clients), except the architecture is different: a modern box on the same private network runs the BrowserBox server, and the win box (QEMUd or otherwise) connects to its http endpoint, using whatever browser it has (tested back to IE5 even, tho that's a way more buggy browser than IE6; IE8 should be golden). That way you get the full modern web, no compromises. But crucially the web is not actually accessing your legacy box. So, no compromises, i.e., no easy vulns. Especially a concern for older browsers. Plus, we've got policy controls to lock down capabilities (copy, paste, URL lists, internal IP access controls, etc).
In your case it sounds like you are running the webby servers on the XP box, too, so BrowserBox would link back into those over the same private network, render them on the modern box, send it back to the XP box, then clients can connect over the QEMU VNC bridge you already have.
Alternately you could just do away with the Win XP browsing, and have BrowserBox connect to your webby endpoints for transmitters wherever you run them, and then expose that browsing graphics stream to clients over whatever endpoint you want. Many options!
I like your ffmpeg out setup. How did that go? Share more about that? Pretty interesting; I love these old architectures and legacy-systems compatibility quests.
ErroneousBosh•1d ago
> In your case it sounds like you are running the webby servers on the XP box, too,
Yeah - it's a web front end to some specialised software written in, I guess, Microsoft C++ (if I had time, enthusiasm, and a copy of it lying around, I suppose I'd wave Ghidra at it and see what happens).
I'll look into BrowserBox, that sounds handy.
> I like your ffmpeg out setup. How did that go? Share more about that? Pretty interesting, I love this old architectures, and legacy systems compatibility quests.
Surprisingly well. I think you could probably stream it straight to Twitch or something if you wanted. Yeah, this sounds like a blog post.
keepamovin•1d ago
ErroneousBosh•21h ago
ErroneousBosh•1d ago
There is a section of the Forties Pipeline where they have a huge amount of gas handling plant in central Scotland. Last time I was on site (admittedly 15 years ago but I don't see this changing soon) the SCADA outstations were run by absolutely minty box fresh VAXStation 3100s. Plastic not even peeled off the front panel badges fresh.
thijsvandien•1d ago
dnnddidiej•1d ago
jesterson•1d ago
Just a few months ago I saw a Windows 95 error message on an HSBC ATM.
ErroneousBosh•1d ago
I recently saw that running on special 16 channel DAT recorders used by the 999 service, recently as in "within the past five years". I believe they've been retired but kept around in case they need to recover tapes off them.
I kept my mouth ABSOLUTELY FUCKING SHUT about knowing my way round OS/2 Warp 4.
tracker1•16h ago
Except it would be nice if computers were as snappy as back then... so many layers of cruft that nothing really pops. I remember I used to just disable all the NT/2K animations and it felt so nice. I built a career on web apps, but most are just really poorly written, and it pisses me off.
ErroneousBosh•1h ago
You should try Haiku: https://www.haiku-os.org/
It's actually pretty usable these days, at least on Thinkpads and other "generic Intel chipset" machines.
ErroneousBosh•1d ago