Better to consider is the Proton verified count, which has been rocketing upwards.
Just target Windows, business as usual, and let Valve do the hard work.
But they do test their Windows games on Linux now and fix issues as needed. I read that CDProjekt does that, at least.
Maybe Valve can play the reverse switcheroo out of Microsoft's playbook and, once enough people are on Linux, force the developers' hand by not supporting Proton anymore.
How many game studios were bothering with native Linux clients before Proton became known?
That goes back to the original question: "But would you want to run this Win32 software on Linux for daily use?"
Emulation does not mean that the CPU must be interpreted. For example, the DOSEMU emulator for Linux from the early 90s ran DOS programs natively using the 386's virtual 8086 mode, and reimplemented the DOS API. This worked similarly to Microsoft's Virtual DOS Machine on Windows NT. For a more recent example, the ShadPS4 PS4 emulator runs the game code natively on your amd64 CPU and reimplements the PS4 API in the emulator source code for graphics/audio/input/etc calls.
But if you liked that, consider that C# was in many ways a spiritual successor to Delphi, and MS still supports native GUI development with it.
The web was a big step backwards for UI design. It was a 30 year detour whose results still suck compared to pre-web UIs.
Alternatively, RemObjects makes Elements, also a RAD programming environment in which you can code in Oxygene (their Object Pascal), C#, Swift, Java, Go, or Mercury (VB) and target all platforms: .NET, iOS and macOS, Android, WebAssembly, Java, Linux, Windows.
Wait you can make Android applications with Golang without too much sorcery??
I just wanted to convert some Golang CLI applications into GUIs for Android, but I ended up giving up on the project and just started recommending that people use Termux instead.
Please tell me if there is a simple method for Golang that can "just work" — basically the Visual Basic-style glue code for connecting a CLI to a GUI.
Why don't you try it out: https://www.remobjects.com/elements/gold/
One of the key principles of F-Droid is that apps must be open source and buildable (reproducibly, I think) on F-Droid's servers — but I suppose that kind of reproducibility would require this software, which is paid in this case.
We might take it for granted but React-like declarative top-down component model (as opposed to imperative UI) was a huge step forward. In particular that there's no difference between initial render or a re-render, and that updating state is enough for everything to propagate down. That's why it went beyond web, and why all modern native UI frameworks have a similar model these days.
I might unironically use this. The Windows 2000 era desktop was light and practical.
I wonder how well it performs with modern high-resolution, high-dpi displays.
But you can use group policy etc. freely. I don't know how Win 11 is though
I used to be a pretty happy Windows camper (I even got through Me without much complaint), but I'm so glad I moved to Linux and KDE for my private desktops before 11 hit.
Things started going downhill after that.
The answer to maintaining a highly functional and stable OS is piles and piles of backwards compatibility misery on the devs.
You want Windows 9? Sorry, some code checks the string for Windows 9 to determine if the OS is Windows 95 or 98.
Competition. In the first half of the 90s Windows faced a lot more of it. Then they didn't, and standards slipped. Why invest in Windows when people will buy it anyway?
Upgrades. In the first half of the 90s Windows was mostly software bought by PC users directly, rather than getting it with the hardware. So, if you could make Windows 95 run in 4mb of RAM rather than 8mb of RAM, you'd make way more sales on release day. As the industry matured, this model disappeared in favor of one where users got the OS with their hardware purchase and rarely bought upgrades, then never bought them, then never even upgraded when offered them for free. This inverted the incentive to optimize because now the customer was the OEMs, not the end user. Not optimizing as aggressively naturally came out of that because the only new sales of Windows would be on new machines with the newest specs, and OEMs wanted MS to give users reasons to buy new hardware anyway.
UI testing. In the 1990s the desktop GUI paradigm was new and Apple's competitive advantage was UI quality, so Microsoft ran lots of usability studies to figure out what worked. It wasn't a cultural problem because most UI was designed by programmers who freely admitted they didn't really know what worked. The reason the start button had "Start" written on it was because of these tests. After Windows 95 the culture of usability studies disappeared, as they might imply that the professional designers didn't know what they were doing, and those designers came to compete on looks. Also it just got a lot harder to change the basic desktop UI designs anyway.
The web. When people mostly wrote Windows apps, investing in Windows itself made sense. Once everyone migrated to web apps it made much less sense. Data is no longer stored in files locally so making Explorer more powerful doesn't help, it makes more sense to simplify it. There's no longer any concept of a Windows app so adding new APIs is low ROI outside of gaming, as the only consumer is the browser. As a consequence all the people with ambition abandoned the Windows team to work on web-related stuff like Azure, where you could have actual impact. The 90s Windows/MacOS teams were full of people thinking big thoughts about how to write better software hence stuff like DCOM, OpenDoc, QuickTime, DirectMusic and so on. The overwhelming preference of developers for making websites regardless of the preferences of the users meant developing new OS ideas was a waste of time; browsers would not expose these features, so devs wouldn't use them, so apps wouldn't require them, so users would buy new computers to get access to them.
And that's why MS threw Windows away. It simply isn't a valuable asset anymore.
This is largely true in North America, UK and AUS/NZ, less true in Europe, a mixed bag in the Middle East and mostly untrue everywhere else.
And failing everything else, Microsoft is in a position to put WSL front and center — and those, yet again, are the laptops that normies will buy.
It's not a moving target. Proton and Wine have shown it can be achieved with greater compatibility than even what Microsoft offers.
It is a moving target. Proton is mostly stuck in the Windows XP world, from before most new APIs became a mix of COM and WinRT.
Even if that isn't the case, almost no company would bother with GNU/Linux to develop with Win32, instead of Windows, Visual Studio, business as usual.
It's a start.
(That and Linux doesn't implement win32 and wine doesn't exclusively run on Linux.)
If you make a piece of software today and want to package it for Linux, it's an absolute mess. I mean, look at Flatpak or Docker: a common solution is to ship your own userspace. That's just insane.
It's much more bloated than it should be, but it's the best way to reliably run old or new software on any given Linux distro.
What are some examples?
One popular example is GRID 2; another is Morrowind. Both crash on launch unless you tweak a lot of things, and even then it won't always succeed.
Need for Speed II: SE is "platinum" on Wine, and pretty much unable to be run at all on Windows 11.
[0] https://learn.microsoft.com/en-us/windows/win32/direct3darti...
I see there are guides on Steam forums on how to get it to run under Windows 11 [0], and they are quite involved for someone not overly familiar with computers outside of gaming.
0: https://steamcommunity.com/sharedfiles/filedetails/?id=29344...
A recent example is that in San Andreas, the seaplane never spawns if you're running Windows 11 24H2 or newer. All of it due to a bug that's always been in the game, but only the recent changes in Windows caused it to show up. If anybody's interested, you can read the investigation on it here: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...
It's a great game, unfortunately right now I am not able to play it anymore :( even though I have the original CD.
Unfortunately, Wine is of no help here :(
Also original Commandos games.
Windows kept bogging down the system trying to download a dozen different language versions of Word (for which I didn't have a licence and didn't want regardless). Steam kept going into a crash-restart cycle. The virus scanner was... being difficult.
Everything just works on Linux except some games on proton have some sound issues that I still need to work out.
Is this 1998? Linux is forever having sound issues. Why is sound so hard?
As always: it is Not Linux's Fault, but it is Linux's Problem.
It's one of the reasons why I moved to macOS + a Linux virtual machine. I get the best of both worlds. Plus, the hardware quality of a MacBook Pro M4 Max with 128GB of unified RAM is way beyond anything else on the market.
It doesn't help that they only officially support Rocky Linux. I use Mint. I assume there are some magic PipeWire / ALSA / PulseAudio commands I could run that would glue everything together properly. But I can't figure it out. It just seems so complicated.
glibc-based toolchains are ultimately missing a GLIBC_MIN_DEPLOYMENT_TARGET definition that gets passed to the linker so it knows which minimum version of glibc your software supports, similar to how Apple's toolchain lets you target older MacOS from a newer toolchain.
    patchelf --set-interpreter /lib/ld-linux-x86-64.so.2 "$APP"
    patchelf --set-rpath /lib "$APP"

Breaking between major versions is annoying (2 to 3, 3 to 4), but for the most part it's renaming work and some slight API modifications, reminiscent of the Python 2 to 3 switch — and it has only happened twice since 2000.
Who needs ABI compatibility when your software is OSS? You only need API compatibility at that point.
Because almost certainly someone out there will want to use it. And they should be able to, because that is the entire point of free software: user freedom.
Even if we ship as source, even if the user has the skills to build it, even if the makefile supports every version of the kernel, every other material variation, and who knows how many dependencies — what exactly am I supposed to do when a user reports:
"I followed your instructions and it doesn't run".
Linux Desktop fails because it's not 1 thing, it's 100 things. And to get anything to run reliably on 95 of them you need to be extremely competent.
Distribution as source fails because there are too many unknown, interdependent parts.
Distribution as binary containers (Docker et al.) is popular because it gives the app a fighting chance — while at the same time being a really ugly hack.
I think Rob Pike has the right idea with Go: just statically link everything wherever possible. These days I try to do the same, because so much less can go wrong for users.
People don’t seem to mind downloading a 30mb executable, so long as it actually works.
Stable ABIs for certain critical pieces of independently-updatable software (libc, OpenSSL, etc.) is not even that big of a lift or a hard tradeoff. I’ve never run into any issues with macOS’s libc because it doesn’t version the symbol for fopen like glibc does. It just requires commitment and forethought.
You can still get firefox as a .deb though.
It makes sense. Every distribution wants to be in charge of what set of libraries are available on their platform. And they all have their own way to manage software. Developing applications on Linux that can be widely used across distributions is way more complex than it needs to be. I can just ship a binary for windows and macOS. For Linux, you need an rpm and a dpkg and so on.
I use DaVinci Resolve on Linux. The Resolve developers only officially support Rocky Linux because anything else is too hard. I use it on Linux Mint anyway. The application has no title bar and recording audio doesn't work properly. Bleh.
Linux with glibc is the complete opposite; there really does exist old Linux software that static-links in everything down to libc, just interacting with the kernel through syscalls—and it does (almost always) still work to run such software on a modern Linux, even when the software is 10-20 years old.
I guess this is why Linux containers are such a thing: you’re taking a dynamically-linked Linux binary and pinning it to a particular entire userland, such that when you run the old software, it calls into the old glibc. Containers work, because they ultimately ground out in the same set of stable kernel ABI calls.
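That pinning is visible in any Dockerfile for legacy software — image tag and paths here are illustrative:

```dockerfile
# Pin the entire userland (old glibc included). The host's much newer
# kernel still runs it, because the kernel's syscall ABI stays stable.
FROM ubuntu:14.04
COPY legacy-app /usr/local/bin/legacy-app
CMD ["legacy-app"]
```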
(Which, now that I think of it, makes me wonder how exactly Windows containers work. I'm guessing each one brings its own NTOSKRNL, which gets spun up under Hyper-V if the host kernel ABI doesn't match the guest?)
AppImage has some issues/restrictions — for example, an AppImage can't run on an older Linux than the one it was compiled on, so people compile on the oldest machines they have — plus a few more quirks.
AppImages are really good, but Zapps are good too. I once tried to build something on top of Zapps, but it's a shame the project went down the crypto/IPFS route or something, and I don't really see any development on it now. It would be interesting if someone could add Zapps' features to AppImage, or pick up the project and build something similar.
At some point I've got to try this. I think it would be nice to have some tools to turn existing programs into Zapps (there are many such tools for making AppImages today).
To be honest, I think OSes are boring, and should have been that way since maybe 1995. The basic notions — multi-processing, context switching, tree-like file systems, multiple users, access privileges — haven't changed since 1970, and the more modern GUI stuff hasn't changed since at least the early '90s. Some design elements, like tree-like file systems, WIMP GUIs, per-user privileges, and the fuzziness of what an "operating system" even is and what its role is, are perhaps even arbitrary, but they can serve as a mature foundation for better-conceived ideas — the way ZFS (a very well-engineered implementation of tree-like data storage that's been standard since the '60s) can serve as a foundation for Postgres (which implements a better-conceived relational design).

I'm wondering why OSS — which, according to one of its acolytes, makes all bugs shallow — couldn't make its flagship OS more stable and boring. It has produced an anarchy of packaging systems, breaking upgrades and updates, an unstable glibc, desktop environments that differ and change seemingly for the sake of it, sound that keeps breaking, power management iffiness, etc.

I wish either of those systems had the same hardware & software support. I'd swap my desktop over in a heartbeat if I could.
'unfortunate rough edges that people only tolerate because they use WINE as a last resort'
Whether those rough edges will ever be ironed out is a matter I'll leave to other people. But I love that someone is attempting this just because of the tenacity it shows. It reminds me of projects like Asahi and Cosmopolitan Libc.
Now, if we want to actually solve GNU/Linux desktops not having a stable ABI, one solution would be a compatibility layer like Wine's, but implementing Ubuntu's ABIs. Then, as long as an app runs on supported Ubuntu releases, it would run on any system with this layer. I just hope it wouldn't be a buggy mess like Flatpak is.
It's my strong opinion that Windows 2000 Server, SP4 was the best desktop OS ever.
I wanted to be nice and entered a genuine Windows key still in my laptop's firmware somewhere.
As a thank you Microsoft pulled dozens of the features out of my OS, including remote desktop.
As soon as these latest FSR drivers are ported over I will swap to Linux. What a racket, lol.
1. The exact problem with the Linux ABI
2. What causes it (the issues that makes it such a challenge)
3. How it changed over the years, and its current state
4. Any serious attempts to resolve it
I've been on Linux for maybe 2 decades at this point. I haven't noticed any issues with the ABI so far, perhaps because I use everything from the distro repo or build and install things through the package manager. If I don't understand it, there are surely others who want to know too. (Not trying to brag here; I'm referring to the time I've spent on it.)
I know that this is a big ask. The best course for me is of course to research it myself. But those who know the whole history tend to have a well organized perspective of it, as well as some invaluable insights that are not recorded anywhere else. So if this describes you, please consider writing it down for others. Blog is probably the best format for this.
My understanding is that very old statically linked Linux images still run today because paraphrasing Linus: "we don't break user space".
Also, if you happened to have linked that image in the a.out format, it wouldn't work with a kernel from this year — but that's probably not the case ;)
The kernel doesn't break user space. User space breaks on its own.
Good operating systems should:
1. Allow users to obtain software from anywhere.
2. Execute all programs that were written for previous versions reliably.
3. Not insert themselves as middlemen into user/developer transactions.
Judged from this perspective, Windows is a good OS. It doesn't nail all three all the time, but it gets the closest. Linux is a bad OS.
The answers to your questions are:
(1) It isn't backwards compatible for sophisticated GUI apps. Core APIs like the widget toolkits change their API all the time (GTK 1->2->3->4, Qt also does this). It's also not forwards compatible. Compiling the same program on a new release may yield binaries that don't run on an old release. Linux library authors don't consider this a problem, Microsoft/Apple/everyone else does. This is the origin of the glibc symbol versioning errors everyone experiences sometimes.
(2) Maintaining a stable API/ABI is not fun and requires a capitalist who says "keep app X working or else I'll fire you". The capitalist Fights For The User. Linux is a socialist/collectivist project with nobody playing this role. Distros like Red Hat clone the software ecosystem into a private space that's semi-capitalist again, and do offer stable ABIs, but their releases are just ecosystem forks and the wider issue remains.
(3) It hasn't changed, and it's still bad.
(4) Docker: "solves" the problem on servers by shipping the entire userspace with every app, and being itself developed by a for-profit company. Only works because servers don't need any shared services from the computer beyond opening sockets and reading/writing files, so the kernel is good enough and the kernel does maintain a stable ABI. Docker obviously doesn't help the moment you move outside the server space and coordination requirements are larger.
Never happens for me on Arch, which I've run as my primary desktop for 15 years.
Perhaps that could be mitigated if someone could come up with an awesome OSS machine code translation layer like Apple's Rosetta.
This will never work, because it isn't a radical enough departure from Linux.
Linux occupies the bottom of a well in the cartesian space. Any deviation is an uphill battle. You'll die trying to reach escape velocity.
The forcing factors that pull you back down:
1. Battle-testedness. The mainstream Linux distros just have more eyeballs on them. That means your WINE-first distro (which I'll call "Lindows" in honor of the dead OS from 2003) will have bugs that make people consider abandoning the dream and going back to GNOME Fedora.
2. Cool factor. Nobody wants to open up their riced-out Linux laptop in class and have their classmate look over and go "yo this n** running windows 85!" (So, you're going to have to port XMonad to WINE. I don't make the rules!)
3. Kernel churn. People will want to run this thing on their brand-new gaming laptops. That likely means they'll need a recent kernel. And while they "never break userspace" in theory, in practice you'll need a new set of drivers and Mesa and other add-ons that WILL break things — especially things like 3D apps running through WINE (not to mention audio). Google can throw engineers at the problem of keeping Chromium working across graphics stacks. But can you?
If you could plant your flag in the dirt and say "we fork here" and make a radical left turn from mainline Linux, and get a cohort of kernel devs and app developers to follow you, you'd have a chance.
I might seriously recommend it to newbies. There's just this love I have for Windows 7 — even though I didn't really use it for much, it's so much more elegant in its own way than Windows 10.

It could be a really fun experiment, and I would be interested to see how it would pan out.
Rough approximations have been possible since the early 2000s, but they’re exactly that: rough approximations. Details matter, and when I boot up an old XP/7 box there are aspects in which they feel more polished and… I don’t know, finished? Complete? Compared to even the big popular DEs like KDE.
Building a DE explicitly as a clone of a specific environment would also do wonders to prevent feature creep and encourage focus on fixing bugs and optimizing instead of adding bells and whistles, which is something modern software across the board could use an Everest-sized helping of.