A better thing to consider is the Proton verified count, which has been rocketing upwards.
(also Microsoft has been heavily embracing Linux and open source in the last decade)
Nowadays, with the Windows team barely able to produce a functional UI, what's happening with the NT kernel? Is it all graybeards back there? When they retire, the stability of Windows is going to be in trouble, and that stability is important for the things that really pull in the money. It'll get real bad, then they'll give up and move to an open source base, just like Edge.
No reason to dump a very good kernel.
Just target Windows, business as usual, and let Valve do the hard work.
But they do test their Windows games on Linux now and fix issues as needed. I read that CDProjekt does that, at least.
Maybe Valve can play the reverse switcheroo out of Microsoft's playbook and, once enough people are on Linux, force the developers' hand by not supporting Proton anymore.
How many game studios were bothering with native Linux clients before Proton became known?
That goes back to address the original question of "But would you want to run these Win32 software on Linux for daily use?"
From a quick glance at the feature lists it looks quite comparable.
More generally though, it's not about one specific type of tool; it's that Windows and Linux have been different ecosystems for decades, and that has encouraged different strengths and weaknesses. Catching up would mean a lot of effort even if you're just aiming for equivalence; the alternative is to use projects like WINE to blur the lines and use the Win32 tool as though the specific platform doesn't matter so much.
It's been a long time since I last used Mp3tag, so I tried the latest Mp3tag in WINE (seems to work nicely) for comparison. I think the basic operations (editing tags) actually do work similarly: in both you select file(s), edit the tag you want to in the GUI and changes get applied to any selected file(s) when you press save.
Renaming files based on tags also works according to that principle in kid3: you select the files you want to rename, use the `Format (arrow pointing from tag fields to filename field)` control to specify what the filename pattern should look like, then use the `Tag 1` or `Tag 2` button to fill the placeholders from (e.g.) the ID3v1/ID3v2 tag, and click save to apply the changes.
In Mp3tag you'd also highlight the files, but unlike other tag editing operations you use the `convert->tag to filename` menu item/button, which pops up a wizard asking for the pattern and confirmation.
I'm guessing coming from Mp3tag you tried to use kid3's `Tools->Apply filename format` option, which I believe ensures the filename doesn't include special characters by doing string replacements (these are configured in the settings under `Files->Filename format`). I was wondering if that was perhaps confusingly named, so I had a look in Mp3tag to see what this functionality was called there, but I couldn't find it. I'm sure it's possible somehow, but it probably involves scripting [1].
I noticed that Mp3tag seems to be able to automatically fetch album art whereas in kid3 you need to get the image yourself. I suspect more advanced functionality (scripting etc) will work differently in the two tools.
[1] https://community.mp3tag.de/t/character-replacement-for-tag-...
Emulation does not mean that the CPU must be interpreted. For example, the DOSEMU emulator for Linux from the early 90s ran DOS programs natively using the 386's virtual 8086 mode, and reimplemented the DOS API. This worked similarly to Microsoft's Virtual DOS Machine on Windows NT. For a more recent example, the ShadPS4 PS4 emulator runs the game code natively on your amd64 CPU and reimplements the PS4 API in the emulator source code for graphics/audio/input/etc calls.
But if you liked that, consider that C# was in many ways a spiritual successor to Delphi, and MS still supports native GUI development with it.
The web was a big step backwards for UI design. It was a 30 year detour whose results still suck compared to pre-web UIs.
Maybe one day something like Lazarus or Avalonia would catch up but today I feel that Electron is best at what it does.
Alternatively, RemObjects makes Elements, also a RAD programming environment in which you can code in Oxygene (their Object Pascal), C#, Swift, Java, Go, or Mercury (VB) and target all platforms: .NET, iOS and macOS, Android, WebAssembly, Java, Linux, Windows.
Wait you can make Android applications with Golang without too much sorcery??
I just wanted to convert some Golang CLI applications to GUIs for Android, and I instead ended up giving up on the project and just started recommending that people use Termux.
Please tell me if there is a simple method for Golang that can "just work" as the Visual Basic-like glue code between CLI and GUI.
Why don't you try it out: https://www.remobjects.com/elements/gold/
One of the key principles of F-Droid is that apps must be reproducible (I think), or at least open source and buildable by the F-Droid servers, but I suppose reproducibility would require having this software, which is paid in this case.
> Java
> Build code for any of the billions of devices, PCs and servers that run JavaSE, JavaEE or the OpenJVM.
> .NET Core
> The cross-platform .NET Core runtime is the future of .NET and will fully replace the current classic .NET 4.x framework when .NET Core 5 ships in late 2020.
It really seems like it was last updated sometime in the last decade. Not sure I want to base a future project on it.
We might take it for granted but React-like declarative top-down component model (as opposed to imperative UI) was a huge step forward. In particular that there's no difference between initial render or a re-render, and that updating state is enough for everything to propagate down. That's why it went beyond web, and why all modern native UI frameworks have a similar model these days.
Personally I much prefer the approach taken by solidjs / svelte.
React’s approach is very inefficient - the entire view tree is rerendered when any change happens. Then they need to diff the new UI state with the old state and do reconciliation. This works well enough for tiny examples, but it’s clunky at scale. And the code to do diffing and reconciliation is insanely complicated. Hello world in react is like 200kb of javascript or something like that. (Smaller gzipped, but the browser still needs to parse it all at startup). And all of that diffing is also pure overhead. It’s simply not needed.
The solidjs / svelte model uses the compiler to figure out how variables changing results in changes to the rendered view tree. Those variables are wrapped up as “observed state”. As a result, you can just update those variables and exactly and only the parts of the UI that need to be changed will be redrawn. No overrendering. No diffing. No virtual Dom and no reconciliation. Hello world in solid or svelte is minuscule - 2kb or something.
Unfortunately, swiftui has copied react. And not the superior approach of newer libraries.
The rust “Leptos” library implements this same fine grained reactivity, but it’s still married to the web. I’m really hoping someone takes the same idea and ports it to desktop / native UI.
That's not true. React only re-renders down from where the update happens. And it skips over stuff that is provably unchanged -- which, fair, involves manual memoization hints. Although with React Compiler it's actually pretty good at automatically adding those so in practice it mostly re-renders along the actually changed path.
>And the code to do diffing and reconciliation is insanely complicated.
It's really not, the "diffing" is relatively simple and is maybe ~2kloc of repetitive functions (one per component kind) in the React source code. Most of complexity of React is elsewhere.
>The solidjs / svelte model uses the compiler to figure out how variables changing results in changes to the rendered view tree.
I actually count those as "React-like" because it's still declarative componentized top-down model unlike say VB6.
React only skips over stuff that's provably unchanged. But in many - most? web apps, it rerenders a lot. Yeah, you can add memoization hints. But how many people actually do that? I've worked on several react projects, and I don't think I've ever seen anyone manually add memoization hints.
To be honest it seems a bit like Electron. People who really know what they're doing can get decent performance. But the average person working with react doesn't understand how react works very well at all. And the average react website ends up feeling slow.
> Most of complexity of React is elsewhere.
Where is the rest of the complexity of react? The uncompressed JS bundle is huge. What does all that code even do?
> I actually count [solidjs / svelte] as "React-like" because it's still declarative componentized top-down model unlike say VB6.
Yeah, in the sense that Solidjs and svelte iterate on react's approach to application development. They're kinda React 2.0. It's fair to say they borrow a lot of ideas from react, and they wouldn't exist without react. But there's also a lot of differences. SolidJS and Svelte implement react's developer ergonomics, while having better performance and a web app download size that is many times smaller. Automatic fine grained reactivity means no virtual dom, no vdom diffing and no manual memoization or anything like that.
They also have a trick that react is missing: your component can just have variables again. SolidJS looks like react, but your component is only executed once per instance on the page. Updates don't throw anything away. As a result, you don't need special react state / hooks / context / redux / whatever. You can mostly just use actual variables. It's lovely. (Though you will need a solidjs store if you want your page to react to variables being updated.)
Even without any hints, it doesn't re-render "the entire view tree" like your parent comment claims, but only stuff below the place that's updated. E.g. if you're updating a text box, only stuff under the component owning that text box's state is considered for reconciliation.
Re: manual memoization hints, I'm not sure what you mean — `useMemo` and `useCallback` are used all over the place in React projects, often unnecessarily. It's definitely something that people do a lot. But also, React Compiler does this automatically, so assuming it gets wider adoption, in the longer run manual hints aren't necessary anyway.
>Where is the rest of the complexity of react?
It's kind of spread around, I wouldn't say it's one specific piece. There's some complexity in hydration (for reviving HTML), declarative loading states (Suspense), interruptible updates (Transitions), error recovery (Error Boundaries), soon animations (View Transitions), and having all these features work with each other cohesively.
I used to work on React, so I'm familiar with what those other libraries do. I understand the things you enjoy about Solid. My bigger point is just that it's still a very different programming model from VB6 and such.
UI libraries aside, I’d really love to see the same reactive programming pattern applied to a compiler. Done well, I’m convinced we should be able to implement sub-millisecond patching of a binary as I change my code.
It's more the other way around, this model started on desktop (eg WPF) and then React popularized it on the web.
In later versions they had animated cursor positions which felt slow, the spellcheck squigglies were lethargic, and the menus convoluted.
That said, I've given up and mostly use Google Docs/Sheets now because of the features and cross platform support.
It runs fine under WINE, and you can install the 3 service packs for it too. As released, when you try to save a .RTF file it actually doesn't. Not that that matters, but it's nice to have all the known bug fixes.
It runs inside the L2 cache on any modern-ish CPU. Even on a Core 2 Duo, it's fast.
It is hilarious and sad to recall that when it came out -- I was working for PC Pro magazine around then -- it was seen as big and bloated and sluggish compared to Office 95. The Mac version, Office 98, was a port of the Windows version, and Mac owners hated it.
Now, it's tiny and sleek.
I might unironically use this. The Windows 2000 era desktop was light and practical.
I wonder how well it performs with modern high-resolution, high-dpi displays.
But you can use group policy etc. freely. I don't know how Win 11 is though
I don't use Windows anymore but iirc the easiest way is to get the E3 or E5 licenses. The volume licensing is "Contact us" pricing
LTSC is also Enterprise, but it's meant for e.g. computers attached to an industrial machine/line that rarely gets updated and such. But it's used by many prosumers as a way to avoid bloat and e.g. keep Win10 for longer
I used to be a pretty happy Windows camper (I even got through Me without much complaint), but I'm so glad I moved to Linux and KDE for my private desktops before 11 hit.
Things started going downhill after that.
Things definitely went up-hill AFTER Windows 2000.
What on earth would cause someone to say Windows 2000 was a good release? It wasn't even a good release when it came out, and it definitely didn't stand the test of time.
An unsubstantiated insistence that a single human being is the bus factor for a thousand-engineer org?
I'm just as confused as you, everyone I've ever met who liked Windows 2000 went on to love XP SP3, usually with the W2K skin on it.
The answer to maintaining a highly functional and stable OS is piles and piles of backwards compatibility misery on the devs.
You want Windows 9? Sorry, some code checks the string for Windows 9 to determine if the OS is Windows 95 or 98.
Competition. In the first half of the 90s Windows faced a lot more of it. Then they didn't, and standards slipped. Why invest in Windows when people will buy it anyway?
Upgrades. In the first half of the 90s Windows was mostly software bought by PC users directly, rather than getting it with the hardware. So, if you could make Windows 95 run in 4mb of RAM rather than 8mb of RAM, you'd make way more sales on release day. As the industry matured, this model disappeared in favor of one where users got the OS with their hardware purchase and rarely bought upgrades, then never bought them, then never even upgraded when offered them for free. This inverted the incentive to optimize because now the customer was the OEMs, not the end user. Not optimizing as aggressively naturally came out of that because the only new sales of Windows would be on new machines with the newest specs, and OEMs wanted MS to give users reasons to buy new hardware anyway.
UI testing. In the 1990s the desktop GUI paradigm was new and Apple's competitive advantage was UI quality, so Microsoft ran lots of usability studies to figure out what worked. It wasn't a cultural problem because most UI was designed by programmers who freely admitted they didn't really know what worked. The reason the start button had "Start" written on it was because of these tests. After Windows 95 the culture of usability studies disappeared, as they might imply that the professional designers didn't know what they were doing, and those designers came to compete on looks. Also it just got a lot harder to change the basic desktop UI designs anyway.
The web. When people mostly wrote Windows apps, investing in Windows itself made sense. Once everyone migrated to web apps it made much less sense. Data is no longer stored in files locally so making Explorer more powerful doesn't help, it makes more sense to simplify it. There's no longer any concept of a Windows app so adding new APIs is low ROI outside of gaming, as the only consumer is the browser. As a consequence all the people with ambition abandoned the Windows team to work on web-related stuff like Azure, where you could have actual impact. The 90s Windows/MacOS teams were full of people thinking big thoughts about how to write better software hence stuff like DCOM, OpenDoc, QuickTime, DirectMusic and so on. The overwhelming preference of developers for making websites regardless of the preferences of the users meant developing new OS ideas was a waste of time; browsers would not expose these features, so devs wouldn't use them, so apps wouldn't require them, so users would buy new computers to get access to them.
And that's why MS threw Windows away. It simply isn't a valuable asset anymore.
This is largely true in North America, UK and AUS/NZ, less true in Europe, a mixed bag in the Middle East and mostly untrue everywhere else.
And failing everything else, Microsoft is in a position to put WSL front and center, and yet again, those are the laptops that normies will buy.
It's not a moving target. Proton and Wine have shown it can be achieved with greater compatibility than even what Microsoft offers.
It is a moving target; Proton is mostly stuck in the Windows XP world, before most new APIs started being a mix of COM and WinRT.
Even if that isn't the case, almost no company would bother with GNU/Linux to develop with Win32, instead of Windows, Visual Studio, business as usual.
It's a start.
(That and Linux doesn't implement win32 and wine doesn't exclusively run on Linux.)
If it's a choice between downloading a binary that depends on a stable ABI and compiling the source: the way most Linux software gets installed is downloading a binary that has been compiled for your OS version (from repos), and the next most common way of installing is compiling source through a system that figures out the dependencies for you (source-based distros and repos).
If you make a piece of software today and want to package it for Linux, it's an absolute mess. I mean, look at Flatpak or Docker: a common solution for this is to ship your own userspace, and that's just insane.
It's much more bloated than it should be, but it's the best way to reliably run old/new software on any given Linux.
What are some examples?
One of the more popular examples is Grid 2; another is Morrowind. Both crash on launch unless you tweak a lot of things, and even then it won't always succeed.
Need for Speed II: SE is "platinum" on Wine, and pretty much unable to be run at all on Windows 11.
[0] https://learn.microsoft.com/en-us/windows/win32/direct3darti...
Whilst Morrowind is fine under Wine, there's a fantastic engine rebuild called OpenMW [0]. It runs on both Windows and Linux natively.
Some of the memory limits are also lifted, which meant I have a few mods that weren't possible without hacks in the past [1].
I see there are guides on Steam forums on how to get it to run under Windows 11 [0], and they are quite involved for someone not overly familiar with computers outside of gaming.
0: https://steamcommunity.com/sharedfiles/filedetails/?id=29344...
A recent example is that in San Andreas, the seaplane never spawns if you're running Windows 11 24H2 or newer. All of it due to a bug that's always been in the game, but only the recent changes in Windows caused it to show up. If anybody's interested, you can read the investigation on it here: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...
It's a great game, unfortunately right now I am not able to play it anymore :( even though I have the original CD.
Unfortunately, Wine is of no help here :(
Also original Commandos games.
I got this working on my Linux install (on RISC-V, no less!). With a compile and some small tweaks it works well (getting 100+ fps with filtering and SSAO etc.)
Whether that was a Windows compatibility issue or potentially some display driver thing, I'm not sure. (90's Windows games may have used some DirectDraw features that just don't get that much attention nowadays, which I think may have been the issue, but my memory's a bit spotty.)
Windows kept bogging down the system trying to download a dozen different language versions of Word (for which I didn't have a licence and didn't want regardless). Steam kept going into a crash-restart cycle. The virus scanner was ... being difficult.
Everything just works on Linux except some games on proton have some sound issues that I still need to work out.
Is this 1998? Linux is forever having sound issues. Why is sound so hard?
The fix is:
mkdir -p ~/.config/pipewire/pipewire.conf.d && echo "context.properties = {default.clock.min-quantum = 1024}" | tee ~/.config/pipewire/pipewire.conf.d/pipewire.conf
Basically, just force the quantum to be higher. Often it defaults to 64, which is around 1 ms. As always, it is Not Linux's Fault, but it is a Linux Problem.
It's one of the reasons why I moved to OSX + Linux virtual machine. I get the best of both worlds. Plus, the hardware quality of a 128GB unified RAM MacBookPro M4 Max is way beyond anything else in the market.
It doesn’t help that they only officially support rocky Linux. I use mint. I assume there’s some magic pipewire / alsa / pulseaudio commands I can run that would glue everything together properly. But I can’t figure it out. It just seems so complicated.
Similarly, Bluetooth on my Thinkpad T14 is slightly wonky, and it sometimes fails to register a Bluetooth mouse on wake-up (I have to switch the mouse off and back on). This mouse registers fine on my other Linux machines. The logs show a report from a kernel driver saying that the BT chip behaved weirdly.
Binary-blob firmware, and physical hardware, do have bugs, and there's little an OS can do about that, Linux or otherwise. Macs have less hardware variety and higher prices, which makes their hardware errata lists shorter, but not empty.
I think it’s a software issue in how resolve uses the Linux audio stack. But I have no idea how to get started debugging it. I’ve never had any problems with the same hardware in windows, or the same software (resolve) on macOS.
FWIW I lost sound completely 3 times in the last 2 months on my work Windows laptop, and it would only come back after a reboot. I assumed it was a driver crash.
It depends on having a properly good implementation, which will come eventually for most apps.
Because they keep "updating" it every couple of years. Though "updating", in recent years, has meant just adding additional layers on top of ALSA. SW design and engineering is hard.
systemd/Linux maybe? Lots of things are more significant than GNU, either way.
https://distrowatch.com/table.php?distribution=gnomeos
https://distrowatch.com/table.php?distribution=kdelinux
The question is if either will catch any interest and if so, what will happen to regular distributions.
No, the academic literature makes the difference between the kernel and the OS as a whole. The OS is meant to provide hardware abstractions to both developers and the user. The Linux world shrugged and said 'okay, this is just the kernel for us, everyone else be damned'. In this view Linux is the complete outlier, because every other commercial OS comes with a full suite of user-mode libraries and applications.
glibc-based toolchains are ultimately missing a GLIBC_MIN_DEPLOYMENT_TARGET definition that gets passed to the linker so it knows which minimum version of glibc your software supports, similar to how Apple's toolchain lets you target older MacOS from a newer toolchain.
patchelf --set-interpreter /lib/ld-linux-x86-64.so.2 "$APP"
patchelf --set-rpath /lib "$APP"

1. Delete the shared symbol versioning as per https://stackoverflow.com/a/73388939 (patchelf --clear-symbol-version exp mybinary)
2. Replace libc.so with a fake library that has the right version symbol, using a version script, e.g. version.map containing `GLIBC_2.29 { global: *; };`
With an empty fake_libc.c `gcc -shared -fPIC -Wl,--version-script=version.map,-soname,libc.so.6 -o libc.so.6 fake_libc.c`
3. Hope that you can still point the symbols back to the real libc (either by writing a giant pile of dlsym C code, or some other way, I'm unclear on this part)
Ideally glibc would stop checking the version if it's not actually marked as needed by any symbol, not sure why it doesn't (technically it's the same thing normally, so performance?).
So you can do e.g. `patchelf --remove-needed-version libm.so.6 GLIBC_2.29 ./mybinary` instead of replacing glibc wholesale (step 2 and 3) and assuming all of used glibc by the executable is ABI compatible this will just work (it's worked for a small binary for me, YMMV).
Heed the above warning as down this rpath madness surely lies!
Exhibit A: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...
Exhibit B: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...
Exhibit C: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...
Oh, sure, rpath/runpath shenanigans will work in some situations but then you'll be tempted to make such shenanigans work in all situations and then the madness will get you...
To save everyone a click here are the first two bullet points from Exhibit A:
* If an executable has `RPATH` (a.k.a. `DT_RPATH`) set but a shared library that is a (direct or indirect(?)) dependency of that executable has `RUNPATH` (a.k.a. `DT_RUNPATH`) set then the executable's `RPATH` is ignored!
* This means a shared library dependency can "force" loading of an incompatible [(for the executable)] dependency version in certain situations. [...]
Further nuances regarding LD_LIBRARY_PATH can be found in Exhibit B but I can feel the madness clawing at me again so will stop here. :)
The actual practical problem is not glibc but the constant GUI / desktop API changes.
Both are way more annoying than anything the platforms without symbol versioning suffer from because of its lack. I’ve never encountered anyone who has packaged binaries for both Linux and Windows (or macOS, or the BSDs) that missed anything about Linux userspace ABIs when working with another platform.
You generally have difficulty actually running contemporary build tools on such a thing, so the workaround is to use --sysroot against what is basically a chroot of the old distro, as if cross-compiling. But there are still workarounds needed if the version is old enough. Chrome has a shorter support window than some Linux binaries, but you can see the gymnastics they do to create their sysroot in some Python scripts in the chromium repo.
On Windows, you install the latest SDK and pass a target version flag when setting up the compiler environment. That’s it. macOS is similar.
You can see what the best-in-class hoop jumping looks like in a bunch of open source projects that do binary releases — it’s nontrivial. Or you can see all the machinations that Flatpak goes through to get userspace Mesa drivers etc. working on a different glibc version than the base system. On every other major OS, including other free software ones, this isn’t a problem. Like at all. Windows’ infamous MSVC versioning is even mostly a non-issue at this point, and all you had to do before was bundle the right version in your installer. I’ll take a single compiler flag over the Linux mess every day of the week.
Do you distribute a commercial product to a large Linux userbase, without refusing to support anything that isn’t Ubuntu LTS? I’m kind of doubting that, because me and everyone I know who didn’t go with a pure Electron app (which mostly solves this for you with their own build process complexity) has wasted a bunch of time on this issue. Even statically linking with musl has its futziness, and that’s literally impossible for many apps (e.g. anything that touches a GPU). The Linux ecosystem could make a few pretty minor attitude adjustments and improve things with almost no downside, but it won’t. So the year of the Linux desktop remains elusive.
Again this same old FUD.
The situation would be no different if there was only a single distro - you would still need to build against the oldest version of glibc (and other dependencies) you want to support.
But compiling in a container is easier and also solves other problems.
The solution is simply to build against the oldest glibc version you want to support - we should focus on making that simpler, ideally just a compiler flag.
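For what it's worth, zig cc gets close to that already: if I remember right, you can append a glibc version to the target triple (something like `zig cc -target x86_64-linux-gnu.2.17 main.c`) and it links against stubs for that glibc regardless of what your build machine has installed.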
Breaking between major versions is annoying (2 to 3, 3 to 4), but for the most part it's renaming work and some slight API modifications, reminiscent of the Python 2 to 3 switch, and it only happened twice since 2000.
Who needs ABI compatibility when your software is OSS? You only need API compatibility at that point.
Because almost certainly someone out there will want to use it. And they should be able to, because that is the entire point of free software: user freedom.
Even if we ship as source, even if the user has the skills to build it, even if the makefile supports every version of the kernel, plus all other material variety, plus who knows how many dependencies, what exactly am I supposed to do when a user reports:
"I followed your instructions and it doesn't run".
Linux Desktop fails because it's not 1 thing, it's 100 things. And to get anything to run reliably on 95 of them you need to be extremely competent.
Distribution as source fails because there are too many unknown, and dependent parts.
Distribution as binary containers (Docker et al) are popular because it gives the app a fighting chance. While at the same time being a really ugly hack.
I think Rob pike has the right idea with go just statically link everything wherever possible. These days I try to do the same, because so much less can go wrong for users.
People don’t seem to mind downloading a 30mb executable, so long as it actually works.
If you don’t want to configure this manually, use distrobox, which is a nice shell script wrapper that helps you set things up so graphical desktop apps just work.
Opinions could differ but personally I think a stable ABI wastes more time and energy than an unstable ABI because it forces code to be inefficient. Code is run more often than it is compiled. It’s better to allow code to run faster than to avoid extra compilations.
I've had so much trouble with package managers that I'm not even sure they are a good idea to begin with.
That was what most well packaged proprietary software used to do when installing into /opt.
People who are complaining would prefer a world of isolated apps downloaded from signed stores, but Linux was born at an optimistic time when the goal was software that cooperate and form a system, and which distribution does not depend on a central trusted platform.
I do not believe that there is any real technical issue discussed here, just drastically different goals.
Stable ABIs for certain critical pieces of independently-updatable software (libc, OpenSSL, etc.) are not even that big of a lift or a hard tradeoff. I’ve never run into any issues with macOS’s libc because it doesn’t version the symbol for fopen like glibc does. It just requires commitment and forethought.
But you're not entirely wrong -- as long as you have API compatibility then it's just a rebuild, right? Well, no, because something always breaks and requires attention. The fact is that in the world of open source the devs/maintainers can't be as disciplined about API compat as you want them to be, and sometimes they have to break backwards compatibility for reasons (security, or just too much tech debt and maint load for obsolete APIs). Because every upstream evolves at a different rate, keeping a distro updated is just hard.
I'm not saying that statically linking things and continuing to run the binaries for decades is a good answer though. I'm merely explaining why I think your comment got downvoted.
I want to go to the alternate timeline where they just stuck with a set of technologies... ideally KDE... and just matured them up until they were the idealized version of their original plan instead of always throwing things away to rewrite them for ideological or technical purity of design.
We already have an entire platform like that (steam deck), and it's the best linux development experience around in my opinion.
You can still get firefox as a .deb though.
https://www.theregister.com/2023/11/01/official_mozilla_debi...
No need for any Canonical packages at all -- works fine on Debian; I use them myself.
It makes sense. Every distribution wants to be in charge of what set of libraries are available on their platform. And they all have their own way to manage software. Developing applications on Linux that can be widely used across distributions is way more complex than it needs to be. I can just ship a binary for windows and macOS. For Linux, you need an rpm and a dpkg and so on.
I use davinci resolve on Linux. The resolve developers only officially support Rocky Linux because anything else is too hard. I use it in Linux mint anyway. The application has no title bar and recording audio doesn’t work properly. Bleh.
* Except for non-glibc distributions of course.
Why doesn't the glibc use the version tag to do the appropriate mapping?
In other words, the Linux desktop as a whole is a Bazaar, not Cathedral.
This was true in the 90s, not the 2020s.
There are enough moneyed interests that control the entirety of Linux now. If someone at Canonical or Red Hat thought a glibc version translation layer (think WINE, but for running software targeted for Linux systems made more than the last breaking glibc version) was a good enough idea, it could get implemented pretty rapidly. Instead of win32+wine being the only stable abi on Linux, Linux could have the most stable abi on Linux.
Of course, you both leave out that you could do it “on real hardware”.
But none of this matters. The real point is that you have to compile on an old distro. If he left out “in a VM”, you would have had nothing to correct.
But it's like complaining that you might need a VM or container to compile your software for Win16 or Win32s. Nobody is using those anymore. Nor really old Linux distributions. And if they do, they're not really going to complain about having to use a VM or container.
As C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.
But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task. Go figure.
But does the lack of a stable ABI have any (negative) effect on the reliability of the platform?
Like many others, I have Linux servers running over 2000-3000 days uptime. So I'm going to say no, it doesn't, not really.
You must really be behind the times. Arch and Gentoo users wouldn't complain because an old game doesn't run. In fact the exact opposite would happen. It's not implausible for an Arch or Gentoo user to end up compiling their code on a five hour old release of glibc and thereby maximize glibc incompatibility with every other distribution.
That never took off though; containers are easier. With distrobox and other tools this is quite easy, too.
It's definitely not easy, but it's possible: using the `.symver` assembly (pseudo-)directive you can specify the version of the symbol you want to link against.
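For anyone curious, a minimal sketch of what that looks like from the consuming side (the symbol/version pair here is just the classic x86-64 glibc example; adjust for your target):

    /* Ask the linker to bind against the old versioned symbol instead of the
       newest default, so the binary also loads on older glibc installs. */
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[16];
        memcpy(dst, "hello", 6);  /* resolves to memcpy@GLIBC_2.2.5, not a newer version */
        return 0;
    }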
https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
Wrong.
> on a VM.
Wrong.
Cross-compiling is a thing.
Because Linux userland is an unmitigated clusterfuck of bad design that makes this really really really hard.
GCC/Clang and glibc make it almost impossible to do this on their own. The only ways you can actually do this are:

1. create a userland container from the past
2. use Zig, which moved oceans and mountains to make it somewhat tractable
It's awful.
But I agree that this should just be a simple target SDK flag.
I think the issue is that the Linux community is generally hostile towards proprietary software and it’s less of an issue for FLOSS because they can always be compiled against the latest.
Or at least the oldest one made before glibc's latest backwards incompatible ABI break.
It allows (among other things) the glibc developers to change struct layouts while remaining backwards compatible. E.g. if function f1 takes a struct as argument, and its layout changes between v2 and v3, then glibc_v2_f1 and glibc_v3_f1 have different ABIs.
Other examples are keeping bug compatibility, when standards are revised to require incompatible behavior, or to introduce additional safety features that require additional (hidden) arguments automatically passed to functions which changes their ABI.
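To make that concrete, a rough sketch of the library-author side of the mechanism (the function and version-node names are made up for illustration; you'd also pass a version script defining LIB_2 and LIB_3 to the linker via --version-script):

    #include <stdio.h>

    /* Old implementation, kept so binaries linked against the v2 ABI keep working. */
    void f1_v2(int x)  { printf("old f1: %d\n", x); }

    /* New implementation with a changed ABI (different argument type). */
    void f1_v3(long x) { printf("new f1: %ld\n", x); }

    /* Bind the old body to version node LIB_2, and make the new one the
       default (@@) that newly linked programs pick up. */
    __asm__(".symver f1_v2, f1@LIB_2");
    __asm__(".symver f1_v3, f1@@LIB_3");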
The only thing that is lacking is an SDK to easily target older glibc versions compared to the one you are using on your build server - but that's something that you can build yourself with some effort.
Correct me if I'm wrong but I don't think versioned symbols are a thing on Windows (i.e. they are non-portable). This is not a problem for glibc but it is very much a problem for a lot of open source libraries (which instead tend to just provide a stable C ABI if they care).
There’re quite a few mechanics they use for that. The oldest one, call a special API function on startup like InitCommonControlsEx, and another API functions will DLL resolve differently or behave differently. A similar tactic, require an SDK defined magic number as a parameter to some initialization functions, different magic numbers switching symbols from the same library; examples are WSAStartup and MFStartup.
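A tiny sketch of that "magic number at startup" pattern with Winsock (the shape is the same for the others):

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    int main(void) {
        WSADATA data;
        /* The app declares which Winsock version it was written against;
           an old binary passing MAKEWORD(1, 1) keeps getting 1.1 behaviour. */
        if (WSAStartup(MAKEWORD(2, 2), &data) != 0)
            return 1;
        WSACleanup();
        return 0;
    }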
Around Win2k they did side by side assemblies or WinSxS. Include a special XML manifest into embedded resource of your EXE, and you can request specific version of a dependent API DLL. The OS now keeps multiple versions internally.
Then there’re compatibility mechanics, both OS builtin and user controllable (right click on EXE or LNK, compatibility tab). The compatibility mode is yet another way to control versions of DLLs used by the application.
Pretty sure there’s more and I forgot something.
Isn't the oldest one... to have the API/ABI version in the name of your DLL? Unlike on Linux which by default uses a flat namespace, on the Windows land imports are nearly always identified by a pair of the DLL name and the symbol name (or ordinal). You can even have multiple C runtimes (MSVCR71.DLL, MSVCR80.DLL, etc) linked together but working independently in the same executable.
IUnknown-based ABIs exposing methods of objects without any symbols exported from DLLs. Virtual method tables are internal implementation details, not public symbols. By testing SDK-defined magic numbers like SDKVersion argument of D3D11CreateDevice factory function, the DLL implementing the factory function may create very different objects for programs built against different versions of Windows SDK.
Iirc this is both for versioning, but also so some software can target windows and Xbox OS’s whilst “importing” the same api-set DLL? Caused me a lot of grief writing a PE dynamic linker once.
Any half-decent SDK should allow you to trivially target an older platform version, but apparently doing trivial-seeming things without suffering is not The Linux Way™.
Linux with glibc is the complete opposite; there really does exist old Linux software that static-links in everything down to libc, just interacting with the kernel through syscalls—and it does (almost always) still work to run such software on a modern Linux, even when the software is 10-20 years old.
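Something like this is all such a binary really needs from the OS (an x86-64 sketch, built with `gcc -nostdlib -static`; the syscall numbers and calling convention are part of the kernel's stable ABI):

    /* write(2) and exit(2) with no libc involved at all. */
    static long sys_write(int fd, const void *buf, unsigned long len) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(1), "D"(fd), "S"(buf), "d"(len)   /* 1 = __NR_write */
                          : "rcx", "r11", "memory");
        return ret;
    }

    void _start(void) {
        sys_write(1, "hello\n", 6);
        __asm__ volatile ("syscall" : : "a"(60), "D"(0));          /* 60 = __NR_exit */
    }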
I guess this is why Linux containers are such a thing: you’re taking a dynamically-linked Linux binary and pinning it to a particular entire userland, such that when you run the old software, it calls into the old glibc. Containers work, because they ultimately ground out in the same set of stable kernel ABI calls.
(Which, now that I think of it, makes me wonder how exactly Windows containers work. I’m guessing each one brings its own NTOSKRNL, that gets spun up under HyperV if the host kernel ABI doesn’t match the guest?)
This is not a big problem if it's hard/unlikely enough to write code that accidentally relies on raw syscalls. At least MS's dev tooling doesn't provide an easy way to bypass the standard DLLs.
> makes me wonder how exactly Windows containers work
I guess containers do the syscalls through the standard Windows DLLs like any regular userspace application. If it's a Linux container on Windows, probably the WSL syscalls, which I guess, are stable.
https://thomasvanlaere.com/posts/2021/06/exploring-windows-c...
…actually, looks like it’s a bit looser these days. Version matrix incoming: https://learn.microsoft.com/en-us/virtualization/windowscont...
Can you give an example where a breaking change was introduced in NT kernel ABI?
(One example: hit "Show" on the table header for Win11, then use the form at the top of the page to highlight syscall 8c)
I argue that NT doesn't break its kernel ABI.
[0] as long as you don't use APIs they decided to add and remove in a very short period (longer read: https://virtuallyfun.com/2009/09/28/microsoft-fortran-powers...)
...and you go on to not disagree with me at all? Why comment then?
NTDLL isn’t some higher level library. It’s just a series of entry points into NT kernel.
Honestly I might buy a T-shirt with such a quote.
I think glibc is such a pain that it's the reason we have so many vastly different package managers, and I feel like non-glibc options would really simplify the package management approach on Linux. Although it feels solved, there are definitely still issues with the approach, and I think we should all still look for ways to solve the problem.
Turns out that Nix is built against a different version of glibc than SteamOS, and for some reason, that matters. You have to make sure none of Steam's libraries are on the path before the Nix code will run. It seems impractical to expect every piece of software on your computer to be built against a specific version of a specific library, but I guess that's Linux for you.
NixOS actually is a bit better in this respect since most things are statically linked. The only exception is glibc, which specifically requires being dynamically linked.
This issue also applies to macOS with their Dylibs and also Windows with their DLLs. So saying that this is an issue with Linux is a bit disingenuous.
Until everybody standardizes on one singular executable format that doesn't ever change, this will forever be an issue.
The flatpak ecosystem is problematic in that most packages are granted too many rights by default.
I wasn't directly involved, but the company I worked for has created its own set of runtimes too and I haven't heard any excessive complaints on internal chats, so I don't think it's as arcane as you make it sound either.
AppImage has some issues/restrictions, like an AppImage can't run on an older Linux than the one it was compiled on, so people compile on the oldest PCs they can find, plus a few more quirks.
AppImages are really good, but zapps are good too. I once tried to do something on top of zapp, but it's a shame that zapp went down the route of crypto/IPFS or something, and I don't really see any development on it now. It would be interesting if someone could add the features of zapp into AppImage, or pick up the project and build something similar.
At some point I've got to try this. I think it would be nice to have some tools to turn an existing program into a zapp (there are many such tools for making AppImages today).
Looks like you met the right guy because I have built this tool :)
Allow me to show my project, Appseed (https://nanotimestamps.org/appseed): it's a simple fish script which I prototyped (with Claude) some 8-10 months ago to solve exactly this.
I have a YouTube video on the website and the repository is open source on GitHub too.
This actually worked fantastically for a lot of different binaries that I tested it on. I had uploaded it to Hacker News as well but nobody really responded; perhaps this might change that :p
What Appseed does: you can think of it as taking a binary and converting it into two folders, one being the dynamic library part and the other the binary itself.
So you can then use something like tar to package it up and run it anywhere. I could of course turn it into a single ELF64 as well, but I wanted to make it more flexible so that we can have more dynamic-library-like behaviour, or perhaps caching, or just some other ideas, and this made things simpler for me too.
Ldshim sounds like a really good idea too, although I don't think I understand it yet; I'll try to. I would really appreciate it if you could tell me more about Ldshim! Perhaps take a look at Appseed too; I think there might be some similarities, except I just tried to create a fish script which can convert pretty much any dynamic binary into a static one of sorts.
I just want more people to take ideas like Appseed or zapps and run with them to make the Linux ecosystem better, man. I only prototyped it with LLMs to see if it was possible, since I don't have much expertise in the area, so I can only imagine what would be possible if people who do have expertise took it up, and that's why I created and shared it originally.
Let me know if you're interested in discussing anything about Appseed. My memory's a little rusty about how it worked, but I would love to talk about it if I can be of any help :p
Have a nice new year man! :p
I suspect that with a combination of Detour & Zapps it could be possible.
To be honest, I think OSes are boring, and should have been that way since maybe 1995. The basic notions (multi-processing, context switching, tree-like file systems, multiple users, access privileges) haven't changed since 1970, and the more modern GUI stuff hasn't changed since at least the early '90s. Some design elements, like tree-like file systems, WIMP GUIs, per-user privileges, and the fuzziness of what an "operating system" even is and its role, are perhaps even arbitrary, but can serve as a mature foundation for better-conceived ideas; for example, ZFS (which implements in a very well-engineered manner a tree-like data storage that's been standard since the '60s) can serve as a foundation for Postgres (which implements a better-conceived relational design).
I'm wondering why OSS - which according to one of its acolytes, makes all bugs shallow - couldn't make its flagship OS more stable and boring. It's produced an anarchy of packaging systems, breaking upgrades and updates, unstable glibc, desktop environments that are different and changing seemingly for the sake of it, sound that's kept breaking, power management iffiness, etc.

I wish either of those systems had the same hardware & software support. I’d swap my desktop over in a heartbeat if I could.
Why should everything pretend to be a 1970s minicomputer shared by multiple users connected via teletypes?
If there's one good idea in Unix-like systems that should be preserved, IMHO it's independent processes, possibly written in different languages, communicating with each other through file handles. These processes should be isolated from each other, and from access to arbitrary files and devices. But there should be a single privileged process, the "shell" (whether command line, TUI, or GUI), that is responsible for coordinating it all, by launching and passing handles to files/pipes to any other process, under control of the user.
Could be done by typing file names, or selecting from a drop-down list, or by drag-and-drop. Other program arguments should be defined in some standard format so that e.g. a text based shell could auto-complete them like in VMS, and a graphical one could build a dialog box from the definition.
I don't want to fiddle with permissions or user accounts, ever. It's my computer, and it should do what I tell it to, whether that's opening a text document in my home directory, or writing a disk image to the USB stick I just plugged in. Or even passing full control of some device to a VM running another operating system that has the appropriate drivers installed.
But it should all be controlled by the user. Normal programs of course shouldn't be able to open "/dev/sdb", but neither should they be able to open "/home/foo/bar.txt". Outside of the program's own private directory, the only way to access anything should be via handles passed from the launching process, or some other standard protocol.
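A crude sketch of that idea with today's primitives (the path and the `wc` program are only placeholders): the trusted launcher opens whatever the user picked and hands the child nothing but the resulting handle.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/home/foo/bar.txt", O_RDONLY);   /* done by the "shell" on the user's behalf */
        if (fd < 0) { perror("open"); return 1; }

        if (fork() == 0) {
            dup2(fd, STDIN_FILENO);                      /* the child receives only this handle... */
            execlp("wc", "wc", "-l", (char *)NULL);      /* ...never a path or wider filesystem access */
            _exit(127);
        }
        close(fd);
        wait(NULL);
        return 0;
    }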
And get rid of "everything is text". For a computer, parsing text is like for a human to read a book over the phone, with an illiterate person on the other end who can only describe the shape of each letter one by one. Every system-level language should support structs, and those are like telepathy in comparison. But no, that's scaaaary, hackers will overflow your buffers to turn your computer into a bomb and blow you to kingdom come! Yeah, not like there's ever been any vulnerability in text parsers, right? Making sure every special shell character is properly escaped is so easy! Sed and awk are the ideal way to manipulate structured data!
AmigaOS was the pinnacle of personal computing OS design. Everything since has been a regression. Fite me.
It would not solve the ABI problem, but it would give at least an opinionated end to end API that was at some point the official API of an OS. It has some praise on its design too.
I regularly install HaikuOS in a VM to test it and I think I could probably use it as a daily driver, but ported software often does not feel completely right.
There's a lot to like about BSD, and many reasons to prefer OpenBSD to Linux, but ABI backward-compatibility is not one of them!
One of Linux's main problems is that it's difficult to supply and link versions of library dependencies local to a program. Janky workarounds such as containerization, AppImage, etc. have been developed to combat this. But in the Windows world, applications literally ship, and link against, the libc they were built with (msvcrt, now ucrt I guess).
(OK, I have some experience with vendors where their latest month-old release has a distro support list where the most up-to-date option is still 6 months past EOL, and I have managed to hack something together which will get them to work on the newer release, but it's extremely painful and very much not what either the distros or the software vendors want to support)
surely forced versioning of GLIBC didn't help.
"This program requires GLIBC_2.33"
'unfortunate rough edges that people only tolerate because they use WINE as a last resort'
Whether those rough edges will ever be ironed out is a matter I'll leave to other people. But I love that someone is attempting this just because of the tenacity it shows. This reminds me of projects like asahi and cosmopolitan c.
Now if we're to do something to actually solve for Gnu/Linux Desktops not having a stable ABI I think one solution would be to make a compatibility layer like Wine's but using Ubuntu's ABIs. Then as long as the app runs on supported Ubuntu releases it will run on a system with this layer. I just hope it wouldn't be a buggy mess like flatpak is.
It's my strong opinion that Windows 2000 Server, SP4 was the best desktop OS ever.
Source: I reviewed Cutler's lock-free data structure changes in Vista/Longhorn to find bugs in them, failed to find any.
Meanwhile, in 2025, with 64GB RAM and solid state drives, we hear, "Windows 11 Task Manager really, really shouldn't be eating up 15% of my CPU and take multiple seconds to fire up."
I meant to agree entirely with the parent comment by showing one specific way in which Win2K SP4 is far superior to Windows 11.
In Win2K, Task Manager takes less than a second to start on a 200 MHz, single core Pentium II with 64MB of RAM and a 5400 RPM IDE HDD.
I haven't gone more than a decade into the past yet, so I can't promise forever, and GPU-accelerated things probably still break, but X11 is very backwards compatible.
I wanted to be nice and entered a genuine Windows key still in my laptop's firmware somewhere.
As a thank you Microsoft pulled dozens of the features out of my OS, including remote desktop.
As soon as these latest FSR drivers are ported over I will swap to Linux. What a racket, lol.
1. The exact problem with the Linux ABI
2. What causes it (the issues that makes it such a challenge)
3. How it changed over the years, and its current state
4. Any serious attempts to resolve it
I've been on Linux for maybe 2 decades at this point. I haven't noticed any issues with ABI so far, perhaps because I use everything from the distro repo or build and install them using the package manager. If I don't understand it, there are surely others who want to know it too. (Not trying to brag here. I'm referring to the time I've spent on it.)
I know that this is a big ask. The best course for me is of course to research it myself. But those who know the whole history tend to have a well organized perspective of it, as well as some invaluable insights that are not recorded anywhere else. So if this describes you, please consider writing it down for others. Blog is probably the best format for this.
My understanding is that very old statically linked Linux images still run today because paraphrasing Linus: "we don't break user space".
Also, if you happened to have linked that image to a.out it wouldn't work if you're using a kernel from this year, but that's probably not the case ;)
But is there any fundamental reason why not?
> Also, if you happened to have linked that image to a.out it wouldn't work if you're using a kernel from this year, but that's probably not the case ;)
I assume you refer to the retirement of a.out support (in favor of ELF). I would argue that how long this obsolete format was supported was actually quite impressive.
The kernel doesn't break user space. User space breaks on its own.
Good operating systems should:
1. Allow users to obtain software from anywhere.
2. Execute all programs that were written for previous versions reliably.
3. Not insert themselves as middlemen into user/developer transactions.
Judged from this perspective, Windows is a good OS. It doesn't nail all three all the time, but it gets the closest. Linux is a bad OS.
The answers to your questions are:
(1) It isn't backwards compatible for sophisticated GUI apps. Core APIs like the widget toolkits change their API all the time (GTK 1->2->3->4, Qt also does this). It's also not forwards compatible. Compiling the same program on a new release may yield binaries that don't run on an old release. Linux library authors don't consider this a problem, Microsoft/Apple/everyone else does. This is the origin of the glibc symbol versioning errors everyone experiences sometimes.
(2) Maintaining a stable API/ABI is not fun and requires a capitalist who says "keep app X working or else I'll fire you". The capitalist Fights For The User. Linux is a socialist/collectivist project with nobody playing this role. Distros like Red Hat clone the software ecosystem into a private space that's semi-capitalist again, and do offer stable ABIs, but their releases are just ecosystem forks and the wider issue remains.
(3) It hasn't changed and it's still bad.
(4) Docker: "solves" the problem on servers by shipping the entire userspace with every app, and being itself developed by a for-profit company. Only works because servers don't need any shared services from the computer beyond opening sockets and reading/writing files, so the kernel is good enough and the kernel does maintain a stable ABI. Docker obviously doesn't help the moment you move outside the server space and coordination requirements are larger.
Never happens for me on Arch, which I've run as my primary desktop for 15 years.
And Arch itself also needs manual interventions on package updates every so often, just a few weeks ago there was a major change to the NVidia driver packaging.
> And Arch itself also needs manual interventions on package updates every so often, just a few weeks ago there was a major change to the NVidia driver packaging.
If you're running a proprietary driver on a 12 year old GPU architecture incapable of modern games or AI, yeah... so I actually haven't needed to care about many of these. Maybe 2 or 3 ever...
Together this means that basically nobody implements applications anymore. For commercial applications that market is too fragmented and it is too much effort. Open-source applications need time to grow and if all the underpinnings get changed all the time, this is too frustrating. Only a few projects survive this, and even those struggle. For example GIMP took a decade to be ported from GTK 2 to 3.
I wish websites weren't allowed to know what site a user is coming from.
Perhaps that could be mitigated if someone could come up with an awesome OSS machine code translation layer like Apple's Rosetta.
This will never work, because it isn't a radical enough departure from Linux.
Linux occupies the bottom of a well in the cartesian space. Any deviation is an uphill battle. You'll die trying to reach escape velocity.
The forcing factors that pull you back down:
1. Battle-testedness. The mainstream Linux distros just have more eyeballs on them. That means your WINE-first distro (which I'll call "Lindows" in honor of the dead OS from 2003) will have bugs that make people consider abandoning the dream and going back to GNOME on Fedora.
2. Cool factor. Nobody wants to open up their riced-out Linux laptop in class and have their classmate look over and go "yo this n** running windows 85!" (So, you're going to have to port XMonad to WINE. I don't make the rules!)
3. Kernel churn. People will want to run this thing on their brand-new gaming laptop. That likely means they'll need a recent kernel. And while they "never break userspace" in theory, in practice you'll need a new set of drivers and Mesa and other add-ons that WILL break things. Especially things like 3D apps running through WINE (not to mention audio). Google can throw engineers at the problem of keeping Chromium working across graphics stacks. But can you?
If you could plant your flag in the dirt and say "we fork here" and make a radical left turn from mainline Linux, and get a cohort of kernel devs and app developers to follow you, you'd have a chance.
This is something that is very much needed to make Linux much more user friendly for new users.
I might seriously recommend it to newbies. There's just this love I have for Windows 7; even though I didn't really use it for much, it's so much more elegant in its own way than Windows 10.
It could be a really fun experiment, and I would be interested to see how it pans out.
Pro tip: if someone wants to create their own ISO, they can just customize things imperatively in MX Linux, even by booting it into RAM, and then use its magnificent option of snapshotting the running system and converting that into an ISO. So it's definitely possible to create an ISO tweaked to your configuration without much hassle (trust me, it's the best way to create ISOs without too much hassle; if you want hassle, Nix or bootc seems to be the way to go).
Regarding why it wouldn't catch on: I don't know. I already build some of my own ISOs, and I could build a Windows-styled one (on the MX Linux principle) and upload it for free on Hugging Face perhaps, but the idea is about mass appeal.
Yes, I can do that, but I would prefer an ISO that just did it, which I could share with someone new to Linux. And yes, I could have the new person make the changes themselves, but why? There really is no reason to, imo; this feels like low-hanging fruit that nobody has touched, which is why I was curious.
But as the other comment pointed out, sure, we can build this thing, yet there are probably genuine reasons it hasn't been built; they give some good reasons and I agree with them overall.
If you ask me, it would be fun to have more options, especially considering this is Linux, where freedom is celebrated :p
Rough approximations have been possible since the early 2000s, but they’re exactly that: rough approximations. Details matter, and when I boot up an old XP/7 box there are aspects in which they feel more polished and… I don’t know, finished? Complete? Compared to even the big popular DEs like KDE.
Building a DE explicitly as a clone of a specific fixed environment would also do wonders to prevent feature creep and encourage focus on fixing bugs and optimization instead of bells and whistles, which is something modern software across the board could use an Everest-sized helping of.
I think one source of friction could be ideological more than anything: most Linux users love open source and hate Windows, so they might not want to build anything that even replicates its UI.
Listen, I hate Windows just as much as the next guy, but I've got to give props: I feel nostalgic about Windows 7, and if something provided both perfect .exe support and perfect Linux binary support, things could be really good. I hope somebody does it, and perhaps even adds it to loss32; that would be an interesting update.
That said, even if the UI looking the same is an issue, it’s not that difficult to come up with a look and feel that is legally distinct but spiritually aligned and functionally identical… random amateurs posting msstyle themes for XP/Vista/7 on DeviantArt did that numerous times.
It would fail, and just be another corpse in the desktop OS graveyard.
https://en.wikipedia.org/wiki/Hitachi_Flora_Prius
https://www.osnews.com/story/136392/the-only-pc-ever-shipped...
https://en.wikipedia.org/wiki/Linspire
Unless you ship your own hardware or get a vendor to ship your OS (see the above), and set up so the user can actually use it, you have to get users to install it on Windows hardware. So now your company is debugging broken consumer hardware without the help of the OEM. So that hopefully someone will install it on exactly that configuration for free.
This is not a winning business model.
Loss32 is itself a Linux distro, so technically there should be nothing stopping it from shipping everywhere.
I think you were assuming that I meant creating a whole kernel from scratch or something, but I am merely asking for a loss32 reskin that looks like Windows 7. That is definitely possible without any company debugging consumer hardware, or even without a company at all, since what I was proposing, as an example, is an open source desktop environment that just behaves like Windows 7 by default.
I don't really understand why we need a winning business model out of it. There isn't really a winning model for niri, Hyprland, Sway, KDE, Xfce, LXQt, GNOME, etc.; they are all open source projects run with the help of donations.
There might be a misunderstanding between us, but I hope this clears it up.
> you were assuming that I meant create a whole kernel from scratch or something
No, making Linux run reliably on random laptops is already a monumental challenge.
Regarding being successful, well, they already are: ZorinOS looks like Windows 7, or at least has some similarities to it, and it's sort of recommended to beginners, though usually Linux Mint is the most recommended distro.
> No, making Linux run reliably on random laptops is already a monumental challenge.
Not sure about this, but I ran Linux on a 15-year-old Dell Mini like it's no big deal, so I can only assume support has gotten better. In my observation, Linux support is really good for most laptops.
The problem is slapping Linux on some random bit of Windows kit and expecting it to work as though it had shipped with Linux, with support to back it. The more recent, the worse it will be.
If you want to run Linux, buy Linux computers that ship with Linux and have a support number you can call. Just like you'd not expect to be able to slap OSX on some random Dell and have it work.
This is how loss32 works, and I am just saying, sir, that instead of merely using the Win95 design that loss32 uses, perhaps we could modernize the style a little towards something like Windows 7 as a good balance?
Sir, of course, if you are worried about the software emulation aspect of things, you are worried about loss32 itself and not my idea of "hey, let's reskin it to look like Win7". We can have a discussion about loss32 itself if you want and weigh some pros and cons. It certainly isn't something I will use as a daily driver, but since Linux is built on ideas of freedom, having loss32 isn't really that bad. It's an experiment of sorts even right now; people will test it out because they are curious, and we will hear from the people who try it and what they think.
I love Linux just as much as you do, but I'll admit I never really got into the Windows ecosystem that much, so I set about learning Linux really well and took it as a challenge to conquer (mission accomplished).
Many people might not come with that mindset; they may instead feel that Microsoft is treating them really badly, or have moral dilemmas about it, so having something that caters to them isn't bad.
I also want to say that something like this might be good. Yes, people tell others to just use Linux Mint, but I never really found it a good option, not for Gen Z. I think Zorin, or perhaps AnduinOS, could be an answer, but we definitely need more young people in Linux, and as a young guy I will tell you what's happening.
People want the freedom but aren't able to articulate it. They are worried about AI but can't do anything about it, and to be honest they are right: how much can you or I do about the RAM crisis? Maybe there is something we can do, but we just don't know (like, did you know there is a way to use laptop RAM in a desktop, with its gotchas?).
They simply don't know about the open source side of things, since they just weren't exposed to it. To us it may be the core feature, but to them it's just one word among all the other words describing the features they want to use.
So, pardon me, I don't really understand your side of the discussion, and I am trying to find common ground.
Do you find an issue with the loss32 architecture itself? Or with the idea of a reskin towards Win7?
I presume it's the loss32 architecture, but I don't know what to tell you except that it uses Wine, and Wine just works; so much so that the original title of this thread, I think, was about how Win32 is the most stable ABI even for Linux, and that's only possible thanks to Wine.
Not sure what you meant by support there, sir. Perhaps you are a Red Hat user with a company license or similar; of course this isn't targeted at that sector, but at niche home users who just want to try out what "Linux" is :) I find the idea of loss32 very interesting, as I had thought of designing something similar, so I am glad it exists and I will probably watch it from afar.
I'd love a discussion about it, because I think we are making the same point from different angles, and perhaps I can rephrase it better. What I mean is: completely open source and all Linux-y, but Windows applications run easily and the UI is Windows-7-like (really similar), and that's it. Everything is Linux, and Wine just translates the Windows API calls into their POSIX/Linux equivalents. But perhaps I am missing your point of concern and we can talk about it, since clearly nothing is better than talking about Linux (oh the joy) to another Linux user! I may be misinterpreting some things, and if so, pardon me, but I can't see how hardware plays a role in Wine or in what I said; I'd be interested if you could tell me more about it. (Have a nice day, sir, I've used up my quota for the day, or the year, of talking about Linux, haha!)
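A grossly simplified conceptual sketch of what that translation looks like (this is not Wine's actual code; the CreateFileA signature here is deliberately truncated and almost all of the real behaviour is omitted):

```c
/* Illustration only: a toy Win32-on-POSIX shim.  Real Wine handles Unicode
 * paths, sharing modes, security attributes, overlapped I/O, a handle table
 * and much more -- and CreateFileA really takes seven parameters. */
#include <fcntl.h>
#include <unistd.h>

typedef void *HANDLE;
#define INVALID_HANDLE_VALUE ((HANDLE)-1)
#define GENERIC_READ   0x80000000u
#define GENERIC_WRITE  0x40000000u

/* The Windows program calls CreateFileA(); the shim turns it into open(2). */
static HANDLE CreateFileA(const char *path, unsigned int desired_access)
{
    int flags;
    if ((desired_access & GENERIC_READ) && (desired_access & GENERIC_WRITE))
        flags = O_RDWR;
    else if (desired_access & GENERIC_WRITE)
        flags = O_WRONLY;
    else
        flags = O_RDONLY;

    int fd = open(path, flags);               /* the POSIX call underneath */
    return fd < 0 ? INVALID_HANDLE_VALUE : (HANDLE)(long)fd;
}

int main(void)
{
    HANDLE h = CreateFileA("/etc/hostname", GENERIC_READ);
    return h == INVALID_HANDLE_VALUE ? 1 : 0;
}
```

The point is that nothing here needs kernel changes; it's all ordinary user-space code, which is why the hardware question is largely orthogonal to Wine.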
In beverages, though, I usually just drink cold drinks, but at this time of the year I'd freeze to death if I did something like that (the cold is crazy out here) xD
But yeah, there have been some instances where I talk about Linux to people my age, maybe IRL, and it's definitely frustrating that sometimes they don't understand it.
The screenshots could easily fool me into believing it actually is Windows 7 :p
There is also AnduinOS, which I think doesn't try to replicate Windows 7, but it definitely tries to look like Windows 10, or perhaps 11, IIRC.
Or maybe ReactOS - the actual windows clone - gets finished. Rumours put a first release date some time after Hurd.
It's a Unix underneath, though. A strange modernised Unix written in C++ but it's definitely Unix-like.
It's a Unix-like with a Win2K GUI, which is a pretty attractive combination, TBH...
In what way?
It sure was: if you were already bored by Windows 3.11/95 and were getting into Linux, it was fantastic. You were getting in on the ground floor with skills that could keep you in a good career for most of the rest of your life.
Love this idea. Love where it is coming from.
We have gone through one perceived reason after another to try to explain why the year of the Linux desktop wasn't this one.
Uncharitably, Linux is too busy breaking and deprecating itself to ever become more than a server OS, and even that only works because companies sponsor most of the testing and code that makes those parts work. Desktop in all its forms is an unmitigated shit show.
With linux, you’re always one kernel/systemd/$sound system/desktop upgrade away from a broken system.
Personal pains: nvidia drivers, oss->alsa, alsa->pulse audio, pulse audio->pipe wire, init.d to upstart to systemd, anything dkms ever, bash to dash, gtk2 to gtk3, kde3 to kde4 (basically a decade?), gnome 2 to gnome 3, some 10 gnome 3 releases breaking plugins I relied on.
It should be blindingly obvious; windows can shove ads everywhere from the tray bar to start menu and even the damned lock screen, on enterprise editions no less, and STILL have users. This should tell you that linux is missing something.
It’s not the install barrier (it’s never been lower, corporate IT could issue linux laptops, linux on laptops exist from several vendors).
It’s also not software, the world has never placed so many core apps in the browser (even office, these days).
It's not gaming. Though it's telling that, in the end, the solution from Valve (Proton) incidentally solves two issues: porting (stable) Windows APIs to Linux, and packaging a complete mini-Linux because we can't interoperate between distros or even releases of the same distro.
I think the complete and utter disdain in linux for stability from libraries through subsystems to desktop servers, ui toolkits and the very desktops themselves is the core problem. And solving through package management and the ensuing fragmentation from distros a close second.
Wine and Proton should have levelled the playing field. But they haven't. Also, if you've only just started using Linux, I recommend you wait a few years before forming an opinion.
From there, popularity outside the organization is irrelevant; internal support and the user base are built for, and on, some version of Linux.
As this would spread, we would eventually see global usage increase and global popularity become a non-issue.
Your average user might not even know it's Linux.
Not talking about the cross-platform versions of .NET and VS Code. I'm specifically talking about the Windows-specific software I mentioned above.
I don't see this happening, despite the fact that by now, these types of porting efforts were supposed to be trivial because of AI. Yeah, I'll wait.
There is a ton of useful FOSS for Windows and maybe it is a good push to modernize abandoned projects or make Win32 projects cross-compilable.
googles
Ah, no, that was FreeWin95. What on earth is Free95, it feels like history repeating itself…
https://github.com/versoft-software/free95/
They also forked the Uinxed kernel in order to run their userland.
I believe they are the same. Still, it makes sense what they are trying to do.
As a little Yule gift, one of the creators wrote me a hatemail at Xmas telling me he still remembered and it still burned and what a bad person I was.
Well, he got his own back. I was hurt, too. Perhaps he thinks that makes us even. Share the pain.
The idea of "fuck it, let's do Windows everywhere" was introduced by Justine Tunney as an April Fools Joke in the Cosmopolitan repository.
That's it. An April Fools joke.
What boggles my mind is why Google hasn't gotten more serious about making Android a desktop OS. Pay the money needed to get good hardware support, control the OS, and now you're a Microsoft/Apple competitor for devices. Yes there is the Chromebook, but ChromeOS is not a real desktop OS, it's a toy. Google could control both the browser market and the desktop computing market if they seriously tried. (But then again that would require listening to customers and providing support, so nevermind)
What are you talking about? The majority of hardware is supported by only Linux at this point.
> What boggles my mind is why Google hasn't gotten more serious about making Android a desktop OS.

Google is seriously working on making Android a desktop OS; Android 16 is only the first step towards it.

> Yes there is the Chromebook, but ChromeOS is not a real desktop OS, it's a toy.

ChromeOS is very much not a toy; it's pretty great if it can facilitate your work.

> But then again that would require listening to customers and providing support, so nevermind

Google has consistently provided good support for all their hardware products; listening to customers is not their cup of tea, though.
Google is absolutely no saint. I don't like their business model, how they're closing more and more of Android, how they keep killing services, how GCP can nuke you, that they "own" web standards, ... But they're not all bad; they've also contributed greatly to much of the web and its surrounding technologies.
ChromeOS is a better development environment than macOS in many ways. When was the last time you actually used one of these things, 2013?
What are you talking about? Everything for desktops work out of the box unless you have something weird and proprietary, and even then most distros have support anyway.
Again, I question your experience in this regard. Do you actually use dGPUs on Linux, or are you repeating a 14-year-old meme?
GPU support on Linux is more comprehensive than macOS, and if you don't need DirectX it's arguably better than Windows too. Mesa drivers are unparalleled by Apple or Microsoft, in a myriad of ways.
But more than that, it's simple logic: hardware manufacturers often don't release specs or proprietary firmware blobs, forcing kernel hackers to reverse engineer in order to support a device, which is often too difficult; not to mention there are only so many kernel hackers and a lot of devices and hardware revisions. There's a famous YouTube video of the most famous kernel hacker telling Nvidia to go fuck itself for this very reason.
What are you talking about... The situation is the same as on Windows: an officially supported proprietary driver maintained by Nvidia. Unless you're trying to run a 12+ year old card, it'll work fine. AMD on the other hand is amazing and works perfectly, with an officially supported and maintained open source driver. I LOVE it.
> bit-twiddling
Never happened.
A 1:1 recreation of the Windows XP or Windows 7 user experience with the classic theme would be killer.
I say this with love, I have used KDE extensively and I still find it more janky than Windows XP. Gnome is "better" (especially since v40) in that it's consistent and has a few nicer utilities, but it also has worse UX (at least for power users) than Windows XP.
The really cool thing about Win32 is it's also the world's stable ABI. There are lots of fields of software where the GNU/Linux and POSIX-y offerings available are quite limited and generally poor in quality, e.g. creative software and games. Win32 gives you access to a much larger slice of humanity's cultural inheritance.
What a pile of bullshitting.
There were some great efforts to build these out in ReactOS a few years ago.