I think it's a fair exchange too. Even as an individual, I pay for plenty of smaller open-source SaaS services (even if they're more expensive than proprietary competitors) for the very reason that I could always self-host them without interruption if SHTF and the provider goes under.
I've seen a number of theories online that boil down to this: young tech enthusiasts in the 2000s and early 2010s got hands-on experience with open-source projects and ecosystems, since they're more accessible than enterprise tech that's typically gated behind paywalls, and that experience then translates into what they use when they enter the working world (where some naturally end up at M$).
This seems to track: longtime M$ employees from the Ballmer era still often hold stigmas against open-source projects (Dave's Garage, and similar), while the current generation of employees holds much more favorable views.
But who knows, perhaps it's all one long game from M$ of embracing, extending, and ultimately extinguishing.
My guess…
The same reason Rome didn’t fall. It simply turned into the Church.
MS isn’t battling software manufacturers because their lock on hardware direction and operating systems is so strong that they can set the direction without having to hold the territory themselves.
- three years later it's left in the hands of the powerful community that was built around it with MS help
- MS doesn't have to provide support and it's not their problem anymore
strace shows that the sleep program uses clock_nanosleep, which is theoretically "passive." However, if the host suspends and then wakes up after the sleep period should have ended, it continues as if it were "active."
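You can poke at this yourself; a minimal sketch (assumes coreutils `sleep`; the strace line is optional and needs strace installed):

```shell
# Optionally, see the syscall for yourself:
#   strace -e trace=clock_nanosleep sleep 2
# glibc issues a *relative* clock_nanosleep, and relative sleeps are
# measured against a clock that does not advance while the host is
# suspended -- so after suspend/resume the process keeps sleeping for
# the remainder, overshooting its intended wall-clock wake-up time.
start=$(date +%s)
sleep 2
end=$(date +%s)
echo "slept for $((end - start))s"
```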
Of course, if you want the native integration WSL offers, you'll need to upgrade the Linux driver/daemon side to support whatever kernel you prefer to run if it's not supported already. Microsoft only supports a few specific kernels, but the code is out there for the Linux side so you can port the code to any OS, really.
With some work, this could even open up possibilities like running *BSD as a WSL backend.
Of course, there might be some regressions. They are usually only fixed (upstream) after the WSL kernel gets upgraded and the regression starts to repro in WSL.
Edit: for clarity, by "multiple OS" I mean multiple Linux versions. For example, if one project depends on Ubuntu 22 and another is easier with Ubuntu 24, you don't have to stress about "do I update my OS?"
So I left - I am willing to do more work to be spied on less, to be used as a product less, and to fight with my computer about who owns it less.
This feature thing is really one of their strategies. At work they send us "adoption managers" that run reports to check whether people use feature xyz enough and set up stupid comms campaigns to push them to do so.
I really hate that. I decide how I use my computer. Not a vendor.
This is a great way of saying it, and it expresses the uneasy feeling Windows has given me recently. I use Linux machines, but I have one Windows machine in my home as a media PC; and for the last several years Windows has made me feel like I don’t own that computer but am just lucky to be along for the ride. Ramming ads into the taskbar and Start menu, forcing updates on me, forcing me to make a Microsoft account before I can log in (or, for the pedantic, using a dark UI pattern so I can’t figure out how to avoid it).
With Linux I feel like the machine is a Turing-complete wonderbox of assistance and possibility; with Windows it feels like Microsoft has forced its way into my home and is obnoxiously telling me it knows best, while condescendingly telling me I’m lucky to be here at all. It’s a very different feeling.
However, for those of us who went Linux many years ago and like our free open source: in 2025, is it better to go back to the dark side, to run Windows and have things like a LAMP stack and terminals run with WSL?
I don't play games or run Adobe products, I use Google Docs and I don't need lots of different Linux kernels. Hence, is it better to run Linux in Windows now? Genuinely asking.
> is it better to run Linux in Windows now? Genuinely asking.
It definitely is. Servicing takes ~1 minute per month to click "yeah, let's apply those updates and reboot." Peace of mind, with no worrying that external hardware won't work, or the monitor will have issues, or the laptop won't sleep, or the battery will discharge faster during a call due to lack of hardware acceleration, or noise cancellation won't work, or ...
not on a shitty wrapper running on an ad-platform.
Sorry but not sorry, it's not easier than running on Linux. It requires the Windows Store to work, and it uses Hyper-V (which breaks VMware Workstation, among other things).
It's in a better package, to be sure, but it's not "easier to run multiple OSes on the same computer." It's easier to use multiple OSes (no SSH, no GUI forwarding, etc.), as long as all those OSes are Linux flavors supported by WSL.
Want FreeBSD or Windows? Nope!
Well, it is the Windows Subsystem for Linux :) not the Windows Subsystem for Windows, or for FreeBSD for that matter :)
PS: I wonder if you can make your own image? After all, it's really just Hyper-V with some config candy.
I'm pretty sure that with the open-sourcing, we'll see FreeBSD or more exotic systems popping up quite quickly. Heck, macOS would be fun!
Especially in licensing! /sarcasm
That said, the kernel they distribute is open source, and you're not limited to just the distros they're working with directly. There are a number of third-party options (e.g. there's no Arch from Arch or Microsoft, but there's a completely compatible third-party package that gives you Arch in WSL2).
No longer true since last month.
https://lists.archlinux.org/archives/list/arch-dev-public@li...
Along with the glibc hacks needed by WSL1.
(I was part of the discussion and also very adamant about this not happening)
The big drawback to WSL to me is the slow filesystem access because NTFS sucks. And having to deal with Windows in the first place.
PS: I wouldn't worry about your karma. It's just a number :P
However it’s not perfect, for example I hit this bug when trying to run node a few days ago https://github.com/microsoft/WSL/issues/8219#issuecomment-10... and I don’t think they’re fixing bugs in WSL1 anymore
It’s fine for running small models but when you get to large training sets that don’t fit in RAM it becomes miserable.
There is a line where the convenience of training or developing locally gives way to a larger on demand cloud VM, but on WSL the line is much closer.
The culprit would be the Plan 9 bits (think of SMB or NFS but... wilder? Why are they using 9P again?)
The problem is Windows IO filters and whatnot, Microsoft Defender trying to lazily intercept every file operation, and if you're crossing between windows and Linux land, possibly 9pfs network shares.
WSL2's own disk is just a VM image and fairly fast: you're just accessing a single file with some special optimizations. It's usually far more responsive than anything done by Windows itself. Don't do your work in your network-shared Windows home folder.
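A crude way to feel the difference yourself: run the same metadata-heavy snippet once in your ext4 home directory inside WSL2 and once under a /mnt/c path (file count and paths are arbitrary):

```shell
# Build a small tree, then time a metadata-heavy traversal over it.
# On the ext4 disk image inside WSL2 this is quick; point the same
# loop at a /mnt/c path and every stat() crosses the 9P share.
dir=$(mktemp -d)
for i in $(seq 1 500); do touch "$dir/file_$i"; done
time find "$dir" -type f | wc -l    # counts 500 files
rm -rf "$dir"
```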
Not the biggest of those issues: 'find' and 'git status' on WSL2 in a big project are still >100 times slower on a Windows dev drive (which avoids those filters) than with WSL1 on a dev drive.
WSL1 on regular NTFS with Defender disabled is about 4x slower than WSL1 on a dev drive, so that stuff does cause some of it, but WSL2 feels hopelessly slow. And WSL2 can't share memory as well, or take as much advantage of the filesystem cache (doubling it if you use the Windows drive in both places, I think, unless the network-drive representation of it doesn't get cached on the WSL2 side).
WSL2 does not take less advantage of filesystem caches; Linux's block cache is perfectly capable. Hyper-V is a semi-serious hypervisor, so it should be using a direct I/O abstraction for writing to the disk image. Memory is also ballooned, and can dynamically grow and shrink depending on memory pressure.
Linux VMs are something Microsoft has poured a lot of money into optimizing, as that's what the vast majority of Azure runs. Cramming more out of a single machine, and therefore more things into a single machine, directly correlates with profits, so that's a heavy investment.
I wonder why you're seeing different results. I have no experience with WSL1, and looking into a proprietary legacy solution with known issues and limited features would be a purely academic exercise that I'm not sure is worth it.
(I personally don't use Windows, but I work with departments whose parent companies enforce it on their networks.)
Files on the WSL2 disk image work great. They're complaining about accessing files that aren't on the disk image, where everything is relayed over a 9P network filesystem and not a block device. That's the part that gets really slow in WSL2, much slower than WSL1's nearly-native access.
> Memory is also balloning, and can dynamically grow and shrink depending on memory pressure.
In my experience this works pretty badly.
> a proprietary legacy solution with known issues and limited features
Well at least at the launch of WSL2 they said WSL1 wasn't legacy, I'm not sure if that has changed.
But either way you're using a highly proprietary system, and both WSL1 and WSL2 have significant known issues and limited features, neither one clearly better than the other.
Watch https://www.youtube.com/watch?v=qbKGw8MQ0i8 please.
But in the end they had to get the OS vendor to bless their process name anyway, just so the OS would stop doing things that tank the performance for everybody else doing something similar but who haven't opened a direct line up with the OS vendor and got their process name on a list.
This seems like a pain point for the vendor to fix, rather than for everybody shipping software to that OS to work around.
That's if you are going between VM and host. If you use the space allocated for the VM, it's pretty fast.
Is VMWare more powerful than Linux?
I know… every year is the year of the Linux desktop… but seriously the AI spyware included was enough to get me gone for good.
Spyware and adware are a government policy/regulation problem. Thanks to the GDPR and DMA, using Windows in the EU is a significantly better experience (try setting up a Windows desktop with an EU image). You can remove almost all of the apps, including Edge and Copilot. There are no ads in the UI, neither in Explorer nor in the Start menu.
This is why you pay the karma tax. This statement is so clearly false.
My Linux can run multiple Linuxes as well, without VM overhead, something Windows can’t do. Furthermore, WINE allows me to forgo running any VM to run Windows applications.
I developed on WSL for 3 years and consistently the biggest issue was the lack of ability to use tooling across the shared OSes.
Your karma depleting statements are biased, unfounded, and it shows as you do not really provide counter evidence. That’s why you lose karma.
OP's statement remains incorrect, because their assumption is that the WSL experience can't be reproduced in Linux.
XAMPP did not work out of the box for me on Windows (skill issue on my part, I know), so my preferred setup was to run an Ubuntu Server VM (LAMP stack) and then develop whatever I had in a Windows IDE.
I could have done that under full Linux; I just did not want to. Then Vagrant came into existence, which I'd say was made for my use case (but I never got around to adopting it).
I'm really happy with my WSL2 setup. I stopped using VMware Workstation when WSL2 broke it, but WSL2 is exactly what I needed to match my use case.
Why wouldn't you have just spent 5 minutes to get XAMPP working?
Is it still broken?
That being said, there is a performance impact.
Also, 1980s-style X11 widgets on the Windows desktop in their own windows? Cool.
Wayland supports window managers?
Cool because nothing about how Windows boots is intercepted; you can just nuke the new partitions (or overwrite them with a new Linux installation). I still prefer a native Linux boot with "just in case" Windows option to WSL.
I understand the "roll your own" argument very well. In my time, I've experienced quite the variety of configs and dotfiles, but I'm not young anymore, so I've settled on Regolith, an opinionated set of tools (including my favourite, i3wm) on top of Ubuntu, and I simply use the defaults for most things.
Anyway, it's much easier to use Linux as a daily driver than it's ever been. The choice of distro is simply which package manager to use, and everything else just works, as long as it's in the package manager's inventory.
I haven't compiled my own computer's kernel in 6 years (but I still cross compile for rpi and other IoT), and I haven't used my dotfiles in 3 years, just defaults.
A very big and very incorrect assumption. This reads like you asked the initial question without any actual curiosity behind it.
What gets you that on windows? The builtin stuff is far from cohesive.
I don't have a need to run multiple OSes though. All of my tools are Linux based, and in companies that don't let people run Linux, the actual tools of the trade are almost all in a Linux VM because it's the only reasonable way to use them, and everything else is cross-platform. The outer OS just creates needless issues so that you now need to be a power user with two operating systems and their weird interactions.
> extremely arcane things I had to fix when setting it up involving host DNS and VPN adapter priority not getting propagated into the VM so networking was completely broken
Are you sure you set up the VPN properly? Messing around with Linux configs is a good way to end up with "somehow" bugs like that.
OSX was a bit janky with docker filesystem slowness, homebrew being the generally recommended package manager despite being awful (why do I sometimes tap a cask and sometimes pour a bottle? Don't tell me; I don't care. Just make it be "install". Also, don't take "install" as a cue to go update all of my other programs with incompatible versions without asking), annoying 1+ second animations that you can't turn off that make it so the only reasonable way to use your computer is to never maximize a window (with no tiling support of course), and completely broken external monitor support (text is completely illegible IIRC), but Windows takes jank to another level.
By contrast, I never encounter the issues people complain about on Linux. Bluetooth works fine. Wifi works fine. nVidia GPUs and games work fine. Containers are easy to use because they're natively part of the OS. I prefer Linux exactly because I stopped enjoying "tinkering" with my computer like 10 years ago, and I want it to just quietly work without drawing attention to itself (and because Windows 8 and the flat themes that followed were hideous and I was never going to downgrade to that from Windows 7).
WSL works well enough.
You know what's even more convenient than a VM? Not needing a VM and still having the exact same functionality. And you don't need a bunch of janky wrapper scripts, there's more than one tool that gives you essentially the same thing; I have used both Distrobox and toolbx to quickly drop into a Ubuntu or Fedora shell. It's pretty handy on NixOS if I want to test building some software in a more typical Linux environment. As a bonus, you get working hardware acceleration, graphical applications work out of the box, there is no I/O tax for going over a 9p bridge because there is no 9p bridge, and there is no weird memory balloon issues to deal with because there is no VM and there is no guest kernel.
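For the curious, a minimal Distrobox session looks roughly like this (a sketch, assuming distrobox plus a container runtime like podman are installed; the image tag is just an example):

```shell
# Create an Ubuntu userland that shares your $HOME (no VM involved),
# then run a command inside it. Guarded so it no-ops when distrobox
# isn't installed.
if command -v distrobox >/dev/null 2>&1; then
  distrobox create --name ubuntu-box --image ubuntu:24.04
  distrobox enter ubuntu-box -- cat /etc/os-release
else
  echo "distrobox not installed"
fi
```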
I get that WSL is revolutionary for Windows users, but I'm sorry, the reason why there's no WSL is because on Linux we don't need to use VMs to use Linux. It's that simple...
It requires a bit of work to set up to your liking, of course, but hey, at least you have the option to set it up to your liking.
Then I was forced to use a Mac for work, so I was using a floating WM again. On my personal machine, ion3 went away and I never fully got around to migrate to i3.
By the time I got enough free time to really work on my personal setup, it had accumulated two huge monitors and was a different machine. I found I was pretty happy just scattering windows around everywhere. Especially with a trackball's cursor throw. This was pretty surprising to me at first.
Anyway this is just my little personal anecdote. If I go back to a Linux install I'll definitely have to check out i3 again. Thanks for reminding me :)
Care to elaborate? I'm not sure I understand what you're saying here.
I make a .desktop file and a shell script to move it to the right place. Double-click the shell file: it opens a text editor. Search the right-click menu: still no way to run it. To the CLI we go: chmod +x, then launch it from the CLI. Then, after adding the desktop icon, I can launch it.
On Windows, you just double-click the identified-through-file-extension executable file. This, like most things in Linux, implies the UX is designed for workflows I don't use as a PC user. Likely servers?
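For what it's worth, the dance being described boils down to something like this (paths follow the freedesktop.org Desktop Entry spec; the app name and Exec path are made up):

```shell
# Register a hypothetical "My App" so desktop environments pick it up.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/myapp.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=My App
Exec=/opt/myapp/myapp
Terminal=false
EOF
# Some desktops refuse to launch entries that aren't executable:
chmod +x ~/.local/share/applications/myapp.desktop
```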
If you're on KDE, you can right-click the start menu and add the application. Also, right-click menu should give you a run option.
That sounds like Wayland getting worse, but it's actually been slowly improving and it's pretty good now. Only took a decade+ to get there.
KDE is much more cohesive, stable, and has significantly more features.
I don't think it's silly. Sure, it's a VM, but it's so nice that I barely reboot into Linux. You get the best of both worlds with WSL.
If I were to run an OS in a VM, it's going to be Windows, not Linux.
No ridiculous start menu spam; a sane, non-bloated operating system (imagine being able to update user space libraries without a reboot, due to being able to delete files that other processes still have opened!); being able to back up my data at the file level without relying on weird block-level imaging shenanigans and so much more.
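That "delete files that other processes still have opened" point is plain Unix unlink semantics, easy to demo:

```shell
# Open a file, unlink its name, and keep reading through the still-
# open descriptor: the inode survives until the last fd closes.
# This is exactly why a library can be replaced on disk while running
# processes keep using the old copy.
tmp=$(mktemp)
echo "hello" > "$tmp"
exec 3< "$tmp"   # hold an open read descriptor
rm "$tmp"        # unlink the name; data stays reachable via fd 3
cat <&3          # prints: hello
exec 3<&-
```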
How is inverting the host/guest relationship an improvement on that?
Windows at its core just does not seem like a serious operating system to me. Whenever there are two ways to do something, its developers seem to have picked the non-reasonable one compared to Unix – and doing that for decades adds up.
But yes, first impressions undoubtedly matter too.
These days I'm avoiding booting into Windows unless I really have no choice. The ridiculousness of it is simply limitless. I would open a folder with a bunch of files in it and the Explorer shows me a progress bar for nearly a minute. Why? What the heck is it doing? I just want to see the list of files, I'm not even doing anything crazy. Why the heck not a single other file navigator does that — not in Linux, not on Mac, darn — even the specialized apps built for Windows work fine, but the built-in thing just doesn't. What gives? I would close the window and re-open the exact same folder, not even three minutes later and it shows the progress bar again. "WTF? Can't you fucker just cache it? Da fuk you doing?"
Or I would install an app. And seconds after installing it I would try to search for it in the Start menu, and guess what? Windows instead opens Edge and searches the web for it. wat? Why the heck I can't remove that Edge BS once and for all? Nope, not really possible. wat?
Or why can't I ever rebind Win+L? I can disable it, but I can't rebind it; there's just no way. Is it trying to operate my computer, or does the 'S' in 'OS' stand for "soul"?
Or for whatever reason it can't even get the time right. Every single time I boot into it, my clock time is wrong. I have to manually re-sync it. It just doesn't do it, even with the location enabled. Stupid ass bitch.
And don't even let me rant about those pesky updates.
I dunno, I just cannot not hate Windows anymore. Even when I need to boot in it "for just a few minutes", it always ends up taking more time for some absolute fiddlesticks made of bullcrap. Screw Windows! Especially the 11 one.
I want an OS, not an entertainment center, meaning I want to launch programs, organize my files, and connect to other computers. Anything that hinders those is bad. I moved off macOS for the same reason, as they are trying to make those difficult too.
Exactomundo! I'm a software developer, not a florist. I don't care about all those animations, transitions, dancing emojis, styled sliding notifications, windings and dingleberries. If I want to rebind a fucking key I should be able to. If I want to replace the entire desktop with a tiling manager of my choosing — that should be possible. And definitely, absolutely, in no way, should just about any kind of app, especially a web-browser, be shoved in my face. "Edge is not that bad", they would say. And would be completely missing the whole point.
Dual booting will do that, because Linux and Windows treat the system clock differently. From what I recall, one of them sets the hardware clock directly to local time, and the other keeps it in UTC and then applies the timezone offset.
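On a systemd distro you can inspect and flip the setting like this (a sketch; `timedatectl` may be absent on non-systemd setups, and the usually preferred fix is making Windows use UTC via the `RealTimeIsUniversal` registry value instead):

```shell
# Check whether Linux treats the hardware clock as local time, and
# optionally flip it to match Windows' default (guarded for systems
# without timedatectl).
if command -v timedatectl >/dev/null 2>&1; then
  timedatectl status | grep -i "RTC in local" || true
  # timedatectl set-local-rtc 1   # store the RTC in local time
else
  echo "timedatectl not available"
fi
```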
https://wiki.archlinux.org/title/System_time#UTC_in_Microsof...
But there's also the thing where Microsoft stops supporting older machines, creating a massive pile of insecure boxes and normie-generated e-waste; and the thing where it dials home constantly; and the thing where they try and force their browser on you, and the expensive and predatory software ecosystem, and the insane bloat, and the requiring a Microsoft account just to use my own computer. Oh yeah, and I gotta pay for this crap?!
I went full Linux back when Windows 11 came out and will only use it if a job requires. Utterly disgusting software.
That said, a distaste for advertising goes beyond OCD. Advertisers frequently have questionable ethics, ranging from intruding upon people's privacy (in the many senses of the word) to manipulating people. It is simply something that many of us would rather do without.
That's... a very weird criticism to level at Windows, considering that the advice I've seen for Linux is to reboot if you update glibc (which is very much a user space library).
Having to constantly reboot my computer, or risk missing important security patches, was very annoying to me on Windows.
I've never had to reboot after updating glibc in years of using Linux, as far as I can remember.
This is absolutely not true for Linux kernel updating. While you won't be using the new kernel before rebooting, there's 0 risk in not rebooting, because there's exactly 1 version of the kernel running on the machine -- it's loaded into memory when your computer starts.
There's of course rare exceptions, like when a dynamically linked library you just installed depends on a minimum specific version of the Linux kernel you also just installed, but this is extremely rare in Linux land, as backwards compatibility of programs with older kernels is generally a given. "We do not break userspace"
Most distros leave the current running kernel and boot into the new one next time.
Some, like Arch, overwrite the kernel on an update, so modules can’t be loaded. It is a shock the first time you plug in a USB drive and nothing happens.
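A quick way to check whether you've hit that state (assumes the conventional /lib/modules/$(uname -r) layout):

```shell
# If the module directory for the *running* kernel is missing, the
# package manager has replaced the kernel on disk, and module loading
# (e.g. plugging in a USB drive) will fail until you reboot.
running=$(uname -r)
if [ -d "/lib/modules/$running" ]; then
  echo "modules for running kernel ($running) present"
else
  echo "reboot needed: /lib/modules/$running is gone"
fi
```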
Running programs will continue to use the libc version that was on disk when they started. They won't even know glibc was upgraded. If something is broken before rebooting, it'll stay broken after.
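You can see this from /proc: the mappings a process holds refer to the inodes it opened at startup, not to whatever the paths point at now (a "(deleted)" suffix shows up in the maps when a mapped file has been replaced on disk):

```shell
# List a couple of shared objects mapped into this very shell process.
# After an upgrade replaces them on disk, these mappings stay in use
# (showing up as "(deleted)") until the process restarts.
grep -m2 '\.so' /proc/self/maps || echo "no shared objects mapped (static binary?)"
```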
For a trivial change to glibc, it won't cause issues. But there's a lot of shared libraries and lots of different kinds of changes in different kinds of libraries that can happen.
I still haven't nailed down whether it was due to a shared library update, but just the other day, after running upgrades, I was unable to su or sudo / authenticate as a user until after rebooting.
This is correct, but let's not pretend that linux is perfect. 99% of linux _for me_ is my terminal environment. WSL delivers on that _for me_.
I don't see any start menu spam because I rarely use it, when I do I type what I'm looking for before my eyes even move to look at that start menu.
Oh, and I can play Destiny 2 and other games without shenanigans. Also, I don't need to figure out why Slack wants to open links in Chromium but Discord in Firefox (I do have to deal with Edge asking to be the default browser, but IMO that's less annoying).
Oh and multi-monitor with multiple DPI values works out of the box without looking up how to handle it in one of the frameworks this app uses.
Having Windows and Linux in the same desktop the way that WSL2 does obviously means that it does add a lot of value, but what you get in the box isn't exactly the same as the thing running natively. Rather than a strict superset or strict subset, it's a bit more like a Venn diagram of strengths.
You obviously don't. Maybe WSL is the best compromise for people who need both Windows and Linux.
But it's ridiculous to think that WSL is better than just Linux for people who don't need Windows at all. And that's kind of what the author of this thread seems to imply.
Where is the reverse WSL on Linux, where Windows is deeply embedded and you have all the Windows features in your hands?
You can use Wine/CrossOver, which is cool, but even now the number of software products it supports is tiny. Steam has a lot of games.
You can run a virtual machine with Windows on it. That is identical to what you can do on Windows with Linux.
WSL2 is a virtual machine with unique tooling around it that makes it easier to use and integrates well with Windows.
Linux, on the other hand, barely supports Windows because the latter is closed. And not just closed: Windows issues component updates which specifically check whether they're running in Wine and stop working, being actively hostile to a potential Linux host.
The two are not equivalent: nobody on the Linux kernel team is actively sabotaging WSL, whereas Microsoft is actively sabotaging Wine.
Do you have a link to where I can read more about this? My understanding is that Microsoft saw Wine as inconsequential to their business, even offloading the Mono runtime to them [1] when they dropped support for it.
I really don't see it happening any time in the next decade at least, though. While Windows might not be Microsoft's biggest focus any more it's still a huge income stream for them. They won't just give that up.
https://github.com/Fmstrat/winapps
Enjoy.
It also doesn't appear to be the case even now. I searched for laptops available in my country that fit my budget, and for each laptop I searched "<laptop name> linux reddit" on Google and filtered for results less than a year old. Every laptop's reports included some bug or other.
https://www.reddit.com/r/linuxhardware/comments/1hfqptw/linu...
https://www.reddit.com/r/linuxhardware/comments/1esntt3/leno...
https://www.reddit.com/r/linuxhardware/comments/1j3983j/hp_o...
https://www.reddit.com/r/linuxhardware/comments/1k1nsm8/audi...
The laptop with the best reported Linux support seemed to be the ThinkPad P14s, but even there users reported tweaking some config to get the fans to run silently and to make the speakers sound acceptable.
https://www.reddit.com/r/thinkpad/comments/1c81rw4/thinkpad_...
And yeah, it's best to wait a bit for new models, as support is sorted out, if the manufacturer doesn't support Linux itself. Or pick a manufacturer that sells laptops with Linux preinstalled. That makes the comparison with a laptop with Windows preinstalled fair.
I wasn't cherry-picking things. I literally searched for laptops available in my budget in my country and looked up what was the linux support like for those laptops as reported by people on reddit.
> Or pick a manufacturer that sells laptops with Linux preinstalled
I suppose you are talking about System76, Tuxedo etc. These manufacturers don't ship to my country. Even if I am able to get it shipped, how am I supposed to get warranty?
HP, Dell and Lenovo also sell Linux laptops on which Linux runs well.
I sympathize with the more limited availability and budget restrictions, but comparisons must be fair: compare a preinstalled Windows with a preinstalled Linux, or at least Linux installed on hardware whose manufacturer bothered to work on Linux support.
When the manufacturer did their homework, Linux doesn't have the issues listed earlier. I've seen several laptops of these three brands work flawlessly on Linux and it's been like this for a decade.
I certainly choose my laptops with Linux in mind, and I know just picking random models would probably lead to little issues here and there, which I don't want to deal with. Although I have installed Linux on random laptops for other people and fortunately haven't run into issues.
> it's been like this for a decade
Again, it depends on the definition of "flawlessly". AFAIK, support for hardware-accelerated video playback in browsers was broken across the board only three years ago.
Your first option is to buy a laptop with Linux preinstalled from one of the many manufacturers that provide this. This requires no particular knowledge or time. Admittedly, this may lead you to more expensive options; entry-grade laptops won't be available.
Your second best bet is to read tech reviews. Admittedly this requires time and knowledge, but often enough people turn to their tech literate acquaintance for advice when they want to buy hardware.
> AFAIK, support for hardware-accelerated video playback in browsers was broken across the board only three years ago.
Yes indeed, that's something we didn't have, and I agree it sucks. Now, all OSes have flaws that the others don't, and it's not like the videos didn't play; in practice it was an issue if you wanted to watch 4K video for hours on battery. Playing regular videos worked, and you can always lower the quality if your situation doesn't allow higher qualities. Often enough, you could also download the video and play it outside the browser. I know, not ideal, but also way less annoying than the laptop not suspending when you close the lid because of a glitch or something like that.
I have earnestly tried for >20 minutes trying to find such a laptop with any reputed manufacturer in my country (India) and come up empty-handed. Please suggest any that you can find. Even with Thinkpads, the only options are "Windows" or "No Operating System".
>Your second best bet is to read tech reviews.
Which tech reviews specifically point out linux support?
>Playing regular videos worked, and you can always lower the quality if your situation doesn't allow the higher qualities
The issue was never about whether playing the video worked. CPU video decoding uses much more energy and leads to your laptop running hot and draining battery life.
Can we at least agree to reduce the timeframe for things working flawlessly to "less than two years" instead of "a decade"? Yes you were able to go to the toilet downstairs but the toilet upstairs was definitely broken.
I have to onboard a lot of students to work on our research. The software is all linux (of course), and mostly distribution-agnostic. Can't be too old, that's it.
If a student comes with a random laptop, I install WSL on it, mostly Ubuntu, then apt install <curated list of packages>. Done. Linux laptops are OK too, I think, but so far I've only had one student with one. macOS used to be easy, but gets harder with every release, and every new OS version breaks something (mainly CERN ROOT), and people have to wait until it's fixed.
Fair enough. I think the best way to run Linux, if you want to be sure you won't have to tweak stuff, is to buy hardware with Linux preinstalled. That your choice is more limited is another matter than "Linux can't suspend".
Comparing a preinstalled Windows with Linux installed on a random laptop whose manufacturer can't be bothered to support it is a bit unfair.
Linux on a laptop where the manufacturer did their work runs well.
This isn't really the case, and hasn't been for some years now, especially since Valve started investing heavily in Wine. The quality of Wine these days is absolutely stunning, to the point that some software runs better under Wine than it does on Win11. Then there's the breadth of support, which has moved the experience from there being a slight chance of something running on Wine to it now being surprising when something doesn't.
I’ve been a software developer for 20 years and in _my_ opinion Windows is the best platform for professional software development. I only drop down to linux when I need some of the excellent POSIX tools, but my whole work ergonomy is based on Windows shortcuts and Visual Studio.
I’ve been forced to use Mac for the past 1.5y but would prefer not to.
Why would Windows be superior for me? Because that’s where the users are (for the work stuff I did before this latest gig). I started in real time graphics and then spent over a decade in CAD for AEC (developing components for various offerings including SketchUp). The most critical thing for the stuff I did was the need to develop on the same platform as users run the software - C++ is only theoretically platform independent.
Windows APIs are shit for sure, for the most part.
But still, from this pov, WSL was and will be the best Linux for me as well.
YMMV.
WSL2 is really handy when you want to run other software though. For example, I use Solidworks, so I need to run windows. Forscan for Ford vehicles also has to run under Windows. Having WSL2 means that I can just have one laptop and run any software that I want.
I've successfully run it with WINE. Though, my Forscan executable was 3 years old or so and that may have changed, but I doubt it.
That's always true, of course. But, compared to other options, relying on WINE increases the chances of it happening by an amount that someone could be forgiven for thinking isn't acceptable.
It's possible to see what Wine as a great product would look like. No offense to crossover because they do good work, but Valve's Steam Play shows what you can really do with Wine if you focus on delivering a product using Wine.
Steam offers two main things:
- It pins the version of Wine, providing a unified stable runtime. Apps don't just break with Wine updates, they're tested with specific Proton versions. You can manually override this and 9 times out of 10 it's totally fine. Often times it's better. But, if you want it to work 10 out of 10 times, you have to do what Valve does here.
- It manages the wineserver (the lifecycle of the running Wine instance) and wine prefix for you.
The latter is an interesting bit to me. I think desktop environments should in fact integrate with Wine. I think they should show a tray icon or something when a Wineserver is running and offer options like killing the wineserver or spawning task manager. (I actually experimented with a standalone program to do this.[1]) Wine processes should show up nested under a wineserver in system process views, with an option to go to the wineprefix, and there should be graphical tools to manage wine prefixes.
To be fair, some of that has existed forever in some forms, but it never really felt that great. I think to feel good, it needs to feel like it's all a part of the desktop system, like Wine can really integrate into GNOME and KDE as a first-class thing. Really it'd be nice if Wine could optionally expose a D-Bus interface to make it so that desktop environments could nicely integrate with it without needing to do very nasty things, but Wine really likes to just be as C/POSIX/XDG as possible so I have no idea if something like that would have a snowball's chance in hell of working either on the Wine or desktop environment side.
Still, it bums me out a bit.
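For the curious, the watching part really is simple. A minimal sketch of "is a wineserver running?" detection (this assumes a Linux /proc layout and is illustrative only, not code from the winemon experiment linked below):

```python
# Minimal sketch: find running wineserver processes by scanning /proc.
# A real desktop integration would poll this (or watch process events)
# and show/hide an indicator accordingly.
import os

def find_wineservers(proc_root="/proc"):
    """Return the pids of processes whose comm name is 'wineserver'."""
    if not os.path.isdir(proc_root):
        return []  # not on Linux, or /proc unavailable
    pids = []
    for entry in os.listdir(proc_root):
        if not entry.isdigit():
            continue
        try:
            with open(os.path.join(proc_root, entry, "comm")) as f:
                if f.read().strip() == "wineserver":
                    pids.append(int(entry))
        except OSError:
            continue  # process exited or isn't readable; skip it
    return pids

if __name__ == "__main__":
    print(find_wineservers())
```

From there a desktop shell could show a tray icon while the list is non-empty and offer `wineserver -k` as the "kill" action.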
One pet peeve of mine regarding using Wine on Linux is that EXE icons didn't work out of the box on Dolphin in NixOS; I found that the old EXE thumb creator in kio-extras was a bit gnarly and involved shelling out to an old weird C program that wasn't all that fast and parsing the command line output. NixOS was missing the runtime dependency, but I decided it'd be better to just write a new EXE parser to extract the icon, and thankfully KDE accepted this approach, so now KDE has its own PE/NE parser. Thumb creators are not sandboxed on KDE yet, so enable it at your own risk; it should be disabled by default but available if you have kio-extras installed. (Sidenote: I don't know anything about icons in OS/2 LX executables, but I think it'd be cool to make those work, too.) The next pet peeve I had is that over network shares, most EXE files I had wouldn't get icons... It's because of the file size limit for remote thumbnails. If you bump the limit up really high, you'll get EXE thumbnails, but at the cost of downloading every single EXE, every single time you browse a remote folder. Yes, no caching, due to another bug. The next KDE Frameworks version fixes most of this: other people sorted out multiple PreviewJob issues with caching on remote files, and I finally merged an MR that makes KIO use kio-fuse when available to spawn thumb creators instead of always copying to a temporary file. With these improvements combined, not just EXE thumbnails but also video thumbnails work great on remote shares, provided you have kio-fuse running. There's still no mechanism to bypass the file size limit even if both the thumb creator and kio-fuse remote can handle reading only a small portion of the file, but maybe some day. (This would require more work. Some kio slaves, like for example the MTP one, could support partially reading files but don't because it's complicated. Others can't, but there's no way for a kio-fuse client to know that.
Meanwhile thumb creators may sometimes be able to produce a thumbnail without reading most of the file and sometimes not, so it feels like you would need a way to bail out if it turns out you need to read a lot of data. Complicated...)
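If you want to experiment with that remote size limit yourself, the knob lives in kdeglobals (key name as of recent KIO versions; value in bytes) — with the caveat that, before the fixed frameworks release, raising it means re-downloading those files on every browse:

```ini
# ~/.config/kdeglobals — let PreviewJob fetch larger remote files
[PreviewSettings]
MaximumRemoteSize=10485760
```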
I could've left most of that detail out, but I want to keep the giant textwall. To me this little bit of polish actually matters. If you browse an SMB share on Linux you should see icons for the EXE files just like on Windows, without any need to configure anything. If you don't have that, then right from the very first double-click the first experience is a bad one. That sucks.
Linux has thousands of these papercuts everywhere and easily hundreds for Wine alone. They seem small, but when you try to fix them it's not actually that easy; you can make a quick hack, but what if we want to do things right, and make a robust integration? Not as easy. But if you don't do that work, you get where we're at today, where users just expect and somewhat tolerate mediocre user experience. I think we can do better, but it takes a lot more people doing some ultimately very boring groundwork. And the payoff is not something that feels amazing, it's the opposite: it's something boring, where the user never really has any hesitation because they already know it will work and never even think about the idea that it might not. Once you can get users into that mode you know you've done something right.
Thanks for coming to my TED talk. Next time you have a minor pet peeve on Linux, please try to file a bug. The maintainers may not care, and maybe there won't be anyone to work on it, and maybe it would be hard to coordinate a fix across multiple projects. But honestly, I think a huge component of the problem is literally complacency. Most of us Linux users have dealt with desktop Linux forever and don't even register the workarounds we do (any more than Windows or Mac users do, although they probably have a lot fewer of them). To get to a better state, we've gotta confront those workarounds and attack them at the source.
[1]: https://github.com/jchv/winemon just an experiment though.
> To get to a better state, we've gotta confront those workarounds and attack them at the source.
To my eye, the biggest problem with Linux is that so few are willing to pony up for its support. From hardware to software.
Buy Linux computers and donate to the projects you use!
If you want a stable, repeatable way to wrangle a Windows tool: Wine is it. It's easy to deploy and repeat, requires no licenses, and has consistent behavior every time (unless you upgrade your Wine version or something). Great integration with Linux. No Windows Updates are going to come in and wreck your systems. No licensing, no IT issues, no active directory requirements, no forced reboots.
If I were to do it with a Windows VM, I'd need to:
1. Create the VM image and figure out how to build/deploy it.
2. Sort out the Windows licensing concerns.
3. Figure out how to launch my tool (maybe put an SSH server into the VM).
4. Figure out how to share the filesystem (maybe rsync-on-SSH? Or an SMB fileshare?).
If I do it with Wine instead, all I need to do is:
1. Install some pinned version of Wine.
2. Install my tool into Wine.
3. Run it directly.
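Those three steps can be sketched like this (the prefix path, installer name, and flags are hypothetical examples; the wine calls only run if Wine and the installer are actually present):

```shell
#!/bin/sh
# One pinned prefix per tool keeps behavior reproducible across machines.
WINEPREFIX="$HOME/.local/share/prefixes/mytool"   # hypothetical location
export WINEPREFIX
echo "prefix: $WINEPREFIX"

if command -v wine >/dev/null 2>&1 && [ -f setup.exe ]; then
    wineboot --init                          # create/update the prefix
    wine setup.exe /S                        # install the tool (example installer)
    wine 'C:\Program Files\MyTool\tool.exe'  # run it
else
    echo "wine or installer not present; see the steps above"
fi
```

Pinning then just means never upgrading the Wine package that this prefix was built against.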
* Wine is surprisingly good these days for a lot of software. If you only have an app or two that need Windows it is probably worth trying Wine to see if it meets your needs.
* Similarly, if gaming is your thing Valve has made enormous strides in getting the majority of games to work flawlessly on Linux.
* If neither of the above are good enough, dual booting is nearly painless these days, with easy setup and fast boot times across both OSes. I have grub set to boot Linux by default but give me a few seconds to pick Windows instead if I need to do one of the few things that I actually use Windows for.
Which you go for really depends on your ratio of Linux to Windows usage and whether you regularly need to mix the two.
The possibilities seem endless and kinda confusing with Windows on ARM vs Rosetta and Wine, think there's some other options which use MacOS's included virtualization frameworks.
Doesn't Linux as well?
I discovered over the weekend that only one monitor works (over HDMI; DisplayPort isn't working, even after trying different drivers). Suspend takes a good 5 minutes, and on resume, the UI is either torn or things barely display.
I might buy a Windows license, especially if I can't get multi-screen to work.
Side note: our main use case is using cuda for image processing.
To be fair I stay away from NVIDIA too; I would probably run a separate headless box for those GPU workloads if I needed to.
I've tried this in the past but I was unable to get the debugger to work from within a VM.
Has this improved, or is there a trick, or are you just going without a debugger?
You cannot claim with a straight face that Virtualbox is easier to use.
I think the two fairly deep integrations are Windows's ability to navigate WSL's filesystem and WSLg's fairly good ability to serve up GUIs.
The filesystem navigation is something that AFAIK can't easily be replicated. WSLg, however, is something that other VMs have and can do. It's a bit of a pain, but doable.
What makes WSL nice is the fact that it feels pretty close to being a native terminal that can launch native applications.
I do wish that WSL1 was taken further. My biggest gripe with WSL is the fact that it is a VM and thus has a large memory footprint. It'd be nice if the WSL1 approach had panned out and we instead had a nice clean compatibility wrapper over winapi for linux applications.
Sure, but I never claimed otherwise.
> You cannot claim with a straight face that Virtualbox is easier to use.
I also didn't claim that. I wasn't comparing WSL to other virtualization solutions.
WSL2 is cool. Linux doesn't have a tool like WSL2 that manages Linux virtual machines.
The catch-22 is that it doesn't need one. If you want to drop a shell in a virtual environment, Linux can do that six ways to Sunday with no hardware VM in sight, using the myriad namespacing technologies available.
So while you don't have WSL2 on Linux, you don't need it. If you just want a ubuntu2204 shell or something, and you want it to magically work, you don't need a huge thing with tons of integration like WSL2. A standalone program can provide all of the functionality.
I have a feeling people might actually be legitimately skeptical. Let me prove this out. I am on NixOS, on a machine that does not have distrobox. It's not even installed, and I don't really have to install it since it's just a simple standalone program. I will do:
$ nix run nixpkgs#distrobox enter
Here's what happened:
$ nix run nixpkgs#distrobox enter
Error: no such container my-distrobox
Create it now, out of image registry.fedoraproject.org/fedora-toolbox:latest? [Y/n]: Y
Creating the container my-distrobox
Trying to pull registry.fedoraproject.org/fedora-toolbox:latest...
...
0f3de909e96d48bd294d138b1a525a6a22621f38cb775a991974313eda1a4119
Creating 'my-distrobox' using image registry.fedoraproject.org/fedora-toolbox:latest [ OK ]
Distrobox 'my-distrobox' successfully created.
To enter, run:
distrobox enter my-distrobox
Starting container... [ OK ]
Installing basic packages... [ OK ]
Setting up devpts mounts... [ OK ]
Setting up read-only mounts... [ OK ]
Setting up read-write mounts... [ OK ]
Setting up host's sockets integration... [ OK ]
Integrating host's themes, icons, fonts... [ OK ]
Setting up distrobox profile... [ OK ]
Setting up sudo... [ OK ]
Setting up user groups... [ OK ]
Setting up user's group list... [ OK ]
Setting up existing user... [ OK ]
Ensuring user's access... [ OK ]
Container Setup Complete!
[john@my-distrobox]~% sudo yum install glxgears
...
Complete!
[john@my-distrobox]~% glxgears
Running synchronized to the vertical refresh. The framerate should be
approximately the same as the monitor refresh rate.
302 frames in 5.0 seconds = 60.261 FPS
^C
No steps omitted. I can install software, including desktop software, including things that need hardware acceleration (yep, even on NixOS where everything is weird) and just run them. There's nothing to configure at all. That's just Fedora. WSL can run a lot of distros, including Ubuntu. Of course, you can do the same thing with Distrobox. Is it hard? Let's find out by using Ubuntu 22.04 instead, with console output omitted:
$ distrobox create --image ubuntu:22.04
...
$ distrobox enter ubuntu-22-04
...
$ sudo apt install openarena
...
$ /usr/games/openarena
To be completely, 100% fair: running an old version of Ubuntu like this does actually have one downside: it triggers OpenGL software rendering for me, because the OpenGL drivers in Ubuntu 22.04 are too old to support my relatively new RX 9070 XT. You'd need to install or copy in newer drivers to make it work. There are in fact ways to do that (Ubuntu has no shortage of repos just for getting more up-to-date drivers, and they work inside Distrobox pretty much the same way they work on real hardware). Amusingly, this problem doesn't impact NVIDIA, since you can just tell distrobox to copy in the NVIDIA driver verbatim with the --nvidia flag. (One of the few major points in favor of proprietary drivers, I suppose.) On the other hand, even trying pretty hard (and using special drivers) I could never get hardware acceleration for OpenGL working inside of WSL2, so it could be worse.
That aside, everything works. More complex applications (e.g. file browsers, Krita, Blender) work just fine and you get your normal home folder mapped in just like you'd expect.
IDK how many VMs you've used, but there has been a lot of work specifically with x86 to make VMs nearly as fast as native. If you interact with cloud services everything you do is likely on a VM.
Apparently Linux VMs on other people's computers is very much appreciated.
It's a feature of the NT-family of kernels where you can create many environments sharing the same underlying executive and HAL.
It's a quite interesting way to build an OS: https://en.wikipedia.org/wiki/Architecture_of_Windows_NT
If you have control over where you put your git repo, WSL2 will hit max speed. If you want it shared between OSes, WSL2 will be slower.
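A quick way to feel that difference yourself from inside WSL2 (the repo paths are examples; the loop just skips paths that don't exist on your machine):

```shell
#!/bin/sh
# Compare git status timing on the VM's own ext4 filesystem vs. the
# 9p-mounted Windows drive under /mnt/c.
checked=0
for repo in "$HOME/src/myrepo" "/mnt/c/Users/me/src/myrepo"; do
    checked=$((checked + 1))
    if [ -d "$repo/.git" ]; then
        echo "timing git status in $repo"
        time git -C "$repo" status >/dev/null
    else
        echo "skip: $repo not present"
    fi
done
```

On most setups the /mnt/c path is dramatically slower, which is why keeping repos on the Linux side matters.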
Turns out that it's easier to emulate a CPU than syscalls. The CPU churns a lot less, too, which means that once things start working things tend to keep working.
I like Linux, and I use Linux as my daily desktop, but it's not because I think Linux or even UNIX is really that elegant. If I had to pick a favorite design it would be Windows NT for sure, even with all its warts. That said, the company behind Windows NT really likes to pile a lot of shit I hate on top of that pretty neat OS design, and now it's full of dubious practices. Automatic "malware submission" on by default, sending apps you download and compile yourself to Microsoft and even executing them in a VM. Forced updates with versions that expire. Unbelievable volumes of network traffic, exfiltrating untold amounts of data from your local machine to Microsoft. Ads and unwanted news all over the UI. Increasing insistence in using a Microsoft account. I could go on and on.
From a technical standpoint I do not think the Linux OS design is superior. I think Linux has some amazing tools and APIs. dmabufs are sweet. Namespaces and cgroups are cool. BPF and its various integrations are borderline insane. But at its core, ... it's kinda ugly. These things don't all compose nicely and the kernel is an enormous hard-to-tame beast. Windows NT has its design warts too, all over, like the amount of involvement the kernel has in the GUI for historical reasons, and the enormous syscall surface area, and untold amounts of legacy cruft. But all in all, I think the core of what they made is really cool, the subsystems concept is super cool, and it is an OS design that has stood up well to time. I also think the PE format is better than ELF, and that it is literally better for the capabilities it doesn't have w.r.t. symbols. Sure it's ugly, in part due to the COFF lineage, but it's functionally very well done IMO.
I feel the need to say this because I think I probably came off as a hater, and tbh I'm not even a hater of WSL2. It's not as cool as WSL1 and subsystems and pico processes, but it's very practical and the 9p bridge works way better than it has any right to.
Thanks for pointing this out.
The newer WSL1 uses kernel call translation, like Wine in reverse, and WSL2 runs a full-blown Linux kernel in a Hyper-V VM. To my knowledge neither of these shares anything with the aforementioned POSIX subsystem.
But having Windows tightly integrated when needed is nice.
If only I could replace the Windows shell with a Linux DE...
It is... I'm working these days on bringing a legacy windows only application to the 21st century.
We are throwing a WSL container behind it and relying on the huge ecosystem of server software available for Linux to add functionality.
Yes that stuff could run directly on windows, but you'd be a lot more limited in what's supported. Even for some restricted values of supported. And you'd have to reinvent the wheel for a few parts.
So, for me Windows + WSL is more productive than just using Linux. The UI is still better on Windows (basic utilities like File Explorer and config management are better on Windows). No remoting software beats RDP: when I remote to a Windows workstation through RDP, I can't tell the difference, while VNC is always janky. And of course there's Word/Excel/Illustrator, which are simply not available on Linux.
I mean this is basically heresy now.
most code is virtualised, or sandboxed, or in a VM, or a docker container, or several of the above at the same time.
I actually just tried WINE for the FIRST time (surprisingly, I have been out of the Windows world for so long)
And as long as I installed the binaries from their repo, not Debian 12, it worked very well
Wine is an impressive project too. It's not a VM, which has upsides and downsides, but I was able to run TDM-GCC, Python 3, and git bash in it!
I use linux. I don't need WSL at all. Not at work nor at home.
So you praise WSL because you use Windows as your main system? Then yes, it's great. It definitely makes the Windows experience a lot better.
OpenSSH for Windows was also a game changer. Honestly, I have no clue why Microsoft took so long on that.
Iterating on improvements and polishing screens and designs that they haven't touched in the past 30 years. Improving on ARM support, etc. And STOP adding ads to the OS.
And the Surface Laptop continues to push hardware quality forward: speakers, touchpad, screen, motherboard, etc.
I honestly think Microsoft could win back some mind share from Apple if they:
* Put out a version of windows without all the crap. Call it Dev edition or something and turn off or down the telemetry, preinstalled stuff, ads, and Copilot.
* Put some effort into silicon to get us hardware with no compromises like the Macbooks
I'm on Mac now, and I jump back and forth between Mac laptop and a Linux desktop. I actually prefer Windows + WSL, but ideologically I can't use it. It has potential - PowerToys is fantastic, WSL is great, I actually like PowerShell as a scripting language and the entire new PC set up can now be done with PowerShell + Winget DSC. But, I just can't tolerate the user hostile behavior from Microsoft, nor the stop the world updates that take entirely too long. They should probably do what macOS and Silverblue, etc. do and move to an immutable/read-only base and deploy image based updates instead of whatever janky patching they do now.
Plus, I can't get a laptop that's on par with my M4 Pro. The Surface Laptop 7 (the arm one) comes close, but still not good enough.
That said I'd pay for a dev edition as you described it, that would be fantastic.
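The Winget DSC flow mentioned above looks roughly like this — a sketch of a configuration file applied with `winget configure -f configuration.dsc.yaml` (package ids here are just examples):

```yaml
# configuration.dsc.yaml — declarative machine setup via winget
properties:
  configurationVersion: 0.2.0
  resources:
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Git
      settings:
        id: Git.Git
        source: winget
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install PowerToys
      settings:
        id: Microsoft.PowerToys
        source: winget
```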
I get that customers and most people don't know about it, but it's kind of ridiculous that techy people in a tech forum don't know how to do it.
Why? HN has traditionally always largely been a macOS and Linux crowd. Why do we have to care about fixing an OS that is broken out of the box (that most of us don't use anyway)?
Far too many Linux users, especially, make fun of Windows, and if you dig a bit you see that most of their complaints are things that are solved with 5 minutes of googling. Some complaints are philosophical, and those I agree with, but even in that case, I'd be curious how consistent they are with their philosophy when, for example, Linux desktop environments do weird things.
Summarizing a bit: Linux users with years or decades of experience tinkering as sysadmins with Linux frequently make junior-level user complaints about Windows usage, often based on outdated information about it.
I say this as someone who has been using both Linux and Windows for a few decades now and has a fairly decent level of sysadmin skills on both.
as far as MS are concerned, that crap is their business.
Or, possibly, that crap is the multitude of little software empires built by the management layer now in control.
This is the only reason I have not requested a windows laptop from my company. WSL is better for docker development in basically every way than a mac can be (disclaimer: haven't tried orbstack yet, heard good things, but my base assumption is it can't be better than WSL2) except it is literally impossible to get hardware as good as the M3 or M4 for any other OS than macOS.
The MacBook Air M4 supports two external displays now (with the lid open):
https://support.apple.com/guide/macbook-air/use-an-external-...
My SOs MacBook Air can only do one external monitor, even though it has the same specs as her work Pro.
The MacBook Pro with the non-Pro/Max chip (i.e. MacBook Pro M3) has the same limitations as the corresponding MacBook Air (i.e. MacBook Air M3).
No. This is just you repeating marketing.
No Nvidia chip = B tier at best.
I have a $700 Asus with a 3060 that is better. Go ahead and scale up to a $2000 computer with an Nvidia chip and it's so obviously better, there is nothing to debate.
No one cares about performance per watt; it's like someone ran a 5k race, came in 3rd and said "Well at least I burned fewer calories than the winner!"
Not only that, but being able to run very intensive work (Pro Audio, Development...) seamlessly is an absolute pleasure.
Its screen is one of the best screens out there.
The trackpad (and some keyboards) are an absolute pleasure.
The robustness of the laptop is amazing.
I don't care about the marketing of Apple, I don't buy anything new they launch, and I condemn all of their obscure pricing techniques for the tech they sell. But my M1 is rocking like the first day, after four years of daily use. That's something my Windows laptops have never delivered to me.
Apple has done a lot of things wrong, and I will not buy another Apple laptop in the future, but I don't want Nvidia on a Laptop, I want it to be portable, powerful and durable.
That is changing now, and it's amazing. I want my laptop to be mine, and to be able to install any OS I like. New laptops with arm64 and Intel Lake cpus are promising, but we're not there yet, at least not that I have experienced.
Each to their own for sure, and for you, the nvidia requisite is important. For me it's not about brands, but usability for my work and hobbies.
Nvidia chip = 45 minutes of battery life
LTSC is a version like that
https://www.windowscentral.com/software-apps/windows-11/what...
But the increasing market share of Macs and even Linux these days, plus the ever-increasing number of OSS initiatives from Microsoft, points out that Microsoft knows a lot fewer of their users are as captive as they were in the 90's, for example.
The biggest difference between OSX and Windows is, Apple adds (some say steals) functionality from competition and open source. They make it neat. On Windows, to have something working you need WezTerm, Everything for search, Windhawk for a vertical taskbar on the right, PowerToys for an app starter, Folder Size for disk management, etc. If you spend a lot of time, Win11 can be OK to work with.
If Powerpoint and Affinity would work on Linux, I'd use Linux though.
Huh? Windows supports vertical taskbar.
Each OS is going to have extension applications to improve on the OOTB experience. This is an invalid argument for choosing one over the other.
Interestingly enough, beyond release upgrades (happening maybe once a year), all or maybe 99% of updates took ~5 minutes of interruption for me, including the needed reboot. I really wonder how others manage to have "entirely too long" updates.
I find it dismaying that people on Hacker News willingly submit to incredibly user-hostile behavior from Microsoft and call it "the best of both worlds". Presumably a nontrivial proportion here are building the next generation of software products - and if we don't even respect ourselves, how likely is it that we will respect our users?
At least the nags in Windows look like modern web-based UI (insofar as ‘use Electron’ seems to be the post-Win 8 answer to ‘how to make Windows apps’), in contrast to MacOS, which drove my wife crazy with nag dialogs that look like a 1999 refresh of what modal dialogs looked like on the classic Mac in 1984.
I have since moved to macbooks for the hardware, but until not too long ago WSL was my linux "distro" of choice because I didn't want to spend time configuring my computer to make basic things work like suspend/wake on lid down/up, battery life, hardware acceleration for video playback on the browser, display scaling on external monitor and so on.
> solved a while ago
Can not be the case because I was facing these issues less than a couple of years ago.
I was responding to the "Stockholm syndrome" comment specifically because there are a number of hardware and software problems (e.g. https://jayfax.neocities.org/mediocrity/gnome-has-no-thumbna...) with using linux as a desktop operating system that linux users have to find their way around, so I found the comment rather full of irony.
PS: I already know that the file-picker issue has been fixed. That does not take away from the fact that it was in fact broken for decades. It is only meant as an example.
Just like with Mac and Windows, you choose the supported hardware, and everything is flawless.
And it's not clear what the Linux ones are. Like, our dept ordered officially Linux-supported Thinkpads for whoever wanted them, and turns out they still have unsolved Bluetooth audio problems. Those people use wired headphones now.
People whose main environment is Linux intentionally buy hardware that works flawlessly with Linux.
People who try Linux occasionally do it on whatever hardware they have, which still almost always works with Linux, but there are occasional issues with sketchy Windows-only hardware or insufficiently tested firmware or flaky wifi cards, and that is enough for there to be valid anecdotes in any given comments section with several people saying they tried it and it isn't perfect. Because "perfect" is a very high bar.
There is also the quiet part to this. People who religiously use Linux and think that it is the best OS that can ever be, don't realize how many little optimizations go into a consumer OS. They use outdated hardware. They use the lower end models of the peripherals (people still recommend 96 DPI screens just for this). They use limited capabilities of that hardware. They don't rely on deeply interactive user interfaces.
It also doesn't appear to be the case even now. I searched for laptops available in my country that fit my budget, and for each laptop searched "<laptop name> linux reddit" on google and filtered for results <1 year old. Each laptop's reports included some bug or other.
https://www.reddit.com/r/linuxhardware/comments/1hfqptw/linu...
https://www.reddit.com/r/linuxhardware/comments/1esntt3/leno...
https://www.reddit.com/r/linuxhardware/comments/1j3983j/hp_o...
https://www.reddit.com/r/linuxhardware/comments/1k1nsm8/audi...
The laptop with the best reported linux support seemed to be Thinkpad P14s but even there users reported tweaking some config to get fans to run silently and to make the speakers sound acceptable.
https://www.reddit.com/r/thinkpad/comments/1c81rw4/thinkpad_...
Which Linux? Each distro is essentially a different operating system.
Modern means systemd, pipewire, Wayland, Gnome, an up to date kernel, etc... So the current Ubuntu and Fedora releases.
I've had 100% working laptops for 15 years now. Because I always run the newest Ubuntu.
Maybe I was too positive on Fedora (I was going by its reputation; I use Ubuntu for work). Ubuntu is solid.
Link 1: screen only updating every 2 seconds, visual glitches. Link 2: brightness reset to full on screen unlock, fans turning on when charging. Link 3: bluetooth troubles, speakers can't be muted if headphone jack is on mute. Link 4: audio quality and low volume, wifi not coming back after sleeping. Link 5: fans being too loud, poor sound quality.
Either your Stockholm syndrome is affecting your reading comprehension or you just take bugs like these as part of the normal "working perfectly" linux experience.
On previous laptops (all ThinkPads) I used to be able to get everything to work (debian), but it did take effort and finding the correct resources. Unfortunately all the old documentation about this stuff is pre-systemd and pre-UEFI, and it's not exactly straightforward anymore.
Note that NVIDIA drivers didn't get better because they are more open source now. They are not, really: GPUs are now entire independent computers with their own little operating system, and some significant parts of the driver now run on that computer.
Yes, the manufacturers may allocate some people to deal with it and the corrosiveness of the kernel community. But why? Intel and AMD use that as a marketing and sales strategy. If the hardware manufacturer is the best one there is, where is the profit in supporting Linux? Even Thinkpads don't have 100% support for all the little sensors and PMICs.
The HiDPI issue hasn't been completely solved yet. Bluetooth is still quite unreliable. MIPI support should be the best due to the number of devices, until you realize everybody did their own shitty external driver and there are no common good drivers for MIPI cameras, so your webcam doesn't work. The USB stack is still dodgy. Microsoft in the 90s had a cart of random hardware populating the USB tree completely, and they just fucked with the NT kernel, plugging and unplugging, until it didn't break anymore, for love's sake. Who did that level of testing with Linux?
I'd at least try Linux cause I abhor Microsoft, but idk if it'd work out.
Indeed, it does. Having a stable system, not dealing with Linux on the desktop, and clear trade-offs (like "just add another 16 GB RAM stick to your laptop/desktop and you're golden") is great for peace of mind.
The average uptime on my laptops (note the plural) is ~3 weeks, until the next Windows Update is applied. I have no nostalgia for the days of using Linux on the desktop (~2003 as a student, ~2008 giving it one more try, ~2015 as required by my day job).
Of course it helps that I can tell the people around me (often not tech guys, but smart enough to know basic concepts and run bash scripts provided to them) "yep, any machine with 32 GB+ of RAM will work fine, choose whatever you like", and it works.
There's also debootstrap which is useful for this technique, not sure if it also works on Ubuntu.
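For what it's worth, debootstrap is packaged on Ubuntu too and can bootstrap both Debian and Ubuntu releases. A minimal sketch (the release name and target directory are just placeholders):

```shell
# Install the bootstrap tool (available in both Debian and Ubuntu repos)
sudo apt-get install debootstrap

# Populate ./rootfs with a minimal Debian stable userland
sudo debootstrap stable ./rootfs http://deb.debian.org/debian

# Drop into the new tree; systemd-nspawn -D ./rootfs also works
sudo chroot ./rootfs /bin/bash
```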
https://learn.microsoft.com/en-us/windows/wsl/connect-usb
I regularly run ADB through WSL2 using this.
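The linked page describes the usbipd-win flow. Roughly, assuming bus ID 4-2 is your phone (check with `usbipd list`; these flags are from recent usbipd-win versions, older ones used a `usbipd wsl attach` subcommand instead):

```shell
# On Windows, in an elevated prompt: share the device, then attach it to WSL
usbipd list
usbipd bind --busid 4-2
usbipd attach --wsl --busid 4-2

# Inside WSL, the phone should now show up
adb devices
```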
I love WSL because it lets me have the best of Windows and Linux.
In fact, I'm a little annoyed that I can't get a comparably smooth experience on my MacBook without spinning up a full QEMU VM. I know it's a bit hypocritical since, like most people, I run WSL2 (which is container/VM-based), not WSL1 (the original magic syscall translation vision).
Does anyone know why there's no lightweight solution on macOS - something like LXC plus a filesystem gadget - that would let me run stuff like "apt-get install chromium"?
I use WSL2 to handle Linux (and Windows cross-) compilation regularly, along with running a number of native tools that are specific to Linux.
I've never had any issues with that, even to the point that I've been able to run MAME natively from Linux and have it show up like any other windowed app.
Another, smaller, gripe is networking. Because of how WSL is networked, I've run into edge-case issues with connecting to networked applications running in WSL from Windows.
https://www.amazingcto.com/upgrading-wsl-with-zsh-and-comman...
This is not often discussed, so it took me a lot of digging a couple of years ago, and I'm still surprised it never comes up as a consequence / side effect / downside of WSL2: turning on Hyper-V has performance impacts, which may or may not be relevant to the user (e.g. if this is also their gaming machine).
Or on a macOS Desktop. Bonus: doing so on either platform doesn't also mean your host OS is running under a hypervisor, as it does with WSL2.
Bigger bonus: you don't have to run fucking Windows.
Why do you think, technologically, this is some form of "bonus"?
You can run multiple Linux distributions in chroots or containers, such as Docker containers. I have shown people how to build packages for Ubuntu 22.04 on Ubuntu 20.04, for example.
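A sketch of that workflow, using the stock Ubuntu image and the usual Debian packaging tools (adjust the build dependencies for your actual project):

```shell
# Build a .deb for Ubuntu 22.04 on an Ubuntu 20.04 host by mounting
# the source tree into a 22.04 container and building inside it
docker run --rm -v "$PWD":/src -w /src ubuntu:22.04 bash -c '
  apt-get update &&
  apt-get install -y build-essential devscripts debhelper &&
  dpkg-buildpackage -us -uc
'
```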
For this part, I just create systemd-nspawn containers.
Last time I wanted to test something in a very old version of WebKit, creating a Debian Jessie container took a few minutes. Things run at native speed.
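As a sketch of that setup (Jessie now lives on archive.debian.org, so signature checking may need to be relaxed; the paths are placeholders):

```shell
# Bootstrap a Jessie tree into the standard machinectl location
sudo debootstrap --no-check-gpg jessie /var/lib/machines/jessie \
    http://archive.debian.org/debian

# Start a shell in the container; no VM involved, so it runs at native speed
sudo systemd-nspawn -D /var/lib/machines/jessie
```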
... but WSL is an excellent piece of work. It's really easy to deploy apps on. Frankly, it can be easier to do a deployment there than on a Linux or macOS system, for the reasons detailed above.
It might be due to my corpo's particular setup etc. but for me 95% of the value of WSL would be the ability to run it on "corporate" Windows boxes. Alas.
For a person who will not invest the time to learn, e.g., how to avoid or minimise dependencies, indeed something like Windows with WSL may appear "more powerful".
The point of this comment is that "power" comes from learning and know-how as much as, if not more than, simply from the choice of operating system. That said, some choices may ultimately spell the difference between limitation and possibility.
I have been using it since the beginning of WSL 1 with a very terminal heavy set up but it has some issues.
For example WSLg's clipboard sharing is buggy compared to VcXsrv. It doesn't handle pasting into Linux apps without introducing Windows CRs. I opened an issue for this https://github.com/microsoft/wslg/issues/1326 but it hasn't gotten a reply.
Also, systemd is still pretty sketchy. It takes over 2 minutes for systemd services to start and if you close a WSL 2 terminal for just a few minutes systemd will delay a new terminal from opening for quite some time. This basically means disabling systemd to use WSL 2 in your day to day.
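For anyone hitting the same thing: the toggle lives in the distro's /etc/wsl.conf, followed by a `wsl --shutdown` from Windows to take effect (this is my understanding of the current format; it has changed across WSL versions):

```ini
[boot]
systemd=false
```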
Then there's this 6 year old issue with 1,000+ upvotes https://github.com/microsoft/WSL/issues/4699 around WSL not reclaiming disk space. It means you need to routinely shut everything down and compress your VM's disk or you'll run out of space.
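The usual workaround is to shut WSL down and compact the virtual disk from an elevated PowerShell. Sketch below; the ext4.vhdx path varies per distro install, and Optimize-VHD requires the Hyper-V module (diskpart's `compact vdisk` is the fallback):

```powershell
wsl --shutdown

# Path is an example; find your distro's ext4.vhdx under %LOCALAPPDATA%\Packages
Optimize-VHD -Path "$env:LOCALAPPDATA\Packages\<distro>\LocalState\ext4.vhdx" -Mode Full
```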
Beyond that it does work well, so I'm happy it exists.
That doesn't sound good. I was planning to set up a Windows/WSL2 box, but this gives me second thoughts. Where can I read more about this?
I'd venture to say this depends on which OS you're more comfortable with. I'm more comfortable with Linux, so I'd say it's easier/better/less janky to use Linux as a host OS.
> Like if one project has a dependency on Ubuntu22 and another is easier with Ubuntu24. You don't have to stress "do I update my OS?"
Once you're a developer who's been burned by this enough times, you do this with containers or dedicated dev VMs. You do not develop on your host OS and stay sane.
Here's the main difference between making Windows vs Linux the main OS from my POV: Windows is a lot of work and only the corporate editions can be converted into not-a-hot-mess-of-distractions (supposedly). Out of the box Linux doesn't have all of the bullshit that you have to spend time ripping out of Windows. You can easily re-install Linux to get the "powerwash" effect. But if you powerwash Windows you have to go back and undo all the default bullshit again.
Having said that Windows+WSL is a very nice lifeline if you're stuck in Windows-land. It's a much better combo than MacOS.
Have you tried lxd? It's far less janky than Docker (IMHO) to achieve what you describe. Docker is uniquely unsuited to your use case.
The Linux on Desktop is finally approaching, in more than one "shape", none of which is the shape some people expected/wanted.
This is the kind of statement that makes you pay the karma tax. WSL is great, I use it on a day to day basis. I also use Linux on a day to day basis. And as great as WSL is, for running Linux software on supported hardware, Linux beats WSL hands down. And I mean, of course it does, do you expect a VM to beat native? In the same way that Windows software runs better on Windows. (with a few exceptions on both sides).
Compared to Linux, WSL I/O is slow, graphics is slow and a bit janky, I sometimes get crashes, memory management is suboptimal, networking has some quirks, etc... These problems are typical of VMs as it is hard for the host and guest OS to coordinate resource use. If you have an overpowered computer with plenty of RAM, and are mostly just using the command line, and don't do anything unusual with your network, then sure it may be "better" than Linux. But the truth is that it really depends on your situation.
Hardware performance counters basically do not work in WSL2, which among other issues, makes it extremely difficult to use rr. https://github.com/rr-debugger/rr/issues/2506#issuecomment-2... Some people say they got it working, but I and many other users encounter esoteric blockers.
The Dozen driver is never at feature parity with native Linux Vulkan drivers, and that's always going to be the case.
By default, WSL security mitigations cause GCC trampolines to just not work, which partly motivated the opt-in alternative implementations of trampolines last year. https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=28d8c680aaea46...
WSLg is also a terrible X11 server that makes many very basic window management configurations impossible, and while I prefer VcXsrv, it has its own different terrible issues.
I can imagine that WSL2 looks attractive if all you want to do is run command line apps in multiple isolated environments, but it is miserable for anything graphical or interactive.
Indeed, that's my case: using the CLI mostly for ssh/curl/vim, Ansible, Puppet, and so on.
For GUI part, Windows is chosen and shines for me.
Hmm...
> WSL is more powerful than Linux
Oh.
Are you a Windows user who is happy to have a good way to run Linux on Windows, or are you a Linux user trying to convince other Linux user that instead of using Linux, they should use Linux in a VM running on Windows?
I am a longtime Linux user, and I can't see a reason in the universe why I would want to access my Linux through a VM on Windows. That seems absolutely insane.
[1]: https://developer.apple.com/documentation/virtualization/vzl...
P.S. They also specifically built Rosetta for Linux to compile x64 Linux binaries into aarch64 to run inside Linux VMs on their machines.
Apple could just have gone and done a straight port of the iOS boot procedure to their ARM Mac lineup... and we'd have been thoroughly screwed, given how long ago the latest untethered bootrom exploit was.
Or they could have pulled a Qualcomm, Samsung et al. and just randomly changed implementation details between each revision to make life hell for alt-OS implementers (which is why so many Android BSP dumps are the way they are, with zero hope of ever getting anything upstream). Instead, to the best of my knowledge, the UART on the M series SoCs dates right back to the very first iPod.
The fact that the Asahi Linux people were able to create a GPU driver that surpasses Apple's own in conformance tests [1], despite not having any kind of documentation at all is telling enough - and not just of the pure genius of everyone involved.
[1] https://appleinsider.com/articles/23/08/22/linux-for-apple-s...
Could've been worse. At least they're not locking you out of your device like on iPhones and iPads. They don't stop you from running Asahi, they just aren't interested in helping anyone run Asahi.
Microsoft, on the other hand, sells laptops that actively prevent you from running Linux on them. Things get a little blurry once you hit the tablet form factor (Surface devices run on amd64, but are they really that different from an iPad?) where both companies suck equally, though Microsoft also sells tablets that will run Linux once someone bothers to write drivers for them.
Which is... not necessarily wrong.
Also, it's really annoying that macOS switched to zsh. It's not a drop-in for bash. Yeah you can change it back to bash, but then any Mac-specific help/docs assume zsh.
Parallels also has a commercial offering that does some nice GUI-level integration with both Windows and Linux VMs.
My understanding is that these are both built on top of some Apple API, and Parallels actually collaborates with Apple on making it work for their use case. So it's not the first-class support that you get from Microsoft with WSL, but it's still pretty good.
Exactly the same experience as WSL: great out-of-the-box experience, easy to use, and it insists on using its own patched kernel.
In this case, who except Microsoft would have paid for the development here?
(Also, I'm surprised that WSL 1 is still supported. It must be in maintenance mode though, right?)
MacOS has a lot of issues (mostly by Apple recent policy changes), but posix systems are more alike than different. =3
It makes it sound like Microsoft is giving some capability to Linux whereas it's the other way around.
Source: https://x.com/richturn_ms/status/1245481405947076610?s=19
https://xcancel.com/richturn_ms/status/1245481405947076610?s...
“ I still hope to see a true "Windows Subsystem for Linux" by Microsoft or a windows becoming a linux distribution itself and dropping the NT kernel to legacy. Windows is currently overloaded with features and does lack a package manager to only get what you need...”
OS/2 basically ran a copy of Windows (either the existing one or a bundled one) to execute Windows programs side by side with OS/2 (and DOS) software.
That said, to address the grandparent comment’s point, it probably should be read as “Windows Subsystem for Linux (Applications)”.
That's not what I say, that's what the former PM Lead of WSL said. To be fair, Windows Services for UNIX was just Unix services for Windows. Probably the same logic applied there back then: they couldn't name it with a leading trademark (Unix), so they went with what was available.
When it works, it's great! When it doesn't... oh man, it sucks. It has been non-stop networking and VPN problems, X server issues, window scaling issues, hardware-accelerated graphics not working, etc. this whole time. I've spent more time trying to fix WSL issues than actually developing software. It's never gotten better.
It's fast. It's powerful. But using it as a daily driver is very painful in my experience. I avoid it as much as possible and do most of my work in MSYS2 instead. Sure, it's much slower. But at least it works consistently and has for years.
Now do NT.
Given what Windows has become and already discussed here on HN I would even hesitate to run it in a virtual machine.
Edit: more than 15 years.
I used to do VFIO with hardware passthrough so I could have linux but still run windows software like CAD that takes advantage of the gfx card. That was a pain to set up and use.
The other way around, it's very simple. WSL2 can run ML tasks with just a tiny bit of overhead in moving the data to the card.
I'm not a novice either; $dayjob has me working at the lowest levels of Linux on a daily basis, and I did Linux From Scratch on a Pentium 2 when I was 12. All that to say: yes, I happen to agree, but edge cases are out there. The blanket statement doesn't apply to all use cases.
I set it up originally for gaming, but nowadays I install a lot of disposable software there.
I use Linux guests VMs too (a la Qubes), but sadly there's no guest support for looking-glass on Linux. Native rendering speeds on VMs are something hard to let go.
1. I use UWF on Windows (Education edition). All disk writes to C: are ephemeral. On every single reboot, all changes are discarded and my PC is back to the exact same state as when I first set it up. I do keep a separate partition for documents that need persistence.
2. Miracast for screen mirroring.
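The UWF setup in point 1 can be sketched as follows (uwfmgr commands from an elevated prompt; this is my recollection of the syntax, so double-check against the Unified Write Filter docs):

```powershell
# Enable the Unified Write Filter, protect the system volume, then reboot
uwfmgr filter enable
uwfmgr volume protect C:
```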
> We currently package our virtual machines for four different virtualization software options: Hyper-V (Gen2), Parallels, VirtualBox, and VMware. These virtual machines contain an evaluation version of Windows that expires on the date posted. If the evaluation period expires, the desktop background will turn black, you will see a persistent desktop notification indicating that the system is not genuine, and the PC will shut down every hour.
Edit: Oops, dead link -- the dev tools evaluation VM hasn't been released for 6+ months. But they do offer Windows evaluations ISO's after registration.
So, if you don't have a secondary GPU, you'll need to live without graphics acceleration in the VM... so for a lot of people the "oh you just need to use a VM!" solution is not feasible, because most of the software that people want to use that does not run under WINE do require graphics acceleration.
I tried running Photoshop under a VM, but the performance of the QEMU QXL driver is bad, and VirGL does not support Windows guests yet.
VMWare and VirtualBox do have better graphics drivers that do support Windows. I tried using VMWare and the performance was "ok", but still not near the performance of Photoshop on "bare metal".
However now that AMD is including integrated GPUs on every AM5 consumer CPU (if I'm not mistaken?), maybe VMs with passthrough will be more common, without requiring people to spend a lot of money buying a secondary GPU.
If the card is running its own OS, what's the benefit of combining them that way? A high speed networking link will get you similar results and is flexible and cheap.
If the card isn't running its own OS, it's much easier to put all the CPU cores in the same socket. And the demand for both x86 and Arm cores at the same time is not very high.
Every Windows thread on HN is a reminder of the stark divide between people who need to use Windows for productivity apps and those who don’t.
The apps I need a Windows machine for are not the kind that virtualize nicely. Anything GPU related means Windows has to become the base OS for me.
If you’re running an occasional light tool you can get away with Windows in a VM, but it’s a no-go for things like CAD or games.
LibreOffice has gotten quite good over the years, including decent(ish) MSO file format interoperability, and Thunderbird seems to support Exchange Server.
So, I suppose things like MS Project or MS Visio may not have decent counterparts (maybe, I don't really know), but otherwise, it seems like you don't need-need to use Windows for productivity apps.
It also only supports email and not calendaring/contacts.
That being said, Office365 Web Client is pretty good at this point and someone who doesn't live in Office all day can probably get along fine with it.
Over time though more and more small issues with it came up. Packages working not quite right, issues with the barriers between the two, etc. It always felt like there was a little bit more friction with the process.
With Valve really pushing Proton and the state of linux gaming, I've recently swapped over to Ubuntu and Nixos. The friction point moved to the gaming side, but things mostly just work.
Things on linux are rapidly getting better, and having things just work on the development side has been a breath of fresh air. I now feel that it's a better experience than windows w/ WSL, despite some AAA titles not working on linux.
One problem that was unsolved last time I checked: Saving highlight videos. It used to work if you told Overwatch to use webm format instead of mp4, but Blizzard broke that somewhere along the line, possibly in the transition to Overwatch 2. (I worked around this with OBS Studio and its replay buffer feature.)
Scrolling to Medals, 50% of all 25,000+ games tracked by the site are playable, either working perfectly or mostly (Platinum or Gold ratings). Another 20% can be alright under specific circumstances and with compromises (Silver rating).
i have some 2012 projects where the makefiles also build in msvc. never again.
then 2015 projects with build paths for cygwin. never again.
then some 2019 projects with build scripts making choices to work on msys2/git-bash-for-windows. never again.
now we can build on WSL with just some small changes to an env file because we run a psql container in a different way under wsl... let's see how long we endure until saying never again.
I actually switched to Linux full-time when Starfield wouldn't run on Windows but worked in Proton. We are now in a world where Valve provides a more stable Windows API than Microsoft. The only limitation now is anti-cheat, but that's a political problem, not a technical one.
No, Windows is not about games; Windows is about being objectively the most stable pile of garbage there is.
Totally hear you though for things like CNC milling software that's meant to stay static for the lifetime of the mill - that's not going anywhere.
I guess you meant Linux here
Microsoft releases the important parts of VS Code under proprietary licenses. Microsoft doesn’t release the source code for Windows or Office or the github.com web app under free software licenses.
Don’t get it twisted. This is marketing, nothing more.
While right now I enjoy the privilege to develop on Linux, things may change.
This whole thread is basically frogs praising the cozy warming water in the pot.
Microsoft is really terrible at naming things, that's for sure.
Can anyone chime in - is this still a concern? Was it ever a concern?
In general, the utilities on posix systems heavily rely on a standardized permission and path structure fundamentally different than windows registry paradigms.
Even something as simple as curl... while incredibly useful on windows, also opens a scripting ecosystem that side-channels Microsoft signing protections etc.
Linux VM disk image files can be very small (some are just a few MB), can simply be copied to a physical drive/host later given there are no DRM/key locks, and avoid mixing UTF-8 with Windows codepages.
Mixing ecosystems creates a Polyglot, and that is going to have problems for sure... given neither ecosystems wants the cost of supporting the other.
Best method, use cross platform application ports supported on every platform. In my opinion, Windows should only be used for games and industry specific commercial software support. For robust system privacy there are better options now, that aren't booby-trapped for new users. =3
Right?
Right?…
All development is done on Windows laptop via SSH to those VMs. When I tried using Ubuntu via WSL, something didn't feel right. There were some oddities, probably with the filesystem integration, which bothered me enough to stop doing this.
Nevertheless, I think it's really great what they now did.
Now all that's missing is for them to do it the other way around: create a 100% Windows-compatible Wine alternative.
The thing is: I consider myself a real Linux user, and I don't want it to look like Windows. And I hate it when Windows people try to push Linux there, just because they want a free-with-no-ads version of Windows.
In that sense, if WSL can keep Windows users on Windows such that they don't come and bother me on Linux, I'm happy :-).
> This is the result of a multiyear effort to prepare for this
Sigh, and companies keep them for sentimental reasons I guess…
[1]: the real debate is not "who's my lowest performer" for each manager. It is about why I should be the one to cut rather than my sibling manager. If you force everyone to cut one person, they all know who it will be.
But the latest layoffs were not performance based. Are you just confidently commenting without knowing about the event being discussed?
(Also I was responding to a more generic comment saying doing layoff is bad and makes org more political.)