It seems like every time I read about this kind of stuff, it's being done by contractors. I think Proton is similar. Of course that makes it no less awesome, but it makes me wonder about the contractor to employee ratio at Valve. Do they pretty much stick to Steam/game development and contract out most of the rest?
They're also a flat organization, with all the good and bad that brings, so scaling with contractors is easier than bringing on employees that might want to work on something else instead.
And then you consider it in context: a company with huge impact, brand recognition, and revenue (about $50M/employee in 2025). They’ve remained extremely small compared to how big they could grow.
There are not many tech companies with 50k+ employees, as a point of fact.
I’m not arguing just to argue - 300 people isn’t small by any measure. It’s absolutely not “extremely small” as was claimed. It’s not relatively small, it’s not “small for what they are doing”, it’s just not small at all.
300 people is a large company. The fact that a very small number of ultrahuge companies exist doesn’t change that.
For context, 300 people is substantially larger than the median company headcount in Germany, which is the largest economy in the EU.
Valve is a global, revenue-dominant, platform-level technology company. In its category, 300 employees is extremely small.
Valve is not a German company, so that’s an odd context, but if you want to use Germany for reference, here are the five German companies with the closest revenue to Valve’s:
- Infineon Technologies, $16.4B revenue, 57,000 employees
- Evonik Industries, $16B, 31,930 employees
- Covestro, $15.2B, 17,520 employees
- Commerzbank, $14.6B, 39,000 employees
- Zalando, $12.9B, 15,793 employees
Big, small, etc. are relative terms. There is no way to decide whether or not 300 is small without implicitly saying what it's small relative to. In context, it was obvious that the point being made was "valve is too small to have direct employees working on things other than the core business"
Yes, 300 is quite small.
That said, something like this which is a fixed project, highly technical and requires a lot of domain expertise would make sense for _anybody_ to contract out.
For contextual, super specific, super specialized work (e.g. SCX-LAVD, the DirectX-to-Vulkan and OpenGL-to-Vulkan translation layers in Proton, and most of the graphics driver work required to make games run on the upcoming ARM based Steam Frame) they like to subcontract work to orgs like Igalia but that's about it.
There have been demands to do that more on HN lately. This is what it looks like when it happens - a company paying for OSS development.
Back to the root point. Small company focused on core business competencies, extremely effective at contracting non-core business functions. I wish more businesses functioned this way.
If you have 30mins for a video I recommend People Make Games' documentary on it https://www.youtube.com/watch?v=eMmNy11Mn7g
Valve is chump change in this department. They allow the practice of purchasing loot boxes and items but don't analyze and manipulate behaviors. Valve is the least bad actor in this department.
I watched half the video and found it pretty biased compared to what's happening in the industry right now.
I feel this argument that Valve deliberately profits off of gambling is not really the whole story. I certainly don't think that Valve designed their systems to encourage gambling. More likely they wanted a way to bring in money to develop other areas of their platform so they could make it better, which they did. And in many cases they are putting players first. Players developed bad behaviors around purchasing in-game and trading items and have chosen to indulge in the behavior. Third parties have risen up around an unhealthy need that IMHO is not Valve's doing. And most importantly, since I was around when these systems went into place and could see what was happening, this kind of player behavior developed over time. I don't think Valve deliberately encouraged it.
The entire gaming industry is burning down before our eyes because of AAA greed, and you guys are choosing to focus on the one company that's fighting against it. I'm not getting it.
[Citation needed]
> I certainly don't think that Valve designed their systems to encourage gambling
Cases are literally slot machines.
> [section about third-party websites] I don't think Valve deliberately encouraged it.
OK, but they continue to allow it (through poor enforcement of their own ToS), and it continues to generate them obscene amounts of money?
> you guys are choosing to focus on the one company that's fighting against it.
Yes, we should let the billion dollar company get away with shovelling gambling to children.
Also, frankly speaking, other AAAs are less predatory with gambling. Fortnite, CoD, and VALORANT to pick some examples, are all just simple purchases from a store. Yes, they have issues with FOMO, and bullying for not buying skins [0], but oh my god, it isn't allowing children to literally do sports gambling (and I should know, I've actively gambled on esports while underage via CS, and I know people that have lost $600+ while underage on CS gambling).
[0]: https://www.polygon.com/2019/5/7/18534431/fortnite-rare-defa...
This is exactly what you are doing.
> The history of reputation and actions matter.
The history of actions matters, yes. The history of actions on the gambling topic has been very consistent thus far from Valve.
(Oh, talking about Valve electing to engage in scummy behaviour, the “X-ray” feature is a classic example of them deliberately subverting regulation against loot boxes.)
If you want to bring up the “let the free market be the free market” angle, I’d at least be amenable to it.
But pretending as if they’re innocent passengers, and that they have no idea what is going on, is ludicrous. Don’t baby a billion dollar company.
(I have skin in the game too. If Valve blocked trading, I’d lose $400 worth of value in my skins. I’d still rather not support gambling, especially the type that is so incredibly unregulated.)
The problem seems, at least from a distance, to be that bosses treat it as a fire-and-forget solution.
We haven't had any software built by outsiders yet, but we have hired consultants to help us on specifics, like changing our infra and helping move local servers to the cloud. They've been very effective and helped us a lot.
We had talks first, though, so we found someone we could trust had the knowledge, and we were knowledgeable enough ourselves to determine that. We then followed up closely.
If you don't see it happening, the game is being played as intended.
But most of the time you don't want "a unit of software", you want some amorphous blob of product and business wants and needs, continuously changing at the whims of business, businessmen, and customers. In this context, sure, you're paying your developers to solve problems, but moreover you're paying them to store the institutional knowledge of how your particular system is built. Code is much easier to write than to read, because writing code involves applying a mental model that fits your understanding of the world onto the application, whereas reading code requires you to try and recreate someone else's alien mental model. In the situation of in-house products and business automation, at some point your senior developers become more valuable for their understanding of your codebase than their code output productivity.
In the context of "I want this particular thing fixed in a popular open source codebase that existing people have expertise in", contracting makes a ton of sense, because you aren't the sole buyer of that expertise.
When I worked in the HFC/fiber plant design industry, the simple act of not using the same boilerplate MSA for every type of vendor, and being more specific about project requirements in the RFP, made it very clear what was expected. Suddenly we'd get better bids, and we'd carefully review them to make sure the response indicated the vendor understood the work.
We also had our own 'internal' cost estimates (i.e. if we had the in house capacity, how long would it take to do and how much would it cost) which made it clear when a vendor was in over their head under-bidding just to get the work, which was never a good thing.
And, I've seen that done in the software industry as well, and it worked.
That said, the main 'extra' challenge in IT is that many of the good players aren't going to be the ones beating down your door like the big 4 or a WITCH consultancy will.
But really, at the end of the day, the problem is that business people who don't really know (or necessarily -care-) enough about the specifics are unfortunately the ones picking things like vendors.
And worse, sometimes they're the ones writing the spec and not letting engineers review it. [0]
[0] - This once led to an off-shore body shop getting a requirement along the lines of 'the stored procedures and SQL called should be configurable' and sure enough the web.config had ALL the SQL and stored procedures as XML elements, loaded from config just before the DB call, thing was a bitch to debug and their testing alone wreaked havoc on our dev DB.
I don't remember all the details, but it doesn't seem like a great place to work, at least based on the horror stories I've read.
Valve does a lot of awesome things, but they also do a lot of shitty things, and I think their productivity is abysmal based on what you'd expect from a company with their market share. They have very successful products, but it's obvious that basically all of their income comes from rent-seeking from developers who want to (well, need to) publish on Steam.
They needed Windows games to run on Linux, so we got massive Proton/Wine advancements. They needed better display output for the Deck, and we got HDR and VRR support in Wayland. They also needed smoother frame pacing, and we got a scheduler that Zuck is now using to run data centers.
It's funny to think that Meta's server efficiency is being improved because Valve paid Igalia to make Elden Ring stutter less on a portable Linux PC. This is the best kind of open source trickledown.
"Slide left or right" CPU and GPU underclocking.
Liquid Glass ruined multitasking UX on my iPad. :(
Also my macbook (m4 pro) has random freezes where finder becomes entirely unresponsive. Not sure yet why this happens but thankfully it’s pretty rare.
(And same for Windows to the degree it is more inconsistent on Windows than Mac)
The problem is: the specifications of ACPI are complex, Windows' behavior tends to be pretty much trash and most hardware tends to be trash too (AMD GPUs for example were infamous for not being resettable for years [1]), which means that BIOSes have to work around quirks on both the hardware and software. Usually, as soon as it is reasonably working with Windows (for a varying definition of "reasonably", that is), the ACPI code is shipped and that's it.
Unfortunately, Linux follows standards (or at least, it tries to) and cannot fully emulate the numerous Windows quirks... and on top of that, GPUs tend to be hot piles of dung requiring proprietary blobs that make life even worse.
[1] https://www.nicksherlock.com/2020/11/working-around-the-amd-...
The real problem is that the hardware vendors aren't using its development model. To make this work you either need a) the hardware vendor to write good drivers/firmware, or b) the hardware vendor to publish the source code or sufficient documentation so that someone else can reasonably fix their bugs.
The Linux model is the second one. Which isn't what's happening when a hardware vendor doesn't do either of them. But some of them are better than others, and it's the sort of thing you can look up before you buy something, so this is a situation where you can vote with your wallet.
A lot of this is also the direct fault of Microsoft for pressuring hardware vendors to support "Modern Standby" instead of, rather than in addition to, S3 suspend, presumably because they're organizationally incapable of making Windows Update work efficiently, so they need Modern Standby to paper over it by having it run when the laptop is "asleep", and then they can't have people noticing that S3 is more efficient. But Microsoft's current mission to get everyone to switch to Linux appears to be in full swing now, so we'll see if their efforts on that front manage to improve the situation over time.
That's a vastly different statement.
… And that’s all fine, because this is a super niche need: effectively nobody needs Linux laptops and even fewer depend on sleep to work. If ‘Linux’ convinced itself it really really needed to solve this problem for whatever reason, it would do something that doesn’t look like its current development model, something outside that.
Regardless, the net result in the world today is that Linux sleep doesn’t work in general.
until the new s2idle stuff that Microsoft and Intel have foisted on the world (to update your laptop while sleeping… I guess?)
As an example, if you have a mac, run "ioreg -w0 -p IOPower" and see all the drivers that have to interact with each other to do power management.
I think the reality is that Linux is ahead on a lot of kernel stuff. More experimentation is happening.
IO_Uring is still a pale imitation :(
io_uring didn't change that, it only got rid of the syscall overhead (which is still present on Windows), so in actuality they are two different technical solutions that affect different levels of the stack.
In practice, Linux I/O is much faster, owing in part to the fact that Windows file I/O requires locking the file, while Linux does not.
https://learn.microsoft.com/en-us/windows/win32/api/ioringap...
Although Windows registered network I/O (RIO) came before io_uring and for all I know might have been an inspiration:
https://learn.microsoft.com/en-us/previous-versions/windows/...
You can see shims for fork() to stop tanking performance so hard too. io_uring doesn't map at all onto IOCP; at least the Windows substitute for fork has “ZwCreateProcess“ to work from. io_uring had nothing.
IOCP is much nicer from a dev point of view because your program can be signalled when a buffer has data on it but also with the information of how much data, everything else seems to fail at doing this properly.
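For comparison, here's a minimal sketch of the completion side with liburing (my own example, not from the thread; assumes liburing is installed and you build with -luring). The point is that the completion entry's res field does carry the byte count, so io_uring reports "how much data" at completion time in a way comparable to IOCP:

    /* read the first 4 KiB of a file through io_uring and print how many
       bytes the completion reported */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <liburing.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

        struct io_uring ring;
        int ret = io_uring_queue_init(8, &ring, 0);
        if (ret < 0) { fprintf(stderr, "queue_init: %s\n", strerror(-ret)); return 1; }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        static char buf[4096];
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);  /* read at offset 0 */
        io_uring_submit(&ring);

        struct io_uring_cqe *cqe;
        ret = io_uring_wait_cqe(&ring, &cqe);
        if (ret < 0) { fprintf(stderr, "wait_cqe: %s\n", strerror(-ret)); return 1; }

        /* cqe->res is the number of bytes read, or a negative errno */
        printf("completion reported %d bytes\n", cqe->res);

        io_uring_cqe_seen(&ring, cqe);
        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
    }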
On the surface, they are as simple as Linux UGO/rwx stuff if you want them to be, but you can really, REALLY dive into the technology and apply super specific permissions.
Also, as far as I know Linux doesn't support DENY ACLs, which Windows does.
Wouldn't the o::--- default ACL, like mode o-rwx, deny others access in the way you're describing?
Here is a kernel dev saying they are against adding an NFSv4 ACL implementation. The relevant RichAcls patch never got merged: https://lkml.org/lkml/2016/3/15/52
I see what I misunderstood, even in the presence of an ALLOW entry, a DENY entry would prohibit access. I am familiar with that on the Windows side but haven't really dug into Linux ACLs. The ACCESS CHECK ALGORITHM[1] section of the acl(5) man page was pretty clear, I think.
[1] https://man7.org/linux/man-pages/man5/acl.5.html#ACCESS_CHEC...
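If you want to poke at this yourself, here's a small sketch (mine, not from the man page) that dumps a file's access ACL via libacl; build with -lacl. The printed entries are exactly what the access check algorithm walks through, and there is no deny entry type - a named entry only grants what the mask also allows:

    /* print a file's POSIX access ACL, roughly what `getfacl` shows */
    #include <stdio.h>
    #include <sys/acl.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

        /* the access ACL is the one consulted on open()/read()/write() */
        acl_t acl = acl_get_file(argv[1], ACL_TYPE_ACCESS);
        if (!acl) { perror("acl_get_file"); return 1; }

        /* entries come out as user::rw-, user:alice:rw-, group::r--, mask::r--, other::--- */
        char *text = acl_to_text(acl, NULL);
        if (!text) { perror("acl_to_text"); acl_free(acl); return 1; }

        printf("%s", text);

        acl_free(text);
        acl_free(acl);
        return 0;
    }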
Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
1. cloud-init support was in RHEL 7.2 which released November 19, 2015. A decade ago.
2. Checking on Ubuntu, it looks like it was supported in Ubuntu 18.04 LTS in April 2018.
3. For admining tens of thousands of servers, if you're in the RHEL ecosystem you use Satellite and its Ansible integration. That's also been going on for... about a decade. You don't need much integration though, other than a host list of names and IPs.
There are a lot of people on this list handling tens of thousands or hundreds of thousands of linux servers a day (probably a few in the millions).
- There does not seem to be a way to determine which machines in the fleet have successfully applied a policy. If you need a policy to be active before deploying something (via a different method), or things break, what do you do?
- I’ve had far too many major incidents that were the result of unexpected interactions between group policy and production deployments.
What?! I was doing kickstart on Red Hat (it wasn't called Enterprise Linux back then) at my job 25 years ago; I believe we were using floppies for that.
BTW, we managed to get the earliest history of the project written down here by one of the earliest contributors, for anyone who might be interested:
https://anaconda-installer.readthedocs.io/en/latest/intro.ht...
As for how the automated installation on RHEL, Fedora and related distros works - it is indeed via kickstart:
https://pykickstart.readthedocs.io/en/latest/
Note how some commands were introduced way back in the single digit Fedora/Fedora Core age - that was from about 2003 to 2008. Latest Fedora is Fedora 43. :)
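For anyone who hasn't seen one, a minimal kickstart file looks roughly like this (a sketch only, with made-up choices; exact directives and defaults vary between releases):

    # fully automated install: US English, wipe the disk, core package set
    lang en_US.UTF-8
    keyboard us
    timezone UTC
    rootpw --lock
    bootloader --location=mbr
    clearpart --all --initlabel
    autopart
    %packages
    @core
    %end
    reboot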
Preseed is not new at all:
https://wiki.debian.org/DebianInstaller/Preseed
RH has also had kickstart since basically forever now.
I've been using both preseeds and kickstart professionally for over a decade. Maybe you're thinking of the graphical installer?
You have a hardened Windows 11 system. A critical application was brought forward from a Windows 10 box but it failed, probably a permissions issue somewhere. Debug it and get it working. You can not try to pass this off to the vendor, it is on you to fix it. Go.
And then you get security products that have the fun idea of removing privileges when a program creates a handle (I'm not joking, that's a thing some products do). So when you open a file with write access and then try to write to it, you end up with permission errors during the write (and not the open), and end up debugging for hours on end only to discover that some shitty security product is doing stupid stuff...
Granted, that's not related to ACLs. But for every OK idea Microsoft had, they have a dozen terrible ideas that make the whole system horrible.
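To make the symptom concrete, a rough Win32 sketch (my own, with a made-up path): on a clean machine this just writes five bytes, but with one of those products in the way the failure shows up at WriteFile rather than at CreateFileW, which is exactly what makes it so confusing to debug:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* the open asks for write access and succeeds... */
        HANDLE h = CreateFileW(L"C:\\Temp\\probe.txt", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }

        const char msg[] = "hello";
        DWORD written = 0;
        /* ...but if write access was stripped from the handle afterwards,
           this call fails with ERROR_ACCESS_DENIED (5) */
        if (!WriteFile(h, msg, sizeof(msg) - 1, &written, NULL))
            printf("WriteFile failed: %lu\n", GetLastError());
        else
            printf("wrote %lu bytes\n", written);

        CloseHandle(h);
        return 0;
    }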
This makes writing robust code under those systems a lot easier, which in turn makes debugging things when they go wrong nicer. Now, I'm not going to say debugging those systems is great - SELinux errors are still an inscrutable mess and writing SELinux policy is fairly painful.
But there is real value in limiting where errors can crop up, and how they can happen.
Of course, there is stuff like FUSE that can throw a wrench into this: instead of an LSM, a Linux security product could write its own FS overlay to do these kinds of shenanigans. But those seem to be extremely rare on Linux, whereas they're very commonplace on Windows - mostly because MS doesn't provide the necessary tools to properly write security modules, so everyone's just winging it.
"Now that's curious..."
I personally doubt SAK/SAS is a good security measure anyways. If you've got untrusted programs running on your machine, you're probably already pwn'd.
The whole windows ecosystem had us trained to right click on any Windows 9X/XP program that wasn’t working right and “run as administrator” to get it to work in Vista/7.
Unfortunately it doesn't take any display server into consideration; both X11 and Wayland will just get killed.
These days, things have gotten far more reasonable, and I think we can generally expect a linux desktop user to only run software from trusted sources. In this context, such a feature makes much less sense.
1. Snapshot the desktop
2. Switch to a separate secure UI session
3. Display the snapshot in the background, greyed out, with the UAC prompt running in the current session and topmost
It avoids any chance of a user-space program faking or interacting with a UAC window. Clever way of dealing with the train wreck of legacy Windows user/program permissioning.
It's not just visual either. The secure desktop is in protected memory, and no other process can access it. Only NTAUTHORITY\System can initiate showing it and interact with it any way, no other process can.
You can also configure it to require you to press CTRL+ALT+DEL on the UAC prompt to be able to interact with it and enter credentials as another safeguard against spoofing.
I'm not even sure if Wayland supports doing something like that.
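For the curious, the primitive underneath is a separate desktop object plus SwitchDesktop. A rough sketch of just the switching mechanism (mine; the real secure desktop is created by winlogon with an ACL only SYSTEM can touch, so this is an illustration, not a reimplementation of UAC):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* remember the desktop we came from so we can switch back */
        HDESK original = OpenInputDesktop(0, FALSE, DESKTOP_SWITCHDESKTOP);
        if (!original) { printf("OpenInputDesktop failed: %lu\n", GetLastError()); return 1; }

        /* "DemoDesktop" is an arbitrary name for this illustration */
        HDESK demo = CreateDesktopW(L"DemoDesktop", NULL, NULL, 0, GENERIC_ALL, NULL);
        if (!demo) { printf("CreateDesktopW failed: %lu\n", GetLastError()); return 1; }

        if (SwitchDesktop(demo)) {
            Sleep(3000);              /* the user now sees an empty desktop */
            SwitchDesktop(original);  /* and comes back */
        }

        CloseDesktop(demo);
        CloseDesktop(original);
        return 0;
    }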
Is there an offset? I could have sworn things always seemed offset to the side a little.
It sounds like yet another violation of the Line of Death principle (in context of OS, not browser).
I actually wrote a fake version of RMNet login when I was in school (before Windows added ctrl-alt-del to login).
https://www.rmusergroup.net/rm-networks/
I got the teacher's password and then got scared and deleted all trace of it.
> Example output of the SysRq+h command:
> sysrq: HELP : loglevel(0-9) reboot(b) crash(c) terminate-all-tasks(e) memory-full-oom-kill(f) kill-all-tasks(i) thaw-filesystems(j) sak(k) show-backtrace-all-active-cpus(l) show-memory-usage(m) nice-all-RT-tasks(n) poweroff(o) show-registers(p) show-all-timers(q) unraw(r) sync(s) show-task-states(t) unmount(u) force-fb(v) show-blocked-tasks(w) dump-ftrace-buffer(z) dump-sched-ext(D) replay-kernel-logs(R) reset-sched-ext(S)
But note "sak (k)".
Without Proton there would be no "Linux" games.
It would be great if Valve actually continued Loki Entertainment's work.
I do, MIDI 2.0. It's not because they're not doing it, just that they're doing it at a glacial pace compared to everyone else. They have reasons for this (a complete rewrite of the windows media services APIs and internals) but it's taken years and delays to do something that shipped on Linux over two years ago and on Apple more like 5 (although there were some protocol changes over that time).
But here's my rub: no one else bothered to step up to be a key signer. Everyone has instead whined for 15 years and told people to disable Secure Boot and the loads of trusted compute tech that depends on it, instead of actually building and running the necessary infra for everyone to have a Secure Boot authority outside of big tech. Not even Red Hat/IBM even though they have the infra to do it.
Secure Boot and signed kernels are proven tech. But the Linux world absolutely needs to pull their heads out of their butts on this.
Having the option to disable Secure Boot was probably due to backlash at the time and antitrust concerns.
Aside from providing protection against "evil maid" attacks (right?), Secure Boot is in the interest of software companies. Just like platform "integrity" checks.
The only thing Secure Boot provides is the ability for someone else to measure what I'm running, and therefore the ability to tell me what I can run on the device I own (most likely leading to them demanding I run malware like the adware/spyware bundled into Windows). I don't have a maid to protect against; such attacks are a completely non-serious argument for most people.
nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.
anti-cheat is essentially existential for studios/publishers that rely on multiplayer gaming.
So yes, the second half of your statement is true. The first half--not so much.
> nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.
You are correct, but I think I did a bad job of communicating what I meant. It's true that anti-cheat has been around since forever. However, what's changed relatively recently is anti-cheat integrated into the kernel alongside requirements for signed kernels and secure boot. This dates back to 2012, right as games like Battlefield started introducing gambling mechanics into their games.
There were certainly other games that had some gambly aspects to them, but the 2010s are pretty close to when esports, along with in-game gambling, were starting to bud.
I suppose something like a "reboot into '''secure''' mode" to enable the anti-cheat and stuff, or maybe we'll just get steamplay or whatever where literally the entire game runs remote and streams video frames to the user.
No thanks.
https://developer.valvesoftware.com/wiki/Using_Source_Contro...
Their research department rocks, however, so it's not a full bash on Microsoft at all - I just feel like they are focusing on other, way more interesting stuff.
Great UX requires a lot of work that is hard but not algorithmically challenging. It requires consistency and getting many stakeholders to buy in. It requires spending lots of time on things that will never be used by more than 10-20% of people.
Windows got a proper graphics compositor (DWM) in 2006 and made it mandatory in 2012. macOS had one even earlier. Linux fought against Compiz and while Wayland feels inevitable vocal forces still complain about/argue against it. Linux has a dozen incompatible UI toolkits.
Screen readers on Linux are a mess. High contrast is a mess. Setting font size in a way that most programs respect is a mess. Consistent keyboard shortcuts are a mess.
I could go on, but these are problems that open source is not set up to solve. These are problems that are hard, annoying, not particularly fun. People generally only solve them when they are paid to, and often only when governments or large customers pass laws requiring the work to be done and threaten to not buy your product if you don't do it. But they are crucially important things to building a great, widely adopted experience.
Here's one top search result that goes into far more detail: https://www.reddit.com/r/linux/comments/1ed0j10/the_state_of...
Linux DEs still can't match the accessibility features alone.
Yeah, there are layers and layers of progressively older UIs layered around the OS, but most of it makes sense, is laid out sanely, and is relatively consistent with other dialogs.
macOS beats it, but it's still better in a lot of ways than the big Linux DEs.
Every other button triggering Copilot assures even better UX goodness.
Of course that is minus all the recent AI/ad stuff on Windows…
Accessibility does need improvement. It seems severely lacking. Although your link makes it look like it's not that bad actually, I would have expected worse.
It's a big space. Traditionally, Microsoft has held the multimedia and gaming segments plus lots of professional segments, but with Valve making a large push into the first two and Microsoft not even giving it a half-hearted try, it might just be that corporate computers continue using Microsoft, people's home media equipment is all Valve, and hipsters (and others...) keep on using Apple.
Windows will remain as the default "enterprise desktop." It'll effectively become just another piece of business software, like an ERP.
Gamers, devs, enthusiasts will end up on Linux and/or SteamOS via Valve hardware, creatives and personal users that still use a computer instead of their phone or tablet will land in Apple land.
* invasive AI integration
* dropping support for 40% of their installed base (Windows 10)
* forcing useless DRM/trusted computing hardware - TPM - as a requirement to install the new and objectively worse Windows version, with even more spying and worse performance (Windows 11)
With that I think their prospects are bleak, and I have no idea who would install anything other than SteamOS or Bazzite in the future given this kind of Microsoft behavior.
Also, Raspberry Pis are the only GNU/Linux devices most people can find at retail stores.
But then they decided it was better to show ads at the OS level, rewrite the OS UI as a web app, force hardware DRM for their new OS version (the TPM requirement), as well as automatically capture the contents of your screen and feed them to AI.
Gaben does something: Wins Harder
I'm loving what Valve has been doing, and their willingness to shove money into projects that have long been underinvested in, BUT. Please don't forget all the volunteers that developed these systems for years before Valve decided to step up. All of this is only possible because a ton of different people spent decades slowly building a project that for most of its lifetime seemed like a dead-end idea.
Wine as a software package is nothing short of miraculous. It has been monumentally expensive to build, but is provided to everyone to freely use as they wish.
Nobody, and I do mean NOBODY, would have funded a project that spent 20 years struggling to run Office and Photoshop. Valve took it across the finish line into a commercially useful project, but they could not have done that without the decade-plus of work before that.
I'm sure there have been other commercial contributors to Wine besides Valve and CodeWeavers.
A hard lock which requires a reboot or god forbid power cycling is the worst possible outcome, literally anything else which doesn’t start a fire is an improvement TBH.
Hilariously, this happens on Windows too.
Actually, everything you said Windows and Mac don't do, they do: if you put on a ton of memory pressure, the system becomes unresponsive and locks up...
You get an OOM dialog with a list of apps that you can have it kill.
We shouldn't be depending on trickledown anything. It's nice to see Valve contributing back, but we all need to remember that they can totally evaporate/vanish behind proprietary licensing at any time.
* It will slowly go stale; for example, it may not get ported to newer, increasingly expected desktop APIs.
* It will lose users to competing software (such as your proprietary fork) which is better maintained.
As a result, it loses its relevance and utility over time. People that never update their systems can continue using it as they always have, assuming no online-only restrictions or time-limited licenses. But to new use cases and new users, the open software is now less desirable and the proprietary fork accumulates ever more power to screw over people with anti-consumer moves. Regulators ignore the open variant due to its niche marketshare, increasing the likelihood of things going south.
Harm can be done to people who don't have alternatives. In order to have alternatives, you need either a functioning free market or a working, relevant, sufficiently usable product that can be forked if worse comes to worst. Free software can of course help in establishing a free market, it isn't one or the other.
If a proprietary product takes over from one controlled by the community, much of the time it's not a problem. It can be replaced or done without.
If a proprietary platform takes over from one controlled by the community, something that determines not only how you go about your business but what other people expect from you, everyone gets harmed. The problem with a lot of proprietary software is that every company and their dog wants their product to become a platform and reshape the market to discourage alternatives.
MIT by itself does no harm. If it works like LLVM and everyone contributes because it makes more sense than developing a closed-off platform, then great! If it helps to bootstrap a proprietary market leader while the once-useful open original shrivels away into irrelevance, not as great.
The guy is Philip Rebohle.
https://www.gamingonlinux.com/2018/09/an-interview-with-the-...
It’s not my distribution of choice, but it’s currently doing exactly what you suggest.
The problem is that Linux can’t handle hardware it doesn’t have drivers for (or can only run it in an extremely basic mode), and LTS kernels only have drivers for hardware that existed prior to their release.
So they already are the major player in this game and will have a leg up on others trying to enter the same space.
That's why RHEL, for example, has such a long support lifecycle. It's so you can develop software targeting RHEL specifically and know you have a stable environment for 10+ years. RHEL sells a stable (as in unchanging) OS to target for x number of years.
I've been "gaming" on linux for a long time, and you could see the slow march of progress as more and more stuff worked, and more and more stuff got faster.
The vast majority of people that were using Linux on the desktop before 2015 were either hobbyists, developers or people that didn't want to run proprietary software for whatever reason.
These people generally didn't care about a lot of fancy tech mentioned. So this stuff didn't get fixed.
I think the bigger problem is that commercial use cases suck much of the air out of the room, leaving little for end user desktop use cases.
Most people learn that using some craptop will leave you with stuff on the laptop not working, e.g. volume buttons, wifi buttons, etc.
All of these just work with Linux.
If I didn't know better, I'd assume Windows was a free, ad-supported product. If I ever pick up a dedicated PC for gaming, it's going to be a Steam Machine and/or Steam Deck. Microsoft is basically lighting Xbox and Windows on fire to chase AI clanker slop.
(I've been a cross platform numerical developer in GIS and geophysics for decades)
Serious Windows power users, and current and former Windows developers and engineers, swear by Chris Titus Tech's Windows Utility.
It's an open PowerShell suite, a collaboration by hundreds maintained by an opinionated coordinator, that allows easy installation of common tools, easy setting of update behaviours, easy tweaking of telemetry and AI add-ons, and easy creation of custom ISO installs and images for VM use (a dedicated stripped-down Windows OS for games, or a Qubes shard).
https://github.com/ChrisTitusTech/winutil
It's got a lot of hover tooltips to assist in choices and avoid surprises, and you can always look at the scripts that are run if you're suspicious.
" Windows isn't that bad if you clean it out with a stiff enough broom "
That said, I'm setting my grandkids up with Bazzite decks and forcing them to work in CLIs for a lot of things to get them used to seeing things under the hood.
Linux (and its ecosystem) sucks at having focus and direction.
They might get something right here and there, especially related to servers, but they are awful at not spinning wheels
See how wayland progress is slow. See how some distros moved to it only after a lot of kicking and screaming.
See how a lot of peripherals on "newer" hardware (sometimes a model that's been on the market 2 or 3 years) only barely work in a newer distro, or have weird bugs.
"but the manufacturers..." "but the hw producers..." "but open source..." whine
Because Linux lacks a good hierarchy for isolating responsibility, instead going for "every kernel driver can do all it wants" together with "interfaces that keep flipping and flopping at every new kernel release" - a notable (good) exception: USB userspace drivers. And don't even get me started on the whole mess that is xorg drivers.
And then you have a Rube Goldberg machine in the form of udev, dbus and what not, or whatever newer solution that solves half the problems and creates another new collection of bugs.
Currently almost no one is using Linux for mobile because of the lack of apps (banking, for example) and bad hardware support. When developing for Linux becomes more and more attractive, this might change.
If one (or maybe two) OSes win, then sure. The problem is there is no "develop for Linux" unless you are writing for the kernel.
Each distro is a standalone OS. It can have any variety of userland. You don't develop "for Linux" so much as you develop "for Ubuntu" or "for Fedora" or "for Android" etc.
It's a technique which supposedly helped at one point in time to reduce loading times, Helldivers being the most notable example of removing this "optimization".
However, this is by design - specifically as an optimization. Can't really call that bloat in the parent's context of inefficient resource usage.
Any changes to the code or textures will need the same preprocessing done. A large patch is basically 1% changes + 99% of the preprocessed data for this optimization.
More compression means large change amplification and less delta-friendly changes.
More delta-friendly asset storage means storing assets in smaller units with less compression potential.
In theory, you could have the devs ship unpacked assets, then make the Steam client be responsible for packing after install, unpacking pre-patch, and then repacking game assets post-patch, but this basically gets you the worst of all worlds in terms of actual wall clock time to patch, and it'd be heavily constraining for developers.
Basically you can get much better read performance if you can read everything sequentially and you want to avoid random access at all costs. So you can basically "hydrate" the loading patterns for each state, storing the bytes in order as they're loaded from the game. The only point it makes things slower is once, on download/install.
Of course the whole exercise is pointless if the game is installed to an HDD only because of its bigger size and would otherwise be on an NVMe SSD... And with 2TB NVMe drives still affordable, it doesn't make as much sense anymore.
https://steamcommunity.com/games/221410/announcements/detail...
What you should do is just buy a SteamDeck for gaming.
There was a lot of work in the Linux scheduling space over the years. Con Kolivas' BFS was one example. The issue was that Linus had his own ideas about kernel scheduling which, unfortunately, were very different from those of the Linux community. And yes, the default Linux scheduler sucks.
For individual services what that means is that for something like Google Search there will be dozens of projects in the hopper that aren't being worked on because there's just not enough hardware to supply the feature (for example something may have been tested already at small scale and found to be good SEO ranking wise but compute expensive). So a team that is able to save 1% CPU can directly repurpose that saved capacity and fund another project. There's whole systems in place for formally claiming CPU savings and clawing back those savings to fund new efforts.
> Meta has found that the scheduler can actually adapt and work very well on the hyperscaler's large servers.
I'm not at all in the know about this, so it would not even occur to me to test it. Is it the case that if you're optimizing Linux performance you'd just try whatever is available?
I wouldn't make an Excel spreadsheet on the Steam Deck, for instance.
So Bazzite in my opinion is probably one of the best user experience flavors of Fedora around.
Yes you can do more than gaming on Bazzite.
> Starting from version 6.6 of the Linux kernel, [CFS] was replaced by the EEVDF scheduler.[citation needed]
> The Linux kernel began transitioning to EEVDF in version 6.6 (as a new option in 2024), moving away from the earlier Completely Fair Scheduler (CFS) in favor of a version of EEVDF proposed by Peter Zijlstra in 2023 [2-4]. More information regarding CFS can be found in CFS Scheduler.
Just traditionally, Linux schedulers have been rather esoteric to tune and by default they've been optimized for throughput and fairness over everything else. Good for workstations and servers, bad for everyone else.
[0]https://tinyurl.com/mw6uw9vh
And the people at FB who worked to integrate Valve's work into the backend and test it and measure the gains are the same people who go looking for these kernel perf improvements all day.
There's nothing special or proprietary about the RHEL code. Access to the code isn't an issue, it's reconstructing an exact replica of RHEL from all of the different package versions that are available to you, which is a huge temporal superset of what is specifically in RHEL.
https://gitlab.com/redhat/centos-stream/rpms
For any individual RHEL package, you can find the source code with barely any effort. If you have a list of the exact versions of every package used in RHEL, you could compose it without that much effort by finding those packages in Stream. It's just not served up to you on a silver platter unless you're a paying customer. You have M package versions for N packages - all open source - and you have to figure out the correct construction for yourself.