It seems like every time I read about this kind of stuff, it's being done by contractors. I think Proton is similar. Of course that makes it no less awesome, but it makes me wonder about the contractor to employee ratio at Valve. Do they pretty much stick to Steam/game development and contract out most of the rest?
They're also a flat organization, with all the good and bad that brings, so scaling with contractors is easier than bringing on employees that might want to work on something else instead.
That said, something like this which is a fixed project, highly technical and requires a lot of domain expertise would make sense for _anybody_ to contract out.
For contextual, super-specific, super-specialized work (e.g. scx_lavd, the DirectX-to-Vulkan and OpenGL-to-Vulkan translation layers in Proton, and most of the graphics driver work required to make games run on the upcoming ARM-based Steam Frame) they like to subcontract work to orgs like Igalia, but that's about it.
There have been demands on HN lately to do that more. This is what it looks like when it happens: a company paying for OSS development.
Back to the root point. Small company focused on core business competencies, extremely effective at contracting non-core business functions. I wish more businesses functioned this way.
If you have 30mins for a video I recommend People Make Games' documentary on it https://www.youtube.com/watch?v=eMmNy11Mn7g
The problem seems, at least from a distance, to be that bosses treat it as a fire-and-forget solution.
We haven't had any software built by outsiders yet, but we have hired consultants to help us with specifics, like changing our infra and helping move local servers to the cloud. They've been very effective and helped us a lot.
We had talks first, though, so we found someone we could trust to have the knowledge, and we were knowledgeable enough ourselves to determine that. We then followed up closely.
If you don't see it happening, the game is being played as intended.
But most of the time you don't want "a unit of software", you want some amorphous blob of product and business wants and needs, continuously changing at the whims of business, businessmen, and customers. In this context, sure, you're paying your developers to solve problems, but moreover you're paying them to store the institutional knowledge of how your particular system is built. Code is much easier to write than to read, because writing code involves applying a mental model that fits your understanding of the world onto the application, whereas reading code requires you to try and recreate someone else's alien mental model. In the situation of in-house products and business automation, at some point your senior developers become more valuable for their understanding of your codebase than their code output productivity.
In the context of "I want this particular thing fixed in a popular open source codebase that existing people already have expertise in," contracting makes a ton of sense, because you aren't the sole buyer of that expertise.
They needed Windows games to run on Linux, so we got massive Proton/Wine advancements. They needed better display output for the Deck, and we got HDR and VRR support in Wayland. They also needed smoother frame pacing, and we got a scheduler that Zuck is now using to run data centers.
It's funny to think that Meta's server efficiency is being improved because Valve paid Igalia to make Elden Ring stutter less on a portable Linux PC. This is the best kind of open source trickle-down.
"Slide left or right" CPU and GPU underclocking.
Liquid Glass ruined multitasking UX on my iPad. :(
Also my macbook (m4 pro) has random freezes where finder becomes entirely unresponsive. Not sure yet why this happens but thankfully it’s pretty rare.
(And the same goes for Windows, to the degree that it is more inconsistent on Windows than on the Mac.)
The problem is: the specifications of ACPI are complex, Windows' behavior tends to be pretty much trash and most hardware tends to be trash too (AMD GPUs for example were infamous for not being resettable for years [1]), which means that BIOSes have to work around quirks on both the hardware and software. Usually, as soon as it is reasonably working with Windows (for a varying definition of "reasonably", that is), the ACPI code is shipped and that's it.
Unfortunately, Linux follows standards (or at least, it tries to) and cannot fully emulate the numerous Windows quirks... and on top of that, GPUs tend to be hot piles of dung requiring proprietary blobs that make life even worse.
[1] https://www.nicksherlock.com/2020/11/working-around-the-amd-...
until the new s2idle stuff that Microsoft and Intel have foisted on the world (to update your laptop while sleeping… I guess?)
I think the reality is that Linux is ahead on a lot of kernel stuff. More experimentation is happening.
IO_Uring is still a pale imitation :(
io_uring didn't change that, it only got rid of the syscall overhead (which is still present on Windows), so in actuality they are two different technical solutions that affect different levels of the stack.
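To make the batching point concrete, here is a minimal, hedged sketch using liburing (the userspace helper library for io_uring): one read is described in the submission ring, submitted, and its completion reaped. The file path, queue depth, and buffer size are arbitrary choices for illustration, and error handling is pared down.

    /* Minimal io_uring read via liburing: queue one read, submit, reap the
     * completion.  Build with: cc demo.c -luring.  Purely illustrative. */
    #include <liburing.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        struct io_uring ring;
        char buf[4096];

        if (io_uring_queue_init(8, &ring, 0) < 0) {       /* 8-entry submission ring */
            perror("io_uring_queue_init");
            return 1;
        }

        int fd = open("/etc/hostname", O_RDONLY);         /* arbitrary demo file */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0); /* describe the read */

        io_uring_submit(&ring);                           /* one syscall can flush many queued SQEs */

        struct io_uring_cqe *cqe;
        if (io_uring_wait_cqe(&ring, &cqe) == 0) {        /* wait for the completion */
            printf("read %d bytes\n", cqe->res);
            io_uring_cqe_seen(&ring, cqe);                /* mark the CQE as consumed */
        }

        io_uring_queue_exit(&ring);
        return 0;
    }

The win shows up when many SQEs are queued before a single submit (or with SQPOLL, none at all), rather than paying one syscall per read as with plain read(2).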
In practice, Linux I/O is much faster, owing in part to the fact that Windows file I/O requires locking the file, while Linux does not.
On the surface, they are as simple as Linux UGO/rwx stuff if you want them to be, but you can really, REALLY dive into the technology and apply super-specific permissions.
Also, as far as I know Linux doesn't support DENY ACLs, which Windows does.
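For reference, the "simple" Linux side of that comparison is just nine mode bits. A small sketch reading them with stat(2) (the default path here is arbitrary; Windows ACLs, and Linux POSIX ACLs via setfacl, live behind richer APIs not shown):

    /* Print the classic UGO/rwx bits for a path via stat(2). */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "/etc/passwd";  /* arbitrary default */
        struct stat st;

        if (stat(path, &st) != 0) {
            perror("stat");
            return 1;
        }

        const char *who[] = { "owner", "group", "other" };
        for (int i = 0; i < 3; i++) {
            /* Each class is a 3-bit rwx group; owner sits in the highest bits. */
            int bits = (st.st_mode >> (6 - 3 * i)) & 07;
            printf("%s: %c%c%c\n", who[i],
                   bits & 04 ? 'r' : '-',
                   bits & 02 ? 'w' : '-',
                   bits & 01 ? 'x' : '-');
        }
        return 0;
    }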
Ubuntu just recently got a way to automate its installer ("recently" being during COVID). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
I do, MIDI 2.0. It's not because they're not doing it, just that they're doing it at a glacial pace compared to everyone else. They have reasons for this (a complete rewrite of the windows media services APIs and internals) but it's taken years and delays to do something that shipped on Linux over two years ago and on Apple more like 5 (although there were some protocol changes over that time).
But here's my rub: no one else bothered to step up to be a key signer. Everyone has instead whined for 15 years and told people to disable Secure Boot and the loads of trusted compute tech that depends on it, instead of actually building and running the necessary infra for everyone to have a Secure Boot authority outside of big tech. Not even Red Hat/IBM even though they have the infra to do it.
Secure Boot and signed kernels are proven tech. But the Linux world absolutely needs to pull their heads out of their butts on this.
Gaben does something: Wins Harder
I'm loving what Valve has been doing, and their willingness to shove money into projects that have long been underinvested in. BUT: please don't forget all the volunteers who developed these systems for years before Valve decided to step up. All of this is only possible because a ton of different people spent decades slowly building a project that for most of its lifetime seemed like a dead-end idea.
Wine as a software package is nothing short of miraculous. It has been monumentally expensive to build, but is provided to everyone to freely use as they wish.
Nobody, and I do mean NOBODY, would have funded a project that spent 20 years struggling to run Office and Photoshop. Valve took it across the finish line into a commercially useful project, but they could not have done that without the decade-plus of work before that.
> Meta has found that the scheduler can actually adapt and work very well on the hyperscaler's large servers.
I'm not at all in the know about this, so it would not even occur to me to test it. Is it the case that if you're optimizing Linux performance you'd just try whatever is available?
> Starting from version 6.6 of the Linux kernel, [CFS] was replaced by the EEVDF scheduler.[citation needed]
> The Linux kernel began transitioning to EEVDF in version 6.6 (as a new option in 2024), moving away from the earlier Completely Fair Scheduler (CFS) in favor of a version of EEVDF proposed by Peter Zijlstra in 2023 [2-4]. More information regarding CFS can be found in CFS Scheduler.
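For a rough feel of what changed, here is a toy, non-kernel sketch of the EEVDF pick rule: among runnable tasks that have become eligible (their virtual start time has been reached), choose the one with the earliest virtual deadline, i.e. start plus requested slice divided by weight. All names and fields below are made up for illustration and simplify the real scheduler heavily.

    /* Toy EEVDF selection: pick the eligible task with the earliest
     * virtual deadline.  Not kernel code; hypothetical fields only. */
    #include <stdio.h>
    #include <stddef.h>

    struct task {
        const char *name;
        double vstart;   /* virtual time at which the task becomes eligible */
        double slice;    /* requested time slice */
        double weight;   /* nice-derived weight */
    };

    static double vdeadline(const struct task *t)
    {
        return t->vstart + t->slice / t->weight;  /* higher weight -> earlier deadline */
    }

    static const struct task *pick_next(const struct task *rq, size_t n, double vnow)
    {
        const struct task *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (rq[i].vstart > vnow)              /* not yet eligible: skip */
                continue;
            if (!best || vdeadline(&rq[i]) < vdeadline(best))
                best = &rq[i];
        }
        return best;
    }

    int main(void)
    {
        struct task rq[] = {
            { "latency-sensitive", 0.0, 1.0, 4.0 },
            { "batch",             0.0, 1.0, 1.0 },
            { "not-yet-eligible",  5.0, 1.0, 8.0 },
        };
        const struct task *next = pick_next(rq, 3, 1.0);
        printf("EEVDF would pick: %s\n", next ? next->name : "(none)");
        return 0;
    }

The deadline term is what gives latency-sensitive (heavily weighted) tasks a chance to run sooner without abandoning fairness, which CFS approximated differently via vruntime ordering.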
And the people at FB who worked to integrate Valve's work into the backend and test it and measure the gains are the same people who go looking for these kernel perf improvements all day.