It's doubly funny, because Nvidia never expressly tried to stop them either. By working with Khronos, Nvidia proved they weren't afraid to help build the next CUDA-killer, even in collaboration with the rest of the industry. But we'll get working SPIR-V sooner than a real CUDA competitor, because gaming is a much more important market to that crowd than democratized compute.
This will all be very confusing and hard to explain to people in the future. Why was Nvidia rich? Because nobody else wanted the money, I guess.
The long answer is... well, I'm not particularly qualified to explain that either. Nvidia has been working on CUDA for nearly two decades now, which gives it a lot of advantages beyond just platform maturity. Nvidia has shipped CUDA-capable hardware in most of its GPUs since ~2009, which means almost every Nvidia GPU (even a secondhand one) supports some level of compute capability. That compute is orchestrated through CUDA at the software level, which is also largely backwards- and forwards-compatible for most operations, in addition to being highly scalable. For many operations, you could reuse the same code you run on a server on the Tegra chip of a Nintendo Switch, or on a Jetson developer board.
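To make that portability claim concrete, here's a minimal sketch (my own toy example, not anything Nvidia ships): the same SAXPY kernel source builds unchanged for a datacenter card or a Tegra part, and only the nvcc -arch flag changes, e.g. -arch=sm_80 for an A100 versus -arch=sm_53 for the Switch-era Tegra X1 in a Jetson Nano.

    // saxpy.cu -- identical source for server GPUs and Tegra/Jetson.
    // Build: nvcc -arch=sm_80 saxpy.cu   (datacenter, e.g. A100)
    //        nvcc -arch=sm_53 saxpy.cu   (Tegra X1 / Jetson Nano)
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];  // one element per thread
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        // Unified memory keeps the host code identical whether the GPU
        // shares DRAM with the CPU (Tegra) or sits across PCIe (server).
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 4.000000
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

That's the backwards/forwards compatibility story in practice: the runtime and the PTX JIT paper over hardware generations, so old binaries keep running on new cards and the same source targets everything from embedded boards to datacenter parts.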
AMD, Intel, or Apple could feasibly chase this golden goose, but it would take a lot of long-term investment and still sacrifice their consumer appeal. AMD has the most pressure on them, so they're pushing hard on ROCm as a simplified compute layer for certain (mostly ML) acceleration to tide users over. Intel has bigger fish to fry and is generally not interested in burning $X billion on a market they can't compete in. Apple is too committed to the consumer market for it to be worthwhile; they also lack the hardware and software interconnect technology to compete with Nvidia's datacenter products. Really, it's only AMD in the running, though things could change.