Also, what NVIDIA is doing has full Windows support, while Mojo support still isn't there, short of going through WSL.
Naturally assuming they are using laptops with NVIDIA GPUs.
From their license.
It's not obvious what happens when you have >8 users, with one GPU each (typical laptop users).
solarmist•6mo ago
mirsadm•6mo ago
solarmist•6mo ago
I'm a novice in the area, but Chris is well respected in it and cares a lot about performance.
pjmlp•6mo ago
The problem isn't the language, but rather how to design the data structures and algorithms for GPUs.
solarmist•6mo ago
The primitives and pre-coded kernels provided by CUDA (it solves for the most common scenarios first and foremost) are what's holding things back, and to get those algorithms and data structures down to the hardware level you need something flexible that can talk directly to the hardware.
pjmlp•6mo ago
The pre-coded kernels help a lot, but you don't necessarily have to use them.
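For instance (an illustrative sketch, not something from the thread): the same y = a*x + y can be computed through the pre-coded cuBLAS primitive or through a kernel you write yourself, and nothing forces you into the former.

    // Illustrative CUDA C++ sketch: pre-coded library primitive vs. hand-written kernel.
    // Error handling omitted for brevity. Compile with: nvcc saxpy.cu -lcublas
    #include <cstdio>
    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    // Hand-written equivalent of SAXPY: nothing stops you from skipping the library.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        const float a = 2.0f;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Option 1: the pre-coded primitive shipped with the CUDA toolkit.
        cublasHandle_t handle;
        cublasCreate(&handle);
        cublasSaxpy(handle, n, &a, x, 1, y, 1);
        cublasDestroy(handle);

        // Option 2: your own kernel, tuned however your data layout demands.
        saxpy<<<(n + 255) / 256, 256>>>(n, a, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // 2 + 2*1 (cuBLAS) + 2*1 (custom kernel) = 6
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

The hand-written path is where the layout- and algorithm-specific tuning discussed above has to happen.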
melodyogonna•6mo ago
Basically, imagine being able to target CUDA, but without having to do much extra work for your inference to also run on other GPU vendors, e.g. AMD, Intel, Apple, all with performance matching or surpassing what the hardware vendors themselves can come up with.
Mojo comes into the picture because you can program MAX with it and create custom kernels that are JIT-compiled to the right vendor code at runtime.