Also, it has RDMA. Last I checked, Ray did not support RDMA.
There are probably other differences as well, but the lack of RDMA immediately splits the world into things you can do with Ray and things you cannot do with Ray.
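To make that concrete, here is a rough sketch of what the RDMA side can look like in Monarch, written from memory of the docs: the monarch.actor/monarch.rdma import paths, RDMABuffer, and read_into are my recollection of the API, so treat every name as an assumption.

    # Rough sketch, not checked against the current Monarch API: one actor hands
    # out an RDMABuffer over its weight bytes, and a peer pulls those bytes with
    # a one-sided read instead of shipping the tensor through the controller.
    import torch
    from monarch.actor import Actor, endpoint  # assumed import path
    from monarch.rdma import RDMABuffer        # assumed import path

    class ParameterServer(Actor):
        def __init__(self):
            self.weights = torch.rand(1024, 1024)  # float32, 4 MiB of bytes

        @endpoint
        def weights_handle(self) -> RDMABuffer:
            # Hand out a handle to the raw bytes; no copy happens here.
            return RDMABuffer(self.weights.view(torch.uint8).flatten())

    class Trainer(Actor):
        @endpoint
        async def pull(self, handle: RDMABuffer) -> None:
            local = torch.empty(1024 * 1024 * 4, dtype=torch.uint8)
            # One-sided RDMA read of the remote memory into the local buffer.
            await handle.read_into(local)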
https://pytorch.org/blog/pytorch-foundation-welcomes-ray-to-...
Monarch:

    # Import path assumed from the Monarch examples.
    from monarch.actor import Actor, endpoint, this_host

    class Example(Actor):
        @endpoint
        def say_hello(self, txt):
            return f"hello {txt}"

    procs = this_host().spawn_procs({"gpus": 8})
    actors = procs.spawn("actors", Example)
    # call() invokes the endpoint on every actor in the mesh at once.
    hello_future = actors.say_hello.call("world")
    hello_future.get()
Ray:

    import ray

    @ray.remote(num_gpus=1)
    class Example:
        def say_hello(self, txt):
            return f"hello {txt}"

    # One actor per GPU; fan out the calls and gather the results explicitly.
    actors = [Example.remote() for _ in range(8)]
    hello_object_refs = [a.say_hello.remote("world") for a in actors]
    ray.get(hello_object_refs)

As far as things that might be a performance loss here, one thing I'm wondering is whether custom kernels are supported. I'm also wondering how much granularity of control there is over communication between different actors calling a function. Overall, I really like this project and hope to see it used over multi-controller setups.
Yeah, you might end up needing some changes to remote worker initialization, but you can generally bake in whatever kernels and other system code you need.
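To sketch what "baking in" a kernel might look like (the monarch.actor import path is assumed, and my_kernels.so / torch.ops.my_ops.fused_scale are hypothetical stand-ins for whatever extension you build), you can load the prebuilt library in the actor's constructor so each spawned process registers the op before any endpoint runs:

    # Sketch only: each spawned process loads a prebuilt extension at actor
    # construction time, so the custom op is registered before endpoints run.
    # "my_kernels.so" and torch.ops.my_ops.fused_scale are hypothetical names.
    import torch
    from monarch.actor import Actor, endpoint, this_host  # assumed import path

    class KernelActor(Actor):
        def __init__(self):
            torch.ops.load_library("my_kernels.so")  # registers my_ops.fused_scale

        @endpoint
        def run(self, x):
            # Dispatch into the custom kernel like any other torch op.
            return torch.ops.my_ops.fused_scale(x, 2.0)

    procs = this_host().spawn_procs({"gpus": 8})
    actors = procs.spawn("kernel_actors", KernelActor)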
In case someone who can fix this is reading here:
Grammarians are going to be big angry here. Ain’t an adverb in sight.
Found a few typos. The em dash makes me suspect an LLM was involved in the proofreading.
- Is this similar to OpenMPI?
- How is a mesh established? Do the processes need to be on the same host?
> ...Note that this does not support tensor engine, which is tied to CUDA and RDMA (via ibverbs).
I.e. yet another CUDA-married approach: the issue is not ibverbs, but the code shows they use GPUDirect RDMA, and from there this can only get worse: more CUDA dependencies. OpenUCX would have been an alternative.
There's some infamous tech based on the "hiding" paradigm. PHP comes to mind: by hiding how the HTTP request/response cycle actually works, it fostered a generation of web developers who didn't know what a session cookie was, resulting in login systems that leaked like a sieve. Distributed computing is complicated. There are many parameters you need to tweak and many design decisions you need to make to get distributed model training to run smoothly. I think explicit and transparent architectures are way better. Distributed model training shouldn't "feel" like running on a single device, because it isn't.
pjmlp•3mo ago
> Monarch is split into a Python-based frontend, and a backend implemented in Rust.
Other than that, it looks like quite an interesting project.
galangalalgol•3mo ago
gaogao•3mo ago
dhrt12327•3mo ago
It's a pity they don't do a complete rewrite with a functional language as the driver.
gaogao•3mo ago
It's open source, so seeing such an extension would be quite cool. There's a lot that could be done with native Rust actors and code that might get at what you want, but nothing precludes mixing PyTorch and other backends.
For example, you could wrap a C++ inference engine as part of one of the actors generating data for other actors doing distributed training.
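As a rough sketch of that pattern (my_cpp_engine and its Engine/infer names are hypothetical stand-ins for whatever bindings the C++ engine exposes, and the monarch.actor import path is assumed):

    # Sketch: a generator actor wraps a C++ inference engine behind Python
    # bindings and produces samples; trainer actors consume them.
    from monarch.actor import Actor, endpoint, this_host  # assumed import path
    # import my_cpp_engine  # hypothetical pybind11 module wrapping the C++ engine

    class Generator(Actor):
        def __init__(self):
            # self.engine = my_cpp_engine.Engine("model.bin")  # hypothetical binding
            self.engine = None

        @endpoint
        def generate(self, prompt: str) -> str:
            # return self.engine.infer(prompt)  # call into native code
            return f"<sample for {prompt}>"  # placeholder so the sketch runs

    class Trainer(Actor):
        @endpoint
        def step(self, sample: str) -> None:
            # Feed the generated sample into a training step (omitted here).
            pass

    gen_procs = this_host().spawn_procs({"gpus": 1})
    train_procs = this_host().spawn_procs({"gpus": 8})
    generators = gen_procs.spawn("generators", Generator)
    trainers = train_procs.spawn("trainers", Trainer)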
pjmlp•3mo ago
hansvm•3mo ago
bullfightonmars•3mo ago
https://github.com/elixir-nx/axon