frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
156•theblazehen•2d ago•45 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
670•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
950•xnx•19h ago•552 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•33 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
57•videotopia•4d ago•2 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
19•kaonwarb•3d ago•19 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
231•isitcontent•14h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
225•dmpetrov•15h ago•118 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
331•vecti•16h ago•144 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
494•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
382•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•21h ago•182 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
289•eljojo•17h ago•173 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
413•lstoll•20h ago•279 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•7 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
91•quibono•4d ago•21 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
19•bikenaga•3d ago•8 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
31•jesperordrup•4h ago•16 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
258•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
44•helloplanets•4d ago•42 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
60•gfortaine•12h ago•26 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1069•cdrnsf•1d ago•446 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
16•speckx•3d ago•6 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
36•gmays•9h ago•12 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•68 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
288•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
150•SerCe•10h ago•140 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•14h ago•14 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
185•limoce•3d ago•100 comments

PyTorch Monarch

https://pytorch.org/blog/introducing-pytorch-monarch/
377•jarbus•3mo ago

Comments

pjmlp•3mo ago
Apparently PyTorch oxidation has started.

> Monarch is split into a Python-based frontend, and a backend implemented in Rust.

Other than that, it looks like quite an interesting project.

galangalalgol•3mo ago
This is a new project, right? Not the oxidation of an existing one.
gaogao•3mo ago
Yup, hyperactor, one of the new crates that's part of it, does some particularly interesting things for efficient parallel distributed channels.
dhrt12327•3mo ago
Multiple sources say that it is an experimental framework around PyTorch, not a replacement. People will still get to enjoy a circular graph using std::shared_ptr with memory leaks.

It's a pity they don't do a complete rewrite with a functional language as the driver.

gaogao•3mo ago
> It's a pity they don't do a complete rewrite with a functional language as the driver.

It's open source, so seeing such an extension would be quite cool. There's much that could be done with native Rust actors and code that might get at what you want, but nothing precludes mixing PyTorch and other backends.

For example, you could wrap a C++ inference engine as part of one of the actors generating data for other actors doing distributed training.
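
Something roughly like this (illustrative only; the Actor/endpoint pattern matches cwp's comparison downthread, but the monarch.actor import path and the cpp_engine binding are assumptions on my part):

  # Rough sketch: a data-generating actor wrapping a hypothetical C++
  # inference engine, feeding other actors that do distributed training.
  from monarch.actor import Actor, endpoint, this_host  # import path assumed
  import cpp_engine  # hypothetical pybind11-style binding around the C++ engine

  class DataGenerator(Actor):
      def __init__(self):
          self.engine = cpp_engine.load("policy.bin")  # hypothetical call

      @endpoint
      def generate(self, prompt):
          return self.engine.infer(prompt)  # inference itself runs in C++

  procs = this_host().spawn_procs({"gpus": 8})
  generators = procs.spawn("generators", DataGenerator)
  batch = generators.generate.call("rollout seed").get()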

pjmlp•3mo ago
Interesting, by the way, you can replicate the experience in Rust.
hansvm•3mo ago
Arc<T> has entered the chat.
bullfightonmars•3mo ago
You might be looking for Elixir/Nx and Axon:

https://github.com/elixir-nx/axon

jonapro•3mo ago
Beowulf then.
valzam•3mo ago
I assume this is similar to Ray?
lairv•3mo ago
I'm also curious what the use case of this is over Ray. Tighter integration with PyTorch/tensor abstractions?
porridgeraisin•3mo ago
That.

Also, it has RDMA. Last I checked, Ray did not support RDMA.

There are probably other differences as well, but the lack of RDMA immediately splits the world into things you can do with Ray and things you cannot do with Ray.

zacmps•3mo ago
Not currently, but it is being worked on: https://github.com/ray-project/ray/issues/53976
disattention•3mo ago
I had the same thought, especially because of their recent collaboration.

https://pytorch.org/blog/pytorch-foundation-welcomes-ray-to-...

unnah•3mo ago
There's also Dask, which can do distributed pandas and NumPy operations, etc. However, it was originally developed for traditional HPC systems and has only limited support for GPU computing. https://www.dask.org/
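
For a sense of the programming model, a minimal sketch (just an illustration, not taken from the Dask docs):

  # Dask splits a NumPy-style computation into chunks and schedules them
  # across threads, processes, or a distributed cluster.
  import dask.array as da

  x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
  result = (x + x.T).mean().compute()  # evaluated lazily, chunk by chunk
  print(result)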
cwp•3mo ago
The code example is very similar to Ray.

Monarch:

  class Example(Actor):
      @endpoint
      def say_hello(self, txt):
          return f"hello {txt}"

  procs = this_host().spawn_procs({"gpus": 8})
  actors = procs.spawn("actors", Example)
  hello_future = actors.say_hello.call("world")
  hello_future.get()

Ray:

  @ray.remote(num_gpus=1)
  class Example:
      def say_hello(self, txt):
          return f"hello {txt}"

  actors = [Example.remote() for _ in range(8)]
  hello_object_refs = [a.say_hello.remote("world") for a in actors]
  ray.get(hello_object_refs)
milancurcic•3mo ago
Cool! Essentially Fortran coarrays from 2008.
philipallstar•3mo ago
Or Hadoop from 2006? But you don't need to write MapReduce or Fortran, so it's probably far nicer.
pjmlp•3mo ago
Fortran 2023 is already quite nice, and doesn't require rewriting stuff in C for performance.
alyxya•3mo ago
I made my own single-controller PyTorch extension [1], though mine doesn't yet support cross-node communication. I found it interesting to compare how Monarch makes things performant. I believe Monarch also uses cloudpickle to share code among all nodes, which is probably the only performant way to have the various nodes execute work, since it ends up being a one-time setup cost. I found the fan-out of message sending from the single controller really interesting; it means the controller is unlikely to be the bottleneck apart from any synchronous operations.

As for things that might be a performance loss here, one thing I'm wondering is if custom kernels are supported. I'm also wondering how much granularity of control there is over communication between different actors calling a function. Overall, I really like this project and hope to see it used over multi-controller setups.

[1] https://github.com/alyxya/mycelya-torch
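
On the cloudpickle point, a tiny sketch of the pattern I have in mind (my guess at the mechanism, not code from Monarch):

  # The controller serializes a closure once and broadcasts the bytes;
  # each node deserializes it and can then execute work repeatedly,
  # so code distribution is a one-time setup cost.
  import cloudpickle

  scale = 0.5
  def work(batch):
      return [scale * x for x in batch]

  payload = cloudpickle.dumps(work)   # done once on the controller
  # ...payload gets shipped to every node...
  fn = cloudpickle.loads(payload)     # each worker reconstructs the closure
  print(fn([1, 2, 3]))                # [0.5, 1.0, 1.5]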

gaogao•3mo ago
> As far as things that might be a performance loss here, one thing I'm wondering is if custom kernels are supported

Yeah, you might end up needing some changes to remote worker initialization, but you can generally bake in whatever kernels and other system code you need.

logicchains•3mo ago
This seems strictly less powerful than Jax, which comes with a powerful compiler that optimises how cross-node communication is conducted.
gaogao•3mo ago
Nah, it targets a different controller paradigm. Jax is focused on multi-controller SPMD, while this is focused on a single-controller setup. Both have their place, with single-controller being generally easier to reason about, and multi-controller more optimal for certain dataflows. There are also some interesting mixes of the two control paradigms.
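
To make the contrast concrete, a rough sketch (illustrative only, not from the post): under multi-controller SPMD every rank runs the same script and coordinates through collectives, while the single-controller style has one driver addressing the whole mesh, as in cwp's comparison upthread.

  # SPMD flavor: launched with e.g. torchrun, so every rank executes this
  # same file and synchronizes via collectives; there is no single driver.
  import torch
  import torch.distributed as dist

  dist.init_process_group("gloo")    # CPU backend to keep the sketch minimal
  t = torch.ones(1) * dist.get_rank()
  dist.all_reduce(t)                 # every rank participates; t becomes the sum

  # Single-controller flavor (per cwp's example upthread): one script spawns
  # a mesh of actors and drives them.
  #   procs = this_host().spawn_procs({"gpus": 8})
  #   actors = procs.spawn("actors", Example)
  #   actors.say_hello.call("world").get()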
nothrowaways•3mo ago
FB should create a PyTorch foundation and set it free before they fuck it up.
gooodvibes•3mo ago
https://pytorch.org/foundation/
dkdcio•3mo ago
damn that was fast!
porridgeraisin•3mo ago
> This lets us avoid single-host bottlenecks, effectively using the whole mesh as a distributed cluster for message forwarding. (Cite scalability numbers here.)

In case someone who can fix this is reading this thread.

chandureddyvari•3mo ago
Interesting - this seems to target a different layer than services like Tinker (https://thinkingmachines.ai/blog/announcing-tinker/). Monarch provides the infrastructure primitives, while Tinker is a managed fine-tuning service. Could someone build something like Tinker on top of Monarch?
gaogao•3mo ago
Yup, there's stuff like https://pytorch.org/blog/introducing-torchforge/ on top of it now
chandureddyvari•3mo ago
Nice, so the open source equivalent now exists. Meta basically commoditized Tinker's ($12B valuation) value prop by giving away the infra (Monarch) and the RL framework (TorchForge). It will be interesting to see how a managed service competes with free + open source at this layer.
pstoll•3mo ago
“Service Adverbs - like ‘route’ and ‘fanout’”

Grammarians are going to be big angry here. Ain’t an adverb in sight.

SomaticPirate•3mo ago
"Our Rust-based backend facilitates our performance, scale, and robustness — we amply use Rust’s fearless concurrency in Monarch’s implementation"

Found a few typos. The em dash makes me suspect an LLM was involved in proofreading.

alt187•3mo ago
https://www.scottsmitelli.com/articles/em-dash-tool/
geedzmo•3mo ago
That was a really good read. Glad I clicked.
alt187•3mo ago
It's not even one of the author's funniest pieces, and that says a lot.
whimsicalism•3mo ago
that it is surrounded by spaces makes this less likely
ComputerGuru•3mo ago
Most style guides would call that an error; an em dash should be used without surrounding spaces (while an en dash requires them). The only publication I know of that has (recently?) eschewed that advice is WaPo. If the idea was to make it more visible, I believe the correct solution would have been for WaPo to use an en dash but render it longer in their typeface.
whimsicalism•3mo ago
yes, i agree with you and this is how i used to use emdashes. chatgpt also agrees with you, which is why spaces are a pretty good indicator that it's not an LLM
hellohello2•3mo ago
I would argue that typos suggest an LLM did not proofread.
fadedsignal•3mo ago
It is a nice project. I have some questions.

- Is this similar to Open MPI?

- How is a mesh established? Do the processes need to be on the same host?

semessier•3mo ago
This could become a major thing in the coarray world, but the issues start already:

> ...Note that this does not support tensor engine, which is tied to CUDA and RDMA (via ibverbs).

I.e., yet another CUDA-married approach: the issue is not ibverbs, but the code shows they use GPUDirect RDMA, and from there this can only get worse - more CUDA dependencies. OpenUCX would have been an alternative.

bjourne•3mo ago
> Monarch lets you program distributed systems the way you’d program a single machine, hiding the complexity of distributed computing:

There is some infamous tech based on the "hiding" paradigm. PHP comes to mind. By hiding how the HTTP request/response cycle actually works, it fostered a generation of web developers who didn't know what a session cookie was, resulting in login systems that leaked like a sieve. Distributed computing is complicated. There are many parameters you need to tweak and many design decisions you need to make to get distributed model training running smoothly. I think explicit and transparent architectures are way better. Distributed model training shouldn't "feel" like running on a single device, because it isn't.