I.e. you can see from these animations that LC reductions have some "jumping" parts. And that does reflect LC's nature, since a single reduction step 'updates' many places at once.
INs basically fix this problem: every rewrite is local, and that locality can enable parallelism. And as far as I understand, there's an easy way to translate LC to IN.
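To make the non-locality concrete, here's a minimal toy sketch (my own code, nothing from the thread): a single beta step substitutes into every occurrence of the bound variable, so one rewrite can touch arbitrarily many sites, while an interaction-net rule only ever rewrites the two nodes of one active pair.

    -- Toy LC terms; assumes all bound names are distinct, so the
    -- substitution below doesn't need to worry about capture.
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- Substitution walks the WHOLE body and may rewrite many
    -- occurrences in one go: the non-local part of a beta step.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y) = if x == y then s else Var y
    subst x s (Lam y b)
      | x == y    = Lam y b
      | otherwise = Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- One beta step at the root: (\x. b) a  ->  b[x := a].
    beta :: Term -> Maybe Term
    beta (App (Lam x b) a) = Just (subst x a b)
    beta _                 = Nothing

    main :: IO ()
    main = print (beta (App (Lam "x" (App (Var "x") (Var "x")))
                            (Lam "y" (Var "y"))))
    -- One step rewrote both occurrences of x at once.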
I'm a noob, but I feel like INs are severely under-rated. I don't know if there are any good interaction net animations. The only person I know of doing serious R&D with interaction nets is Victor Taelin.
While easy, that translation sadly doesn't preserve semantics. Specifically, when you duplicate a term that ends up duplicating itself, the results will diverge from ordinary beta reduction.
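A hedged way to see the mechanism (my own simplification, not a full net reducer): two facing duplicator ("fan") nodes admit two possible local rules, and picking the right one requires knowing whether the fans are residuals of the same duplication, which is information a single unlabeled fan can't carry.

    -- Highly simplified sketch of the fan/fan rule table. With
    -- labels, equal fans annihilate and distinct fans copy each
    -- other; the naive LC translation gives every fan the SAME
    -- label, so fans that should copy each other annihilate
    -- instead, which is where the results diverge.
    data Outcome = Annihilate | Commute deriving Show

    fanFan :: Int -> Int -> Outcome  -- labels of the facing fans
    fanFan a b
      | a == b    = Annihilate  -- correct for residuals of one dup
      | otherwise = Commute     -- the two fans copy each other

    main :: IO ()
    main = do
      print (fanFan 0 0)  -- naive translation: always this case
      print (fanFan 0 1)  -- what correctness sometimes requires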
There exist more involved semantics-preserving translations, using so-called croissants and brackets, or the recent rephrased approach of [1].
There is actually an easy way that does preserve semantics, at least up to WHNF: it's called closed reduction. Mackie has worked on it a bunch (see the resources in [1]).
An even simpler implementation is Sinot's token passing.
The problem with both of these approaches is that they cut down on sharing and on the potential for parallelism, which is typically the reason for using interaction nets in the first place.
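For intuition on why these strategies give up parallelism, here's a hedged sketch: a toy Krivine-style machine of my own, used only as an analogy for the single-token discipline, not Sinot's actual net rules. One focus walks the term to WHNF, so exactly one redex is ever active.

    -- Closed terms with de Bruijn indices.
    data Term = Var Int | Lam Term | App Term Term
      deriving Show

    data Closure = Closure Term [Closure]

    -- The machine focus plays the role of the token: evaluation
    -- to weak head normal form proceeds one step at a time, and
    -- there is never more than one active redex.
    whnf :: Term -> [Closure] -> [Closure] -> Closure
    whnf (Var n)   env stack       = let Closure t e = env !! n
                                     in whnf t e stack
    whnf (App f a) env stack       = whnf f env (Closure a env : stack)
    whnf (Lam b)   env (c : stack) = whnf b (c : env) stack
    whnf t         env []          = Closure t env

    main :: IO ()
    main =
      let idT     = Lam (Var 0)                -- \x. x
          selfApp = Lam (App (Var 0) (Var 0))  -- \x. x x
          Closure t _ = whnf (App selfApp idT) [] []
      in print t  -- Lam (Var 0), i.e. the identity

In a net proper, any number of active pairs can fire simultaneously; a single token serializes reduction by construction.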
[1] https://github.com/marvinborner/interaction-net-resources?ta...