
Interactive λ-Reduction

https://deltanets.org/
138•jy14898•2mo ago

Comments

qsort•2mo ago
What the hell is this?

The linked paper (https://arxiv.org/pdf/2505.20314) claims that the squiggles it introduces are a model for solving Lévy-optimal parallel reduction of lambda terms.

But the author has no affiliation, it's very weird that they're calling this "lambda-reduction", and it heavily smells of AI slop.

I hope I'm wrong but it doesn't look right. Can anyone with expertise in this field chime in?

papes_•2mo ago
The author name 'Daniel Augusto Rizzi Salvadori' and the GitHub user https://github.com/danaugrs align. Couldn't comment on the actual content, though.
forgotpwd16•2mo ago
By "affiliation" they mean an institute or company funding their research/work. It's quite rare, if it ever happens, for someone to find an innovative algorithm, let alone write a technical paper, as a hobbyist.
psychoslave•2mo ago
What a strange expectation. Obviously humans didn't wait for these institutions to create ideas and notations. The fact that these institutions now exist doesn't make individuals with their own desire to express novel ideas disappear. All the more so when these institutions, with their elitist mindset, actively reject whatever goes too far off the rails of their own social bubbles.
danaugrs•2mo ago
I'm the author of the research paper (https://arxiv.org/abs/2505.20314) and owner of that GitHub account.

Please see my comment here: https://news.ycombinator.com/item?id=46069564

arethuza•2mo ago
HN Guidelines: "Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
qsort•2mo ago
I'm not being "generically" negative, I'm being very specifically negative.

We have a paper from someone not working in the field, with no affiliation, and with an abstract that claims to "solve the longstanding enigma with groundbreaking clarity" (a sentence never before uttered by a human being in flesh and blood) and takes 4 (four) citations to justify that the lambda calculus is Turing-complete, a fact that's well known to every undergraduate student.

I'm sorry if this gives reviewer #2 vibes, but this doesn't look right to me and I'm asking if someone with actual expertise in the field can clarify what's happening.

koolala•2mo ago
The AI slop statement is harsh. The website looks nice.
nurettin•2mo ago
To be fair, isn't this how Haskell people introduce a new monad to express a solution to their pet problems?
mrkeen•2mo ago
The interactive lambda-calculus interpreter looks like it does the right thing, not that I've tried to push it too hard.

Can't comment on the delta-nets. If you're looking for a real person who's been plugging away at parallel & optimal reduction of lambda terms, this is where to look: https://github.com/VictorTaelin

I don't think "lambda-reduction" is a red flag. The "real" term would be "beta-reduction" (but that's the incumbent algorithm which TFA claims to replace or improve on - so why not give it a new name?)

But if I were to go sniffing for red flags:

From the first commit:

lambdacalc.ts: // The original lambda calculus introduced by Church was the 'relevant' lambda calculus which doesn't allow for weakening/erasure. This is why I add the '+' below to indicate that the lambda calculus started at 1936 but was extended afterwards.

What?

util.ts: Why is this full of Gaussian elimination of matrices? The paper doesn't mention it.

koolala•2mo ago
Weak vs. "strong" lambda calculus maybe? Typed vs untyped?
anonnon•2mo ago
> it's very weird they're calling this "lambda-reduction"

That was my reaction as well, only ever having heard of β-reduction, α-conversion (to prevent variable collisions), and η-reduction (the logical equivalence of a β-redex of a term and a bound variable with the term itself, provided the variable does not occur free in said term). Sloppy use of nomenclature is absolutely a red flag.
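For reference, the three standard notions mentioned above can be stated as follows (a sketch in the usual notation, with M[x := N] denoting capture-avoiding substitution):

```latex
\begin{align*}
(\lambda x.\, M)\, N &\to_{\beta}\; M[x := N] \\
\lambda x.\, M &\to_{\alpha}\; \lambda y.\, M[x := y] && \text{($y$ not free in $M$)} \\
\lambda x.\, (M\, x) &\to_{\eta}\; M && \text{($x$ not free in $M$)}
\end{align*}
```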

marvinborner•2mo ago
The annihilating interaction between abstraction and application nodes is well-known in the area of interaction net research to ~correspond to β-reduction, as is also explained in the associated research paper [1].

α-conversion is not required in interaction nets. η-reduction is an additional rule not typically discussed, but see for example [2].

[1] https://arxiv.org/pdf/2505.20314

[2] https://www.sciencedirect.com/science/article/pii/S030439750...

anonnon•2mo ago
> α-conversion is not required in interaction nets. η-reduction is an additional rule not typically discussed, but see for example [2].

Which makes the sloppy use of "λ-Reduction" in place of "β-reduction"--the only form of reduction or conversion applied here--even less defensible. Maybe them being non-native English speakers is partly to blame?

qsort•2mo ago
Yes.

To be transparent: I don't understand this stuff all that well and it's entirely possible I'm missing something, but everything here is weird AF.

- Who is the author? Why does he have no affiliation?

- What is the main result of the paper? How does it improve on the state of the art? Even for stuff that's way beyond my pay grade, I can usually tell from the abstract. I'm completely baffled here.

- Why do they introduce graphical notation without corresponding formal definitions?

- Why is it written in this weird style where theorems are left implicit? Usually, there's at least a sketch of proof.

- Why does it not address that the thing they're claiming to do isn't elementary recursive as per https://doi.org/10.1006/inco.2001.2869?

Again, it's entirely possible that it's a skill issue on my part and I'd love to be corrected, but I'm completely baffled and I still have absolutely no idea of what I'm looking at. Am I the stupid one and it's obvious to everyone else?

etiamz•2mo ago
Note that, in the interaction net literature, it is pretty common to introduce graphical notation without corresponding textual counterparts. See the works of Lafont, Lamping, Asperti, Guerrini, and many others [1]. (However, the textual calculus can be absolutely crucial for formal proof.)

The absence of proofs and benchmarks undermines the paper for me as well. I find it absolutely critical to demonstrate how the approach works in comparison with already established software, such as BOHM [2].

Parallel beta reduction not being elementarily recursive is not a thing to address, nor a thing one can address. Lamping's abstract algorithm already performs the minimal amount of work to reduce a term -- one cannot improve it further.

From my understanding, the paper aims to optimize the bookkeeping overhead present in earlier implementations of optimal reduction. However, as I said, the absence of formal/empirical evidence of correctness and extensive benchmarks makes the contributions of the paper debatable.

[1] https://github.com/etiamz/interaction-net-resources

[2] https://github.com/asperti/BOHM1.1

danaugrs•2mo ago
Author here. Other experts in this field have also used the term "lambda reduction", including Lévy himself [1] and Lamping [2], both of whom are referenced in the Delta-Nets paper. "Lambda-reduction" is simply an abbreviation of lambda-calculus reduction.

[1] https://dl.acm.org/doi/abs/10.1145/143165.143172

[2] https://dl.acm.org/doi/pdf/10.1145/96709.96711

asgr•2mo ago
reminds me of https://github.com/HigherOrderCO/HVM

I see Salvadori has a fork, so they are obviously aware of it. Unsure whether they're proposing the exact same thing without reference or citation…

marvinborner•2mo ago
This is quite different. Salvadori's work aims for optimal reduction of the full lambda calculus (which requires something called "bookkeeping"/"oracle"), while HOC works on optimal/parallel reduction of a certain subset of the lambda calculus.

Both approaches have been researched for a long time now, where HOC's subset is typically referred to as the "abstract algorithm". For example, a version of the lambda calculus where any variable can be used at most once (the "affine lambda calculus") can be reduced optimally with interaction nets without requiring any bookkeeping.
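As a concrete illustration of the duplication that sharing avoids (a standard textbook example, not taken from the paper): with plain textual substitution, contracting the outer redex copies the unevaluated argument, so the same inner redex must be contracted twice:

```latex
(\lambda x.\, x\, x)\, \big((\lambda y.\, y)\, z\big)
\;\to_{\beta}\;
\big((\lambda y.\, y)\, z\big)\, \big((\lambda y.\, y)\, z\big)
\;\to_{\beta}\;
z\, \big((\lambda y.\, y)\, z\big)
\;\to_{\beta}\;
z\, z
```

Graph-based optimal reduction shares the argument instead, so the inner redex is contracted only once.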

The novel thing about Salvadori's work is that it develops a new (and better explained) bookkeeping mechanism.

tromp•2mo ago
I linked to the paper in a recent comment [1]. The author has been active in the Higher Order Community Discord channel for quite a while. The Higher Order Company [2] is developing HVM (Higher Order Virtual Machine), a high performance implementation of interaction nets for both CPUs and GPUs, and started the channel to discuss development and related topics, such as the Bend language for programming the HVM, discussed on HN at [3].

The paper manages to present previous work on Levy's optimal beta reduction in a more streamlined fashion, generalizing duplicators to so-called replicators that avoid the need for separate book-keeping gadgets (like croissants and brackets). Its author also proves himself to be a skilled programmer in bringing his theory to life with this web-based evaluator.

The app contrasts traditional graph-based λ-calculus reduction (which replaces nodes with entire subgraphs) with interaction (which makes only local modifications and hence needs more steps), while showing that semantics is preserved.

[1] https://news.ycombinator.com/item?id=46034355

[2] https://higherorderco.com/

[3] https://news.ycombinator.com/item?id=40390287

danaugrs•2mo ago
Author here. Thanks for posting.

I spent several years' worth of weekends working on this, and I'm glad to see it here on Hacker News.

I started working on this problem when I learned about Lamping's algorithm for optimal lambda reduction [1]. He invented a beautiful algorithm (often referred to as his "abstract algorithm"), which uses fan-in and fan-out nodes to reduce lambda terms optimally (with optimal sharing). Unfortunately, in order to make it fully work, some fans needed to duplicate one another while others needed to cancel one another. To determine this correctly Lamping had to extend his abstract algorithm with several delimiter node types and many additional graph reduction rules. These delimiter nodes perform "bookkeeping", making sure the right fan nodes match. I was dissatisfied with the need for these additional nodes and rules. There had to be a better way.

My goal was to try to implement Lamping's abstract algorithm without adding any delimiter nodes, and to do it under the interaction net paradigm to ensure perfect confluence. I tried countless solutions, and finally Delta-Nets was born. Feel free to ask any questions.

I recently started building a programming language on top of Delta-Nets, called Pur (https://pur.dev/).

Feel free to follow along this journey:

https://x.com/danaugrs

https://x.com/purlanguage

[1] https://dl.acm.org/doi/pdf/10.1145/96709.96711

xjm•2mo ago
I find it much easier to see what is going on when selecting λ-calculus instead of Δ-Nets. E.g. for the mandatory Y Combinator,

λf.(λx.f (x x)) (λx.f (x x))

for which the difference with

λf.(λx.f (x x)) (λx.f (x x) f)

is very clear, whereas with Δ-nets the difference is more subtle. I guess it is because the visualization has more information than with the λ-calculus.
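For context, the first term above is the usual Y combinator; its defining property is that applying it to any F unfolds one level of recursion:

```latex
Y\, F \;=\; (\lambda x.\, F\,(x\, x))\,(\lambda x.\, F\,(x\, x))
\;\to_{\beta}\;
F\,\big((\lambda x.\, F\,(x\, x))\,(\lambda x.\, F\,(x\, x))\big)
\;=\; F\,(Y\, F)
```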

fnord77•2mo ago
What is this about? A pointer to a tutorial or a wiki link would be nice for someone who has no idea what this is. Thank you
refulgentis•2mo ago
Co-sign. Ten hours in, and the comments are exclusively people who seemingly already know each other riffing on something that's not clear to an audience outside the community, or replying to a coarser version of the request with ~0 information. (Some tells: referring to other people by first name; having a 1:1 discussion about the meaning of a fork by some other person.)
layer8•2mo ago
It’s about term reduction in lambda calculus: https://en.wikipedia.org/wiki/Lambda_calculus#Reduction_stra...
csense•2mo ago
I don't know much about any of this, and I'm far from an expert on the lambda calculus. But I skimmed the paper by OP, which is informative: https://arxiv.org/abs/2505.20314

The lambda calculus is an alternate model of computation created by Alonzo Church in the 1930s. It's basically a restricted programming language where the only things you can do are define anonymous functions, and apply those anonymous functions to input variables or the output of other functions.

The lambda calculus statement And = λp.λq.p q p is more conventionally notated as And(p, q) = p(q, p): application is left-associative, so p q p means (p q) p.

You can reduce lambda expressions -- basically, this is simplifying the program as far as possible, the kind of thing you expect a good optimizing compiler to do.
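To make that concrete, here is a minimal sketch of such a reducer in Python. The names (Var/Lam/App, reduce_normal) are illustrative, not from the paper, and substitution is simplified (not capture-avoiding), so bound variable names must be kept distinct:

```python
# Minimal lambda-calculus terms plus a normal-order (leftmost-outermost)
# beta-reducer -- a sketch of the "simplification" described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def subst(term, name, value):
    # Naive substitution: safe only when bound names are kept distinct,
    # since it does not rename binders to avoid variable capture.
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.param == name:  # name is shadowed; stop here
            return term
        return Lam(term.param, subst(term.body, name, value))
    return App(subst(term.fn, name, value), subst(term.arg, name, value))

def reduce_normal(term):
    # Contract the leftmost-outermost redex first, repeating to normal form.
    if isinstance(term, App):
        if isinstance(term.fn, Lam):
            return reduce_normal(subst(term.fn.body, term.fn.param, term.arg))
        fn = reduce_normal(term.fn)
        if isinstance(fn, Lam):  # reducing fn exposed a new redex
            return reduce_normal(App(fn, term.arg))
        return App(fn, reduce_normal(term.arg))
    if isinstance(term, Lam):
        return Lam(term.param, reduce_normal(term.body))
    return term

# Church booleans: TRUE = λa.λb.a, FALSE = λa.λb.b, AND = λp.λq.p q p
TRUE  = Lam("a", Lam("b", Var("a")))
FALSE = Lam("a", Lam("b", Var("b")))
AND   = Lam("p", Lam("q", App(App(Var("p"), Var("q")), Var("p"))))

print(reduce_normal(App(App(AND, TRUE), FALSE)) == FALSE)  # True
print(reduce_normal(App(App(AND, TRUE), TRUE)) == TRUE)    # True
```

Normal order is the strategy that reaches a normal form whenever one exists, but, as discussed elsewhere in this thread, naive substitution duplicates work -- which is exactly what optimal-reduction machinery like Δ-Nets is designed to avoid.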

When there are multiple possible simplifications, which one should you pick? This is a complicated problem people basically solved years ago, but the solutions are also complicated and have a lot of moving parts. OP came up with a graph-based system called ∆-Nets that is less complicated and more clearly explained than the older solutions. ∆-Nets is a lot easier for people to understand and implement, so it might yield better performance and tooling for lambda-calculus-based programming languages, which might in turn benefit the areas where those languages are used (e.g. compilers).

The linked simulation lets you write a lambda calculus program, see what it looks like as a ∆-Net graph, and click to simplify the graph.

It's only tangentially related to OP, but YouTube channel 2swap recently had an interesting video about lambda calculus that you don't have to be an expert to enjoy: https://www.youtube.com/watch?v=RcVA8Nj6HEo