frontpage.

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
46•valyala•2h ago•19 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
228•ColinWright•1h ago•248 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
31•valyala•2h ago•4 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
9•gnufx•1h ago•1 comment

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
128•AlexeyBrin•8h ago•25 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
132•1vuio0pswjnm7•9h ago•161 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
71•vinhnx•5h ago•9 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
836•klaussilveira•22h ago•251 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
181•alephnerd•2h ago•125 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1064•xnx•1d ago•613 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
85•onurkanbkrc•7h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
493•theblazehen•3d ago•178 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
215•jesperordrup•12h ago•77 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
15•momciloo•2h ago•0 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
231•alainrk•7h ago•366 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
578•nar001•6h ago•261 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
9•languid-photic•3d ago•1 comment

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
41•rbanffy•4d ago•8 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
30•marklit•5d ago•3 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
80•speckx•4d ago•91 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
278•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•112 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
289•dmpetrov•23h ago•156 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
558•todsacerdoti•1d ago•272 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
431•ostacke•1d ago•111 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•12 comments

From Zero to QED: An informal introduction to formality with Lean 4

https://sdiehl.github.io/zero-to-qed/01_introduction.html
145•rwosync•1mo ago

Comments

oersted•1mo ago
The motivation itself is quite fresh and compelling even as a standalone article.

https://sdiehl.github.io/zero-to-qed/02_why.html

randomtoast•1mo ago
I have been following recent progress in the formalization of mathematical proofs in Lean, particularly in the context of large language models. One prominent advocate of this approach is Terence Tao, who regularly writes about developments in this area.

From a programmer's perspective, this draws an interesting parallel. Models such as Sonnet or Opus 4.5 can generate thousands of lines of code per hour. I can review the output, ask the model to write tests, iterate on the result, and after several cycles become confident that the software is sufficiently correct.

For centuries, mathematicians developed proofs by hand, using pen and paper, and were able to check the proofs of their peers. In the context of LLMs, however, a new problem may arise. Consider an LLM that constructs a proof in Lean 4 iteratively over several weeks, resulting in more than 1,000,000 lines of Lean 4 code and concluding with a QED. At what point is a mathematician no longer able to confirm with confidence that the proof is correct?

Such a mathematician might rely on another LLM to review the proof, and that system might also report that it is correct. We may reach a stage where humans can no longer feasibly verify every proof produced by LLMs due to their length and complexity. Instead, we rely on the Lean compiler, which confirms formal correctness, and we are effectively required to trust the toolchain rather than our own direct understanding.

lgas•1mo ago
I'm not sure I understand what you're getting at -- your last paragraph suggests that you understand the point of formal specification languages and theorem provers (i.e. for the automated prover to verify the proof so that you only have to trust the toolchain), but in your next-to-last paragraph you speak as if human mathematicians need to verify the Lean 4 code of the proof. It doesn't matter how many lines the proof is; a proof can only be constructed in Lean if it's correct. (Well, assuming it's free of escape hatches like `sorry`.)
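
A minimal Lean 4 sketch of that escape hatch (the theorem names here are made up for illustration): `sorry` closes any goal and lets the file compile with a warning, and `#print axioms` exposes the resulting dependence on `sorryAx`:

    -- A genuine proof: the kernel accepts it because the term type-checks.
    theorem add_zero_eq (n : Nat) : n + 0 = n :=
      rfl

    -- An escape hatch: `sorry` closes any goal, so even a false statement
    -- "compiles", but Lean warns that the declaration uses sorry.
    theorem bogus : 1 + 1 = 3 := by
      sorry

    -- Auditing for escape hatches: list the axioms each proof depends on.
    #print axioms add_zero_eq   -- reports no axioms
    #print axioms bogus         -- reports a dependence on sorryAx
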
practal•1mo ago
> Well, assuming it's free of escape hatches like `sorry`

There are bugs in theorem provers, which means there might be "sorries", maybe even malicious ones (depending on what is at stake), that are not that easy to detect. Personally, I don't think that is much of a problem, as you should be able to come up with a "superlean" version of your theorem prover where correctness is easier to see, and then let the original prover export a proof that the superlean prover can check.

A bigger concern, I think, is that mathematicians might no longer "understand" the proofs the machine generated. This concern is not that the proof might be wrong despite being checked, but that the proof is correct yet cannot be "understood" by humans. I don't think that is too much of a concern either, as we can surely design the machine in a way that the generated proofs are modular, building up beautiful theories on their own.

A final concern might be that humans lose their grasp of what "understanding" even means. I think that is the biggest concern, and I see it all the time when formalisation is discussed here on HN. Many here think that understanding is simply being able to follow the rules, and that the rules are an arbitrary game. That is simply not true. Obviously not, because think about it: what does it mean to "correctly follow the rules"?

I think the way to address this final concern (and maybe the other concerns as well) is to put beauty at the heart of our theorem provers. We need beautiful proofs, written in a beautiful language, checked and created by a beautiful machine.

mutkach•1mo ago
Understanding IMO is "developing a correct mental model of a concept". Some heuristics of correctness:

Feynman: "What I cannot build, I do not understand"

Einstein: "If you can't explain it to a six year old, you don't understand it yourself"

Of course none of this changes anything around machine-generated proofs. The point of a proof is to communicate ideas; formalization and verification are simply a certificate showing that those ideas are worth checking out.

practal•1mo ago
Ideas and correctness depend on each other. You usually start with an idea, and check if it is correct. If not, you adjust the idea until it becomes correct. Once you have a correct idea, you can go looking for more ideas based on this.

Formalisation and (formulating) ideas are not separate things, they are both mathematics. In particular, it is not that one should live in Lean, and the other one in blueprints.

Formalisation and verification are not simply certificates. For example, what language are you using for the formalisation? That influences how you can express your ideas formally. The more beautiful your language, the more the formal counterpart can look like the original informal idea. This capability might actually be a way to define what it means for a language to be beautiful, together with simplicity.

mutkach•1mo ago
I share your fascination with proof assistants and formal verification, but the reality is that I have yet to see an actual mathematician working on frontier research who is excited about formalizing their ideas, or enthusiastic about putting in the actual (additional) work to build the formalization prerequisites needed to even state the theorem in that (formal) language.
hollerith•1mo ago
>I am yet to see an actual mathematician working on frontier research who is excited about formalizing their ideas

British mathematician Kevin Buzzard has been evangelizing proof assistants since 2017. I'll leave it to you to decide whether he is working on frontier research:

https://profiles.imperial.ac.uk/k.buzzard/publications

mutkach•1mo ago
Sure, he is one of the biggest advocates for it, and yet he has been quite clear that it is not yet possible for him to do his actual research in Lean.

Quoting one of the recent papers (2020):

> With current technology, it would take many person-decades to formalise Scholze’s results. Indeed, even stating Scholze’s theorems would be an achievement. Before that, one has of course to formalise the definition of a perfectoid space, and this is what we have done, using the Lean theorem prover.

hollerith•1mo ago
Thanks for that.
practal•1mo ago
You know what? I agree with you. I have not formalised any of my stuff on abstraction logic [1] for that reason (although it would not be too difficult in Isabelle or Lean); I want to write it down in Practal [2], and making that possible is what I see as the first serious milestone for Practal. Eventually, I want Practal to feel more natural than paper, and definitely more natural than LaTeX. That's the goal, and I feel many people now see that this will be possible with AI within the next decade.

[1] http://abstractionlogic.com

[2] https://practal.com

lgas•1mo ago
> Personally, I don't think that is much of a problem, as you should be able to come up with a "superlean" version of your theorem prover where correctness is easier to see, and then let the original prover export a proof that the superlean prover can check.

I think this is sort of how Lean itself already works. It has a minimal trusted kernel that everything is forced through. Only the kernel has to be verified.
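
A small illustration of that separation (the theorem name is made up, and the tactic choice is just for the sketch): the tactic front end is untrusted search machinery; whatever it finds is handed to the kernel as a proof term, which you can inspect directly:

    -- The tactic block is untrusted: it merely searches for a proof term.
    theorem zero_add_eq (n : Nat) : 0 + n = n := by
      simp

    -- The kernel re-checks the resulting term. `#print` shows the term that
    -- was actually certified, independent of how the tactic found it.
    #print zero_add_eq

That separation is also what makes the "superlean re-checker" idea above workable: only the kernel's small term language has to be re-verified, not the tactic machinery.
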

practal•1mo ago
In principle, this is how these systems work. In practice, there are usually plenty of things that make it difficult to say for sure if you have a proof of something.
mutkach•1mo ago
> more than 1,000,000 lines of Lean 4 code and concluding with a QED.

Usually the point of the proof is not to figure out whether a particular statement is true (which may be of little interest by itself, see the Collatz conjecture), but to develop some good ideas _while_ proving that statement. So there's not much value in a verified million lines of Lean by itself. You'd want to study the (Lean) proof hoping to find some kind of new math invented in it, or a particular trick worth noticing.

An LLM may first develop a proof in natural language, then establish its correctness by autoformalizing it in Lean. Maybe it will be worth something in that case.

aejm•1mo ago
No, the point of proofs in mathematics IS to prove that a particular statement is true, given certain axioms (accepted truths). Yes, there are numerous benefits beyond demonstrating that something is undeniably true, perhaps more “useful” than the proof itself, but math is a method of formal knowledge that doesn’t accept shortcuts.
peterkagey•1mo ago
A lot of mathematicians (myself included) would say that the point of proofs isn’t entirely to know whether or not a statement is true, but that it exists to promote human understanding. In fact, I’d argue that at some level, knowing whether or not a theorem is true can be less important than understanding an argument.

This is why having multiple different proofs is valuable to the math community—because different proofs offer different perspectives and ways of understanding.

zozbot234•1mo ago
You don't have to trust the full Lean toolchain, only the trusted proof kernel (which is small enough to be understood by a human) and any desugarings involved in converting the theorem statement from high-level Lean to whatever the proof kernel accepts (these should also be simple enough). The proof itself can be checked automatically.
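
A sketch of how one can audit that desugaring step, assuming nothing beyond core Lean: pretty-print a statement with all elaboration shown, so you see what the kernel was actually asked to certify rather than the surface notation:

    -- Show the fully elaborated statement, with instances and implicit
    -- arguments spelled out, instead of the high-level notation.
    set_option pp.all true in
    #check @Nat.zero_add
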
tuhgdetzhh•1mo ago
Let's assume the kernel is correct and a mathematician trusts it; now what happens if it is the proof itself that he can no longer understand, because it has gotten too sophisticated? I think that this would indeed be a difference, and a crossing point in the history of mathematical proofs.
zozbot234•1mo ago
In principle, it's no different than failing to understand the details of any calculation. If the calculating process is executed correctly, you can still trust the outcome.
esafak•1mo ago
I am not a mathematician either, but having read Tao's essay on intuition (https://terrytao.wordpress.com/career-advice/theres-more-to-...) I can easily see how he would profitably point an LLM in the right direction and suggest approaches that would conclude in a successful proof upon connecting the dots.
IshKebab•1mo ago
This looks really good and complete. And I like that it starts from normal programming stuff rather than immediately jumping into type universes and proof expressions.

Edit: some feedback on the monads page:

1. "The bind operation (written >>=) is the heart of the monad." .. but then it doesn't use >>= at all in the code example.

2. "The left arrow ← desugars to bind; the semicolon sequences operations." .. again, no semicolons in the code example.

3. I don't really understand the List Monad section. The example doesn't seem to use any monad interfaces.

4. "The same laws look cleaner in the Kleisli category, where we compose monadic functions directly. If f:A→MB and g:B→MC, their Kleisli composition is g∘f:A→MC" - it was going so well!