frontpage.

Earth is going to spin much faster over the next few months

https://www.livescience.com/planet-earth/earth-is-going-to-spin-much-faster-over-the-next-few-months-so-fast-that-several-days-are-going-to-get-shorter
1•TMEHpodcast•1m ago•0 comments

Home-manager is a false enlightenment

https://fzakaria.com/2025/07/07/home-manager-is-a-false-enlightenment
1•setheron•4m ago•0 comments

Weedkiller ingredient widely used in US can damage organs and gut bacteria

https://www.theguardian.com/environment/2025/jul/06/weedkiller-diquat-organ-damage-study
1•mikhael•5m ago•0 comments

Photography 'Rules' That Social Media Destroyed

https://fstoppers.com/social-media/5-photography-rules-social-media-destroyed-705854
2•patrakov•6m ago•0 comments

Show HN: Agents for DevOps? Demo

https://www.bitflux.ai/examples/
1•jared_hulbert•12m ago•0 comments

Poll: Interested in joining a "Founders over Fifty" group?

1•sendos•15m ago•0 comments

A Chinese Wikipedia editor spent years writing fake Russian medieval history (2022)

https://www.engadget.com/chinese-wikipedia-editor-fake-russian-medieval-history-122001604.html
2•georgecmu•18m ago•0 comments

Kind Candorship – A reflection on the high cost of being a 'straight shooter'

https://tloriato.com/essay/2025/07/07/kind-candorship/
1•tloriato•21m ago•0 comments

Arizona brings a grid battery online ahead of peak demand

https://electrek.co/2025/07/07/arizona-brings-a-huge-grid-battery-online-ahead-of-peak-demand/
2•toomuchtodo•21m ago•0 comments

Atlassian migrated 4M Postgres databases to shrink AWS bill

https://www.theregister.com/2025/07/07/asia_tech_news_in_brief/
2•kmdupree•23m ago•0 comments

You will own NOTHING and be HAPPY [video]

https://www.youtube.com/watch?v=rAsgjKBkKMA
2•tartoran•24m ago•0 comments

From Prompt Towards Silicon

https://sidpm.com
1•suraj_sindhia•25m ago•1 comments

Liquid glass, now with frosted tips

https://birchtree.me/blog/liquid-glass-now-with-frosted-tips/
2•robenkleene•28m ago•0 comments

Build your own SQLite, Part 6: Overflow pages

https://blog.sylver.dev/build-your-own-sqlite-part-6-overflow-pages
3•sevender•29m ago•0 comments

Ted Nelson struggles with uncomprehending radio interviewer [video] (1979)

https://www.youtube.com/watch?v=RVU62CQTXFI
2•EGreg•30m ago•0 comments

Atomic macOS infostealer adds backdoor for persistent attacks

https://www.bleepingcomputer.com/news/security/atomic-macos-infostealer-adds-backdoor-for-persistent-attacks/
4•sandwichsphinx•30m ago•0 comments

We just hit $1M ARR in 4 years. With zero funding

https://projectionlab.com/blog/we-reached-1m-arr-with-zero-funding
4•jonkuipers•34m ago•0 comments

Evidence our brains make neurons in adulthood may close century-old debate

https://www.science.org/content/article/genetic-evidence-our-brains-make-new-neurons-adulthood-may-close-century-old-debate
3•gnabgib•36m ago•0 comments

The Psychological Significance of Eiji Yoshikawa's Musashi

https://abesorock.substack.com/p/the-psychological-significance-of
2•stopachka•47m ago•0 comments

AMD OpenSIL PoC Still Being Worked on for Phoenix SoCs

https://www.phoronix.com/news/AMD-openSIL-Phoenix-Turin-2025
3•pietrushnic•50m ago•0 comments

Caltech Agrees to Settle Lawsuit Accusing It of Misleading Students

https://www.nytimes.com/2025/07/07/us/caltech-simplilearn-settlement-bootcamp.html
4•breadwinner•52m ago•0 comments

Guy built a TikTok replacement for students

https://skilltok.vercel.app/
2•iboshidev•53m ago•1 comments

Core Conversations: Apple M4 vs. Lunar Lake/ Arrow Lake

https://www.intel.com/content/www/us/en/content-details/859225/core-conversations-apple-m4-vs-lunar-lake-arrow-lake.html
2•high_na_euv•55m ago•1 comments

N8n AI Workflows – 3,400 Workflows and an LLM Prototype

2•sayedev•1h ago•0 comments

Numbers are in and NYC congestion pricing is a big 'success,' Hochul says

https://gothamist.com/news/numbers-are-in-and-nyc-congestion-pricing-is-a-big-success-hochul-says
7•rntn•1h ago•0 comments

Naming Software Teams

https://staysaasy.com/management/2025/07/06/team-names.html
2•thisismytest•1h ago•0 comments

Nonchalant Situationship Advice (Scientific)

https://dubrizz.com/
2•Loyae•1h ago•0 comments

xmcp – TypeScript framework for building and shipping MCP applications

https://xmcp.dev/
6•ragojose•1h ago•0 comments

Why Are Protesters in Mexico City Angry at Remote Workers?

https://www.nytimes.com/2025/07/07/world/americas/mexico-city-protests-rent-prices-tourists.html
6•RestlessMind•1h ago•4 comments

GitHub status page displays Atlassian cookie notice

3•eaurouge•1h ago•1 comments

My first verified imperative program

https://markushimmel.de/blog/my-first-verified-imperative-program/
107•TwoFx•5h ago

Comments

Joker_vD•5h ago
Naturally, this proof only works for arbitrary-precision integers: when you use fixed-precision integers, the algorithm will wrongfully report "false" for arrays like e.g. [INT_MIN, -1] or (if you insist on C semantics) [UINT_MAX, 1].

Hopefully the proof would break if one tried to transfer it over?
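
For concreteness, here is a minimal C sketch of the kind of "does any pair sum to zero?" check being discussed (an illustration only, not the post's actual Lean algorithm). With a fixed-width int the addition itself can overflow, which is exactly where an arbitrary-precision proof stops applying:

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical sketch of a "does any pair sum to zero?" check.
     * With mathematical integers the comparison is unambiguous; with C's
     * fixed-width int, a[i] + a[j] can overflow (undefined behavior) for
     * inputs such as {INT_MIN, -1}, so an arbitrary-precision proof's
     * assumptions no longer hold once the code is transferred. */
    static bool has_pair_summing_to_zero(const int *a, size_t n) {
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++)
                if (a[i] + a[j] == 0)   /* overflow here is UB for signed int */
                    return true;
        return false;
    }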

Jtsummers•5h ago
> the algorithm will wrongfully report "false" for arrays like e.g. [INT_MIN, -1]

`INT_MIN + -1` is not 0 so it should report false in that case.

For UINT_MAX, the algorithm would need to be reconsidered, though, since it's written with signed integers in mind.

> Hopefully the proof would break if one tried to transfer it over?

Hopefully. The proof would have to be modified to account for the actual types. If you're using bounded integers you'd need to write a different proof.

derdi•4h ago
> For UINT_MAX, the algorithm would need to be reconsidered, though, since it's written with signed integers in mind.

The algorithm is written assuming that unary - produces the additive inverse. That is also true for C's unsigned integers. -1U == UINT_MAX, -UINT_MAX == 1U. It Just Works.
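
The identities are easy to check directly; a tiny C program (unsigned arithmetic is defined modulo 2^N, so unary minus really is the additive inverse):

    #include <assert.h>
    #include <limits.h>

    int main(void) {
        /* Unsigned arithmetic wraps modulo 2^N by definition, so these hold: */
        assert(-1U == UINT_MAX);
        assert(-UINT_MAX == 1U);
        assert(UINT_MAX + 1U == 0U);   /* defined wraparound, not UB */
        return 0;
    }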

junon•56m ago
INT_MIN - 1 is undefined behavior in C.
Jtsummers•49m ago
Sure, can you find an example where INT_MIN - 1 results in 0 though?
zelphirkalt•4h ago
Wait, so much effort and it doesn't even consider this widely known issue? That would mean that even though all this effort has been spent, a decent programmer still has a better idea of whether something is correct than the proof system used here. And worse, this might lull one into thinking that it must be correct, while it actually breaks for a simple case.
Jtsummers•4h ago
> a decent programmer still has a better idea of whether something is correct than the proof system used here.

The proof is correct in the language it's written for, Lean. If you change the context (axioms) of a proof then the proof may be invalidated. This is not a surprising thing to anyone who spends a second thinking about it.

Joker_vD•4h ago
> anyone who spends a second thinking about it.

Except most programmers don't spend even a second thinking about it, and we end up with "int mid = (low + high) / 2;" bugs in standard implementations of binary search in e.g. Java. And that implementation even had a written proof accompanying it!
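
For reference, the standard fix is a one-line change to how the midpoint is computed; a minimal C sketch (not the actual Java library code):

    /* Binary search over a sorted int array.
     * The infamous bug was `int mid = (low + high) / 2;`, which can overflow
     * when low and high are both large; `low + (high - low) / 2` computes the
     * same midpoint without leaving the int range (given 0 <= low <= high). */
    static int binary_search(const int *a, int n, int key) {
        int low = 0, high = n - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;   /* overflow-safe midpoint */
            if (a[mid] < key)
                low = mid + 1;
            else if (a[mid] > key)
                high = mid - 1;
            else
                return mid;                     /* found */
        }
        return -1;                              /* not found */
    }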

Jtsummers•3h ago
Try that in SPARK/Ada. It'll stop you there [if it can't prove that low + high won't overflow]. Don't take a proof written with one set of assumptions (in this case how integers are expected to behave) and translate it to another language where those assumptions don't hold.
zelphirkalt•1h ago
Are writing our next program in Lean then? Where does that run?

There seems to be a fundamental difficulty here. Either we prove things in the language we want to use, which means modelling the behavior of the things we use in that language, or we prove things in Lean, but then cannot apply that to an actual implementation, because of issues like the one above.

I would be surprised if there were no standard approach for modelling bounded integers and their specific properties (which differ from language to language) in a proof language like this. More people must have thought about this and come up with solutions.

bollu•1h ago
Yes, Lean is executable, and its natural numbers run as arbitrary-precision integers: they're stored as tagged pointers, with numbers up to 63 bits kept as plain machine words and larger numbers becoming GMP-encoded.
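
Schematically, the tagging idea looks like the following C sketch (an illustration of the general technique only; the exact bit layout of Lean's runtime is not reproduced here):

    #include <stdint.h>
    #include <stdio.h>

    /* Tagged-pointer sketch: if the low bit is 1, the word carries a small
     * number shifted left by one; if the low bit is 0, it is a pointer to a
     * heap object (e.g. a GMP bignum for values that don't fit in 63 bits). */
    typedef uintptr_t obj;

    static int      is_scalar(obj o)        { return (o & 1) != 0; }
    static obj      box_scalar(uint64_t n)  { return (obj)((n << 1) | 1); }
    static uint64_t unbox_scalar(obj o)     { return (uint64_t)(o >> 1); }

    int main(void) {
        obj o = box_scalar(42);
        if (is_scalar(o))
            printf("small value: %llu\n", (unsigned long long)unbox_scalar(o));
        return 0;
    }
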
Jtsummers•1h ago
> Are writing our next program in Lean then? Where does that run?

That first question is hard to parse. If you mean "Are you writing your next program in Lean then?" then: No, but in principle we could, it runs on each OS we use (Windows and Linux, standard x64 hardware). If you mean something else, I can't figure out what it would be.

> Either we prove things in the language we want to use

Sure, I mentioned SPARK/Ada. There are systems for C, Java, and others that also work and understand their types so you don't have to add extra modeling.

> which means modelling the behavior of the things we use in that language

It would already be done for you; you wouldn't have to model Ada's integers in SPARK, for instance.

> we prove things in Lean, but then cannot apply that to an actual implementation, because of issues like the one above.

https://lean-lang.org/doc/reference/latest/Basic-Types/Fixed...

If you knew your target system was using fixed-width integers, you'd use this.

aseipp•1h ago
It does not look like it due to its largest target audience (hardcore math nerds writing proofs), but Lean 4 is actually a meta-programming language, like Racket -- it is one single unified language for programming, meta-programming, and theorem proving, on top of a small core. The implementation of Lean is almost entirely written in Lean, including components like LSP servers, the compiler itself, many efficient data structures, etc. It is a key use case to write real-world programs, because the implementation itself is such a program.

In the long past, Lean 3 had this idea that you could use one language both for writing proofs and for writing proof automation -- programs that manipulate proofs and automatically do things. Like in the article itself, the 'grind' thingamabob is proof automation. (Rocq has two separate languages for proofs and proof automation.) But there was a problem: Lean started as a theorem prover, the implementation tried to move towards "executing programs", and it didn't work very well and was slow. The prototype compiler from Lean 3 to C++ ran out of steam before Lean 3 got canned.

Lean 4 instead went and did things the other way around: it started as a programming language that was executable. The Lean 4 compiler was self-hosted during development for a long time, way before anyone ported any big math proofs to it from Lean 3. Why did they do this? Because the key insight is that if you want to write programs that manipulate proofs (AKA programs), the best thing to have at hand is a robust general programming language -- like Lisp or Racket. And so Lean is macro-based, too, and like Racket it allows you to have families of languages and towers of macros that expand as deep as you want. Meta-programs that write programs that write programs, etc...

So in Lean you write Lean programs that can manipulate Lean ASTs, and you write "reader macros" that allow you to use domain-specific syntax right inside the source file for any kind of math or programming-language DSL you want. And all those macros and meta-programs are compiled and executed efficiently just like you'd expect. Finally, there is a total fragment of the language that you can actually write proofs over and reason about. And there is a big library called "mathlib" with lots of macros for writing math and math proofs in this language-fragment-we-can-reason-and-prove-things-with.

Lean 4 is very close in spirit to the Lisp idea, but modified for math proving and generalized programming. It's very unique and powerful.

DavidVoid•4h ago
> the algorithm will wrongfully report "false" for arrays like e.g. [INT_MIN, -1]

If you have INT_MIN along with any other negative number in the array then your program has undefined behavior in C. Signed integer overflow is UB (but unsigned overflow is not).

andrepd•2h ago
But false is the correct result for those cases. Addition is addition, and overflow is undefined (= the compiler can assume it doesn't happen); it's not addition modulo 2^n.
Joker_vD•1h ago
We are not talking about C here. Imagine it was e.g. Java, or C#, or Rust in release mode, or heck, even Lean itself but with fixed-precision integers.
necunpri•2h ago
This is the strength of typing, right?

If I can specify the type of my input I can ensure the verification.

SUPERX_112•5h ago
nice
munchler•4h ago
Lean is awesome and this is an impressive new feature, but I can't help but notice that the proof is significantly longer and more complex than the program itself. I wonder how well this will scale to real-world programs.
grumbelbart•4h ago
Long-term this would be done using LLMs. It would also solve LLMs' code quality issues - they could simply prove that the code works right.
codebje•1h ago
Maybe very long term. I turn off code assistants when doing Lean proofs because the success rate for just suggestions is close to zero.
amw-zero•3h ago
Research points to there being a quadratic relationship between automated proof and code size: https://trustworthy.systems/publications/nictaabstracts/Mati....

Specifically, the relationship is between the _specification_ and the proof, and it was done for proofs written in Isabelle and not Lean.

The good news is that more and more automation is possible for proofs, so the effort to produce each proof line will likely go down over time. Still, the largest full program we've fully verified is much less than 100,000 LOC. seL4 (verified operating system) is around 10,000 lines IIRC.

armchairhacker•2h ago
Real-world programs can be verified by formally proving properties on a small part of the code (called the kernel) in a way that transitively guarantees those for the remaining code.

For example, Rust's borrow checker guarantees* memory safety of any code written in Rust, even a 10M+ LOC project. Another example is seL4, a formally-verified micro-kernel (https://sel4.systems/About/seL4-whitepaper.pdf).

* Technically not; even if the code doesn't use `unsafe`, not only is Rust's borrow checker not formally verified, there are soundness holes (https://github.com/rust-lang/rust/issues?q=is%3Aopen%20is%3A...). However, in theory it's possible to formally prove that a subset of Rust can only encode memory-safe programs, and in practice Rust's borrow checker is so effective that a 10M+ LOC project without unsafe still probably won't have memory issues.

serbuvlad•1h ago
What's a memory issue?

If I access beyond the end of an array in Rust, the panic handler runs and starts unwinding my stack. If I access beyond the end of an array in C++ with .at(), the exception handler runs and starts unwinding my stack. If I access beyond the end of an array in C, the SIGSEGV handler may (*) run and I could, if I wanted to, start unwinding my stack.

Ah, but in C, sometimes if I access the wrong memory, I get garbage instead of a panic.

Sure, and if I store my data in a Rust array and keep indexes into that array all over the place as a sort of weak reference (something I've seen Rust programmers use and talk about all the time), I can easily fetch the wrong data too.

Rust provides a robust type system and a borrow checker which avoids a lot of common problems at the expense of adhering to a particular programming style. That's fine. That's worth advocating for.

But it's no panacea. Not even close.

My favorite memory about this is a programmer lambasting Go's strings (which are basically immutable byte vectors) for not enforcing UTF-8, like Rust strings.

He then said that this means that in Go you can print filenames to the screen that can break your terminal session if they contain invalid UTF-8, which Rust forces you to escape explicitly. The irony, of course, is that the characters that can break your terminal session are perfectly valid UTF-8.

Rust's type safety convinced this guy that his Rust program was immune to a problem that it was simply not immune to.
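
To make the UTF-8 point concrete: every byte below 0x80 is valid UTF-8, including ESC (0x1B), so a perfectly valid UTF-8 filename can still carry terminal control sequences. A tiny C illustration, with a made-up filename:

    #include <stdio.h>

    int main(void) {
        /* All bytes here are ASCII, hence valid UTF-8, yet printing the name
         * raw emits ESC [ 2 J, which clears the terminal screen. */
        const char *filename = "notes\x1b[2J.txt";
        printf("%s\n", filename);
        return 0;
    }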

caim•27m ago
Funny thing is that you can get undefined behavior and segfaults using only "safe Rust", and the Rust compiler has subtle bugs that allow you to disable important checks (like type checking), which can leave your code completely broken.

But thanks to some crazy propaganda, Rust devs believe that any Rust code is safe and sound no matter what.

https://github.com/Speykious/cve-rs/issues/49

rtpg•22m ago
My belief is that the core parts of a lot of programs where you want formal proofs are tricky enough that you need hand-holding, but the higher up the stack you go, the simpler your proofs get, since you did the hard stuff in the core.

Though it really does feel like we're still scratching the surface of proof writing practices. A lot of proofs I've seen seem to rely only on very low level building blocks, but stronger practitioners more immediately grab tools that make stuff simpler.

I would say, though, that it feels likely that your proofs are always going to be at least within an order of magnitude of your code, because in theory the longer your code is the more decision points there are to bring up in your proof as well. Though automatic proof searches might play out well for you on simpler proofs.

norir•4h ago
My brain has been slowly trained to reject imperative programming. This example could be rewritten in a tail-recursive manner using an immutable set, which would be simpler to verify for correctness even without a formal verifier.

I have found that while there is a learning curve to programming using only recursion for looping, code quality does go significantly up under this restriction.

Here is why I personally think tail recursion is better than looping: with tail recursion, you are forced to explicitly reenter the loop. Right off the bat, this makes it difficult to inadvertently write an infinite loop. The early exit problem is also eliminated because you just return instead of making a recursive call. Moreover, using recursion generally forces you to name the function that loops, which gives more documentation than a generic for construct. A halfway decent compiler can also easily detect tail recursion and rewrite it as a loop (and inline it if the recursive function is only used in one place), so there need not be any runtime performance cost to tail recursion instead of looping.

Unfortunately, many languages do not support tail call optimization or nested function definitions, and also have excessively wordy function definition syntax, which makes loops more convenient to write in those languages. This conditions one to think in loops rather than tail recursion. Personally, I think Lean would be better if it didn't give in and support imperative code, and instead helped users learn how to think recursively.
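
For a concrete comparison, here is the same trivial sum written both ways, sketched in C rather than Lean (whether a compiler actually turns the second form into a jump depends on optimization settings):

    #include <stddef.h>
    #include <stdint.h>

    /* Loop form: re-entry into the loop is implicit in the `for` construct. */
    static int64_t sum_loop(const int *a, size_t n) {
        int64_t acc = 0;
        for (size_t i = 0; i < n; i++)
            acc += a[i];
        return acc;
    }

    /* Tail-recursive form: the base case is the explicit exit, and re-entering
     * the loop is the explicit call in tail position; with tail-call
     * elimination this compiles to the same kind of jump as the loop above. */
    static int64_t sum_rec(const int *a, size_t n, int64_t acc) {
        if (n == 0)
            return acc;
        return sum_rec(a + 1, n - 1, acc + a[0]);
    }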

kevindamm•4h ago
Which languages do support TCO at this point? From my recollection we have

* Scheme

* Haskell

* Elixir

* Erlang

* OCaml

* F#

* Scala

* (not Clojure)

* the JVM could remove tail-recursive calls, but IIRC this still hasn't been added for security reasons

* Racket

* Zig

* Lua

* Common Lisp, under certain compilers/interpreters

* Rust? (depends)

* Swift? (sometimes)

louthy•4h ago
> * F#

The .NET CLR supports the ‘.tail’ opcode, which means that any .NET-based language could support it. I’m hoping one day the C# team will get around to it. It seems like such low-hanging fruit.

nhubbard•4h ago
Kotlin as well, through the ‘tailrec’ marker on a function.
kevindamm•4h ago
Ah, thanks, good to know... but does that make it optional? I kind of like how OCaml requires a `let rec` annotation on any recursive definition, and I don't know when you wouldn't want to add tailrec.
alexisread•3h ago
Freeforth (implicit) and Ableforth (deliberately explicit)
taeric•3h ago
I don't understand the security reasons for not removing tail calls. Any chance you have a good place to read up on that?
kevindamm•1h ago
It was raised in one of the initial proposals, back in 2002

https://bugs.java.com/bugdatabase/view_bug?bug_id=4726340

but that looks like a dead link, and there's no Wayback archive.

IIRC, it's basically because some parts of the JVM use stack unwinding to figure out what userland code is calling certain system code; also, the current stack frame has metadata about lock status, used for allowing re-entrant locks, that you lose if you elide the entire recursive call (which the initial proposal did by only removing the few bytecode instructions that set up the call-stack frame and return from it).

A more informal proposal from ~2016 allows for soft tail calls and hard (annotated) tail calls, with some restrictions that evidently avoid issues with system calls and lock/reentry maintenance:

https://web.archive.org/web/20161112163441/https://blogs.ora...

And a video by one of the JVM architects at Oracle about adding TCO for Scala

https://www.youtube.com/watch?v=2y5Pv4yN0b0&t=1h02m18s

Also previously featured here on HN, a way to do it that avoids security concerns, by using goto instead of strictly deleting bytecode instructions:

https://news.ycombinator.com/item?id=22945725

zabzonk•2h ago
C++, depending on compiler and other stuff.
geoffhill•44m ago
Both Clang and GCC have musttail attributes that can force tail calls at specific return statements in C/C++.
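
For example, with Clang 13+ (and GCC versions that accept the attribute) it goes on the return statement, and compilation fails if the call cannot be compiled as a tail call; a minimal sketch:

    /* The attribute applies to the `return` of a call in tail position;
     * caller and callee must have compatible signatures. */
    static long countdown(long n, long acc) {
        if (n == 0)
            return acc;
        __attribute__((musttail)) return countdown(n - 1, acc + n);
    }

    long sum_to(long n) { return countdown(n, 0); }
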
salutis•9m ago
Also Fennel, both implicitly and explicitly with `tail!`.

Source: https://fennel-lang.org/reference#tail

louthy•4h ago
I agree, but also folds, traversals, list-comprehensions, and recursion-schemes work well and can be even more resistant to common bugs than regular recursion.

Although it’s hard to fault the simple elegance of recursion!

taeric•3h ago
This feels overly strong? I've certainly messed up my fair share of recursive calls.

I don't know why, but I have actually gotten a bit stronger on the imperative side of the divide in recent years. To the point that I found writing, basically, a GOTO-based implementation of an idea in Lisp to be easier than trying to do it using either loops or recursion. Which really surprised me.

I /think/ a lot of the difference comes down to how localized the thinking is. If I'm able to shrink the impact of what I want to do down to a few arguments, then recursion helps a ton. If I'm describing a constrained set of repetitive actions, loops. If I'm trying to hold things somewhat static as I perform different reductions and such, GOTO works.

I think "functional" gets a massive boost from advocates because a lot of functional code is presented as declarative. But that doesn't have to be the case. Nor can that help you if someone else hasn't done the actual implementation.

We can get a long way with very mechanical transformations, in the form of compilation. But the thinking can still have some very imperative aspects.

tuveson•1h ago
> I've certainly messed up my fair share of recursive calls.

It’s a common enough problem that the “why is my program crashing” website is basically named after it.

ngruhn•1h ago
Another case for recursion is that you have to think of the base case. With loops, people pathologically forget to handle 0, [], [[]], "", etc.
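
A small C illustration of that failure mode, using a made-up max-of-array example: the loop version quietly reads a[0] before the empty case ever comes up, while the recursive version cannot be written without choosing a base case.

    #include <limits.h>
    #include <stddef.h>

    /* Loop version: undefined behavior when n == 0 (reads a[0] regardless). */
    static int max_loop(const int *a, size_t n) {
        int best = a[0];
        for (size_t i = 1; i < n; i++)
            if (a[i] > best)
                best = a[i];
        return best;
    }

    /* Recursive version: the n == 0 case has to be decided explicitly. */
    static int max_rec(const int *a, size_t n) {
        if (n == 0)
            return INT_MIN;          /* explicit choice for the empty array */
        int rest = max_rec(a + 1, n - 1);
        return a[0] > rest ? a[0] : rest;
    }
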
jeremyscanvic•3h ago
That's really neat! I'm very excited for the future of Lean.