frontpage.
Made with ♥ by @iamnishanth

Open Source @Github

Mathematical Exploration and Discovery at Scale

https://terrytao.wordpress.com/2025/11/05/mathematical-exploration-and-discovery-at-scale/
64•nabla9•2h ago•13 comments

Ratatui – App Showcase

https://ratatui.rs/showcase/apps/
402•AbuAssar•9h ago•120 comments

80-year-old grandmother becomes oldest woman to finish Ironman World Championship

https://bigislandnow.com/2025/10/19/80-year-old-grandmother-becomes-oldest-woman-to-finish-ironma...
26•austinallegro•1h ago•2 comments

Show HN: qqqa – a fast, stateless LLM-powered assistant for your shell

https://github.com/matisojka/qqqa
15•iagooar•57m ago•11 comments

Solarpunk is happening in Africa

https://climatedrift.substack.com/p/why-solarpunk-is-already-happening
882•JoiDegn•15h ago•426 comments

What the hell have you built

https://wthhyb.sacha.house/
215•sachahjkl•3h ago•140 comments

Dillo, a multi-platform graphical web browser

https://github.com/dillo-browser/dillo
348•nazgulsenpai•17h ago•142 comments

End of Japanese community

https://support.mozilla.org/en-US/forums/contributors/717446
638•phantomathkg•9h ago•466 comments

ChatGPT terms disallow its use in providing legal and medical advice to others

https://www.ctvnews.ca/sci-tech/article/openai-updates-policies-so-chatgpt-wont-provide-medical-o...
316•randycupertino•17h ago•313 comments

Firefox profiles: Private, focused spaces for all the ways you browse

https://blog.mozilla.org/en/firefox/profile-management/
273•darkwater•1w ago•143 comments

Recursive macros in C, demystified (once the ugly crying stops)

https://h4x0r.org/big-mac-ro-attack/
98•eatonphil•10h ago•50 comments

NY school phone ban has made lunch loud again

https://gothamist.com/news/ny-smartphone-ban-has-made-lunch-loud-again
337•hrldcpr•22h ago•250 comments

Why aren't smart people happier?

https://www.theseedsofscience.pub/p/why-arent-smart-people-happier
369•zdw•19h ago•456 comments

Chibi Izumi: Phased dependency injection for TypeScript

https://github.com/7mind/izumi-chibi-ts
9•pshirshov•5d ago•0 comments

The state of SIMD in Rust in 2025

https://shnatsel.medium.com/the-state-of-simd-in-rust-in-2025-32c263e5f53d
209•ashvardanian•17h ago•119 comments

Show HN: Flutter_compositions: Vue-inspired reactive building blocks for Flutter

https://github.com/yoyo930021/flutter_compositions
21•yoyo930021•5h ago•8 comments

Vacuum bricked after user blocks data collection – user mods it to run anyway

https://www.tomshardware.com/tech-industry/big-tech/manufacturer-issues-remote-kill-command-to-nu...
303•toomanyrichies•4d ago•102 comments

I was right about dishwasher pods and now I can prove it [video]

https://www.youtube.com/watch?v=DAX2_mPr9W8
462•hnaccount_rng•1d ago•333 comments

Ruby and Its Neighbors: Smalltalk

https://noelrappin.com/blog/2025/11/ruby-and-its-neighbors-smalltalk/
201•jrochkind1•20h ago•117 comments

New gel restores dental enamel and could revolutionise tooth repair

https://www.nottingham.ac.uk/news/new-gel-restores-dental-enamel-and-could-revolutionise-tooth-re...
515•CGMthrowaway•16h ago•192 comments

A new oral history interview with Ken Thompson

https://computerhistory.org/blog/a-computing-legend-speaks/
40•oldnetguy•5d ago•4 comments

Show HN: CoordConversions NPM Module for Map Coordinate Conversions

https://github.com/matthewcsimpson/CoordConversions
3•smatthewaf•1w ago•0 comments

Scientists growing colour without chemicals

https://www.forbes.com/sites/maevecampbell/2025/06/20/dyeing-for-fashion-meet-the-scientists-grow...
32•caiobegotti•4d ago•18 comments

Carice TC2 – A non-digital electric car

https://www.caricecars.com/
242•RubenvanE•21h ago•177 comments

App Store web has exposed all its source code

https://www.reddit.com/r/webdev/comments/1onnzlj/app_store_web_has_exposed_all_its_source_code/
240•redbell•2d ago•112 comments

The Basic Laws of Human Stupidity (1987) [pdf]

https://gandalf.fee.urv.cat/professors/AntonioQuesada/Curs1920/Cipolla_laws.pdf
95•bookofjoe•12h ago•35 comments

I want a good parallel language [video]

https://www.youtube.com/watch?v=0-eViUyPwso
79•raphlinus•2d ago•43 comments

The shadows lurking in the equations

https://gods.art/articles/equation_shadows.html
276•calebm•21h ago•84 comments

Radiant Computer

https://radiant.computer
208•beardicus•22h ago•148 comments

The Transformations of Fernand Braudel

https://www.historytoday.com/archive/behind-times/transformations-fernand-braudel
8•benbreen•5d ago•2 comments

Mathematical Exploration and Discovery at Scale

https://terrytao.wordpress.com/2025/11/05/mathematical-exploration-and-discovery-at-scale/
64•nabla9•2h ago

Comments

piker•1h ago
That was dense but seemed nuanced. Anyone care to summarize for those of us who lack the mathematics nomenclature and context?
qsort•47m ago
I'm not claiming to be an expert, but more or less what the article says is this:

- Context: Terence Tao is one of the best mathematicians alive.

- Context: AlphaEvolve is an optimization tool from Google. It differs from traditional tools because the search is guided by an LLM, whose job is to mutate a program written in a normal programming language (they used Python). Hallucinations are not a problem because the LLM is only a part of the optimization loop. If the LLM fucks up, that branch is cut.

- They tested this over a set of 67 problems, including both solved and unsolved ones.

- They find that in many cases AlphaEvolve achieves similar results to what an expert human could do with a traditional optimization software package.

- The main advantages they find are the ability to work at scale, "robustness" (no need to tune the algorithm for each problem), and better interpretability of results.

- Unsurprisingly, well-known problems likely to be in the training set quickly converged to the best known solution.

- Similarly unsurprisingly, the system was good at "exploiting bugs" in the problem specification. Imagine an underspecified unit test that the system would maliciously comply with. They note that it takes significant human effort to construct an objective function that can't be exploited this way.

- They find the system doesn't perform as well in some areas of mathematics, like analytic number theory. They conjecture that this is because those problems are less amenable to an evolutionary approach.

- In one case they could use the tool to very slightly beat an existing bound.

- In another case they took inspiration from an inferior solution produced by the tool to construct a better (entirely human-generated) one.

It's not doing the job of a mathematician by any stretch of the imagination, but to my (amateur) eye it's very impressive. Google is cooking.

nsoonhui•35m ago
>> If the LLM fucks up, that branch is cut.

Can you explain more about this? How on earth are we supposed to know the LLM is hallucinating?

khafra•32m ago
Math is a verifiable domain. Translate a proof into Lean and you can check it in a non-hallucination-vulnerable way.
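For a sense of what that looks like, here's a minimal Lean 4 example (using a lemma from Lean's core library); the kernel either accepts the proof term or rejects it, with no room for plausible-sounding nonsense:

```lean
-- The Lean kernel checks this proof mechanically.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A hallucinated "proof" term such as `Nat.mul_comm a b` would simply
-- fail to type-check; it cannot be accepted on vibes.
```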
tux3•26m ago
In this case AlphaEvolve doesn't write proofs, it uses the LLM to write Python code (or any language, really) that produces some numerical inputs to a problem.

They just try out the inputs on the problem they care about. If the code gives better results, they keep it around. They actually keep a few of the previous versions that worked well as inspiration for the LLM.

If the LLM is hallucinating nonsense, it will just produce broken code that gives horrible results, and that idea will be thrown away.

qsort•24m ago
We don't, but the point is that it's only one part of the entire system. If you have a (human-supplied) scoring function, then even completely random mutations can serve as a mechanism to optimize: you generate a bunch, keep the better ones according to the scoring function and repeat. That would be a very basic genetic algorithm.

The LLM serves to guide the search more "intelligently" so that mutations aren't actually random but can instead draw from what the LLM "knows".
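A toy sketch of that very basic genetic algorithm in Python, with a made-up scoring function and random Gaussian mutations standing in for the LLM's "smart" edits:

```python
import random

random.seed(0)  # for reproducibility

def score(x):
    # Toy objective: maximized at x = 3
    return -(x - 3.0) ** 2

def mutate(x):
    # Completely random perturbation, standing in for LLM-guided edits
    return x + random.gauss(0, 0.5)

# Generate a bunch, keep the better ones according to the scoring
# function, and repeat.
population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(100):
    candidates = population + [mutate(x) for x in population]
    population = sorted(candidates, key=score, reverse=True)[:20]

best = population[0]
```

Even with blind mutations this converges near the optimum; the LLM's role in AlphaEvolve is to make the `mutate` step far less blind.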

energy123•6m ago
Google's system is like any other optimizer, where you have a scoring function, and you keep altering the function's inputs to make the scoring function return a big number.

The difference here is the function's inputs are code instead of numbers, which makes LLMs useful because LLMs are good at altering code. So the LLM will try different candidate solutions, then Google's system will keep the good ones and throw away the bad ones (colloquially, "branch is cut").
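A toy illustration of the code-as-input idea, with hand-written candidate strings standing in for LLM proposals (the target function and candidates here are invented for the example):

```python
import math

# Hypothetical candidate programs, standing in for LLM-proposed edits.
# Each defines f(x); the goal is to approximate sqrt on a few points.
candidates = [
    "def f(x): return x / 2",
    "def f(x): return 1 + (x - 1) / 3",
    "def f(x): return x ** 0.5",
    "def f(x): return x **",  # hallucinated nonsense: doesn't even parse
]

def score(src):
    # Lower error = higher score; broken code scores -inf
    # (colloquially, that "branch is cut").
    try:
        namespace = {}
        exec(src, namespace)
        f = namespace["f"]
        error = sum((f(x) - math.sqrt(x)) ** 2 for x in [1, 2, 3, 4])
        return -error
    except Exception:
        return float("-inf")

best = max(candidates, key=score)
```

The hallucinated candidate is harmless: it fails to compile, gets the worst possible score, and is thrown away automatically.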

stabbles•1h ago
Link to the problems: https://google-deepmind.github.io/alphaevolve_repository_of_...
iNic•36m ago
I didn't know the sofa problem had been resolved. Link for anyone else: https://arxiv.org/abs/2411.19826
muldvarp•26m ago
There seems to be zero reason for anyone to invest any time into learning anything besides trades anymore.

AI will be better than almost all mathematicians in a few years.

andrepd•25m ago
I'm very sorry for anyone with such a worldview.
quchao•8m ago
very nice~
tornikeo•3m ago
I love this. I think of mathematics as writing programs, but for brains. Not all programs are useful, and having AI write the less useful ones would save us humans our limited time. Maybe someday AI will help make even more impactful discoveries?

Exciting times!