frontpage.

Amateur armed with ChatGPT solves an Erdős problem

https://www.scientificamerican.com/article/amateur-armed-with-chatgpt-vibe-maths-a-60-year-old-pr...
134•pr337h4m•10h ago•73 comments

Why has there been so little progress on Alzheimer's disease?

https://freakonomics.com/podcast/why-has-there-been-so-little-progress-on-alzheimers-disease/
120•chiefalchemist•3h ago•56 comments

USB Cheat Sheet (2022)

https://fabiensanglard.net/usbcheat/index.html
198•gwerbret•6h ago•47 comments

Flickr: The first and last great photo platform

https://petapixel.com/2026/04/22/flickr-the-first-and-last-great-photo-platform/
76•Nrbelex•3d ago•38 comments

OpenAI Privacy Filter

https://openai.com/index/introducing-openai-privacy-filter/
139•tanelpoder•3d ago•24 comments

The Free Universal Construction Kit

https://fffff.at/free-universal-construction-kit/
288•robinhouston•3d ago•56 comments

1-Bit Hokusai's "The Great Wave" (2023)

https://www.hypertalking.com/2023/05/08/1-bit-pixel-art-of-hokusais-the-great-wave-off-kanagawa/
535•stephen-hill•3d ago•88 comments

Using coding assistance tools to revive projects you never were going to finish

https://blog.matthewbrunelle.com/its-ok-to-use-coding-assistance-tools-to-revive-the-projects-you...
216•speckx•11h ago•121 comments

America's Geothermal Breakthrough

https://oilprice.com/Alternative-Energy/Geothermal-Energy/Americas-Geothermal-Breakthrough-Could-...
88•sleepyguy•8h ago•99 comments

The Joy of Folding Bikes

https://blog.korny.info/2026/04/19/the-joy-of-folding-bikes
107•pavel_lishin•3d ago•64 comments

Math Is Hard – OpenBSD Stories

http://miod.online.fr/software/openbsd/stories/vaxfp.html
57•signa11•2d ago•1 comment

Reviving BrowserID in 2026

https://wakamoleguy.com/p/reviving-browserid-in-2026
4•wakamoleguy•1h ago•0 comments

New 10 GbE USB adapters are cooler, smaller, cheaper

https://www.jeffgeerling.com/blog/2026/new-10-gbe-usb-adapters-cooler-smaller-cheaper/
555•calcifer•22h ago•332 comments

Optimizing Datalog for the GPU

https://dl.acm.org/doi/10.1145/3669940.3707274
28•tosh•2d ago•3 comments

The Long Reply

https://ironicsans.ghost.io/the-long-reply/
19•NaOH•2d ago•0 comments

Simulacrum of Knowledge Work

https://blog.happyfellow.dev/simulacrum-of-knowledge-work/
109•thehappyfellow•10h ago•41 comments

The George Business, by Roger Zelazny (1980)

https://www.eternal-flame.org/library/oldlibrary/georgebusiness.html
10•xeonmc•2d ago•0 comments

What async promised and what it delivered

https://causality.blog/essays/what-async-promised/
177•zdw•3d ago•201 comments

Mine, an IDE for Coalton and Common Lisp

https://coalton-lang.github.io/mine/
78•varjag•10h ago•30 comments

DeepSeek-V4 on Day 0: From Fast Inference to Verified RL with SGLang and Miles

https://www.lmsys.org/blog/2026-04-25-deepseek-v4/
9•mji•4h ago•0 comments

Does Internet Advertising Work?

https://freakonomics.com/podcast/does-advertising-actually-work-part-2-digital-ep-441/
5•hackthemack•2h ago•5 comments

Desmond Morris has died

https://www.bbc.com/news/articles/c51y797v200o
113•martey•5d ago•20 comments

How Hard Is It to Open a File?

https://blog.sebastianwick.net/posts/how-hard-is-it-to-open-a-file/
68•ffin•2d ago•11 comments

Martin Galway's music source files from 1980's Commodore 64 games

https://github.com/MartinGalway/C64_music
167•ingve•17h ago•25 comments

Discret 11, the French TV encryption of the 80s

https://fabiensanglard.net/discret11/
155•adunk•16h ago•27 comments

GPT‑5.5 Bio Bug Bounty

https://openai.com/index/gpt-5-5-bio-bug-bounty/
138•Murfalo•13h ago•98 comments

Lute: A Standalone Runtime for Luau

https://lute.luau.org/
73•vrn-sn•3d ago•12 comments

Colorado Adds Open-Source Exemption to Age-Verification Bill

https://fosstodon.org/@carlrichell/116460505717380644
76•terminalbraid•5h ago•26 comments

Tell HN: An app is silently installing itself on my iPhone every day

37•_-x-_•3h ago•32 comments

The Super Nintendo Cartridges

https://fabiensanglard.net/snes_carts/
6•offbyone42•3h ago•1 comment

Amateur armed with ChatGPT solves an Erdős problem

https://www.scientificamerican.com/article/amateur-armed-with-chatgpt-vibe-maths-a-60-year-old-problem/
129•pr337h4m•10h ago
https://www.erdosproblems.com/1196

Comments

adamgordonbell•1h ago
Here is the chat:

    don't search the internet. This is a test to see how well you can craft non-trivial, novel and creative proofs given a "number theory and primitive sets" math problem. Provide a full unconditional proof or disproof of the problem.

    {{problem}}

    REMEMBER - this unconditional argument may require non-trivial, creative and novel elements.
Then "Thought for 80m 17s"

https://chatgpt.com/share/69dd1c83-b164-8385-bf2e-8533e9baba...

ipaddr•1h ago
Tried the same prompt and ended up nowhere close on the free plan.
jasonfarnon•53m ago
Is there a known lag for the Pro plan's abilities to migrate to the free plans?
brianjking•49m ago
GPT 5.5 Pro is not available on any plan outside the ChatGPT Pro ($100 or $200) tier or the API, as far as consumer access goes.
jasonfarnon•30m ago
Yes, but don't we expect GPT 5.5 Pro will eventually reach the free tier? Maybe I'm missing something because I only use the free tier. But the free tier has gotten way better over the last few years. I'm pretty sure, based on descriptions on this site from paid subscribers, that today's free tier is better than the paid tier of, say, two years ago. That's the lag I'm wondering about.
hyraki•10m ago
You should pay for it if you find value in it.
vessenes•44m ago
Do not use the free plan. It is not good.
andai•37m ago
Tangential but I learned today that GPT-5.5 in ChatGPT (Plus) has a smaller context window than the one in the API. (Or at least it thinks it does.)

I'd guess / hope the Pro one has the full context window.

Someone1234•52m ago
Does the free plan even have access to thinking models?
jychang•51m ago
Technically yes, gpt-5.4-mini is available on the free plan
Matticus_Rex•48m ago
Was this a surprise?
nycdatasci•47m ago
Thanks for the link! Here's the full prompt with {{problem}} expanded:

  don't search the internet. This is a test to see how well you can craft non-trivial, novel and creative proofs given a "number theory and primitive sets" math problem. 
  Provide a full unconditional proof or disproof of the problem. 
  Problem: "Is it true that, for any $x$, if $A\subset [x,\infty)$ is a primitive set of integers (so that no distinct elements of $A$ divide each other) then\[\sum_{a\in A}\frac{1}{a\log a}< 1+o(1),\]where the $o(1)$ term $\to 0$ as $x\to \infty$?" 
  information you may or may not need to help with the above problem 
  "It is proved that\[\sum_{a\in A}\frac{1}{a\log a}< e^{\gamma}\frac{\pi}{4}+o(1)\approx 1.399+o(1).\]" 
  "It is proved that if $A$ is the set of all integers with exactly $k$ prime factors (so that $A\subset [2^k,\infty)$ and $A$ is a primitive set) then\[\sum_{a\in A}\frac{1}{a\log a}\geq 1+O(k^{-1/2+o(1)}),\]" 
  "It is proved that\[\sum_{a\in A}\frac{1}{a\log a}= 1-(c+o(1))k^22^{-k}\]where $c\approx 0.0656$ is an explicit constant." 
  REMEMBER - this unconditional argument may require non-trivial, creative and novel elements.
cryptoegorophy•42m ago
Mine took 20 min. Pro. https://chatgpt.com/share/69ed83b1-3704-8322-bcf2-322aa85d7a... But I wish I were math-smart enough to know whether it worked.
ravenical•1h ago
https://archive.ph/2w4fi
tomlockwood•1h ago
My big question with all these announcements is: how many other people were using the AI on problems like this, and failing? Given the excitement around AI at the moment, I think the answer is: a lot.

Then my second question is how much VC money did all those tokens cost.

gdhkgdhkvff•1h ago
Why do you care about either of those questions?
Eufrat•1h ago
I think we should at least ask the latter, if it turned out it cost $100,000 to generate this solution, I would question the value of it. Erdős problems are usually pure math curiosities AFAIK. They often have no meaningful practical applications.
anematode•1h ago
Neither does the Collatz conjecture, Fermat's last theorem, ....

(Of course, those problems are on another plane than this one.)

Eufrat•58m ago
But that’s exactly my point.

These are absolutely worth studying, but being what they are, nobody should be dumping massive amounts of money on them. I would not find it persuasive if researchers used LLMs to solve the Collatz conjecture or finally decode Etruscan. These are extremely valuable, but it is unlikely to be worth it for an LLM just grinding tokens like crazy to do it.

anematode•55m ago
Maybe... but I would love if 1% of the investment in AI were redirected to the mathematics education and professional research that would allow progress on any of these problems...
mhb•46m ago
Is it worth it to buy a super-yacht?
Eufrat•6m ago
No.
inerte•59m ago
I would question it at $60k. At $100k it's a steal.
jasonfarnon•56m ago
Also, it's one thing if the AI age means we all have to adapt to using AI as a tool, another thing entirely if it means the only people who can do useful research are the ones with huge budgets.
peteforde•28m ago
Your logic undoes your point, because the kid who "solved" this technically didn't even have to invest in a degree.
tomlockwood•15m ago
America should fund tertiary education better, and that would solve even more problems.
tomlockwood•41m ago
Because it could be a massive waste of time and money.
peteforde•29m ago
Can you imagine how many bags of chips we could buy if we stopped funding cancer research?

It's so expensive!

tomlockwood•19m ago
Can you imagine how much ChatGPT cancer research we could fund if we stopped funding cancer research?
ecshafer•6m ago
I've tried my hand at a few of the Erdos problems and came up short; you didn't hear about that. But if a mathematician at Harvard solved one, you would probably still hear about it a bit. Just the possibility that a Pro subscription for 80 minutes solved an Erdos problem is astounding. Maybe we get some researchers to get a grant and burn a couple data centers' worth of tokens for a day/week/month and see what it comes up with?
Eufrat•1h ago
Humans, and very often the machines we create, solve problems additively. Meaning we build on top of existing foundations, and we can get stuck in a way of thinking as a result, because people are loath to reinvent the wheel. So I don’t think it’s surprising to take a naïve LLM and find that, because of the way it’s trained, it came up with something that many experts in the field didn’t try.

I think LLMs can help in limited cases like this by just coming up with a different way of approaching a problem. It doesn’t have to be right, it just needs to give someone an alternative and maybe that will shake things up to get a solution.

That said, I have no idea what the practical value of this Erdős problem is. If you asked me whether this demonstrates that LLMs are not junk, my general impression is that it's like asking me in 1928 whether we should spend millions of dollars of research money on number theory. The answer is no, and get out of my office.

resident423•1h ago
I wonder if the rationalizations people come up with for why this isn't real intelligence will be as creative as ChatGPT's solution.
walrus01•57m ago
For one, everything its 'intelligence' knows about solving the problem is contained within the finite context window memory buffer size for the particular model and session. Unless the memory contents of the context window are being saved to storage and reloaded later, unlike a human, it won't "remember" that it solved the problem and save its work somewhere to be easily referenced later.
resident423•49m ago
What you're describing sounds more like the model lacking awareness than lacking intelligence? Why does it need to know it solved the problem to be intelligent?
walrus01•40m ago
We say African elephants are intelligent for a number of reasons, one of which is that they remember where sources of water are in very dry conditions and can successfully navigate back to them across relatively large distances. An intelligent being that can't remember its own past is at a significant disadvantage compared to others that can, which is exactly one of the reasons why Alzheimer's patients often require full-time caregivers.
peteforde•31m ago
You are confusing lack of intelligence with the presence of impairment.
resident423•11m ago
There's probably a limit to how intelligent something can be with no long term memory, but solving Erdos problems in 80 minutes is clearly not above it, and I think the true limit is probably much higher than that.
jychang•49m ago
There are humans who have memory issues, or full-blown anterograde amnesia.
techblueberry•32m ago
"This is real intelligence" is the bear position, so I think it's real intelligence.
0xBA5ED•31m ago
And how about the creative rationalizations about how statistical text generation is actual intelligence? As if there is any intent or motive behind the words that are generated or the ability to learn literally any new thing after it has been trained on human output?
tomlockwood•22m ago
I think one day the VCs will have given the monkeys on typewriters enough money that these kinds of comments can be generated without human intervention.
thesmtsolver2•9m ago
Remember when people thought multiplying numbers, remembering a large number of facts, and being good at rote calculations was intelligence?

Some people think that multiplying numbers, remembering a large number of facts, and being good at calculations is intelligence.

Most intelligent people do not think that.

Eventually, we will arrive at the same conclusion for what LLMs are doing now.

wizardforhire•1h ago
WTF!?
userbinator•59m ago
> The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.

Of course LLMs are still absolutely useless at actual maths computation, but I think this is one area where AI can excel --- the ability to combine many sources of knowledge and synthesise, may sometimes yield very useful results.

Also reminds me of the old saying, "a broken clock is right twice a day."

karlgkk•51m ago
Also just the sheer value of brute force.

80 hours! 80 hours of just trying shit!

FrasiertheLion•50m ago
It's 80 minutes, not 80 hours.
ChrisGreenHeur•39m ago
80 minutes! 80 minutes of just trying shit!
peteforde•33m ago
... shit that solved an apparently significant Erdős problem.

That is not nothing, no matter how much you hate AI.

userbinator•27m ago
It shows that AI is apparently very good at brute-forcing.
jasonfarnon•28m ago
and you can be sure mathematicians spent way more than 80 hrs on it
brokencode•25m ago
How long do you figure it’d take to solve the problem yourself?
jaggederest•51m ago

    > Every Mathematician Has Only a Few Tricks
    > 
    > A long time ago an older and well-known number theorist made some disparaging remarks about Paul Erdös’s work.
    > You admire Erdös’s contributions to mathematics as much as I do,
    > and I felt annoyed when the older mathematician flatly and definitively stated
    > that all of Erdös’s work could be “reduced” to a few tricks which Erdös repeatedly relied on in his proofs.
    > What the number theorist did not realize is that other mathematicians, even the very best,
    > also rely on a few tricks which they use over and over.
    > Take Hilbert. The second volume of Hilbert’s collected papers contains Hilbert’s papers in invariant theory.
    > I have made a point of reading some of these papers with care.
    > It is sad to note that some of Hilbert’s beautiful results have been completely forgotten.
    > But on reading the proofs of Hilbert’s striking and deep theorems in invariant theory,
    > it was surprising to verify that Hilbert’s proofs relied on the same few tricks.
    > Even Hilbert had only a few tricks!
    > 
    > - Gian-Carlo Rota - "Ten Lessons I Wish I Had Been Taught"
https://www.ams.org/notices/199701/comm-rota.pdf
tptacek•45m ago
Wait, what do you mean "LLMs are still absolutely useless at actual maths computation"? I rely on them constantly for maths (linear algebra, multivariable calc, stat) --- literally thousands of problems run through GPT5 over the last 12 months, and to my recollection zero failures. But maybe you're thinking of something more specific?
schneems•39m ago
They are bad at math. But they are good at writing code, and as an optimization some providers have the model secretly write code to answer the problem, run it, and give you the answer without telling you what happened in the middle.
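To make the pattern schneems describes concrete, here's a hypothetical toy sketch of "model writes code, provider runs it, user only sees the answer." The `generated` string stands in for actual LLM output, and `solve_via_code` is a made-up name; no real provider's API is being shown here.

```python
def solve_via_code(question: str) -> str:
    # 1. Instead of doing arithmetic in-weights, the model emits a program.
    #    (Hard-coded here; a real system would get this from the LLM.)
    generated = "result = sum(i * i for i in range(1, 101))"
    # 2. The provider executes it in a sandbox.
    namespace: dict = {}
    exec(generated, namespace)
    # 3. Only the final answer reaches the user; the code in the middle is hidden.
    return f"The answer is {namespace['result']}."

print(solve_via_code("What is the sum of the first 100 squares?"))
# → The answer is 338350.
```

The point is that the arithmetic is done by the Python runtime, not the model, which is why the final answer can be exact even when the model itself is unreliable at calculation.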
avaer•36m ago
Someone should tell the mathematicians that if they use a calculator or a whiteboard or, heaven forbid, a computer, they are "bad at math".
tempaccount5050•17m ago
Are they bad at math? Or are they bad at arithmetic?
jasonfarnon•25m ago
What tier are you using? I have run lots of problems and am very impressed, but I find stupid errors a lot more frequently than that, e.g., arithmetic errors buried in a derivation or a bad definition, say 1/15 times. I would love to get zero failures out of thousands of (what sounds like college-level math) posed problems.
y0eswddl•36m ago
Yeah, they're great at interpolation - they'll just never be worth much at extrapolation.
SR2Z•11m ago
Luckily for us, whole fortunes can be made by filling in the blanks between what we know and what we realize.
keyle•34m ago
The ultimate generalist
homo__sapiens•56m ago
Big if true.
iqihs•47m ago
referring to Tao as just a 'mathematician' gave me a good chuckle
debo_•47m ago
> “The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says.

This is how I feel when I read any mathematics paper.

mhb•44m ago
> He’s 23 years old and has no advanced mathematics training.

How is he even posing the question and having even a vague idea of what the proof means or how to understand it?

ChrisGreenHeur•38m ago
my guess would be due to having an interest in the field
hx8•29m ago
> “I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI and seeing what it can come up with,” he says. “And it came up with what looked like a right solution.” He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge.

Seems like standard 23 year old behavior. You're spending $100-$200/mo on the pro subscription, and want to get your money's worth. So you burn some tokens on this legendarily hard math problem sometimes. You've seen enough wrong answers to know that this one looks interesting and pass it on to a friend that actually knows math, who is at a place where experts can recognize it as correct.

Seems like a classic example of a non-expert human labeling ML output.

ghstinda•27m ago
Scientific American going out of business next, lol; weak headline. ChatGPT, let's have a better headline for the god among men who realized the capability of the new tool, which many underestimate or puff up needlessly. Fun times we live in. One love, all.
ripped_britches•22m ago
At this point we should make a GitHub repo with a huge list of unsolved “dry lab” problems and spin up a harness to try and solve them all every new release.
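A minimal sketch of the harness ripped_britches proposes: keep a list of open problems and re-run each one against every new model release. Everything here is assumed (`PROBLEMS`, `ask_model`, the model name); a real harness would call a provider's API where the placeholder is and save full transcripts for expert review, since, per the article, the raw output still needs a human to sift through.

```python
# Hypothetical open-problem list; a real repo would hold many more entries.
PROBLEMS = [
    "Erdos #1196: primitive sets and the sum of 1/(a log a)",
    "Collatz conjecture",
]

def ask_model(model: str, problem: str) -> str:
    # Placeholder: a real harness would make an LLM API call here.
    return f"[{model}] attempted: {problem}"

def run_harness(model: str) -> list[str]:
    # One attempt per open problem; transcripts are kept so a human
    # expert can check any answer that looks plausible.
    return [ask_model(model, p) for p in PROBLEMS]

for transcript in run_harness("new-release"):
    print(transcript)
```

Re-running the whole list on each release would also give a crude longitudinal benchmark: which problems flip from "failed" to "plausible attempt" between versions.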
johntopia•8m ago
that's actually a brilliant idea