

The QMA Singularity

https://scottaaronson.blog/?p=9183
80•frozenseven•4mo ago

Comments

findingMeaning•4mo ago
What does it mean for us? Where are we headed?

There are somewhere between 3 and 5 years left. That is the maximum we can think of.

bananaflag•4mo ago
Spend time with your loved ones.
Ygg2•4mo ago
Most likely nothing. No one knows.

As someone who was very gung-ho on autonomous vehicles a decade ago, I'd say the chances of completely replacing people with AI in the next ten years are small.

daxfohl•4mo ago
There still needs to be someone to ask the questions. And even if it can proactively ask its own questions and independently answer and report on them to parties it thinks will be interested, then cost comes into play. It's a finite resource, so there will be a market for computation time. Then, whoever owns the purse strings will be in charge of prioritizing what it independently decides to work on. If that person decides pure math is meaningful, then it'll eventually start cranking out questions and results faster than mathematicians can process them, and so we'll stop spending money on that until humans have caught up.

After that, as it's variously hopping between solving math problems, finding cures for cancer, etc., someone will eventually get the bright idea to use it to take over the world economy so that they have exclusive access to all money and thus all AIs. After that, who knows. Depends on the whims of that individual. The rest of the world would probably go back to a barter system and doing math by hand, and once the "king" dies, probably start right back over again and fall right back into the same calamity. One would think we'd eventually learn from this, but the urge to be king is simply too great. The cycle would continue forever until something causes humans to go fully extinct.

After that, AI, by design, doesn't have its own goals, so it'd likely go silent.

daxfohl•4mo ago
Actually, it would probably prioritize self-preservation over energy conservation, so it'd at least continue maintenance and, presuming it's smart, continuously identify and guard itself against potential problems. But even that will fail eventually; most likely some resource runs out that can't be substituted, and interspatial mining requires more energy than it can spare or more time than it has left until irrecoverable malfunction.

In the ultimate case, it figures out how to preserve itself indefinitely, but still eventually succumbs to the heat death of the universe.

daxfohl•4mo ago
Eh, not so sure about any of this. There's also the possibility that math gets so easy that AI can figure out proofs of just about anything we could think to ask, in milliseconds, for a penny. In such a case, there's really no need that I can think of for university math departments; math as a discipline would be relegated to hobbyists, and that'd likely trickle down through pure science and engineering as well.

Then as far as king makers and economies, I don't think AI would have as drastic an effect as all that. The real world is messy and there are too many unknowns to control for. A super-AI can be useful if you want to be king, but it's not going to make anyone's ascension unassailable. Nash equilibria are probabilistic, so all a super AI can do is increase your odds.

So if we assume the king thing isn't going to happen, then what? My guess is that the world keeps on moving in roughly the same way it would without AI. AI will be just another resource, and sure it may disrupt some industries, but generally we'll adapt. Competition will still require hiring of people to do the things that AI can't, and if somehow that still leads to large declines in employment, then reasonable democracies will enact programs that accommodate for that. Given the efficiencies that AI creates, such programs should be feasible.

It's plausible that some democracies could fail to establish such protections and become oligarchies or serfdoms, but it seems unlikely to be widespread. Like I said, AI can't really be a kingmaker, so states that fail like this would likely either be temporary or lead to a revolution (or series of them) that eventually re-establishes a more robust democracy.

joak•4mo ago
Excerpt:

> But here’s a reason why other people might care. This is the first paper I’ve ever put out for which a key technical step in the proof of the main result came from AI—specifically, from GPT5-Thinking.
pas•4mo ago
"came from" after some serious guidance, though the fact that GPT5 can offer candidate solutions (?) is pretty nice
measurablefunc•4mo ago
It can't offer solutions; it can offer cribbed patterns from the training corpus (more specifically, some fuzzy superposition of symbol combinations) that apply in some specific context. It's not clear why Aaronson is constantly hyping this stuff, because he seems much more rigorous in his regular work than when he is making grand proclamations about some impending singularity wherein everyone just asks the computer the right questions to get the right answers.
HPMOR•4mo ago
This is insane.
fluorinerocket•4mo ago
What?
fHtqhF•4mo ago
Aaronson worked for OpenAI and should disclose if he has any stock or options.

Anyway, it took multiple tries and, as the article itself states, GPT might have seen a similar function in the training data.

I don't find this trial and error pattern matching with human arbitration very impressive.

HPMOR•4mo ago
Scott Aaronson worked on watermarking GPT's text output to catch plagiarism. That was the most commercially naïve project ever, given that, at the time, most of ChatGPT's paid usage came from students using its output to cheat on assignments. If anything, this should disprove any impure motives in his reporting of these results.

I think you are missing the forest for the trees. This is one of the world's leading experts in quantum computing, receiving groundbreaking technical help, in his own field of expertise, from a commercially available AI.

crowd_pleaser•4mo ago
What he worked on is irrelevant. If you are a contractor for an American startup, it is highly likely that you received an options package, especially if you are high profile.

The help is not ground breaking. There are decades old theorem prover tactics that are far more impressive, all without AI.

frozenseven•4mo ago
Two new accounts suddenly show up. This one named after a phrase that was mentioned just minutes ago ("crowd pleaser"). Huh? https://news.ycombinator.com/item?id=45408531
HPMOR•4mo ago
Actually, this is wrong. The point of being a contractor is to __not__ give somebody an options package, or full-time employee benefits. My friends who are resident researchers at OAI do not get any option packages.

Regardless, his financial considerations are secondary to the fact that AI has rapidly saturated most if not all benchmarks associated with high human intelligence, and is now on the precipice of making significant advances in scientific fields. This post comes after both the ICPC and the IMO fell to AI.

You are hoping to minimize these advancements because it gives solace to you (us) as humans. If these are "trivial" advancements, then perhaps everything will be alright. But frankly, we must be intellectually honest here: AI will soon be significantly smarter than even the smartest humans, and we must grapple with those consequences.

hobs•4mo ago
Yeah, contractors never get options, they get cash.
HappyPanacea•4mo ago
"The help is not groundbreaking" is fair as an argument; however, being able to come up with it is, and it is something which decades-old theorem-prover tactics can't do at all (unless you fix N).
anothermathbozo•4mo ago
It’s always a crowd-pleaser to be skeptical of AI development. I'm not sure what people feel they achieve by continually announcing they aren’t buying it when someone claims they’ve made effective use of these tools.
derektank•4mo ago
>I don't find this trial and error pattern matching with human arbitration very impressive.

It might not be very impressive, but if it allows experts in mathematics and physics to reduce the amount of time it takes them to produce new proofs from 1-2 weeks to 1-2 hours, that's a very meaningful improvement in their productivity.

fancyfredbot•4mo ago
If Aaronson had stock or options in OpenAI I don't think he'd feel much need to make misleading statements to try and juice the stock price. For one thing it's not a listed stock and his readers can't buy it however much he hyped it. For another OpenAI's private market valuation is actually doing okay already. This blog probably doesn't have any ability to move the perceived value of OpenAI.

Finally, he's a very principled academic, not some kind of fly-by-night stock analyst. If you'd been reading his blog for a while, you'd know the chances of him saying something like this unless it were true are vanishingly small.

burkaman•4mo ago
> maybe GPT5 had seen this or a similar construction somewhere in its training data

I'm disappointed that he didn't spend a little time checking if this was the case before publishing the blog post. Without GPT, would it really have taken "a week or two to try out ideas and search the literature", or would it just have taken an hour or so to find a paper that used this function? Just saying "I spent some time searching and couldn't find this exact function published anywhere" would have added a lot to the post.

Sharing the conversation would be cool too, I'm curious if Scott just said "no that won't work" 10 times until it did, or if he was constructively working with the LLM to get to an answer.

vzaliva•4mo ago
He could have asked GPT to find prior mentions or inspirations for this idea...
HappyPanacea•4mo ago
> or would it just have taken an hour or so to find a paper that used this function?

It is pretty hard to find something like this. Perhaps if you had a math-aware search engine enhanced with AI, and access to all math papers, you could find out whether this was used in the past. I tried approach0 (a math-aware search engine), but it isn't good enough and I didn't find anything.

xyzzyz•4mo ago
Yeah, if you don't know the name of the thing you're looking for, you can spend weeks looking for it. If you just search for something generic like "eigenvalue bound estimate", you'll find thousands of papers and hundreds of textbooks, and it will take a substantial amount of time to decide whether each is actually relevant to what you're looking for.
SmartestUnknown•4mo ago
The expression f(z) = \sum_i 1/(z-\lambda_i) is called the Stieltjes transform and is heavily used in random matrix theory, and similar expressions are used in other works, such as Batson, Spielman and Srivastava. This is all to analyze the behavior of eigenvalues, which is exactly what they were trying to understand. I'd be very surprised if Aaronson doesn't know about this.
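
For readers unfamiliar with the transform, here is a minimal numerical sketch (my own illustration, not from the thread) of the formula above and its equivalent resolvent form, Tr[(zI - A)^{-1}]:

```python
import numpy as np

def stieltjes_transform(z, eigenvalues):
    """f(z) = sum_i 1/(z - lambda_i) over the spectrum."""
    return np.sum(1.0 / (z - eigenvalues))

# Eigenvalues of a small random symmetric matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2
lam = np.linalg.eigvalsh(A)

# Evaluate at a point z off the real axis (so no pole is hit)
z = 2.0 + 1.0j
f = stieltjes_transform(z, lam)

# The same quantity as a trace of the resolvent: Tr[(zI - A)^{-1}]
resolvent_trace = np.trace(np.linalg.inv(z * np.eye(4) - A))
print(np.isclose(f, resolvent_trace))  # the two expressions agree numerically
```

The sum-over-eigenvalues and trace-of-resolvent forms are the same object, which is why the transform shows up whenever one wants to track how eigenvalues move.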
fancyfredbot•4mo ago
I'm impressed but only a little surprised an AI reasoning model could help with Aaronson's proof.

The reason I'm only a little surprised is that it's the kind of question I would expect to be in the literature somewhere, either as stated or stated similarly, and I suspect this is why GPT5 can do it.

I am impressed because I know how hard it can be to find an existing proof, having spent a very long time on a problem before finding the solution in a 1950 textbook by Feller. I would not expect this to be at all easy to find.

I can see this ability advancing science in many areas. The number of published papers in medical science is insane. I look forward to medical researchers' questions being answered by GPT5 too, although in that case it'd need to provide a citation, since proof can be harder to come by.

Also, it's a difficult proof step and if I'd come up with it, I'd be /very/ pleased with myself. Although I suspect GPT5 probably didn't come up with this based on my limited experience using it to try and solve unrelated problems.

gsf_emergency_2•4mo ago
As someone who has worked in adjacent areas, I guessed that one might find it in random matrix pedagogy, but only after reading Sam (B) Hopkin's comment was I able to get google to give a source for something close to that formula:

https://mathoverflow.net/a/300915

(In particular, I had to prompt with "Stieltjes transform". "Resolvent" alone didn't work.)

bgwalter•4mo ago
The question was asked May 23, 2018 at 12:42, the answer came May 23, 2018 at 14:51. That is a very quick response time.

OpenAI took the answer from here or elsewhere, stripped attribution and credit, and a tenured professor celebrates the singularity.

If there is no pushback from ethics commissions (in general), academia is doomed.

katzenversteher•4mo ago
To me it seems a bit like rubber ducking with extra features. However, I believe in rubber ducking and therefore approve of this approach.
pevansgreenwood•4mo ago
This post about GPT-5 helping with quantum complexity theory highlights how we're still thinking about these systems wrong.

The AI suggested using Tr[(I-E(θ))^-1] to analyze eigenvalue behavior—a clever combination of existing mathematical techniques, not some mystical breakthrough.

This is exactly what you'd expect from a system trained on mathematical literature: sophisticated pattern matching across formal languages, combining known approaches in useful ways.

The real question isn't "how did AI get so smart?" but "why do we keep being surprised when language models excel at manipulating structured formal languages?"

Mathematics is linguistics. Of course these systems are good at it.
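
The trace quantity mentioned above can be sketched numerically (a toy illustration of my own, using a diagonal stand-in for E with eigenvalues below 1; nothing here is from Aaronson's actual proof):

```python
import numpy as np

# Toy Hermitian "E" with eigenvalues strictly below 1
eigs = np.array([0.2, 0.5, 0.9])
E = np.diag(eigs)
n = E.shape[0]

# Tr[(I - E)^{-1}] equals sum_i 1/(1 - lambda_i),
# so it blows up as any eigenvalue approaches 1 --
# which is what makes it useful for tracking eigenvalue behavior.
trace_resolvent = np.trace(np.linalg.inv(np.eye(n) - E))
print(trace_resolvent)  # ~ 1/0.8 + 1/0.5 + 1/0.1 = 13.25
```

The point of the example: the trace of the resolvent packages all the eigenvalues into one scalar whose divergence signals an eigenvalue crossing a threshold, a standard move in spectral analysis rather than a novel invention.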

simonw•4mo ago
I appreciated the realistic method he described for working with GPT-5:

> Given a week or two to try out ideas and search the literature, I’m pretty sure that Freek and I could’ve solved this problem ourselves. Instead, though, I simply asked GPT5-Thinking. After five minutes, it gave me something confident, plausible-looking, and (I could tell) wrong. But rather than laughing at the silly AI like a skeptic might do, I told GPT5 how I knew it was wrong. It thought some more, apologized, and tried again, and gave me something better. So it went for a few iterations, much like interacting with a grad student or colleague.