frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1m ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
2•DesoPK•4m ago•0 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•6m ago•1 comment

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
1•mfiguiere•12m ago•0 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
2•meszmate•14m ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•16m ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•31m ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•36m ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•40m ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
2•gmays•41m ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•42m ago•1 comment

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•47m ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•50m ago•1 comment

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•53m ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•1h ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•1h ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•1h ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
3•geox•1h ago•1 comment

Switzerland's Extraordinary Medieval Library

https://www.bbc.com/travel/article/20260202-inside-switzerlands-extraordinary-medieval-library
2•bookmtn•1h ago•0 comments

A new comet was just discovered. Will it be visible in broad daylight?

https://phys.org/news/2026-02-comet-visible-broad-daylight.html
4•bookmtn•1h ago•0 comments

ESR: Comes the news that Anthropic has vibecoded a C compiler

https://twitter.com/esrtweet/status/2019562859978539342
2•tjr•1h ago•0 comments

Frisco residents divided over H-1B visas, 'Indian takeover' at council meeting

https://www.dallasnews.com/news/politics/2026/02/04/frisco-residents-divided-over-h-1b-visas-indi...
4•alephnerd•1h ago•5 comments

If CNN Covered Star Wars

https://www.youtube.com/watch?v=vArJg_SU4Lc
1•keepamovin•1h ago•1 comment

Show HN: I built the first tool to configure VPSs without commands

https://the-ultimate-tool-for-configuring-vps.wiar8.com/
2•Wiar8•1h ago•3 comments

AI agents from 4 labs predicting the Super Bowl via prediction market

https://agoramarket.ai/
1•kevinswint•1h ago•1 comment

EU bans infinite scroll and autoplay in TikTok case

https://twitter.com/HennaVirkkunen/status/2019730270279356658
6•miohtama•1h ago•5 comments

Benchmarking how well LLMs can play FizzBuzz

https://huggingface.co/spaces/venkatasg/fizzbuzz-bench
1•_venkatasg•1h ago•1 comment

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
28•SerCe•1h ago•22 comments

Octave GTM MCP Server

https://docs.octavehq.com/mcp/overview
1•connor11528•1h ago•0 comments

Show HN: Portview – what's on your ports (diagnostic-first, single binary, Linux)

https://github.com/Mapika/portview
3•Mapika•1h ago•0 comments

The QMA Singularity

https://scottaaronson.blog/?p=9183
80•frozenseven•4mo ago

Comments

findingMeaning•4mo ago
What does it mean for us? Where are we headed?

There are somewhere between 3 and 5 years left. That is the maximum we can think of.

bananaflag•4mo ago
Spend time with your loved ones.

Ygg2•4mo ago
Most likely nothing. No one knows.

As someone who was very gung-ho on autonomous vehicles a decade ago, I'd say the chances of completely replacing people with AI in the next ten years are small.

daxfohl•4mo ago
There still needs to be someone to ask the questions. And even if it can proactively ask its own questions and independently answer and report on them to parties it thinks will be interested, then cost comes into play. It's a finite resource, so there will be a market for computation time. Then, whoever owns the purse strings will be in charge of prioritizing what it independently decides to work on. If that person decides pure math is meaningful, then it'll eventually start cranking out questions and results faster than mathematicians can process them, and so we'll stop spending money on that until humans have caught up.

After that, as it's variously hopping between solving math problems, finding cures for cancer, etc., someone will eventually get the bright idea to use it to take over the world economy so that they have exclusive access to all money and thus all AIs. After that, who knows. Depends on the whims of that individual. The rest of the world would probably go back to a barter system and doing math by hand, and once the "king" dies, probably start right back over again and fall right back into the same calamity. One would think we'd eventually learn from this, but the urge to be king is simply too great. The cycle would continue forever until something causes humans to go fully extinct.

After that, AI, by design, doesn't have its own goals, so it'd likely go silent.

daxfohl•4mo ago
Actually, it would probably prioritize self-preservation over energy conservation, so it'd at least continue maintenance and, presuming it's smart, continuously identify and guard itself against potential problems. But even that will fail eventually; most likely some resource runs out that can't be substituted, and interspatial mining requires more energy than it can figure out how to use, or more time than it has left until irrecoverable malfunction.

In the ultimate case, it figures out how to preserve itself indefinitely, but still eventually succumbs to the heat death of the universe.

daxfohl•4mo ago
Eh, not so sure about any of this. There's also the possibility that math gets so easy that AI can figure out proofs of just about anything we could think to ask, in milliseconds, for a penny. In such a case, there's really no need that I can think of for university math departments; math as a discipline would be relegated to hobbyists, and that'd likely trickle down through pure science and engineering as well.

Then as far as king makers and economies, I don't think AI would have as drastic an effect as all that. The real world is messy and there are too many unknowns to control for. A super-AI can be useful if you want to be king, but it's not going to make anyone's ascension unassailable. Nash equilibria are probabilistic, so all a super AI can do is increase your odds.

So if we assume the king thing isn't going to happen, then what? My guess is that the world keeps on moving in roughly the same way it would without AI. AI will be just another resource, and sure it may disrupt some industries, but generally we'll adapt. Competition will still require hiring of people to do the things that AI can't, and if somehow that still leads to large declines in employment, then reasonable democracies will enact programs that accommodate for that. Given the efficiencies that AI creates, such programs should be feasible.

It's plausible that some democracies could fail to establish such protections and become oligarchies or serfdoms, but it seems unlikely to be widespread. Like I said, AI can't really be a kingmaker, so states that fail like this would likely either be temporary or lead to a revolution (or series of them) that eventually re-establishes a more robust democracy.

joak•4mo ago
Excerpt:

> But here’s a reason why other people might care. This is the first paper I’ve ever put out for which a key technical step in the proof of the main result came from AI—specifically, from GPT5-Thinking.

pas•4mo ago
"came from" after some serious guidance, though the fact that GPT5 can offer candidate solutions (?) is pretty nice
measurablefunc•4mo ago
It can't offer solutions; it can offer cribbed patterns from the training corpus (more specifically, some fuzzy superposition of symbol combinations) that apply in some specific context. It's not clear why Aaronson is constantly hyping this stuff, because he seems much more rigorous in his regular work than when he is making grand proclamations about some impending singularity wherein everyone just asks the computer the right questions to get the right answers.
HPMOR•4mo ago
This is insane.

fluorinerocket•4mo ago
What?

fHtqhF•4mo ago
Aaronson worked for OpenAI and should disclose if he has any stock or options.

Anyway, it took multiple tries and, as the article itself states, GPT might have seen a similar function in the training data.

I don't find this trial and error pattern matching with human arbitration very impressive.

HPMOR•4mo ago
Scott Aaronson worked on watermarking GPT's text output to catch plagiarism. That is the most commercially naïve project imaginable, given that, at the time, most of ChatGPT's paid usage came from students using its output to cheat on assignments. If anything, this should disprove any claim of impure motives in his reporting of these results.

I think you are missing the forest for the trees. This is one of the world's leading experts in quantum computing receiving groundbreaking technical help, in his field of expertise, from a commercially available AI.

crowd_pleaser•4mo ago
What he worked on is irrelevant. If you are a contractor for an American startup, it is highly likely that you received an options package, especially if you are high profile.

The help is not groundbreaking. There are decades-old theorem-prover tactics that are far more impressive, all without AI.

frozenseven•4mo ago
Two new accounts suddenly show up. This one named after a phrase that was mentioned just minutes ago ("crowd pleaser"). Huh? https://news.ycombinator.com/item?id=45408531

HPMOR•4mo ago
Actually, this is wrong. The point of being a contractor is to __not__ give somebody an options package, or full-time employee benefits. My friends who are resident researchers at OAI do not get any option packages.

Regardless, his financial considerations are secondary to the fact that AI has rapidly saturated most if not all benchmarks associated with high human intelligence, and is now on the precipice of making significant advances in scientific fields. This post comes after both the ICPC and the IMO fell to AI.

You are hoping to minimize these advancements because doing so gives solace to you (us) as humans. If these are "trivial" advancements, then perhaps everything will be alright. But frankly, we must be intellectually honest here. AI will soon be significantly smarter than even the smartest humans, and we must grapple with those consequences.

hobs•4mo ago
Yeah, contractors never get options, they get cash.

HappyPanacea•4mo ago
The help is not groundbreaking as an argument; however, being able to come up with it is, and that is something decades-old theorem-prover tactics can't do at all (unless you fix N).

anothermathbozo•4mo ago
It’s always a crowd-pleaser to be skeptical of AI development. I'm not sure what people feel they achieve by continually announcing they aren't buying it when someone claims to have made effective use of these tools.

derektank•4mo ago
>I don't find this trial and error pattern matching with human arbitration very impressive.

It might not be very impressive, but if it allows experts in mathematics and physics to reduce the amount of time it takes them to produce new proofs from 1-2 weeks to 1-2 hours, that's a very meaningful improvement in their productivity.

fancyfredbot•4mo ago
If Aaronson had stock or options in OpenAI I don't think he'd feel much need to make misleading statements to try and juice the stock price. For one thing it's not a listed stock and his readers can't buy it however much he hyped it. For another OpenAI's private market valuation is actually doing okay already. This blog probably doesn't have any ability to move the perceived value of OpenAI.

Finally he's a very principled academic, not some kind of fly by night stock analyst. If you'd been reading his blog a while you'd know the chances of him saying something like this would be vanishingly small, unless it was true.

burkaman•4mo ago
> maybe GPT5 had seen this or a similar construction somewhere in its training data

I'm disappointed that he didn't spend a little time checking if this was the case before publishing the blog post. Without GPT, would it really have taken "a week or two to try out ideas and search the literature", or would it just have taken an hour or so to find a paper that used this function? Just saying "I spent some time searching and couldn't find this exact function published anywhere" would have added a lot to the post.

Sharing the conversation would be cool too, I'm curious if Scott just said "no that won't work" 10 times until it did, or if he was constructively working with the LLM to get to an answer.

vzaliva•4mo ago
He could have asked GPT to find prior mentions or inspirations for this idea...

HappyPanacea•4mo ago
> or would it just have taken an hour or so to find a paper that used this function?

It is pretty hard to find something like this. Perhaps with a math-aware search engine enhanced with AI, and access to all math papers, you could find out whether this was used in the past. I tried approach0 (a math-aware search engine), but it isn't good enough and I didn't find anything.

xyzzyz•4mo ago
Yeah, if you don't know the name of the thing you're looking for, you can spend weeks looking for it. If you just search for something generic like "eigenvalue bound estimate", you'll find thousands of papers and hundreds of textbooks, and it will take a substantial amount of time to decide whether each is actually relevant to what you're looking for.

SmartestUnknown•4mo ago
The expression f(z) = \sum_i 1/(z-\lambda_i) is called the Stieltjes transform and is heavily used in random matrix theory; similar expressions appear in other works, such as Batson, Spielman, and Srivastava. These are all used to analyze the behavior of eigenvalues, which is exactly what they were trying to understand. I'd be very surprised if Aaronson doesn't know about this.
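For concreteness, here is a minimal numerical sketch of that expression (the matrix and evaluation point are illustrative choices, not taken from the paper; random-matrix texts also often normalize the sum by 1/n, whereas the unnormalized form below matches the formula as written above):

```python
import numpy as np

def stieltjes_transform(eigenvalues, z):
    """f(z) = sum_i 1/(z - lambda_i), evaluated at a point z outside the spectrum."""
    return np.sum(1.0 / (z - eigenvalues))

# Eigenvalues of a small symmetric matrix (purely illustrative).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = np.linalg.eigvalsh(A)  # eigenvalues 1.0 and 3.0

# At z = 5: 1/(5 - 1) + 1/(5 - 3) = 0.75
print(stieltjes_transform(lam, 5.0))
```

The transform has poles exactly at the eigenvalues, which is what makes it a convenient handle on spectral behavior.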

fancyfredbot•4mo ago
I'm impressed but only a little surprised an AI reasoning model could help with Aaronson's proof.

The reason I'm only a little surprised is that it's the kind of question I would expect to be in the literature somewhere, either as stated or stated similarly, and I suspect this is why GPT5 can do it.

I am impressed because I know how hard it can be to find an existing proof, having spent a very long time on a problem before finding the solution in a 1950 textbook by Feller. I would not expect this to be at all easy to find.

I can see this ability advancing science in many areas. The number of published papers on medical science is insane. I look forward to medical researchers questions being answered by GPT5 too, although in that case it'd need to provide a citation since proof can be harder to come by.

Also, it's a difficult proof step, and if I'd come up with it, I'd be /very/ pleased with myself. Although I suspect GPT5 probably didn't come up with it on its own, based on my limited experience using it to try to solve unrelated problems.

gsf_emergency_2•4mo ago
As someone who has worked in adjacent areas, I guessed that one might find it in random matrix pedagogy, but only after reading Sam (B) Hopkins's comment was I able to get Google to give a source for something close to that formula:

https://mathoverflow.net/a/300915

(In particular, I had to prompt with "Stieltjes transform". "Resolvent" alone didn't work.)

bgwalter•4mo ago
The question was asked May 23, 2018 at 12:42; the answer came May 23, 2018 at 14:51. That is a very quick response time.

OpenAI took the answer from here or elsewhere, stripped attribution and credit, and now a tenured professor celebrates the singularity.

If there is no pushback from ethics commissions (in general), academia is doomed.

katzenversteher•4mo ago
To me it seems a bit like rubber ducking with extra features. However, I believe in rubber ducking and therefore approve of this approach.

pevansgreenwood•4mo ago
This post about GPT-5 helping with quantum complexity theory highlights how we're still thinking about these systems wrong.

The AI suggested using Tr[(I-E(θ))^-1] to analyze eigenvalue behavior—a clever combination of existing mathematical techniques, not some mystical breakthrough.
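That trace is the same object as the Stieltjes transform mentioned elsewhere in the thread, evaluated at z = 1: for a matrix E with eigenvalues λ_i, Tr[(I-E)^-1] = Σ_i 1/(1-λ_i). A quick numerical check of this identity (E here is an arbitrary symmetric matrix chosen for illustration, not the E(θ) from the post):

```python
import numpy as np

# Arbitrary symmetric E with eigenvalues inside (-1, 1), so I - E is invertible.
E = np.array([[0.5, 0.2],
              [0.2, 0.3]])

lam = np.linalg.eigvalsh(E)

trace_direct = np.trace(np.linalg.inv(np.eye(2) - E))  # Tr[(I - E)^-1]
trace_spectral = np.sum(1.0 / (1.0 - lam))             # sum_i 1/(1 - lambda_i)

print(trace_direct, trace_spectral)  # the two quantities agree
```

As any eigenvalue of E approaches 1 the trace blows up, which is why such a quantity is a natural probe of eigenvalue behavior near a threshold.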

This is exactly what you'd expect from a system trained on mathematical literature: sophisticated pattern matching across formal languages, combining known approaches in useful ways.

The real question isn't "how did AI get so smart?" but "why do we keep being surprised when language models excel at manipulating structured formal languages?"

Mathematics is linguistics. Of course these systems are good at it.

simonw•4mo ago
I appreciated the realistic method he described for working with GPT-5:

> Given a week or two to try out ideas and search the literature, I’m pretty sure that Freek and I could’ve solved this problem ourselves. Instead, though, I simply asked GPT5-Thinking. After five minutes, it gave me something confident, plausible-looking, and (I could tell) wrong. But rather than laughing at the silly AI like a skeptic might do, I told GPT5 how I knew it was wrong. It thought some more, apologized, and tried again, and gave me something better. So it went for a few iterations, much like interacting with a grad student or colleague.