
New Nick Bostrom Paper: Optimal Timing for Superintelligence [pdf]

https://nickbostrom.com/optimal.pdf
28•uejfiweun•1h ago

Comments

timfsu•44m ago
These narratives are so strange to me. It's not at all obvious why the arrival of AGI should lead either to human extinction or to lifespans extended by thousands of years. Still, I like this paper's line of thinking better than the doomer take.
copperx•9m ago
I don't have a clue either. To many, it seems inevitable that AGI will pose an extinction threat, and I'm baffled trying to understand the chain of reasoning they went through to reach that conclusion.

Is it a meme? How did so many people arrive at the same dubious conclusion? Is it a movie trope?

ed•40m ago
This paper argues that if superintelligence can give everyone the health of a 20-year-old, we should accept up to a 97% chance of superintelligence killing everyone in exchange for the remaining 3% chance that the average human lifespan rises to 1,400 years.
paulmooreparks•29m ago
There is no "should" in the relevant section. It's making a mathematical model of the risks and benefits.

> Now consider a choice between never launching superintelligence or launching it immediately, where the latter carries an r% risk of immediate universal death. Developing superintelligence increases our life expectancy if and only if:

> (1 − r/100) × 1,400 years > 40 years

> In other words, under these conservative assumptions, developing superintelligence increases our remaining life expectancy provided that the probability of AI-induced annihilation is below 97%.
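
A quick numeric check of that break-even condition (a sketch: the 1,400-year figure is quoted above, but the ~40-year status-quo remaining life expectancy is inferred from the 97% bound, not copied from the paper):

    # Break-even annihilation risk under the quoted model.
    L_si = 1400  # life expectancy if superintelligence works out (quoted assumption)
    L_0 = 40     # status-quo remaining life expectancy (inferred, not from the paper)
    # Launching beats never launching iff (1 - r) * L_si > L_0:
    r_max = 1 - L_0 / L_si
    print(f"break-even risk: {r_max:.1%}")  # -> break-even risk: 97.1%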

measurablefunc•29m ago
Bostrom is very good at theorycrafting.
wmf•7m ago
That's what the paper says. Whether you would take that deal depends on your level of risk aversion (which the paper gets into later). As a wise man once said, death is so final. If we lose the game we don't get to play again.
jibal•25m ago
The usual bunch of logical fallacies and unexamined assumptions from Bostrom.

Good philosophers focus on asking piercing questions, not on proposing policy.

> Would it not be wildly irresponsible, [Yudkowsky and Soares] ask, to expose our entire species to even a 1-in-10 chance of annihilation?

Yes, if that number is anywhere near reality, of which there is considerable doubt.

> However, sound policy analysis must weigh potential benefits alongside the risks of any emerging technology.

Must it? Or is this a deflection from concern about immense risk?

> One could equally maintain that if nobody builds it, everyone dies.

Everyone is going to die in any case, so this is a red herring that misframes the issues.

> The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.

"might", if one accepts numerous dubious and poorly reasoned arguments. I don't.

> In particular, sufficiently advanced AI could remove or reduce many other risks to our survival, both as individuals and as a civilization.

"could" ... but it won't; certainly not for me as an individual of advanced age, and almost certainly not for "civilization", whatever that means.

> Superintelligence would be able to enormously accelerate advances in biology and medicine—devising cures for all diseases

There are numerous unstated assumptions here ... notably an assumption that all diseases are "curable", whatever exactly that means--the "cure" might require a brain transplant, for instance.

> and developing powerful anti-aging and rejuvenation therapies to restore the weak and sick to full youthful vigor.

Again, this just assumes that such things are feasible, as if an ASI is a genie or a magic wand. Not everything that can be conceived of is technologically possible. It's like saying that with an ASI we could find the largest prime or solve the halting problem.

> These scenarios become realistic and imminent with superintelligence guiding our science.

So he baselessly claims.

Sorry, but this is all apologetics, not an intellectually honest search for truth.

rf15•22m ago
The paper again largely skips the issue that AGI cannot be sold to people: either you're trying to swindle people out of money (all the AI startups), or transactions like that become meaningless because your AI runs the show anyway.
wmf•11m ago
Companies developing AI don't worry about this issue so why should we?
neom•17m ago
"For AGI and superintelligence (we refrain from imposing precise definitions of these terms, as the considerations in this paper don't depend on exactly how the distinction is drawn)" Hmm, is that true? His models actually depend quite heavily on what the AI can do, "can reduce mortality to 20yo levels (yielding ~1,400-year life expectancy), cure all diseases, develop rejuvenation therapies, dramatically raise quality of life, etc. Those assumptions do a huge amount of work in driving the results. If "AGI" meant something much less capable, like systems that are transformatively useful economically but can't solve aging within a relevant timeframe- the whole ides shifts substantially, surly the upside shrinks and the case for tolerating high catastrophe risk weakens?
Ucalegon•5m ago
That is the thing about these conversations: the issue is potentiality. It comes back to Amara's Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” It's the same thing as nuclear energy in the 1950s: people imagined what could be without realizing that those potentials weren't achievable given the limitations of the technology. Failing to face those limitations realistically is what hampers growth, and thus development, in the long term.

Sadly, there is way, way, way too much money in AGI, and the promise of AGI, for people to actually take a step back and understand the implications of what they are doing in the short, medium, or long term.

artninja1988•4m ago
The argument seems to be that any AI capable of eliminating all of humanity would necessarily be intelligent enough to cure all diseases. This appears plausible to me because achieving total human extinction is extraordinarily difficult. Even engineered bioweapons would likely leave some individuals immune by chance, and even a full-scale nuclear exchange would leave survivors in bunkers or remote areas.

Israel accuses two of using military secrets to place Polymarket bets

https://www.npr.org/2026/02/12/nx-s1-5712801/polymarket-bets-traders-israel-military
1•starkparker•4m ago•0 comments

Amtrak offers first look at Airo equipment

https://www.trains.com/pro/passenger/amtrak-offers-first-look-at-airo-equipment/
1•divbzero•7m ago•0 comments

Seedream 5.0 Lite – Deeper Thinking, More Accurate Generation

https://seed.bytedance.com/en/blog/deeper-thinking-more-accurate-generation-introducing-seedream-...
1•BoorishBears•10m ago•0 comments

I Built a Free, Online Heart Rate Monitor – Could You Help Me Improve It?

https://www.heartratetap.com/
1•CloudHu•15m ago•1 comments

RetEx: Midnight Hackathon in Vienna

https://mathiasd.fr/p/midnight/
1•mathiasdpx•20m ago•0 comments

Ring owners are returning their cameras

https://www.msn.com/en-us/lifestyle/shopping/ring-owners-are-returning-their-cameras-here-s-how-m...
3•c420•22m ago•1 comments

The Void

https://github.com/nostalgebraist/the-void/blob/main/the-void.md
1•stickynotememo•23m ago•0 comments

How I Learned to Stop Worrying and Love OpenClaw

https://jpreagan.com/p/how-i-learned-to-stop-worrying-and-love-openclaw
1•jpreagan•24m ago•1 comments

Why Hokkaido Is the New Taiwan

https://twitter.com/james_riney/status/2021721761013018643
2•MrBuddyCasino•28m ago•1 comments

Show HN: Phonchain – A Mobile-Native Blockchain Secured by Smartphones (Pop-S4)

1•PHONCOIN•31m ago•1 comments

Show HN: Busca – the fuzzy ripgrep fast code explorer

https://github.com/rokyed/busca
2•rokyed•33m ago•0 comments

Manage Ralph loops in a DAG pipeline with a Docker-like CLI

https://github.com/mj1618/swarm-cli
1•mj2718•38m ago•1 comments

Who discovered grokking and why is the name hard to find?

1•asmodeuslucifer•40m ago•0 comments

File sharing going viral due to fast, free, no-login friction. Try it now

https://www.styloshare.com
1•stylofront•41m ago•1 comments

The Future of AI Slop Is Constraints

https://askcodi.substack.com/p/the-future-of-ai-slop-is-constraints
1•himalayansailor•42m ago•0 comments

Show HN: Seedance AI Video Generation (Next.js, Drizzle)

https://seedanceai2.org/
1•xuyanmei•47m ago•0 comments

7-Zip 26.00

https://sourceforge.net/p/sevenzip/discussion/45797/thread/a1f7e08417/
1•tokyobreakfast•48m ago•0 comments

First Vibecoded AI Operating System

https://github.com/viralcode/vib-OS
3•amichail•52m ago•0 comments

You're Building Petri Nets. You're Just Building Them Badly

https://joshtuddenham.dev/blog/petri-nets/
1•joshuaisaact•57m ago•0 comments

A recursive and authoritative DNS resolver from scratch in Go

1•Jyotishmoy•1h ago•1 comments

Three Inverse Laws of AI and Robotics

https://susam.net/inverse-laws-of-robotics.html
4•susam•1h ago•0 comments

Quantum Phenomena in Biological Systems (2024)

https://www.frontiersin.org/journals/quantum-science-and-technology/articles/10.3389/frqst.2024.1...
1•rolph•1h ago•0 comments

Ask HN: Would Steve Jobs Get into YC?

1•ipnon•1h ago•1 comments

One-click deploy OpenClaw bot on runclaw.com

https://www.runclaw.com/
1•bear2024•1h ago•1 comments

Why Audio Is the One Area Small Labs Are Winning

https://www.amplifypartners.com/blog-posts/arming-the-rebels-with-gpus-gradium-kyutai-and-audio-ai
3•rocauc•1h ago•0 comments

A nice way to share articles

https://www.justthearticleplease.com/
1•JnthnMyrs•1h ago•1 comments

WinClaw: Windows-native AI assistant with Office automation and skills

https://github.com/itc-ou-shigou/winclaw
1•winclaw-dev•1h ago•1 comments

It's Yours

https://inventingthefuture.ghost.io/its-yours/
1•hellojohnbuck•1h ago•1 comments

A polymerase ribozyme that can synthesize itself

https://www.biorxiv.org/content/10.1101/2024.10.11.617851v1
4•eq_ind•1h ago•0 comments

Become a Gigachad

https://www.gigachadify.com/
2•jespinoza17•1h ago•2 comments