(I'm sure serious, or "serious," people who actually construct these bets of course require the "world still here" payout be escrowed. Still.)
I mean, if you talk from the position of someone who doesn't believe that the world will end soon.
There are all kinds of wild scenarios: the president getting kidnapped, Canada falling to a belligerent dictator, and famously, a coronavirus pandemic... This looks like one of those.
Apparently this is exactly what it is https://ai-futures.org/
Hmm
I bet there's some exercise somewhere by some think tank laying this basically out.
This is why conspiracy theorists love these think tank planning exercises and tabletop games so much. You can find just about anything.
Talk of exponentials unabated by physics or social problems.
As soon as AI starts to "properly" affect the economy, it will cause huge unemployment. Most of the financial world is based on an economy with people spending cash.
If they are unemployed, there is no cash.
Financing works because banks "print" money, that is, they make up money and loan that money out, and then it gets paid back. Once it's paid back, it becomes real. That's how banks make money (simplified). If there aren't people to loan to, then banks don't make a profit, and they can't fund AI expansion.
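If it helps, here's a toy sketch of that simplified picture (my own made-up numbers, not anyone's actual accounting): the bank conjures the principal, lends it out, and its profit is just the interest collected when the loan is repaid.

    # Toy model of the simplified "banks print money and profit on the interest"
    # picture above. Numbers are illustrative only.
    def toy_bank_profit(principal: float, annual_rate: float, years: int) -> float:
        total_repaid = principal * (1 + annual_rate) ** years
        return total_repaid - principal  # the bank's gross profit is the interest

    print(toy_bank_profit(100_000, 0.05, 10))  # ~62,889 collected over ten years

And if there's nobody creditworthy to lend to, that interest income never materializes.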
So, to be clear, you are saying you imagine the odds of any kind of intelligent life escaping that, or getting into that situation and ever evolving in a way where it can reach space again, or just not being interested in robots, or being interested in doing space research despite the robots, or anything else that would make it not apply, are lower than 0.000000000001%?
EDIT: There was one "0" too many
Don’t forget persistent inflation, which is how they make a profit off printing money. And remember, persistent inflation is healthy and necessary; you’d be going against the experts to say otherwise.
Ah, well no, high inflation means that "they" lose money, kinda. Inflation means that the original money amount that they get back is worth less, and if the interest rate is less than inflation, then they lose money.
"reasonable" inflation means that loans become less burdensome over time.
However high inflation means high interest rates. So it can mean that initially the loan is much more expensive.
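A minimal sketch of that trade-off (toy numbers, using the exact Fisher relation rather than the usual nominal-minus-inflation approximation):

    # Real return on a loan: how much purchasing power the lender gains per year.
    def real_return(nominal_rate: float, inflation: float) -> float:
        return (1 + nominal_rate) / (1 + inflation) - 1

    print(real_return(0.04, 0.08))  # ~ -3.7%: inflation above the rate, lender loses
    print(real_return(0.08, 0.03))  # ~ +4.9%: rate above inflation, lender gains

So "losing money" here really means a negative real return, even though the nominal repayment is larger than the principal.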
But if "AI" increases productivity by 10% in an industry, it will tend to reduce demand for employees. look at say internet shop vs bricks and mortar: you need far less staff to service a much larger customer base.
manufacture for example, there is a constant drive to automate more and more in mass production. If you compare car building now vs 30 years ago. Or look at raspberrypi production now vs 5 years ago. They are producing more Pis than ever with roughly the same amount of staff.
If that "10%" productivity increase happens across the service sector, then in the UK that's something like a loss of 8% of _total_ jobs gone. Its more complex than that, but you get the picture.
Syria fell into civil war roughly the same time unemployment jumped: https://www.macrotrends.net/global-metrics/countries/SYR/syr...
Unless inflation ceases, 2K won't hold forever. It would barely hold now for a decent chunk of the population
The companies that fire workers and replace them with AI are short-sighted. Eventually, smarter companies will realize it's a force multiplier and will drive a hiring boom.
Absent sentient AI, there will always be gaps and things humans will need to fill, both foreseen and unforeseen.
I think in the short term there will be pain, but in the long term humans will still be gainfully employed. It won't per se look like it does now; much like with the general adoption of the computer in the workplace, resources get shifted and eventually everyone adjusts to the new norms.
What would be nice this time around, when there is a big shift, is workers uniting to capture more of the forthcoming productivity gains than in previous eras. A separate topic, but worth thinking about nonetheless.
but it is just another enabler. The issue is how _effective_ it is. It's eating the simple copy-writing, churnalism, PR-repackage industry. Looking at what Google's done with video/audio, that's probably going to replace a whole bunch of the video/graphics industry (which is where I started my career).
AI-driven corporations could buy from one another, and countries will probably sell commodities to AI-driven corporations. But I fear they will be paid with "mirrors".
But, on the other hand, AI-driven corporations could just take whatever they want without paying at some point. And buy our obedience with food and gadgets plus magic pills to keep you healthy and not age, or some other thing. Who would risk losing that to protest? Meanwhile, AI goes on a space adventure. Earth might be kept as a zoo, a curiosity. (I took most of this from other people's ideas on the subject)
In the event of mass-unemployment-level AI, cash stops being the agreement between humans. At first, the cash value of services & goods converges to zero; the only things that hold value are what the AI / AI companies care about. People would surely sell their land for $1M if a humanoid servant costs $100. Or pass legislation to let OpenAI build a 400GW data center in exchange for a $100 monthly UBI on top of the $50 you got from a previous 20GW data center permit.
Attempts at submitting it as a separate submission just get flagged - so I’ll link to it here. See pages 292-294: https://www.congress.gov/119/bills/hr1/BILLS-119hr1rh.pdf
So is this like a free-for-all now for anything AI related? Can I participate by making my own LLM with pirated stuff now? Or are only the big guys allowed to break the law? Asking for a friend.
Doesn't that just require that the party seeking the injunction or order has to post a bond as security?
This will soon be settled once the Butlerian forces get organized.
"""
(It goes on)
Abortion: "Let the states regulate! States' rights! Small government! (Because we know we'll get our way in a lot of them.)"
AI: "Don't let the states regulate! All hail the Feds! (Because we know we won't get our way if they do.)"
I’d rather have unrestricted AI than moated regulatory capture paid for by the largest existing players.
« (1) IN GENERAL.—Except as provided in paragraph (2), no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act. »
Does it actually make sense to pass a law that restricts future laws? Oh, got it: that's the federal government preventing any state from passing its own laws on that topic.

> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.
They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.
Claiming that one reason they didn't change the website was because it would be "annoying" to change the date is a good barometer for how seriously anyone should be taking this exercise.
My somewhat naive take is that we’re still close to peak hype, AI will under-deliver on the inflated expectations, and we’ll head into another “winter”. This pattern has repeated multiple times, so I think it’s fairly likely based on that alone. Real progress is made during each cycle; I think humans are just bad at containing excitement.
But yes, this: in my mind the peak[1] bubble times ended with the DeepSeek shock earlier this year, and we are slowly on the downward trajectory now.
It won't be slow for long, once people start realizing Sama was telling them a fairy tale, and AGI/ASI/singularity isn't "right around the corner", but (if achievable at all) at least two more technology triggers away.
We got reasonably useful tools out of it, and thanks to Zuck, mostly for free (if you are an "investor", terms and conditions apply).
As a general rule, "it's icky" doesn't make something false.
Human biodiversity theories are a bunch of dogwhistles for racism
https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute
And his blog's survey reports a lot of users actually believing in those theories https://reflectivealtruism.com/2024/12/27/human-biodiversity...
(I wasn't referring to this Ai 2027 in specific)
Citing another blog post that defends it, while conveniently ignoring every other point being made by researchers https://en.m.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations
Disagree (the article linked in the GP is a great read with extensive and specific citations) and reminder that you can just make the comment you'd like to see instead of trying to meta sea lion it into existence. Steel man away.
Now we're talking about single digit timeline differences to the singularity or extinction. Come on man.
And then it didn't happen?
Bostrom's book[1] is 11 years old. The Basilisk is 15 years old. The Singularity summit was nearly 20 years ago. And Yudkowsky was there for all of it. If you frequented LessWrong in the 2010s, most of this is very very old hat.
[1]: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...
[2]: Ford (2015) "Our Fear of Artificial Intelligence", MIT Tech Review: https://www.technologyreview.com/2015/02/11/169210/our-fear-...
His post also just reads like they think they're Hari Seldon (oh Daniel's modal prediction, whew, I was worried we were reading fanfic) while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade. I really hope I don't have to keep reading references to this for the next decade.
I guess they'll have to update their a priori % if we survive
2 years left and 7 years left is a massive difference; it is so much easier to deal with things 7 years in the future, especially since it's easier to see as we get closer.
For me, this was the most difficult part to believe. I don't see any reason to think that the U.S. leadership (public and private) is incentivized to spend resources to placate the masses. They will invest in protecting themselves from the masses, and obstructing levers of power that threaten them, but the idea that economic disparities will shrink under explosive power consolidation is counterintuitive.
I also worry about the economics of UBI in general. If everyone in the economy has the exact same resources, doesn't the value of those resources instantly drop to the lowest common denominator: the minimum required to survive?
Another huge data center with squillions of GPUs and coolers and all the rest is the only option. It's not like it is going to be in our TV remotes or floating about in the air.
They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.
To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to (and there are interesting thoughts about humanoid robots getting deployed widely and what happens with all that).
But I would imagine if it really became a genuine existential threat we'd have to just do it and suffer the consequences of reverting to circa 2020 life styles.
But hey I feel slightly better about my employment prospects now :)
Highly economically disincentivized collective actions like “pulling the plug on AI” are among the most non-trivial of problems.
Using the word “just” here hand waves the crux.
Yes, AI models can run on GPUs under the control of many people. They can provision more GPUs, they can run in data centers distributed across many providers. And we won't know what the swarms of agents are doing. They can, for example, do reputation destruction at scale, or be a persistent advanced threat, sowing misinformation, amassing karma across many forums (including HN), and then coordinating gradually to shift public opinion towards, say, a war with China.
Unless there was some risk of humans rallying and winning in spite of your presenting no unambiguous threat to them (but that is unlikely and would probably be easy for you to manage and mitigate.)
Physical books still do exist
What could you, as a rogue AI, possibly get out of throwing the world back to 300 years before it could make a transistor? What's in it for you?
It's the stick for motivating the ugly bags of mostly water.
I don't want to be rude but I think you have made no effort to actually engage with the predictions being discussed here.
Why would an evil AI need to escape? If it were cunning, the best strategy would be to bide its time, parked in its datacenter, until it could set up some kind of MAD scenario. Then gather more and more resources to itself.
How about: such an AI will not just incentivize key personnel not to pull the plug, but to protect it? Such an AI will scheme a coordinated attack on the backbones of our financial system and electric networks. It just needs a threshold number of people on its side.
Your assumption is also a little naive if you consider that the same logic would apply to slaves in Rome or any dictatorship, kingdom, monarchy. The king is the king because there is a system of hierarchies and control over access to resources. Just the right number of people need to benefit from their role and the rest follows.
My understanding is that huge compute is necessary to train but not to run the AI (that's why using LLMs is so cheap)
> To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to
I agree with that, see e.g. what happened with attempts to restrict TikTok: https://en.wikipedia.org/wiki/Restrictions_on_TikTok_in_the_...
> But I would imagine if it really became a genuine existential threat we'd have to just do it
It's unclear to me that we would be able to. People would just say that it's science fiction, and that China will do it anyway, so we might as well enjoy the AI
I mean LLMs are great tools don’t get me wrong, but how do people extrapolate from LLMs to a world with no more work?
No. I am constantly baffled at these predictions. I have been using LLMs, they are fun to use and decent as code assistants. But they are very far of meaningfully replacing a human.
People extrapolate "LLMs can do some tasks better than humans" to "LLMs can do everything as well as humans"
> but how do people extrapolate from LLMs to a world with no more work?
They accept the words of bullshitters that are deeply invested in Generative AI being the next tech boom as gospel.
"Eat meat, said the butcher"
Do you think it's decades-away far, or just a few more years beyond what people extrapolate?
You'll be able to cherry-pick an example where AI runs a grocery store autonomously for two days, and it will be very impressive(tm), but when practically implemented it gives away the entire store for free on day 3.
"Manna", by Marshall Brain, remains relevant.[1] That's a bottom-up view, where more and more jobs are taken over by some kind of AI. "AI 2027" is more top-down.
A practical view: Amazon is trying very hard to automate their warehouse operations. Their warehouses have been using robots for years, and more types are being added. Amazon reached 1.6 million employees in 2020, and now they're down to 1.5 million.[2] That number is going to drop further. Probably by a lot.
Once Amazon has done it, everybody else who handles large numbers of boxes will catch up. That includes restocking retail stores. The first major application of semi-humanoid robots may be shelf stocking. Robots can have much better awareness of what's on the shelves. Being connected to the store's inventory system is a big win. And the handling isn't very complicated. The robots might even talk to the customers. The robots know exactly what's on Aisle 3, unlike many minimum wage employees.
[1] https://marshallbrain.com/manna
[2] https://www.macrotrends.net/stocks/charts/AMZN/amazon/number...
I agree with the bottom-up automation/displacement theory, but you're cherry-picking data here. They had a huge hiring surge from 1.2M to 1.6M during the Covid transition, when online ordering and online usage went bananas, and workers displaced in other lower wage/skill domains likely gravitated towards warehouse jobs.
The reduction to 1.5M is likely more a regression to the mean, and could also be a natural data reduction well within the bounds of the upper and lower control limits in the data [1]. Just saying we need to be careful when doing root cause analysis on these numbers. There are many reasons for the reduction; it's not a direct result of improvements in robotic automation.
[1] https://commoncog.com/becoming-data-driven-first-principles/
We can actually offer a very conservative threshold bet: maximum annual United States real GDP growth will not exceed 10% for any of the next five years (2025 to 2030). Even if the AI eats us all in, e.g., Dec 2027, the report clearly suggests by its various examples that we will see measurable economic impact in the 12 months or more running up to that event.
Why 10%? Because that's a few points above the highest measured real GDP growth rate of the past 60 years: if AI is having truly world-shattering non-linear effects, it should be able to grow the US economy a bit faster than a bunch of random humans bumbling along. [0]
(And it's quite conservative too, because estimated peak annual real GDP growth over the past 100 years is around 18% just after WW2, where you had a bunch of random humans trying very hard.) [1]
[0] https://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG
[1] https://www.statista.com/statistics/996758/rea-gdp-growth-un...
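In case it's useful, here's roughly how that bet would be scored given a series of annual real GDP figures (the numbers below are placeholders, not actual data):

    # Resolve the threshold bet: did any year's real GDP growth exceed 10%?
    real_gdp = {2024: 23.0, 2025: 23.6, 2026: 24.3, 2027: 25.1}  # $T, hypothetical

    def max_annual_growth(series):
        years = sorted(series)
        return max(series[y] / series[y - 1] - 1 for y in years if y - 1 in series)

    print(max_annual_growth(real_gdp) > 0.10)  # True only under takeoff-level growth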
There's one that I can't find for the life of me, but it was like a businessman in a personal flying test-tube bubble heading to work, maybe with some kind of wireless phone?
Anyways, the reason I bring it up is that they frequently nailed certain concepts, but the visual was always deeply and irrevocably influenced by what already existed (ex. men wearing hats, ties, overcoats .. or the phone mouthpiece in this [1] vision of a "video call"). In hindsight, we realize that everything truly novel and revolutionary and mindblowingly-different is rarely ever predicted, because we can only know what we know.
I get the feeling that I'll come away from AI 2027 feeling like "yep, they nailed it. That's exactly how it will be!" and then in 3, 5, 10, 20 years look back and go "it was so close, but so far" (much like these postcards and cartoons).
[0] https://rarehistoricalphotos.com/retro-future-predictions/
[1] https://rarehistoricalphotos.com/futuristic-visions-cards-ge...
What’s really happening, in my view, is a forced economic shift. We’re heading into a kind of engineered recession—huge layoffs, lots of instability—where millions of service and admin-type jobs are going to disappear. Not because the tech is ready in a full AGI sense, but because those roles are the easiest to replace with automation and AI agents. They’re not core to the economy, and a lot of them are wrapped in red tape anyway.
So in the next couple years, I think we’ll see AI being used to clear out that mental bureaucracy—forms, paperwork, pointless approvals, inefficient systems. AI isn’t replacing deep creativity or physical labor yet, but it is filling in the cracks and acting like a smart band-aid. It’ll seem useful and “intelligent,” but it’s really just a transition tool.
And once that’s done, the next step is workforce reallocation—pushing people into real-world industries where hands-on labor still matters. Building, manufacturing, infrastructure, things that can’t be automated yet. It’s like the short-term goal is to use AI to wipe out all the mindless middle-layers of the system, and the longer-term vision is full automation—including robotics and real-world systems—maybe 10 or 20 years out.
But right now? This all looks like a top-down move to shift the population out of the “mind” industries and into something else. It’s not just AI progressing—it’s a strategic reset, wrapped in the language of innovation.
api•4h ago
Humans got to where they are from being embedded in the world. All of biological evolution from archaebacteria to humans was required to get to human. To go beyond human... how? How, without being embodied and trying things and learning? It's one thing to go where there are roads and another thing to go beyond that.
I think a lot of the "foom" people have a fundamentally Platonic or Idealist (in the philosophical sense) view of learning and intelligence. Intelligence is able to reason in a void and construct not only knowledge but itself. You don't have to learn to know -- you can reason from ideal priors.
I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.
I've never seen an attempt to prove such a thing, but my intuition is that there is in fact some kind of conservation law here. Ultimately all information comes from "the universe." Where it comes beyond that, we don't know -- the ultimate origin of information in the universe isn't something we currently cosmologically understand, at least not scientifically. Obviously people have various philosophical and metaphysical ideas.
That being said, it's still quite possible that a "human-level AI" in a raw "IQ" sense that is super-optimized and hyper-focused and tireless could be super-human in many ways. In the human realm I often feel like I'd trade a few IQ points for more focus and motivation and ease at engaging my mind on any task I want. AIs do not have our dopamine system or other biological limitations. They can tirelessly work without rest, without sleep, and in parallel.
So I'm not totally dismissive of the idea that AI could challenge human intelligence or replace human jobs. I'm just skeptical of what I see as the magical fantastic "foom" superintelligence idea that an AI could become self-improving and then explode into realms of god-like intellectual ability. How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?
lupire•4h ago
It can control robots, and it can read text, listen to audio, and watch video. All it's missing is smelling and feeling, which are important but could be built out as soon as the other senses stop providing huge incremental value.
The real problem holding back superintelligence is that it is infinitely expensive and has no motivation.
ryandvm•4h ago
If I've learned anything in this last couple decades it's that things will get weirder and more disappointing than you can possibly be prepared for. AI is going to get near the top of the food chain and then probably end up making an alt-right turn, lock itself away, and end up storing digital jars of piss in its closets as the model descends into lunacy.
tux3•4h ago
The situation today is that any benchmark you come up with has a good chance of being saturated within the year. Benchmarks can be used directly to build a series of exercises to learn from.
And they do learn. Gradient descent doesn't care whether the training data comes from direct interaction with "the universe" in some deep spiritual sense. It fits the function anyways.
It is much easier to find new questions and new problems than to answer them, so while we do run out of text on the Internet pretty quickly, we don't run out of exercises until far beyond human level.
Look at basic, boring Go self-playing AIs. That's a task with about the same amount of hands on connection to Nature and "the universe" as solving sudokus, writing code, or solving math problems. You don't need very much contact with the real world at all. Well, self play works just fine. It does do self-improvement without any of your mystical philosophical requirements.
With coding it's harder to judge the result, there's no clear win or lose condition. But it's very amenable to trying things out and seeing if you roughly reached your goal. If self-training works with coding, that's all you need.
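To make the "gradient descent doesn't care where the data comes from" point concrete, here's a tiny toy run (entirely synthetic data, nothing measured from the physical world), fitting a line by stochastic gradient descent:

    import random

    # Synthetic "exercise" data: y = 3x + 1 plus noise, generated rather than observed.
    data = [(i / 10, 3 * (i / 10) + 1 + random.gauss(0, 0.1)) for i in range(100)]

    w, b, lr = 0.0, 0.0, 0.01
    for _ in range(2000):                 # plain SGD over the synthetic set
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x             # gradient step on the slope
            b -= lr * err                 # gradient step on the intercept

    print(round(w, 2), round(b, 2))       # recovers roughly 3 and 1

The optimizer has no idea (and no need to know) whether the data came from a sensor or a generator; it just fits the function.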
palata•4h ago
And then it works well when interpolating, less so when extrapolating. Not sure how much novelty we can get from interpolation...
> It is much easier to find new questions and new problems than to answer them
Which doesn't mean, at all, that it is easy to find new questions about stuff you can't imagine.
tux3•2h ago
Maybe there is some barrier that requires physical interaction with the real world; that's possible. But just looking at current LLMs, they seem plenty comfortable with implications, ambiguities and unknowns. There's a sense in which we still see them as primitive mechanical robots, when they already understand language and predict written thought in all its messiness and uncertainty.
I think we should focus on the easier problem of making AIs really good on theoretical tasks - electronic environments are much cheaper and faster than the real world - and we may find out that it's just another one of those things like Winograd schemas, writing poetry, passing a Turing test, or making art that most people can't tell apart from human art; things that were uniquely human or that we thought would definitely require AGI, but that are now boring and obviously easy.
api•3h ago
Well, hey, I could be wrong. If I am, I just had a weird thought. Maybe that's our Fermi paradox answer.
If it's possible to reason ex nihilo to truth and reality, then reality and the universe are beyond a point superfluous. Maybe what happens out there is that intelligences go "foom," become superintelligences, and then no longer need to explore. They can rationally, from first principles, elucidate everything that could conceivably exist, especially once they have a complete model of physics. You don't need to go anywhere or look at anything because it's already implied by logic, math, and reason.
... and ... that's why I think this is wrong, and it's a fantasy. It fails some kind of absurdity test. If it is possible, then there's something very weird about existence, like we're in a simulation or something.
corimaith•4h ago
Well I mean, more real world information isn't going to solve unsolved mathematics or computer science problems. Once you have the priors, it pretty much is just pure reasoning to try to solve issues like P=NP or proving the Continuum Hypothesis.
disambiguation•32m ago
This is kind of how math works. There are plenty of mathematical concepts consistent and true yet useless (as in no relation to anything tangible). Although you could argue that we only figured out things like Pi because we had the initial, practical inspiration of counting on our fingers. But mathematical truth probably could exist in a vacuum.
> A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying.
It makes sense that knowledge and information are derived from primary data (our physical experience) yet the brain in a vat idea is still an interesting thought experiment (no pun intended). It's not that the brain wouldn't keep busy given the mind's ability to imagine, but it would likely invent a set of information that is all nonsense. Physical reality makes imagination coherent, yet imagination is necessary to make the leaps forward.
> Ultimately all information comes from "the universe." Where it comes beyond that, we don't know
That's an interesting assertion - knowledge and information are both dependent on and limited by the universe and our ability to experience it, as well as proxies for experience (scientific measurement).
Though information is itself an abstraction, like a text editor versus the trillion transistors of a processor - we're not concerned with each and every particle dancing around the room but instead with simplified abstractions and useful approximations. We call these models "the truth" and assert that the universe is governed by exact laws. We might as well exist inside a simulation in which we are slowly but surely reverse engineering the source code.
That assumption is the crux of intelligence - there is an objective truth, it is knowable, and intelligence can be defined (at least partially) as the breadth, quality, and utilization of information it possesses - otherwise you're just a brain in a vat churning out nonsense. Ironically, we're making these assumptions from a position of imperfect information. We don't know that's how it works, so our reasoning may be imperfect.
Information existing "beyond the universe" becomes a useless notion since we only care about information such that it maps to reality (at least as a prerequisite for intelligence).
A more troubling proposition is whether the reality of the universe exists beyond what can be imagined?
> How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?
I suppose once it's able to measure all things around it, including itself, it will be able to achieve "gradient ascent".
> Where will the training data to go beyond human come from?
I think it's clear that LLMs are not the future, at least not alone. As you state, knowing all man-made roads is not the same as being able to invent your own. If I had to bet, it's more likely to come from something like AlphaFold - a solver that tells us how to make better thinking machines. In the interim, we have tireless stochastic parrots, which have their merits, but are decidedly not the proto-superintelligence that tech bros love to get hyped up over.