That's not quite the level of disagreement I was expecting given the title.
> I’m not against people making shoddy toy models, and I think they can be a useful intellectual exercise. I’m not against people sketching out hypothetical sci-fi short stories, I’ve done that myself. I am against people treating shoddy toy models as rigorous research, stapling them to hypothetical short stories, and then taking them out on podcast circuits to go viral. What I’m most against is people taking shoddy toy models seriously and basing life decisions on them, as I have seen happen for AI2027. This is just a model for a tiny slice of the possibility space for how AI will go, and in my opinion it is implemented poorly even if you agree with the author's general worldview.
In particular, I wouldn't describe the author's position as "probably not longer than 2032" (give or take the usual quibbles over what tasks are a necessary part of "superhuman intelligence"). Indeed, he rates social issues from AI as a more plausible near-term threat than dangerous AGI takeoff [0], and he is very skeptical about how well any software-based AI can revolutionize the physical sciences [1].
[0] https://titotal.substack.com/p/slopworld-2035-the-dangers-of...
[1] https://titotal.substack.com/p/ai-is-not-taking-over-materia...
it's like asking about the difference between amateur toy audio gear and real pro-level audio gear... (which is not a simple thing given that "prosumer" products dominate the landscape)
the only point in betting on when "real AGI" will happen boils down to the payouts from the gamble. are such gambles a zero-sum game? does that depend on who escrows the bet??
what do I get if I am correct? how should the incorrect lose?
Most of these models predict superhuman coders in the near term, within the next ten years. This is because most of them share the assumptions that a) current trends will continue for the foreseeable future, b) "superhuman coding" is possible to achieve in the near future, and c) the METR time horizons are a reasonable metric for AI progress. I don't agree with all of these assumptions, but I understand why people who do think superhuman coders are coming soon.
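For illustration, here is a minimal sketch of the kind of trend extrapolation these forecasts rest on. The numbers (current horizon, doubling time, "superhuman coder" threshold) are my own placeholder assumptions, not values taken from any particular model:

```python
from math import log2

# Placeholder assumptions, purely illustrative -- not from any published forecast.
current_horizon_hours = 2.0          # task length the best model completes ~50% of the time
doubling_time_months = 7.0           # assumed doubling time for that horizon
target_horizon_hours = 8 * 22 * 12   # ~a year of full-time work, a stand-in for "superhuman coding"

doublings = log2(target_horizon_hours / current_horizon_hours)
months = doublings * doubling_time_months
print(f"{doublings:.1f} doublings -> ~{months / 12:.1f} years, if the trend simply continues")
```

With those made-up inputs you land inside a decade, which is exactly why accepting assumptions a) through c) pushes the forecast into the near term; reject any of them and the projection falls apart.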
Personally I think any model that puts zero weight on the idea that there could be some big stumbling blocks ahead, or even a possible plateau, is not a good model.
Pre-ChatGPT, I very much doubt the bullish predictions on AI would've been made the way they are now.
A human can do a long sequence of easy tasks without error - or can easily correct when they slip. Can a model do the same?
Of course, they gave it a terrible clickbait title and framed the question and graphs incorrectly. But if they had done the study better, the question would have been "How long a sequence of algorithmic steps can LLMs execute before making a mistake or giving up?"
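To make that reframed question concrete, here's a toy calculation (the per-step success rates below are illustrative assumptions, not measurements of any model) showing how even a small per-step error rate compounds over a long sequence:

```python
# Toy model: if each easy step independently succeeds with probability p,
# the chance of completing an n-step sequence with no error is p**n.
# The p values are illustrative assumptions, not benchmarks.
for p in (0.99, 0.999):
    for n in (10, 100, 1000):
        print(f"p={p}, n={n}: P(no error at all) = {p**n:.3f}")
```

A human who notices and fixes slips effectively resets that exponent; a model that compounds its own mistakes does not.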
Making predictions that are too specific just opens you up to pushback from people who are more interested in critiquing the exact details of your softer predictions (such as those around timelines) than your harder predictions about likely outcomes. And while I think articles like this are valuable for refining timeline predictions, I find a lot of people use them as evidence to dismiss the stronger predictions made about the risks of ASI.
I think people like Nick Bostrom make much more convincing arguments about AI risk because they don't depend on overly detailed predictions which can easily be nit-picked, but are instead much more general and focus on the unique nature of the risks AI presents.
For me the problem with timelines is that they're unknowable due to the unpredictable nature of ASI. The fact that we are rapidly developing a technology which most people would accept comes with at least some existential risk, whose progress curve we can't predict, and whose solutions would come with significant coordination problems, should concern people without anyone having to say it will happen in x number of years.
I think AI 2027 is interesting as a science fiction about potential futures we could be heading towards, but that's really it.
The problem with being an AI doomer is that you can't say "I told you so" if you're right, so any personal predictions you make have close to no expected pay-off, either socially or economically. This is different from other risks, where predicting accurately when others don't can still benefit you.
I have no meaningful voice in this space, so I'll just keep saying we're fucked, because what does it matter what I think. But I wish there were more people with influence out there who were seriously thinking about how best to use that influence rather than stroking their own egos with future predictions, which, even if I happen to agree with them, do next to nothing to improve the distribution of outcomes.
(I'm sorry, I know it's a crass question)
I think a lot of people who talk about AI risk underweight the fairly likely scenario that highly capable narrow AIs are leveraged in ways that lead to civilisational collapse. Humans getting to ASI assumes that prior advancements are not destabilising, or that, if they are, the advancements happen quickly enough that it doesn't matter.
That said, I think ASI is more likely than not. And I think ASI within 5-10 years is very likely.
To answer your question at a very high level, I'm pretty depressed, so I'm not bothered at all about dying, which helps with the emotional side of this. I really am not bothered that I think I'll be dead soon – and a quick death is my highest-probability positive outcome for what is coming for me personally.
My primary concern, and the thing that keeps me up at night, is the risk of new forms of torture so inconceivably bad that any positive outcomes from ASI would never be worth the risk. In a few years, if things go "well" and we get ASI, it should be completely feasible to 3D print a torture helmet which simulates the feeling of being burnt to death for 100,000+ years. Assuming that is an experience which is physically possible to have, then signalling the brain to experience it is just an engineering problem – and likely a fairly trivial one for an ASI. Again, death should be seen as a very positive outcome. ASI will create hell in the literal sense. The question here really isn't whether ASI could create hells, or whether someone out there might be messed up enough to send someone to hell, but whether humanity will create ASI. If you think we will, then we will create hell and we will banish people to it.
So if there's even a 0.1% chance of something this bad happening to me, I plan to check out early. That said, I see this as a low-probability risk on an individual basis (I think it's much, much more likely I'll die of a designer virus or something similar). Even more likely is that ASI introduces several new civilisational risks in rapid succession and one or two of them kill most humans alive today. But again, I will remind you that should this not happen, we should assume many people alive today will be subjected to unimaginable horrors. Use cases for new forms of torture are very obvious and will be rapidly adopted in totalitarian states, but I'd also question whether democracies will end up as totalitarian states in a world where ASI exists – I think this depends on whether you believe ASI is likely to concentrate power (which I suspect it probably will). We should neither assume that the invention of hell is unlikely, nor that it is unlikely to be used.
I appreciate that what I'm saying sounds crazy today, but magical sticks which go "bang" and immediately kill the people they're pointed at would also sound crazy to most humans who ever lived. We always underestimate how much technology can alter the boundaries of fiction, and ASI will likely change those boundaries faster and more significantly than we can possibly imagine today. Even I, as an AI doomer, am probably understating the extent of the horror which could be coming to you and your family. Your kids may suffer unimaginably for millions of years, trillions even.
Other low-probability outcomes include scenarios where AI advancements are fundamentally destabilising and where events like mass job loss, rioting and war result in a halting of civilisational progress and mass death – largely from famine. I'm preparing for these outcomes as much as I possibly can, since these are the only outcomes I have any agency over. I grow a lot of food, I have chickens and keep years of food supplies (needed for nuclear winter scenarios). I'm in the process of fitting a water butt in my garden so I'm not dependent on the public water supply. I keep a large stash of firewood and heating fuel for cooking and heat if energy is cut off. I have been amassing tools for a few years now so I can repair things and produce various items (including weapons). I will continue preparing for these scenarios for as long as I have left.
Realistically, though, there's very little I can do in most scenarios and I'm nowhere near as prepared as I'd like to be. But really I just hope to avoid suffering, and I accept that over most outcomes I have very little agency. I just hope to die quickly or have the strength to check out while I have the chance.
I guess the sad truth is I'd consider being diagnosed with terminal cancer right now a positive improvement in my expected life outcome. It's quite hard to overstate how concerned I am. The end of humanity isn't the bad outcome, imo; it's neutral from the perspective of human suffering and dramatically understates the risks coming.
But hey, hopefully I'm wrong =)
Unfortunately there's a huge number of people who get obsessed with details and then nitpick. I see this with Eliezer Yudkowsky all the time, where 90% of the criticism of his views is just nitpicking of the weaker predictions he makes while ignoring his stronger predictions about the core risks which could result in those bad things happening. I think Yudkowsky opens himself up to this, though, because he often makes very detailed predictions about how things might play out, and this is largely why he's so controversial, in my opinion.
I really liked AI 2027 personally. I thought the tabletop exercises specifically were a nice heuristic for predicting how actors might behave in certain scenarios. I also agree that it presented a plausible narrative for how things could play out. I'm also glad they didn't wimp out with the bad ending. Another problem I have with people who are concerned about AI risk is that they shy away from speaking plainly about the fact that, if things go poorly, your loved ones in a few years will probably either be dead, in suspended animation on a memory chip, or in a literal digital hell.
I know I sound crazy writing it out, but many of the really bad scenarios don't require consciousness or anything like that. They just require that these systems be self-replicating and able to operate without humans shutting them off.
I'm not sure the author did anyone a favor with this write-up. More than anything, it buries the main point ("this kind of forecasting is fundamentally bullshit") under a bunch of complicated-sounding details that lend credibility to the original predictions, which the original authors now get to argue about and thank people for pointing out as "minor issues which we have now addressed in the updated version".
- Google buying TiVo is very funny, but ended up being accurate
- Google GRID is an interesting concept, but we did functionally get this with Google Drive
- MSN Newsbotster did end up happening, except it was Facebook circa ~2013+
- GoogleZon is very funny, given they both built this functionality separately
- Predicting summarized news is at least 13 years too early, but it's still correct
- NYT vs GoogleZon also remarkably prescient, though about 13 years too early as well
- EPIC pretty accurately predicts the TikTok and Twitter revenue share, though, again, about 12 years too early
- NYT still hasn't gone offline, and was bolstered by viewership during the first Trump term, though print subscriptions are the lowest they've ever been
Really great video - it does seem like they predicted 2024 more than 2014: people unironically thought Haitians were eating dogs and that food prices had gone up 200% because of what they saw on TikTok, and elected a wannabe tyrant as a result
Everyone needs to be planning for this -- all of this urgent talk of "AI" (let alone "climate change" or "holocene extinction") is of positively no consequence compared to the prospect I've outlined here: a mass of HUMAN FLESH the size of THE MOON growing on the surface of our planet!
On a more serious note: have these AI doom guys ever dealt with one of these cutting-edge models on out-of-distribution data? They suck so, so bad. There's only so much data available, and the models have basically slurped it all.
Let alone the basic thermodynamics of it. There's only so much entropy out there in cyberspace to harvest; at some point you run into a wall, and then you have to build real robots to go collect more in the real world. And how's that going for them?
Also I can't help remarking: the metaphor you chose is science fiction.
These things are dangerous not because of some sci-fi event that might or might not happen X years from now, they're dangerous now for perfectly predictable reasons stemming primarily from executive and VC greed. They won't have to be hyperintelligent systems that are actually good or better at everything a human is, you just need to sell enough CEOs on the idea that they're good enough now to reach a problematic state of the world. Hell, the current "agents" they're shoving out are terrible, but the danger here stems from idiots hooking these things up to actual real world production systems.
We already have AI systems deciding who does or doesn't get a job, or who gets fines and tickets from blurry imagery where they fill in the gaps, or who gets banned off monopolistic digital platforms. Hell, all the grifters and scammers are already using these systems because what they care about is quantity and not quality. Yet instead of discussing the actual real dangers happening right now and what we can do about it, we're instead focusing on some amusing but ultimately irrelevant sci-fi scenarios that exist purely as a form of viral marketing from AI CEOs that have gigantic vested interests in making it seem as if the black boxes they're pushing out into the world are anything like the impressive hyperintelligences you see in sci-fi media.
I'm as big a fan of Philip K. Dick as anyone else, and maybe there is some validity to worrying a bit about this hypothetical Skynet/Blade Runner/Butlerian Jihad future, but how about we shift more of our focus to the here and now, where real dangers already exist?
"Nuclear reactors are dangerous not because of some sci-fi chain reaction that might or might not happen, they're dangerous now for perfectly predictable reasons stemming primarily from radiation and radioactive waste."
The straightforward mitigation for the hypothetical situation is to halt development; this is not what the AI companies are pushing for, so I'm not convinced that this line of thinking can be meaningfully attributed to the marketing strategy of AI companies.
How do all those bugs get removed?
The back-and-forth over σ²’s and growth exponents feels like theatrics that bury the actual debate.
Truly a bizarre take. I'm sure the dinosaurs also debated the possible smell and taste of the asteroid that was about to hit them. The real debate, lol.