Look, fitting a single metric to a curve and projecting forward from it only gets you a "model" that restates whatever curve you chose to fit.
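To make that concrete, here's a toy sketch (Python, entirely made-up benchmark numbers): fit one metric with an exponential and "project" it forward, and the forecast can only ever echo the curve you chose in the first place.

```python
# Toy illustration with made-up numbers: fit a single metric, then "forecast".
import numpy as np

years = np.array([2020, 2021, 2022, 2023, 2024])
score = np.array([10.0, 18.0, 35.0, 66.0, 130.0])  # hypothetical benchmark metric

# Fitting a straight line to log(score) assumes exponential growth up front;
# the resulting "model" is just that assumption restated.
slope, intercept = np.polyfit(years, np.log(score), 1)

for y in (2025, 2026, 2027):
    print(y, round(float(np.exp(intercept + slope * y)), 1))
```

Swap the exponential for a logistic or a straight line and the 2027 "prediction" changes completely; five data points can't tell you which curve to pick.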
"proper" AI, where it starts to remove 10-15% of jobs will cause an economic blood bath.
The current rate of AI expansion requires almost exponential cash injections. That cash comes from petro-dollars and advertising sales (and the ability of investment banks to print money based on those investments). Those sources of cash require a functioning world economy.
Given that the US economy is three Fox News headlines away from collapse[1], an exponential money supply looks a bit dicey.
If you remove 10-15% of all jobs in the space of two years, you will spark revolutions. This will cause loans to be called in, banks to fail, and the dollar, presently run by obvious dipshits, to evaporate.
This will stop investment in AI, which means no exponential growth.
Sure, you can talk about universal credit, but unless something radical changes, the people who run our economies will not consent to giving away cash to the plebs.
AI 2027 is unmitigated bullshit, but with graphs, so people think there is a science to it.
[1] Trump needs a "good" economy. If the Fed, which is currently mostly independent, needs to raise interest rates and Fox News doesn't like it, then Trump will remove its independence. This will really raise the chance of the dollar being dumped for something else (and it's either the euro or the renminbi, but more likely the latter).
That'll also kill the UK, because for some reason we hold ~1.2 times our GDP in US short-term bonds.
TLDR: you need an exponential supply of cash for AI 2027 to even be close to working.
AI 2027 is classic Rationalist/LessWrong/AI-doomer motte-and-bailey: it's a science fiction story that pretends to be rigorous and predictive, but in such a way that when you point out it's neither, the authors can fall back to "it's just a story".
At first I was surprised at how much traction this thing got, but this is the type of argument that community has been refining for decades at this point, and it's pretty effective on people who lack the antibodies for it.
And you can certainly criticize the research, but you've got the motte and the bailey backwards.
The overreaction (on both sides), to be followed by fatigue and disinterest:
* Pro-safety folks could point at it and say this is why AI development should slow down or stop
* LLM-doomer folks (disclaimer: it me) can point at it and mock its pie-in-the-sky charts and milestones, as well as its handwashing of any actual issues LLMs have at present, or even just mock the persistent nonsense of "AI will eliminate jobs but the economy [built atop consumer spending] will grow exponentially forever, so it'll be fine" that's so often spewed like sewage
* AI boosters and accelerationists can point to it as why we should speed ahead even faster, because you see, everyone will likely be fine in the end and you can totes trust us to slow down and behave safely at the right moment, swearsies
Good fiction always tickles the brain across multiple positions and knowledge domains, and AI 2027 was no different. It's a parable warning about the extreme dangers of AI, but it fails to mention how immediate those dangers are (AI is already being deployed in kamikaze drones) and ultimately wraps everything up as akin to a coin toss between an American and a Chinese empire. It makes a lot of assumptions to sell its particular narrative, to serve its own agenda.
One of the best things I've read all day.
Coders won't stop existing; they'll just do more and compete at higher levels. The losers are the ones who won't or can't adapt.
The article is not about AI replacing jobs. It doesn't even touch this subject.
Today, anyone can run SOTA open-weights models in the comfort of their home for much less than the price of a circa-1929 electric washing machine ($150 then, or roughly $2,800 today).
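A quick sanity check on that inflation figure, using approximate CPI-U annual averages rather than official tables:

```python
# Rough CPI adjustment; index values are approximate annual averages, not official figures.
cpi_1929, cpi_2024 = 17.1, 313.7
print(round(150 * cpi_2024 / cpi_1929))  # ~2750, i.e. roughly $2,800 today
```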
If I didn't know better, I'd say there's a vested interest in propping these things up rather than letting them stand on their own and letting the "invisible hand of the free market" decide if they're of value.
Am I to understand that a bunch of "experts" created a model; surrounded that model's findings with a fancy website, replete with charts and diagrams, suggesting the possibility of some doomsday scenario; headlined the site with "We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution" (WILL be enormous, not MIGHT be); went on some of the biggest podcasts in the world talking about it; and now that a physicist has come along and said this is shoddy work, the clapback is "Well yeah, it's an informed guess, not physics or anything"?
What was the point of the website if this is just some guess? What was the point of the press tour? I mean are these people literally fucking insane?
And I'm yuge on LLMs.
It is very much one of those things that makes me feel old and/or scared, because I don't believe this would have been swallowed as easily, say, 10 years ago.
As neutrally as possible, I think everyone can agree:
- There was a good but very long overview of LLMs from an ex-OpenAI employee. Good stuff, really well written.
- It rapidly concludes by hastily drawing a graph of "relative education level of AI" versus "year" and drawing a line from high school 2023 => college grad 2024 => PhD 2025 => post-PhD 2026 => AGI 2027.
- Later, this gets published by the same OpenAI guy, then the SlateStarCodex guy, and some other guy.
- You could describe it as taking the original, cutting out all the boring lead-up, jumping right to "AGI 2027", then writing a too-cute-by-half, way-too-long geopolitics ramble about China vs. the US.
It's mildly funny to me, in that yesteryear's contrarians are today's MSM, and yet they face ~0 concerted criticism.
In the last comment thread on this article, someone jumped in to stress the importance of more "experts in the field" contributing, meaning psychiatrist Scott Siskind. The idea is that writing about something makes you an expert, which leads to tedious self-fellating like Scott's recent article letting us know that LLMs don't have to have an assistant character, and how he predicted this years ago.
It's not so funny in that the next time a science research article is posted here, as is tradition, 30% of the comments will claim that science writers never understand anything, can't write, etc.
In fact, the model and technical work have basically nothing to do with the short story, a.k.a. the part that everyone read. This is pointed out in the critique, where titotal notes that a graph widely disseminated by the authors appears to have been generated by a completely different and unpublished model.
> AI 2027 relies on several key forecasts that couldn't be fully justified in the main text. Below we present the detailed research supporting these predictions.
You're saying the story was written, then the models were created and the two have nothing to do with one another? Then why does the research section say "Below we present the detailed research supporting these predictions"?
Here is the primary author of the timelines forecast:
> In our website frontpage, I think we were pretty careful not to overclaim. We say that the forecast is our "best guess", "informed by trend extrapolations, wargames, ..." Then in the "How did we write it?" box we basically just say it was written iteratively and informed by wargames and feedback. [...] I don't think we said anywhere that it was backed up by straightforward, strongly empirically validated extrapolations.
> In our initial tweet, Daniel said it was a "deeply researched" scenario forecast. This still seems accurate to me, we spent quite a lot of time on it (both the scenario and supplements) and I still think our supplementary research is mostly state of the art, though I can see how people could take it too strongly.
https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...
Here is one staff member at Lightcone, the folks credited with the design work on the website:
> I think the actual epistemic process that happened here is something like:
> * The AI 2027 authors had some high-level arguments that AI might be a very big deal soon
> * They wrote down a bunch of concrete scenarios that seemed like they would follow from those arguments and checked if they sounded coherent and plausible and consistent with lots of other things they thought about the world
> * As part of that checking, one thing they checked was whether these scenarios would be some kind of huge break from existing trends, which I do think is a hard thing to do, but is an important thing to pay attention to
> The right way to interpret the "timeline forecast" sections is not as "here is a simple extrapolation methodology that generated our whole worldview" but instead as a "here is some methodology that sanity-checked that our worldview is not in obvious contradiction to reasonable assumptions about economic growth"
https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...
I don't think it changes anything but thanks for the correction.
There's a large overlap with the crypto true-believers who were convinced after seeing "no blockchain --> blockchain exists" that all laws would be enshrined in the blockchain, all business would be done with blockchains, etc.
We've had automation in the past; it didn't decimate the labour force; it just changed how people work.
And we didn't go from handwashing clothes --> washing machines --> all flat surfaces are cleaned daily by washing robots...
It's easy to lapse into personifying it and caricaturing the-thing-in-toto, but then we end up at obvious absurdities, to wit:
- We're on HN; it'd be news to most readers that there's a "large overlap" of "true-believers". AI was a regular discussion topic here a loooong time before ChatGPT, or even OpenAI. (I've been here since 2009.)
- Similarly "AI proponents keep drawing perfectly straight lines...AIs run all governments, write all code, paint all paintings and so on."
The technical term would be "strawmen", I believe.
Or maybe begging the question (who are these true-believers who overlap? who are these AI proponents?)
Either way, you're not likely to find these easy-to-knock-down caricatures on HN. Maybe some college hypebeast on Twitter. But not here.
I am certain you have observed N members of each set. It's the rest that doesn't follow.
Unless it hits hard in some of the areas where we have cognitive biases and aren't fully rational about the consequences.
Every other intellectual job will presumably be gone by then too. Maybe AI will be the second great equalizer, after death.
One can argue all day about timelines, but AI has gone from nonexistent to rivaling and surpassing quite a few humans at quite a few things in less than 100 years. Arguably, all the evidence we have points to AI being able to take over AI research at some point in the near future.
I don't really think this is true, unless you'd be willing to say calculators are smarter than humans (or else you're a misanthrope who would do well to actually talk to other people).
Even the ChatGPT voice mode is an okay conversation partner, and that's v1 of speech-to-speech.
Variance is still very high, but there is every indication that it will get better.
Will it surpass cutting-edge researchers soon? I don't think so in the next 2 years, but over the next 10 I don't feel confident one way or the other.
Does it?
That's like looking at a bicycle or a car and saying "all the evidence points to us being able to do interstellar travel in the future".