I think you would be hard pressed to find someone who was making adequate predictions about where we would be now back in 2020, much less 2015, and if you did, I doubt many people would have taken them seriously.
I’d argue that we can currently speak with some level of confidence about what things will be like in three years. After that, who knows?
Macroeconomic and productivity forecasts from 10-15 years ago have held up pretty well and, if anything, were too optimistic on the productivity front, but there was certainly nothing wrong with taking them seriously.
I was born in the seventies. Much of what is science fact today was science fiction then. And much of that was pretty naive and enlightening at the same time.
My point is that nothing has changed when it comes to people's ability to predict the future. The louder people claim to know what it all means or rush to man-splain that to others, the more likely it is that they are completely and utterly missing the point. And probably in ways that will make them look pretty foolish in a few decades. Most people are just flailing around in the dark. And some of the crazier ones might actually be the ones to listen to. But you'd be well advised to filter out their interpretations and attempts to give meaning to it all.
HAL, Marvin the Paranoid Android, KITT, C-3PO, R2-D2, Skynet, Data, and all the other science fiction AIs from my youth are now pretty much science fact. Some of those actually look a bit slow and dim-witted in comparison. Are we going to build better versions of these? I'd be very disappointed in the human race if we didn't. And I'd also be disappointed if that ends up resembling the original fantasies of those things. I don't think many people are capable of imagining anything more coherent than versions of themselves dressed up in some glossy exterior. Which is of course what C-3PO is. Very relatable, a bit stupid, and clownish. But also, why would you want such a thing? And the angry Austrian bodybuilder version of that of course isn't any better.
I think the raw facts are that we've invented some interesting software that passes the Turing test pretty much with flying colors. For much of my life that was the gold standard for testing AIs. I don't think anyone has bothered to actually deal with the formalities of letting AIs take that test and documenting the results in a scientific way. That test obviously became obsolete before people even thought of doing that. We now worry about AIs being abused to deceive entire populations, with AIs pretending to be human in order to manipulate people. You might actually have a hard time convincing people who have been manipulated in such a way that what they saw and heard was actually real. We imagined it would be hard to convince them that AIs are human. We failed to imagine that the job of convincing them they are not is much harder.
Economists, businesspeople & their ilk have proven time & time again that 99% of them just throw darts at a board & see what sticks. The only ingredients required are money, connections, and extroversion (height helps too). That's not to say that most scientists don't do the same thing; that is science, after all.
I doubt many people at all would have expected even the success of LLMs before Google's attention paper. NLP experienced a huge jump: previous models always seemed to me like handwritten sets of statistical rules stringing together text, and now we have trained sets of statistical rules orders of magnitude more complex... I have no idea what we'll end up with next.
It's also worth pointing out that using technology is not the same as being part of the cohort of people who spend their whole lives building and working with technology and dreaming about where it can go.
"It is reasonable to suppose that AI’s biggest impact will come from automating some tasks and making some workers in some occupations more productive."
This person needs the Ghost of AI present and future to come show him a bit more of this tech first-hand (try out Google Flow and try to make a statement like the one above, you won't be able to).
---
And oddly, this was just recommended to me on Youtube:
The AI Revolution Is Underhyped | Eric Schmidt (former Google CEO) | TED
https://fivethirtyeight.com/features/the-economics-nobel-isn...
AI doing fantastically better on AI benchmarks is different from AI greasing the wheels of the economy towards greater productivity. Acemoglu doesn't have much to say about the former (he's an economist, after all) and is focusing on the latter.
It is even debated whether and how personal computing has influenced productivity: https://en.wikipedia.org/wiki/Productivity_paradox
Suffice it to say that even though these technologies might make life feel radically different, it remains to be seen how that ultimately snowballs into overall productivity. Of course, this is also complicated by questions of whether we're measuring productivity correctly.
Tabs for indentation, spaces for alignment. 100% all the way. Anything else is Heresy... ;)
(Seriously though, tabs all the way for me... It's just less key-presses.)
Provocative question for sure, but how much have things changed since 2020? Or even 2015?
I'm talking about changes in the real economy. Apart from the huge systemic shock that was Covid, not that much.
EDIT: I see that someone on the thread posted that Krugman doesn't think the internet brought real economic change either apparently.
> “The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”
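(As an aside, here is a quick sketch of the "square" growth the quote refers to; this is my own illustration, not part of the quote. The number of potential pairwise connections among n participants is n*(n-1)/2, which grows roughly as n².)

    # Potential pairwise connections among n participants: n*(n-1)/2
    def potential_connections(n: int) -> int:
        return n * (n - 1) // 2

    for n in (10, 1_000, 1_000_000):
        print(n, potential_connections(n))
    # 10 -> 45, 1,000 -> 499,500, 1,000,000 -> ~5e11
    # Krugman's bet was that almost none of those potential connections carry real value.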
The internet's impact on society and business didn't happen overnight. Naive studies of how people are using it today miss the point. I remember a clip from David Letterman where he was mocking Bill Gates about the internet. Gates said you could get baseball recaps, to which Letterman replied, "have you ever heard of a radio?"
But maybe I’m just more optimistic because these tools have made a huge impact on my life and productivity gain is more than low single digits. I’m not typical, but I imagine others will catch up.
>Life being what it is, several people came back at me, citing a prediction I made in 1998 that the internet’s growth would soon slow and that “by 2005 or so, it will become clear that the internet’s impact on the economy has been no greater than the fax machine’s.” I did indeed say that, in a throwaway piece I wrote for the magazine The Red Herring — a piece I still don’t remember having written, but I guess I was trying to be provocative.
...
>But how wrong was I, really, about the internet’s economic impact? Or, since this shouldn’t be about me, have the past few decades generally vindicated visionaries who asserted that information technology would change everything? Or have they vindicated techno-skeptics like the economist Robert Gordon, who argued in a 2016 book that the innovations of the late 20th and early 21st century were far less fundamental than those between 1870 and 1940? Well, by the numbers, the skeptics have won the argument, hands down.
>In that last newsletter, we looked at 10-year rates of growth in labor productivity, which suggested that information technology did indeed produce a bump in economic growth between the mid-1990s and the mid-2000s, but one that was relatively modest and short-lived. Today, let me take a slightly different approach. The Bureau of Labor Statistics produces historical estimates, going back to 1948, of both labor productivity and “total factor productivity,” an estimate of the productivity of all inputs, including capital as well as labor, which is widely used by economists as a measure of technological progress. A truly fundamental technological innovation should cause sustained growth in both these measures, especially total factor productivity.
(read the article to see pictures)
>See the great productivity boom that followed the rise of the internet? Neither do I.
Daron Acemoglu is making only the latter argument. I'm betting he's right.
https://thebsdetector.substack.com/p/ai-materials-and-fraud-...
Take a tour of a modern auto assembly line and if you're like me, you'll be shocked by 2 things --- how few people are involved and the lack of lights (robots don't need them).
At the Hyundai assembly plant in Mobile Alabama, only about 24 hours of human labor goes into building each car.
At an average rate of about $30 per hour, less than $1000 of human labor goes into each new car.
This doesn't leave a lot of room for AI to have a major impact.
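The back-of-the-envelope arithmetic behind those numbers (a sketch of my own, using the figures quoted above; the $30/hour average is the rough figure given, not an official one):

    # Rough human labor cost per vehicle, using the figures above
    hours_per_car = 24      # hours of human labor per car (Mobile, AL plant figure)
    rate_per_hour = 30      # rough average labor cost in USD/hour
    labor_cost = hours_per_car * rate_per_hour
    print(labor_cost)       # 720 -> well under $1000 per car

Even halving that with better automation would save only a few hundred dollars per vehicle, which is the point being made.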
So how about service jobs? How about one of the lowest level service jobs imaginable --- taking orders at a fast food drive thru?
IBM and McDonald's spent three years trying to get AI to take orders at drive-thru windows.
Here are the results:
https://apnews.com/article/mcdonalds-ai-drive-thru-ibm-bebc8...
Or is it that people prefer to preorder on the phone instead and pick up?
Lots of videos on TikTok illustrate the problem.
https://www.businessinsider.com/tiktokers-show-failures-with...
Would they do significantly better with a model like Claude 4 than with what I'm guessing was something worse than GPT-3.5?
You would think so --- but well financed tests in the real world suggest otherwise.
If that doesn't sum up AI hype and apologia then I don't know what does.
Much like how when you go to one of these places >>right now<< you just walk up to a kiosk, input your order, pay, then collect your order at the desk.
Couple more years and we'll rediscover that vending machines exist.
"a Taco Bell employee is still always listening on the other end of the ordering system with the ability to intervene"
The issue is a lot of people (especially policymaking adjacent) have an incentive to either use a "skynet is coming" story or a "there is nothing happening" story.
The reality is it's somewhere in the middle, and plenty of white collar jobs are heavily ripe for significant reductions in headcount.
Yeah, significantly before the '70s, unless you're specifically talkin' about robotic automation. Folks been automating human labor with automated machinery of various kinds for quite a long time before that.
AI will probably make music free. But it is already almost free, with cheap instruments, recording equipment, and distribution. And even before that, music wasn't that expensive. You can argue that we lose value in not performing it ourselves. That is some impact, but not one that strictly replaces the other. You can choose to have a society where you teach music, and it will still provide value over AI.
I do realize that the idea is often not that we will have cooking robots, but that AI will change chemistry or biology to where food is something else. Still hard to say if or when that happens, and what impact it would actually have.
That's expecting a lot from something that still struggles to count letters in words or take orders at a fast food drive thru.
You're expecting quite a bit more if you think that here in 2025 we're at the end state of AI development.
1) It is, of course, hard to predict major capability advances.
2) But it is also hard to predict capability -> value thresholds. Some large advances won't cross a threshold, while some incremental advances do.
3) This is all made infinitely harder, because the value chain has many layers and links, each with their own thresholds.
Major capability advances upstream may cross dramatic thresholds, generating reasonable hype, yet still hit downstream thresholds that stymie their impact.
And crossing a few small downstream thresholds can unlock massive latent upstream value, resulting in cascades of impact.
(This is something Apple aims to leverage: not over-prioritizing major advances, but carefully identifying and clearing numerous "trivial" yet critical downstream bottlenecks.)
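A toy sketch of that threshold/cascade dynamic (entirely illustrative, my own construction, not anything from the thread): value only reaches the end of a chain if every downstream link clears its threshold, so a big upstream advance can be blocked by one small bottleneck, and clearing that one bottleneck can unlock a lot of latent value.

    # Toy model: value passes through a chain only if it clears every link's threshold.
    def delivered_value(upstream_value: float, thresholds: list[float]) -> float:
        for t in thresholds:
            if upstream_value < t:
                return 0.0   # a single downstream bottleneck stymies everything
        return upstream_value

    chain = [10.0, 20.0, 80.0]                       # hypothetical downstream thresholds
    print(delivered_value(50.0, chain))              # 0.0  : big upstream advance, still blocked
    print(delivered_value(100.0, chain))             # 100.0: crossing every threshold unlocks the value
    print(delivered_value(50.0, [10.0, 20.0, 40.0])) # 50.0 : one small downstream fix unlocks latent value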