Four decades ago was 1985. The thing is, there was a huge jump in progress from then until now. If we took something that had a steady ramp of progress, like computer graphics, and instead of ramping up it jumped from '1985' to '2025' over the course of a few months, do you think there wouldn't be a lot of hype?
Whether the current crop of AI tech is it or not, I have yet to be convinced that the singularity is practically impossible, and as long as developments keep pointing toward it rather than away from it, I get increasingly unnerved.
A serious paper would start by acknowledging that every previous general-purpose technology required human oversight precisely because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition. It would wrestle with the fundamental tension: if AI remains error-prone enough to need human supervisors, it's not transformative; if it becomes reliable enough to be transformative, those supervisory roles evaporate.
These two Princeton computer scientists, however, just spent 50 pages arguing that AI is like electricity while somehow missing that electricity never learned to fix itself, manage itself, or improve itself - which is literally the entire damn point. They're treating "humans will supervise the machines" as an iron law of economics rather than a temporary bug in the automation process that every profit-maximizing firm is racing to patch. Sometimes I feel like I'm losing my mind when GPT-5 could plainly do a better job of understanding historical analogies than Narayanan and Kapoor did in their paper.
Through this lens it's way more normal.
Let's not forget there have been times when if-else statements were considered AI. NLP used to be AI too.