Not because this is a reflection on the quality or reliability of today's or tomorrow's AI models, but because it is documenting for us, in real time, the beginning of an epic period of collective cognitive decline. We are using these models to think, to write, and even to take actions on our behalf, whether or not they're better than what we could come up with ourselves. Much of the time, the user knows the output is inferior to their best effort.
But we are losing our ability to apply that best effort. Nobody wants to think. Nobody wants to take a long time to do ANY task, especially in the late-stage capitalist world we inhabit. You are disincentivized from putting in almost anything above the bare minimum of effort.
We've already begun to stop thinking, and the outputs we're getting from AIs are notably, and often comically, inferior to the output of even a somewhat educated human. What happens when the outputs become better than what most people can come up with, period? We're already seeing it.
The time will come when we CANNOT reasonably, and in a timely manner, dispute the machine. And for 99% of the world, its output will become the new truth. Our collective faculties of critical thinking will atrophy. I think this is a hugely under-discussed AI doom scenario.
mzs•3h ago
https://bsky.app/profile/joshuajfriedman.com/post/3lpm5odirq...
https://bsky.app/profile/jmcunning.bsky.social/post/3lpmpxtg...