Much of the current AI debate is stuck in a time loop: regulation and public skepticism remain focused on models from 2017 to 2023. As a researcher who has followed AI since the 1980s, I argue that we have passed a phase transition. The gap between a 2023 model and a 2026 system is not incremental; it is the difference between a moped and a spaceship, yet our terminology and social contracts remain dangerously outdated.
The term "artificial intelligence" is itself harmful: it creates the illusion of an autonomous subject and obscures the critical role of human performers and the human-authored narratives of the communities these systems draw on. Reframing them as "smart chats" labeled by year of release returns us to an engineering-centric view, in which the value of the output depends on the operator's skill at "playing" the instrument. For the skilled user, these tools are a bridge to humanity's collective knowledge; for the uninitiated, they remain a source of artificial information noise. https://zenodo.org/records/18683885