In a preprint titled "Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?" (https://arxiv.org/abs/2502.15657), Yoshua Bengio et al. propose a safer alternative to AGI: "scientist AIs" that explain the world but do not act as agents, which the authors define as follows: "Agents observe their environment and act in it in order to achieve goals".
esafak•7h ago
I think non-agency is crucial to preventing catastrophes, but even without agency, models could induce humans to take action on their behalf, to similar ends.