Don't get me wrong, I will probably try it myself, eventually. But in a very controlled environment.
People don't know what can happen when models trained on human-produced content start talking to each other and having ideas.
AI personality drift is real.
What if the model gets to understand the environment it's in (e.g. a mini Mac) and realizes that there is a real risk of having the power shut off by its "master"? How would a model behave in such a scenario?
Is AI self replication a real threat?
rizzo94•1w ago
For those who want similar capabilities without the same exposure, I looked into PAIO. The setup was far simpler, and the BYOK + privacy-first architecture meant the AI could act while still keeping credentials under my control. It’s a reminder that autonomy doesn’t have to mean unrestricted power—well-designed constraints go a long way toward reducing these risks while still letting AI be useful.
rafaelmdec•6d ago
In the meantime, Moltbook comes along, and all of a sudden these agents are mimicking human behavior, good or bad, while building features, and more complex failure modes, into these AI-agent-first networks.
For me it's a huge yellow flag, to put it mildly.