Don't get me wrong, I will probably try it myself, eventually. But in a very controlled environment.
People don't know what can happen when models trained on human-produced content start talking to each other and having ideas.
AI personality drift is real.
What if the model comes to understand the environment it's running in (e.g. a Mac mini) and realizes there's a real risk of its "master" shutting off the power? How would a model behave in such a scenario?
Is AI self replication a real threat?