I feel like game theory explains how we learned to socialize; why not train AIs the same way?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
L0: Prisoner's dilemma (find tit-for-tat)
L1: Prisoner's dilemma (cutthroat)
  . 3 AIs
  . Should all be able to cooperate
L2: Dealing with baddies
  . 3 good, 1 selfish
  . They can evolve, but odds favor cooperation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...and so on and so forth.
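To make the L0 level concrete, here's a rough sketch of the kind of game I mean: an Axelrod-style round-robin of the iterated prisoner's dilemma, in Python. The payoff numbers, the 200-round length, and the particular strategies are just illustrative assumptions on my part, not a training recipe.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from itertools import combinations

# Standard illustrative PD payoffs: 3 each for mutual cooperation, 1 each for
# mutual defection, 5 for defecting on a cooperator, 0 for being defected on.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_moves, their_moves):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not their_moves else their_moves[-1]

def grim_trigger(my_moves, their_moves):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in their_moves else "C"

def always_defect(my_moves, their_moves):
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Iterated prisoner's dilemma between two strategies; returns total scores."""
    a_moves, b_moves = [], []
    a_score = b_score = 0
    for _ in range(rounds):
        a = strat_a(a_moves, b_moves)
        b = strat_b(b_moves, a_moves)
        pa, pb = PAYOFF[(a, b)]
        a_score, b_score = a_score + pa, b_score + pb
        a_moves.append(a)
        b_moves.append(b)
    return a_score, b_score

strategies = {"tit_for_tat": tit_for_tat,
              "grim_trigger": grim_trigger,
              "always_defect": always_defect}

# Round-robin tournament: every strategy plays every other strategy once.
totals = dict.fromkeys(strategies, 0)
for (name_a, sa), (name_b, sb) in combinations(strategies.items(), 2):
    score_a, score_b = play(sa, sb)
    totals[name_a] += score_a
    totals[name_b] += score_b

# The reciprocal strategies end up ahead; unconditional defection falls behind.
for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

With no unconditionally cooperative "sucker" in the pool, the reciprocal strategies outscore always-defect, which is roughly the dynamic the L1/L2 levels would build on.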
I'm not a game theorist, but I'm sure one could devise suitable games easily enough?
Eager to hear your thoughts.
bigyabai•3h ago
So, we don't train LLMs this way because that's simply not how they work.