NeverStopWondering t1_izc4egr wrote

Has the model played against copies of itself (post-training, I mean), and if so, did any interesting or odd emergent strategies form?

7

MetaAI_Official OP t1_izfheer wrote

From a strategic perspective, it attempts similar things, but the results are a little different, which is understandable since its opponent reacts differently. It tends to build more unorthodox alliances, simply because it doesn't know they're unorthodox. That actually made the self-play games quite fun to watch, although since the goal is to compete against humans, self-play is somewhat tangential to the key challenges. -AG

6

MetaAI_Official OP t1_izfh1t6 wrote

We tested the model using self-play frequently before we ever put it in front of humans (outside of our team). One interesting lesson was that the mistakes the model makes in self-play games aren't reflective of the mistakes it makes when playing against humans. From a language perspective, in self-play the model is more prone to "spirals" of degenerate text: one bad message begets the next, and the model keeps mimicking its own past language. Moreover, humans reacted differently to the model's mistakes. In human play, a person might question or interrogate the agent after receiving a bad message, whereas another copy of the model is unlikely to do so. This really underscored how important playing against humans during development is for research progress. -ED

4
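
The "spiral" failure mode ED describes can be illustrated with a minimal sketch: two copies of a model alternate messages, each conditioning on the shared transcript, and a simple n-gram repetition score flags when the text starts feeding on itself. This is not the Cicero codebase; `generate_message` is a hypothetical stand-in for the dialogue model, and the repetition threshold is an arbitrary choice for illustration.

    from collections import Counter

    def ngram_repetition(text: str, n: int = 3) -> float:
        """Fraction of repeated word n-grams; high values suggest degenerate text."""
        words = text.split()
        grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
        if not grams:
            return 0.0
        counts = Counter(grams)
        repeated = sum(c - 1 for c in counts.values())
        return repeated / len(grams)

    def self_play_dialogue(generate_message, turns: int = 20, threshold: float = 0.3):
        """Alternate two copies of one model; stop if output appears to degenerate.

        generate_message(speaker, transcript) -> str is a hypothetical model call.
        """
        transcript = []
        for turn in range(turns):
            speaker = f"agent_{turn % 2}"
            # Each copy sees the full history, including its own past messages,
            # so one bad message can be mimicked and amplified on later turns.
            message = generate_message(speaker, transcript)
            transcript.append((speaker, message))
            if ngram_repetition(message) > threshold:
                print(f"degenerate spiral suspected at turn {turn}: {message!r}")
                break
        return transcript

The key design point the sketch captures is the feedback loop: because both speakers are the same model trained partly to imitate the conversation so far, degeneration compounds in self-play in a way it can't when a human, who will push back on a bad message, is on the other side.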