MetaAI_Official OP t1_izfe82v wrote
Reply to comment by [deleted] in [D] We're the Meta AI research team behind CICERO, the first AI agent to achieve human-level performance in the game Diplomacy. We’ll be answering your questions on December 8th starting at 10am PT. Ask us anything! by MetaAI_Official
The title of the paper doesn't necessarily refer to CICERO being "human-like" (though it does behave in a fairly human-like way). Rather, it refers to the agent achieving a score on the level of strong human players.
But also, CICERO is not just trying to be human-like: it's also trying to model how *other* humans are likely to behave, which is necessary for cooperating with them. In one of our earlier papers we show that even in a dialogue-free version of Diplomacy, an AI trained purely with RL, without accounting for human behavior, performs quite poorly when playing with humans (Paper). The wider applications we see for this work are all about building smart agents that can cooperate with humans (self-driving cars, AI assistants, …), and for all these systems it's important to understand how people think and to match their expectations (which often, though not always, means responding in a human-like way ourselves).
When language is involved, understanding human conventions is even more important. For example, saying “Want to support me into HOL from BEL? Then I’ll be able to help you into PIC in the fall” is likely more effective than the message “Support BEL-HOL” even if both express the same intent. -AL
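As a toy illustration of that last point (this is not CICERO's actual pipeline, which conditions a large language model on planned intents; the order format and function names here are hypothetical), the same structured intent can be rendered either as bare order notation or as a cooperative proposal:

```python
# Toy sketch: rendering one Diplomacy intent two different ways.
# Illustrative only -- CICERO generates messages with an intent-conditioned
# language model, not with templates like these.

def render_terse(intent: dict) -> str:
    # Bare order notation, e.g. "Support BEL-HOL".
    return f"Support {intent['src']}-{intent['dst']}"

def render_persuasive(intent: dict) -> str:
    # Same intent, framed as a request with a promised payoff,
    # matching the conventions a human partner expects.
    return (f"Want to support me into {intent['dst']} from {intent['src']}? "
            f"Then I'll be able to help you into {intent['payoff']} in the fall")

intent = {"src": "BEL", "dst": "HOL", "payoff": "PIC"}
print(render_terse(intent))       # Support BEL-HOL
print(render_persuasive(intent))
```

Both strings encode the same move; only the second speaks in the conventions a human ally expects.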