Comments

LithiumToast OP t1_ixhaiaw wrote

Per the article:

On Tuesday, Meta AI announced the development of Cicero, which it claims is the first AI to achieve human-level performance in the strategic board game Diplomacy. It's a notable achievement because the game requires deep interpersonal negotiation skills, which implies that Cicero has obtained a certain mastery of language necessary to win the game.

Even before Deep Blue beat Garry Kasparov at chess in 1997, board games were a useful measure of AI achievement. In 2016, another barrier fell when AlphaGo defeated Go master Lee Sedol. Both of those games follow a relatively clear set of analytical rules (although Go's rules are typically simplified for computer AI).

But with Diplomacy, a large portion of the gameplay involves social skills. Players must show empathy, use natural language, and build relationships to win—a difficult task for a computer player. With this in mind, Meta asked, "Can we build more effective and flexible agents that can use language to negotiate, persuade, and work with people to achieve strategic goals similar to the way humans do?"

According to Meta, the answer is yes. Cicero learned its skills by playing an online version of Diplomacy on webDiplomacy.net. Over time, it became a master at the game, reportedly achieving "more than double the average score" of human players and ranking in the top 10 percent of people who played more than one game.

To create Cicero, Meta pulled together AI models for strategic reasoning (similar to AlphaGo) and natural language processing (similar to GPT-3) and rolled them into one agent. During each game, Cicero looks at the state of the game board and the conversation history and predicts how other players will act. It crafts a plan that it executes through a language model that can generate human-like dialog, allowing it to coordinate with other players.
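
The per-turn loop described there can be sketched roughly like this (a hypothetical Python sketch; the names predict_actions, plan_moves, and generate are illustrative stand-ins, not Meta's actual Cicero API):

```python
def take_turn(state, strategy_model, dialogue_model):
    # 1. Predict what the other powers are likely to do, given the
    #    board position and the conversation so far.
    predicted = strategy_model.predict_actions(state.board, state.dialogue)

    # 2. Choose a plan (its own orders plus the allied support it wants)
    #    that scores well against those predicted actions.
    plan = strategy_model.plan_moves(state.board, predicted)

    # 3. Turn each negotiating goal in the plan into a human-like message
    #    via the language model, conditioned on the chat history.
    messages = [dialogue_model.generate(state.dialogue, intent)
                for intent in plan.intents]
    return plan.orders, messages
```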

Meta calls Cicero's natural language skills a "controllable dialog model," which is where the heart of Cicero's personality lies. Like GPT-3, Cicero pulls from a large corpus of Internet text scraped from the web. "To build a controllable dialogue model, we started with a 2.7 billion parameter BART-like language model pre-trained on text from the internet and fine tuned on over 40,000 human games on webDiplomacy.net," writes Meta.

The resulting model mastered the intricacies of a complex game. "Cicero can deduce, for example, that later in the game it will need the support of one particular player," says Meta, "and then craft a strategy to win that person’s favor—and even recognize the risks and opportunities that that player sees from their particular point of view."
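
To make "controllable" concrete, here is a minimal sketch of the idea using an off-the-shelf BART checkpoint from Hugging Face as a stand-in. Cicero's actual fine-tuned weights and input encoding differ; the intent string below is a made-up illustration of conditioning generation on the planner's output:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Off-the-shelf BART as a stand-in for Cicero's 2.7B BART-like model.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# A plan produced by the strategic-reasoning side, serialized as text and
# packed into the input alongside the chat history, so the generated
# message is steered by the plan rather than by chat history alone.
intent = "FRANCE to ENGLAND: propose support for fleet North Sea to Belgium"
history = "England: What are you planning this turn?"

inputs = tokenizer(intent + " </s> " + history, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The point is only the shape of the interface: the planner's intent goes in with the conversation history, and a human-sounding message comes out.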

Meta's Cicero research appeared in the journal Science under the title, "Human-level play in the game of Diplomacy by combining language models with strategic reasoning."

As for wider applications, Meta suggests that its Cicero research could "ease communication barriers" between humans and AI, such as maintaining a long-term conversation to teach someone a new skill. Or it could power a video game where NPCs can talk just like humans, understanding the player's motivations and adapting along the way.

At the same time, this technology could be used to manipulate humans by impersonating people and tricking them in potentially dangerous ways, depending on the context. Along those lines, Meta hopes other researchers can build on its code "in a responsible manner," and says it has taken steps toward detecting and removing "toxic messages in this new domain," which likely refers to dialog Cicero learned from the Internet texts it ingested—always a risk for large language models.
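
Meta doesn't spell out its filtering pipeline, but one plausible shape for it is scoring each candidate message with a toxicity classifier and dropping anything above a threshold before it is sent. The checkpoint and threshold below are assumptions for illustration, not Meta's actual setup:

```python
from transformers import pipeline

# Off-the-shelf toxicity classifier as a stand-in; the model name and
# threshold here are illustrative assumptions.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def filter_messages(candidates, threshold=0.5):
    kept = []
    for msg in candidates:
        top = toxicity(msg)[0]  # e.g. {"label": "toxic", "score": 0.97}
        if top["label"] == "toxic" and top["score"] > threshold:
            continue  # drop the message instead of sending it
        kept.append(msg)
    return kept
```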

Meta provided a detailed site to explain how Cicero works and has also open-sourced Cicero's code on GitHub. Online Diplomacy fans—and maybe even the rest of us—may need to watch out.

7

ThatsRobToYou t1_ixhhrhr wrote

An AI that can master Diplomacy should legitimately scare the shit out of everyone. You need to be a sociopath to do exceedingly well at that game!

6

[deleted] t1_ixi9rnt wrote

And this helps us for what reason? Why are we not developing tech that focuses first on energy efficiency, and then on doing the jobs that are dangerous to human safety? Why are we doing this?

1

Amiga-Juggler t1_ixkxeqq wrote

To be gods… and to sell that technology to companies that are sick of employees. As an example, I think if Bezos could run Amazon with 50 distributed executives and service managers, and everything else with robots, he would. I would be interested to see how that would play out in the end… service contracts for the automation, service visits, software costs, etc. Basically: what would be the trade-offs?

Edit: I can’t help but think we are headed to some new form of slavery. I know that sounds weird, but I am just suggesting that what they are trying to build is a “human” that can’t demand anything of its owners. Does that make sense? …I emphasize “trying”. I think a lot of this is just noise.

1

Amiga-Juggler t1_ixm6vdx wrote

Because the human mind is complicated. Replacing humans in your workforce is no easy task… and they are working so goddamn hard to change that. As far back as I can remember… robots (blue collar), off-shoring (white-ish collar), touch-tone answering services, chat-bots (more white-ish collar), and click-through software (no-code)… all efforts to reduce costs and maximize profits by getting rid of the pesky human factor. I remember the first software project I was on where the goal was clear: they wanted the software to do the heavy lifting so they could take a team of ten down to three… and that was over 20 years ago. What if computers were interpersonal enough to eliminate even those last three? But thank god for the creative process! You can't replace good ol' creative thinking and artistic expression… wait… (DALL-E).

Edit: Just had another thought; I have seen on occasion arguments being made about AI "rights"… something along the lines of these AI "beings" getting some kind of "human rights protection" as the line between perceiving these AI bots and actual humans becomes increasingly blurred. My thought was on the Luddites and how they sabotaged textile machinery back in the early 19th century… I was thinking, "I am sure we will see some hacking group go after these technologies as the Luddites did back in the day." However, what if, in some weird "blame it on the tree-huggers" way, these AI bots do get some form of legal protection, making it illegal to "kill" them? Yeah, I know… the hackers would have already broken the law by the time they got to the code… but it's an interesting plot line that has already been played out in several science fiction movies. But…?

1