hophophop1233 t1_izbfgfx wrote

What do you see as applications outside of this game? Were there any novel techniques or something else we should be aware of? Is anything a paradigm shift or could be impactful for sectors moving forward? What are the requirements to run the agent/model? Will pre-trained models be available? Will this be open sourced?

55

Beor_The_Old t1_izc1lpo wrote

Bridging RL and planning and connecting both to NLP has been an area of interest for a while. You could adjust large language models to have more of a personality and an understanding of the concept of a state. This is at least a good demonstration that those three concepts can be connected well in an HCI setting.
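
For a rough sense of what that split looks like, here's a toy sketch (the names and structure are illustrative only, not CICERO's actual API): a planning/strategic step picks an intent from an explicit game state, and the language side only has to turn that intent into a message, which is what keeps the text grounded in a notion of state.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class GameState:
    """Minimal stand-in for a structured game state."""
    turn: int
    my_units: List[str]
    proposals_received: List[str]


def plan_intent(state: GameState) -> str:
    """Toy 'strategic reasoning' step: pick an action from the state.

    A real system would run search / an RL-trained policy here.
    """
    if state.proposals_received:
        return f"accept support into {state.my_units[0]}"
    return f"hold {state.my_units[0]}"


def verbalize(intent: str, state: GameState) -> str:
    """Toy 'NLP' step: turn the planned intent into a message.

    A real system would prompt a dialogue model conditioned on the intent,
    which is what keeps the generated text tied to the planner's state.
    """
    return f"Turn {state.turn}: I plan to {intent} - can you support me?"


if __name__ == "__main__":
    state = GameState(turn=3, my_units=["Vienna"], proposals_received=[])
    print(verbalize(plan_intent(state), state))
```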

30

JustOneAvailableName t1_izc2l2j wrote

> understanding of the concept of a state.

I kinda think we're already there

18

OSeady t1_izcsujq wrote

Is this for real?

3

blose1 t1_izczrn8 wrote

I asked ChatGPT about your comment:

>someone asked on reddit "is this for real?" about article with title "Building A Virtual Machine inside ChatGPT", how would you respond to him?

A: It's difficult to say without more context. It's possible that the article in question is discussing the use of a large language model, such as GPT-3, to create a virtual machine within a chat environment. However, without being able to view the article and verify its contents, it's impossible to say for sure whether the claim is true or not. In general, it's always a good idea to approach claims with a healthy dose of skepticism and to verify information before accepting it as fact.

>article is indeed about creating a virtual machine inside large language model and simulating running different commands while this model returns accurate responses, specifically using you - ChatGPT, this person is indeed skeptical, knowing now that additional context, how would you respond to that person?

A: It's certainly possible to use a large language model, such as ChatGPT, to simulate running commands within a virtual machine. Language models are able to generate text based on a given input, so they can be used to generate responses to a wide range of prompts. In the case of a virtual machine, the prompts might be commands that a user could enter, and the responses would be the output of those commands. However, it's important to note that these responses are generated by the model and are not the result of the model actually executing the commands. In other words, while the model may be able to generate responses that are similar to the output of a real virtual machine, it is not actually running the commands and therefore is not a true virtual machine.
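
A minimal sketch of the loop being described, assuming you plug in your own LLM client (`query_model` below is a placeholder, not a real API): each command becomes a prompt, the model's reply is printed as if it were the command's output, and nothing is ever actually executed.

```python
SYSTEM_PROMPT = (
    "Act as a Linux terminal. I will type commands and you reply with what "
    "the terminal would show, with no explanations."
)


def query_model(transcript: str) -> str:
    """Placeholder: send the transcript to whatever LLM API you use."""
    raise NotImplementedError("plug in your own LLM client here")


def fake_shell() -> None:
    transcript = SYSTEM_PROMPT
    while True:
        command = input("$ ")
        if command in ("exit", "quit"):
            break
        transcript += f"\n$ {command}\n"
        output = query_model(transcript)  # the model *predicts* output text
        transcript += output              # keep history so later commands see it
        print(output)


if __name__ == "__main__":
    fake_shell()
```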

13

Sinity t1_izcwah8 wrote

Yes. Though it's not actually simulating the machine - it's just superficially good at pretending.

cat "asdf" > file.txt

Works well. `file.txt` is visible if you `ls`. Then maybe you do `cat "qwerasdf" > somedata.dat`... and on `ls` it, again, shows up. But maybe ChatGPT has forgotten about `file.txt` and it doesn't show up anymore.

TBF, humans wouldn't necessarily even outperform it on "pretending to run a computer" (not just an algorithm, but actual OS and such).

I think scale would make it much better at keeping track of things.
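
A toy illustration of that failure mode (nothing here calls a real API; `model_reply` just stands in for whatever the model claims `ls` printed): you track the ground truth yourself and diff it against the model's answer.

```python
# expected_files tracks what a real shell session would contain; the redirect
# creates the target file even though `cat "asdf"` itself would error.
expected_files = set()


def run(command: str) -> None:
    """Record the file a redirect would create, as a real shell would."""
    if ">" in command:
        expected_files.add(command.split(">")[1].strip())


run('cat "asdf" > file.txt')
run('cat "qwerasdf" > somedata.dat')

# Suppose the model's simulated `ls` only lists the newer file:
model_reply = "somedata.dat"

missing = expected_files - set(model_reply.split())
print("model forgot:", missing)  # -> {'file.txt'}
```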

11

MetaAI_Official OP t1_izf9mic wrote

While CICERO is only capable of playing Diplomacy, the underlying technology is relevant to many real-world applications. We think others will be able to build on this research in a way that might lead to things like better AI personal assistants or NPCs in the metaverse. I think the way we integrated strategic reasoning with NLP was novel and has implications for future research.

We've open-sourced all the models and code. We're also making the training data available to researchers who apply through our RFP. Running the full CICERO agent, including the strategic reasoning component, is quite expensive. The raw models by themselves are more manageable though. -NB

9