
Purplekeyboard t1_j0aayw0 wrote

One thing that impresses me about GPT-3 (the best of the language models I've been able to use) is that it can synthesize what it knows about the world to produce conclusions that aren't in its training material.

I've used a chat bot prompt (and now ChatGPT) to have a conversation with GPT-3 regarding whether it is dangerous for a person to be upstairs in a house if there is a great white shark in the basement. GPT-3, speaking as a chat partner, told me that it is not dangerous because sharks can't climb stairs.
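For anyone curious, this is roughly what that kind of chat-style prompt looks like against the API. It's a minimal sketch using the pre-1.0 `openai` Python library; the model name, parameters, and prompt wording here are my assumptions for illustration, not a record of my exact setup:

```python
# Sketch of a chat-style completion prompt for GPT-3, using the
# pre-1.0 openai Python library. Model name and parameters are
# illustrative assumptions, not the author's exact configuration.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

prompt = (
    "The following is a conversation with a helpful AI assistant.\n\n"
    "Human: Is it dangerous for me to be upstairs in my house "
    "if there is a great white shark in the basement?\n"
    "AI:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model available at the time
    prompt=prompt,
    max_tokens=150,
    temperature=0.7,
    stop=["Human:"],  # stop before the model writes the next human turn
)

print(response["choices"][0]["text"].strip())
```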

ChatGPT insisted that it was highly unlikely that a great white shark would be in a basement, and after I asked it what would happen if someone filled the basement with water and put the shark there, it once again said that sharks lack the ability to move from the basement of a house to the upstairs.

This is not information that is in its training material; there are no conversations on the internet, or anywhere else, about sharks being in basements or being unable to climb stairs. This is a novel situation, one that has likely never been discussed anywhere before, and GPT-3 can take what it does know about sharks and use it to conclude that I am safe upstairs in my house from the shark in the basement.

So we've managed to create intelligence (text world intelligence) without awareness.
