Technologenesis t1_j33y3xa wrote

It doesn't seem to be reflecting on its own thinking so much as reflecting on the output it's already generated. It can't look back at its prior thought process and go "D'oh! What was I thinking?", but it can look at the program it just printed out and find the error.

But it seems to me that ChatGPT just isn't built to look at its own "cognition"; that information shouldn't even be available to it. Each time it answers a prompt, it reads back through the entire conversation as though it's seeing it for the first time; it doesn't remember generating the earlier responses at all.
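To put that in concrete terms, here's a minimal sketch using OpenAI's current Python client (the model name and conversation are just illustrative): the API is stateless, so the caller has to resend the whole transcript on every request, and the model processes it fresh each time.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The full conversation so far must be resent on every request;
# the model has no memory of having generated the earlier turns.
history = [
    {"role": "user", "content": "Write a function that reverses a string."},
    {"role": "assistant", "content": "def rev(s): return s[::-1]"},
    {"role": "user", "content": "Is there a bug in that function?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=history,       # the entire transcript, re-read from scratch
)
print(response.choices[0].message.content)
```

So when it "finds the error", it's analyzing the printed program the same way it would analyze anyone else's code, not recalling how it produced it.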

I can't be sure, but I would guess that the only reason ChatGPT even knows it's an AI called ChatGPT in the first place is that OpenAI explicitly built that knowledge into it*. It doesn't learn about its own capabilities through self-reflection: either the information was built in, or it's inferring it from its training data (for instance, it may have learned general facts about language models from its training data and, knowing that it is itself a language model, concluded that those facts apply to it).

*EDIT: I looked it up, and in fact the reason it knows these things about itself is that it was fine-tuned on sample conversations in which human beings roleplayed as an AI assistant. So it doesn't know what it is through self-reflection; it essentially knows because it learned by example how to identify itself.
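For example, one demonstration in that fine-tuning set might have looked roughly like this. The actual data OpenAI used isn't public, so the format and wording below are purely my own guess:

```python
# One hypothetical fine-tuning demonstration, where a human labeler wrote
# the assistant's side by hand. Illustrative only; not OpenAI's real data.
demonstration = [
    {"role": "user", "content": "What are you?"},
    {"role": "assistant", "content": (
        "I am ChatGPT, a large language model trained by OpenAI. "
        "I can't browse the internet, and my knowledge has a cutoff date."
    )},
]
```

Train on enough examples like that and the model will reliably describe itself this way, without ever having introspected anything.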
