
monsieurpooh t1_j35wbej wrote

It's probably going to devolve into a semantics debate.

ChatGPT's model weights (its "neurons") stay the same until they retrain it and release a new version.

But you feed it back its own output plus the next prompt, so it has extra context about the ongoing conversation.

For now I would have to say it shouldn't be described as "reflecting on its own thinking", since each turn is independent of the others and the model is simply predicting what would plausibly appear next in text. For example, the transcript could just as well be an interview in a magazine.
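The loop described above can be sketched in a few lines. This is only an illustration of the idea, not any real API: `fake_model` is a hypothetical stand-in for the language model, which on each call sees nothing but the single flat prompt it is handed.

```python
# Sketch of the "stateless turns + re-fed context" pattern described above.
# fake_model is a hypothetical stand-in: it keeps no state between calls.

def fake_model(prompt: str) -> str:
    # Stand-in: "responds" by reporting how much context it was given.
    return f"[reply based on {len(prompt)} chars of context]"

def chat_turn(history: list, user_message: str) -> str:
    # Each turn, the ENTIRE transcript so far is re-sent as one prompt;
    # the model itself remembers nothing between calls.
    history.append("User: " + user_message)
    prompt = "\n".join(history)
    reply = fake_model(prompt)
    history.append("Assistant: " + reply)
    return reply

history = []
chat_turn(history, "Hello")
chat_turn(history, "What did I just say?")
# The second turn only "remembers" the first because the earlier
# exchange was replayed inside its prompt.
```

The model appears to have memory, but the memory lives entirely in the growing prompt, not in the weights.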

That being said... I'm a big fan of the saying that AI doesn't need human-brain-style thinking to achieve a working imitation of human-level intelligence, just like the airplane is an example of flying without imitating the bird.


LarsPensjo t1_j35yrqx wrote

> That being said... I'm a big fan of the saying that AI doesn't need human-brain-style thinking to achieve a working imitation of human-level intelligence, just like the airplane is an example of flying without imitating the bird.

I definitely agree. IMO, you see a lot of "AI is not true intelligence", which doesn't really matter.

Eliezer Yudkowsky had an interesting observation:

> Words aren't thoughts, they're log files generated by thoughts.

I believe he meant the written word.
