
Reddituser45005 t1_j8hedlb wrote

I find the whole hallucination thing fascinating. Researchers are suggesting that LLMs exhibit a theory of mind and that they construct their own internal models in their hidden states, the space between the input and output layers. It is unlikely that machine consciousness would arrive fully developed or just turn on like a switch; even human infants take longer to develop than other primates or mammals. It would take time to develop awareness, to integrate the internal and external worlds, and to form an identity. Are these hallucinations, and the internal models LLMs build, the baby steps toward developing consciousness?
