hackinthebochs t1_ja6uapk wrote

Any structured data is a language in a broad sense. Tokens identify structural units, and the grammar determines how those structural units interrelate. But the grammar can be arbitrarily complex, and so it can encode deep relationships among data in any domain. This is why "language models" are so powerful in such a vast array of contexts.
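To make the idea concrete, here is a minimal sketch (my own illustration, not from the comment above): any nested record can be flattened into a linear token sequence in which special tokens mark the structural units, which is exactly the form a language model consumes. The `tokenize` function and the marker tokens (`<obj>`, `<list>`, `<key:...>`) are hypothetical names chosen for the example.

```python
def tokenize(value, tokens=None):
    """Recursively flatten structured data into a linear token stream.

    Structural boundaries become tokens too, so the 'grammar' of the
    data (nesting, ordering, key/value pairing) is preserved in the
    sequence rather than lost.
    """
    if tokens is None:
        tokens = []
    if isinstance(value, dict):
        tokens.append("<obj>")
        for key, val in value.items():
            tokens.append(f"<key:{key}>")  # key marks the role of the next unit
            tokenize(val, tokens)
        tokens.append("</obj>")
    elif isinstance(value, list):
        tokens.append("<list>")
        for item in value:
            tokenize(item, tokens)
        tokens.append("</list>")
    else:
        tokens.append(str(value))  # leaf values are ordinary content tokens
    return tokens

record = {"patient": {"age": 42, "labs": [3.1, 2.7]}}
print(tokenize(record))
# → ['<obj>', '<key:patient>', '<obj>', '<key:age>', '42',
#    '<key:labs>', '<list>', '3.1', '2.7', '</list>', '</obj>', '</obj>']
```

A sequence model trained on streams like this can, in principle, learn the domain's structural regularities the same way it learns syntax in natural text.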

1

hackinthebochs OP t1_j0aj2im wrote

I think that's a good way to think about it. If we have a reasonably accurate understanding of the work remaining, then the credence reflects his expectation of how quickly progress will proceed. The other relevant dimension is the accuracy of that understanding of how much is left to do. For example, is artificial sentience even possible at all? Is it a few technological innovations away, or very many?

2

hackinthebochs OP t1_izpq7vt wrote

The issue of how to explain consciousness is importantly different from whether an AI can be or is conscious. An explanation of consciousness will identify features of systems that determine their level of consciousness. The hard problem of consciousness places a limit on the kinds of explanations we can expect from physical dynamics alone. But some theories of consciousness allow that physical or computational systems intrinsically carry the basic properties needed to support consciousness. For example, panpsychism says that the fundamental properties that support consciousness are found in all matter, including various mechanical and computational devices. And so there is no immediate contradiction in being anti-physicalist while also believing that certain computational systems will be conscious.

8

hackinthebochs OP t1_izp7r6g wrote

We can always imagine the behavioral/functional phenomena occurring without any corresponding phenomenal consciousness. So this question can never be settled by experiment. But we can develop a theory of consciousness and observe how well the system in question exhibits the features our theory says correspond with consciousness. Barring any specific theory, we can ask in what ways the system is similar to and different from systems we know to be conscious, and whether those similarities or differences bear on the credibility of attributing consciousness to the system.

Theory is all well and good, but in the end it will have little practical significance. People tend to be quick to attribute intention or minds to inanimate objects or random occurrences. Eventually the behavior of these systems will be so similar to humans' that most people's sentience-attribution machinery will fire, and we'll be forced to confront all the moral questions we have been putting off.

13