jharel t1_j26okrj wrote

Let me repeat my reply in a different way:

See what you said below. How is that supported by anything else you've said?

>Theoretically, you could just plug ChatGPT (or any other deep learning model) to an artificial nervous system and it would be (technically) sentient.

1

jharel t1_j26n9ni wrote

I don't see how the novelty of any of its output, or the lack thereof, has any bearing on sentience.

You can theoretically have output indistinguishable from that of a human being and still have a non-sentient system. See Searle's Chinese Room Argument.

1

jharel t1_j26m373 wrote

It's not. If you read an AI textbook, it will tell you that it isn't. Even updating a spreadsheet would count under this technical definition, but of course that isn't learning.
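To make the point concrete: "learning" in the technical ML sense is just a numeric update rule applied to stored parameters, with no experience or understanding involved. A minimal sketch (the one-weight model and `train_step` helper here are hypothetical illustrations, not anyone's actual system):

```python
def train_step(w, x, y, lr=0.1):
    """One gradient-descent update for a one-weight linear model y_hat = w * x."""
    y_hat = w * x
    grad = 2 * (y_hat - y) * x  # derivative of squared error with respect to w
    return w - lr * grad

# "Training" is nothing more than repeatedly overwriting a stored number,
# much like updating a spreadsheet cell.
w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, y=3.0)

print(round(w, 2))  # the weight converges toward 3.0
```

The model "learned" that w should be 3, but that is just arithmetic converging on a fixed point.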

Personal experience isn't a data model. Otherwise there wouldn't be any new information in the Mary thought experiment https://plato.stanford.edu/entries/qualia-knowledge/

>Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’.… What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.

1

jharel t1_j26bkf9 wrote

>Theoretically, you could just plug ChatGPT (or any other deep learning model) to an artificial nervous system and it would be (technically) sentient.

The above is a terrible line. You'd have to delete it or risk losing people right then and there.

2

jharel t1_j266y4e wrote

Try asking ChatGPT whether what it does is actually learning, and it'll tell you that it isn't:

>It is important to note that the term "learn" in the context of machine learning and artificial intelligence does not have the same meaning as the everyday usage of the word. In this context, "learn" refers specifically to the process of training a model using data, rather than to the acquisition of knowledge or understanding through personal experience.

8