
RathSauce t1_jak781t wrote

>So, apologies if you find these answers wanting or unsatisfying, but until there is a testable and consistent definition of consciousness, there is no way to improve them.

That is the full quote. What experiment do you propose to prove that the statement you provided is the correct, and only, definition of consciousness? If it cannot be proven experimentally, it is not a definition; it is just your belief.

If the statement cannot be proven, then people need to stop stating that consciousness has arisen in a computer program. If there is no method to prove/disprove your statement in an external system, it cannot be a definition, a fact, or even a hypothesis.


RathSauce t1_jajjwtu wrote

I'll say up top: there is no way to answer anything you have put forth regarding consciousness until there is a definition of consciousness. So, apologies if you find these answers wanting or unsatisfying, but until there is a testable and consistent definition of consciousness, there is no way to improve them.

> isn't it possible the AIs we end up creating may have a much different, "unnatural" type of consciousness?

Sure, but we aren't discussing the future or AGI; we are discussing LLMs. My comment has nothing to do with AGI, but yes, that is a possibility in the future.

> How do we know there isn't a "burst" of consciousness whenever ChatGPT (or its more advanced future offspring) answers a question?

Because that isn't how feed-forward deep neural networks function, regardless of the base operation (transformer, convolution, recurrent cell, etc.). We are optimizing parameters with statistical methods so that the outputs closely match the ground truth. ChatGPT is, broadly, trained to align well with a human; the fact that it sounds like a human shouldn't be surprising, nor should it convince anyone of consciousness.
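To make that concrete, here is a minimal sketch of what "training" actually is (toy data, a tiny PyTorch network, nothing to do with ChatGPT's real training code): the whole procedure is "nudge parameters until the outputs match the ground truth," and there is nowhere in that loop for a burst of anything to live.

```python
import torch
import torch.nn as nn

# Toy feed-forward network and made-up data (illustrative only).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 16)            # input features
y = torch.randint(0, 4, (32,))     # ground-truth labels

for step in range(100):
    logits = model(x)              # forward pass: input -> output
    loss = loss_fn(logits, y)      # how far the outputs are from the ground truth
    optimizer.zero_grad()
    loss.backward()                # gradients of the loss w.r.t. the parameters
    optimizer.step()               # adjust parameters to reduce the loss
```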

Addressing a "burst of consciousness": why has this conversation never extended to other large neural networks in other domains? There are plenty of advanced deep neural networks for many problems - take ViTs (vision transformers) for image segmentation. ViT models can exceed a billion parameters, and yet not a single person has ever proposed that ViTs are conscious. Why is that? Likely because it is harder to anthropomorphize the end product of a ViT (a segmented image) than the output of a chatbot (a string of characters). If someone is convinced that ChatGPT is conscious, that is their prerogative, but to be self-consistent they should also consider every neural network of comparable capacity to be conscious.
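For what it's worth, here is a toy sketch (made-up layer sizes and random inputs, not a real ViT or GPT checkpoint) of why the two cases are structurally identical: both are a forward pass from an input tensor to an output tensor, and the only difference is whether you decode the result into a segmentation mask or into text.

```python
import torch

# Illustrative "heads" only; the sizes are placeholders, not real model configs.
vision_head   = torch.nn.Linear(768, 21)      # 21 segmentation classes (illustrative)
language_head = torch.nn.Linear(768, 50257)   # GPT-2-sized vocabulary (illustrative)

patches = torch.randn(1, 196, 768)   # stand-in for patch embeddings of one image
tokens  = torch.randn(1, 32, 768)    # stand-in for hidden states of 32 prompt tokens

seg_logits  = vision_head(patches)   # (1, 196, 21)   -> per-patch class scores
text_logits = language_head(tokens)  # (1, 32, 50257) -> per-position token scores

# The only real difference is decoding: argmax to a coloured mask vs. argmax to
# vocabulary indices that happen to render as English text.
print(seg_logits.argmax(-1).shape, text_logits.argmax(-1).shape)
```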

> Even if we make AIs that closely imitate the human brain in silicon and can imagine, perceive, plan, dream, etc, theoretically we could just pause their state similarly to how ChatGPT pauses when not responding to a query. It's analogous to putting someone under anesthesia.

Even under anesthesia, all animals produce meaningful neural signals. Pausing ChatGPT between queries is not analogous to putting a human under anesthesia.


RathSauce t1_jaj9ml5 wrote

Because we can put a human in an environment with zero external visual and auditory stimuli and still collect an EEG or fMRI signal that is dynamic in time and shows some degree of natural evolution. That signal might describe an incredibly frightened person, but all animals are capable of computation when deprived of visual, auditory, olfactory, and other input.

No LLM is capable of producing a signal in the absence of a very specific input; that fact alone differentiates all animals from all LLMs. It is insanity to sit around and pretend we are nothing more than chatbots because there exists a statistical method that can imitate how humans type.
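A rough sketch of what I mean, using the Hugging Face `transformers` API with GPT-2 as a stand-in (any LLM behaves the same way): the model's activations exist only for the duration of a forward pass on a concrete input, and between calls there is nothing evolving that you could record the way you record an EEG.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Hello", return_tensors="pt")   # no input tokens, no computation
with torch.no_grad():
    out = model(**inputs)                    # activations exist only during this call
print(out.logits.shape)                      # (1, num_tokens, vocab_size)

# Between calls the model is inert parameters on disk or in memory; there is
# no ongoing, self-generated signal analogous to spontaneous brain activity.
```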
