shaehl t1_j7m4joz wrote

Reply to comment by alanskimp in Artificial Consciousness by alanskimp

Who cares what its purpose is? The test doesn't work. It could have any purpose, and that purpose would be irrelevant if the test doesn't achieve it. You talk as if the Turing Test is some kind of immutable law of the universe enshrined beside E=mc^2. In reality it's just a faulty mind game devised by someone who couldn't have begun to dream of the sophistication modern programming would achieve; no one takes it seriously.

For the record, though, the Turing Test was a flawed and heavily criticized method of attempting to determine whether a machine can think, developed by Alan Turing back when computers were the size of entire rooms. Its purpose is irrelevant, since the method and premise of the test are flawed and ineffective.


shaehl t1_j7m07nz wrote

Reply to comment by alanskimp in Artificial Consciousness by alanskimp

Bro, if you are going to reply, at least read what you are replying to. The Turing Test is in no way capable of determining whether anything is human or not, let alone conscious or not.


shaehl t1_j7lwsyr wrote

Reply to comment by alanskimp in Artificial Consciousness by alanskimp

The Turing Test is useless for determining whether or not something has consciousness.

Imagine that I have spent 100 years compiling responses to every possible sentence someone might say, stored all of those responses in a computer, and programmed it to give one of the prewritten responses whenever it encounters the matching sentence.

A simple if-X-then-Y computer function could then theoretically pass the Turing Test, provided the prewritten responses were done well.
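To make the point concrete, here is a toy sketch of that if-X-then-Y scheme. The canned responses are hypothetical examples I made up; the only "intelligence" is a dictionary lookup:

```python
# Toy lookup-table "chatbot": every response is prewritten.
# There is no understanding anywhere, just string matching.
RESPONSES = {
    "hello": "Hi there! How are you today?",
    "are you conscious?": "Of course I am. Aren't you?",
    "what did you dream about?": "I dreamt about an apple.",
}

def reply(sentence: str) -> str:
    # Normalize the input, then return the matching canned line,
    # or a generic fallback if the sentence isn't in the table.
    return RESPONSES.get(sentence.lower().strip(), "Interesting, tell me more.")
```

With a big enough table, an interrogator chatting with `reply` could be fooled, yet the program is nothing but stored text.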

Many of the chatbots out now can already pass the Turing Test if you get some lucky outputs. Yet in reality they are no more "conscious" than your calculator. All they are is a word document with a high-powered auto-complete function that compares your text against a database of all the text on the internet and calculates the "best" response.
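A stripped-down version of that "auto-complete" idea: pick whichever word most often follows the prompt's last word in some corpus. Real language models are vastly more sophisticated than this sketch, but the process is equally mechanical:

```python
# Toy statistical auto-complete: count which word follows which
# in a tiny corpus, then always emit the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(prompt: str) -> str:
    last = prompt.split()[-1]
    options = following.get(last)
    # The statistically "best" next word, chosen with zero understanding.
    return options.most_common(1)[0][0] if options else ""
```

Here `autocomplete("the")` returns `"cat"`, simply because "cat" follows "the" most often in the corpus, not because the program knows what a cat is.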


shaehl t1_j7lqldv wrote

Reply to comment by alanskimp in Artificial Consciousness by alanskimp

Chatbots can already do this, and it is not by any means a good indicator of consciousness. For example, a chatbot can describe or explain any concept, be it a dream, a college thesis, or anything really. But all that is happening is that we are inputting text into an algorithm and the algorithm is outputting text that has been weighted by its coding as a response.

It's basically the same thing as inputting a math equation into a calculator and getting the solution as the output, just far more sophisticated.

For consciousness equivalent to what we experience as humans you would require at the minimum the following:

  1. Continuity - Memories, desires, goals, experiences, etc. would need to be cumulative and persistent. For example, ask a chatbot about the dream it had this morning; it might tell you about an apple and maybe explain what it means metaphorically. Close the program and ask it again, and now you will get a completely different answer, or maybe it outputs that there was no dream at all. It doesn't have continuity.

  2. Agency - a conscious AI would have to be capable of acting, feeling, aspiring, thinking, etc. on its own. To use the chatbot example again, it does nothing unless prompted. Any thoughts it describes to you are simply a chain of weighted text output in response to a prompt. Again, it is like a calculator: when you aren't inputting math equations, the calculator isn't ruminating on its existence; it's doing nothing, because it is just a collection of programmed functions designed to respond to inputs.
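The continuity point can be sketched in a few lines. The class below is a made-up illustration, not any real chatbot's design: each "session" invents its dream from scratch, so nothing it says persists once the program closes:

```python
# Sketch of the continuity problem: a stateless "chatbot" whose
# supposed memory of a dream is generated fresh per session.
# Everything here is illustrative, not a real API.
import random

class StatelessBot:
    def __init__(self):
        # Nothing carries over from any previous session; the "memory"
        # is fabricated the moment the bot is instantiated.
        self._dream = random.choice(["an apple", "a falling star", "nothing at all"])

    def describe_dream(self) -> str:
        return f"I dreamt about {self._dream}."

# Two separate "sessions" about the same supposed memory:
session_one = StatelessBot().describe_dream()
session_two = StatelessBot().describe_dream()
# The answers can disagree, and neither survives the program exiting,
# which is exactly the failure of continuity described above.
```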
