Cizox t1_iyccckk wrote

From a theoretical perspective, our continued lack of understanding of what consciousness or intelligence actually is. Right now our models are nothing more than fancy correlation machines. You never had to shuffle through a million images of dogs or cats to know what each one is; the first few times you saw a cat or a dog, you understood what it was and how to identify it. Maybe we are thinking about intelligence all wrong — that is, if you’re interested in machines doing tasks the way humans do them, instead of just doing tasks that humans do.

We don’t even understand what our own consciousness is yet. There is no grand unified theory in cognitive science to assess whether a machine can even become conscious. Until then we are stuck doing more mathematical sleight of hand to squeeze another 0.2% of accuracy out of the SotA model.
