
Cizox t1_iyccckk wrote

From a theoretical perspective, our continued lack of understanding of what consciousness or intelligence is. Right now our models are nothing more than fancy correlation machines. You never had to shuffle through a million images of dogs and cats to know what each one is; the first few times you saw a cat or a dog, you understood what it was and how to identify it. Maybe we are thinking about intelligence all wrong, that is, if you're interested in machines doing tasks like humans do, instead of just doing the tasks that humans do.

We don’t even understand what our own consciousness is yet. There is no grand unified theory in the field of cognitive science to assess whether a machine can even become conscious. Until then we are stuck doing more mathematical optimization sleight of hand to increase the accuracy of the SotA model by 0.2%.


Suspicious-Yoghurt-9 t1_iycl2lk wrote

In my opinion that is a very good answer, not only from a philosophical point of view but also from a mathematical vantage point. I think cognitive scientists understand how far we are from human-like machines. Tech communities, however, stuck with the term "intelligence" because it is fancy and sounds nice when you want to promote your research or your product. What in the world makes a human driving a car intelligent, while a machine doing the same is considered such? There are a lot of fundamental differences between the human brain and machine "intelligence".

In the end, machine learning and AI are constrained and tethered within the realm of a mathematical framework, and you can't bypass that. And it turns out we don't even understand the mathematics of these models. For instance: we don't know what class of functions a neural network can approximate (well, we know we can approximate any continuous function in theory, but that is a very broad theorem), what kind of regularities the function should have, how these models exploit the symmetries of the problem, what principles or strategies the model pursues to solve the task, or, if we had a deep understanding of these topics, whether we could then solve the problem without learning at all. So I guess our lack of mathematical understanding of these models is the real obstacle. Understanding intelligence and cognition is important if you want to build human-like machines, but nowadays I don't think we have those; we rather have human-task-imitating machines.
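To make the universal approximation remark above concrete, here is a minimal sketch (assumed setup, not from the thread): a one-hidden-layer ReLU network with randomly drawn hidden weights, where only the output weights are fit by least squares, approximating a continuous function (sin) on an interval. The function choice, width, and interval are all illustrative assumptions.

```python
# Illustrative sketch: random-feature ReLU network approximating sin(x).
# All specifics (width=100, interval [-3, 3], target sin) are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x).ravel()

# Random hidden layer: 100 ReLU units with fixed random weights and biases.
W = rng.normal(size=(1, 100))
b = rng.normal(size=100)
H = np.maximum(x @ W + b, 0.0)  # hidden activations, shape (200, 100)

# Fit only the output weights by least squares.
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ coef

max_err = np.max(np.abs(pred - y))
print(max_err)  # small for this width, but the theorem says nothing about how wide is enough
```

The point of the comment stands even here: the theorem guarantees that some width suffices for any continuous target, but it says nothing about which functions a given architecture learns efficiently, or why.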
