Viewing a single comment thread. View all comments

Olive2887 t1_j6n1sji wrote

It's nonsense I'm sorry. Consciousness and complex behaviour have no relationship whatsoever, and designing machines to do sequences of simple things with complex purposes has zero relationship to the evolved nature of consciousness in humans.


AUFunmacy OP t1_j6nb810 wrote

Who said we were designing machines to do sequences of simple things? Complex neuronal activity is the leading biological explanation for what creates the subjective experience we call consciousness. AI is constructed in a way that resembles how our neurons communicate - there is very little abstraction in that sense. I challenge you to tell me why that is absolute nonsense.

I find it purely logical to discuss these things; you will find nowhere in the post that I claim to know anything or to believe any one thing.


PsiVolt t1_j6nd9mo wrote

I can assure you that the neuron model used for machine learning is highly abstracted from what our real brain cells do. The main similarity is the interconnected nature of many points of data. We don't really know exactly how our brains do it, but it makes a good comparison for AI models. All the machine is doing is learning patterns and replicating them - albeit in complex and novel ways, but not in such a way that it could be considered conscious. Even if it theoretically passed a Turing test, it would still just be metal mimicking human speech. Lots of media has taken this idea to the extreme, but it's all fictional and written by non-tech people.
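To make the "highly abstracted" point concrete, here is roughly what a single artificial neuron amounts to in most machine learning models: a weighted sum pushed through a squashing function. This is a generic sketch (the weights and inputs are made-up numbers), not the model from any particular library:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The entire 'neuron' used in typical neural networks: a weighted
    sum of inputs plus a bias, passed through a sigmoid activation.
    No ion channels, neurotransmitters, spike timing, or dendritic
    computation - hence 'highly abstracted' from biology."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative values only.
output = artificial_neuron([0.5, 0.2], [0.4, -0.6], 0.1)
print(output)  # a single number between 0 and 1
```

Everything a large network does is compositions of this one operation, which is why "interconnected points of data" is the honest extent of the brain analogy.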

As someone else said, most of this "AI will gain consciousness and replace humans" scare comes from people with a severe lack of understanding of the fundamental technologies.


AUFunmacy OP t1_j6njsgh wrote

As a neuroscience major who is currently in medical school and someone with machine learning experience (albeit not as much as you) - I respectfully disagree.

Let's assume we have a neural network with 2 hidden layers, structured like this: FL: n=400, F-HL: n=120, S-HL: n=30, OL: n=10. The number of neural connections in this network is 400*120 + 120*30 + 30*10 = 51,900. This neural network could already do some impressive things if trained properly. I read somewhere that GPT-3 (the recent, very similar predecessor to ChatGPT, which is only slightly optimised for "chat") uses around 175 billion neuronal connections, and GPT-4 will reportedly use 100 trillion.
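The connection count for a fully connected network like the one described is just the sum of products of adjacent layer sizes, which is easy to check:

```python
# Layer sizes from the example above:
# input 400 -> hidden 120 -> hidden 30 -> output 10.
layers = [400, 120, 30, 10]

# Each neuron in one layer connects to every neuron in the next,
# so connections between adjacent layers = (size_a * size_b).
connections = sum(a * b for a, b in zip(layers, layers[1:]))
print(connections)  # 51900
```

(This counts weights only; adding one bias per non-input neuron would add 160 more parameters.)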

Now, the human brain also has around 100 trillion neuronal connections, and not even close to all of them are used for thought, perception or experience - "conscious experiences". I know that counting neuronal connections is a poor way to measure a neural network's performance, but I just wanted a way to compare where we are with AI relative to the brain. So we are not yet at the stage where you would even theorise that AI could pass a Turing test - but when we increase the number of connections these neurons can communicate with by 500 times, you approach, and I think surpass, human intelligence. At that point, an AI will probably do any intellectual task better.

I simply think you are naive if you think AI won't replace humans in a number of industries, in a number of different ways and to a large extent. Whether Artificial Intelligence will gain consciousness is a question you should ask yourself as an observer of the Earth, where single-celled organisms evolved into complex and intelligent life. At what point did humans - or, if we weren't the first, our ancestor species - gain consciousness? The leading biological theory is that consciousness is a phenomenon that arises from highly complex brain activity and is merely a perception. So who is to say that AI will not evolve the same consciousness that we did? It doesn't mean they aren't bound by their programming, just as we are always bound by physics, but maybe they will have a subjectively conscious experience.


Edit: I will note that I have left out a lot of important neuroanatomy that would be essential to explaining the difference between a neural network in an AI and a brain. But the take-home message is that the machine learning comparison is not far-fetched whatsoever. It is important to drive home, though, that software cannot come close to the physical anatomy of neuroscience.


RanyaAnusih t1_j6nlgk7 wrote

Only an understanding of quantum theory has any hope of explaining consciousness. Complexity in networks most likely will not solve the issue.

Life is taking advantage of quantum processes at a fundamental level.

The current model of neuroscience is also misleading; some kind of enactivism must be considered.


bildramer t1_j6okziq wrote

"Complex neuronal activity" is not an explanation, it's basically a restatement of what generates consciousness in us, i.e. you can have complex neuronal activity without consciousness, but not vice versa, unless you do equivalent computations in some other substrate. The specific computations you have to do are unknown to us, but we have some broad hints and directions to look.


AUFunmacy OP t1_j6ophx4 wrote

I’m sorry, but if you think you’re going to persuade me that I’m wrong with this pseudo-intellectual jargon, you need to rethink your approach. All you’ve said is that consciousness cannot occur without complex neuronal activity but not vice versa, which I did not imply to be false anyway. The rest of your speech was some weird trip you and a thesaurus had together.

Either that, or you used an AI to write your comment, which I suspect since you said, “but we have some broad hints and directions to look”; without something leading up to it, that is just such a non-sequitur thing to say.


ExceptEuropa1 t1_j6orzge wrote

AI has many different approaches, and it's not fair to say that it is somehow based on, or that it replicates human cognition. There is so, so much beyond neural networks. Edit: typo.


AUFunmacy OP t1_j6otxi7 wrote

Yes, as a programmer with experience in machine learning, I know there are different approaches; however, ChatGPT uses a parameterised, deep-learning (neural network) approach. And it certainly closely imitates how central nervous system neurons communicate, in the brain specifically (I’m in med school as a neuroscience major). That isn’t to say that just because AI imitates human neuronal activity, it has the same properties - it doesn’t.

We should discuss, instead of you making vague rebuttals that provide zero evidence and zero explanation.


ExceptEuropa1 t1_j6ozqbc wrote

Rebuttals? You're mistaken, my friend. I simply pointed out that your statement was unfair.

Now, your response was again self-congratulatory. I have completed superior degrees to yours, but I haven't yet dropped them here. Look, if it's true that you knew AI has different approaches, then you simply misspoke. You said something wrong. Period. Own up to it and don't get all offended. Gee...

What the hell are you talking about when you say something about evidence or explanation? I corrected you. What else did you want? A book reference? Any book on AI will show how incorrect your statement was. Open one to a random page, and you will see.


AUFunmacy OP t1_j6pfm5y wrote


Please tell me which degrees you have completed, mate. It’s not self-congratulatory; it’s providing credibility to back up the statements I make. What is self-congratulatory is you saying, “I have completed superior degrees to yours”.

Show me my mistake? I am so confused about what you are hung up on. Where did I claim neural networks were the only approach?

In general, the instigator of a debate is required to present their argument; you have no argument if you provide no evidence. You haven’t shown me what you are talking about, and I don’t believe you have “superior degrees” either. Get over yourself, mate 😅


bortlip t1_j6oatpx wrote

"Consciousness and complex behaviour have no relationship whatsoever" *

* citation needed


Nervous_Recursion t1_j6nhonk wrote

I don't agree with the article (in either its form or its content), so this comment is not to defend it; I will also say that it's not making much sense and is disorganized.

But your comment is also incorrect. "No relationship whatsoever" is a strong claim, and nothing has been shown one way or the other. There are valid paths of inquiry trying to understand consciousness in the light of control theory / cybernetics, which is all about complexity.

While IIT is far too naive and has already been shown incorrect, I think there is a nugget of sense to take from it about partitioning the network and measuring the information entropy in each part. What it lacks, in my opinion, is that not only should both partitions have a degree of Shannon entropy, they should also show tangled hierarchies[1]. I think consciousness is one part of the network building symbolic representations of the states of the other part, while at the same time being transformed in its structure (which seems to be how memory works). Having an interpreter running, itself modified by its input but also issuing orders, is a tangled hierarchy.
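For what "measuring the information entropy in each partition" means concretely, here is a toy sketch. The bit-string "states" below are made-up stand-ins for network activity, and this is the plain Shannon formula, not any specific IIT measure:

```python
import math
from collections import Counter

def shannon_entropy(states):
    """Shannon entropy in bits of an observed sequence of states:
    H = -sum(p * log2(p)) over the empirical state frequencies."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy partition of a network's activity into two halves.
part_a = ["00", "01", "10", "11"]  # varied activity -> high entropy
part_b = ["00", "00", "00", "00"]  # frozen activity -> zero entropy
print(shannon_entropy(part_a))  # 2.0 bits
print(abs(shannon_entropy(part_b)))  # 0.0 bits
```

The point of the comment above is that a nonzero entropy in each partition would be necessary but not sufficient; the partitions would additionally have to exhibit the tangled, self-modifying hierarchy described.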

There is nothing proven at all, of course; it is all personal opinion. But I consider it a much better direction than some other current theories and a more realistic description of how such a process could be organized. And in that sense, while causality is definitely not decided, it is absolutely possible either that such a level of complexity is necessary for complex behaviour, or that complex behaviour will mechanically create this organization.

Of course, designing simple machines for complex purposes is not the point. But designing simple computations to generate complex behaviour might well be tightly coupled with how consciousness evolved in humans (and other thinking animals).

[1]: While this paper goes against the idea, it does not contradict it. Nenu says that Hofstadter didn't prove anything, which is correct; that doesn't mean the idea is shown incorrect or even less likely. The paper is still useful, though, for contextualizing and trying to formalize the idea.