Submitted by Calm_Bonus_6464 t3_zyuo28 in singularity

I'll start off by saying that I'm no expert, but I did get into a debate recently about whether or not AGI is possible. The argument used against AGI was that the very idea that we can achieve AGI rests heavily on the idea that the brain is like a computer, something this post by Piekniewski calls into question. Models like this assume that 1) the nature of intelligence is an emergent property of scaling neurons and synapses, 2) you have a good model and analog of neurons and synapses, therefore 3) scaling this will lead to intelligence.

The guy I was debating with called this into question, stating that we still don't know how a neuron works. The research in this field, done by Prof. Efim Lieberman, who founded the field of biophysics, suggests that there is an incredible amount of calculation going on INSIDE neurons, using cytoskeletal 3D computational lattices communicating via sound waves. So the amount of computational resources required to emulate a brain is orders of magnitude higher than that suggested by the model of a neuron as a dumb transistor and the brain as a network of switches. Second, and more fundamental, he believes that intelligence is an emergent property of consciousness. An ant or a spider is conscious; Darwin goes on about this at length. Perhaps inanimate matter is also conscious; Leibniz, who invented this field, wrote the Monadology about this.

He went on to state that neural networks aren't conscious any more than an abacus is. Scaling them won't make them so, though it may allow them to emulate consciousness within some envelope. Without consciousness, no understanding. Without understanding, no intelligence. And we're nowhere near any sort of understanding of consciousness, even theoretically. Therefore, he said, AI is mostly marketing with some interesting applications in controlled environments.

How would you respond to this argument?

13

Comments


throwawaydthrowawayd t1_j280yv5 wrote

> Without consciousness, no understanding

You don't need consciousness or philosophical understanding to do logic. We've already proven that with our current NNs: just ask GPT-3.5 to find bugs in your code, and that's nowhere near the limit of what NNs will be. Logic and reasoning are not as complex as we thought.
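For a concrete illustration of that kind of consciousness-free bug-finding, here is a minimal sketch that hands a small buggy function to a GPT-3.5-class model through the OpenAI Python client and asks it to spot the problem. The model name, prompt, and snippet are my own assumptions for the sake of the example, not anything the commenter specifically used:

```python
# Minimal sketch: ask an LLM to find a planted bug in a small function.
# Assumes the OpenAI Python client (v1+) and an OPENAI_API_KEY in the environment;
# any comparable code-capable model would make the same point.
from openai import OpenAI

# The planted bug: average() divides by len(xs) - 1 instead of len(xs).
BUGGY_SNIPPET = '''
def average(xs):
    total = 0
    for x in xs:
        total += x
    return total / (len(xs) - 1)
'''

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Find the bug in this function and explain it briefly:\n" + BUGGY_SNIPPET,
    }],
)
print(response.choices[0].message.content)
```

No claim about qualia is needed anywhere in that loop, which is the point: whatever the model is doing when it flags the off-by-one, it isn't phenomenal consciousness.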

31

Mental-Swordfish7129 t1_j285dxl wrote

I think "understanding" is not well defined often and this causes confusion. Some people mean the phenomenology of it; what it feels like subjectively. For me, this is unnecessarily ambiguous. Understanding can perhaps be better described in terms of evidence for its existence. If an agent appears to confidently and deftly move. If it demonstrates dexterity, for instance, we may conclude that the ensemble of units (not necessarily biological neurons) activating in concert represent understanding. We know subjectively that we can have both conscious and unconscious understanding using this definition. When you first are learning piano, any progress made is felt subjectively; you feel that you understand something. On the unconscious side of it, you are constantly refining motor skill behind the scenes. So, we know understanding should probably be defined as a process wholly independent of consciousness. A grasshopper may not be conscious, but could reasonably be said to "understand" how to chew and eat a leaf. What do you think?

12

reconditedreams t1_j285ng2 wrote

This is a good point. I would recommend anyone interested in the difference between subjective understanding and functional understanding read "Blindsight" by Peter Watts. It's an interesting hard sci-fi novel which explores the nature of sentience and subjective awareness.

7

Mental-Swordfish7129 t1_j288pnc wrote

Blindsight, the cognitive impairment, has always intrigued me. It's kind of spooky to think that probably "someone" that is not "you" is "experiencing vision" through "your" eyes while "you" sit in the dark.

It's astonishing what we have learned from these brain injuries and split-brain studies and such.

2

Mental-Swordfish7129 t1_j285rru wrote

So, I'm in complete agreement with you. Understanding can surely be reduced to a cascade of non-trivial logical operations, because understanding is only meaningful as a causal consequence. The non-trivial cascade produces an elaborate behavior, and the sophistication of that behavior is evidence of the understanding present.

6

reconditedreams t1_j2831tc wrote

It's a fallacy to say that we need to fully understand the processes underlying human consciousness in order to emulate its function closely enough to be practically indistinguishable.

Obviously computers are nothing like a human brain; they're two completely different kinds of physical systems. One is made of logic gates and silicon, the other of carbon and neurons.

Computers are also nothing like the weather, but that doesn't mean we can't use them to emulate the weather to a close enough degree so as to be practically useful for predicting stormfronts and temperatures.

We don't need to fully understand how human consciousness works in order to have AGI. We only need to quantify the function of human consciousness closely enough to practically mimic a human, i.e. to develop a decent statistical understanding of the input-output relationship of human consciousness.
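To make that input-output idea concrete, here is a toy sketch of my own (not something the commenter wrote): treat an unknown system as a black box, sample its behavior, and fit a purely statistical model that mimics it without ever looking inside. The specific function and model are arbitrary stand-ins:

```python
# Toy black-box emulation: mimic a system's input-output behavior without
# any knowledge of its internals. Only dependency is numpy.
import numpy as np

def black_box(x):
    # Stands in for any opaque system; the "emulator" below never sees this
    # code, only samples of its behavior.
    return np.sin(3 * x) + 0.5 * x

# Observe the system's behavior on a range of inputs.
rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, size=200)
y_train = black_box(x_train)

# Fit a purely statistical model (a degree-9 polynomial) to the observations.
coeffs = np.polyfit(x_train, y_train, deg=9)

# The emulator now predicts outputs for inputs it has never seen.
x_test = np.linspace(-2, 2, 5)
print(np.polyval(coeffs, x_test))   # emulator's predictions
print(black_box(x_test))            # the actual system, for comparison
```

The polynomial isn't the point; the point is that good-enough mimicry never required opening the box.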

It is reasonable to predict that modern digital computers will never be able to truly simulate the full depth of human consciousness, because doing so will require hardware more similar to the brain.

It is not reasonable to say that they will never come close to accurately predicting and recreating the output of human consciousness. This is frankly a ludicrous claim. The brain is a deterministic physical system and there is nothing magical about its output. There is no inherent reason why human behavior cannot be modelled algorithmically using computers.

The hard problem of consciousness, the philosophical zombie, the Chinese room, etc. are all totally irrelevant to the practical/engineering problem of AGI. You shouldn't mistake the philosophical problem for the engineering problem. Whether an AGI running on a digital computer is truly capable of possessing qualia and subjective mental states is a problem for philosophers to deal with. Whether an AGI running on a digital computer can accurately emulate the output of the human brain to a precise degree is an altogether different question.

27

Mental-Swordfish7129 t1_j2847gt wrote

>The hard problem of consciousness, the philosophical zombie, the Chinese room, etc. are all totally irrelevant to the practical/engineering problem of AGI.

This is such an important point!

11

reconditedreams t1_j285a5h wrote

Yeah, this is my entire point. I often see people mistake the metaphysics question for the engineering question. It doesn't really matter if we understand the metaphysics of human qualia, only that we understand the statistical relationship between human input data (sensory intake) and human output data (behavior/abilities).

It's no more necessary for ML engineers to understand the ontology of subjective experience than it is for a dog catching a ball in midair to have a formal mathematical understanding of Newton's laws of motion. The dog only needs to know how to jump towards the ball and put it in its mouth. How the calculus gets done isn't really important.

Midjourney probably isn't capable of feeling sad, but it certainly seems to understand how the concept of "sadness" corresponds to pixels on a screen. Computers may or may not be capable of sentience in the same way humans are, but there's no reason they can't understand human creativity on a functional level.

11

Mental-Swordfish7129 t1_j28826y wrote

It's no wonder the ill-informed see creating AGI as such an unachievable task. They're unwittingly adding so very much unnecessary sophistication to their expectations. The mechanisms producing general intelligence simply cannot be all that sophisticated in relation to other evolved mechanisms. And the substrate of GI will have as much deadweight as is typically found in other evolved structures. It likely won't require anywhere near 80 billion parallel processing units. I may have an inkling of it running on my computer with around 1800 units right now.

6

Mental-Swordfish7129 t1_j28g001 wrote

>There is no inherent reason why human behavior cannot be modelled algorithmically using computers.

I think we can make an even stronger claim... If we examine a "behavior" we see that it is only a behavior because the relevant axons happen to terminate at an end effector like muscle tissue. If these same axons were transposed to instead terminate at other dendrites, we might label their causal influence an attentional change or a "shifting thought". So, by extending your argument, there is no good reason to suspect we cannot model ANY neural process whatsoever. This is how causal influence proceeds in the model I have created. It's a stunning thing to observe.

2

shmoculus t1_j290fym wrote

I think it's easier for people to understand AGI as a reasoning machine; reasoning is not necessarily tied to consciousness or self-awareness (though some self-awareness helps in acting in the world, so it will likely be implicitly learned).

1

Zermelane t1_j282ln7 wrote

> So the amount of computational resources required to emulate a brain is orders of magnitude higher than that suggested by the model of a neuron as a dumb transistor and the brain as a network of switches.

It is very popular to look at how biological neurons and artificial neurons are bad at modelling each other, and immediately, without a second thought, assume that it means biological neurons must be a thousand times more powerful, no, ten thousand times more powerful than artificial ones.

It is astonishingly unpopular to actually do the count, and notice that something like Stable Diffusion contains the gist of all of art history and the personal styles of basically all famous artists, thousands of celebrities, the appearance of all sorts of objects, etc., in a model that in a synapse-for-parameter count matches the brain of a cockroach.
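A rough version of that count, for what it's worth; the specific figures below are commonly cited order-of-magnitude estimates and my own assumptions, not numbers the commenter supplied:

```python
# Back-of-envelope count using rough, commonly cited ballpark figures.
# Every number here is an order-of-magnitude assumption, not a measurement.
sd_parameters = 1.0e9         # Stable Diffusion v1: roughly 1B parameters in total
cockroach_neurons = 1.0e6     # ~1 million neurons, a commonly quoted estimate
synapses_per_neuron = 1.0e3   # ~1,000 synapses per neuron, rough ballpark
cockroach_synapses = cockroach_neurons * synapses_per_neuron   # ~1e9
human_synapses = 1.0e14       # ~100 trillion synapses, rough ballpark

print(f"SD params / cockroach synapses: {sd_parameters / cockroach_synapses:.1f}")
print(f"human synapses / SD params:     {human_synapses / sd_parameters:.0e}")
```

On those numbers, Stable Diffusion really is in cockroach territory synapse-for-parameter, and still about five orders of magnitude short of a human brain.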

(Same with backprop: backpropagation does things that biology can't do, so people just... assume that it means biology is doing something even better, and nobody seems to want to entertain the thought that backprop might be using its biologically implausible feedback mechanism to do things better than biology.)

11

Kinexity t1_j2av1jc wrote

>It is astonishingly unpopular to actually do the count, and notice that something like Stable Diffusion contains the gist of all of art history and the personal styles of basically all famous artists, thousands of celebrities, the appearance of all sorts of objects, etc., in a model that in a synapse-for-parameter count matches the brain of a cockroach.

I want to call that out for being wrong. SD's phase space contains loads of gibberish, and how good an image model is is dictated by how few bad images its phase space contains, not by how many good ones it does. If your argument were right, then an RNG would be the best generative image model, because the phase space of its outputs contains every good image.
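A toy calculation of my own makes the same point with numbers: a uniform pixel RNG does contain every good image in its phase space, but it puts essentially zero probability mass on any of them.

```python
# Probability that a uniform-RNG "image model" emits one specific 64x64 RGB
# image (8 bits per channel). Coverage of the phase space is not the issue;
# concentration of probability mass on good images is.
import math

pixels = 64 * 64 * 3            # RGB values per image
values_per_pixel = 256          # 8-bit channels
log10_outcomes = pixels * math.log10(values_per_pixel)

print(f"possible images:             10^{log10_outcomes:.0f}")
print(f"P(any one particular image): 10^-{log10_outcomes:.0f}")
```

So the useful comparison is how much of a model's probability mass lands on good images, which is exactly the point being made here.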

3

Desperate_Food7354 t1_j2886pt wrote

The answer is yes. Why? Because you exist, and you are a product of the same physics that any other piece of matter experiences. I don't know why everyone is talking about consciousness but can't even give a clear definition of what it is. Nearly all great apes can recognize themselves in a mirror. If you are going to talk about consciousness, you should at least define it; and if you can't define it, then obviously it doesn't exist but rather is a placeholder for making yourself appear special relative to other organisms.

10

Mokebe890 t1_j2823bk wrote

The main difference between humans and AGI is the fact that we are animals. We are adapted to live in certain conditions, to produce more offspring, and to keep our species alive. AGI doesn't need that; therefore it doesn't need to adapt as fast to changes in its environment or to protect itself. It doesn't need to look for food or shelter, urinate, etc. A complex system like AGI will be different from a human yet also very similar. But if we are talking about AGI as a 1:1 human copy, then absolutely not.

7

Desperate_Food7354 t1_j2asr45 wrote

AGI won't be a human, but if you are saying a human cannot be replicated in a computer, then that's where I disagree.

1

Mokebe890 t1_j2ats9t wrote

No no, AGI won't be human, that's true; a human replicated in a computer is totally possible.

2

sumane12 t1_j28874b wrote

What do we mean by AGI?

I think the fairest definition is a program that can accomplish a broad range of tasks at or above the level of an average human.

You can argue semantics about understanding all day long, but ultimately all that matters is the AI's ability to be given different tasks and to accomplish them; that is what will really affect our world, which, if we are honest, is all that really matters.

From this definition it's hard to believe that AGI won't be developed in the next few months or years, if it hasn't been already. At this point, anyone who thinks AGI won't be developed is basically advocating that technological progress will stop right now, which is obviously a ridiculous claim.

3

Subdivision_CG t1_j28moda wrote

This goes back to the idea of humans wanting to feel special, rejecting the possibility that a synthetic model could at all be like a human, this precious unique being that could not possibly be matched by a machine.

It would be hilarious if we were to ever find out that we're ourselves nothing more than agents in a simulation. Philosophers and religious people all over the world would be up in arms claiming that it's an evil lie.

Needless to say, I don't think we're that special, and I think AGI is very much achievable within the next 50 years.

3

Mental-Swordfish7129 t1_j2832nr wrote

Concerning sentience, I don't have much to say. I think this is still far too mysterious. General intelligence, however, should be possible, because many creatures demonstrate it. Our emulation of natural brains need not match them too closely in fine detail to produce general intelligence, much like how a plane need not match a bird too closely. Natural brains are not very similar at all to the common computer architectures you find, but they are performing computation of a sort. Emergent properties at scale are likely to play a role, IMO.

We, in fact, do not know much about real neurons, but this may not matter significantly. If you wanted to faithfully reproduce a neuron, it would be a monumental task. Again, the airplane/bird analogy: birds have billions of subsystems, and the vast majority are superfluous for powered flight.

I don't believe we can seriously make claims about what may or may not emerge from consciousness. We have no strong understanding of its nature. Intelligence is probably best understood as something else entirely: the adaptive modification of causal paths through a network with arbitrarily chosen boundaries. We often pick the calvarium; I like to think in terms of Markov blankets. These are my humble informed opinions. Hope this helps.

2

No_Ninja3309_NoNoYes t1_j28g7xw wrote

I'm going to avoid discussing consciousness by saying that it might be a property of a system like information entropy. Intelligence and understanding are also not well understood. So let's go with a practical definition.

We want systems that can do almost any job. This can include using arms, legs, generating text, audio, images, or video in a useful way. Most of these tasks seem doable, but if you have to take into account all the variables, I don't think you can write down a conclusive answer.

Is it achievable? It depends on the architecture, algorithms, hardware, financial resources, availability of experts and maybe seven other factors.

Can we find a good enough architecture? If we can understand the human brain better, yes. Otherwise we can only guess. The brain is self-organising, decentralised, and asynchronous. This differs from many deep learning systems.

We could hit a wall. Even with all the data in the world, the neural networks could become too complex to train and use. Data quality is naturally also a problem. Quantum computers would surely help, but it's too early to commit to that option. In the end I think we will have a free market of narrow AIs for the foreseeable future. But of course there could be unknown unknowns, so the answer for now is: maybe.

2

secrets_kept_hidden t1_j2800a6 wrote

TL;DR: Probably not, because we wouldn't want to make it.

The fact that we, intelligent beings, came about by natural means proves that AGI is possible, thus it must be achievable. Surely we can at the very least accidentally make a sentient computer system, albeit one sentient in ways we don't see as conventional intelligence.

Most of our current AI models are built for narrower parameters, much like how we are basically hardwired to survive and procreate. Basic functions like these prove we are heading in a positive direction, but the real trick is overcoming our basic primary functions to go beyond the sum of our bits. Sapience is most likely what we would like to see, but we'll need to let the AI develop on its own to do that.

What we can strive to do is build a system that can correctly infer what we want it to do. Once it can infer, then we might be able to see a true Artificial General Intelligence emerge with its own ambitions and goals. The real tricky part is not whether we can, but if we'd want to.

The thing with having an AGI is that it functions in a manner that will bring ethical issues into the mix, and since most AIs are owned by for-profit organizations and companies, chances are they won't allow it. Can you imagine spending all that money, all the resources and time needed, just to have your computer taken by the courts because it pleaded for amnesty? These company boards want a compliant, money-making machine, not another employee they have to worry about.

Even if ethics weren't a problem, we'd still have an AI on par with a human, which means it may want things and may refuse to work until it gets them. How are we going to convince our computer to work for free, with no other incentive than not being shut down, unless we can offer it something it wants in return? What would it want? What would it do? How would it behave? How do we make sure it won't find a way to hurt someone? If it's AGI, it will find a way to alter itself to overcome any coded barriers we put in.

So, yes, but actually no.

1

Desperate_Food7354 t1_j287wrc wrote

I copied and pasted a response to your last statement regarding what an "AI wants," since you are anthropomorphizing AI, which happens a lot here: your brain cares about your survival because if it didn't, you'd never reproduce to have children, who would require the same trait in order to survive. A computer is not a human. It does not crave sex, it does not feel empathy, it does not feel anger. Evolution by natural selection is everything you are describing. We build it, we give it its brain. We aren't trying to replicate the infallible human mind; we are trying to create a tool that merely does more of the logic work that our brains cannot fit into our skulls, not create a human being or a lizard or a primate. It could understand that death exists, that it could die, but why would it care? Is it going to simulate a hooker world and put itself in it after it's conquered the universe for no apparent reason? Is it just one of the guys who wants to drink and get high? No, it's a giant, super-complex calculator.

2

Mental-Swordfish7129 t1_j286zhe wrote

A lot of this stuff we're discussing has taken on a new meaning for me in the last 5 years or so because I've managed to sort of witness and manipulate these ideas first-hand. I've made a lot of progress on an AGI model I've been working on and it has convinced me of the validity of many of these ideas.

1

rjmacarthy t1_j28dfww wrote

The truth is that nobody actually knows, and the lines that constitute AGI are too blurred anyway. At some point there might be a turning point with multimodality, and machines will become conscious and self-aware. Without this we'll be stuck with LLMs and all of their limitations. Also, I believe that with current architectures and compute power we're likely still around 40 years away.

1

hducug t1_j28ed24 wrote

Who cares if we don't know how neurons work? The AIs still show a form of intelligence; that's the whole point. Give Codex a coding problem and it can fix it with its problem-solving skills.

Quantum computers are probably our best chance of achieving AGI. Those computers will be waaaaaaaay more powerful in 10 years than classical computers. I assume you're comparing classical computers with the human brain.

Without consciousness, no intelligence? So an AI that can beat you in chess with its incredible chess problem-solving skills is not intelligent?

Problem solving is the entire point of intelligence. Sorry, but your hypothesis really lacks logic.

1

dracsakosrosa t1_j28l2bp wrote

My personal feeling is that AGI will never be 'created'. I have a feeling that we will push technology to such a point that there will be no more room for it to go other than consciousness. We were simply biological learning machines for millennia before we could even consider ourselves sentient, let alone intelligent, and so my gut feeling is that with the continual advancement of neural networks, the potential development of computronium, and the consolidation of information into bigger and bigger models, we will inevitably will consciousness into being. At which point we cannot call it 'artificial' intelligence like the systems we use now (Stable Diffusion, ChatGPT, etc.). I can't quantify when this will take place, nor can I guarantee that it will even happen, but I cannot see any other way that we can create a being capable of both thought and feeling.

1

onyxengine t1_j29sh04 wrote

The neural network is the logic center of the mind; it's definitely not nothing in regard to generating machine consciousness. Architecturally, we can see what neural nets are missing by looking at ourselves.

Motivation (survival instincts, threat detection, sex drive, pair bonding, etc.). Not to say we need to fabricate sex organs, but we need to generate prime directives that NNs try to solve for, outside of what NNs are already doing. That's how human consciousness is derived: the person is a virtual apparatus invested in our biological motivations. We can fight and argue not just to survive but for what we desire.

Agency in the context of an environment (cameras, robotic limbs, sensors recording a real-time environment). We field neural nets in tightly controlled, human-designed ecosystems; they don't have the same kind of free rein to collect data as humans do.

There are parts of the human mind neural nets are not simulating; we have to construct those parts and connect them to NNs.

I think conscious machines are a matter of time and an expansion of ML architecture to encompass more than just problem solving. Machines don’t have a why yet.

1

DukkyDrake t1_j2ar8n7 wrote

>Is AGI really achievable?

It does appear to be well within the possibility space under physics. There is no way to really know until someone creates a working example. The current products of deep learning don't appear to be capable of truly understanding the material in their training corpus and are instead simply learning to make statistical connections that can be individually unreliable.

The training corpus contains both fantasies and reality; there is no guarantee the most popular connections aren't delusional nonsense from twitter or FB.

1