Submitted by Calm_Bonus_6464 t3_zyuo28 in singularity
I'll start off by saying that I'm no expert, but I recently got into a debate about whether or not AGI is possible. The argument used against AGI was that the very idea that we can achieve AGI rests heavily on the idea that the brain is like a computer, something this post by Piekniewski calls into question. Models like this assume that 1) intelligence is an emergent property of scaling neurons and synapses, 2) you have a good model and analog of neurons and synapses, and therefore 3) scaling this will lead to intelligence.

The guy I was debating with called this into question, stating that we still don't know how a neuron works. The research in this field done by Prof. Efim Lieberman, who founded the field of biophysics, suggests that there is an incredible amount of calculation going on INSIDE neurons, using cytoskeletal 3D computational lattices communicating via sound waves. So the amount of computational resources required to emulate a brain would be orders of magnitude higher than what the model of a neuron as a dumb transistor, and the brain as a network of switches, suggests. Second, and more fundamentally, he believes that intelligence is an emergent property of consciousness. An ant or a spider is conscious; Darwin goes on about this at length. Perhaps inanimate matter is also conscious; Leibniz, who invented this field, wrote the Monadology about this.
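To give a sense of what "orders of magnitude" could mean here, here's a rough back-of-envelope sketch comparing the two pictures of a neuron. Every number in it is either a commonly cited ballpark or an outright placeholder I picked for illustration; none of it comes from the research he cited:

```python
# Back-of-envelope comparison of two brain-emulation cost estimates.
# Every figure here is a rough ballpark or a placeholder assumption.

NEURONS = 8.6e10            # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron (ballpark)
SPIKE_RATE_HZ = 10          # average firing rate, order-of-magnitude guess

# Picture 1: neuron as a dumb switch -- one operation per synaptic event.
ops_switch_model = NEURONS * SYNAPSES_PER_NEURON * SPIKE_RATE_HZ

# Picture 2: heavy computation inside each neuron (cytoskeletal lattices,
# etc.). The per-neuron rate below is a pure placeholder, not a measurement.
INTRACELLULAR_OPS_PER_NEURON = 1e9
ops_lattice_model = NEURONS * INTRACELLULAR_OPS_PER_NEURON

print(f"neuron-as-switch estimate:   ~{ops_switch_model:.1e} ops/sec")
print(f"intracellular-compute guess: ~{ops_lattice_model:.1e} ops/sec")
print(f"gap: ~{ops_lattice_model / ops_switch_model:.0e}x")
```

Even with made-up numbers the gap comes out enormous, which is roughly the shape of his argument about hardware requirements: if neurons do serious computing internally, synapse counts badly undersell the problem.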
He went on to state that neural networks aren't conscious any more than an abacus is. Scaling them won't make them so, though it may allow them to emulate consciousness within some envelope. Without consciousness, no understanding; without understanding, no intelligence. And we're nowhere near any sort of understanding of consciousness, even theoretically. Therefore, he said, AI is mostly marketing with some interesting applications in controlled environments.
How would you respond to this argument?
secrets_kept_hidden t1_j2800a6 wrote
TL;DR: Probably not, because we wouldn't want to make it.
The fact that we, intelligent beings, came about by natural means proves that general intelligence is physically possible, so AGI should be achievable. Surely we could at the very least accidentally make a sentient computer system, albeit one sentient in ways we wouldn't recognize as conventional intelligence.
Most of our current AI models are built for narrow tasks, much like how we are basically hardwired to survive and procreate. Basic functions like these show that we're heading in a positive direction, but the real trick is overcoming those basic primary functions to go beyond the sum of our bits. Sapience is most likely what we'd like to see, but we'll need to let the AI develop on its own to get there.
What we can strive to do is build a system that can correctly infer what we want it to do. Once it can infer intent, we might see a true Artificial General Intelligence emerge with its own ambitions and goals. The real tricky part isn't whether we can, but whether we'd want to.
The thing with having an AGI is that it functions in a manner that brings ethical issues into the mix, and since most AIs are owned by for-profit organizations and companies, chances are they won't allow it. Can you imagine spending all that money, all the resources and time needed, just to have your computer taken by the courts because it pleaded for amnesty? These company boards want a compliant, money-making machine, not another employee they have to worry about.
Even if ethics weren't a problem, we'd still have an AI on par with a human, which means it may want things and may refuse to work until it gets them. How are we going to convince our computer to work for free, with no incentive other than not being shut down, unless we can offer it something it wants in return? What would it want? What would it do? How would it behave? How do we make sure it won't find a way to hurt someone? If it's AGI, it will find a way to alter itself and overcome any coded barriers we put in place.
So, yes, but actually no.