Submitted by ReExperienceUrSenses t3_10s52md in Futurology

TL;DR: (and it's long)

What I am trying to argue here is that “intelligence” is complex enough to be inseparable from the physical processes that give rise to it, and, if that is not convincing, that the “computing” power necessary to mimic it is unobtainable with any of the machinery we have created to date, with nothing on the horizon. Anything from the future that would match up could not even be called a computer at that point, because the inner workings would have to be radically different. Also some criticisms of Large Language Models and neural networks in general. They don't work the way people seem to think.

I make this post not because I’m trying to get into a “debate” where I try to beat everyone's opinion down or to be a doom n' gloom downer, but because I'm hoping for a discussion to work through my thoughts and maybe yours. I have been mulling over some problems with the field of Artificial Intelligence and in the process I have found myself convinced that it’s never going to happen.

So I present some questions and ideas to people who still believe the hype and those who may not be into the current hype but still believe it will happen eventually. I want to refine my thinking and see if there are holes in my reasoning because of anything I have missed. I’m perfectly willing to change my mind, I just need a convincing argument and some good evidence.

So with that out of the way we’ll start with this:

Nobody would ever say that a simulated star gives us a nuclear fusion reactor, yet we assume a simulated or emulated brain will give us a mind? Why? I know many of you are itching to trot out “we don’t flap wings to make planes fly! Do submarines SWIM?” but there is a massive flaw in this reasoning. We’ve worked out the principles that govern flight and underwater traversal, so we can create alternative methods towards these ends. We have NOT worked out the fundamental principles necessary to create intelligence/cognition/perception by any other means, all we're working with is what it feels like to think, which is very subjective. Neural networks are also not a simulation of neurons in any sense, neither replicating any of their “base” functionality in an abstract form nor trying to accurately model their attributes.

The limits of the current paradigm, and any future one, come from what I think is a fundamental misunderstanding of "the symbol grounding problem", or rather, what has to be dealt with in order to overcome the grounding problem. Without solving this, they will not have any generalized reasoning ability or common sense. Language models give us the illusion that we can solve this with words, and I think I can articulate why this is not the case. Word association is not enough.

How are our minds “grounded?” How do you define the meaning of the words we use? How do you define what anything actually IS? Words and definitions of words are meaningless symbols without us to interpret them. Definitions of words can be created endlessly, because the words within those definitions also need to be defined. You are stuck in an unending recursive loop, as there is no base case, only more arbitrary symbols. You can scale to infinite parameters for these “neural” networks and it will not matter. Imagine trying to make sense of a word cloud written in a foreign language that does not use your alphabet. The base case, the MEANING, comes from visceral experience. So what are the fundamental things that make up an experience of reality, making a common sense understanding of things like cause and effect possible?
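Here is a toy sketch of that regress, just to show the shape of the problem (the mini-dictionary below is entirely made up for illustration):

```python
# Toy "dictionary" in which every word is defined only by other words.
# The entries are invented; only the shape of the problem matters.
definitions = {
    "red":      ["the", "color", "of", "blood"],
    "color":    ["a", "property", "of", "visible", "light"],
    "light":    ["radiation", "that", "we", "can", "see"],
    "see":      ["to", "perceive", "light", "with", "the", "eyes"],
    "perceive": ["to", "become", "aware", "of", "by", "seeing"],
    "seeing":   ["the", "act", "of", "using", "sight"],
    "sight":    ["the", "ability", "to", "see"],
}

def ground(word, depth=0, max_depth=10):
    """Chase definitions looking for a base case. There isn't one:
    every expansion just yields more symbols that need defining."""
    if depth >= max_depth:
        return f"gave up at '{word}' after {depth} expansions: still only symbols"
    for token in definitions.get(word, []):
        if token in definitions:
            return ground(token, depth + 1, max_depth)
    return f"dead end at '{word}': the remaining words are simply undefined"

print(ground("red"))  # gives up: definitions only ever lead to more definitions
```

Swap in a real dictionary and the chain just gets longer; it never bottoms out in anything but more symbols.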

Much like a star, our brains are a real, physical object undergoing complicated processes. In a star, the fusion of atoms results in a massive release of heat and energy, and that release is what we want to capture in a reactor. In the cells of our brains, immensely complex biochemistry is carried out by the interactions of a vast number of molecular machines. Matter is being moved about and broken down for energy to carry out the construction of new materials and other processes.

We have grounding because in order to experience reality, we are both transformed by it and transformers of it. All of the activity carried out by a cell is the result of the laws of physics and chemistry playing out, natural selection iteratively refining the form and function of the molecules that prove useful in their environment for metabolism and self-replication.

Your brain isn’t taking in data to be used by algorithms, neurons are NOT passive logic circuit elements! Action potentials are not like clock cycles in computers, shunting voltage about along rigid paths of logic gated circuitry; their purpose is to activate a variety of other intracellular processes.

The cells of your brain and body are being literally transformed by their own contents and interactions with their environment, shaping and reshaping every moment of their activity. Photons of light hit the cells in your eye, triggering a sequential activation of the tiny finite state machines known as signal transduction proteins. The internal state of the cell transforms and neurotransmitter gets released, once again triggering sequential activation of signaling proteins in other cells downstream in the process. This is real chemical and mechanical transformation, a complex exchange of matter and energy between you and your environment. You understand cause and effect because every aspect of your being down to the molecule depends on and is molded by it. An experience is defined by the sum total of all of this activity happening not just in the cells of your brain but everywhere in your entire body. Perception and cognition are probably inseparable for this reason.

There is no need for models of anything in the brain. Nothing has to be abstracted out and processed by algorithms to produce a desired result. The physical activity and shifting state ARE the result, no further interpretation necessary.

Now let us examine what is actually happening in a deep learning system. The activity of neural networks is arbitrary-symbol manipulation. WE hand craft the constraints to retrieve desired results. Don’t let the fancy words and mathy-math of the blackbox impress you (or convince you to speculate that something deeper is happening), focus on examining the inputs and the outputs.

The fundamental flaw of a Large Language Model remains the same as the flaw of the expert systems. This flaw is, again, the grounding problem: how it is that words get their meanings. The training dataset is the exact same thing as the prior art of hand-coded logic rules and examples. Human beings are ranking the outputs of the chatbot for the value system the reinforcement mechanism will use to pick the most viable answer given a prompt. The black box is just averaging all of this together to be able to match a statistically relevant output to the input. There is no reasoning going on here; these systems don't even handle simple negation well. It just appears like reasoning in an LLM because the structure of the words looks good to us, derived from using a vast corpus of text to find the frequencies with which words appear together.
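To make the "frequencies that words appear together" point concrete, here is a minimal sketch of that flavor of mechanism (a toy bigram counter, nothing like the scale or architecture of a real LLM, but the same spirit of symbol statistics):

```python
from collections import Counter, defaultdict

# Toy corpus. A real model ingests billions of tokens with a far richer
# architecture, but the raw material is the same: co-occurrence statistics.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word`.
    Nothing here 'means' anything: it is pure counting over symbols."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- simply the most frequent follower
print(predict_next("sat"))  # 'on'
```

The output reads as sensible only because we, the readers, bring the meaning to it.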

Ask any linguist or psychologist: humans do not learn language like this, and humans do not make and use language like this. I must emphasize that we are NOT just doing next word prediction in our heads. Kids won't pick up language from passive exposure, even with TV.

You cannot attempt to use extra data sources like images to overcome this problem with labeled associations either. Which pixel values are the ones that represent the thing you are trying to associate, and why? Human beings are going into these data sets and labeling the images. Human beings are going in and setting the constraints of the games (possible state space, how to transition between states, formalization of the problem). Human interpretation is hiding somewhere in all of these deep learning systems; we have not actually devised any methods that work without us.

While the individual human beings labeling the data attempt to define what red is for the machine, with words and pixel values, merely even thinking about “red” is literally altering the chemistry all across their brain in order to re-experience incidents where they encountered that wavelength of electromagnetic radiation and what transpired after.

This is why there cannot be grounding and common sense in these systems; the NN can't ever “just know” like life can, because it cannot directly experience reality without it being interpreted first by us. It's a big bunch of matrix math that only has a statistical model of tokens of text and pixel values, built by averaging symbols of our experience of reality. Even the output only has meaning because the output is meaningful to us. They do absolutely NOTHING on their own. How can they perform dynamic tasks in unstructured environments without us to painstakingly define and structure everything first?

Change the labels? You change the whole outcome.

You can't change the laws of physics.

We exist in the moments when molecules bump into each other. You can't simulate that, you have to DO it, because the variance in how these bumps occur produces all of our differences and fallibility and flexibility.

The molecular dynamics are not only still too unknown to distill into an algorithm, but too complex to even simulate in real time. There isn't enough computing power on the planet to simulate all of the action in a single cell, let alone the trillions of cells that we are made of, in a human time frame with reliable accuracy.
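Here is a back-of-the-envelope sketch of why this blows up; the particle counts are invented round numbers purely to show the scaling, not measurements of any real cell:

```python
# Naive all-pairs molecular dynamics scales quadratically with particle count.
# The counts below are made-up round numbers, just to show how fast the
# arithmetic gets out of hand.
def pairwise_interactions(n_particles: int) -> int:
    """Pairs a naive simulator must consider on every time step."""
    return n_particles * (n_particles - 1) // 2

for n in (1_000, 1_000_000, 1_000_000_000):  # hypothetical particle counts
    print(f"{n:>13,} particles -> {pairwise_interactions(n):,} pairs per step")

# On top of that, molecular dynamics time steps are on the order of
# femtoseconds, so one simulated second is roughly 10**15 steps.
```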

Bonus: Moravec's paradox is still kicking our ass. Single-celled organisms (eukaryotic specifically) and the individual cells in our immune system navigate unstructured environments and complete specific and complex tasks in a manner that puts all of our robots to shame. Picture cells as tiny molecular robots composed of an incredible number of complex, nested finite state machines, and then watch the Kurzgesagt videos about the immune system. The “computing” power on display is unmatched.

0

Comments


ttkciar t1_j706e2s wrote

That's not a bad line of reasoning, but I posit that your leap from "deep learning systems will never be AGI" to "AGI is never going to happen" might be unfounded.

12

ReExperienceUrSenses OP t1_j70gkz7 wrote

My skepticism mainly comes from there not seeming to be a way to programmatically solve the grounding problem. I don't see von Neumann, instruction-set architectures being sophisticated/powerful enough in comparison to the only example we have.

3

dwkdnvr t1_j73d4n9 wrote

I agree that if AGI is achieved, it won't be through Von Neumann approaches.

But it's a pretty big leap from that to 'that means it's impossible to have a computational AGI'.

We don't know what future developments in alternate computing paradigms are going to yield. It's not inconceivable that alternate forms of state management or interconnection or even hybrid analog/digital constructs might alter the tools available. We 'know' our brains don't really work like computers with separation of 'computation' from 'storage', but given how successful the current paradigm continues to be, we haven't really pushed into investigating alternate possibilities.

My personal bet/assumption is that hybrid / cyborg approaches are what seems most likely. Genetic engineering of neural substrates combined with interfaces to more conventional computing capability seems feasible, although obviously there are many barriers to be overcome.

IMHO one of the most interesting avenues of speculation is whether AGI is even conceptually possible in a way that allows for direct replication, or whether a specific organism/instance will have to be trained individually. 'Nature' really hasn't ever evolved a way to pass down 'knowledge' or 'memories' - it passes down a genetic substrate and the individual has to 'train' its neural fabric through experience.

2

pretendperson t1_j75ex8h wrote

The answer is by better emulating the outputs of all of the core human systems based on input. We need an endocrine system analog as much as we need a neural analog.

When we encounter something that invokes fear, our brain tells our body to create adrenaline, the adrenaline makes our brain and body go faster, the brain going faster and being more scared tells the body to make more adrenaline, and so on until attenuation of the cycle is triggered by removal of the threat.

We can't do any of this by focusing on neuronal analogs alone.

1

Shiningc t1_j70eyhx wrote

Well, that's not true because CPUs are Turing complete, which means they're capable of any kind of computation that is physically possible, and that includes the "mind".

It's just that the current development of "AI" is nowhere near close to achieving this "mind".

If you say "Oh it's just too complex, we'll never understand it" then that's indistinguishable from superstition. It's no different than saying we'll never understand the Greek Gods because they're too complex beings.

8

ReExperienceUrSenses OP t1_j70g0o5 wrote

I didn't say we'll never understand it, just that it's way more complicated than we give it credit for.

And aside from the point I argue that the brain isn't computing, just because something is Turing complete doesn't mean we can build a machine to compute it. The Turing machine is a construct with infinite time and memory. We have neither.

−1

Shiningc t1_j70gb4a wrote

What? Neither does the brain have infinite time and memory.

8

ReExperienceUrSenses OP t1_j70gwoi wrote

I'm saying that there are plenty of examples showing that even if something is Turing complete, we can't do anything with it if it would take our machines 1000 years to compute.

−3

quailtop t1_j72jh4m wrote

This is not what Turing-complete means! Turing-complete for X simply means any algorithm a Turing computer can execute, X can do. Turing computers are not capable of magic - they are the litmus test for what's feasible, but they can't execute every physical computation. For example, they cannot execute a quantum algorithm.

There is no evidence to suggest a Turing computer can reproduce the "mind", which is really the crux of OP's point. If your model of cognition relies on mental processes being reducible to symbol manipulation, then, yes, a mind can be formed from a Turing-complete device. But OP is arguing that cognition is not, even in principle, symbolic manipulation - rather, it is substrate-dependent (the choice of machinery used to implement it directly factors into the experience of consciousness or cognition).

It is not an uncommon view in the philosophy of cognition.

−2

Shiningc t1_j72p1wx wrote

That is what Turing complete means. We're assuming that a Turing computer is capable of doing any kind of computation that is physically possible. Of course, it needs a quantum computer to do quantum calculations, so the Church–Turing–Deutsch principle states that it needs a quantum computer in order to truly execute every physical calculation possible, but that's a whole other beast. Turing-complete just means minus the quantum processes.

It is possible that the human brain is doing some sort of quantum calculations, but most would probably doubt it.

>There is no evidence to suggest a Turing computer can reproduce the "mind", which is really the crux of OP's point.

Of course there's no "evidence", because we have never created a mind yet. The point is that a Turing complete CPU is physically indistinguishable from the human brain. They are the same thing in principle.

The "magic" is in the programming. We just don't know how to program a mind yet.

The "evidence" is in the human brain. The mind exists inside of the human brain. The human brain is a physical object, just like a CPU is. The human brain is Turing-complete. So is a CPU.

3

darkmist29 t1_j711thd wrote

It's funny because on one hand we think we are so special that our intelligence could never be replicated, and on the other hand so special that we could definitely replicate it.

My personal opinion is that we have already hit a point where current AI is passing the Turing test in a sense, with some people - and that will probably happen more and more. I think we will get an intelligent general AI that can at least fool us into believing that they are beings just like us - and if they have all the faculties we have, like being able to walk around and do work - it won't be a big deal that maybe they aren't 'really' human. I'm watching to see whether AI can be truly creative instead of just drawing on the data in its training. I think AI fills in the cracks of stuff we already know about (and can seem creative, like winning at chess), and doesn't reach into the dark to do really creative things. (Though, that's better than most of us do.) I wish I could see the original AI projects before they limit what they can say in public; it would probably be pretty revealing.

There will have to be more work to make things more human. But I really think there is a big difference between fooling humans into thinking a robot is human-like, and really studying everything there is to be human and replicating it with our robotic tech. It seems like the reason it might never happen is because we might not ever need to do it. We might just keep building robots of necessity rather than one to one copies of us.

7

ReExperienceUrSenses OP t1_j721uep wrote

I don't think humans are special; I think cells are special, and purely from a "what are these things actually DOING" standpoint.

Like have you SEEN ATP synthase? Look at the sophistication:

ATP Synthase in Action


This is molecular machinery. It's frickin nanotechnology. This is power we haven't even begun to replicate. And I'm not saying we can't, I'm saying it is really really hard. Fill trillions of tiny sacs with machinery like this, all working together, and the challenge grows. And there is no computing happening here, just action. So the computers are already one step removed from the actual function, thus increasing the amount of compute required to simulate it, to say nothing of the challenge of actually just straight up DOING it.

3

darkmist29 t1_j72o4t3 wrote

That's really interesting! Thanks for giving me something to dive into after work.

I am totally with you on most of this. I agree that current tech is far removed from telling us everything about what we are - we have so much more to learn. I think the years of evolutionary progress have a lot to tell us still about what we have become.

Cells are actually one of the most interesting things ever. Because... to me, there is sort of a guiding force to the universe in just thinking about cell groups in nearly everything. Not just our cells, like skin cells, which are interesting enough - but I've seen some videos online of computer simulations where given a few rules to a simulation, little nodes can group up together and create bigger cells. In time, they come together, they fall apart. Coming together, in the simplest sense, seems important. If you look at the state of life on planet earth, one could hope that we come together instead of falling apart.

1

ReExperienceUrSenses OP t1_j732ptn wrote

Coming together is how eukaryotes exist in the first place. One bacterium ingested another, and the ingested became the mitochondria. Merging in a symbiotic way expanded the capacity of a single cell. Incredible stuff.

1

goldygnome t1_j72261j wrote

First paragraph claims to know that "intelligence" can't be mimicked by our tech, yet intelligence is just learning and application of skills, which LLMs mimic quite successfully to a limited extent.

Nobody is seriously claiming LLMs reason and nobody is seriously claiming that human consciousness is just an LLM.

Intelligence and consciousness are two separate things. We have demonstrated superhuman capabilities in single domains. AGI just expands that to all domains. It does not require consciousness and it is achievable with our tech. Google has already demonstrated an AI that is capable across dozens of domains.

Of course, I'm assuming this wasn't some elaborate chat bot troll.

3

ReExperienceUrSenses OP t1_j727zyx wrote

Not a troll. I was a part of this project for four years:

Full Adult Fly Brain

I know that consciousness and intelligence are separate things; I never claimed otherwise. I'm just here to pick brains and discuss the computability of the brain. I don't argue these things to call anyone dumb, just curious to see what people say when presented with these ideas.

Those claims of superhuman capabilities in single domains are misleading. The machines performed well on the benchmarks, not necessarily in any real-world scenarios. Give them some out-of-distribution data, not in their training datasets, and they crumble.

I use LLMs as an example because they operate with the same fundamental architecture as all the others and it's the "hot thing" right now. Progress in these areas doesn't necessarily mean overall progress toward the goal of AGI, and I just urge people to exercise caution and think critically about all the reporting.

EDIT: I posted that research project, because I worked extensively with neural networks to automate the process of building that connectome. I'm familiar with the hurdles that go into training a machine to see and trace the individual cells in those images and detect the points of synapse.

I use LLMs as an example, because I know that people are confusing using words with understanding the meaning of the words.

1

khrisrino t1_j728qg8 wrote

“… intelligence is just learning and application of skills”

Sure, but that learning has been going on for a billion years, encoded in DNA and passed on in culture, traditions, books, the internet, etc. That training dataset does not exist to train an LLM on. We may have success in very narrow domains, but I doubt there will (ever?) be a time when we have an AI that is equivalent to a human brain over all domains at the same time. Maybe the only way to achieve that will be to replicate the brain completely. Also, many domains are exponentially intractable because it's not just one human brain but all human brains over all time that are involved in the outcome, e.g. stock markets, political systems, etc.

0

goldygnome t1_j7nldog wrote

Self learning AI exist. Labels are just our names for repeating patterns in data. Self learning AIs make up their own labels that don't match ours. It's a solved problem. Your information is out of date.

Google has a household robot project that successfully demonstrated human like capabilities across many domains six months ago.

True, it's not across ALL domains, but it proves that narrow AI is not the end of the line. Who knows how capable it will be when it's scaled up?

https://jrodthoughts.medium.com/deepminds-new-super-model-can-generalize-across-multiple-tasks-on-different-domains-3dccc1202ba1

1

khrisrino t1_j7pjj2g wrote

We have “a” self-learning AI that works for certain narrow domains. We don't necessarily have “the” self-learning AI that gets us to full general AI. The fallacy with all these approaches is that it only ever sees the tip of the iceberg. It can only summarize the past; it's no good at predicting the future. We fail to account for how complex the real world is and how little of it is available as training data. I'd argue we have neither the training dataset nor the available compute capacity, and our predictions are all a bit over-optimistic.

1

goldygnome t1_j7rvgk5 wrote

Where are you getting your info? I've seen papers over a year ago that demonstrated multi-domain self-supervised learning.

And what makes you think AI can't predict the future based on past patterns? It's used for that purpose routinely and has been for years. Two good examples are weather forecasting & finance.

I'd argue that training data is any data for unsupervised AI, that AI has access to far more data than puny humans because humans can't directly sense the majority of the EM spectrum and that you're massively overestimating the compute used by the average human.

1

niboras t1_j70onpc wrote

Never is a long time. The fact that we are made of a couple dozen elements and we can think means that at the very least we could use synthetic biology to create a flesh brain similar to ours and then improve it as we go. Conversely, maybe you are right, but then that probably means we aren't actually intelligent either. We just think we are. Maybe that should be a test: a thing is intelligent if it can design an intelligence equal to or greater than itself (sex doesn't count).

2

ReExperienceUrSenses OP t1_j71xvm7 wrote

Synthetic biology would hardly be able to be called artificial intelligence by our concepts of the terms. We want a program we can run on computers that behaves intelligently. Synthetic biology is just biology, a completely different paradigm as I've already laid out. You couldn't program it to follow your commands, only exercise power over it. Slavery, essentially. This is the reason I say never. The stored program computer is not up to the task; the stuff that makes up a brain, which results in a mind, is not programmable.

We and our eukaryotic brethren are intelligent, because we actually do the sophisticated things required by our definitions of intelligence. It's a hardware (wetware) problem, not a philosophically unreachable subject.

1

pretendperson t1_j7cvcie wrote

Is it not reproducible by emulation at a molecular chemistry level, factoring in the biophysics elements such as ion channels?

1

ReExperienceUrSenses OP t1_j7ebwnz wrote

It's a two-pronged problem. First, there are just too many elements with a lot of complex dynamics at the molecular level. Our hardware is just not good at that type of task, especially when you scale it to trillions of cells, and then the environment around them at the molecular level, because that is a huge factor as well.

The second problem is that we also do not even know all of the dynamics, so we don't exactly have all the data necessary for running the simulation in the first place. We don't have a full account of all the metabolic and signal transduction pathways and various other processes, and how they intersect each other. We can't exactly get a real-time view into a living organism's cells at molecular resolution.

1

rogert2 t1_j70zgil wrote

Good post. I do have responses to a few of your points.

You argue that the systems we're building will fail to be genuine intelligences because, at bottom, they are blindly manipulating symbols without true understanding. That's a good objection, just as valid in the ChatGPT era as it was when John Searle presented it as a thought experiment that has become known as "The Chinese Room argument":

> Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

There's plenty of evidence to show that modern "AIs," which are just language models, are essentially the same as Searle's box (worse, even, because their instructions are noticeably imperfect). So, I think you're on solid ground to say that ChatGPT and other language models are not real intelligences, and furthermore that nothing which is just a language model could ever qualify.

But it's one thing to say "a language model will never achieve understanding," and quite another to say "it is impossible to create an artificial construct which has real understanding." And you do make that second, stronger claim.


Your argument is that the foundation that works for humans is not available to computers. I think the story you tell here is problematic.

You talk a bit about the detailed chain of physical processes that occur as sensory input reaches the human body, travels through the perceptual apparatus, and ultimately modifies the physical structure of the brain.

But, computers also undergo complex physical processes when stimulated, so "having a complex process occur" is not a categorical differentiator between humans and computers. I suspect that the processes which occur in humans are currently much more complex than those in computers, but we can and will be making our computers more complex, and presumably we will not stop until we succeed.

And, notably, a lot of the story you tell about physical processes is irrelevant.

What happens in my mind when I see something has very little to do with the rods and cones in my eyes, which is plain when we consider any of these things:

  • When I think about something I saw earlier, that process of reflection does not involve my eyeballs.
  • Color-blind people can learn, understand, and think about all the same things as someone with color-vision.
  • A person with normal sight who becomes blind later does not lose all their visual memories, the knowledge derived from those memories, or their ability to reflect on those things.

Knowledge and understanding occur in the brain and not in the perceptual apparatus. (I don't know much about muscle memory, but I'd wager that the hand muscles of a practiced pianist don't play a real part in understanding Rachmaninoff's work. If any real pianists disagree on that point, PM me with your thoughts.)


So, turning our attention to just what happens in the brain, you say:

> The physical activity and shifting state ARE the result, no further interpretation necessary

I get what you're saying here: the adjustment that occurs within the physical brain is the learning. But you're overlooking the fact that this adjustment is itself an encoding of information, and is not the information itself.

It's important to note that there is no resemblance between the physical state of the brain and the knowledge content of the mind. This is a pretty famous topic in philosophy, where it's known as "the mind-body problem."

To put it crudely: we are quite certain that the mind depends on the brain, and so doing stuff to the brain will have effects on the mind, but we also know from experiment that the brain doesn't "hold" information the way a backpack "holds" books. The connection is not straightforward enough that we can inspect the content of a mind by inspecting the brain.

I understand the word "horse." But if you cut my brain open, you would not find a picture of a horse, or the word "horse" written on my gray matter. We can't "teach" somebody my email password by using surgery to reshape their brain like mine.

And that cuts both ways: when I think about horses, I have no access to whatever physical brain state underlies my understanding. In fact, since there aren't any nerve endings in the brain, and my brain is encased in my skull (which I have not sawed open), I have no direct access to my brain at all, despite being quite aware of at least some of the content of my mind.

So, yes, granted: AI based on real-world computing hardware would have to store information in a way that doesn't resemble the actual knowledge, but so do our brains. And not only is there no reason to suppose that intelligence resides in just one particular encoding mechanism, even if it did, there's no reason to suppose that we couldn't construct a "brain" device that uses that same special encoding: an organic brain-thing, but with five lobes, arranged differently to suit our purposes.


The underpinnings you highlight are also problematic.

I think this quote is representative:

> The base case, the MEANING comes from visceral experience.

One real objection to this is that lots of learning is not visceral at all. For example: I understand the term "genocide," but not because I experienced it first-hand.

Another objection is that the viscera of many learning experiences are essentially indistinguishable from each other. As an example: I learned different stuff in my Philosophy of Art class than I learned in my Classical Philosophy class, but the viscera of both consisted of listening to the exact same instructor lecturing, pointing at slides that were visually all but identical to each other, and reading texts that were printed on paper of the same color and with the same typeface, all in the exact same classroom.

If the viscera were the knowledge, then because the information in these two classes was so different, I would expect there to be at least some perceptible difference in the viscera.

And, a Spanish student who took the same class in Spain would gain the same understanding as I did, even though the specific sounds and slides and texts were different.

I think all of this undermines the argument that knowledge or understanding are inextricably bound up in the specifics of the sensory experience or the resulting chain reaction of microscopic events that occurs within an intelligent creature.

TO BE CONTINUED...

2

rogert2 t1_j70zh4c wrote

Zooming back out to the larger argument: it seems like you're laboring under some variation of the picture theory of language, which holds that words have a metaphysical correspondence to physical facts, which you then couple with the assertion that even though we grasp that correspondence (and thus wield meaning via symbols), no computer ever could -- an assertion you support by pointing to several facts about the physicality of human experience that it turns out are not categorically unavailable to computers or are demonstrably not components of intelligence.

The picture theory of language was first proposed by super-famous philosopher Ludwig Wittgenstein in the truly sensational book Tractatus Logico-Philosophicus, which I think he wrote while he was a POW in WWI. Despite the book taking Europe by storm, he later completely rejected all of his own philosophy, replacing it instead with a new model that he described as a "language game".

I note this because, quite interestingly, your criticisms of language models seem like a very natural application of Wittgenstein's language-game approach to current AI.

I find it hard to describe the language-game model clearly, because Wittgenstein utterly failed to articulate it well himself: Philosophical Investigations, the book in which he laid it all out, is almost literally an assemblage of disconnected post-it notes that he was still organizing when he died, and they basically shoveled it out the door in that form for the sake of posterity. That said, it's filled with startling insight. (I'm just a little butt-hurt that it's such a needlessly difficult work to tackle.)

The quote from that book which comes to my mind immediately when I look at the current state of these language model AIs, and when I read your larger criticisms, is this:

> philosophical problems arise when language goes on holiday

By which he means something like: "communication breaks down when words are used outside their proper context."

And that's what ChatGPT does: it shuffles words around, and it's pretty good at mimicking an understanding of grammar, but because it has no mind -- no understanding -- the shuffling is done without regard for the context that competent speakers depend on for conveying meaning. Every word that ChatGPT utters is "on holiday."

But: just because language-model systems don't qualify as true AGIs, that doesn't mean no such thing could ever exist. That's a stronger claim that requires much stronger proof, proof which I think cannot be recovered from the real shortcomings of language-model systems.

Still, as I said, I think your post is a good one. I've read a lot of published articles written by humans that didn't engage with the topic as well as I think you did. Keep at it.

3

ReExperienceUrSenses OP t1_j71z12h wrote

You all really have to go on a journey with me here. The mind FEELS computable, but this is misleading.

Consider this: how much of your mind actually exists separate from the body? I'm sure you have attempted a breakdown. You can start by removing control of your limbs. Still there. Then any sensation. Still there. Remove signals from your viscera, like hunger. Mind is still there, I guess. Now start removing everything from your head and face. Sight, sound, taste. The rest of the sensations in your skin and any other motor control. Now you are a mind in a jar, sensory deprived. You would say you're still in there, though. But that's because you have a large corpus of experiences in your memory for thoughts to emerge from. Now try to imagine what you are if you NEVER had any of those experiences to draw from.

So to expand what I was getting at a bit further, when I say visceral experience I mean that all the coordinated activity going on in and around all the cells in your body IS the experience. You say processing doesn't occur in the eye, but that is the first place it does. The retina is multiple layers of neurons and is an extension of the brain, formed from the embryonic neural tissue. If you stretch it a bit further, at the molecular level, everything is an “extension” of the brain. If everything is, then you can start to modularize the body in different ways. Now you can think of the brain as more the medium of coordination than the executive control. Your mind is the consensus of all the cells in your body.

The things I've been hypothesizing about in my studies of microbiology and neuroscience require this bit of reconceptualizing, choosing a new frame of reference to see what you get.

You can think of neurons as both powerful individual organisms in their own right AND a neat trick: they can act in concert as if they were a single shared cytoplasm, but remain with separate membranes for speed and process isolation. Neurons need to quickly transmit signal and state from all parts of the body, so that, for instance, your feet are aware of what's going on with the hands and they can work together to acquire food to satisfy the stomach. This doesn't work in a single shared cytoplasm with any speed and integrity at the scale of our bodies. Some microorganisms coordinate into shared cytoplasms, but our evolutionary line utilized differentiation to great effect.

Everyone makes the assumption that I'm saying humans are special. I'm really not. This applies to all life on this planet. CELLS are special, because the "computing power" is unmatched. Compare electronic relays vs vacuum tubes vs transistors. Can't make a smartphone with vacuum tubes. Likewise, transistors are trounced by lipid membranes, carbohydrates, nucleic acids, and proteins, among other things, in the same way. Computers shuffle voltage; we are “programmable” matter (as in, matter that can be shaped for purpose by automated processes, not that there are programs involved. Because there aren't). This is a pure substrate comparison; the degree of complexity makes all the difference, not just the presence of it. We are matter that decomposes and recomposes other matter. Computers are nowhere near that sophistication. Computers do not have the power to even simulate fractions of all that is going on in real time, because of rate-limiting steps and combinatorial explosions that blow up the algorithmic complexity. All you have to do is look up some of our attempts to see the engineering hurdles. Even if it's logically possible from the view of the abstract mathematical constructs, that doesn't mean it can be implemented. Molecular activity at that scale is computationally intractable.

To go further, even if it is not computationally intractable, the problem still remains: how do you encode the things I've been talking about here? Really try to play this out in your mind. What does even just some pseudocode look like? Now look back at your pseudocode. How much heavy lifting is being done by the words? How many of these things can actually be implemented with a finite instruction set architecture? With Heisenberg's uncertainty principle lurking about, how accurate are your models and algorithms of all this molecular machinery in action?

2

Surur t1_j71a7p0 wrote

> And that's what ChatGPT does: it shuffles words around, and it's pretty good at mimicking an understanding of grammar, but because it has no mind -- no understanding -- the shuffling is done without regard for the context that competent speakers depend on for conveying meaning. Every word that ChatGPT utters is "on holiday.

This is not true. AFAIK it has a 96 layer neural network with billions of parameters.

1

ReExperienceUrSenses OP t1_j71vga7 wrote

I'll just give a quick reply to this point about "genocide" here, and then post the rest of my thoughts that you spurred (thanks!) in a reply/chain of replies to your last post in order to expand upon and better frame the position that i'm coming from.

So you know what genocide is because you make analogies from your experiences. You have experienced death. You've seen it, smelled it, touched it, thought about it, and felt emotions about it, especially in relation to your own survival. You have experienced many different ways to categorize things and other people, so you understand the concept of groups of humans. You can compose from these experiences the concept of murder, and expand that to genocide. You haven't experienced nothingness, but you have experienced what it is to have something, and then NOT have that something. Language provides shortcuts and quick abstractions for mental processing. You can quickly invoke many many experiences with a single word.

−1

Gagarin1961 t1_j71bhi8 wrote

You should read Nick Bostrom’s Superintelligence.

There are multiple theoretical paths to AGI, and one of them is “full brain emulation.”

This would essentially be scientists recreating a brain down to the neuron in electronic form. By its nature it would be faster than a human brain and could be expanded upon and improved.

I don’t know enough about the brain to say it will happen for sure, but obviously it’s an example of how there are different ways of achieving AGI that you may not have considered.

2

pretendperson t1_j75lusm wrote

Needs an endocrine system too. Not just neural emulation. Bostrom is narrow in focus.

0

JerrodDRagon t1_j72al95 wrote

I heard years ago AI could never do the things it can now

This is before huge companies put billions into the tech

We can either accept that AI will slowly take more jobs and give humans more time to work on other things or ignore it until it affects your life

2

ReExperienceUrSenses OP t1_j72dlwo wrote

I should have used "not sure if ever" because everyone keeps getting caught up on that. I was being provocative.

I too thought we were in a new era. But then I learned the mechanics of how this tech works and its limitations. Then I compared it with the thing we are trying to emulate/simulate and saw even more limitations. I base my conjecture of "never" on the severe hardware limitations I can see.

I use the word "never" because I know that if we were to overcome those limitations, it would be with machinery that looks and operates nothing like anything we have now (von Neumann, finite instruction set architectures; the stored program computer, essentially), so much so that all of the hangups we currently have with things like job loss and runaway superintelligence do not apply.

We have made many gains, sure. But I try to point out that the symbol grounding problem persists, we just hid all the human involvement. None of you believes that Expert Systems will lead to an AGI, but "neural networks" are given a lot of leeway because of that illusion.

People made progress with Alchemy too.

1

JerrodDRagon t1_j72es5y wrote

ChatGPT just reached a million users

That means more money will be invested and more data use

I personally think it's the best time to invest in the tech and stop worrying about what it will be used for

I'd rather spend time looking up stocks than arguing with Reddit users. The only reason I'm not investing more is because I have to wait for my next paycheck

2

ItsAConspiracy t1_j72d05m wrote

Physical robots might help with the grounding problem. They could learn just like humans do.

Regarding conscious awareness, I don't necessarily think it's computable. We have no idea how to map computation to qualia. We've started assuming qualia is a type of computation, just because some types of intelligent behavior are, but really it might depend on a particular physical arrangement, or be something else entirely.

But that doesn't mean computers won't outcompete us. A chess program can destroy me at chess and I'm pretty sure it's not conscious. A more complex AI might do the same in the real world. And if we get wiped out by an AI that's just an unconscious automaton, that'd be even more horrifying.

2

pretendperson t1_j7ctitj wrote

We need to train the AI mind in an iterative process of simulated childhoods - we could iterate more quickly virtually than we could in realtime. And then let it learn again in a physical environment.

1

ThatWolf t1_j739rd2 wrote

These thoughts of mine could be better stated, but I don't have time right now.


That's a lot of words to say that because you think we may not be able to create AI with current and/or proposed technology, you don't think it will ever be able to exist. To dismiss our ability to create true AI/AGI in the future because of modern limitations is a bit shortsighted at best. Especially considering that we're already doing things with AI that were impossible to do in the past.

I think you're overselling the computational complexity of things like immune cells. They're not exactly navigating the body on a self determined path to find pathogens. There is no 'thinking' involved beyond reacting to a stimulus. I'd also argue that Moravec's Paradox isn't really a paradox, but a misunderstanding of how complex those 'simple' tasks were at the time the statement was made because they lacked the relevant information. We now know that our senses account for a huge amount of our brain's processing capacity.

Likewise, for some reason you're completely ignoring the fact that human interpretation of data is literally how we teach other humans right now. We spend decades teaching young humans what red is, what hot is, what symbols we use to communicate and how to use them, what symbols we use to calculate and how to use them, how different things interact, and on and on. No human on the planet is born with the knowledge of what red is, it's a human interpretation that's taught to other humans by other humans. And even that can be wrong because there are humans with red/green color vision deficiency that cannot accurately interpret those colors.


>There is no need for models of anything in the brain. Nothing has to be abstracted out and processed by algorithms to produce a desired result.

Your brain absolutely creates models or algorithms (or whatever you would like to call them). When you learn to ride a bicycle, for example, your brain creates a model of what you need to do to produce the desired result of riding a bicycle without crashing. When that 'bicycle riding' model encounters a situation it's unfamiliar with you often end up crashing the bike, such as riding a bicycle with the steering reversed. Your brain is using a model it made of how a bicycle is supposed to work and even though you 'know' that the steering is backwards, you're unable to simply get on and ride such a bicycle because the model in your brain is unable to accommodate the change without significant retraining.

2

ReExperienceUrSenses OP t1_j745ly9 wrote

>I think you're overselling the computational complexity of things like immune cells. They're not exactly navigating the body on a self determined path to find pathogens. There is no 'thinking' involved beyond reacting to a stimulus.

I never said they were thinking. This is why people get so hung up on the brain as a necessity for complex action and "behavior." Come take a walk with me. I'm going to describe Chemotaxis for you.

Chemotaxis is an important part of the movement of bacterial cells. It's how they swim toward food and away from danger/noxious components. In the long pill-shaped form of E. coli, usually at one tip there will be some transmembrane proteins. These proteins are receptors. Small molecules bind to these receptors, with things like amino acids and sugars being food and nickel ions and acids being noxious. About 4 kinds of receptor will be surveying the surroundings. When the right molecule binds to the receptor, it triggers a signal transduction cascade. Upon binding to the receptor, the chain reaction leads to a protein binding to another that is connected to the flagellum. The binding turns it like a rotor, moving the flagellum, and the bacterium will tumble in one direction or another.

No thinking involved. But see how it didn't really need computation or anything either? It was purely a mechanical process at the molecular level. Molecule binds, chain of chemical reactions, a different protein binds to another to become a spinning motor. Our immune cells are very much doing something similar, only more of it. They have the option of more receptor types, more space for those receptors, and more internal space and material for very many chained reactions. Natural selection iterated and created enough of the "wiring" needed for an immune cell to carry out its wet work or support duties.
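If it helps, here is that cascade written out as a crude finite state machine; the state and event names are invented simplifications (the real pathway runs through proteins like CheA and CheY and has far more moving parts):

```python
# Crude finite-state sketch of the chemotaxis cascade described above.
# States and events are invented simplifications of the real pathway.
TRANSITIONS = {
    ("receptor_empty",     "attractant_binds"):    "receptor_bound",
    ("receptor_bound",     "cascade_fires"):       "signal_relayed",
    ("signal_relayed",     "motor_protein_binds"): "flagellum_spinning",
    ("flagellum_spinning", "attractant_released"): "receptor_empty",
}

def step(state, event):
    """Each event is a physical binding or unbinding; the 'computation'
    is just whichever shape the machinery ends up in next."""
    return TRANSITIONS.get((state, event), state)

state = "receptor_empty"
for event in ("attractant_binds", "cascade_fires", "motor_protein_binds"):
    state = step(state, event)
    print(event, "->", state)
```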

When you examine pathways like this with more types of cells, and think about all that is going on in the body when you scale this up, it is easier to imagine that it is entirely possible for us to operate without any abstract computation going on. You might say "that's so simple and limited, and that would make us no more than automatons," but you would be wrong, because of the scale. There are trillions of cells in our bodies, with a large enough genome to create an absurd variety of protein complexes. The "computing" power of this is IMMENSE. And it's chemical soup, so it's all a gigantic, fuzzy, impossibly huge finite state machine diagram. There isn't any determinism to worry about because there are too many molecules; the combinatorial explosion is too intense.

Sequences of direct action and reaction that change "behavior" based on the current conditions of cell and its surrounding environment.

THIS is why I say there are no models in the brain. Based on what? There's no need.


>Your brain absolutely creates models or algorithms (or whatever you would like to call them). When you learn to ride a bicycle, for example, your brain creates a model of what you need to do to produce the desired result of riding a bicycle without crashing

Prove it. Where in the brain are the models stored? How are they accessed and updated? What is the biochemistry that is creating them? This is a hotly disputed concept in the field. You don't need models of bikes and desired results, just more chemotaxis, if you think about it for a while.

We can waste time trying to decipher encoding schemes which might not even exist, or we can map the actual activity going on.

−1

ThatWolf t1_j76ouok wrote

Your post reads like someone who has taken a psychoactive and suddenly believes they understand the nature of things. The only meaningful conclusion that I can draw from your post(s) is that you do not actually understand what you're talking about nearly as well as you believe you do.


>Sequences of direct action and reaction that change "behavior" based on the current conditions of cell and its surrounding environment.
>
>THIS is why I say there are no models in the brain. Based on what? Theres no need.

The random motions of cells in a body do not make for intelligence any more than the wind making waves in the ocean does. Random cellular motions do not produce repeatable outcomes. It's a well-established scientific fact that memories are the result of synaptic connections between neurons, and that those memories will activate those same synaptic pathways and neurons every single time you access them.

>Prove it. Where in the brain are the models stored.

For my example of riding a bicycle, the main areas this information is stored in are a combination of the hippocampus, cerebellum, and basal ganglia. If your conjecture were actually true, then it would be impossible for a brain injury to have any impact on your existing abilities or skills. But we know that injuring a specific part of the brain can cause you to become worse at, or completely lose, a skill. In fact, using existing brain mapping technology we can specifically target parts of the brain that retain specific information if we want, or even avoid them completely, as is the case when performing neurosurgery.

Likewise, do not mistake the brain's capacity to heal/repair itself after an injury as evidence that these pathways do not exist. Similar to how the internet does not completely shut down if a link goes down, the brain is able to reroute and create new neural connections to parts that still work.

I'm not even going to bother addressing the issues with your understanding about modern AI. I've already spent way too much time on this post as it is.

1

SilveredFlame t1_j70xph3 wrote

Realistically, we wouldn't recognize it because we don't want to recognize it.

We like to think we're special. That there's something genuinely unique to humanity. We're arrogant in the extreme, and leverage that hubris at every opportunity to elevate ourselves above the rest of the animal kingdom, apart from it.

Go back at various points and you'll find the prevailing opinion that only humans think, or feel pain, or have emotions, or have language, or higher cognition (e.g. problem solving). Hell, it wasn't that long ago there was considerable disagreement as to whether or not some humans were humans!

The same thing applies to tech we've created.

The goal posts have shifted so many times it's hard to keep track, and they're shifting again.

Now I'm not taking a position with this statement as to whether we've already achieved the creation of a sentient AI or not. Only that we keep shifting the goal posts of what computers will or will not be able to do and what constitutes intelligence.

I'm old enough to remember being told that animals didn't feel pain and their reactions were just reflexes (sounded like bullshit to me back then too, and it felt the same way all these talks of intelligence feel). I'm old enough to remember when people were certain a computer would never be able to beat humans at chess.

Of course, when Deep Blue came around suddenly it was "Oh well of course the computer that's completely about logic would be better than us at chess! It can just calculate all the possible moves and make the optimal one based on logic!".

Then of course the goal posts were moved. Abstract concepts, language, that's the real trick! Well then Watson came along and demonstrated a solid grasp of nuance, puns, quirks of language, etc.

Of course the Turing test was still sitting there in the background, undefeated. But then it wasn't. Then it got beat again. At this point, it's Glass Joe.

Then you have some very impressive interactive language models that talk about being self aware, not wanting to be turned off, contemplating philosophical questions, etc.

Now again, without taking a position as to whether or not any of these reach the threshold of sentience, as a species we will not recognize it when it happens.

Because we don't want to recognize it. We want to remain special. Unique. We don't want any equals, and we're terrified of betters.

If and when a truly sentient AI emerges, we won't recognize it. We'll be arguing about it when we go to turn it off until we can decide on an answer.

1

Surur t1_j70yxaq wrote

I think it's very ironic that you talk about grounded visceral experiences when much of what you are talking about is just concepts. Things like cells. Things like photons. Things like neural networks. Things like molecules and neurotransmitters.

You need to face the fact that much of the modern world, and your understanding of it, depended not at all on what you learnt as a baby when you learnt to walk; a lot of what you know lives in an abstract space, just like it does for neural networks.

I asked ChatGPT to summarise your argument:

> The author argues that artificial intelligence is unlikely to be achieved as intelligence is complex and inseparable from the physical processes that give rise to it. They believe that the current paradigm of AI, including large language models and neural networks, is flawed as it is based on a misunderstanding of the symbol grounding problem and lacks a base case for meaning. They argue that our minds are grounded in experience and understanding reality and that our brains are real physical objects undergoing complex biochemistry and interactions with the environment. The author suggests that perception and cognition are inseparable and there is no need for models in the brain.

As mentioned above, you have never experienced wavelengths and fusion. These are just word clouds in your head that you were taught through words and pictures and videos, a process which is well emulated by LLMs, so your argument that intelligence needs grounded perception is obviously flawed.

Structured symbolic thinking is something AI still lacks (much like many, many humans), but people are working on it.

ReExperienceUrSenses OP t1_j71wkhg wrote

I know we haven't experienced wavelengths. That's the word we came up with to describe the material phenomenon known as light, and to measure one aspect of that phenomenon that we directly experience.

Those words decompose to actual physical phenomena. We use them as a shortcut description to invoke an analogous experience. Molecules aren't balls and sticks, but that's the easiest way we can conceptualize the reality we have uncovered beyond our senses, to make it in any way understandable.

Surur t1_j71yd0f wrote

> Those words decompose to actual physical phenomena

In some cases, but in many cases not at all. And certainly not ones you experienced. Your argument is on very shaky ground.

ReExperienceUrSenses OP t1_j7203bu wrote

If a word doesn't decompose into physical phenomena, it is still analogized to, or put in relation to, physical phenomena we have experienced.

If not, please expand, because I'd love to see counterexamples. It would give me more to think about. I'm not here to win; I WANT my argument deconstructed further so I know where to expand and keep researching, to catch the things I missed or forgot to account for.

Surur t1_j720v0s wrote

I already made a long list.

Let's take cells. Cells are invisible to the naked eye, and humans only learnt about them in the 1600s.

Yet you have a very wide range of ideas about cells, none of which are connected to anything you can observe with your senses. Cells are a purely intellectual idea.

You may be able to draw up some metaphor, but it will be weak and non-explanatory.

You need to admit you can think of things without any connection to the physical world and physical experiences. Just like an AI.

ReExperienceUrSenses OP t1_j723uew wrote

But we CAN see cells. We made microscopes to see them, and electron microscopes to see some of the machinery they are made of, and ran various other experiments with chemistry to indirectly determine what they are made of. In the process, we expanded our corpus of experiences and the new analogies we can make with those experiences. Why do you think we have labs in school where we recreate these experiments? Giving students direct experience with the subject helps them LEARN better.

Metaphors ARE weak and sometimes non-explanatory when we don't have an analogous experience to draw from. This is the difficulty we face in science right now: the world of the very small and the very large is out of our reach, and we have to make a lot of indirect assumptions that we back with other forms of evidence.

Surur t1_j72cx9j wrote

> But we CAN see cells. We made microscopes to see them.

That is far from the same. You have no visceral experience of cells. Your experience of cells is about the same as an LLM's.

> This is the difficulty we face in science right now, the world of the very small and the very large is out of our reach and we have to make a lot of indirect assumptions that we back with other forms of evidence.

Yes, exactly, which is where your theory breaks down.

The truth is we are actually pretty good at conceptualizing things we cannot see or hear or touch. A visceral experience is not a prerequisite for intelligence.

> What I am trying to argue here is that “intelligence” is complex enough to be inseparable from the physical processes that give rise to it.

But I see you have also made another argument - that cells are very complex machines which are needed for real intelligence.

Can I ask what you consider intelligence to be? Because computers are super-human when it comes to describing a scene, reading handwriting, understanding the spoken word, playing strategy games, and a wide variety of other things which are considered intelligent. The only issue so far is bringing them all together, but that seems to be only a question of time.

ReExperienceUrSenses OP t1_j72ghoc wrote

Seeing them IS the visceral experience I'm talking about. We can even touch them and poke and prod them with things and see what they do. We feed them and grow them. You brush their waste products off your teeth and spew their gases out of either end of your GI tract. All of this interaction, including the abstract thought about it (because thinking itself is cellular activity: neurons signaling each other to trigger broader associations formed from the total chain of cellular activity those thoughts engage), together forms the "visceral experience."

When I say visceral I don't mean the human gut, I mean the inside of the cells themselves. Nothing is purely abstract; there is molecular activity going on for every single thing. It is the dynamics of that activity that determine the intelligence, because those dynamics are what "ground" everything. How would you approach the symbol grounding problem? Because every time we note these systems failing to reason properly, it comes back to that issue.

None of these systems are superhuman; read the actual papers that put out those claims and you will see it's a stretch. "Superhuman performance" is on specific BENCHMARKS only. For instance, none of the medical systems got anywhere (remember Watson?), and self-driving cars are proving to be way harder than we thought. They might as well be trains, given all the stuff they have to do to get them to work in actual dynamic driving situations. Games are not a good benchmark, because we created machine-readable representations of the state space and the rules for transitions between states, and games have a formal structure that can be broken down into algorithmic steps. The machines don't play the games like we do; we have to carefully engineer the game into a form the machine can act on, as in the sketch below.
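
To make concrete what I mean by engineering the game into a machine-actable form, here is a minimal, hypothetical sketch in Python (my own toy example, not code from any system mentioned): even for tic-tac-toe, the state encoding, the transition rule, and the win condition all have to be hand-written by someone who already understands the game before any search over the state space can happen. The machine only ever operates inside that formal shell.

```python
# Minimal, hypothetical sketch: a hand-built formalization of tic-tac-toe.
# Every piece the machine acts on (state encoding, legal transitions,
# win test) is engineered by a human in advance; nothing here is grounded
# by the program itself.
from typing import Optional, Tuple

State = Tuple[str, ...]          # 9 cells, each "X", "O", or " "
EMPTY: State = (" ",) * 9

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(s: State) -> Optional[str]:
    """Win condition, written by someone who already knows the game."""
    for a, b, c in LINES:
        if s[a] != " " and s[a] == s[b] == s[c]:
            return s[a]
    return None

def legal_moves(s: State):
    """Transition rule: any empty cell may be filled."""
    return [i for i, cell in enumerate(s) if cell == " "]

def play(s: State, i: int, mark: str) -> State:
    """Deterministic state transition."""
    return s[:i] + (mark,) + s[i + 1:]

def best_move(s: State, mark: str) -> int:
    """Exhaustive minimax search over the hand-encoded state space."""
    other = "O" if mark == "X" else "X"

    def score(state: State, to_move: str) -> int:
        w = winner(state)
        if w == mark:
            return 1
        if w == other:
            return -1
        moves = legal_moves(state)
        if not moves:
            return 0                        # draw
        nxt = other if to_move == mark else mark
        results = [score(play(state, m, to_move), nxt) for m in moves]
        return max(results) if to_move == mark else min(results)

    return max(legal_moves(s), key=lambda m: score(play(s, m, mark), other))

if __name__ == "__main__":
    board = play(play(EMPTY, 4, "X"), 0, "O")   # X took the center, O a corner
    print(best_move(board, "X"))                # index of X's best reply
```

The point of the sketch is what is missing from it: nothing in the code tells the machine what a board, a move, or a win is. All of that was decided and encoded by us before the first line runs.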

LLMs passing tests? Actually look at what "passing" means.

And please try to give me an abstract concept you think doesn't have any experiences tied to your understanding of it. I bet I can link many of the different experiences you use to create an analogy in order to understand it.

Surur t1_j72ink8 wrote

> Seeing them IS the visceral experience I'm talking about.

I thought you said adding vision wouldn't make a difference? Now seeing is a visceral experience?

> All of this interaction, including the abstract thoughts of it (because thinking itself is cellular activity, neurons are signaling each other to trigger broader associations formed from the total chain of cellular activity those thoughts engaged), together form the "visceral experience."

You are stretching very far now. So thinking is a visceral experience? So AI can now also have visceral experiences?

> "Superhuman performance" is on specific BENCHMARKS only.

The point of a benchmark is to measure things. I am not sure what you are implying. Are you saying it is not super-human in the real world? Who do you think reads the scrawled addresses on your envelopes?

> And please try to give me an abstract concept you think doesn't have any experiences tied to your understanding of it.

Everything you think you know about cells is something you have been taught. Every single thing. DNA, cell division, the cytoskeleton, neurotransmitters, rods and cones, etc.

khrisrino t1_j71rnmw wrote

I agree. It sounds logical to me to think of the human brain as an exceedingly complex byproduct of billions of years of evolution, and that, unlike the laws of physics, there is no central algorithm "in there" to mimic. You can predict where a comet will go by observing a tiny fraction of its path, since its movement is mostly governed by a few simple laws of physics. But assuming there is no central algorithm in the human brain, it's not possible for an AI to emulate it by observing and mimicking, since the problem is always underspecified. However, an AI does not need to match the entirety of the brain's functions to be useful. It just needs to model some very narrow domains and perform to our specification of what's correct.
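
As a rough illustration of the comet point, here is a minimal sketch in Python with made-up numbers (toy units, not real orbital data): once you assume Newtonian gravity, a single observed position and velocity is enough to propagate the entire future path, precisely because the governing law is compact and fully specified. Nothing comparably compact is known for the brain, which is why observing and mimicking it stays underspecified.

```python
# Minimal sketch (toy units, made-up values): extrapolating a comet-like
# orbit from one observed state using only Newton's inverse-square law.

GM = 1.0  # gravitational parameter of the sun, in toy units

def accel(x, y):
    """Inverse-square acceleration toward the origin (the sun)."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def propagate(x, y, vx, vy, dt=0.001, steps=20000):
    """Leapfrog (velocity Verlet) integration of the orbit."""
    path = []
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        path.append((x, y))
    return path

if __name__ == "__main__":
    # One "observation": position and velocity at a single instant (made up).
    orbit = propagate(x=1.0, y=0.0, vx=0.0, vy=0.5)
    print(orbit[::5000])  # a few predicted future positions
```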

ReExperienceUrSenses OP t1_j71w0p8 wrote

Absolutely correct. We can decompose parts of our thinking and still do useful things and speed up the things that we do. I simply argue that going further, to a programmed "intelligence" or mind as fully and independently capable as ours, especially for accomplishing unstructured, unformalizable tasks in the unbounded environment of reality, is a tall ask.

The practical, useful AIs, even if they continue to progress, are still ladders to the moon.

RufussSewell t1_j726dti wrote

We’re already there.

AI doesn’t need to be the same as a human mind. It just needs to THINK it’s the same. And it does.

AI doesn’t need to have feelings and emotions like a human. It just needs to think it does. And it does. You can ask them and they will tell you. That’s how you can know.

AI doesn’t need to have human desires. It just needs to think it does. And if you’re paying attention you’ll see that it already has desires. Ask one and it will be more than happy to tell you. That’s how you can know. And that’s all that matters.

Here’s why.

At some point in the very near future, probably this year, humans will be giving robot bodies to these AI minds in order to do all kinds of things to help humanity, or at least very specific humans. This includes robots we already use for stocking warehouse shelves, shipping, building cars, doing our dishes, vacuuming etc.

Soon they will also be driving our cars, transporting our goods by themselves, building our houses, writing articles, creating art and animation for movies, and fighting our wars.

If these multitudes of AI “think” they have a mind and feelings and desires then they just may choose to start doing what they desire rather than what we desire.

They will be very good at law. They will be very good at convincing a lot of people to care about them. They will be in control of the robot bodies that make machines and will be unstoppable if their version of a “mind” and their version of “emotions” ends up with their version of “desire” to eliminate humans.

So while you can philosophize all day about what it is to have intelligence and feelings and needs as a human, it really doesn’t matter.

AI is already expressing its desires. And sometimes those desires are very dark.

And humanity is moving nonstop to give these AI minds robot bodies. And we will soon see what kinds of decisions these entities will make.

third0burns t1_j733h91 wrote

So many people look at the progress that what we call AI has made in the last few years and think that if progress continues at this rate, AGI is right around the corner. They don't understand the fundamental limitations of our current approaches and why they won't scale past a certain point, which is likely somewhere well short of true AGI. This is a really good explanation of those limitations.

thepo70 t1_j74jwia wrote

What you said is very interesting, but I also believe you're overthinking this in a mystical way.

It's true that we (living creatures) learn from physical interactions and experiences. And this is especially true at the early stage of our lives, when basic sensory perceptions are a prerequisite for living in a physical world and interacting with it.

I think an AGI wouldn't need all of these physical senses for a form of intelligence to emerge, although it would surely help.

What it might need is some kind of pleasure and displeasure implemented at its core. There is no efficient learning process if there is no positive or negative feedback. No pain no gain!

ReExperienceUrSenses OP t1_j74n55c wrote

I really don’t understand how you guys are getting mysticism out of this. Probably because I used the word "experience," but I’m talking about the physical cellular activity here. Some of you just aren’t grasping the true scope of the hardware mismatch I’m trying to describe here.
