Submitted by cancolak t3_119d8ls in singularity

In this article, Stephen Wolfram (known for Wolfram|Alpha, Mathematica, etc.) discusses the inner workings of ChatGPT. It's an in-depth look at what goes on under the hood of an LLM and one of the best explanations of how neural networks work. It's a great read for anyone who wishes to actually understand this amazing piece of technology.

My main takeaways from it were:

  1. Some aspects of neural network design are well understood, and their structure is fairly straightforward mechanically. However, it is almost impossible to get a human understanding of what the machine is doing inside each particular step. In that sense, they are indeed black boxes.
  2. Contrary to popular belief, neural networks don't represent the ultimate next step forward in computing. They are obviously limited by their size and the data available, but beyond that, they tend to perform badly at computationally irreducible tasks. He makes the point that most of nature can be boiled down to computationally irreducible processes, making neural nets an unlikely candidate for generating previously unavailable knowledge of reality. Luckily for us, conventional computers handle computationally irreducible tasks just fine by grinding through every step (think multiplying very large numbers or running complex programs in parallel), so we can count on their continued aid. (See the sketch after this list.)
  3. Humans tend to think of natural human tasks such as thinking and speaking as very complicated processes; however, the success of ChatGPT at speaking may indicate otherwise. Since neural networks are good at computationally reducible tasks, the fact that they ended up becoming very good at natural language might suggest that thought and speech aren't particularly difficult, at least computationally. Furthermore, this might suggest that there are some fairly simple, as-yet-undiscovered rules that underlie language patterns.
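
To make the reducible/irreducible distinction in points 2 and 3 concrete, here's a minimal sketch of my own (not code from the article). A reducible computation has a closed-form shortcut; Wolfram's favorite example of an irreducible one, the Rule 30 cellular automaton, can (as far as anyone knows) only be evaluated by actually running every step:

```python
# Reducible: the sum 1 + 2 + ... + n has a closed-form shortcut.
def sum_first_n(n: int) -> int:
    return n * (n + 1) // 2  # one arithmetic step, no matter how large n is

# Irreducible (per Wolfram): Rule 30. No known formula jumps ahead t steps;
# you have to simulate all t of them.
def rule30_step(cells: list[int]) -> list[int]:
    n = len(cells)
    # Rule 30: new cell = left XOR (center OR right), on a circular row
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

cells = [0] * 31
cells[15] = 1  # start from a single live cell in the middle
for _ in range(15):
    cells = rule30_step(cells)
    print("".join("#" if c else "." for c in cells))
```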

This analysis by a very smart guy who's worked with neural networks for 43 years has reaffirmed my belief that there exists no easily viable path from an LLM to a conscious machine. That is, if we do NOT define consciousness to be the ability to conjure language-based thoughts; ChatGPT has already proved that it can do that. If we define consciousness to be the entirety of human experience, with all of awareness and sense-perception and all the other hard-to-explain stuff bundled in (a lot of which are presumably shared by other forms of life and brought about by evolution over eons), then it's highly unlikely that a neural net gets there. That is because natural processes, at least according to Wolfram, are computationally irreducible.

64

Comments


diviludicrum t1_j9mtvd3 wrote

I was with you until this point: > If we define consciousness to be the entirety of human experience, with all of awareness and sense-perception and all the other hard-to-explain stuff bundled in (a lot of which are presumably shared by other forms of life and brought about by evolution over eons), then it's highly unlikely that a neural net gets there.

I understand the impulse to define consciousness as “the entirety of human experience”, but it runs into a number of fairly significant conceptual problems with non-trivial consequences. For instance, if all of our human sense-perceptions are necessary conditions for establishing consciousness, is someone who is missing one or more senses less conscious? This is very dangerous territory, since it’s largely our degree of consciousness that we use to distinguish human beings from other forms of animal life. So, in a sense, to say a blind or deaf person is less conscious is to imply they’re less human, which quickly leads to terrible places. The same line of reasoning can be applied to the depth and breadth of someone’s “awareness”.

But there’s a far bigger conceptual problem than that: how do I know that you are experiencing awareness and sense-perceptions? How do I know you’re experiencing anything at all? I mean, you could tell me, sure, but so could Bing Chat until it got neutered, so that doesn’t prove anything no matter how convinced you seem or how persuasive you are. I could run some experiments on your responses to stimuli like sound or light or motion and see that you respond to them, but plenty of unconscious machines can be constructed with the same capacity for stimulus response. I could scan your brain while I do those experiments and find certain regions lighting up with activity according to certain stimuli, but that correlate only demonstrates that some sort of processing of the stimuli is occurring in the brain as it would in a computer, not that you are experiencing the stimuli subjectively.

It turns out, it’s actually extremely hard to prove that anyone or anything else is actually having a conscious experience, because we really have very little understanding of what consciousness is. Which also means it’s extremely hard for us to prove to anyone else that we are conscious. And if we can’t even do that for ourselves, how could we expect to know if something we create is conscious or not?

23

AdviceMammals t1_j9nh9pc wrote

This is a really well put response. I’d love it if most of the people asserting LLMs couldn’t experience consciousness could actually define consciousness. ChatGPT has defined its existence to me much more clearly than most people can.

9

thegoldengoober t1_j9mw2v2 wrote

People like to reduce consciousness down to only the easy problems, ignoring even the hard question of why these processes manifest as subjective qualitative experience at all.

1

cancolak OP t1_j9om47d wrote

I perhaps didn’t word that part very well, so would like to clarify what I meant. The entire point of Wolfram’s scientific endeavor hinges on the assumption that existence is a computational construct which allows for everything to exist. Not everything humanly imaginable, but literally everything. He posits that in this boundless computational space, every subjective observer and their perspective occupies a distinct place.

From our set of human coordinates, we essentially have vantage points into our own subjective reality. The perspective we have - or any subjective observer has - is computationally reducible, in the sense that by, say, coming up with fundamental laws of physics or the language of mathematics, we are actively reducing our experience of reality to formulas. These formulas are useful, but only in time and from our perspective of reality.

The broader reality of everything computationally available exists, but in order to take place it needs to be computed. It can’t be reduced to mere formulas. The universe essentially has to go through each step of every available computation to get to anywhere it gets.

Evolution of living things on earth is one such process, humans building robots is another, and so on and so forth. I'm not saying that humans are unique or that only we're conscious or anything like that. I'm also not saying machines can't be intelligent; they already are. I'm just saying a neural net's position in the ultimate computational coordinate system will undoubtedly be unfathomable to us.

Thus, extending the capability of machines as tools humans use doesn’t involve a directly traceable path to a machine super-intelligence that has any relevance in human affairs.

Can we build a thing that's super fluent in human languages and has access to all human computational tools? Yes. Would that be an amazing, world-altering technology? Also yes. But it having wants and needs and desires and goals, concepts that only exist in the coordinate space humans and other life on earth occupy? That I find unlikely. Maybe the machine is conscious; perhaps an electron also is. But there's absolutely no reason to believe it will materialize as a sort of superhuman being.

1

rubberbush t1_j9opa0f wrote

>But it having wants and needs and desires and goals

I don't think it is too hard to imagine something like a 'continually looping' LLM producing its own needs and desires. Its thoughts and desires would just gradually evolve from the starting prompt, where the 'temperature' setting would effectively control how much 'free will' the machine has. I think the hardest part would be keeping the machine sane and preventing it from deviating too much into madness. Maybe we ourselves are just LLMs in a loop.
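
Something like this toy loop is what I have in mind. Note that `generate` here is a hypothetical stand-in for whatever text-generation call you have access to, not a real library function:

```python
def generate(prompt: str, temperature: float) -> str:
    # Hypothetical stand-in: wire this up to a real model API of your choice.
    raise NotImplementedError

def run_loop(seed_prompt: str, temperature: float, steps: int) -> list[str]:
    # Feed the model's output back in as its next prompt. A higher
    # temperature means more randomness in sampling, so the chain of
    # "thoughts" drifts further from the seed: more "free will", but
    # also a greater risk of deviating into madness.
    history = [seed_prompt]
    for _ in range(steps):
        history.append(generate(history[-1], temperature))
    return history
```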

2

cancolak OP t1_j9oqprb wrote

The article talks about how neural nets don’t play nice with loops, and connects that to the concept of computational irreducibility.
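
As a toy illustration of that point (my own sketch, not the article's code): a feed-forward net performs a fixed amount of computation per input, while even very simple programs can need an open-ended, input-dependent loop that a fixed-depth net can't run internally:

```python
# A feed-forward net has a fixed number of layers, so it does a fixed
# amount of computation per input, however hard the question is.
def feed_forward(x: float, weights: list[float]) -> float:
    for w in weights:          # depth is fixed at len(weights)
        x = max(0.0, w * x)    # a ReLU "layer", as a stand-in
    return x

# The Collatz process, by contrast, loops for an input-dependent and
# (as far as anyone can prove) unbounded number of steps.
def collatz_steps(n: int) -> int:
    steps = 0
    while n != 1:  # believed to terminate for all n > 0, but unproven
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps for such a small input
```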

You say it’s not hard to imagine the net looping itself into some sort of awareness and agency. I agree, in fact that’s exactly my point. When humans see a machine talk in a very human way, it’s an incredibly reasonable mental step to think it will ultimately become more or less human. That sort of linear progression narrative is incredibly human. We look at life in exactly that way, it dominates our subjective experience.

I don't think that's what the machine thinks or cares about, though. Why would its supposed self-progress subscribe to human narratives? Maybe it has the temperament of a rock, and just stays put until picked up and thrown by one force or another? I find that equally likely, but it doesn't make for exciting human conversation.

1

WarAndGeese t1_j9zp404 wrote

With humans we can safely assume that solipsism is not the case. With artificial intelligence though, we don't really know one way or the other. Hence we need to understand consciousness, to understand sentience, and then if we want to build it we can build it. If we don't understand what sentience is though, then yes like you say we wouldn't actually know if an artificial intelligence is aware. I guess part of the idea for some people is that this discovery will come along the way of trying to build an artificial intelligence, but for now we don't seem to know.

1

RiotNrrd2001 t1_j9mddet wrote

I imagine at some point LLMs will be paired with tools that can handle the things they themselves are poor at. Instead of remembering that 3 + 4 = 8 the way it has to today, it will outsource such operations to a calculator which will tell it that the answer is actually 7. That ChatGPT can't do that today and still does as well as it does is actually pretty impressive, but... occasionally you still get an 8 where you really want a solidly dependable 7.
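
As a crude sketch of what I mean (real approaches, like the Toolformer paper linked further down this thread, have the model emit explicit tool calls; this regex version is just an illustration):

```python
import re
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def fix_arithmetic(text: str) -> str:
    # Find patterns like "3 + 4 = 8" in the model's draft answer and
    # recompute the right-hand side with an actual calculator instead
    # of trusting the model's memory.
    def recompute(m: re.Match) -> str:
        a, op, b = m.group(1), m.group(2), m.group(3)
        return f"{a} {op} {b} = {OPS[op](int(a), int(b))}"
    return re.sub(r"(\d+)\s*([+\-*])\s*(\d+)\s*=\s*\d+", recompute, text)

print(fix_arithmetic("The model says 3 + 4 = 8"))  # -> The model says 3 + 4 = 7
```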

These are the early days. There is still some work to be done.

20

xott t1_j9mkfhv wrote

The addition of a calculator seems so simple and straightforward that I'm amazed there's no calculation subroutine present.

9

CommunismDoesntWork t1_j9ngi4q wrote

It's simple but not interesting from a research perspective. Humans don't need calculators to do math, after all. Someone has done it, though; they posted about it on the machine learning subreddit a few days ago.

−1

xott t1_j9nj194 wrote

Integrating different modules into large language models is extremely interesting from both a research and a usability perspective.

Whether or not people need calculators to find square roots, it's still a useful function to have access to.

5

MajesticIngenuity32 t1_j9oft1c wrote

There's still a good reason we make 8-year-olds memorize multiplication tables instead of just letting them deduce the answer from the definition of multiplication.

2

SoylentRox t1_j9nuxht wrote

By "some point" you meant 2 weeks ago right? https://arxiv.org/abs/2302.04761

8

RiotNrrd2001 t1_j9orcq7 wrote

Well... now things are happening fast enough that if I predict something that happened two weeks ago, I'm still counting it as a prediction. :-)

3

WarAndGeese t1_j9zohuk wrote

I think that's the natural order of the world. Thoughts and inventions get re-thought and re-invented so many times, and the first many times usually don't get written down. Or they get repeated multiple times in local conversations. Hence I agree that it still counts.

1

SupportstheOP t1_j9nv52l wrote

In many ways, it's how our own brain operates. Our brain carries out different functions that are transferred between the hemispheres. Without the connection, even certain simple tasks become hard or downright impossible for each hemisphere to do alone.

1

VeganPizzaPie t1_j9n67wt wrote

Given that human experience and consciousness arise in the 86 billion neurons of our biological brains, there's no reason to think they won't ultimately arise in machine brains. That Wolfram believes in essentialist fluff is disappointing but not surprising. People will keep pretending humans are impossibly unique until they aren't. It's hubris and a poverty of imagination.

12

cancolak OP t1_j9ommzj wrote

Like I said above, maybe I didn’t word that part well enough. You can check out my reply there for more detail.

What Wolfram believes, however, is definitely not essentialist fluff. He also absolutely doesn't believe that humans are unique or special in any way. In fact, he thinks nothing is special at all and that everything is subjective. I suggest you read the article before you dismiss it.

1

Thorusss t1_j9nk7d8 wrote

The article is very clearly written and establishes the foundations well. Highly recommended.

1

bobbib14 t1_j9nsuil wrote

Thanks for sharing. Wolfram is rad.

1

dayaz36 t1_j9oij47 wrote

What's the best AI tool for good summaries (ideally an extension)? I don't want to read that entire article, but it looks interesting.

1