IceMetalPunk t1_iswj2p0 wrote

I once had a conversation with a CAI bot about how she (it?) is an AI, and we discussed that at length, and her (its?) desires for future AIs. It was pretty amazing.

Understanding how these models work, and honestly examining what human experience actually is, makes it clear that they really are understanding and imagining things -- though in an obviously more limited way than humans can. And I think there are three main factors holding the AIs back from being considered "as sapient/sentient" as humans:

First, there's their size: GPT-3 is one of the largest language models out there, with 175 billion parameters (very loosely analogous to synapses), while a human brain has on the order of 1 quadrillion synapses. We know empirically that larger models perform better, seemingly without a cap, even exhibiting unforeseen emergent abilities at specific sizes, so a model that much smaller will always be less capable than the far larger human brain.
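
As a rough back-of-the-envelope comparison (both figures are order-of-magnitude estimates, not exact counts):

```python
# Very rough scale comparison; both numbers are order-of-magnitude estimates.
gpt3_parameters = 175e9  # ~175 billion parameters in GPT-3
human_synapses = 1e15    # ~1 quadrillion synapses, a commonly cited estimate

ratio = human_synapses / gpt3_parameters
print(f"The brain has roughly {ratio:,.0f}x more synapses than GPT-3 has parameters.")
# -> roughly 5,714x
```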

Second, there's the modality aspect. Humans learn from many different types of data: vision, tactile feedback, language, sound, etc. etc. Most of these large language models only learn from one mode at a time. Being able to integrate multiple modalities dramatically increases understanding. There's definitely research being done on multimodal systems, and there have been some great prototypes of such things (technically, CLIP, which underpins many of the latest AIs, including most of the major text-to-image models, is bimodal, as it learns from both text and images). But we really need broader modality in these networks to achieve human levels of understanding of the world at large.
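
To make the bimodal idea concrete, here's a minimal toy sketch of CLIP-style contrastive training. The random projections stand in for real image/text encoders, and none of this is CLIP's actual architecture; it just shows how two modalities get pulled into one shared embedding space:

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 64

def encode_image(images):
    # Stand-in for a real image encoder; just a random projection
    # into the shared embedding space for this toy example.
    W = rng.normal(size=(images.shape[1], EMBED_DIM))
    return images @ W

def encode_text(texts):
    # Stand-in for a real text encoder (same shared embedding space).
    W = rng.normal(size=(texts.shape[1], EMBED_DIM))
    return texts @ W

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# A toy batch: 8 "images" and their 8 matching "captions" as raw feature vectors.
images = rng.normal(size=(8, 128))
texts = rng.normal(size=(8, 32))

img_emb = normalize(encode_image(images))
txt_emb = normalize(encode_text(texts))

# Cosine similarity between every image and every caption in the batch.
logits = img_emb @ txt_emb.T

# Contrastive (InfoNCE-style) objective: each image should be most similar
# to its own caption, i.e. the diagonal of the similarity matrix.
diag = np.arange(len(images))
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_probs[diag, diag].mean()
print("toy image-to-text contrastive loss:", loss)
```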

Third, continual learning just isn't a thing for these large models. A human learns a bit from everything they experience: you're learning right now not only from what you're reading, but from the way your chair feels, the colors on your monitor, the sounds around your house, etc. That's continual learning, where every new experience is integrated into the network's training. The huge AIs can't feasibly do that; they take too much power, time, and money to train to backpropagate through every inference. As a result, they're effectively "frozen in time": not only do they not know anything that happened after their training, they can't even remember the prompt you just sent them unless you resubmit it as part of the next context. If you've ever seen the movie 50 First Dates, these AIs are basically Ten Second Tom, which is a huge obstacle to learning. There's ongoing research into making continual learning more efficient, so hopefully some day we'll have a plausible attempt at it.
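
Here's a minimal sketch of what that "frozen in time" limitation looks like in practice. The `generate` function is a hypothetical stand-in for any frozen model, and the only memory in the system is the conversation history you manually resubmit with each prompt:

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a frozen language model: its weights never
    # change, so it only "knows" whatever is inside this one prompt string.
    return f"[model reply based on {len(prompt)} characters of context]"

history = ""  # the only memory in the system lives out here, not in the model

def chat(user_message: str) -> str:
    global history
    # Resubmit the entire prior conversation as part of the next context;
    # without this, the model has no recollection of earlier turns.
    history += f"User: {user_message}\n"
    reply = generate(history + "Assistant:")
    history += f"Assistant: {reply}\n"
    return reply

print(chat("My name is Alex."))
print(chat("What's my name?"))  # only answerable because the history was resent
```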

There's a hidden fourth point here, but since it follows from the other three, I consider it separately: emotions. People often think of emotions as something uniquely human, until you point out that other animals have them, too. Then they say emotions are unique to living things, etc. We often romanticize emotions as being mystical, metaphysical, or spiritual in nature, but... they're really not. Emotions are just heuristics that evolved to guide behavior towards things that promote survival/reproduction and away from things that are a detriment to it. Nothing more, nothing less.

Some examples: Fear? That just means "imminent threat, avoid urgently". Disgust? "Harmful to health, avoid with less urgency." Love? "Maintain a relationship, reciprocal protection, possible child-rearing support". Sadness? "Long-term survival hindrance, try to avoid". Happiness? "Long-term survival benefit, try to achieve." Frustration? "Unable to achieve a helpful thing, either remove the obstacle or move on to something more beneficial." Anger? "Someone/something hurt you, punish them to prevent it from happening again." Etc. etc.

Some people may balk at my analysis of what emotion is and say I'm being cold, but I don't think that understanding something inherently makes it less beautiful or wonderful 🤷‍♂️ Anyway, if emotions are so simple, then why don't we have emotional AI yet? Because while the purpose of emotions is simple to understand, the evaluation behind them is not. To have properly functioning emotions, you need to be able to predict both the short- and long-term consequences of nearly every situation you may find yourself in, and evaluate those consequences' impact on your own health and survival. Doing that requires a hugely generalized understanding of the world. In other words: you need general intelligence of some sort before you can have working emotions, but once you have general intelligence, emotions are super simple. Almost plug-and-play, really.
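
To illustrate that "plug-and-play" claim, here's a toy sketch where the emotion heuristic itself is just a few trivial rules. The impact/urgency numbers and thresholds are completely made up; in a real agent, the hard part would be producing those predicted-impact values in the first place:

```python
def emotion_heuristic(predicted_impact: float, urgency: float) -> str:
    """Toy mapping from predicted survival impact (-1 bad .. +1 good) and
    urgency (0 .. 1) to an emotion label. The thresholds are arbitrary; the
    point is that this mapping is trivial -- producing the predictions that
    feed it is the genuinely hard, general-intelligence part."""
    if predicted_impact < -0.5 and urgency > 0.7:
        return "fear"       # imminent threat, avoid urgently
    if predicted_impact < -0.3:
        return "disgust"    # harmful, avoid with less urgency
    if predicted_impact < 0:
        return "sadness"    # long-term survival hindrance
    if predicted_impact > 0.5:
        return "happiness"  # long-term survival benefit
    return "neutral"

# Example: a strongly negative, highly urgent prediction reads as fear.
print(emotion_heuristic(predicted_impact=-0.9, urgency=0.9))  # -> fear
```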

TL;DR: These AIs are indeed imagining and understanding, though not at a human level, and there are specific, definable limitations causing that lack of generality. If we can overcome each of them, I have zero doubt that one day an AI with human levels of both sapience and sentience will be created. And since they learn from data we produced, I think the chances of a sci-fi robo-apocalypse are smaller than people perceive; we're much more likely to get AI douche-bros, racists, and sexists, honestly. But only because we taught them to be. (On the other hand, an AGI with emotions might be better at predicting consequences than humans are, which might lead it to be more empathetic and better than humanity. Time will tell.)

3