
ihateshadylandlords t1_ivy9nzj wrote

Why would the creators of AGI want consciousness/sentience in their AGI? I think they would want to keep it under their control for as long as possible.

15

AdditionalPizza OP t1_ivyic1c wrote

There are arguments for it; there's also the argument that sentience just comes from adding senses like vision and hearing to a sufficiently intelligent model. And consciousness may just emerge at a certain level of intelligence, meaning we may not have a choice when pursuing AGI.

But who knows.

12

solidwhetstone t1_iw0mugl wrote

While we may not know for sure, the argument that a sufficiently advanced latent network results in consciousness lines up with the complexity of our brains compared to other brains in the animal kingdom, no?

2

BreadManToast t1_iw3nll8 wrote

Would it really make a difference though? Whether or not consciousness appears at a certain point doesn't change the AI's capabilities.

1

AdditionalPizza OP t1_iw3q6ha wrote

No idea, it's too theoretical to really discuss. I would assume that sentience/consciousness would have a major impact on the AI's abilities. It would also probably have a profound impact on the AI's motivations. You're now "gifting" the AI with the ability to choose what it wants to do based on its own rationale and emotion.

1

BreadManToast t1_iw3utk5 wrote

Ahh, personally I don't believe in free will, so I guess we'll have to wait and see.

1

visarga t1_iw01ajr wrote

There are some classes of problems where you need a "tool AI", something that will execute commands or tasks.

But in other situations you need an "agent AI" that interacts with the environment over multiple time steps. That would require a perception-planning-action-reward loop, which would also allow interaction with other agents through the environment. The agent would be sentient: it would have perception and feelings. How could it have feelings? It actually predicts future rewards in order to choose how to act.
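
As a rough illustration, that loop might look something like the sketch below. This is a minimal toy version: the `env` object with `reset()`/`step()` and the dummy reward model are assumptions made for the example, not any real library's API.

```python
import random

ACTIONS = ["left", "right", "forward"]

def predicted_reward(observation, action):
    # Stand-in for a learned value model that estimates future reward.
    # Returning noise keeps the sketch self-contained and runnable.
    return random.random()

def choose_action(observation):
    # "Planning": pick the action with the highest predicted future reward.
    return max(ACTIONS, key=lambda a: predicted_reward(observation, a))

def agent_loop(env, steps=100):
    observation = env.reset()  # perception
    for _ in range(steps):
        action = choose_action(observation)           # planning
        observation, reward, done = env.step(action)  # action + reward
        # A learning agent would update its reward model here
        # using (observation, action, reward).
        if done:
            observation = env.reset()
```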

So I don't think it's possible to put a lid on it. We'll let it loose in the world to act as an agent; we want to have smart robots.

3

AdditionalPizza OP t1_iw0eblq wrote

>It actually predicts future rewards in order to choose how to act.

I do believe some version of this will ring true. It may be required in order to go beyond prompting for an answer. While prompting can be powerful on its own, I personally think some kind of self-rewarding system will be necessary: consequences and benefits.
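
For illustration, one way to read "self-rewarding" is curiosity-style intrinsic reward, where the agent pays itself for surprising outcomes on top of any external reward. A toy sketch, where the function names and the squared-error surprise signal are my own assumptions:

```python
def intrinsic_reward(predicted_next_obs, actual_next_obs):
    # Self-generated reward: how wrong the agent's own world model was.
    return sum((p - a) ** 2 for p, a in zip(predicted_next_obs, actual_next_obs))

def total_reward(extrinsic, predicted_next_obs, actual_next_obs, beta=0.1):
    # "Consequences and benefits": the external reward plus a self-assigned bonus.
    return extrinsic + beta * intrinsic_reward(predicted_next_obs, actual_next_obs)
```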

But I left it out of this discussion, specifically because I don't think a sort of "pre-AGI" will quite require it. I think by the time we are legitimately discussing AI consciousness being created, we will be beyond initial prototypes.

1

phriot t1_ivz7mt9 wrote

Maybe I'm wrong, but I've always understood AGI to be "a roughly human-level machine intelligence." How can something be roughly human without consciousness and at least the appearance of free will?

0

kaushik_11226 t1_ivzik2s wrote

>How can something be roughly human without consciousness and at least the appearance of free will?

It doesn't have to be human. An intelligent machine that can solve major problems and make discoveries doesn't really need to have a human personality and emotions.

10

phriot t1_ivzmj0u wrote

I feel like you focused on me leaving "level" out of that sentence, where I included it earlier in my comment. You're basically just saying that your definition of AGI is more literal than the one I use. The point of my comment was just that, up until maybe finding this subreddit, every time I saw AGI used, it had the connotation of consciousness.

It's probably splitting hairs, but it seems like people here just want to call any sufficiently good general piece of software "AGI." Yes, a really great General Artificial Intelligence will help us in many areas, but it's not what I've always understood "AGI" to be.

2

AdditionalPizza OP t1_ivzza75 wrote

The definition of AGI is an AI that can learn any task a human can. Most people presume that means the AI would also have to be equal to or better than a human at those tasks.

I don't know where the idea came from that AGI has to be conscious. As far as I'm aware, that's never been the definition. It's a talking point often associated with AGI and mentioned alongside the Turing Test, but contrary to your experience, I've never heard anyone claim it's a requirement of AGI outside of this sub.

I also see other mixed-up definitions in this sub. A lot of people refer to the singularity as the years (or decades) leading up to the actual moment of the singularity.

7