
visarga t1_iw01ajr wrote

There are some classes of problems where you need a "tool AI", something that will execute commands or tasks.

But in other situations you need an "agent AI" that interacts with the environment over multiple time steps. That would require a perception-planning-action-reward loop, and it would allow interaction with other agents through the environment. The agent would be sentient - it would have perception and feelings. How could it have feelings? It actually predicts future rewards in order to choose how to act.
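The loop described above can be sketched in a toy form. This is a minimal, hypothetical illustration (all class and variable names are my own invention, not from any specific library): the agent maintains a running prediction of each action's reward, acts greedily on those predictions, and updates them from the reward it actually receives - perception, planning, action, reward, repeated.

```python
import random

class ToyEnvironment:
    """Hypothetical environment: reward depends only on the chosen action."""
    def __init__(self, mean_rewards):
        self.mean_rewards = mean_rewards  # true mean reward per action

    def step(self, action):
        # return a noisy reward signal around the true mean
        return self.mean_rewards[action] + random.gauss(0, 0.1)

class RewardPredictingAgent:
    """Keeps a running prediction of each action's reward and acts greedily on it."""
    def __init__(self, n_actions, lr=0.2, epsilon=0.1):
        self.estimates = [0.0] * n_actions  # predicted future reward per action
        self.lr = lr            # learning rate for updating predictions
        self.epsilon = epsilon  # probability of random exploration

    def act(self):
        # occasionally explore at random; otherwise pick the best prediction
        if random.random() < self.epsilon:
            return random.randrange(len(self.estimates))
        return max(range(len(self.estimates)), key=lambda a: self.estimates[a])

    def learn(self, action, reward):
        # nudge the prediction for this action toward the observed reward
        self.estimates[action] += self.lr * (reward - self.estimates[action])

random.seed(0)
env = ToyEnvironment(mean_rewards=[0.1, 1.0, 0.3])
agent = RewardPredictingAgent(n_actions=3)

for _ in range(500):          # perception -> planning -> action -> reward loop
    action = agent.act()      # choose by predicted future reward
    reward = env.step(action) # act on the environment, observe reward
    agent.learn(action, reward)

best = max(range(3), key=lambda a: agent.estimates[a])
print(best)  # the agent converges on the highest-reward action
```

This is of course a bandit-style caricature, not a sentient agent, but it shows the mechanism being pointed at: action selection driven by learned predictions of future reward.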

So I don't think it is possible to put a lid on it. We'll let it loose in the world to act as an agent, because we want smart robots.


AdditionalPizza OP t1_iw0eblq wrote

>It actually predicts future rewards in order to choose how to act.

I do believe some version of this will ring true. Going beyond simply prompting for an answer may require it. While prompting can be powerful on its own, I personally think some kind of self-rewarding system will be necessary: consequences and benefits.

But I left it out of this discussion specifically because I don't think a sort of "pre-AGI" will quite require it. By the time we are legitimately discussing the creation of AI consciousness, I think we will be well beyond initial prototypes.
