
Surur t1_j6w14rs wrote

> In a digital system, we can be selective about which functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.

I believe it is much more likely that we will produce a black box which is an AGI, which we then employ to do specific jobs, rather than being able to turn an AGI into a classic rule-based computer. The AGI we use to control our factory will likely know all about Abraham Lincoln, because it will have that background from learning to use language to communicate with us, and from knowing about public holidays and all the other things we take for granted with humans. It will be able to learn and change over time, which is the point of an AGI. There will be an element of unpredictability, just as with humans.


AsheyDS t1_j6xdpl6 wrote

>I believe it is much more likely we will produce a black box which is an AGI

Personally, I doubt that... but if current ML techniques do somehow produce AGI, then sure. I just highly doubt they will. I think AGI will be more accessible, predictable, and understandable than current ML processes if it's built in a different way. But of course there are many unknowns, so nobody can say for sure how things will go.
