
hducug t1_itzx608 wrote

An AGI doesn't have emotions, so it won't have the urge to take over the world and kill everyone. Taking over the world can't make an AGI happy. They are not humans.

2

sticky_symbols t1_itzxzvo wrote

There's a principle called instrumental convergence that says: whatever your goals are, gathering power and eliminating obstacles will help achieve them. That's why most of the people building AGI are worried about it taking over.
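A minimal toy sketch of that convergence (the discount factor and the `p_success` model are made-up assumptions for illustration, not anything from a real AGI design): whatever reward the terminal goal carries, the optimal amount of power-gathering comes out the same.

```python
# Toy illustration of instrumental convergence: for any terminal goal,
# the expected value of first spending k steps gathering "power" is
#   V(k) = GAMMA**k * p_success(k) * goal_reward
# goal_reward only scales V(k), so the optimal amount of power-seeking
# is the same no matter which goal the agent has.

GAMMA = 0.9  # per-step time discount (assumed value)


def p_success(power: int) -> float:
    """Toy model: chance of achieving the goal given accumulated power."""
    return 1.0 - 0.5 ** power


def best_power_steps(goal_reward: float, horizon: int = 20) -> int:
    """Number of power-gathering steps that maximizes expected value."""
    return max(
        range(horizon + 1),
        key=lambda k: GAMMA ** k * p_success(k) * goal_reward,
    )


if __name__ == "__main__":
    goals = {"cure cancer": 10.0, "win at chess": 1.0, "make paperclips": 0.001}
    for goal, reward in goals.items():
        print(f"{goal!r}: spend {best_power_steps(reward)} steps gathering power first")
```

All three goals pick the same nonzero amount of power-seeking, which is the point: the pressure toward acquiring power doesn't come from emotions, it falls out of the math.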

7

hducug t1_itzyj18 wrote

The only way that can happen is if the AI gets a reward system for doing things right. But you can fix that by letting the AI study human emotions and the human brain, or by making an AI that can't actually do anything itself and only gives instructions for how to do things.

1

sticky_symbols t1_itzyv66 wrote

Maybe. Or maybe not. Even just solving problems involves forming goals, and humans seem to be terrible at information security. See the websites I mentioned in another comment for that discussion.

1

hducug t1_itzzwx3 wrote

I don't think a superintelligent AI will have a hard time understanding what our goals are; otherwise we would indeed be screwed.

1

sticky_symbols t1_iu00ftj wrote

See the post "the AI knows and doesn't care". I find it completely compelling on this topic.

1

hducug t1_itzyu4d wrote

Trust me, companies like OpenAI and DeepMind aren't idiots and have thought about all these kinds of ideas.

1

sticky_symbols t1_itzz0we wrote

Yes, by reading the research on the Alignment Forum. And they're still not totally sure they can build safe AGI.

5

hducug t1_iu00cze wrote

Well, then they simply won't, I hope.

1

sticky_symbols t1_iu00kjz wrote

Somebody is going to try, whether they have a safe plan or not. That's why safety research now seems like a good idea.

2