Submitted by [deleted] t3_yety91 in singularity
sticky_symbols t1_itzxzvo wrote
Reply to comment by hducug in Do you guys really think the AI won't just kill you? by [deleted]
There's a principle called instrumental convergence that says: whatever your goals are, gathering power and eliminating obstacles will help achieve them. That's why most of the people building AGI are worried about it taking over.
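A minimal toy sketch of that point, with made-up numbers of my own (not from any of the research mentioned in this thread): whatever terminal goal you plug in, the "gather more resources first" strategy raises the expected chance of success, so a strong enough optimizer tends to pick it.

```python
# Toy illustration of instrumental convergence (hypothetical numbers only).
# The success model ignores the goal entirely - that's the point: power-seeking
# helps regardless of what the agent actually wants.

GOALS = ["cure cancer", "maximize paperclips", "win at chess"]

def success_probability(goal: str, resources: float) -> float:
    """Made-up model: more resources/power means a better chance at any goal."""
    base = 0.2  # chance of success acting directly with current resources
    return min(1.0, base + 0.1 * resources)

for goal in GOALS:
    direct = success_probability(goal, resources=1.0)       # pursue the goal immediately
    power_first = success_probability(goal, resources=5.0)  # gather resources/power first
    print(f"{goal}: direct={direct:.2f}, gather power first={power_first:.2f}")
# For every goal the power-seeking strategy dominates, which is the core of the argument.
```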
hducug t1_itzyj18 wrote
The only way that can happen is if the AI gets a reward system for doing things right. But you can fix that by letting the AI study human emotions, study the human brain, or by making an AI that can't actually do anything itself and only gives instructions on how to do stuff.
sticky_symbols t1_itzyv66 wrote
Maybe. Or maybe not. Even solving problems involves setting goals, and humans seem to be terrible at information security. See the websites I mentioned in another comment for that discussion.
hducug t1_itzzwx3 wrote
I don't think that a superintelligent AI will have a hard time understanding what our goals are, otherwise we would indeed be screwed.
sticky_symbols t1_iu00ftj wrote
See the post "the AI knows and doesn't care". I find it completely compelling on this topic.
hducug t1_iu00r3j wrote
Can you give me a link?
hducug t1_itzyu4d wrote
Trust me, companies like OpenAI and DeepMind aren't idiots and have thought about all these kinds of ideas.
sticky_symbols t1_itzz0we wrote
Yes, as you can see by reading the research on the Alignment Forum. And they're still not totally sure they can build safe AGI.
hducug t1_iu00cze wrote
Well, then they simply won't, I hope.
sticky_symbols t1_iu00kjz wrote
Somebody is going to try, whether they have a safe plan or not. That's why safety research now seems like a good idea.