Submitted by [deleted] t3_10xoxh6 in Futurology
Baturinsky t1_j7ttnra wrote
You are right, but it's quite hard to implement.
There is a whole field, called AI Alignment Theory, which is trying to figure out how to build AGI without destroying humanity.
There is the https://www.reddit.com/r/ControlProblem/ subreddit about it.
It's half-dead, and the admins there are quite unfriendly to newcomers posting (and I suspect those two things are related), but it has good introductory info in its sidebar.
There is also https://www.lesswrong.com/tag/ai, with a lot of articles on the subject.