
sticky_symbols t1_itzxmxi wrote

OP is absolutely correct. Naturally, there are arguments on both sides, and it probably matters a good deal how you build the AGI. There is a whole field that thinks about this. The websites LessWrong and the Alignment Forum offer brief introductions to AI safety thinking.
