
linearmodality t1_j9segxb wrote

I don't worry much about the AI safety/alignment concerns described by Eliezer Yudkowsky. I don't find his arguments particularly rigorous: they typically rest on premises that are either nonsensical or wrong, and they don't engage meaningfully with current practice in the field. That's not to say I don't worry about AI safety at all. Stuart Russell has done good work toward mapping out the AI alignment problem, and if you're looking for more rigorous arguments that lead to sounder conclusions on alignment, and which people in the field actually respect, I'd recommend his work. The bulk of opinions I've seen from people in the field on Yudkowsky and his edifice range from finding the work to be of dubious quality (but tolerable) to judging it actively harmful.

20