needlzor t1_j9sspwd wrote
Reply to comment by adventurousprogram4 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Surprised I had to scroll down this far to see this opinion, which I agree with completely. The danger I worry about most isn't superintelligent AI; it's people like Yudkowsky building little cults around the potential for superintelligent AI.