
sticky_symbols t1_j9gwa67 wrote

He's the direct father of the whole AGI safety field. I got interested after reading an article by him, maybe around 2004. Bostrom credits him with many of the ideas in Superintelligence, including the core argument that alignment is necessary for human survival.

Now he's among the least optimistic people in the field. And he's not necessarily wrong.

He could be a little nicer and more optimistic about others' intelligence.

46

GuyWithLag t1_j9j9l56 wrote

>He could be a little nicer and more optimistic about others' intelligence.

Apologies for sounding flippant, but the whole political situation since about '15 has shown that he himself is too optimistic...

5