
fmai t1_j9sns4a wrote

IMO, even if ML researchers assigned only a 0.1% chance to AI wiping out humanity, the cost of that outcome is so unfathomably large that it would be rational to shift a lot of resources from AI capability research to AI safety in order to drive that probability down.
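The arithmetic behind that claim can be sketched in a few lines. All the numbers below are hypothetical assumptions for illustration (the comment only commits to the 0.1% figure), and monetizing extinction is itself contested:

```python
# Expected-loss sketch: a tiny probability times an enormous cost
# can still yield a huge expected loss.
p_catastrophe = 0.001       # the 0.1% chance from the comment
# Assumed (hypothetical) cost: ~8 billion lives, valued at $1M each.
cost = 8e9 * 1e6
expected_loss = p_catastrophe * cost
print(f"Expected loss: ${expected_loss:.2e}")  # → Expected loss: $8.00e+12
```

Even at these conservative placeholder numbers, the expected loss is in the trillions, which is the intuition driving the "shift resources to safety" argument.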

If you tell people that society needs to do a lot less of the thing that is their job, it's no surprise they dismiss your arguments. The same applies to EY to some extent; I think it would be more reasonable for him to allow a lot more uncertainty in his predictions, but would he then have the same influence?

Rather than giving too much credit to expert opinions, it's better to look at the evidence from all sides directly. You seem to be doing that already, though :-)


Simcurious t1_j9umv53 wrote

That's Pascal's wager, and the same reasoning could be used to justify belief in hell/god.
