
NdGaM t1_iu0ore0 wrote

I’m not sure I understand your post. Could you clear up a few questions for me?

  1. Are you making a case for or against EA?
  2. Is there information I’m missing that shows EA supporting a population that doesn’t exist? As an American that phrasing immediately makes me think about abortion but I suspect you might mean something else that I’m misinterpreting.
  3. I don’t understand what you mean by x-risk analysis in this context, particularly because I’m not sure whether “biggest cause area” is a typo. I apologize if that was rude, but it would help me if you offered an example of how risk analysis ties in, per your understanding. In my mind the equation is set up one way, but I’m uncertain whether your understanding conflicts with mine.

colinmhayes2 t1_iu0p1mv wrote

  1. I’m not really making a case for or against EA, just saying that you seem to misunderstand their cause prioritization.

  2. Some effective altruists are very concerned about the potentially trillions of people who will exist in the future. You see this in their extensive work on nuclear non-proliferation, AI safety, biohazard safety, climate change, and more over the long term, as well as political action in the medium term. X-risk stands for existential risk; people who care about x-risk work to ensure the survival of humanity over the next thousand-plus years.

GiveWell tends to focus on short-term welfare because it’s easier to convince people to donate to lower-risk causes, but the community spends a huge amount of time working to ensure future welfare too.
