Submitted by cleboomusic t3_y1f08n in philosophy
water_panther t1_is2vhj1 wrote
Reply to comment by Tinac4 in The philosophy of "longtermism" and Stoicism by cleboomusic
I'd argue again that a lot of that has more to do with longtermism being a pretty PR-savvy movement than with any actual philosophical aversion. That is to say, I don't know how meaningful an argument it is to say they don't explicitly describe themselves as "pro-genocide" — even a lot of people actively perpetrating genocides wouldn't advertise themselves as genocide advocates. It's a pretty negative term that virtually nobody goes out of their way to be associated with. The problem is not that longtermists go around saying "We really ought to do more genociding," it's that longtermists go around arguing that essentially any present-day sacrifice short of human extinction is trivially easy to justify according to their deeply wonky assumptions about the future.
Which brings us to the next point. As you say, this is the kind of thing that has always posed a problem for utilitarianism, but I'd argue that there is a salient difference in the case of longtermism simply because its ethical calculus is predicated on the kind of abject loonytoons gibberish that has to be forced onto utilitarians. In other words, while it may be possible to argue against both with "weird, unrealistic hypotheticals," the difference is that longtermism is specifically built around those very hypotheticals; the whole "digital people" thing from my prior post wasn't some weird thought experiment I made up to put longtermists in a bind, it is a thing longtermists themselves bring up as part of their arguments.
And, to your final point, I'd generally agree, but I think it also encapsulates what's wrong with longtermism: it takes the implications of its "weird, hypothetical thought experiments" very seriously. Premises that others would hesitate to accept even for the sake of a purely theoretical or counterfactual debate, longtermists often treat as concrete assumptions on which to base our actual day-to-day ethical decisions. If you don't take the hypotheticals seriously, you can't take longtermism seriously. If you do take them seriously, longtermism is horrifying.
Tinac4 t1_is4ve4q wrote
>The problem is not that longtermists go around saying "We really ought to do more genociding," it's that longtermists go around arguing that essentially any present-day sacrifice short of human extinction is trivially easy to justify according to their deeply wonky assumptions about the future.
I don't think they do this either.
Like, if I go to the EA forum (probably the online community with the biggest longtermist population) and look for highly-upvoted posts about longtermism and ethics, I find things like this and this and this and this. There is a lot of skepticism about naive utilitarianism, and an abundance of people saying things like "being a hardcore utilitarian can lead you to weird places, therefore we should find a theory of ethics that avoids doing that while keeping the good parts of longtermism and utilitarianism intact". Conversely, there's a total lack of posts or responses that take the opposite stance and say, actually, we should accept literally every crazy consequence of naive longtermism, and it's completely morally okay to sacrifice millions of people if it reduces the odds of humanity's extinction by 0.01%. Seriously, I swear I'm not cherrypicking examples, this is what the average post about longtermist ethics looks like.
You insist that longtermism is intrinsically built around exotic hypotheticals and a willingness to make horrible sacrifices — but if that's true, then why do longtermists spend even more time picking those claims apart than their biggest critics do?
I think you could reasonably argue that longtermists need to spend more time working on the philosophical foundations of their movement, to find a way to keep the good parts of utilitarianism while discarding the bad parts (and I bet most longtermists would agree!). I don't think you can argue that the core of longtermism — "the view that positively influencing the long-term future is a key moral priority of our time" — is a stance that can only be justified by the bad parts.