Submitted by cleboomusic t3_y1f08n in philosophy
Comments
water_panther t1_iryaz68 wrote
The only parts of longtermism that aren't predicated on dubious assumptions are the parts predicated on absolutely bonkers assumptions. The core idea sounds reasonable enough, but once you start digging into the actual arguments, it quickly devolves into clownshoes nonsense. The public face of the movement is the banal and agreeable assertion that it's important to care about future generations, while the actual ideological content of the movement is largely stuff like arguing it's okay to use slavery and genocide today to help pave the way for something like a galaxy-spanning network of planet-sized servers that host a population of trillions upon trillions of "digital people" living in simulated utopias. It's basically just the philosophy of using faintly ludicrous assumptions about the future as an excuse not to care about the suffering caused by our actions today.
Tinac4 t1_irzt5sn wrote
The only cases where I've seen longtermist reasoning used in favor of genocide are when non-longtermists try to reductio longtermism and/or utilitarianism with weird, unrealistic hypotheticals. These problems aren't new to utilitarianism, but I don't think it makes much sense to be concerned about them when 1) most longtermists I've read about aren't actually hardcore utilitarians, 2) real-life, non-strawman longtermists don't advocate for genocide, and 3) real-life hardcore utilitarians that I'm familiar with spend zero time thinking about genocide and quite a lot of time stressing about whether they should be donating more to charity.
The implications of weird hypothetical thought experiments are only as serious as people take them to be; i.e. not very.
water_panther t1_is2vhj1 wrote
I'd argue again that a lot of that has more to do with longtermism being a pretty PR-savvy movement than with any actual philosophical aversion. That is to say, I don't know how meaningful an argument it is to say they don't explicitly describe themselves as "pro-genocide" — even a lot of people actively perpetrating genocides wouldn't advertise themselves as genocide advocates. It's a pretty negative term that virtually nobody goes out of their way to be associated with. The problem is not that longtermists go around saying "We really ought to do more genociding," it's that longtermists go around arguing that essentially any present-day sacrifice short of human extinction is trivially easy to justify according to their deeply wonky assumptions about the future.
Which brings us to the next point. As you say, this is the kind of thing that has always posed a problem for utilitarianism, but I'd argue that there is a salient difference in the case of longtermism simply because its ethical calculus is predicated on the kind of abject loonytoons gibberish that has to be forced onto utilitarians. In other words, while it may be possible to argue against both with "weird, unrealistic hypotheticals," the difference is that longtermism is specifically built around those very hypotheticals; the whole "digital people" thing from my prior post wasn't some weird thought experiment I made up to put longtermists in a bind, it is a thing longtermists themselves bring up as part of their arguments.
And, to your final point, I'd generally agree, but I think it also encapsulates what's wrong with longtermism: it takes the implications of its "weird, hypothetical thought experiments" very seriously. Premises that others would hesitate to accept even for the sake of a purely theoretical or counterfactual debate, longtermists often treat as concrete assumptions on which to base our actual day-to-day ethical decisions. If you don't take the hypotheticals seriously, you can't take longtermism seriously. If you do take them seriously, longtermism is horrifying.
Tinac4 t1_is4ve4q wrote
>The problem is not that longtermists go around saying "We really ought to do more genociding," it's that longtermists go around arguing that essentially any present-day sacrifice short of human extinction is trivially easy to justify according to their deeply wonky assumptions about the future.
I don't think they do this either.
Like, if I go to the EA forum (probably the online community with the biggest longtermist population) and look for highly-upvoted posts about longtermism and ethics, I find things like this and this and this and this. There is a lot of skepticism about naive utilitarianism, and an abundance of people saying things like "being a hardcore utilitarian can lead you to weird places, therefore we should find a theory of ethics that avoids doing that while keeping the good parts of longtermism and utilitarianism intact". Conversely, there's a total lack of posts or responses that take the opposite stance and say, actually, we should accept literally every crazy consequence of naive longtermism, and it's completely morally okay to sacrifice millions of people if it reduces the odds of humanity's extinction by 0.01%. Seriously, I swear I'm not cherrypicking examples, this is what the average post about longtermist ethics looks like.
You insist that longtermism is intrinsically built around exotic hypotheticals and a willingness to make horrible sacrifices--but if that's true, then why do they spend even more time picking those claims apart than their biggest critics do?
I think you could reasonably argue that longtermists need to spend more time working on the philosophical foundations of their movement, to find a way to keep the good parts of utilitarianism while avoiding the bad parts (and I bet most longtermists would agree!). I think you can't argue that the core of longtermism--"the view that positively influencing the long-term future is a key moral priority of our time"--is a stance that can only be justified by the bad parts.
Tinac4 t1_irzsu70 wrote
Isn't the assumption that humanity is probably going to get wiped out within the next few thousand years also bold? It's far from impossible, but so is humanity's survival--I'd call being highly confident about either possibility bold.
Adventurous_Teach381 t1_is11run wrote
I mean, considering how large other species can get, I don't think it's that bold. And there's emergent technology to make it a possibility. That being said, demographic trajectories are trending down, not up.
TheConjugalVisit t1_is0dj5o wrote
I think stoicism is a beautiful endeavor, but it lacks completeness.
MrTessTickle t1_is0nyal wrote
Elaboration?
TheConjugalVisit t1_is0xd5j wrote
Sure, thanks for asking. Stoicism is about control of our minds, but we should pour our hearts out to others. The mind and the heart are most congruent.
MrTessTickle t1_is0yg37 wrote
I don't have a complete idea of stoicism myself, but I've heard that stoicism is about "not caring about the things we cannot control". We can advise people and offer to help them, which is in our control, but whether others change and get help is in their hands.
TheConjugalVisit t1_is0zti3 wrote
There's a difference in knowing when it's time to care; it'll take time to know and understand.
Hungry-Pitch9230 t1_isfoeyw wrote
I am realizing more and more how recently the Old Testament times happened, like the four empires of Daniel.
descartes20 t1_is1xkod wrote
This is being partially blocked by a demand for a paid subscription.
cleboomusic OP t1_is21js3 wrote
It should not be. This publication is not a paid one.
MyNameIsNonYaBizniz t1_irxn8y0 wrote
Antinatalism would say this is just prolonging the suffering of unlucky people; we will never solve suffering, and it's not worth it.
koron123 t1_irx8ytg wrote
"Most of the people who will ever live haven't been born yet. Our present population may be a rounding error compared to the number of people in future generations."
That's a bold assumption.