Aggravating_Roe t1_it7cwl5 wrote

I believe there is somewhat of a false dichotomy here. Taking stock of the future does not rule out caring for the present, and vice versa.

Plus, anti-longtermism is not intuitive. If future generations count for nothing, then we have no real incentive for true sustainability. And since generations succeed one another continuously, we are already connected to the long term.

My interpretation of longtermism is as an attempt to come to terms with the current human condition in the Anthropocene, where resources have to be viewed as finite and externalities have nowhere left to be pushed.

Further, the article generally misses the mark and neglects the good form of giving one's philosophical opponent their best interpretation: the argument that a person in the 1900s could not have foreseen the coming century is kind of irrelevant. The point to steelman is that we should think seriously about alternative futures and consequences when implementing technology or making policy. And the AI issue does not rely on AGI - rather on alignment, and on optimization toward which values and under whose control.