
enternationalist t1_iynhi6p wrote

They just specifically said that this wouldn't tell us an "ideal" set of morals.

1

[deleted] t1_iynllw1 wrote

[deleted]

1

enternationalist t1_iynwp28 wrote

Yep. So, that being the case, I'm not sure I understand who your question was directed to?

2

[deleted] t1_iynxehh wrote

[deleted]

1

enternationalist t1_iyocmv0 wrote

I suppose I wouldn't infer that, but I see how you are reading it; if I say "Look, this blender can't make a perfect smoothie that everyone would like", to me that doesn't imply that I think a perfect smoothie liked by everyone can exist; I'm just clarifying that such a concept isn't the goal.

I think what they are really trying to say is that the method constrains morality such that there are only a few local maxima of stability - only some moral systems can be stable. It's not that it says these systems are or are not morally good; in fact, it doesn't assign them any sort of "goodness" score - it only tells us which systems are socially stable enough to be perpetuated as a moral system.

So, if our goal is to arrive at a moral system, this method theoretically lets us discard many unstable possibilities.

In this way, this method can reject a common set of suboptimal ("non-ideal") solutions, even if "ideal" solutions are totally unique for each person as you suggest, so long as we all agree with the premise that stability is good. It relies on that common criterion, even if all other criteria are totally unique.

That's how some "non-ideal" solutions can be consistently identified even if "ideal" is highly personal. It cannot identify ALL non-ideal solutions for all people - that can't be done without asking literally every human what they'd prefer - but it CAN identify a consistent subset of solutions that will not be functional, regardless of personal views (unless you disagree with the basic premise of stability!).
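To make the elimination logic concrete, here's a toy sketch - not the paper's actual model. The `is_stable` check and the candidate names are made-up placeholders; the point is only that a single shared criterion can rule options out even when everyone's "ideal" ranking differs.

```python
# Toy illustration of the elimination argument (hypothetical stand-ins throughout).

def is_stable(system: str) -> bool:
    # Placeholder: pretend only a few systems sit at a "local maximum" of stability.
    return system in {"reciprocity", "tit-for-tat norms"}

candidates = ["reciprocity", "pure free-riding", "tit-for-tat norms", "universal defection"]

# Everyone who accepts the stability premise can discard these together,
# no matter how their personal "ideal" rankings differ.
rejected = [m for m in candidates if not is_stable(m)]

# What's left is NOT declared "ideal" or "good" - it's just the subset
# that hasn't been ruled out by the shared criterion.
remaining = [m for m in candidates if is_stable(m)]

print(rejected)   # ['pure free-riding', 'universal defection']
print(remaining)  # ['reciprocity', 'tit-for-tat norms']
```

The filter only ever narrows the field; picking among what remains still depends on all the personal criteria it says nothing about.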

1