
gian_mav t1_j360io3 wrote

>My substack argues that objective morality does exist (it's wrong to torture babies for fun, for example, regardless of one's own opinion).

It is immoral only if you value human life and consider causing suffering to humans immoral. Imagine an intelligent alien that holds that only aliens of its species have inherent value, and that everything else has value insofar as it affects the lives of other aliens. How could you convince him that his morality is "wrong"?

>The last section asks whether you would force others to accept the utility coach. I even state: "My question is whether you would force other people to sign up for the lifeplan." I'm not interested in one's personal choice, but in how far this personal choice should be imposed onto others. If satisfaction is all you care about, then people would be obligated to force others to accept the utility coach's offer. However, I argue that people should be free to make their own decisions, regardless of the amount of welfare on the table. And this personal freedom is valuable beyond personal welfare. It's something to be respected for its own sake, and it's fundamental to ethics.

The coach you presented and the one I would be okay with are fundamentally different. The questions "would you force someone to maximise their personal happiness" and "would you force someone to increase the happiness of humans collectively" are incomparable. I think the second is moral, but it is in no way the same coach as the one you presented.


contractualist OP t1_j36wx5m wrote

Yes, I agree, there is an is-ought distinction. I'm not a moral naturalist. I discuss the values necessary to create morality here. Morality is those principles that cannot be reasonably rejected in a hypothetical bargain behind a veil of ignorance. You have to value human freedom and reason to be motivated to obey that agreement, but morality exists in that sense whether or not someone has the requisite values to be moral.

>The questions "would you force someone to maximise their personal happiness" and "would you force someone to increase the happiness of humans collectively" are incomparable.

If you are a utilitarian and welfare is your only standard of ethics, then there is no difference: both questions simply weigh an increase in welfare against coercion. I would argue that the coercion in both questions is unjustified, but do you have a principled distinction between the two that would justify treating them differently?
