Comments

contractualist OP t1_j2u9zn1 wrote

Hello all. I am writing a Substack newsletter on contractualist ethics. The linked article describes a thought experiment, basically a revised version of Nozick's "experience machine," intended to show the importance of freedom in this philosophy.

Summary: Imagine a "utility coach" who is able to maximize a person's utility so long as the person delegates all their decisions to the utility coach. If welfare is the basis of morality, are people morally bound to subject themselves to the utility coach's commands, or should people be free to make their own choice of whether to accept the utility coach's bargain? Which is more important, welfare or freedom?

gian_mav t1_j2w8vs2 wrote

I am an amateur in philosophy and haven't really read much but I'll try to give this a go.

From my point of view, the utility coach would deprive me not just of some of my freedom but also of the illusion of free will all of us have. You argue that the coach maximises utility by making us pick the best choices, but I disagree that this is the case. I can easily imagine a different utility coach that, instead of giving you a zap to force you to pick the "correct" choice, has installed a device on your brain that releases certain hormones to adjust your mood and brain chemistry so that the "correct" choice is picked organically. These two coaches make you pick exactly the same choices and thus limit your freedom to the same degree, but my version doesn't shatter your belief in the existence of free will.

At least personally, I would be much more amenable to the second version because, even though I don't really believe in the existence of free will, the first option would force me into a fatalistic mindset and that would degrade my quality of life, while the second one wouldn't. Perhaps you could argue in that sense that the illusion of free will exists independently of utility. Then again, I can imagine someone sacrificing their life for their ideals and their beliefs, which inherently includes their ability to make choices and what they perceive as free will, so it doesn't seem that this is the case either.

What I mean by this is that perhaps you failed to consider that utility can also be perceived outside of the self in this bargain of sorts. Say some kind of deity offers you this: you will never be able to make a free choice in your life ever again, but ALL humans will be happy forever. Let's assume that maybe they send them to heaven or something. Now, not everyone will choose to "sacrifice" themselves for everyone else, but I think a lot of people would. It seems to me that if you balance your freedom or free will against something great enough, there is a tipping point somewhere, and in that sense I fail to see how they are independent of utility.

Feel free to point out anything you might disagree with, I am open to criticism.

contractualist OP t1_j2zmv9w wrote

The utility coach would maximize your utility, which would factor in your preference for a belief in free will. You would be getting the highest utility possible under the coach.

And the important question isn't what you would choose, but what you would choose on behalf of others. If you're a utilitarian, then it would be a duty to take the coach's offer. But if you believe they should have a right to make the choice themselves, then you value freedom even above utility.

gian_mav t1_j317s5c wrote

>The utility coach would maximize your utility, which would factor in your preference for a belief in free will. You would be getting the highest utility possible under the coach.

But I disagree that's the case and I gave you an example of a different coach that would provide greater utility. If the second coach can provide greater utility, the first one must necessarily provide less than the highest possible. Unless I misunderstood what you are saying, Idk.

For the record, I would probably be ok with the one that respects my illusion of free will. It essentially would be equivalent to being able to predict the future subconsciously.

>And the important question isn't what you would choose, but what you would choose on behalf of others. If you're a utilitarian, then it would be a duty to take the coach's offer. But if you believe they should have a right to make the choice themselves, then you value freedom even above utility.

But a lot of people might share the same preference as me, given that pretty much everyone experiences the illusion of free will. I would be much more likely to give them the second coach, as that also respects their current state of mind.

Also, just because I am a utilitarian, it doesn't mean that I want everyone's utility to be maximised; MY utility has to be. In my case, I think that human fulfilment must be maximised. Therefore, if giving the utility coach to a fascist would result in him creating a dictatorship because that maximises his utility, it would be worth negative utility to me, as it would drastically reduce human fulfilment for a lot more people.

In conclusion, if I knew that giving the utility coach that respects free will to someone would be worth positive utility to me (maximising human fulfilment collectively), I would give it. I wouldn't give anyone the first one, as that entails the removal of the feeling of free will, which is very important to most people and therefore doesn't maximise their utility. It could even be negative utility if someone values this perception highly enough.

contractualist OP t1_j31hnn7 wrote

But now you are fighting the hypothetical by saying the utility coach doesn’t maximize utility, when the hypothetical says it does.

And the question is about ethics, not about preference. So being a utilitarian isn’t about satisfying one’s own preferences, but about satisfying the preferences of the greatest number. This includes creating a sense of free will as far as necessary to maximize utility. And the question asks whether you would force others to accept the utility coach. The answer is obvious if you’re a utilitarian, in which case everyone must accept the utility coach, but not if you value freedom for its own sake.

gian_mav t1_j31sza2 wrote

>But now you are fighting the hypothetical by saying the utility coach doesn’t maximize utility, when the hypothetical says it does.

I didn't reject the hypothetical though. As I have said twice now, I don't think yours maximised utility, so I changed the hypothetical so that it does satisfy your assumption, and then answered that I would be ok with that version personally.

>And the question is about ethics, not about preference. So being a utilitarian isn’t about satisfying one’s own preferences, but about satisfying the preferences of the greatest number. This includes creating a sense of free will as far as necessary to maximize utility. And the question asks whether you would force others to accept the utility coach. The answer is obvious if you’re a utilitarian, in which case everyone must accept the utility coach, but not if you value freedom for its own sake.

Utilitarians want to maximise happiness for the greatest number of people. Your coach maximises personal utility. Giving the utility coach to everyone does not in any way guarantee that maximum utility is achieved for the greatest number of people, because achieving one's maximum personal utility often comes at the expense of others. It logically follows that if you are a utilitarian you would not choose to give this coach to everyone, especially not to those who don't share your own view of morality (utilitarianism), as that would result in less utility by your subjective preference of maximising human happiness.

To give an example: If it is deemed by the utility coach that I would be the happiest by becoming a serial killer, the coach would force me to become one. That is opposed to utilitarianism (less utility for the maximum amount of people) but satisfies the coach.

A utilitarian would think everyone must accept the coach only if the coach would sometimes produce less than maximum utility for the individual (for the benefit of others), which is impossible given that it is supposed to give the individual maximum utility.

contractualist OP t1_j32ofvn wrote

But the question isn’t what satisfies you personally, but what you would force others to choose. If the utility coach would maximize a person’s utility, without harming others, would you force that person to accept the utility coach’s offer? That’s what ethics is: our duties rather than our preferences.

gian_mav t1_j331p6u wrote

Ethics are subjective. What each of us sees as a duty is an arbitrary preference. Morality satisfies the worldview of its beholder.

>If the utility coach would maximize a person’s utility, without harming others, would you force that person to accept the utility coach’s offer?

Well, at that point the coach is inherently not maximising personal utility but collective utility instead, so the question really becomes "do you think it is moral to drastically limit people's freedom if you were guaranteed to achieve the greatest amount of human fulfilment?", to which I would say yes. This isn't the original question asked, though.

Also I would like to point out that this would be moral only in a hypothetical, because in reality there are no guarantees and the risk would be too big for comfort to me. I wouldn't want to trust a single human with that kind of power, no matter how noble their intentions.

I have to say though, this was a pretty interesting and thought-provoking discussion. Have a good one, man.

contractualist OP t1_j34i9xu wrote

My Substack argues that objective morality does exist (it's wrong to torture babies for fun, for example, regardless of one's own opinion).

The last section asks whether you would force others to accept the utility coach. I even state: "My question is whether you would force other people to sign-up for the lifeplan." I'm not interested in one's personal choice, but in how far this personal choice should be imposed onto others.

If satisfaction is all you care about, then people would be obligated to force others to accept the utility coach's offer. However, I argue that people should be free to make their own decisions, regardless of the amount of welfare on the table. And this personal freedom is valuable beyond personal welfare. It's something to be respected for its own sake, and it's fundamental to ethics.

gian_mav t1_j360io3 wrote

>My Substack argues that objective morality does exist (it's wrong to torture babies for fun, for example, regardless of one's own opinion).

It is immoral only if you value human life and consider causing suffering to humans immoral. Imagine an intelligent alien that holds that only aliens of its species have inherent value, and everything else has value insofar as it affects the lives of other aliens. How could you convince him that his morality is "wrong"?

>The last section asks whether you would force others to accept the utility coach. I even state: "My question is whether you would force other people to sign-up for the lifeplan." I'm not interested in one's personal choice, but in how far this personal choice should be imposed onto others. If satisfaction is all you care about, then people would be obligated to force others to accept the utility coach's offer. However, I argue that people should be free to make their own decisions, regardless of the amount of welfare on the table. And this personal freedom is valuable beyond personal welfare. It's something to be respected for its own sake, and it's fundamental to ethics.

The one you presented and the one I would be ok with are fundamentally different. The questions "would you force someone to maximise their personal happiness" and "would you force someone to increase the happiness of humans collectively" are incomparable. I think the second is moral, but in no way is it the same coach as the one you presented.

contractualist OP t1_j36wx5m wrote

Yes, I agree, there is an is-ought distinction. I'm not a moral naturalist. I discuss the values necessary to create morality here. Morality is those principles that cannot be reasonably rejected in a hypothetical bargain behind a veil of ignorance. You have to value human freedom and reason to be motivated to obey that agreement, but morality exists in that sense whether or not someone has the requisite values to be moral.

>The questions "would you force someone to maximise their personal happiness" and "would you force someone to increase the happiness of humans collectively" are incomparable.

If you are a utilitarian, and welfare is your only standard of ethics, then there is no difference. Both questions only weigh an increase in welfare against coercion. I would argue that coercion in both questions is unjustified, but is there a principled distinction you draw between the two questions that would differentiate them?

rvkevin t1_j334vey wrote

>If the utility coach would maximize a person’s utility, without harming others

This seems to be a fundamental flaw in the argument; it is patently anti-utilitarian. Individuals should sometimes even experience negative utility when it is to the greater benefit of others (e.g., isolating when sick with a contagious disease). Utilitarians use the utility of the individual in their calculations, but they don’t focus on the individual to the exclusion of all others. A utility coach trying to maximize an individual’s utility is not following utilitarian principles.

contractualist OP t1_j33vyyl wrote

Many utilitarians would disagree and wouldn't consider any utility resulting from harming another as factoring into their utilitarian calculus. I don't believe this distinction is principled, but for the purpose of this thought experiment, one person's utility doesn't require harming another.

rvkevin t1_j34aj3c wrote

>Many utilitarians would disagree and wouldn't consider any utility resulting from harming another as factoring into their utilitarian calculus.

It wouldn't factor into their own utility, but it would certainly factor into the utilitarian calculus for what they should do. Utilitarians are interested in maximizing utility in general, not just their individual utility (that would be egoism). So if you ask whether a utilitarian should hire the coach for themselves or others, the answer is probably no to both because doing so probably doesn't result in higher utility for society.

>I don't believe this distinction has any principle, but for the purpose of this thought experiment, one person's utility doesn't require harming another.

Given the additional assumption, why wouldn't this be forced on everyone? I fail to see any reason otherwise. We already have analogs for society forcing such decisions on people. The coach is 100% accurate, and the thought experiment is basically saying that you aren't mature enough to know what's best for you; you're just a child with a guardian making the best decisions for you. You occasionally make poor decisions like trying to touch a hot stove, so there's some pain when your hand is swatted away, but that pain is nothing compared to touching a hot stove, just like how the pain of the electric stimuli is nothing compared to the pain of your otherwise poor choices.

contractualist OP t1_j34h8zw wrote

>So if you ask whether a utilitarian should hire the coach for themselves or others, the answer is probably no to both because doing so probably doesn't result in higher utility for society.

But it would. Whether for yourself, someone else, or society as a whole, the utility coach would increase utility.

And it wouldn't be forced on anyone, because people's free choices are to be respected. Paternalism is justified to a very limited extent, but not for all possible decisions.

rvkevin t1_j3b3h19 wrote

>But it would. Whether for yourself, someone else, or society as a whole, the utility coach would increase utility.

With this stipulated, the decision is a no-brainer; it should be forced on everyone.

>And it wouldn't be forced on anyone, because people's free choices are to be respected.

Based on what justification? Typically we respect people’s free choices because they know their preferences better than we do, but that doesn’t apply in this hypothetical. Even if you say that freedom, as a good in itself, has its own utility, we have already considered that utility when taking away their free will (in that the loss of that utility is overcome by the gain in utility from having the utility coach). You basically have to treat freedom as having infinite value, but as you say at the outset: “No value is ever so sacred that it can never be exchanged for another value.” What is special about freedom that makes it override all other welfare considerations?

When a moral system places freedom on a pedestal above all other values, you get moral issues relating to criminals. Should we respect a criminal’s free choice to harm and not restrict their freedom? Either freedom is sacrosanct and can’t be traded against other values, in which case we should let criminals run free, or freedom is something that can be exchanged with other welfare considerations, which allows us to trade it for the higher utility that the utility coach gives them.

contractualist OP t1_j3dg9tj wrote

It shouldn't be forced, because people would reasonably reject giving up their freedom of conscience for welfare (principles that can't be reasonably rejected are ethical principles). Because of that, no one has the right to coerce someone else's conscience.

People would agree to principles that would allow for criminal law (as well as a welfare state and a duty to rescue). However, they wouldn't allow their freedom of conscience to be controlled by another. Whether to accept the utility coach's lifeplan is their own decision. This isn't to say that freedom has infinite value, but it's not subject to the will of another, based on ethical principles.

rvkevin t1_j3fwubj wrote

> It shouldn't be forced, because people would reasonably reject giving up their freedom of conscience for welfare (principles that can't be reasonably rejected are ethical principles).

It's stipulated in the hypothetical that following the utility coach would increase the utility of anyone using him, so all reasonable people would give up their freedom because that's their actual preference. If you say that they prefer their freedom more than being forced to use a utility coach, you're violating an assumption of the hypothetical.

contractualist OP t1_j3ipvbd wrote

No, the hypothetical hasn't changed. If people prefer the utility coach, then they have the right to choose for themselves. But because their freedom of conscience wouldn't be given up in the social contract, it would be immoral to take this freedom away. The argument is that people shouldn't be forced to be happy.

rvkevin t1_j3j12ev wrote

> It shouldn't be forced, because **people would reasonably reject giving up their freedom of conscience for welfare**

According to the hypothetical, the bolded part is false. According to the hypothetical, every time you offer it to a reasonable person, that person would choose welfare over freedom of conscience. That's what it means for the utility coach to increase their utility: it means that the person prefers the utility coach over freedom of conscience.

>But because their freedom of conscience wouldn't be given up in the social contract, it would be immoral to take this freedom away.

When you say "No value is ever so sacred that it can never be exchanged for another value," that also applies to valuing any sort of social contract. Why would anyone care about the social contract in the hypothetical since it comes with a severe cost to society?

contractualist OP t1_j3k33kc wrote

Because people wouldn’t want to be forced to be happy. That’s reasonable.

rvkevin t1_j3l1a23 wrote

If people wouldn't want to be forced to be happy, then it's not the case that forcing the utility coach on people would raise their utility since utility is a direct measure of that individual's wants. However, the hypothetical assumed that forcing the utility coach on people would increase their utility, so your reasoning directly contradicts an assumption of the hypothetical.

contractualist OP t1_j3ox7u6 wrote

I don't make that statement. I ask the question, and if people have the intuition that forcing someone to be happy is wrong, I explain that intuition via the social contract. It doesn't violate the assumption, given that what is right isn't solely determined by reference to utility, which is the point of the hypothetical.
