rvkevin t1_j3l1a23 wrote

If people wouldn't want to be forced to be happy, then it's not the case that forcing the utility coach on people would raise their utility, since utility is a direct measure of an individual's wants. However, the hypothetical assumes that forcing the utility coach on people would increase their utility, so your reasoning directly contradicts an assumption of the hypothetical.


rvkevin t1_j3j12ev wrote

> It shouldn't be forced because people would reasonably reject giving up their freedom of conscience for welfare

According to the hypothetical, the quoted claim is false. According to the hypothetical, every time you offer it to a reasonable person, that person would choose welfare over freedom of conscience. That's what it means for the utility coach to increase their utility: the person prefers the utility coach over freedom of conscience.

>But because their freedom of conscience wouldn't be given up in the social contract, it would be immoral to take this freedom away.

When you say "No value is ever so sacred that it can never be exchanged for another value," that also applies to valuing any sort of social contract. Why would anyone care about the social contract in this hypothetical, when honoring it comes at a severe cost to society?


rvkevin t1_j3fwubj wrote

> It shouldn't be forced because people would reasonably reject giving up their freedom of conscience for welfare (principles that can't be reasonably rejected are ethical principles).

It's stipulated in the hypothetical that following the utility coach would increase the utility of anyone using him, so all reasonable people would give up their freedom because that's their actual preference. If you say that they prefer their freedom over being forced to use a utility coach, you're violating an assumption of the hypothetical.


rvkevin t1_j3b3h19 wrote

>Although it would. Whether for yourself or someone else or society as a whole, the utility coach would increase utility.

With this stipulated, the decision is a no-brainer; it should be forced on everyone.

>And it wouldn't be forced on anyone because peoples free choices are to be respected.

Based on what justification? Typically we respect people's free choices because they know their preferences better than we do, but that doesn't apply in this hypothetical. Even if you say that freedom is a good in itself and has its own utility, we have already accounted for that utility when taking away their free will (the loss of that utility is outweighed by the gain in utility from having the utility coach). You basically have to treat freedom as having infinite value, but as you start out saying: "No value is ever so sacred that it can never be exchanged for another value." What is special about freedom that makes it override all other welfare considerations?
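
To make the bookkeeping explicit, here's a minimal sketch with made-up numbers (both values are assumptions, not anything from the hypothetical itself):

```python
# Illustrative numbers only; the point is that freedom's utility is
# already a term inside the calculation, not a value standing outside it.
utility_of_freedom = 10   # assumed utility a person gets from free choice
gain_from_coach = 25      # stipulated by the hypothetical to exceed the loss

# Forcing the coach drops the freedom term but adds the coach term.
net_change = gain_from_coach - utility_of_freedom
print(net_change > 0)  # True: the trade wins unless freedom is valued infinitely
```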

When a moral system places freedom on a pedestal above all other values, you run into moral issues regarding criminals. Should we respect a criminal's free choice to harm others and decline to restrict their freedom? Either freedom is sacrosanct and can't be traded against other values, in which case we should let criminals run free, or freedom is something that can be exchanged with other welfare considerations, which lets us trade it for the higher utility that the utility coach provides.


rvkevin t1_j34aj3c wrote

>Many utilitarians would disagree and wouldn't consider any utility resulting from harming another as factoring within their utilitarian calculus.

It wouldn't factor into their own utility, but it would certainly factor into the utilitarian calculus for what they should do. Utilitarians are interested in maximizing utility in general, not just their individual utility (that would be egoism). So if you ask whether a utilitarian should hire the coach for themselves or for others, the answer to both is probably no, because doing so likely doesn't result in higher utility for society.

>I don't believe this distinction has any principle, but for the purpose of this thought experiment, one person's utility doesn't require harming another.

Given the additional assumption, why wouldn't this be forced on everyone? I fail to see any reason not to. We already have analogs of society forcing such decisions on people. The coach is 100% accurate, and the thought experiment is basically saying that you aren't mature enough to know what's best for you; you're just a child with a guardian making the best decisions for you. You occasionally make poor decisions, like trying to touch a hot stove, so there's some pain when your hand is swatted away, but that pain is nothing compared to touching the stove, just as the pain of the electric stimuli is nothing compared to the pain of your otherwise poor choices.


rvkevin t1_j334vey wrote

>If the utility coach would maximize a persons utility, without harming others

This seems to be a fundamental flaw in the argument; it is patently anti-utilitarian. Individuals should sometimes even accept negative utility when it is to the greater benefit of others (e.g., isolating when sick with a contagious disease). Utilitarians use the utility of the individual in their calculations, but they don't focus on the individual to the exclusion of all others. A utility coach trying to maximize an individual's utility is not following utilitarian principles.
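
As a rough sketch with invented utilities (the numbers are assumptions, picked only to illustrate the aggregate sum):

```python
# Hypothetical utilities for one sick person and three contacts.
isolate = {"sick": -5, "contact_a": 0, "contact_b": 0, "contact_c": 0}
mingle  = {"sick":  2, "contact_a": -4, "contact_b": -4, "contact_c": -4}

# A utilitarian sums over everyone affected, not just the individual.
print(sum(isolate.values()))  # -5
print(sum(mingle.values()))   # -10: worse in aggregate, even though
                              # it's better for the sick person alone
```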


rvkevin t1_ixjrv88 wrote

>But that's the thing: unless someone's getting hit with completely falsified evidence, the arrest itself doesn't become less valid.

It still doesn't represent actual crime; it represents crime that the police enforced (i.e., crime surfaced by police interactions). For example, if white and black people carry illegal drugs at the same rate, yet police stop and search black people more often, arrests will show a disproportionate amount of drugs among black people, and a model trained on those arrests will devote more resources to black neighborhoods even though the underlying crime rates don't merit that response.
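
A minimal simulation of the effect (the 10% carry rate and the 3:1 stop ratio are assumptions, chosen only to demonstrate the point):

```python
import random

random.seed(0)

CARRY_RATE = 0.10                           # assumed identical for both groups
stops = {"group_a": 1000, "group_b": 3000}  # biased stop-and-search counts

# An arrest happens only when a stopped person is carrying.
arrests = {
    group: sum(random.random() < CARRY_RATE for _ in range(n))
    for group, n in stops.items()
}
print(arrests)  # roughly {'group_a': 100, 'group_b': 300}
# Identical underlying behavior, yet the arrest data makes group B look
# three times as criminal, purely because of who got searched.
```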

> It's irrelevant to the data whether or not a crime is uncovered because of a biased interaction or an unbiased one.

How is a prediction model supposed to function when it doesn't have an accurate picture of where crime occurs? If you tell the model that all of the crime happens in area A because you don't enforce area B that heavily, how is the model supposed to know that it's missing a crucial variable? Consider speed-trap towns that get something like 50% of their funding from enforcing speed limits on a one-mile stretch of highway. How is the system supposed to know that speeding isn't disproportionately worse there, despite the mountain of traffic tickets handed out?
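
To sketch the resulting feedback loop (a hypothetical model and made-up parameters, not any real system): a model that allocates patrols in proportion to past arrests simply perpetuates whatever allocation it started with, because it never observes the crime it isn't policing.

```python
import random

random.seed(1)
CARRY_RATE = 0.10                         # assumed: true rate equal everywhere
patrols = {"area_a": 300, "area_b": 100}  # initial biased allocation

for year in range(3):
    arrests = {area: sum(random.random() < CARRY_RATE for _ in range(n))
               for area, n in patrols.items()}
    total = sum(arrests.values()) or 1
    # Reallocate next year's 400 patrols in proportion to this year's arrests.
    patrols = {area: round(400 * n / total) for area, n in arrests.items()}
    print(year, arrests, patrols)
# Area A keeps "producing" ~3x the arrests because it keeps getting ~3x the
# patrols; nothing in the data tells the model the true rates are equal.
```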

>The issue isn't measuring the data, it's getting you to start acknowledging data accuracy.

How you measure the data is crucial, because it's easy to introduce selection biases into it. What you are proposing is exactly how they get introduced, and you don't even seem to be aware it's an issue. It is about more than whether each individual arrest has merit. The whole issue is that you are selecting a sample of crime to feed into the model, and that sample is not gathered in an unbiased way. Instead of measuring crime, you want to measure arrests, which are not the same thing.


rvkevin t1_ixgogni wrote

>The AI can see crime at points X, Y and Z in neighborhood B but crime in Q in neighborhood A.

The AI doesn't see that. The algorithm is meant to predict crime, but you aren't feeding actual crime data into the system; you're feeding in police interactions (along with all the biases of individual officers). More data doesn't fix the issue, because the issue lies in how the data is measured.
