cutelyaware t1_iylax98 wrote

You can't solve moral problems with math. You can only express your moral beliefs in symbolic forms and manipulate them with the tools of mathematics. If you describe a moral problem in utilitarian terms, then you'll get utilitarian results. But who is to say that a moral problem requires a utilitarian result? That's just begging the question.

95

Tinac4 t1_iylruah wrote

Math isn't only a tool for utilitarians, though. The real world is fundamentally uncertain--people are forced to make decisions involving probability all the time. To use an example from the essay, consider driving: if there's a 0.0000001% chance of killing someone while driving to work, is that acceptable? What about a 5% risk? Most deontologists and virtue ethicists would probably be okay with the first option (they make that choice every day!), but not the second (also a choice commonly made when, e.g., deciding not to drive home drunk). How do they draw the line without using numbers on at least some level? Or what will they do when confronted with a charitable intervention that, because the details are complicated, will save someone's life for $1,000 with 50% probability?

A comprehensive moral theory can't operate only in the realm of thought experiments and trolley problems where every piece of the situation is 100% certain. It has to handle uncertainty in the real world too, and the only way to do that is to be comfortable with probabilities, at least to some extent.
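To make the arithmetic concrete, here's a rough sketch in Python. The acceptability threshold is completely made up for illustration (where to put it is exactly the question a moral theory has to answer), and the $1,000 / 50% figures are just the ones from the example above:

```python
# Rough expected-value sketch; every number here is illustrative, not a claim
# about what the "right" threshold is -- that's the open question.

def expected_deaths_per_trip(p_fatal: float) -> float:
    """Expected number of deaths caused by one trip, given the fatality probability."""
    return p_fatal

def expected_cost_per_life_saved(cost: float, p_success: float) -> float:
    """Expected dollars spent per life actually saved by an uncertain intervention."""
    return cost / p_success

ACCEPTABLE_PER_TRIP_RISK = 1e-6  # hypothetical threshold, purely for illustration

for p in (0.0000001 / 100, 5 / 100):  # 0.0000001% and 5%, converted from percentages
    ok = expected_deaths_per_trip(p) <= ACCEPTABLE_PER_TRIP_RISK
    verdict = "acceptable" if ok else "unacceptable"
    print(f"{p:.9%} risk per trip -> {verdict} under this (arbitrary) threshold")

# The $1,000 intervention with a 50% chance of saving a life comes out to an
# expected $2,000 per life saved:
print(expected_cost_per_life_saved(1_000, 0.5))  # 2000.0
```

None of this says where the threshold should sit; it only makes explicit the quantity you're implicitly drawing a line on.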

19

cutelyaware t1_iyls8ml wrote

> How do they draw the line without using numbers on at least some level?

You can't use numbers to justify your morality. You can only optimize it if your morality happens to be purely utilitarian.

10

Tinac4 t1_iylss65 wrote

I didn’t say anything about using numbers to justify morality, and neither did the OP. My point is that a lot of real-life moral dilemmas involve uncertainty, and it’s very hard to resolve them if your moral framework isn’t comfortable with probabilities to some extent. For instance, how would you respond to the two scenarios I gave above?

15

[deleted] t1_iylzkab wrote

[removed]

−12

BernardJOrtcutt t1_iyn76pb wrote

Your comment was removed for violating the following rule:

>Argue your Position

>Opinions are not valuable here, arguments are! Comments that solely express musings, opinions, beliefs, or assertions without argument may be removed.

Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

1

MightyTVIO t1_iylw6nl wrote

If you can describe a precise way of determining what is most moral, or even just which of two choices is more moral, then you have an algorithm. If it's not precisely defined, then it's ambiguous, in which case I'd argue it isn't justified, since you haven't even defined what it is.

6

cutelyaware t1_iylyu0s wrote

A person's morality is simply their sense of right and wrong behaviors. It doesn't matter if you have an algorithm or not. That would only let you be more consistent with your actions. My point is that you don't get to choose your morality any more than you get to choose your sexual orientation.

−4

MightyTVIO t1_iym5j0l wrote

If you're arguing for lack of free will then I'd agree, but that makes the whole point moot. A moral theory is generally independent of any specific person, so I'm not sure why that's relevant. Furthermore, you absolutely could reduce a person's actions to an algorithm, just maybe a complicated one.

4

autonomicautoclave t1_iymm2tq wrote

If that is true then philosophers have been wasting our time with moral arguments. If you can’t choose what morality to believe in, it’s no use trying to convince someone to follow your morality. It would be like trying to convince a heterosexual person to become homosexual.

3

cutelyaware t1_iyooqnt wrote

Can you give me an example of how you've changed your mind and adopted a different morality, or convinced someone else to change theirs? For example I see plenty of arguments of the form "If you believe killing is wrong, then..." I've never seen someone decide "Yes, I suppose killing is fine". I've only seen them decide that it's OK or not OK to kill in some specific situation.

2

VitriolicViolet t1_iyt2gmk wrote

>If you can’t choose what morality to believe in, it’s no use trying to convince someone to follow your morality.

Does anyone 'choose' which moral theory to follow?

I would argue the one you pick is merely the one that you feel is best, i.e. you won't be convinced by a rational argument, since you never reasoned yourself into your belief in the first place.

Logic works from emotion: if you think utilitarianism is best, it's because you feel it's best; reasoning and logic happen after the fact.

I never reasoned myself into my morals. I pick and choose based on context and use my emotions to guide my reasoning (you cannot determine which is 'better' without the use of emotion).

1

Wizzdom t1_iymx5r7 wrote

Except information can change your sense of right and wrong. Take smoking/drinking while pregnant. It didn't feel wrong until we learned it's harmful to the baby. You can absolutely change your sense of right and wrong by studying and thinking about it.

1

cutelyaware t1_iyomja5 wrote

The moral position involved in your example is that it's wrong to harm fetuses. If you learn that drinking harms fetuses, then you haven't changed your position by then concluding that's wrong behavior. You started off believing it was wrong to harm fetuses, and you ended up still believing that. You've just updated your opinion based on new information.

2

VitriolicViolet t1_iyt343x wrote

And? I knew smoking was harmful before I tried it (I read the studies), and yet I've been doing it for 10 years with no intention to stop.

Some people value different things, resulting in different morals. Personally, safety and security aren't even in my top 3 values (honesty, integrity, and personal freedom), which is why smoking being factually bad hasn't changed my behavior.

What is more moral: allowing parents to teach their kids anything, or having the state determine at what age certain concepts like sexuality and religion should be taught? Your answer will 100% be determined by your values, and if you ask 10 people, all the answers will be different and none will be wrong.

1

Wizzdom t1_iytga9f wrote

First of all, that has nothing to do with what I said. Second, I was talking about smoking while pregnant. Surely you at least value not harming others unnecessarily. If you don't, then you are immoral.

1

enternationalist t1_iyngww6 wrote

Huh? You've never changed your mind on what you think is morally acceptable??

1

cutelyaware t1_iyokehg wrote

I've had new situations come to light that cause me to rethink my proper responses to moral questions, but I can't think of anything that changed my morality. For example I still think that it should be a woman's right to choose abortion, but I've come to believe that pro-life people have a point.

How have you changed your morality?

2

enternationalist t1_iyq1glb wrote

How is changing your answer to a moral question distinct from your morality changing? Per your own definition, your sense of right and wrong has shifted to give you a different answer.

I used to believe making others happy as a priority was the moral choice, now I think people should generally be more self centered. I used to oppose any sort of violence; now I believe it is sometimes necessary or justified. By what definition are these not a change in morality?

1

cutelyaware t1_iyq2kx1 wrote

I suppose it's fine to call such a shift a movement in one's morality.

1

VitriolicViolet t1_iyt3hz2 wrote

Not that I can think of.

Personally, I think everything is morally permissible in context. No system of morality ever conceived actually works; any system with inflexible rules is destined to fail. Is genocide always wrong? If a nation tries to genocide you and will not stop no matter what, then surely killing them all, collectively, is morally correct?

Theft, murder, lies: all are moral in certain scenarios.

1

enternationalist t1_iyt85ro wrote

Sure, but changing your mind about which scenarios they're acceptable in counts.

1

experimentalshoes t1_iymj8p8 wrote

Probability is part of what makes us human though, as with the ability to describe our odds of survival somewhere rather than simply feeling it in our bodies.

Our awareness of uncertainty and risk is rooted in emotion, or basic drives, and it later became the subject of quantitative disciplines, similar to psychology. Likely or unlikely outcomes have always shaped our actions and our beliefs, sometimes also in contrast to the odds, where things may become heroic, irresponsible, etc.

You might look to numbers not to justify your morality, which is a precise form of argument, but to investigate it. Numbers can bring you back in touch with basic human drives we may have forgotten in the realm of abstract thought. Justification can then be built on top of the findings of that investigation.

3

cutelyaware t1_iyoq1g1 wrote

> You might look to numbers not to justify your morality, which is a precise form of argument, but to investigate it.

Certainly, math is very useful in lots of moral situations, but I'm making a different claim which is that it can't be used to decide your moral foundation. If you feel that you've done that, then please tell me how it happened.

2

chrispd01 t1_iym79yj wrote

It's not really a mathematical calculation though … the driver thinks one course is safe enough and the other not.

You could, I suppose, do some kind of statistical study to try to get some parameters, but that is independent of the driver's thinking.

6

Tinac4 t1_iym8rv6 wrote

How does the driver decide that one situation is “safe enough” while the other one isn’t? What’s the right choice if the odds of an accident were somewhere in the middle like 0.01%?

I’m not saying that there’s an objective mathematical answer to what “safe enough” means. There isn’t one—it’s a sort-of-arbitrary threshold that’s going to depend on your own values and theory of ethics. However, these situations do exist in real life, and if your theory of ethics can’t work with math and probabilities to at least some extent, you’re going to get stuck when you run into them.

6

chrispd01 t1_iymqet8 wrote

I think in reality it comes down to intuition. You have an idea, experientially, as to what is a reasonable course of action to take. To the extent a mathematical decision gets made, it's at the level of "I probably ought to be OK."

Thinking about it, there is a good analogy in the world of sports - look at the change in basketball shot patterns. The change is traceable to applying an economic/statistical approach to those decisions.

But my point is people are more like players before the analytical approach took over. They tend to use intuition and "feel" more than the sort of evaluation you are talking about.

In fact it's really interesting how wrong people's intuitions are in those situations … making the less efficient choice, choosing the wrong strategy, etc.

That to me shows that in practice people do not ordinarily make the sort of calculations you were describing. It doesn’t mean that they should not make those, just that they do not.

9

[deleted] t1_iymxq1u wrote

[deleted]

0

chrispd01 t1_iyn2qwr wrote

It looks like that, except in practice it's not. There isn't a real analysis going on in terms of real data, etc. Hence the basketball model - once people start actually applying analysis, the behavior markedly changes.

That means that people aren't doing that, because once they start doing that their behavior changes.

The counter to that, I think, is that people think they are doing that but are doing a bad job. But in general I don't think they really are - they don't make a conscious evaluation of the steps to solve the problem; they just intuit it. They may think they exercised judgment, but in practice they did not.

5

Phil003 t1_iyovzno wrote

Well, there are actually objective mathematical answers to what is "safe enough" in use in safety engineering (at least in theory... see my remarks at the end).

At the academic level there are basically two generally cited methods for determining what is "safe enough":

(Remark: To handle this question, the concept of risk is used. In this terminology, risk is basically the combination of the magnitude of a potential harm and the probability of that harm happening. So if there is a 1% probability that 1,000 people will die, the risk is 10, and likewise if there is a 10% chance that 100 people will die, the risk is again 10.)

  1. One is the ALARP principle ("as low as reasonably practicable"). This is basically a cost-benefit analysis. In a very simplified way, you first determine the current risk of the system. E.g. let's say there is a 10% probability that a tank will explode in a chemical plant (say, before the planned closure date of the plant), and if this happens, on average 100 people would die in the explosion and the resulting fire; then the risk is 0.1*100 = 10. Then you assign a monetary value to this: let's say you assume that one human life is worth 10 million € (this is just a random number, see the end of my post), so the risk*human_life_cost = 100 million €. Now let's say you can decrease the risk to 5 (e.g. instead of 10%, there will be only a 5% probability that 100 people will die) by implementing a technical measure, e.g. you install automatic fire extinguishers everywhere in the chemical plant, or something like that. If you do this, you reduce the risk*human_life_cost to 50 million €, so you will have a 50 million € benefit. So how do you decide whether to do this according to the ALARP principle? Easy: you consider the cost of implementing the technical measure (buying, installing, maintaining, etc. all the automatic fire extinguishers), and if it costs less than the benefit (50 million €) you should do it; if it would cost more than the benefit, then it would not be "reasonably practicable" and therefore you should not do it. (A rough code sketch of this calculation follows after this list.)
  2. The other approach is basically to use the concept of acceptable risk. In this case you first determine the acceptable risk (e.g. a worker in a chemical plant shall have a lower probability than 1 in a million of dying in an accident per year, i.e. out of one million workers, no more than one shall die each year) and then you reduce the risk posed by the system until you reach this level. In this model the cost of reducing the risk is irrelevant; you must do whatever is necessary to reach the level of acceptable risk.
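To make the two models above concrete, here is a minimal sketch in Python. It uses only the made-up numbers from the example (the 10 million € value of a life is the same arbitrary placeholder, not a real figure), so treat it as an illustration of the structure of the reasoning rather than a real engineering tool:

```python
# Minimal sketch of the two approaches described above. All numbers are the
# illustrative ones from the comment, not real engineering values.

def risk(probability: float, fatalities: float) -> float:
    """Risk as probability of the harm times its magnitude (expected deaths)."""
    return probability * fatalities

def alarp_worthwhile(risk_before: float, risk_after: float,
                     value_of_life: float, measure_cost: float) -> bool:
    """ALARP-style cost-benefit: implement the measure if its cost is below
    the monetised risk reduction it buys."""
    benefit = (risk_before - risk_after) * value_of_life
    return measure_cost <= benefit

# Tank-explosion example: 10% chance of 100 deaths, reduced to 5% by fire
# suppression, with a life valued at 10 million € -> benefit is 50 million €.
r_before = risk(0.10, 100)   # 10.0
r_after  = risk(0.05, 100)   # 5.0
print(alarp_worthwhile(r_before, r_after, 10e6, measure_cost=30e6))  # True
print(alarp_worthwhile(r_before, r_after, 10e6, measure_cost=80e6))  # False

# Acceptable-risk approach: keep reducing risk until the per-worker annual
# probability of a fatal accident is below the agreed threshold, whatever it costs.
ACCEPTABLE_ANNUAL_FATALITY_RISK = 1e-6  # "at most 1 in a million workers per year"

def acceptable(p_fatality_per_worker_year: float) -> bool:
    return p_fatality_per_worker_year <= ACCEPTABLE_ANNUAL_FATALITY_RISK

print(acceptable(5e-7))  # True
print(acceptable(1e-4))  # False
```

Note how the cost of the measure only enters the first approach; in the second it is irrelevant by construction.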

I am a functional safety engineer working in the automotive industry, so I don't claim to have a general overview of every domain of safety engineering, but let me add some remarks on these academic models based on the literature and discussions with other experts in my field:

  • ALARP: sounds very nice in theory, but I think the main problem is that pretty much no regulatory body or company would publish (or even write down! too much risk of leaked documents) their assumption about the worth of a human life expressed in money, or the witch hunt would immediately start...
  • Concept of acceptable risk:
    • Here it is important to highlight that what can be considered an acceptable risk is decided by society, and it can change significantly depending on the system in question. This also pretty much means that this decision is not necessarily rational. E.g. people accept higher risk while driving a car than when they fly as a passenger. (My understanding is that this is because people feel "in control" while driving, but they feel helpless to control the situation while on board a plane. So this is not a rational decision.)
    • Perhaps this acceptable-risk concept looks strange, but it really makes sense. Consider car driving. Every year over 1 million people die in traffic-related accidents worldwide, and people are fully aware that the same can happen to them on any day they drive a car. Still they choose to take the risk, and they sit in their car every morning. Society has basically decided that 1 million people dying every year in car accidents is an acceptable risk.
    • Publishing acceptable-risk values has similar challenges to publishing the worth of a human life expressed in money, but the situation is a bit better: there are actually some numbers available in the literature for certain cases (but not everywhere; e.g. in my domain, the automotive industry, we kind of avoid writing down a number).
  • In my field of expertise (developing safety-critical systems including complex electronics and software), estimating the probability that the system will fail and cause an accident is just impossible (describing the reasons would take too much time here), so there exists no really reliable way to estimate the probability of an accident, and therefore it is not possible to quantify the risk with reasonable precision. Therefore neither of the above two methods is really applicable in practice in its "pure" form. (And I am quite sure the situation is pretty similar in many other fields of safety engineering.)

So my summary is that there exist generally accepted academic models for answering the question of what is "safe enough". These models are in theory the basis of the safety engineering methods followed across industries, so applying mathematics to make moral decisions (e.g. to determine what is an acceptable probability of somebody dying in an accident) is kind of happening all the time. In practice the whole story is much more complicated, e.g. because of the above-mentioned reasons, so what really happens is that we use these models as "guidance" and basically try to figure out what is safe enough based mostly on experience. I would be very surprised if these academic models were used anywhere in significant numbers in a "clear" and "direct" way.

4

Tinac4 t1_iyqec3c wrote

Great comment! Thanks for the thorough explanation.

1

XiphosAletheria t1_iyndraz wrote

I think the problem there is that people don't generally know such probabilities in the first place. I doubt the vast majority of people could tell you what their chance of being in a car accident is normally, or what it increases to when they are drunk. Nor do they probably think of it as a chance of "them killing someone". An accident is by definition beyond someone's personal control. Likewise, your charitable donation example seems unrealistic, because those numbers are pretty much never going to come up - charities typically rely on emotional appeals rather than mathematical ones.

And the numbers tend not to matter anyway. Obviously it is better to donate and try to save a life than to not donate and guarantee the death (if you believe in a moral obligation to save lives), even if the chance of success is low.

5

Cregaleus t1_iyys1cq wrote

Making an appeal to the authority of the behavior of deontologists isn't persuasive. Most dietitians eat pizza. Pointing that out isn't evidence that pizza is health food.

Comprehensive theories that cannot be comprehensively articulated are not comprehensive theories. I.e., it is a real problem that we cannot say why it is moral to drive to work with a 0.0000001% risk of killing someone and immoral at 5%.

1

tmpxyz t1_j2d280u wrote

I remember there was a case where a car company (Chevron?) had a flawed car model, and the company decided not to issue a recall because, by their calculation, the total compensation for accidents would be cheaper than fixing all the cars.

So, yeah, some people do make such calculations. But the majority of people don't; the moral judgments of the masses are usually emotion-driven, event-driven, pattern-matching, or just blindly following KOLs they like. Most people probably wouldn't do such a calculation until they are in a really hard position, and then they would probably make decisions that favor their own interests.

1

feliweli49 t1_iymg53e wrote

Most of those deductions also come from premises like "less deaths is better". Where people disagree when it comes to moral questions, it usually isn't the deduction itself or flawed logic, but the premise.

3

cutelyaware t1_iyorv39 wrote

I think I disagree. I feel our moral disagreements aren't around ideas such as "less death is better", but around the details of "how", not "what". For example is it OK to kill animals for food? We can argue over when it's OK and when it's not, but I can't think of an example where someone came to the decision that less death is better or gave up such a belief.

2

feliweli49 t1_iyovk13 wrote

The "less death is better" part refers to the blog primarily with the trolley problem. It's a naive and utilitarian way to quantify those problems and disregards a lot of the why behind the taken decision. My point is that those alternative decisions can still be expressed with logic because they just has different premises.

"Is killing animals for food ok?" has plenty of different premises for both sides, and both sides can be expressed in a logically sound way.

E.g.: it's not ok to kill sentient beings, animals are sentient, eating animals kills them, etc., and you end up with a hard no. Those premises aren't universally agreed upon, so even using the tools logic provides won't give us a clear universal answer on whether killing animals for food is ok.

2

cutelyaware t1_iyp3j01 wrote

Alright, then it seems we're in agreement. "Less death is better" is the moral position that doesn't yield to mathematics. Only the application of that position to particular situations can.

1

iiioiia t1_iz0qez1 wrote

> You can't solve moral problems with math. You can only express your moral beliefs in symbolic forms and manipulate them with the tools of mathematics

I agree you can't guarantee a solution, or create a solution that solves them directly, but a math based solution could cause belief formation that is sufficient to alter human behavior enough to (at least substantially) solve the problem, could it not?

On one hand, this is kind of "cheating"... but on the other hand, ignoring how reality actually works isn't flawless either.

0

cutelyaware t1_iz2a5je wrote

No, this is the sort of situation that prompted the Jonathan Swift quote:

>"You cannot reason a person out of a position he did not reason himself into in the first place."

1

iiioiia t1_iz2aayq wrote

What I like about that quote is that the person deploying it rarely cares whether it's actually true.

1

cutelyaware t1_iz2j9g6 wrote

Ad hominem attacks are fallacious too.

1

iiioiia t1_iz2ks7i wrote

That would only be the case if I were making an assertion about the truth value of a proposition based on criticism of the messenger, but I am making this claim broadly (it applies to all people).

1

cutelyaware t1_iz37vpp wrote

In this context it reads as a direct attack, just FYI

1