iiioiia t1_iz3242b wrote

> I don't disagree with this, what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma after one has presupposed or established a particular ethical framework, moral philosophy, and/or particular moral norms and judgments. Descriptive models, non-normative facts, and math/logic alone can never solve a moral problem or dilemma; in order to arrive at a moral judgment or conclusion, one must presuppose an ethical framework or particular norms/value-judgments.

I suspect you have a particular implementation in mind, and in that implementation what you say is indeed correct.

Ok_Meat_8322 t1_iz7db9d wrote

Once again, I'm not sure what that's supposed to mean.

iiioiia t1_iz9mvo6 wrote

"I don't disagree with this, what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma ...."

What would "applied" consist of?

Ok_Meat_8322 t1_izbtljz wrote

The example I used earlier was a utilitarian, who can use basic arithmetic to resolve moral dilemmas (for instance, the trolley problem).

But this only works because the utilitarian has already adopted a particular ethical framework. Math can't tell you what values or ethical framework you should adopt, but once you have adopted them, math and logic may well be used to resolve moral issues.
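
To make the arithmetic concrete, here is a minimal sketch of what I mean (my own illustration; the utility numbers and the `total_utility` helper are invented placeholders, not anything canonical):

```python
# Hypothetical utilitarian tally for the trolley problem.
# The utility values are made up; only the bookkeeping matters.

def total_utility(outcome):
    """Sum the signed utilities of everyone affected by an outcome."""
    return sum(outcome.values())

# Each option maps the affected parties to a crude utility (each death counts as -1).
options = {
    "do nothing":     {"five on main track": -5, "one on side track": 0},
    "pull the lever": {"five on main track": 0,  "one on side track": -1},
}

for name, outcome in options.items():
    print(f"{name}: total utility = {total_utility(outcome)}")

best = max(options, key=lambda name: total_utility(options[name]))
print(f"utilitarian verdict: {best}")  # -> pull the lever
```

The calculation only gets off the ground because the framework (maximize total utility, with each death counting as -1) has already been adopted; the arithmetic itself says nothing about whether that framework is the right one.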

iiioiia t1_izc1dt3 wrote

I don't disagree, but this seems a bit flawed - you've provided one example of a scenario where someone has done it, but this in no way proves that it must be done this way. In an agnostic framework, representations of various models could have math attached to them (whether it is valid or makes any fucking sense is a secondary matter) and that should satisfy an exception to your rule, I think?

Ok_Meat_8322 t1_j0naes5 wrote

>I don't disagree, but this seems a bit flawed - you've provided one example of a scenario where someone has done it, but this in no way proves that it must be done this way.

I don't think it must be done that way; I don't think logic or mathematics is going to be relevant to most forms of moral reasoning. But consequentialism is the most obvious case where it would work, since consequentialism often involves quantifying pleasure and pain and so would be a natural fit.

But if what you mean is that we could sometimes use logic or mathematics to answer moral questions without first presupposing a set of moral values or an ethical framework, I think it is close to self-evident that this is impossible: when it comes to reasoning or argument, you can't get out more than you put in, so if you want to reach a normative conclusion you need normative premises; otherwise your reasoning will necessarily be (logically) invalid.
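
As a worked illustration (my own example, with an invented normative premise), the point can be laid out explicitly:

```latex
% A valid normative inference needs at least one normative premise.
\[
\begin{array}{l}
\text{P1 (descriptive): Diverting the trolley kills one person instead of five.}\\
\text{P2 (normative): One ought to act so that fewer people are killed.}\\
\hline
\text{C (normative): One ought to divert the trolley.}
\end{array}
\]
% Drop P2 and the step from P1 alone to C is logically invalid:
% no "ought" occurs in the premises, so none can validly appear in the conclusion.
```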

iiioiia t1_j0ng2rm wrote

Oh, I'm not claiming that necessarily correct answers can be reached ("whether it is valid or makes any fucking sense is a secondary matter"); I don't think any framework can provide that for this sort of problem space.

Ok_Meat_8322 t1_j0nn0qc wrote

I'm skeptical about whether moral judgments are even truth-apt at all, but the strength of a line of reasoning or argument is equal to that of its weakest link, so your confidence in your conclusion (assuming your inference is logically valid) is going to boil down to your confidence in your (normative) premises. That confidence will obviously vary from person to person, and subjective confidence is no guarantor of objective certainty in any case.
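
To put a rough number on the "weakest link" point (a sketch of mine, with made-up credences): the support a valid argument can transmit to its conclusion is capped by your credence in the conjunction of its premises, which never exceeds the weakest premise and, for independent premises, equals their product.

```python
# Made-up credences in the premises of a (logically valid) moral argument.
credences = {
    "descriptive premise": 0.95,  # e.g. "pulling the lever kills one instead of five"
    "normative premise":   0.60,  # e.g. "one ought to minimize deaths"
}

# The conjunction of the premises can never be more credible than the weakest one...
weakest_link = min(credences.values())

# ...and if the premises are independent, its credence is their product, lower still.
joint = 1.0
for c in credences.values():
    joint *= c

print(f"weakest premise credence: {weakest_link:.2f}")  # 0.60
print(f"joint premise credence:   {joint:.2f}")         # 0.57
```

So even a flawless inference can't buy you more confidence in the conclusion than you already had in the shakiest (usually the normative) premise.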

So I'm fine with the idea that logic or mathematics could help solve moral dilemmas or problems in at least some instances (e.g. utilitarian calculations/quantifications of pleasure/happiness vs. pain/suffering), but it seems to me that some basic moral values or an ethical framework is a necessary prerequisite... which is usually the tricky part, so I'm somewhat dubious of the overall utility of such a strategy (it seems like it only helps solve what is already the easiest part of the problem).

iiioiia t1_j0nufl7 wrote

> I'm skeptical about whether moral judgments are even truth-apt at all, but the strength of a line of reasoning or argument is equal to that of its weakest link....

Mostly agree. As I see it, the problem isn't so much that answers to moral questions are hard to discern, but that, with the few exceptions I can think of (including literal murder), they do not have a correct answer at all.

> ...so your confidence in your conclusion (assuming your inference is logically valid) is going to boil down to your confidence in your (normative) premises. That confidence will obviously vary from person to person, and subjective confidence is no guarantor of objective certainty in any case.

Right, so put error correction into the system: when participants' minds wander into fantasy, provide them with gentle course correction back to reality, which is filled with non-visible (for now at least) mystery.

> So I'm fine with the idea that logic or mathematics could help solve moral dilemmas or problems in at least some instances (e.g. utilitarian calculations/quantifications of pleasure/happiness vs. pain/suffering), but it seems to me that some basic moral values or an ethical framework is a necessary prerequisite... which is usually the tricky part, so I'm somewhat dubious of the overall utility of such a strategy (it seems like it only helps solve what is already the easiest part of the problem).

"Solving" things can only be done in deterministic problem spaces, like physics. Society is metaphysical, and non-deterministic. It appears to be deterministic, but that is an illusion. Just as the average human 200 years ago was ~dumb by our standards (as a consequence of education and progress) and little aware of it, so too are we. This could be realized, but like many things humanity has accomplished, first you have to actually try to accomplish it.

Ok_Meat_8322 t1_j0ny94r wrote

>"Solving" things can only be done in deterministic problem spaces, like physics

I think it's more a matter of "solving" things looking quite different in one domain than in another. And solving a moral dilemma doesn't look at all like solving a problem in physics. But that doesn't mean it doesn't happen; oftentimes "solving" a moral problem or dilemma means deciding on a course of action. And we certainly do that all the time.

iiioiia t1_j0o9mf3 wrote

> And solving a moral dilemma doesn't look at all like solving a problem in physics.

Agree, but from listening to a lot of people talk with supreme confidence about what "is" the "right" thing to do, it seems like this idea is not very broadly distributed.

> oftentimes "solving" a moral problem or dilemma means deciding on a course of action. And we certainly do that all the time

Right, but the chosen course doesn't have to be right/correct; it only has to be adequate for the maximum number of people, something I don't see The Man putting a lot of effort into discerning. If no one ever checks in with The People, should we be all that surprised when they are mad and we don't know why (though not to worry: memes and "explanatory" "facts" can be imagined into existence and mass-broadcast into the minds of the population in days, if not faster)?
