Comments


therealduckrabbit t1_iyktxnz wrote

Schopenhauer and Plato both address this issue in different ways. Plato describes akrasia as weakness of will, where one knows what reason dictates but fails to pursue that goal. Though he identifies it as a phenomenon, he can't explain it, mostly because, as Schopenhauer points out, it rests on an empirically flawed moral psychology. For Schopenhauer, reason does not motivate us to act; desire is an exclusive function of the Will, which alone motivates us to act. Reason is simply an instrument that guides us in efficiently and effectively fulfilling desire.

That doesn't mean rational approaches to ethics have no place. They are best utilized in collective tools like government to ensure good outcomes when using public or finite resources.

The great articulator of this debate is Richard Taylor, the beekeeping philosopher, in his book Good and Evil. The most underrated philosophy publication of the last 100 years imo.

72

cutelyaware t1_iylax98 wrote

You can't solve moral problems with math. You can only express your moral beliefs in symbolic form and manipulate them with the tools of mathematics. If you describe a moral problem in utilitarian terms, then you'll get utilitarian results. But who is to say that a moral problem requires a utilitarian result? That's just begging the question.

95

EyeSprout t1_iyldpwt wrote

I don't think this article sees or explains the full extent of how far math can go to describe morality. All it talks about are utility functions, but math can go so much further than that.

Many moral rules can arise naturally from social, iterated game theory. Some of you might know how the iterated prisoner's dilemma gives us the "golden rule" or "tit for tat" (for those of you who don't, look at this first before reading further: https://ncase.me/trust/), but stable strategies for more complex social games give rise to social punishment and, as a result, rules for deciding whom and which actions to punish.

Most people would say that this merely explains how our moral rules became accepted and used in society, and doesn't really tell us what an "ideal" set of moral rules would be. But I think that, even if it might not uniquely specify what morality is, it puts some strong constraints on what morality can be.

In particular, I think that morality should be (to some degree) stable/self-enforcing. By that, I mean that a set of moral rules should be chosen so that, if most of society is following it, then for most people following the rules rather than discarding them is in their personal self-interest, in the same way that cooperation is in each player's self-interest in the iterated prisoner's dilemma under the "golden rule" or "tit for tat" rule (see the sketch below).
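
To make the self-enforcing idea concrete, here's a minimal toy sketch (my own illustration, not from the article) of the iterated prisoner's dilemma in Python. If your opponent plays tit for tat, defecting wins once and then loses every round after, so cooperating is the self-interested choice:

```python
# Toy iterated prisoner's dilemma with the standard payoffs (T=5, R=3, P=1, S=0).
# Purely illustrative; real social games are far richer than this.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Return the total scores of two strategies over an iterated game."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation
print(play(always_defect, tit_for_tat))  # (204, 199): one early win, then mutual punishment
```

In a population mostly playing tit for tat, the cooperative strategy averages 3 points per round while the defector averages barely over 1, which is the stability/self-enforcement property in miniature.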

12

52planet t1_iylgm3x wrote

The moral rule of the universe is built into its construction. Basically, everything is one thing, so whatever you do to others you do to yourself. There is simply the illusion of separation. Basically all religions hint at this truth in one way or another.

In Catholicism the same idea is essentially present. When Jesus is asked who he is, he simply states "I am," and his Golden Rule is to treat others as you want to be treated, which is implied if you understand that everything is one with god.

Lucifer is associated with the deadly sin of pride. This is because his sin is thinking he is separate from god when the true nature of reality is that he is god. For this he falls into hell, which is described as eternal separation from god. All the other sins emerge from this, as the prerequisite for any bad action is failing to recognize that the entity you're transgressing against is yourself.

The trinity is simply three aspects of god, but each implies the existence of the others.

The Father is simply the unknowable force of creation, the Holy Spirit is the moral virtue of god (the Golden Rule), and the Son is supposed to be the knowable face of god (the manifestation). The Taoists have similar language when they speak about similar subjects.

This rule is the only implied virtue of the universe and it is not subject to change. This is a constructionist view of morality, as it is derived from a belief about how the universe is actually constructed rather than made up by some arbitrary authority. It is true everywhere, at any time, and in any culture. Good and bad are not relative terms; there is an objective standard and it's the universal will. Treat others as if they were you, because they are literally you.

It is important to note that since everything is one thing, this also implies that you are the environment around you as well. So the rule also implies respect for the environment and animals.

It may seem far-fetched, but if you drop enough psychedelic drugs it makes sense, as you will certainly feel and see it yourself. This is the reason you hear every hippy ever say "one love, mannn." They may not be able to articulate the rationale behind it, but in that state the connection to all things is undeniable. This is how I came to the conclusion: I basically thought about a 6g mushroom trip for a year and a half and realized what it was trying to show me about the nature of good and evil.

−10

Tiberiusmoon t1_iylkxsd wrote

It's usually critical thinking that solves moral problems.

Like so:
To address the subject of morals we must consider the broader spectrum it covers.
The end goal is to live in a way that is rational, and as such it must be considered for all life, because life lives.
With a view to unbiased critical thinking, we must challenge our own cultures, which shape our assumptions and biases, because such man-made constructs have no meaning to living things other than humans.
As such, you must consider what influences unethical behaviour in our decision making so that we may avoid it.

We must strive for an unbiased ethical approach to morals, because the study of such a subject requires critical thinking.
To simplify the goal: you can't value a social construct or object over the lives or wellbeing of others.

2

Nymphe-Millenium t1_iylmxmc wrote

I really do agree it's critical thinking, but it's even more the moral values/laws/schemes ingrained in an individual, which may use maths as a tool; it is not pure maths that solves the moral problem.

It's easy to prove, because you may consider criteria other than the number, such as age, gender, weakness, or social "utility", according to your internal moral scheme (moral values).

This article assumes you will always use the number to decide, but that is wrong: one can save 5 children and let the adult drown because that is their moral value, or save 5 people because they are family, or because they share their ethnicity. Or, because their moral value is "help the weak first", they could try to save disabled people first, or save the person who is a doctor, an artist, a politician, etc.

There are a lot of possibilities because there are a lot of moral schemes guiding a person's logical decision, and they could take several different "logical" paths according to their own logic.

Of course, if you pose this problem with disembodied people who are only imaginary silhouettes, people will use pure logic, the mathematical kind; but in real situations, where the people to be saved are "embodied" and real, the moral choices can be different.

So it is a really big simplification, really simplistic, to treat maths alone, detached from internal moral schemes (which carry more weight than theoretical pure logic), as the main determiner of moral choices.

Pure maths can also lead to decisions that would be judged as really immoral in some cultures or situations (a culture being people with common moral schemes) if used as a pure tool, like the movie villain deciding to sacrifice some lives for the good of a greater number of individuals.

If this article were true, and mathematics really were so important as a pure tool for making moral decisions, nobody would frown upon keeping some economic slaves, for example, for the greater good of more people than there are slaves, or upon exploiting ethnic minorities.

Maths is really not a moral tool, especially taken alone, as the article tries to suggest.

8

Tinac4 t1_iylruah wrote

Math isn't only a tool for utilitarians, though. The real world is fundamentally uncertain--people are forced to make decisions involving probability all the time. To use an example from the essay, consider driving: if there's a 0.0000001% chance of killing someone while driving to work, is that acceptable? What about a 5% risk? Most deontologists and virtue ethicists would probably be okay with the first option (they make that choice every day!), but not the second (also a choice commonly made when e.g. deciding not to drive home drunk). How do they draw the line without using numbers on at least some level? Or what will they do when confronted with a charitable intervention that, because the details are complicated, will save someone's life for $1,000 with 50% probability?

A comprehensive moral theory can't operate only in the realm of thought experiments and trolley problems where every piece of the situation is 100% certain. It has to handle uncertainty in the real world too, and the only way to do this is to be comfortable with probabilities, at least to some extent.
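
To be clear about what I mean by "using numbers on some level," here's a toy sketch (my own numbers and threshold, not anything from the essay) of the implicit comparison a "is this safe enough?" judgment makes:

```python
# Toy expected-harm comparison for the driving example above.
# The threshold is not objective; it only makes the implicit trade-off visible.

def expected_deaths(p_fatal_per_trip: float, trips_per_year: int) -> float:
    """Expected number of people killed per year for a given per-trip probability."""
    return p_fatal_per_trip * trips_per_year

ACCEPTABLE = 1e-4  # hypothetical personal threshold: 1-in-10,000 expected deaths per year

# 0.0000001% (sober commute), 0.01% (a borderline case), 5% (driving home drunk)
for p in (1e-9, 1e-4, 5e-2):
    risk = expected_deaths(p, trips_per_year=500)
    verdict = "acceptable" if risk <= ACCEPTABLE else "not acceptable"
    print(f"per-trip risk {p:.0e}: {risk:.2e} expected deaths/year -> {verdict}")
```

Wherever you personally put the threshold, the point is that some threshold is being applied, and that's already arithmetic over probabilities.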

19

cutelyaware t1_iyls8ml wrote

> How do they draw the line without using numbers on at least some level?

You can't use numbers to justify your morality. You can only optimize it if your morality happens to be purely utilitarian.

10

Tinac4 t1_iylss65 wrote

I didn't say anything about using numbers to justify morality, and neither did the OP. My point is that a lot of real-life moral dilemmas involve uncertainty, and it's very hard to resolve them if your moral framework isn't comfortable with probabilities to some extent. For instance, how would you respond to the two scenarios I gave above?

15

MightyTVIO t1_iylw6nl wrote

If you can describe a precise way of determining what is most moral, or even just which of 2 choices is more moral, then you have an algorithm. If it is not precisely defined, then it's ambiguous, in which case I'd argue it's not justified, since you haven't even defined what it is.

6

cutelyaware t1_iylyu0s wrote

A person's morality is simply their sense of right and wrong behaviors. It doesn't matter if you have an algorithm or not. That would only let you be more consistent with your actions. My point is that you don't get to choose your morality any more than you get to choose your sexual orientation.

−4

Critical_Ad_7778 t1_iym46db wrote

I recommend reading the book "Weapons of Math Destruction". The author describes several mathematically sound algorithms that produce terrible outcomes.

Here is an example: An algorithm helps judges decide if someone should get probation. Part of the calculation includes the likelihood that they will be arrested again.

The problem is that currently, you're more likely to be arrested if you're black.

The algorithm becomes racist accidentally. This is just one example of how dangerous it is to base all of your choices on "logic and reason".
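
Here's a stripped-down sketch of the mechanism (my own toy simulation, not the book's actual model): if arrest records are the training signal and one group is policed more heavily, a perfectly "logical" score reproduces the bias.

```python
# Minimal sketch: a risk score trained on arrests inherits the bias in policing,
# even though the underlying offence rate is identical for both groups by construction.
import random

random.seed(0)
OFFENCE_RATE = 0.10                         # same true rate for both groups
POLICING_INTENSITY = {"A": 0.2, "B": 0.6}   # group B is policed 3x more heavily

def arrested(group: str) -> int:
    """Return 1 if a random person from the group is arrested: offends AND is caught."""
    offends = random.random() < OFFENCE_RATE
    caught = random.random() < POLICING_INTENSITY[group]
    return int(offends and caught)

# The "model" is just the observed arrest rate per group.
arrest_rate = {g: sum(arrested(g) for _ in range(100_000)) / 100_000 for g in ("A", "B")}
print(arrest_rate)  # roughly {'A': 0.02, 'B': 0.06}: group B looks 3x "riskier"

# A judge who trusts this score denies probation to group B far more often,
# even though the true offence rate was set equal for both groups.
```

The math is sound; the moral failure is in what the data measures.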

17

MightyTVIO t1_iym5j0l wrote

If you're arguing for lack of free will then I'd agree, but it makes the whole point moot. A moral theory is generally independent of any specific person, so I'm not sure why that's relevant. Furthermore, you absolutely could reduce a person's actions to an algorithm, just maybe a complicated one.

4

chrispd01 t1_iym79yj wrote

It's not really a mathematical calculation though … the driver thinks one course is safe enough and the other not.

You could, I suppose, do some kind of statistical study to try to get some parameters, but that is independent of the driver's thinking.

6

Tinac4 t1_iym8rv6 wrote

How does the driver decide that one situation is “safe enough” while the other one isn’t? What’s the right choice if the odds of an accident were somewhere in the middle like 0.01%?

I’m not saying that there’s an objective mathematical answer to what “safe enough” means. There isn’t one—it’s a sort-of-arbitrary threshold that’s going to depend on your own values and theory of ethics. However, these situations do exist in real life, and if your theory of ethics can’t work with math and probabilities to at least some extent, you’re going to get stuck when you run into them.

6

grateful-biped t1_iymcv72 wrote

One of the most misunderstood concepts is that logic or reasoning can solve all problems. Logic is a tool. It’s our most powerful tool but it doesn’t work in all scenarios, especially ethical dilemmas.

We are human & by definition we behave irrationally. This is just one of the areas where logic is misleading. Game Theory often fails to predict human actions due to this human, all too human, characteristic.

2

feliweli49 t1_iymg53e wrote

Most of those deductions also come from premises like "less deaths is better". Where people disagree on moral questions, it usually isn't the deduction itself or flawed logic, but the premises.

3

experimentalshoes t1_iymj8p8 wrote

Probability is part of what makes us human though, as with the ability to describe our odds of survival somewhere rather than simply feeling it in our bodies.

Our awareness of uncertainty and risk is rooted in emotion, or basic drives, and they later became quantitative disciplines, similar to psychology. Likely or unlikely outcomes have always shaped our actions and our beliefs, sometimes also in contrast to the odds, where things may become heroic, irresponsible, etc.

You might look to numbers not to justify your morality, which is a precise form of argument, but to investigate it. Numbers can bring you back in touch with basic human drives we may have forgotten in the realm of abstract thought. Justification can then be built on top of the findings of that investigation.

3

timbgray t1_iymjco9 wrote

I enjoyed the article, what follows is context, not criticism.

If you come across an article that contains what seem to be large numbers, or infinities (which I didn't see here), take a minute to get at least a sense of what really large numbers are like (or small numbers, as the inverse of a really large number). Numberphile has some good videos on Graham's number and TREE(3). These really large numbers provide a useful context. If an author pulls out what seems to be a small probability, appreciate how massive that "small" probability is compared to the range of possible really small probabilities.

1

polyglotky t1_iymjxur wrote

it is, of course, a travesty to a pragmatist like myself that we now live in an era where moral reasoning has become more prevalent than building moral character. how are we so lost that not only do we see no problem in the classic trolley problem (the very construction of which reduces the need to build moral character in the aristotelian sense), but we even encourage the development of such a skill?

the revelation that logic can be applied to ethics is nothing new; since the beginning of western philosophy, philosophers have applied reason to differentiate between the moral and the immoral. i blame kant, however, for popularizing the idea that reason alone can birth ethical principles--that we may, in a case such as the trolley problem, rely entirely upon our reason to inform how we act. Reason has now become the subject (which Nietzsche takes to an entirely new level), ethics the object; whereas (largely) pre-Kant, ethics was the subject, to which reason was applied not to discover it but to mould it.

2

experimentalshoes t1_iymk6ja wrote

That’s only true if the algorithm is written to build patterns and reintegrate them into its decisions, which was a human decision to program, AKA hubris. There would be no problem if it was written to evaluate the relevant data alone. It wouldn’t do anything to fix the underlying social problems, of course, but ideally this would free up some human HR that could be put on the task.

1

autonomicautoclave t1_iymm2tq wrote

If that is true then philosophers have been wasting our time with moral arguments. If you can’t choose what morality to believe in, it’s no use trying to convince someone to follow your morality. It would be like trying to convince a heterosexual person to become homosexual.

3

Critical_Ad_7778 t1_iymm9uv wrote

I want to understand your argument. My writing might sound snarky, so I apologize to you in advance.

  1. Wouldn't the algorithm be written by a human?
  2. Wouldn't the reintegration happen by a human?
  3. Aren't all decisions made by humans?

I don't understand how to remove the human element.

4

experimentalshoes t1_iymnh7e wrote

I did mention that it was written by a human, yes, but the reintegration part is called “machine learning” and doesn’t necessarily require any further human input once the algorithm is given its authority.

I’m trying to say the racist outcome in this example isn’t the result of some tyranny of numbers that we need to keep subjugated to human sentiment or something. It’s actually the result of human overconfidence in the future mandate of our technological achievements, which is an emotional flaw, rather than something inherent to their simple performance as tools.

3

chrispd01 t1_iymqet8 wrote

I think in reality it comes under intuition. You have an idea, experientially, as to what is a reasonable course of action to take. To the extent a mathematical decision gets made, it's at the level of "I'll probably be OK."

Thinking about it, there is a good analogy in the world of sports - look at the change in basketball shot patterns. The change is traceable to applying an economic/statistical approach to those decisions.

But my point is that people are more like the players before the analytical approach took over. They tend to use intuition and "feel" more than the sort of evaluation you are talking about.

In fact it's really interesting how wrong people's intuitions are in those situations … making the less efficient choice, choosing the wrong strategy, etc.

That to me shows that in practice people do not ordinarily make the sort of calculations you were describing. It doesn't mean that they should not make them, just that they do not.

9

Wizzdom t1_iymx5r7 wrote

Except information can change your sense of right and wrong. Take smoking/drinking while pregnant. It didn't feel wrong until we learned it's harmful to the baby. You can absolutely change your sense of right and wrong by studying and thinking about it.

1

chrispd01 t1_iyn2qwr wrote

It looks like that, except in practice it's not. There isn't a real analysis going on in terms of real data etc., hence the basketball model - once people start actually applying analysis, the behavior markedly changes.

That means people aren't doing that, because once they start doing that their behavior changes.

The counter to that, I think, is that people think they are doing that but are doing a bad job. In general, though, I don't think they really are - they don't make a conscious evaluation of the steps to solve the problem, they just intuit it. They may think they exercised judgment, but in practice they did not.

5

BernardJOrtcutt t1_iyn6zys wrote

Your comment was removed for violating the following rule:

>Argue your Position

>Opinions are not valuable here, arguments are! Comments that solely express musings, opinions, beliefs, or assertions without argument may be removed.

Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

1

BernardJOrtcutt t1_iyn76pb wrote

Your comment was removed for violating the following rule:

>Argue your Position

>Opinions are not valuable here, arguments are! Comments that solely express musings, opinions, beliefs, or assertions without argument may be removed.

Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

1

XiphosAletheria t1_iyndraz wrote

I think the problem there is that people don't generally know such probabilities in the first place. I doubt the vast majority of people could tell you what their chance of being in a car accident is normally, or what it increases to when they are drunk. Nor do they probably think of it as a chance of "them killing someone". An accident is by definition beyond someone's personal control. Likewise, your charitable donation example seems unrealistic, because those numbers are pretty much never going to come up - charities typically rely on emotional appeals rather than mathematical ones.

And the numbers tend not to matter anyway. Obviously it is better to donate and try to save a life than to not donate and guarantee the death (if you believe in a moral obligation to save lives), even if the chance of success is low.

5

Tiberiusmoon t1_iyo46gd wrote

Well, yeah, because either way people will die.
If it's a choice between dying and another person starving to death, for example, it is the person starving who has to live through the pain.

It's like choosing to die slowly or quickly.

1

Cli4ordtheBRD t1_iyo5vaj wrote

Hopping on the top comment to provide more context on "longtermism" and "effective altruism", which I think the author was criticizing (but I'm honestly not sure).

First things first: humanity (in our biological form) is not getting out of our Solar System.

So the whole "colonize the galaxy" plan with people being born on the way is not going to work. Those babies will not survive, because every biological system depends on the constant force of Earth's gravity. Plus their parents are probably not going to fare much better, as their bone density degrades over time and the lost calcium develops into painful kidney stones.

Here's an article from the Economist's 1843 Magazine that covers Effective Altruism (which is getting a lot of attention right now thanks to Sam Bankman-Fried having bankrolled the movement).

My perspective is that there are a lot of people with good intentions, but the intellectual leaders of the movement are ethically challenged, at the "getting high on their own farts" stage, and the movement is being seized on by some of the absolute worst people (Elon Musk & Peter Thiel) to justify their horrible actions, with dreams of populating the stars.

>The Oxford branch of effective altruism sits at the heart of an intricate, lavishly funded network of institutions that have attracted some of Silicon Valley’s richest individuals. The movement’s circle of sympathisers has included tech billionaires such as Elon Musk, Peter Thiel and Dustin Moskovitz, one of the founders of Facebook, and public intellectuals like the psychologist Steven Pinker and Singer, one of the world’s most prominent moral philosophers. Billionaires like Moskovitz fund the academics and their institutes, and the academics advise governments, security agencies and blue-chip companies on how to be good. The 80,000 Hours recruitment site, which features jobs at Google, Microsoft, Britain’s Cabinet Office, the European Union and the United Nations, encourages effective altruists to seek influential roles near the seats of power.

#William MacAskill

A 35 year-old Oxford Professor is the closest thing to a founder and has produced increasingly controversial positions.

>The commitment to do the most good can lead effective altruists to pursue goals that feel counterintuitive. In “Doing Good Better”, MacAskill laments his time working as a care assistant in a nursing home in his youth. He believes that someone else would have needed the money more and would have probably done a better job. When I asked about this over email, he wrote: “I certainly don’t regret working there; it was one of the more formative experiences of my life…My mind often returns there when I think about the suffering in the world.” But, according to the core values of effective altruism, improving your own moral sensibility can be a misallocation of resources, no matter how personally enriching this can be.

#Longtermism

>One idea has taken particular hold among effective altruists: longtermism. In 2005 Nick Bostrom, a Swedish philosopher, took to the stage at a ted conference in a rumpled, loose-fitting beige suit. In a loud staccato voice he told his audience that death was an “economically enormously wasteful” phenomenon. According to four studies, including one of his own, there was a “substantial risk” that humankind wouldn’t survive the next century, he said. He claimed that reducing the probability of an existential risk occurring within a generation by even 1% would be equivalent to saving 60m lives.

>Disillusioned effective altruists are dismayed by the increasing predominance of “strong longtermism”. Strong longtermists argue that since the potential population of the future dwarfs that of the present, our moral obligations to the current generation are insignificant compared with all those yet to come. By this logic, the most important thing any of us can do is to stop world-shattering events from occurring.

#Going full Orwell

>In 2019 Bostrom once again took to the ted stage to explain “how civilisation could destroy itself” by creating unharnessed machine super-intelligence, uncontrolled nuclear weapons and genetically modified pathogens. To mitigate these risks and “stabilise the world”, “preventive policing” might be deployed to thwart malign individuals before they could act. “This would require ubiquitous surveillance. Everyone would be monitored all of the time,” Bostrom said. Chris Anderson, head of ted, cut in: “You know that mass surveillance is not a very popular term right now?” The crowd laughed, but Bostrom didn’t look like he was joking.

>Not everyone agrees. Emile Torres, an outspoken critic of effective altruism, regards longtermism as “one of the most dangerous secular ideologies in the world today”. Torres, who studies existential risk and uses the pronoun “they”, joined “the community” in around 2015. “I was very enamoured with effective altruism at first. Who doesn’t want to do the most good?” they told me.

>But Torres grew increasingly concerned by the narrow interpretation of longtermism, though they understood the appeal of its “sexiness”. In a recent article, Torres wrote that if longtermism “sounds appalling, it’s because it is appalling”. When they announced plans on Facebook to participate in a documentary on existential risk, the Centre for Effective Altruism immediately sent them a set of talking points.

>Chugg, for his part, also had his confidence in effective altruism fatally shaken in the aftermath of a working paper on strong longtermism, published by Hilary Greaves and MacAskill in 2019. In 2021 an updated version of the essay revised down their estimate of the future human population by several orders of magnitude. To Chugg, this underscored the fact that their estimates had always been arbitrary. “Just as the astrologer promises us that ‘struggle is in our future’ and can therefore never be refuted, so too can the longtermist simply claim that there are a staggering number of people in the future, thus rendering any counter argument mute,” he wrote in a post on the Effective Altruism forum. This matters, Chugg told me, because “You’re starting to pull numbers out of hats, and comparing them to saving living kids from malaria.”

>Effective altruists believe that they will save humanity. In a poem published on his personal website, Bostrom imagines himself and his colleagues as superheroes, preventing future disasters: “Daytime a tweedy don/ at dark a superhero/ flying off into the night/ cape a-fluttering/ to intercept villains and stop catastrophes."

I think this is ultimately driven by a whole group of people obsessed with "maximizing" instead of "optimizing". They want a number (to the decimal) about which option to choose and can't stand the thought of "good enough, but it could have been better". Essentially they're letting perfect be the enemy of the good and if we're not careful they're just going to slide into fascism with more math.

1

enternationalist t1_iyocmv0 wrote

I suppose I wouldn't infer that, but I see how you are reading it; if I say "Look, this blender can't make a perfect smoothie that everyone would like", to me that doesn't imply that I think a perfect smoothie liked by everyone can exist; I'm just clarifying that such a concept isn't the goal.

I think what they are really trying to say is that the method constrains morality such that there are only a few local maxima of stability - only some moral systems can be stable. It's not that it says these systems are or are not morally good; in fact it doesn't assign them any sort of "goodness" score - it only tells us what is socially stable enough to be perpetuated as a moral system.

So, if our goal is to arrive at a moral system, this method theoretically lets us discard many unstable possibilities.

In this way, this method can reject a common set of suboptimal ("non-ideal") solutions, even if "ideal" solutions are totally unique for each person as you suggest, so long as we all agree with the premise that stability is good. It relies on that common criterion, even if all other criteria are totally unique.

That's how some "non-ideal" solutions can be consistently identified even if "ideal" is highly personal - it cannot identify ALL non-ideal solutions for all people; that can't be done without asking literally every human what they'd prefer - but it CAN identify a consistent subset of those solutions that will not be functional, regardless of personal views (unless you disagree with the basic premise of stability!)

1

grateful-biped t1_iyoe71x wrote

You're right, but Game Theory didn't have modest ambitions 40-60 years ago. It was going to guide our national foreign policy & change the world. It's only been in the past 20+ years that its adherents admitted Game Theory had a small place in predicting behaviors by individuals & foreign governments.

"Optimal conditions" exist in the laboratory, not in reality. At best Game Theory provides us with options & approximate probabilities. Very approximate.

1

EyeSprout t1_iyofnq7 wrote

The stability condition itself is an independent concept from "ideal" morality. I was using the idea of an "ideal" system of morality for reference because it's what people seem to be most familiar with, even if most people here probably don't believe in the existence of an ideal set of moral rules themselves.

As I said, the stability condition doesn't uniquely define a set of moral rules, it's possible that multiple different sets of moral rules can satisfy it at the same time. Different people with different values will still arrive at different sets of moral rules that all satisfy the stability condition.

A rationale behind caring about the stability condition in a system of morality is that actual systems of morality and ethics all tend to approximately follow the stability condition, due to evolutionary pressures. A moral system that is not (approximately) stable in practice won't persist very long and will be replaced by a different system. So the stability condition is "natural" and not arbitrarily decided by some individual values. Few conditions like that exist, so it's a valuable tool for analyzing problems of morality.

1

cutelyaware t1_iyokehg wrote

I've had new situations come to light that cause me to rethink my proper responses to moral questions, but I can't think of anything that changed my morality. For example I still think that it should be a woman's right to choose abortion, but I've come to believe that pro-life people have a point.

How have you changed your morality?

2

EyeSprout t1_iyom6dr wrote

Absolutely, stability is difficult since the world is constantly changing, but the change is slow enough that evolution does tend to produce approximately stable systems. That's a straightforward result of the math; less stable states change quickly, and therefore your system spends less time in them.

1

cutelyaware t1_iyomja5 wrote

The moral position involved in your example is that it's wrong to harm fetuses. If you learn that drinking harms fetuses, then you haven't changed your position by then concluding that's wrong behavior. You started off believing it was wrong to harm fetuses, and you ended up still believing that. You've just updated your opinion based on new information.

2

EyeSprout t1_iyon8r9 wrote

The oxygen catastrophe is possibly the worst possible counterexample you could pick here. The oxygen catastrophe happened slowly enough for all forms of life to settle in niches, enough for game theory to direct evolution, and for a stability condition to apply. Those niches were approximately stable while they existed.

That's all that the stability condition needs to be applied. It's not some complicated concept.

1

cutelyaware t1_iyooqnt wrote

Can you give me an example of how you've changed your mind and adopted a different morality, or convinced someone else to change theirs? For example I see plenty of arguments of the form "If you believe killing is wrong, then..." I've never seen someone decide "Yes, I suppose killing is fine". I've only seen them decide that it's OK or not OK to kill in some specific situation.

2

EyeSprout t1_iyopym1 wrote

For example, in iterated prisoner's dilemma "always cooperate with your opponent" is not stable, because your opponent's optimal strategy against that is to defect every turn. The simulation I linked in my original comment shows a ton of strategies that are not stable and shows quite directly how they would quickly get eliminated by evolution.

For a simple example in evolution, most mutations harm the organism and are unstable. If most organisms in a population had a very harmful mutation and a small population didn't, that small population would quickly take over the larger one. Hence, that mutation is unstable.

A slightly nontrivial example would be blind altruism in a situation where your species is severely starved of resources. If most animals were blindly altruistic and a small number of animals were not and would take advantage of the altruistic animals, then again, that small number would outcompete the larger population. So blind altruism isn't stable.

Of course we can't find many real-life examples; that is because they tend to be quickly eliminated by evolution. If they exist, it's usually only temporary.
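
A back-of-the-envelope version of that elimination process (my own toy replicator dynamics with made-up payoffs, not a biological model) looks like this:

```python
# Toy replicator dynamics: blind altruists vs. exploiters under scarce resources.
# Payoffs are invented purely to illustrate instability.

def step(p_altruist: float) -> float:
    """One generation: each type's share grows in proportion to its average payoff."""
    # Altruists do fine with each other but are exploited by defectors;
    # exploiters do well against altruists and poorly against each other.
    payoff_altruist = 3 * p_altruist + 0 * (1 - p_altruist)
    payoff_exploiter = 5 * p_altruist + 1 * (1 - p_altruist)
    mean_payoff = p_altruist * payoff_altruist + (1 - p_altruist) * payoff_exploiter
    return p_altruist * payoff_altruist / mean_payoff

p = 0.99  # start with 99% blind altruists
for _ in range(40):
    p = step(p)
print(f"altruist share after 40 generations: {p:.4f}")  # collapses toward 0
```

The exact numbers don't matter; what matters is that blind altruism isn't an equilibrium, so it doesn't stick around.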

1

cutelyaware t1_iyoq1g1 wrote

> You might look to numbers not to justify your morality, which is a precise form of argument, but to investigate it.

Certainly, math is very useful in lots of moral situations, but I'm making a different claim which is that it can't be used to decide your moral foundation. If you feel that you've done that, then please tell me how it happened.

2

cutelyaware t1_iyorv39 wrote

I think I disagree. I feel our moral disagreements aren't around ideas such as "less death is better", but around the details of "how", not "what". For example is it OK to kill animals for food? We can argue over when it's OK and when it's not, but I can't think of an example where someone came to the decision that less death is better or gave up such a belief.

2

EyeSprout t1_iyos2ps wrote

Ah, wait, just in case... when I say "stability" it has nothing to do with stability of government and things like that. I meant it in more of the physics sense, that small perturbations wouldn't cause a large effect.

1

Georgie_Leech t1_iyoveb9 wrote

Your phrasing seems to imply that Game Theory is a school of thought, as opposed to a branch of mathematics (one has adherents, the other doesn't). You also seem to be assuming a very limited frame of reference ("our national policy," "foreign governments") for an international field. Might you be confusing "Game Theory as a field is not designed as a predictive model for individual actions" with "certain governments believed Game Theory was something it wasn't"?

1

feliweli49 t1_iyovk13 wrote

The "less death is better" part refers primarily to the trolley problem in the blog. It's a naive, utilitarian way to quantify those problems and disregards a lot of the why behind the decision taken. My point is that those alternative decisions can still be expressed with logic, because they just have different premises.

"Is killing animals for food ok?" has plenty of different premises for both sides, and both sides can be expressed in a logically sound way.

E.g. "it's not ok to kill sentient beings, animals are sentient, eating animals kills them," etc. ends up with a hard no. Those premises aren't universally agreed upon, so even using the tools logic provides won't give us a clear universal answer on whether killing animals for food is ok.

2

Phil003 t1_iyovzno wrote

Well, there are actually objective mathematical answers to what is "safe enough" in use in safety engineering (at least in theory... see my remarks at the end).

On academic level there are basically two generally referred methods to determine what is "safe enough":

(Remark: To handle this question, the concept of risk is used. In this terminology, risk is basically the combination of the magnitude of a potential harm and the probability of that harm happening. So if there is a 1% probability that 1000 people will die, the risk is 10, and likewise if there is a 10% chance that 100 people will die, the risk is again 10.)

  1. One is the ALARP principle ("as low as reasonably practicable"). This is basically a cost-benefit analysis. In a very simplified way, what you do is determine the current risk of the system. Let's say there is a 10% probability that a tank will explode in a chemical plant (e.g. before the planned closure date of the plant), and if this happens, on average 100 people would die in the huge explosion and the resulting fire; the risk is then 0.1*100=10. Then you assign a monetary value to this: say you assume that one human life is worth 10 million € (this is just a random number, see the end of my post), then risk*human_life_cost = 100 million €. Now let's say you can reduce the risk to 5 (e.g. instead of 10%, there will be only a 5% probability that 100 people will die) by implementing a technical measure, e.g. you install automatic fire extinguishers everywhere in the chemical plant, or something like that. If you do this, you reduce risk*human_life_cost to 50 million €, so you have a 50 million € benefit. So how do you decide whether you should do this according to the ALARP principle? Easy: you consider the cost of implementing this technical measure (buying, installing, maintaining etc. all the automatic fire extinguishers), and if it costs less than the benefit (50 million €) you should do it; if it would cost more than the benefit, then it would not be "reasonably practicable" and therefore you should not do it. (A toy version of this calculation is sketched right after this list.)
  2. The other approach is basically to use the concept of acceptable risk. In this case you first determine the acceptable risk (e.g. a worker in a chemical plant shall have a lower probability of dying in an accident per year than 1 in a million, i.e. out of one million workers only one shall die each year) and then you reduce the risk posed by the system until you reach this level. In this model the cost of reducing the risk is irrelevant; you must do whatever is necessary to reach the level of acceptable risk.
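
Here is that toy ALARP calculation, using the same illustrative numbers as in item 1 (none of these figures come from a real assessment):

```python
# Toy ALARP cost-benefit check with the illustrative numbers from item 1 above.

def risk(probability: float, fatalities: float) -> float:
    """Risk as the probability of the harm times its magnitude."""
    return probability * fatalities

VALUE_OF_LIFE_EUR = 10_000_000      # purely illustrative monetisation of one life

risk_before = risk(0.10, 100)       # 10% chance that 100 people die -> risk = 10
risk_after = risk(0.05, 100)        # after installing extinguishers  -> risk = 5

benefit_eur = (risk_before - risk_after) * VALUE_OF_LIFE_EUR   # 50 million EUR
measure_cost_eur = 20_000_000       # hypothetical cost of the extinguisher system

# Simplified ALARP decision: implement the measure if it costs no more than the benefit.
print(f"benefit: {benefit_eur:,.0f} EUR, cost: {measure_cost_eur:,.0f} EUR")
print("implement the measure" if measure_cost_eur <= benefit_eur
      else "not reasonably practicable")
```

In a real assessment the uncertainty in every input makes this far messier, which is exactly the point of my remarks below.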

I am a functional safety engineer working in the automotive industry, so I don't claim to have a general overview of every domain of safety engineering, but let me add some remarks to these academic models based on the literature and discussion with other experts on my field:

  • ALARP: sounds very nice in theory, but I think the main problem is that pretty much no regulatory body or company would publish (or even write down! too much risk of leaking documents) their assumptions about the worth of a human life expressed in money, or otherwise the witch hunt would immediately start...
  • Concept of acceptable risk:
    • Here it is important to highlight that what can be considered an acceptable risk is decided by society, and it can change significantly depending on the system in question. This also pretty much means that this decision is not necessarily rational. E.g. people accept a higher risk while driving a car than when they fly as a passenger. (My understanding is that this is because people feel "in control" while driving, but feel helpless about controlling the situation while on board a plane. So this is not a rational decision.)
    • Perhaps this acceptable risk concept looks strange, but it really makes sense. Consider car driving. Every year over 1 million people die in traffic-related accidents worldwide, and people are fully aware that the same can happen to them on any day they drive a car. Still they choose to take the risk, and they sit in their car every morning. Society has basically decided that 1 million people dying every year in car accidents is an acceptable risk.
    • Publishing acceptable risk values has challenges similar to publishing the worth of a human life expressed in money, but the situation is a bit better; there are actually some numbers available in the literature for certain cases (but not everywhere, e.g. in my domain, the automotive industry, we kind of avoid writing down a number).
  • In my field of expertise (developing safety-critical systems including complex electronics and software), estimating the probability that the system will fail and result in an accident is just impossible (describing the reasons would take too much time here), therefore there exists no really reliable way to estimate the probability of an accident, and therefore it is not possible to quantify the risk with reasonable precision. Therefore neither of the above two methods is really applicable in practice in its "pure" form. (And I am quite sure that the situation is pretty similar in many other fields of safety engineering.)

So my summary is that there exist generally accepted academic models to answer the question of what is "safe enough". These models are in theory the basis of the safety engineering methods followed everywhere in industry, so applying mathematics to make moral decisions (e.g. to determine what is an acceptable probability of somebody dying in an accident) is kind of happening all the time. In practice this whole story is much more complicated, e.g. because of the above-mentioned reasons, so what really happens is that we use these models as "guidance" and basically try to figure out what is safe enough based mostly on experience. I would be very surprised if these academic models were used anywhere in significant numbers in a "clear" and "direct" way.

4

cutelyaware t1_iyp3j01 wrote

Alright then it seems we're in agreement. "less death is better" is the moral position that doesn't yield to mathematics. Only the application of that position to particular situations can.

1

wowie6543 t1_iypl4q3 wrote

This is redundant!

Nothing can be solved WITHOUT logic and probability!

logic and probability are basic elements of all actions and all analytics (of action).

So every method-goal relationship, every ethical problem, and every attempt at goal attainment needs logic and probability to measure success.

Kant's imperatives give you everything you need. The hypothetical gives you the logic and the categorical gives you the clear goal you need to attain.

Of course, if you use different categories, more than one, you need more hypotheses. But the hypotheses can only be worked out with logic and probability.

The problem of whether we save one or 100 people is not a problem of not using mathematics or of using it wrong; it's a problem of our moral categories/missing goals, which are not set with alternatives, and of our flawed understanding of logic/rationalism.

It's a failure to see morality as a right of nature. There are no rights of nature or mankind; there are only rights that we establish and take care of! Goals and non-goals that can be reached or not - that function or not!

So it's up to us and our "actual goals and logics" to set the moral standards, and so it's up to us how many we save, or whether we save anybody at all, and how we save them. We don't have the duty unless we give ourselves the duty!

SOCIAL utilitarianism, also called TECHNOCRACY, is about the goal of making everybody as happy as it gets. This is totally a logical and also quantifiable system. So for me it doesn't work to divide morality and logic, just as you can't divide action and logic.

Every moral action underlies the laws of logic and rational goal attainment. And every moral standard you set had better be analyzed correctly, which means you had better use A GOOD SYSTEM of logic and other quantifiable systems, or your truth and (social/moral) efficiency will be imprecise - not good ;)

As subjective and unscientific "logics" are mostly incomplete ;) especially when it comes to social structures ... lol

So the real problem here is imho the question of why we have specific moral standards (which some falsely think are not subject to logic or the laws of probability), and the other question would be about the precision of our action analytics (and why we think it is not logical or ...).

2

wowie6543 t1_iypmavs wrote

But it's not the problem of the logic/rationalism, it's the problem of the missing goal! The missing nature of things, such as humankind.

So game theory is misleading because not all possible goals are included. So the logic must fail, because we are missing a goal to attain.

So, yes and no: you can't work it out without logic, but the logic itself is not everything you need; you also need a purpose for which you can use your logic. Many forget about the actual (and possible) goals that are relevant for the analysis/statistics.

2

enternationalist t1_iyq1glb wrote

How is changing your answer to a moral question distinct from your morality changing? Per your own definition, your sense of right and wrong has shifted to give you a different answer.

I used to believe making others happy as a priority was the moral choice, now I think people should generally be more self centered. I used to oppose any sort of violence; now I believe it is sometimes necessary or justified. By what definition are these not a change in morality?

1


VitriolicViolet t1_iyt2gmk wrote

>If you can’t choose what morality to believe in, it’s no use trying to convince someone to follow your morality.

Does anyone 'choose' which moral theory to follow?

I would argue the one you pick is merely the one that you feel is best, i.e. you won't be convinced by a rational argument, since you never reasoned yourself into your belief in the first place.

Logic works from emotion, i.e. if you think utilitarianism is best, it's because you feel it's best; reasoning and logic happen after the fact.

I never reasoned myself into my morals; I pick and choose based on context and use my emotions to guide my reasoning (you cannot determine which is 'better' without use of emotion).

1

VitriolicViolet t1_iyt343x wrote

And? I knew smoking was harmful before I tried it (I read the studies), and yet I've been doing it for 10 years with no intention of stopping.

Some people value different things, resulting in different morals. Personally, safety and security aren't even in my top 3 values (honesty, integrity and personal freedom), hence why smoking being factually bad hasn't changed my behavior.

What is more moral: allowing parents to teach their kids anything, or having the state determine at what age certain concepts like sexuality and religion should be taught? Your answer will 100% be determined by your values, and if you ask 10 people, all the answers will be different and none will be wrong.

1

VitriolicViolet t1_iyt3hz2 wrote

Not that I can think of.

Personally, everything is morally permissible in context (no system of morality ever conceived actually works; any system with inflexible rules is destined to fail, e.g. is genocide always wrong? If a nation tries to genocide you and will not stop no matter what, then surely, collectively, killing them all is morally correct?).

Theft, murder, lies - all are moral in certain scenarios.

1

Wizzdom t1_iytga9f wrote

First of all, that has nothing to do with what I said. Second, I was talking about smoking while pregnant. Surely you at least value not harming others unnecessarily. If you don't, then you are immoral.

1

Ok_Meat_8322 t1_iyye0so wrote

You can only "solve" moral problems with logic or mathematics once you've already assumed a particular moral philosophy or ethical framework- consequentialism, for instance.

But which moral philosophy/ethical framework is correct or superior is the crucial question; once you have an ethical framework the solution to most moral dilemmas follows fairly straightforwardly, and in the case of utilitarianism/consequentialism may even boil down to no more than simple arithmetic... whereas in other moral frameworks (e.g. deontic systems) quantities are irrelevant and so mathematics has nothing to say.

So this blog's thesis isn't all that objectionable, so far as it goes, but it seems to me that it's addressing the least tricky or difficult aspect of moral reasoning, and so isn't telling us anything particularly useful or anything we didn't already know or tend to agree on.

2

Cregaleus t1_iyys1cq wrote

Making an appeal to the authority of the behavior of deontologists isn't persuasive. Most dietitians eat pizza. Pointing that out isn't evidence that pizza is health food.

Comprehensive theories that cannot be comprehensively articulated are not comprehensive theories. I.e., it is a real problem that we cannot say why it is moral to drive to work with a 0.0000001% risk of killing someone and immoral at 5%.

1

iiioiia t1_iz0qez1 wrote

> You can't solve moral problems with math. You can only express your moral beliefs in symbolic forms and manipulate them with the tools of mathematics

I agree you can't guarantee a solution, or create a solution that solves them directly, but a math based solution could cause belief formation that is sufficient to alter human behavior enough to (at least substantially) solve the problem, could it not?

On one hand, this is kinda "cheating"....but on the other hand, ignoring how reality actually works isn't flawless either.

0

iiioiia t1_iz0rfl2 wrote

> Nothing can be solved WITHOUT logic and probability!

Disagree - heuristics can solve many issues, and there is substantial evidence that heuristics do not run on (actual, flawless) logic.

> logic and probability are basic elements of all actions and all analytics (of action).

So too with heuristics!

> Kants Imperatives gives you everything you need. The hypothetic gives you the logic and the categoric gives you the clear goal you need to attend.

Is this necessarily an evidence-based True Fact, or might it be merely a heuristic powered belief?

> So its up to us and our "actual goals and logics" to set the moral standards. and so its up to us how many we safe or if we dont safe anybody and how we safe them. we dont have the the duty, only if we give us the duty!

What if people disagree with other people's "logic" and conclusions?

0

iiioiia t1_iz0rmk7 wrote

> You can only "solve" moral problems with logic or mathematics once you've already assumed a particular moral philosophy or ethical framework- consequentialism, for instance.

What if you merely present all of the valid options in a steel-manned manner, making no presumptions or epistemically unsound assertions along the way?

0

Ok_Meat_8322 t1_iz1twka wrote

If you don't assume any value judgment or normative statements, you cannot conclude with any value judgments or normative statements; any argument that did the latter without doing the former would necessarily be deductively invalid.

And it has nothing to do with the manner of your presentation; "steel-manned" or otherwise, you still run afoul of Hume's law if you attempt to conclude an argument with normative or morally evaluative language when you did not include any among your premises.

1

iiioiia t1_iz1vf78 wrote

> If you don't assume any value judgment or normative statements, you cannot conclude with any value judgments or normative statements; any argument that did the latter without doing the former would necessarily be deductively invalid.

Right, don't do that either. Pure descriptive, zero prescriptive.

> And it has nothing to do with the manner of your presentation, "steel-mannered" or otherwise you still run afoul of Hume's law if you attempt to conclude an argument with normative or morally evaluative language if you did not include any among your premises.

And if you aren't making an argument?

1

Ok_Meat_8322 t1_iz1x6jn wrote

>Right, don't do that either. Pure descriptive, zero prescriptive.

But then you can't conclude with a moral judgment. Presumably solving moral dilemmas involves being able to make correct moral judgments wrt the dilemma in question, right?

>And if you aren't making an argument?

But you're needing to make an inference, yes? In order to come to a conclusion as to the correct answer or correct course of action wrt a given moral problem or dilemma? You definitely don't need to be making an explicit or verbal argument, but if you're engaging in a line of reasoning or making an inference to a conclusion, then the same applies: you need to assume a particular moral framework (or at least certain moral/normative premises).

1

iiioiia t1_iz1ynlq wrote

> But then you can't conclude with a moral judgment.

Correct.

> Presumably solving moral dilemmas involves being able to make correct moral judgments wrt the dilemma in question, right?

Perhaps certain conditions can be set and then things will resolve on their own. Each agent in the system has onboard cognition, and agents are affected by their environment, their knowledge/belief, and the knowledge/belief of other agents in the system. Normalizing beliefs (ideally: a net decrease in delusion, but perhaps not even necessarily) could change things for the better (or the worse, to be fair).

> But you're needing to make an inference, yes? In order to come to a conclusion as to the correct answer or correct course of action wrt a given moral problem or dilemma?

I'm thinking speculatively, kind of like "I wonder if we did X within this system, what might happen?" Not a risk free undertaking, but that rarely stops humans.

> You definitely don't need to be making an explicit or verbal argument, but if you're engaging in a line of reasoning or making an inference to a conclusion, then the same old and you need to assume a particular moral framework (or at least certain moral/normative premises).

To the degree that this is in fact necessary, that would simply be part of the description as I see it - if something is necessarily true, simply disclose it.

1

Ok_Meat_8322 t1_iz25jjm wrote

>Correct.

But then you can't resolve a moral problem or dilemma, the topic of this thread. When it comes to reasoning or logic, you can't get out more than you put in: if you want to come to a conclusion involving a moral judgment or moral obligation/prohibition, you need premises laying down the necessary moral presuppositions for the conclusion to follow. And mathematics or logic is of no avail here.

>Perhaps certain conditions can be set and then things will resolve on their own. Each agent in the system has onboard cognition, and agents are affected by their environment, their knowledge/belief, and the knowledge/belief of other agents in the system. Normalizing beliefs (ideally: a net decrease in delusion, but perhaps not even necessarily) could change things for the better (or the worse, to be fair).

Sure, and none of that is objectionable; but the OP is talking about using mathematics or logic to solve moral problems, and my point is simply that the point where mathematics or logic are useful is after the hard part has already been done, i.e. determining what sort of moral framework or what sorts of moral presuppositions are right or correct.

Like, if you're a utilitarian you can use simple arithmetic in many situations to decide what course of action maximizes happiness and minimizes unhappiness, but the tricky part is determining whether one should be a utilitarian or not in the first place.

1

iiioiia t1_iz276fw wrote

> But then you can't resolve a moral problem or dilemma, the topic of this thread.

"Perhaps certain conditions can be set and then things will resolve on their own."

Tangential topics often occur in threads, I thought this approach might be interesting to some.

> When it comes to reasoning or logic, you can't get out more than you put in

"Each agent in the system has onboard cognition"

> if you want to come to a conclusion involving a moral judgment or moral obligation/prohibition, you need premises laying down the necessary moral presuppositions for the conclusion to follow.

"agents are affected by their environment, their knowledge/belief, and the knowledge/belief of other agents in the system. Normalizing beliefs (ideally: a net decrease in delusion, but perhaps not even necessarily) could change things for the better (or the worse, to be fair)."

> And mathematics or logic is of no avail here.

Perhaps it is, perhaps it is not.

> Sure, and none of that is objectionable; but the OP is talking about using mathematics or logic to solve moral problems, and my point is simply that the point where mathematics or logic are useful is after the hard part has already been done, i.e. determining what sort of moral framework or what sorts of moral presuppositions are right or correct.

In the virtual model within your mind that you are examining, perhaps - but I have a virtual model that is different from yours (this is one non-trivial but often overlooked detail that I would be sure to mention front and centre in all discussions).

> Like, if you're a utilitarian you can use simple arithmetic in many situations to decide what course of action maximizes happiness and minimizes unhappiness....

To estimate what course of action...

> ...but the tricky part is determining whether one should be a utilitarian or not in the first place.

There are many tricky parts - some known, some not, some "known" incorrectly, etc.

I think it may be useful for humans to be a bit more experimental in our approaches, it seems to me that we are in a bit of a rut in many places.

1

Ok_Meat_8322 t1_iz28e8l wrote

>Perhaps it is, perhaps it is not

No, it's definitely not. Neither mathematics nor logic can tell us the answer to any substantive question of fact or value. It can never tell you whether you should be a consequentialist or not. It can't tell you whether you should steal, murder, or even swipe the last piece of pizza. Logic and mathematics can tell you all about logical or mathematical questions... but nothing substantive about ethics or moral philosophy. Logic and mathematics only become relevant once you've got that part figured out.

>In the virtual model within your mind that you are examining - I have a virtual model that is different than yours

If it differs wrt the fact that mathematics/logic are indifferent to substantive questions of fact or value, then I'm afraid to say that your model is incorrect on this point.

>There are many tricky parts - some known, some not, some "known" incorrectly, etc.

No doubt, but once again that doesn't contradict what I said: I'm saying that the question of how mathematics/logic is useful is a less tricky matter than the question of which moral philosophy, ethical framework, or particular moral values/judgments are right or correct or should be adopted in the first place. Once you have answered the latter question, the answer to the former follows fairly easily (in most instances, at any rate).

1

iiioiia t1_iz2a29u wrote

> If it differs wrt the fact that mathematics/logic are indifferent to substantive questions of fact or value, then I'm afraid to say that your model is incorrect on this point.

I'm thinking along these lines: "Perhaps certain conditions can be set and then things will resolve on their own."

You seem to be appealing to flawless mathematical evaluation, whereas I am referring to the behavior of the illogical walking biological neural networks we refer to as humans.

> No doubt, but once again that doesn't contradict what I said

I believe it does to some degree because you are making statements of fact, but you may not be able to care if your facts are actually correct. In a sense, this is the very exploit that my theory depends upon.

1

cutelyaware t1_iz2a5je wrote

No, this is the sort of situation that prompted the Jonathan Swift quote:

> “You cannot reason a person out of a position he did not reason himself into in the first place.”

1

Ok_Meat_8322 t1_iz2ibc5 wrote

>I'm thinking along these lines: "Perhaps certain conditions can be set and then things will resolve on their own."

I'm having trouble discerning what exactly you mean by this, and how it relates to what I'm saying.

>You seem to be appealing to flawless mathematical evaluation, whereas I am referring to the behavior of the illogical walking biological neural networks we refer to as humans.

What does "flawless" mean here, exactly - does it just mean that you've done the math correctly? But yes, I'm certainly assuming that one is doing the math correctly - even if one's math is correct, it still can only enter into the picture after we've settled the question of which moral philosophy, ethical framework, or specific values/judgments are right or correct.

>I believe it does to some degree because you are making statements of fact, but you may not be able to care if your facts are actually correct. In a sense, this is the very exploit that my theory depends upon.

Again with these vague phrases. I said that "the tricky question" was what moral philosophy, ethical system, or moral values/judgments one should adopt, not how math or logic can help resolve moral dilemmas... but, as you note, there is more than one "tricky question", which I'm happy to concede, and so what I really meant (and what I more properly should have said) was that the question of the correct/right ethical framework or moral philosophy is trickier than the question of how math/logic can help us solve moral problems.

But keeping that in mind, there was no contradiction between your reply and my original assertion. And yes, for the record, I most definitely do care about which facts are correct; I'm having trouble thinking of anything I care about more than this (at least when it comes to intellectual matters), and I'm drawing a blank.

1

iiioiia t1_iz2ks7i wrote

If one is making an assertion about the truth value of a proposition based on criticism of the messenger, that is one thing - but I am making this claim broadly (applicable to all people).

1

iiioiia t1_iz2lzqx wrote

> I'm having trouble discerning what exactly you mean by this, and how it relates to what I'm saying.

A bit like this is what I have in mind:

https://i.redd.it/5lkp13ljw34a1.png

https://www.reddit.com/r/PoliticalCompassMemes/comments/zdbmoy/90_of_the_people_are_center_dont_let_the_radicals/

My theory is that humans disagree with each other less than it seems, but there is no adequately powerful mechanism in existence (or well enough known) to distribute this knowledge (assuming I'm not wrong).

> What does "flawless" mean here exactly- does it just mean that you've done the math correctly? But yes, I'm certainly assuming that one is doing the math correctly- even if ones math is correct, it still can only enter into the picture after we've settled the question of what moral philosophy, ethical framework, or specific values/judgments are right or correct.

What I'm trying to say is that yes, you are correct when it comes to reconciling mathematical formulas themselves, whereas I am thinking that showing people some "math" on top of some ontology (of various ideologies, situations, etc.) may persuade them to "lighten up" a bit. Here, the math doesn't have to be correct, it only has to be persuasive.

> Again with these vague phrases. I said that "the tricky question" was what moral philosophy, ethical system, or moral values/judgments one should adopt, not how math or logic can help resolve moral dilemmas... but, as you note, there are more than one "tricky question", which I'm happy to concede, and so what I really meant (and what I more properly should have said) was that the question of the correct/right ethical framework or moral philosophy is trickier than the question of how math/logic can help us solve moral problems.

I think we're in agreement, except for this part: "the correct/right ethical framework or moral philosophy" - I do not believe that absolute correctness is necessarily necessary for a substantial (say, 50%++) increase in harmony (although some things would have to be correct, presumably).

> And yes, for the record, I most definitely do care about which facts are correct...

Most everyone believes that, but I've had more than a few conversations that strongly suggest otherwise - I'd be surprised if you and I haven't had a disagreement or two before! As Dave Chappelle says: consciousness is a hell of a drug.

1

Ok_Meat_8322 t1_iz2qxzo wrote

>My theory is that humans disagree with each other less than it seems, but there is no adequately powerful mechanism in existence (or well enough known) to distribute this knowledge (assuming I'm not wrong).

But we're not necessarily talking about resolving moral disputes between different people, but also of individual people having difficulty determining the correct moral course of action (i.e. "resolving a moral dilemma"), and this meme has nothing to say about the latter case (and that's assuming it says anything substantive or useful RE the former case, which I'm not sure it does).

The point is, once again, that mathematics or logic only enter into the question after one has decided or settled which ethical framework, moral philosophy, or particular moral values/judgments are right and correct, irrespective of how common or popular those ethical frameworks or moral values/judgments may be, or the extent to which people disagree about them.

>I think we're in agreement, except for this part: "the correct/right ethical framework or moral philosophy" - I do not believe that is necessarily necessary for a substantial (say, 50%++) increase in harmony.

Neither do I; determining or even demonstrating what is the right or correct thing is quite a separate matter from convincing others that it is the right or correct thing. It very well may be (and in fact almost certainly is) the case that even if we could establish which ethical framework or moral values/judgments are right or correct (something I don't believe to be possible), many if not most people will persist in sticking with ethical frameworks or particular moral values/judgments other than the right or correct one. And it may well not "increase harmony"; it could even lead to the opposite - sometimes the truth is bad, depressing, or even outright harmful, after all.

But these psychological and sociological questions are nevertheless separate questions from the meta-ethical question raised by the OP, i.e. whether and how maths or logic can help resolve moral problems or dilemmas.

2

iiioiia t1_iz2te3r wrote

> But we're not necessarily talking about resolving moral disputes between different people, but also of individual people having difficulty determining the correct moral course of action (i.e. "resolving a moral dilemma"), and this meme has nothing to say about the latter case (and that's assuming it says anything substantive or useful RE the former case, which I'm not sure it does).

All decisions are made within an environment, and I reckon most of those decisions are affected at least to some degree by causality that exists (but cannot be seen accurately, to put it mildly) in that environment....so any claims about "can or cannot" are speculative imho.

> The point is, once again, that mathematics or logic only enter into the question after one has decided or settled which ethical framework, moral philosophy, or particular moral values/judgments are right and correct, irrespective of how common or popular those ethical frameworks or moral values/judgments may be, or the extent to which people disagree about them.

I think we are considering the situation very differently: I am proposing that if a highly detailed descriptive model of things was available to people, perhaps with some speculative "math" in it, this may be adequate enough to produce substantial positive change. So no doubt, my approach is other than the initial proposal here, I do not deny it (or in other words: you are correct in that regard).

> ...many if not most people will persist in sticking with ethical frameworks or particular moral values/judgments other than the right or correct one.

To me, this is the main point of contention: would/might my alternate proposal work?

> And it may well not "increase harmony", it could even lead to the opposite; sometimes the truth is bad, depressing, or even outright harmful, after all.

Agree....it may work, it may backfire (depending on how one does it). Also: I am not necessarily opposed to ~stretching the truth (after all, everyone does it).

> But these psychological and sociological questions are nevertheless separate questions from the meta-ethical question raised by the OP, i.e. whether and how maths or logic can help resolve moral problems or dilemmas.

Agree, mostly (I can use some math in my approach).

1

Ok_Meat_8322 t1_iz2v3nr wrote

>I think we are considering the situation very differently: I am proposing that if a highly detailed descriptive model of things was available to people, perhaps with some speculative "math" in it, this may be adequate enough to produce substantial positive change.

I don't disagree with this, what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma after one has presupposed or established a particular ethical framework, moral philosophy, and/or particular moral norms and judgments. Descriptive models, non-normative facts, and math/logic alone can never solve a moral problem or dilemma, in order to arrive at a moral judgment or conclusion one must presuppose an ethical framework or particular norms/value-judgments.

>To me, this is the main point of contention

It may well be the angle that interests you, but it's not the point of contention between us, because I'm not taking any position on that question.

2

iiioiia t1_iz3242b wrote

> I don't disagree with this, what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma after one has presupposed or established a particular ethical framework, moral philosophy, and/or particular moral norms and judgments. Descriptive models, non-normative facts, and math/logic alone can never solve a moral problem or dilemma, in order to arrive at a moral judgment or conclusion one must presuppose an ethical framework or particular norms/value-judgments.

I suspect you have a particular implementation in mind, and in that implementation what you say is indeed correct.

1

wowie6543 t1_iz44wjy wrote

Heuristics don't work without logic and probability. Heuristics are a subcategory of them and mostly use probabilities!

You don't have all the info, but you still use logic and probability to come to a solution - like trial and error, statistics and so on. All of those methods can't work without logic and probability.
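
A tiny illustration of what I mean by a heuristic running on probability - a toy trial-and-error rule (the options and success rates below are invented):

```python
import random

# Toy trial-and-error heuristic: try each option repeatedly, keep rough
# success frequencies, then favour whatever has worked most often so far.
# No full information, but the rule is still built on counting/probability.

random.seed(0)
true_success_rate = {"option A": 0.3, "option B": 0.7}  # hidden from the agent
counts = {name: [0, 0] for name in true_success_rate}   # [successes, tries]

for _ in range(100):
    name = random.choice(list(true_success_rate))        # explore at random
    success = random.random() < true_success_rate[name]
    counts[name][0] += int(success)
    counts[name][1] += 1

estimates = {n: s / t for n, (s, t) in counts.items() if t > 0}
best = max(estimates, key=estimates.get)
print(estimates)
print(f"heuristic's pick: {best}")
```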


My sentence about Kant and his imperatives is not very precise, so I'm not sure what exactly you are asking to be true here. A fact is part of the rationalistic/hypothetical system, i.e. causal and determined analytics - methods that work to create truth and function. They are evidence-based but also use probability. A heuristic is also evidence-based in the end, but it's only a probability of where you expect the evidence to be.

So it's like physics and math, which can extrapolate certain systems/facts that we can't measure yet. Only after some years are we able to measure them and make them evidence-based in the end.

And the categorical imperative is also a method that works for morals. So both are methods/goals (rational systems) which are not about belief, but about creating working facts and working morals - a workable and quantifiable system for action.

So further, you can understand that morality, like all other systems, is a system of goals and methods, and you can analyze goals and methods with the hypothetical/rational system (including logic and probability). And that's also evidence-based, but also heuristic!


If people disagree with other people's logic and conclusions, then there must be a reason for it. One reason could be that they don't have all the facts. Another reason could be that they don't have the same goals/methods (this is very important). A third reason could be that they don't manage to come to the right conclusion, even if they have the facts and the same goals. And a fourth reason could be all of the first three together.

So for example, you have two Jews analyzing a moral problem, but they come to different conclusions. So where is the problem? They don't have the same morals, they don't have the same facts, or they don't understand them in the same way - or everything together.

Of course it's a big problem if you have two different systems but you think they're the same. This is the reason for many wars, misunderstandings and social separations - and not just in morals.

2

iiioiia t1_iz4t3ml wrote

> heuristic is not working without logic and probability. heuristic is an undercategory of it and is mostly using probabilitys!

Citation please.

Also note I said: "...there is substantial evidence that heuristics do not run on (actual, flawless) logic."

> u have not all info, but you still use logic and probability to come to a solution. Like trail and error, statistics and so on. all of those methods cant work without logic and probability.

You can also flip a coin to come to a solution.

>>> Kants Imperatives gives you everything you need. The hypothetic gives you the logic and the categoric gives you the clear goal you need to attend.

>> Is this necessarily an evidence-based True Fact, or might it be merely a heuristic powered belief?

> My sentence of Kant and his Imperatives is not very precise. So im not sure what exactly you ask to be true here.

Is it objectively true that it gives you everything that you need?

> A fact is part of the rationalistic/hypothetical system, i.e. causal and determined analytics - methods that work to create truth and function. They are evidence-based but also use probability. A heuristic is also evidence-based in the end, but it's only a probability of where you expect the evidence to be.

If probabilistic, then not guaranteed to give a correct answer.

> And the categoric imperative is also a method that works for moral.

A sledge hammer "works" for opening a locked door also, but how optimal is it?

> So further, u can understand that moral, like all other systems, is a system of goals and methods and you can analyze goals and methods with the hypothetical/rational system (including logic and probability). And thats also evidence-based but also heuristic!

Whether one gets remotely correct answers is another matter.

> If people disagree with other peoples logics and conclusions, then there must be a reason for it. One reason could be, they dont have all the facts. Another reason could be, they dont have the same goals/methods (this is very important). And a third reason could be, they dont manage to come to the right conclusion, even if they have the facts and the same goals. And a forth reason could be, all first three together.

Another potential issue: there is no correct answer and the person isn't smart enough to realize it, due to the shit education systems we have going on here on planet Earth.

> So for example, you have two Jews analyzing a moral problem, but they come to different conclusions. So where is the problem? They don't have the same morals, they don't have the same facts, or they don't understand them in the same way - or everything together.

One problem: people are not taught how to recognize when their thinking is unsound.

> of course its a big problem if you have two different systems, but you think its the same. this is the reason for many wars and many misunderstandings and social separations. and not just in morals.

Agree on this!

1

iiioiia t1_iz9mvo6 wrote

"I don't disagree with this, what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma ...."

What would "applied" consist of?

1

Ok_Meat_8322 t1_izbtljz wrote

The example I used earlier was a utilitarian, who can use basic arithmetic to resolve moral dilemmas (such as, for instance, the trolley problem).

But this only works because the utilitarian has already adopted a particular ethical framework. Math can't tell you what values or ethical framework you should adopt, but once you have adopted them, math and logic may well be used to resolve moral issues.
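
To make that concrete, here is a toy sketch of the trolley case under an assumed "minimize deaths" utilitarianism (the framework itself is the presupposition doing all the work):

```python
# Trolley problem under a crude "minimize deaths" utilitarianism.
# The framework (count lives, pick the minimum) is presupposed; the
# arithmetic itself is trivial once that choice has been made.

outcomes = {
    "do nothing":     {"deaths": 5},
    "pull the lever": {"deaths": 1},
}

choice = min(outcomes, key=lambda name: outcomes[name]["deaths"])
print(f"utilitarian choice: {choice}")
# A deontologist who presupposes "never actively kill" would reject this
# comparison outright - the math can't adjudicate between the two frameworks.
```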

1

iiioiia t1_izc1dt3 wrote

I don't disagree, but this seems a bit flawed - you've provided one example of a scenario where someone has done it, but this in no way proves that it must be done this way. In an agnostic framework, representations of various models could have math attached to them (whether it is valid or makes any fucking sense is a secondary matter) and that should satisfy an exception to your rule, I think?

1

Ok_Meat_8322 t1_j0naes5 wrote

>I don't disagree, but this seems a bit flawed - you've provided one example of a scenario where someone has done it, but this in no way proves that it must be done this way.

I don't think it must be done; I don't think logic or mathematics is going to be relevant to most forms of moral reasoning. But consequentialism is the most obvious case where it would work, since consequentialism often involves quantifying pleasure and pain and so would be a natural fit.

But if what you mean is that we could sometimes use logic or mathematics to answer moral questions without first presupposing a set of moral values or an ethical framework, I think it is close to self-evident that this is impossible: when it comes to reasoning or argument, you can't get out more than you put in, and so if you want to reach a normative conclusion, you need normative premises else your reasoning would necessarily be (logically) invalid.

1

iiioiia t1_j0ng2rm wrote

Oh, I'm not claiming that necessarily correct answers can be reached ("whether it is valid or makes any fucking sense is a secondary matter"), I don't think any framework can provide that for this sort of problem space.

1

Ok_Meat_8322 t1_j0nn0qc wrote

I'm skeptical about whether moral judgments are even truth-apt at all, but the strength of a line of reasoning or argument is equal to that of its weakest link, so your confidence in your conclusion - assuming your inference is logically valid - is going to boil down to your confidence in your (normative) premises. Which will obviously vary from person to person, and subjective confidence is no guarantor of objective certainty in any case.

So I'm fine with the idea that logic or mathematics could help solve moral dilemmas or problems, in at least some instances (e.g. utilitarian calculations/quantifications of pleasure/happiness vs pain/suffering) but it seems to me that some basic moral values or an ethical framework is a necessary prerequisite... which is usually the tricky part, so I'm somewhat dubious of the overall utility of such a strategy (it seems like it only helps solve what is already the easiest part of the problem).
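
A rough numerical illustration of that "weakest link" point (the confidence values are invented, and treating premise confidences as independent probabilities is itself an assumption):

```python
# Confidence in a (logically valid) argument's conclusion, illustrated two ways.
# Numbers are invented; independence of the premises is assumed.

premise_confidence = {
    "descriptive premise (the facts)":     0.95,
    "normative premise (the moral claim)": 0.60,
}

weakest_link = min(premise_confidence.values())
all_premises_hold = 1.0
for c in premise_confidence.values():
    all_premises_hold *= c  # probability that every premise is true

print(f"weakest-link bound:        {weakest_link:.2f}")
print(f"all premises hold (indep): {all_premises_hold:.2f}")
# Either way, the normative premise caps how much the logic can deliver.
```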

1

iiioiia t1_j0nufl7 wrote

> I'm skeptical about whether moral judgments are even truth-apt at all, but the strength of a line of reasoning or argument is equal to that of its weakest link....

Mostly agree. As I see it, the problem isn't so much that answers to moral questions are hard to discern, but that, with few exceptions I can think of (literal murder being one), they do not have a correct answer at all.

> ...so your confidence in your conclusion- assuming your inference is logically valid- is going to boil down to your confidence in your (normative) premises. Which will obviously vary from person to person, and subjective confidence is no guarantor of objective certainty in any case.

Right - so put error correction into the system, so that when participants' minds wander into fantasy, it provides them with gentle course correction back to reality, which is filled with non-visible (for now at least) mystery.

> So I'm fine with the idea that logic or mathematics could help solve moral dilemmas or problems, in at least some instances (e.g. utilitarian calculations/quantifications of pleasure/happiness vs pain/suffering) but it seems to me that some basic moral values or an ethical framework is a necessary prerequisite... which is usually the tricky part, so I'm somewhat dubious of the overall utility of such a strategy (it seems like it only helps solve what is already the easiest part of the problem).

"Solving" things can only be done in deterministic problem spaces, like physics. Society is metaphysical, and non-deterministic. It appears to be deterministic, but that is an illusion. Just as the average human 200 years ago was ~dumb by our standards (as a consequence of education and progress) and little aware of it, so too are we. This could be realized, but like many things humanity has accomplished, first you have to actually try to accomplish it.

1

Ok_Meat_8322 t1_j0ny94r wrote

>"Solving" things can only be done in deterministic problem spaces, like physics

I think its more a matter of "solving" things in one domain looking quite differently than in another domain. And solving a moral dilemma doesn't look at all like solving a problem in physics. But that doesn't mean it doesn't happen; oftentimes "solving" a moral problem or dilemma means deciding on a course of action. And we certainly do that all the time.

1

iiioiia t1_j0o9mf3 wrote

> And solving a moral dilemma doesn't look at all like solving a problem in physics.

Agree, but listening to a lot of people talk with supreme confidence about what "is" the "right" thing to do, it seems like this idea is not very broadly distributed.

> oftentimes "solving" a moral problem or dilemma means deciding on a course of action. And we certainly do that all the time

Right, but the chosen course doesn't have to be right/correct, it only has to be adequate for the maximum number of people - something that I don't see The Man putting a lot of effort into discerning. If no one ever checks in with The People, should we be all that surprised when they are mad and we don't know why? (Though not to worry: meme-like "explanatory" "facts" can be imagined into existence and mass-broadcast into the minds of the population in days, if not faster.)

1

tmpxyz t1_j2d280u wrote

I remember there was a case where a car company (Chevron?) had a flawed car model; the company decided not to issue a recall because, by their calculation, the total compensation for accidents would be cheaper than fixing all the cars.

So, yeah, some people do make such calculations. But the majority of the masses don't do that; the moral judgements of the masses are usually emotion-driven or event-driven or pattern-matching, or just blindly follow KOLs they like. The majority probably wouldn't do such a calculation until they are in a really hard position, and they would probably take decisions that favor their own interests in those situations.
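
The kind of calculation being described would look something like this (every figure below is invented; it sketches the reasoning, not the real case):

```python
# Toy version of the recall cost-benefit calculation described above.
# Every figure is invented for illustration.

cars_on_road        = 10_000_000
recall_cost_per_car = 15.0          # dollars to fix one car
expected_incidents  = 200           # projected accidents if no recall
payout_per_incident = 250_000.0     # settlement per accident

recall_cost = cars_on_road * recall_cost_per_car
payout_cost = expected_incidents * payout_per_incident

cheaper = "skip the recall" if payout_cost < recall_cost else "do the recall"
print(f"recall:  ${recall_cost:,.0f}")
print(f"payouts: ${payout_cost:,.0f}")
print(f"cheaper option: {cheaper}")
# Which is exactly the point of the thread: the arithmetic is easy, and it is
# the choice to treat lives as line items that carries the moral weight.
```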

1