Comments

contractualist OP t1_j91ix51 wrote

Hello all, I'm looking for feedback on the definition of morality that I defend in the article. Any questions, comments, or criticisms would be highly appreciated.

Summary: Morality exists as "should" statements resulting from the values of freedom and reason. We can assess the truth of moral claims by determining whether they properly derive from these moral values. Moral principles are therefore those principles that free agents cannot reasonably reject based on public reasons. Under this theory of morality, there are no true moral dilemmas. If a principle can be reasonably rejected by a free party, then it is not a moral principle. Yet if it cannot be, then it is morally binding on agents that value freedom and reason.

6

Von_Kessel t1_j9212sy wrote

Probably not what you want to hear but there are a lot of spooks in your definitions. Freedom and reason are both spooks that I would aver don’t have good definitions in principle and thus cannot form a basis for a derived morality.

11

NoobFade t1_j9299ow wrote

Reads to me like some kind of Kantian constructivism. You might enjoy reading some Korsgaard, who I think articulates a different slant on how morality derives from the nature of rational agents.

Personally, I'm skeptical of these varieties of meta-ethics which rely on assumptions about the nature of an abstract rational actor. I think the constitutive nature of the rational actor is where the underlying principles really derive from, because you make all kinds of assumptions about what they want (e.g. not being used as a means to an end) and who is accepted as a rational agent (e.g. animals, slaves).

17

Von_Kessel t1_j929gja wrote

Helpful for sure, but if you have read some Stirner you know what I mean. Fundamentally, freedom as a concept is something that's been endowed to you to mean something by how others constructed and defined it (or in contradistinction to it). That does not mean it's a salient term to abide by in a superconstruct called morality. The corollary cannot be supposed if the prior supposition is nonsense.

3

Daotar t1_j92bgpc wrote

This strikes me as too Kantian and idealized. Morality is a biological adaptation of our species meant to foster cooperation. Moral claims take the form of "should statements" simply because they are claims that you endorse and recommend others endorse as well. But the notion about reasonable rejection being used to distinguish what are the "true" moral principles seems problematic, as we don't know what it means to mount a "reasonable" objection. My assumption would be to take a Rawlsian line and say that reasonableness characterizes the attitudes of we modern day liberal democrats, but then we're starting to move away from the sort of objectivist account I think you're aiming for.

2

contractualist OP t1_j92ck17 wrote

Thanks for the review. The biological adaptation relates to descriptive morality, whereas I focus on normative morality.

If the problem is with discovering non-reasonably rejectable reasons, then it's only a problem of administrability. This is fine, and not a problem with the philosophy in principle. However, what else would morality be, the code of conduct of our treatment of others, if it could not be reasonably accepted by others? I'll discuss this in a later piece.

0

contractualist OP t1_j92d657 wrote

Yes, meta-ethical constructivism. I've read some of her work. For me, it has been hit or miss, and her focus on identity steers too much into subjectivism.

Here, I try not to make any assumptions, not even about rationality; only that by valuing freedom and reason can we get moral principles. That's what is good about valuing freedom: you don't have to care about what people want, only recognize that they have wants.

1

Daotar t1_j92fb6f wrote

> Thanks for the review. The biological adaptation relates to descriptive morality, whereas I focus on normative morality.

That's far too quick and dismissive. I too am talking about the normative notion, there's just nothing more to that notion than the biological fact of it, nothing beyond that contingency that gives it any more normative oomph (but nor should we care). But such an account is of course still normative because it describes morality as being action guiding. This is the standard sort of move that Darwinian philosophers like Mackie, Rorty, Ruse, Street, or Joyce will make. It's about naturalizing morality, not about presenting a "merely descriptive" account as opposed to a normative one. Even idealist philosophers like Rawls and Kant are simply giving a "descriptive account" of our intuitions about morality and justice in the same way I am, but this doesn't make their account any less normative than my own or that of other evolutionary ethicists.

> If the problem is with discovering non-reasonably rejectable reasons, then it's only a problem of administrability.

It's not, it's about defining what it means to be "reasonable". Like, sure, there is the further problem of actually figuring out what reasonable people would agree to, but that's largely derivative of your definition of what constitutes a reasonable person.

> However, what else would morality be, the code of conduct of our treatment of others, if it could not be reasonably accepted by others?

It could be a fact of the matter. It could be a collective delusion. It could be an optimal solution to a particular set of game theory problems. There are many things it could be beyond the Kantian notion you're endorsing.

0

contractualist OP t1_j92h4wd wrote

Kant certainly wasn't providing a descriptive account, whereas Rawls didn't make his views very clear. Evolution is useful for explaining our desires, but it doesn't justify why these desires should be respected or what we should do given these desires.

There are no "should" statements when examining morality through a purely evolutionary lens, and morality would be the same (the derivatives of the values of freedom and reason) even if we had evolved differently and developed different desires. Given a different evolutionary trajectory, our moral rules might be different, but the meta-ethics remains the same.

That being said, science is useful for discovering the moral principles of the social contract, but it doesn't play a role in the first principles discussion that I'm focusing on.

1

acfox13 t1_j92jmku wrote

"Should" falls into what's called "imperative thinking" - should, have to, must, ought to, etc. (What Dweck would call "fixed mindset") Then the question becomes should, according to whom? and based on which criteria and under which circumstances?

My personal criterion is: does the behavior create secure attachment or undermine secure attachment? (See attachment theory: "Becoming Attached: First Relationships and How They Shape Our Capacity to Love" by Robert Karen.)

I've found trustworthy, re-humanizing behaviors build secure attachment and untrustworthy, dehumanizing behaviors lead to disconnection and destroy secure attachment. These are the guidelines I use around trust:

The Trust Triangle - Authenticity, Empathy, Logic (what we say and how we say it)

The Anatomy of Trust - marble jar concept and BRAVING acronym

10 definitions of objectifying/dehumanizing behaviors - these erode trust

I try to choose behaviors that build trust and foster secure attachment. It's a strategy that seems to be paying dividends. My interpersonal relationships are much better, and I feel much better too, because I'm choosing behaviors that align with my values.

3

bumharmony t1_j92kjsm wrote

Equality does not somehow stem from rational agency as an observable and measurable feature. Even the first premise of ethics seems too difficult to justify.

It requires a sort of argument from tradition or ideal theory, so that we start from the assumption that people already accept at least a baseline equality of non-aggression and an equal right to decide about other rules.

That works as long as people agree on them.

But another way Rawls uses is to say that all knowledge is communal by definition. Science is valid only if the community agrees on the theory at hand. So ethics can be a comparable science if there is a viewpoint that moves from the a posteriori to apriorism in a way that fits the idea of inductive logic (although Rawls speaks, paradoxically, of an a posteriori a priori, which he explains away with ideal theory). And everyone who can do this has an equal vote on ethics, just as science has its criterion (although that can lead to the fallacy of expertise). So there are not many ethical theories, only people who have the virtue for ethics and those who don't.

1

Daotar t1_j92lyjh wrote

> There are no "should" statements when examining morality through a pure evolutionary lens

If you really think that, I'd suggest picking up either Mackie's Ethics, Kitcher's The Ethical Project, or Joyce's The Evolution of Morality.

1

rejectednocomments t1_j92mwiv wrote

I think you can skip over a lot of the introductory stuff and get to the point. It covers a lot of territory, but none of it in enough depth to be useful.

As to the main proposal, I am attracted to the idea that morality is importantly related to what we can rationally agree to, so I'm kind of an audience for this kind of proposal. When you first offer your account of morality, I thought you were underestimating the amount of moral disagreement there is, and that demanding actual agreement about moral principles is not a viable standard. But later it seemed like you thought morality only concerns what there is consensus about, which is why you say the trolley problem is not a moral dilemma at all — there's no agreement here, and morality is based on rational agreement. I think this just puts too much outside the scope of morality which we would intuitively include within it.

Anyways, at one point you seem to say morality is based on hypothetical imperatives. You might be interested in this paper by Philippa Foot.

2

KingJeff314 t1_j92mylj wrote

Your article hinges on the idea that humans share values and therefore can come to a normative consensus. It is much more complex than that. Humans have many different values, often conflicting with each other, and each person weighs values and who the values apply to differently.

Some people value security more than freedom, for instance. Should a government do more invasive searches under the threat of a terrorist attack? Either they do nothing and potentially allow a terrorist attack, or they act to stop it and violate citizens' freedoms in the process. This is a Trolley Problem. Your article suggests "No answer would be justifiable to all involved parties since they would all have a reasonable claim to not being [killed/invasively searched]". Your Trolley Problem article also states, "Like so many other life dilemmas, pure reason cannot provide a definite answer to the trolley problem. Only the free self can make a choice whenever there are sufficient reasons for either side of a decision." Basically, when we get to moral problems with any degree of complexity, your model of pure reason is insufficient.

Additionally, your reasoning is insufficient that "valuing freedom necessarily implies valuing the freedom of others". To show the gap in logic, let me present this statement in propositional logic:

Definitions: Freedom(X,Y) means that X values Y's freedom, Free(X) means that X is a free agent, and H is the set of humans. We can assume ∀X ∈ H: Free(X) ∧ Freedom(X,X). ("∀" means "for all".)

So then your claim is ∀X,Y ∈ H: Freedom(X,X) ⇒ Freedom(X,Y). Your justification in the linked article is "If others are regarded as having similar freedom to his own—by having the capacity to freely make decisions, including the decision whether or not to be moral—then he cannot deny the value of their own freedom". Propositionally, this is ∀X,Y ∈ H: (Free(X) ∧ Free(Y) ∧ Freedom(X,X)) ⇒ Freedom(X,Y). This does not follow; it assumes a symmetry that does not necessarily exist.
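The gap can be checked mechanically with a tiny countermodel (a sketch; the two-agent setup and dictionary encoding are my own illustration, not from the article): let every agent be free and value their own freedom, while no agent values the other's. The premises then all hold, yet the conclusion fails, so the inference is not valid.

```python
from itertools import product

# Countermodel: two agents, each free and self-valuing, neither other-valuing.
H = ["X", "Y"]
free = {"X": True, "Y": True}                    # Free(a) for all a
freedom = {("X", "X"): True, ("Y", "Y"): True,   # Freedom(a,a) holds...
           ("X", "Y"): False, ("Y", "X"): False} # ...but Freedom(a,b) does not

# Premises: every human is free and values their own freedom.
premises = all(free[a] and freedom[(a, a)] for a in H)
# Claimed conclusion: every human values every human's freedom.
conclusion = all(freedom[(a, b)] for a, b in product(H, H))

print(premises, conclusion)  # → True False
```

True premises with a false conclusion in some model is exactly what "does not follow" means: the symmetry has to be added as a further assumption.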

Overall, I caution you against playing loosely with assumptions about values. Can we even be sure that any two humans share the exact same set of values?

2

contractualist OP t1_j92r8pr wrote

Thanks for the review. The article doesn't require that any values be shared; it only states which values lead to morality. What percentage of people share these values (freedom and reason) isn't within the scope of my writing. And values outside of these two aren't relevant for meta-ethics.

As to the scenario you laid out, the issue relates to ethics rather than the meta-ethics the article is about, but I'll still address it. The values of freedom and security would have to be justifiable to someone else. We wouldn't let someone's irrational paranoia guide national security policy, and any reasons provided when making policy (and in the social contract) would need to be public and comprehensible to all who are affected.

And any national security policy would have to be guided by the reason-based moral principles of the social contract. If it goes outside of those principles and acts arbitrarily, then it loses its morality and hence its political authority (imagine a requirement that all redheaded people be subject to a special reporting requirement). Only reason has the authority to decide the rights vs. security question, and there will be a range of acceptable policies that respect the boundaries of the social contract. And political communities can give different priorities to the social contract's moral principles based on the national facts and circumstances (a community must still value those principles, but it can apply and prioritize them differently based on reason). See here for a discussion on how the social contract can specify rights.

And the error in the last section was treating X's freedom and Y's freedom separately. Freedom is an objective property that cannot reasonably be differentiated. It's not agent-relative; it is agency. There is no X's freedom or Y's freedom, there is only freedom that both X and Y happen to possess.

0

JunkoBig t1_j92twcj wrote

I think such abstract conceptions of morality should either be based on or connect to an anthropological theory of how and why morality changes. To simply claim that slavery and sexism were morally "simple" dilemmas to solve strikes me as ahistorical.

4

Daotar t1_j92vx8z wrote

> You can't get a "should" conclusion from "is" premises.

Not according to Mackie, Kitcher, and Joyce. The naturalistic fallacy is an extremely controversial position that has gone out of favor in recent decades due to critiques from people like Rawls, Rorty, and Mackie.

1

KingJeff314 t1_j92wme4 wrote

> And the error in the last section was treating X's freedom and Y's freedom separately. Freedom is an objective property that cannot reasonably be differentiated. It's not agent-relative; it is agency. There is no X's freedom or Y's freedom, there is only freedom that both X and Y happen to possess.

To make a statement like "you should not kidnap a person", you have to appeal to a value like "you value that person's freedom", not "you value freedom", which is nebulous and non-specific. Supposing that I were a psychopath who only cared about my own freedom (i.e., Freedom(Me, Me)), what rational grounds do you have to make me care about anyone else?

2

RandeKnight t1_j92xvzm wrote

A reasonable argument as far as it goes.

However, it still doesn't solve the problem of how to enforce that morality where it seems that other people aren't following the rules even when they signed up to them.

E.g., in the trolley problem, the logical choice for anyone who values their own freedom is to do nothing.

Why? Because to prosecute the person who does nothing, you'd have to jump several major hurdles.

a) The person is even aware of the problem. Being oblivious isn't a crime unless it's literally their job to be aware.

b) The person knows that the switch exists and how to use it. Trolley switches have a device that stops accidental activation.

c) The person would have been able to use the switch in the amount of time available, including shock time.

3

BirdicBirb505 t1_j93fdpc wrote

This was kinda just… bad across the board. Even entertaining the idea that there is no such thing as morality should’ve been a red flag. Simply because we haven’t made sense of it or are unwilling to judge others for having different foundations of morality, we shouldn’t fully consider it? That’s how I was reading it. In about 300 years people are going to look back at articles like this, and think we were silly not to consider morality at all because it’s difficult to figure out. Or because there are people that will disagree with specifics. Morality, very much is the objective of civilized humanity. If we want to move away from the beast, we have to move towards morality.

0

Rowan-Trees t1_j93s3wh wrote

This is very interesting, and similar to a project I am working out myself. I hope to give this a closer read soon, and a more thorough response.

In the meantime, are you familiar with Emmanuel Levinas? I'd be interested in hearing your response to him. He presents an ontological model of ethics similar to yours, but where freedom is supplanted by responsibility.

To Levinas, ethics comes implicitly written into the event of encountering the Other. The fact of my existence is itself an imposition on the Other: in so far as my existence affects the Other, I am responsible. The Other's existence stirs me to a moral accountability. This responsibility, in turn, becomes a meaning for my own existence: "ethics, rooted in responsibility, is the node of our subjectivity, tying us to reality." In other words, my being a subject in the world is a result of encountering the Other, who not only makes me responsible, but also makes me conscious of my own Self.

2

contractualist OP t1_j93xfbl wrote

Thank you! It’s hard to make sense of Levinas’s infinite responsibility and how it translates into duties to others, especially when our relationships with others go beyond public reasons. I’m satisfied with the analytic approach, but I’d be interested to hear your thoughts!

1

BernardJOrtcutt t1_j9404s7 wrote

Please keep in mind our first commenting rule:

> Read the Post Before You Reply

> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.

This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

1

Sandinista- t1_j95hqkd wrote

This was a great read! I have generally leaned towards moral relativism but this has certainly given me something to think about and was super digestible for someone like me who is entry level. I will definitely be checking out your other work and keeping up in future.

Thanks!

2

ScoutingForAdventure t1_j96ahok wrote

I would say that the human's ontology is not free, as it is biologically constrained, and so one would need an ontological system in which a person can become free of this constraint in order to have a moral system in which freedom is a predicate. Otherwise, there is no morality at all, only force.

Most importantly, your concept of public reason is a form of ontology by role or relationship, given by association with a certain public body, which completely obliterates the concept of freedom.

1

contractualist OP t1_j96brg6 wrote

I discuss what I mean by freedom here. Freedom is being able to act in accordance with higher-level principles, not being free from all biological and social forces. To the extent that these higher-level principles include reason and morality, the concept of freedom is coherent.

2

ScoutingForAdventure t1_j96hm7x wrote

So a person who lacks the ability to reason, such as youth and those with neurological and functional limitations at the highest cognitive level, would be unable to be a free person in your framework? The social force of public reason would constrain and bind them to a group morality based on its implementation of geniocracy?

Such a freedom has zero coherence. As others have mentioned, the disconnect between 1) what is socially prioritized as human needs and 2) individuals' own connection to those needs and values would make such freedom conditional and therefore non-binding.

1

contractualist OP t1_j96iv85 wrote

Being a party to the moral community doesn’t rely on reasoning ability, but the laws of the moral community would be reason-based; they would have to be justifiable to others. Membership in the community relies on consciousness and free will.

If you read the article I sent, I argue that assent to the social contract would be based on agreement to principles that are in accordance with higher-order values. Morality asks what principles of conduct free, reasonable people would accept. It doesn’t say morality is reserved for the reasonable.

I’m not sure what freedom you’re talking about but if you have a specific question I’m happy to address it.

1