contractualist OP t1_izsm516 wrote

Summary: freedom + reason = morality. The basis of normativity is inherently free individuals discovering reasonable justifications for restrictions on freedom. Asking "why should I be moral?” already presupposes (in the question itself) the values of freedom and reason, as well as reason’s priority over freedom.

Since the questioner values freedom but recognizes reason as an authority over freedom, the questioner must recognize and value the freedom of others, having no justification to do otherwise. The questioner has no reasonable basis to value only his own freedom, given that he possesses the same freedom as others. Any differentiation would therefore be arbitrary and would violate his own valuing of reason.

1

AFX626 t1_iztkjzb wrote

>Asking "why should I be moral?” already presupposes (in the question itself) the values of freedom and reason, as well as reason’s priority over freedom.

What about a person who values only their own freedom, and has no inclination to stack their faculty of reason against that of anyone else?

>the questioner must recognize and value the freedom of others, having no justification to do otherwise.

What if it doesn't occur to them that any justification is necessary?

I propose an alternative reason for people to behave in a way that approximates local custom, even if they have no natural inclination to think of themselves as equal members of society, with the "two-way street" that implies:

It makes life easier by removing sources of hindrance.

If I don't go around beating people over the head, then I won't get arrested for doing that. Maybe I really want to do that, but I want to be free even more.

9

LukeFromPhilly t1_izvcea2 wrote

The reasonable basis for valuing one's own freedom over others' is that the questioner is himself and other people are not. You generally are not making decisions about what to value from an external reference point; otherwise I'd be just as motivated to raise other people's kids as my own.

2

contractualist OP t1_izwk3ow wrote

If we both have possession X, and I value my X for itself, then I can’t say that your exact possession X isn’t valuable because I am me. It’s not a reason that can’t be reasonably rejected.

Children, meanwhile, are valued through an agent-relative relationship, unique between child and parent. But agency isn't agent-relative; it is agency itself. It's a possession which everyone has in equal capacity, and no justifiable difference exists (you can't say that one is more free than others).

1

LukeFromPhilly t1_izwlo0f wrote

>If we both have possession X, and I value my X for itself, then I can’t say that your exact possession X isn’t valuable because I am me. It’s not a reason that can’t be reasonably rejected.

Since the question is whether I should value you having freedom as much as I value me having freedom, the proper analogy would be the question of whether I should value you possessing X as much as I value me possessing X. In that case, again, the obvious reasonable reason for someone to prefer themselves having X more than someone else having X is that they are themselves and other people are other people. What's unreasonable about this?

>Children, meanwhile, are valued through an agent-relative relationship, unique between child and parent. But agency isn't agent-relative; it is agency itself. It's a possession which everyone has in equal capacity, and no justifiable difference exists (you can't say that one is more free than others).

I'll give you that freedom is not an entity whose value is agent-relative, so in that sense my example falls down here. However, as I've said above, the question is not whether my freedom is more valuable than someone else's; it's whether there is any reasonable justification for me to value myself having freedom more than I value someone else having freedom, and there the obvious reason is that I am me and they are them. In this sense all values are agent-relative. I don't value things from a third-person perspective.

2

contractualist OP t1_izwnmg2 wrote

Not valuing the same, but valuing at all. Only in the former question can you get into issues of degree. The latter is binary.

1

LukeFromPhilly t1_izwo9gr wrote

Ah ok. I think my argument still works if you substitute "valuing at all" for "valuing the same," but at least I understand you better now.

1

contractualist OP t1_izwoj1b wrote

Then I still wouldn’t say there is a justification for valuing someone’s freedom at 0, given the status of freedom as an agency-creating asset rather than something dependent on personal agency. So any claim that “X is valuable because it’s mine” isn’t justifiable, since X’s value doesn’t rely on that person’s personal agency.

1

LukeFromPhilly t1_izwq091 wrote

I think talking about X's value here is confusing. If I say that I value my neighbor's Tesla, the implication is that I want it for myself. If I say that I value my neighbor's freedom, the implication is that I want my neighbor to have freedom, which is actually contrary to the first example.

1

contractualist OP t1_izwscg1 wrote

Well, your neighbor in that case already has freedom. Now it’s just about recognizing and valuing that freedom. But I wouldn’t argue that people would necessarily want others to have freedom (say, non-conscious animals). All I argue is that freedom is equal in one dimension and, because it’s not agent-relative, must have a universal value in itself.

1

LukeFromPhilly t1_izxfaz7 wrote

Well in that case my critique of what you're saying is entirely based on me misunderstanding you.

However, if all you're saying is that we acknowledge freedom as a value regardless of whose freedom it is, how does that belief lead to any constraints on our own behavior? If we're acknowledging that I may have a reasonable reason not to want other people to have freedom, then it would seem my actions aren't necessarily constrained in any way, and therefore I don't have to be moral.

1

contractualist OP t1_izxsfa0 wrote

Yep, that’s the next step. Once the value of people’s freedom is recognized, they’ll act according to that value by obeying the terms of the social contract, the expression of individuals’ freedom.

1

LukeFromPhilly t1_izxweg4 wrote

But that would seem to imply that I want other people to have freedom which I thought we agreed doesn't follow.

1

subzero112001 t1_izw7vo9 wrote

“The questioner has no reasonable basis to value only his own freedom”

Of course they have a basis. Placing oneself above those around you is pretty much the rule for all living things. Self-preservation/selfishness over others is a very valid basis. It’s the most fundamental of bases.

2

contractualist OP t1_izwju4q wrote

What’s being valued isn’t living status or welfare but the power of agency. Agency isn’t agent-relative but it’s agency itself. It’s a possession which everyone has in equal capacity and no justifiable difference exists (you can’t say that one is more free than others).

1

subzero112001 t1_izxc5lz wrote

>Agency isn’t agent-relative but it’s agency itself.

.......lol?

> It’s a possession which everyone has in equal capacity

No, they really don't have an equal capacity. It isn't equal hypothetically, realistically, or in any manner.

> no justifiable difference exists (you can’t say that one is more free than others)

Agency over oneself compared to not having agency over another entity is a massive difference.

There unfortunately seems to be some big lapse in mutual comprehension here.

1

EyeSprout t1_izujx2b wrote

The article doesn't really explain what "reason" is supposed to mean in this context, but the central argument is very much dependent on this one definition.

> Second, the value of reason is established by asking why. The question isn’t “who shall force me to be moral” or “what is moral,” both of which imply an outside force imposing morality through authority. But rather the question is like “what argument for morality can you provide that I can be reasonably expected to accept?” The skeptic will only accept a reason-based response.

What is a "reason-based response"? Obviously, "the happiness of people with reddit accounts named 'eyespout' should be maximized" is not what you would consider a "reason-based response," but on what grounds exactly? Usually by "reason" we mean a system of statements that can be derived from axioms... but every logical system depends on axioms, so why can't I choose whatever I want as an axiom for my system?

What constraints are you putting on your allowed axioms?

>If the skeptic recognizes his own freedom, as well as that freedom being subject to reason, then he must accept the freedom of others. It cannot be reasonable that the skeptic’s own personal freedom is the only freedom worth valuing.

That requires a constraint on what "reason" is: whatever this "reason" means has the property that "it cannot be reasonable that the skeptic’s own personal freedom is the only freedom worth valuing". But why exactly would "reason" have that property?

1

contractualist OP t1_izulrh6 wrote

A reason is a public justification in favor of something. And if you want to constrain someone's freedom, it must be on the basis of some justifiable reason that couldn't be reasonably rejected.

Since freedom is a property of the skeptic, and the skeptic has no reasonable basis for differentiating this property from the equal properties of others, the skeptic would have to recognize and value the freedom of others. There is no reason to prioritize his freedom-asset over that of others that can be publicly justified.

1

EyeSprout t1_izup7d9 wrote

I don't think this answers my questions. I gave you a specific example: why is "in order to maximize the happiness of EyeSprout" not a good public justification? The above is an objective basis for differentiating my freedom from that of others; it's really a description of how some atoms in some server's memory are arranged. You claim that it's not reasonable, but why is it not reasonable?

The key point here is that people are not identical, and I can always define some set of properties that distinguishes me from other people and hence value my freedom over other people's. There are more "common" ways to distinguish people, such as by what they contribute to society or how much money they make. Are you saying that no such set of conditions is "reasonable"? But you have been somehow restricting your moral system to only include humans. Why is only including humans a "reasonable" differentiation while other things are not? In general, why are some methods of differentiation "reasonable" and some not?

The reason I'm a stickler for this point is because there's an explanation I do accept for why people should follow morality, and the answer turns out to be "because morality is designed so that it's usually in their self-interest to follow morality", i.e. morality follows a game-theoretic stability principle.
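To make the stability idea concrete, here is a toy sketch (my own illustration, not from the article or this thread): an iterated prisoner's dilemma where an unconditional free rider plays against a conditional cooperator (tit-for-tat). The strategy names and payoff values are the standard textbook assumptions, not anything specific the commenters proposed.

```python
# Toy iterated prisoner's dilemma illustrating game-theoretic stability:
# free riding wins one round, but loses the repeated game once others
# condition their cooperation on your past behavior.

PAYOFFS = {  # (my move, their move) -> my payoff; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Return the total payoffs of two strategies over repeated rounds."""
    score_a = score_b = 0
    last_a = last_b = "C"  # both are treated as opening cooperatively
    for _ in range(rounds):
        move_a = strategy_a(last_b)  # each strategy sees the other's last move
        move_b = strategy_b(last_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda their_last: their_last  # copy the opponent's last move
free_rider = lambda their_last: "D"          # always defect

coop, _ = play(tit_for_tat, tit_for_tat)   # mutual cooperation throughout
rider, _ = play(free_rider, tit_for_tat)   # one exploitative round, then mutual defection
print(coop, rider)  # 300 vs 104
```

The point of the sketch is only the stability property mentioned above: against conditional cooperators, the free-riding strategy is self-undermining over repeated interactions, so following the cooperative norm is usually in one's self-interest.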

2

contractualist OP t1_izutqw5 wrote

If it can be reasonably rejected, then it's not a good reason. No one would want to bind their freedom to that specific reason.

No, people are not identical, but they possess identical freedom. There's no basis for differentiating one's own freedom from another's. In the same way that you cannot say you are more "alive" than another living being (except metaphorically), being "more free" makes about as much sense. If you value reason, then you can't deny that people's freedoms are equal, since there is no basis for stating otherwise.

If morality is just rational interest, subject to game-theoretic stability, then it's not morality, just rationality. Why not be a free rider if there are no consequences to being so? That's what I mean by morality.

1

EyeSprout t1_izv1sez wrote

>No one would want to bind their freedom to that specific reason.

By that, do you mean: that specific reason (assuming you're talking about the reddit account name condition) is easy enough to change (by, say, someone hacking one's account), and no one is willing to lose their freedom over that (their account being hacked), so it's not a good condition?

Then is the condition just about how easy something is to change? I.e., the value of a person's freedom shouldn't change very easily under realistic circumstances? That does sound like a decent functional definition; it could work.

>If you value reason, then you can't deny that people's freedom are equal, since there is no basis for stating otherwise.

That paragraph is hard to understand, but at the end, do you just mean that the qualitative/discrete properties of a person's freedom should be equal? A good argument for that is that there is a continuous spectrum of people, and any discrete cut we introduce in that continuity would necessarily be arbitrary.

So on one hand, it can make sense to restrict people's freedom of action in the sense of giving them varying amounts of income, because income is a continuous property, but it doesn't make sense to restrict people's freedom of action by allowing or disallowing specific actions, because that's a discrete property and would introduce an arbitrary cut?

i.e. your central argument is basically a topological one? That's an interesting idea and something I could get behind.

Edit: or more specifically, in the case of two continuous properties, any map/dependence would have some arbitrary parameters, so we can't really "reduce" it by making everyone equal. But when you map a continuous space to a discrete space, there's a clear preference there.

-------------------

My own framework isn't really important to this conversation, but to explain some things:

>If morality is just rational interest, subject to game theoretic stability,

No, that's not quite what I mean. Morality has the property of (approximate) stability, but it is not uniquely defined by stability. There are many distinct systems with the property of stability and some of them can be called "morality" while calling others morality would be ridiculous.

>Why not be a free rider if there are no consequences to being so?

In any realistic situation, no one is able to tell ahead of time whether there are consequences or not, and just assuming there are consequences tends to lead to better results than constantly worrying about whether there are consequences.

But yeah, I get it, I tend to treat morality descriptively rather than prescriptively, which is a slightly different question. It's a matter of my interests; I always find the descriptive problem more interesting. The same thing happens when I talk about the problem of induction: it's more interesting to me to talk about when we can induct, not if we can induct.

2