EyeSprout t1_izujx2b wrote

The article doesn't really explain what "reason" is supposed to mean in this context, but the central argument depends heavily on that one definition.

> Second, the value of reason is established by asking why. The question isn’t “who shall force me to be moral” or “what is moral,” both of which imply an outside force imposing morality through authority. But rather the question is like “what argument for morality can you provide that I can be reasonably expected to accept?” The skeptic will only accept a reason-based response.

What is a "reason-based response"? Obviously,"the happiness of people with reddit accounts named 'eyespout' should be maximized" is not what you would consider a "reason-based response", but on what grounds exactly? Usually by "reason" we mean a system of statements that can be derived from axioms... but every logical system depends on axioms, why can't I choose whatever I want as an axiom for my system?

What constraints are you putting on your allowed axioms?

>If the skeptic recognizes his own freedom, as well as that freedom being subject to reason, then he must accept the freedom of others. It cannot be reasonable that the skeptic’s own personal freedom is the only freedom worth valuing.

That requires a constraint on what "reason" is: whatever this "reason" means has the property that "it cannot be reasonable that the skeptic’s own personal freedom is the only freedom worth valuing". But why exactly would "reason" have that property?

contractualist OP t1_izulrh6 wrote

A reason is a public justification in favor of something. And if you want to constrain someone's freedom, it must be on the basis of some justifiable reason that couldn't reasonably be rejected.

Since freedom is a property of the skeptic, and the skeptic has no reasonable basis for differentiating this property from the equal properties of others, the skeptic would have to recognize and value the freedom of others. There is no publicly justifiable reason to prioritize his own freedom over that of others.

EyeSprout t1_izup7d9 wrote

I don't think this answers my questions. I gave you a specific example: why is "in order to maximize the happiness of EyeSprout" not a good public justification? It is an objective basis for differentiating my freedom from that of others; it's really just a description of how some atoms in some server's memory are arranged. You claim that it's not reasonable, but why is it not reasonable?

The key point here is that people are not identical, and I can always define some set of properties that distinguishes me from other people and hence value my freedom above that of other people. There are more "common" ways to distinguish people, such as how much they contribute to society or how much money they make. Are you saying that no such set of conditions is "reasonable"? But you have somehow been restricting your moral system to only include humans. Why is including only humans a "reasonable" differentiation while other things are not? In general, why are some methods of differentiation "reasonable" and some not?

The reason I'm a stickler for this point is that there's an explanation I do accept for why people should follow morality, and the answer turns out to be "because morality is designed so that it's usually in their self-interest to follow it," i.e. morality follows a game-theoretic stability principle. A rough sketch of what I mean is below.
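
To illustrate the stability idea (this sketch and its payoff numbers are my own, not from the article): in an iterated prisoner's dilemma, a reciprocating strategy does better among reciprocators than a defector trying to exploit them, so cooperation is stable once interactions repeat.

```python
# Minimal sketch of game-theoretic stability (payoffs are the standard
# illustrative prisoner's-dilemma values, chosen by me).

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=50):
    """Total payoffs for both strategies over repeated play."""
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (150, 150): mutual cooperation
print(play(always_defect, tit_for_tat))  # (54, 49): one-shot gain, then punished
```

The defector's 54 is far below the 150 that reciprocators earn from each other, which is the sense in which following morality is usually in one's self-interest.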

contractualist OP t1_izutqw5 wrote

If it can be reasonably rejected, then it's not a good reason. No one would want to bind their freedom to that specific reason.

No, people are not identical, but they possess identical freedom. There's no basis for differentiating one's own freedom from another's. In the same way that you cannot say you are more "alive" than another living being (except metaphorically), being "more free" makes about as much sense. If you value reason, then you can't deny that people's freedom is equal, since there is no basis for stating otherwise.

If morality is just rational interest, subject to game-theoretic stability, then it's not morality, just rationality. Why not be a free rider if there are no consequences to being so? That's what I mean by morality.

EyeSprout t1_izv1sez wrote

>No one would want to bind their freedom to that specific reason.

By that, do you mean: that specific reason (assuming you're talking about the reddit account name condition) is easy enough to change (say, by someone hacking the account), and no one is willing to lose their freedom over that (their account being hacked), so it's not a good condition?

Then is the condition just about how easy something is to change? I.e., the value of a person's freedom shouldn't change very easily under realistic circumstances? That does sound like a decent functional definition; it can work.

>If you value reason, then you can't deny that people's freedom are equal, since there is no basis for stating otherwise.

That paragraph is hard to understand, but at the end, do you just mean that the qualitative/discrete properties of a person's freedom should be equal? A good argument for that is that there is a continuous spectrum of people, and any discrete cut we introduce in that continuity would necessarily be arbitrary.

So on one hand, it can make sense to restrict people's freedom of action by giving them varying amounts of income, because income is a continuous property; but it doesn't make sense to restrict people's freedom of action by allowing or disallowing specific actions, because that is a discrete property and would introduce an arbitrary cut?

i.e. your central argument is basically a topological one? That's an interesting idea and something I could get behind.

Edit: or more specifically, when you map one continuous property to another, any such map/dependence has some arbitrary parameters, so we can't remove the arbitrariness by making everyone equal. But when you map a continuous space to a discrete space, there's a clearly preferred choice: the map that treats everyone the same and introduces no cut at all. A toy version of this is below.
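
To make the continuous-vs-discrete point concrete, here's a toy sketch (the numbers and the 0.5 threshold are mine, purely illustrative): a continuous rule treats nearly identical people nearly identically, while any discrete rule must pick a cut somewhere, and people arbitrarily close to the cut get treated arbitrarily differently.

```python
# Toy illustration of the "arbitrary cut" point (all numbers hypothetical).

def continuous_rule(trait: float) -> float:
    """Continuous trait -> continuous outcome: no threshold needed."""
    return 100.0 * trait  # a small change in trait means a small change in outcome

def discrete_rule(trait: float, cut: float = 0.5) -> bool:
    """Continuous trait -> discrete outcome: some cut must be chosen."""
    return trait >= cut  # why 0.5 rather than 0.50001? the choice is arbitrary

a, b = 0.4999, 0.5001  # two nearly indistinguishable people
print(continuous_rule(a), continuous_rule(b))  # ~49.99 vs ~50.01: nearly equal
print(discrete_rule(a), discrete_rule(b))      # False vs True: a hard jump
```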

-------------------

My own framework isn't really important to this conversation, but to explain some things:

>If morality is just rational interest, subject to game theoretic stability,

No, that's not quite what I mean. Morality has the property of (approximate) stability, but it is not uniquely defined by stability. There are many distinct systems with the property of stability, and some of them can be called "morality," while calling others morality would be ridiculous.

>Why not be a free rider if there are no consequences to being so?

In any realistic situation, no one can tell ahead of time whether there will be consequences or not, and just assuming there are consequences tends to lead to better results than constantly worrying about whether there are. A back-of-the-envelope version of that is below.
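
As a rough illustration of that claim (all numbers here are hypothetical, chosen only to make the arithmetic visible): since you rarely know the probability of being caught in advance, even a modest chance of detection can wipe out the expected gain from free riding, which is why "assume there are consequences" is a good default.

```python
# Expected value of free riding under an unknown detection probability
# (GAIN and PENALTY values are hypothetical).
GAIN = 5.0      # payoff from free riding if never caught
PENALTY = 50.0  # cost if caught: punishment, lost reputation, exclusion

def expected_value(p_caught: float) -> float:
    """Average payoff of free riding given a detection probability."""
    return (1 - p_caught) * GAIN - p_caught * PENALTY

for p in (0.0, 0.05, 0.10, 0.25):
    print(f"p_caught={p:.2f}: EV={expected_value(p):+.2f}")
# p_caught=0.00: EV=+5.00
# p_caught=0.05: EV=+2.25
# p_caught=0.10: EV=-0.50
# p_caught=0.25: EV=-8.75
```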

But yeah, I get it: I tend to treat morality descriptively rather than prescriptively, which is a slightly different question. It's a matter of my interests; I always find the descriptive problem more interesting. The same thing happens when I talk about the problem of induction: it's more interesting to me to ask when we can induct, not whether we can induct.
