
EyeSprout t1_izv1sez wrote

>No one would want to bind their freedom to that specific reason.

By that, do you mean: that specific reason (assuming you're talking about the reddit account name condition) is easy enough to change (by, say, someone hacking one's account), and no one is willing to lose their freedom over something like that (their account being hacked), so it's not a good condition?

Then is the condition just about how easy something is to change? i.e. the value of a person's freedom shouldn't change very easily under realistic circumstances? That does sound like a decent functional definition; it could work.

>If you value reason, then you can't deny that people's freedom are equal, since there is no basis for stating otherwise.

That paragraph is hard to understand, but at the end, do you just mean that qualitative/discrete properties of a person's freedom should be equal? A good argument for that is that there is a continuous spectrum of people, and any discrete cut we introduce into that continuity would necessarily be arbitrary.

So on one hand, it can make sense to restrict people's freedom of action in the sense of giving them varying amounts of income, because income is a continuous property, but it doesn't make sense to restrict people's freedom of action by allowing or disallowing specific actions, because that is a discrete property and would introduce an arbitrary cut?

i.e. your central argument is basically a topological one? That's an interesting idea and something I could get behind.

Edit: or more specifically, in the case of two continuous properties, any map/dependence would have some arbitrary parameters, so we can't really "reduce" it by making everyone equal. But when you map a continuous space to a discrete space, there's a clear preference there.
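To make that distinction concrete, here's a small hedged sketch (the attribute, the cut point, and the tax-style continuous map are all my own illustrative choices, not anything from your comment): a non-constant map from a continuous attribute into a discrete set of permissions must place a cut somewhere, so arbitrarily similar people get opposite treatment; a continuous map has no such sharp boundary, but its parameters remain arbitrary.

```python
# Illustrative sketch of the continuous-vs-discrete mapping point.
# Names and numbers are hypothetical, chosen only to show the cut.

def permission(x, cut):
    """Discrete map: allow an action only above an arbitrary cut."""
    return "allowed" if x >= cut else "disallowed"

def income(x, rate=0.3):
    """Continuous map: outcomes vary smoothly; 'rate' is an arbitrary
    parameter, but there is no sharp boundary between similar people."""
    return (1 - rate) * x

cut = 0.5                  # the arbitrary cut point
a, b = 0.4999, 0.5001      # two nearly identical people

# Under the discrete map they land on opposite sides of the cut:
print(permission(a, cut), permission(b, cut))  # disallowed allowed

# Under the continuous map their outcomes differ only slightly:
print(abs(income(a) - income(b)))
```

The only cut-free map into the discrete set is the constant one (everyone treated equally), which is the "clear preference" above; the continuous map avoids a cut but never loses its arbitrary parameters.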

-------------------

My own framework isn't really important to this conversation, but to explain some things:

>If morality is just rational interest, subject to game theoretic stability,

No, that's not quite what I mean. Morality has the property of (approximate) stability, but it is not uniquely defined by stability. There are many distinct systems with the property of stability; some of them can be called "morality", while calling others "morality" would be ridiculous.

>Why not be a free rider if there are no consequences to being so?

In any realistic situation, no one can tell ahead of time whether there will be consequences, and simply assuming there will be tends to lead to better results than constantly trying to work out whether there are.
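A minimal sketch of why that heuristic pays off, using a standard repeated prisoner's dilemma against a tit-for-tat partner (the payoff values and continuation probability are textbook illustrations, not anything from this thread): when you can't rule out meeting the other party again, free-riding earns a one-shot gain and then gets punished, while cooperating keeps earning.

```python
# Hedged sketch: free-riding vs. cooperating when the odds of
# future consequences are uncertain. T, R, P are the usual
# temptation / mutual-cooperation / punishment payoffs.

T, R, P = 5, 3, 1

def payoff(strategy, delta, rounds=1000):
    """Expected total payoff against a tit-for-tat partner.
    delta = per-round probability the interaction continues,
    i.e. how likely it is that there *are* consequences."""
    total, weight = 0.0, 1.0
    for r in range(rounds):
        if strategy == "cooperate":
            total += weight * R
        else:  # free-ride: one-shot temptation, punished thereafter
            total += weight * (T if r == 0 else P)
        weight *= delta
    return total

# With a moderate chance of meeting again, free-riding loses:
print(payoff("cooperate", 0.9) > payoff("defect", 0.9))  # True
# Only when consequences are very unlikely does defection win:
print(payoff("cooperate", 0.1) > payoff("defect", 0.1))  # False
```

So "just assume there are consequences" is a good rule precisely because, in realistic situations, you rarely know that delta is small.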

But yeah, I get it: I tend to treat morality descriptively rather than prescriptively, which is a slightly different question. It's a matter of my interests; I always find the descriptive problem more interesting. The same thing happens when I talk about the problem of induction: it's more interesting to me to discuss when we can induct rather than whether we can induct.
