KingJeff314 t1_j92wme4 wrote
Reply to comment by contractualist in The Ontology and Epistemology of Morality by contractualist
> And the error in the last section was treating X's freedom and Y's freedom separately. Freedom is an objective property that cannot reasonably be differentiated. Its not agent-relative, it is agency. There is no X's freedom or Y's freedom, there is only freedom that both X and Y happen to possess.
To make a statement like "you should not kidnap a person," you have to appeal to a value like "you value that person's freedom," not "you value freedom," which is nebulous and non-specific. Supposing I were a psychopath who only cared about my own freedom (i.e., Freedom(Me, Me)), what rational grounds do you have to make me care about anyone else?
KingJeff314 t1_j92mylj wrote
Your article hinges on the idea that humans share values and can therefore come to a normative consensus. It is much more complex than that. Humans hold many different values, often in conflict with one another, and each person weighs those values, and whom they apply to, differently.
Some people value security more than freedom, for instance. Should a government do more invasive searches under the threat of a terrorist attack? Either they do nothing and potentially allow a terrorist attack, or they act to stop it and violate citizens' freedoms in the process. This is a Trolley Problem. Your article suggests "No answer would be justifiable to all involved parties since they would all have a reasonable claim to not being [killed/invasively searched]". Your Trolley Problem article also states, "Like so many other life dilemmas, pure reason cannot provide a definite answer to the trolley problem. Only the free self can make a choice whenever there are sufficient reasons for either side of a decision." Basically, once we get to moral problems of any real complexity, your model of pure reason is insufficient.
Additionally, your reasoning that "valuing freedom necessarily implies valuing the freedom of others" is insufficient. To show the gap in the logic, let me formalize the claim in predicate logic:
Definitions: Freedom(X,Y) means that X values Y's freedom, Free(X) means that X is a free agent, and H is the set of humans. We can assume ∀X in H, Free(X) ∧ Freedom(X,X). ("∀" means "for all" and "∧" means "and".)
So then your claim is that ∀X,Y in H, Freedom(X,X) ⇒ Freedom(X,Y). Your justification in the linked article is "If others are regarded as having similar freedom to his own—by having the capacity to freely make decisions, including the decision whether or not to be moral—then he cannot deny the value of their own freedom". In predicate logic, that justification amounts to ∀X,Y in H, (Free(X) ∧ Free(Y) ∧ Freedom(X,X)) ⇒ Freedom(X,Y). This does not follow from the assumptions above. It assumes a symmetry that does not necessarily exist.
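To make the gap concrete, here is a minimal countermodel sketch in Python (my own illustration, not anything from your article): a two-agent world in which the assumed premises hold but the claimed conclusion fails.

```python
# Toy countermodel (illustrative sketch only): two agents, both free and both
# valuing their own freedom, but neither valuing the other's freedom.
H = {"X", "Y"}           # the set of agents

def Free(a):
    return True          # every agent is a free agent

def Freedom(a, b):
    return a == b        # a values b's freedom only when b is a itself

# Assumed premises: ∀X in H, Free(X) ∧ Freedom(X, X)
premises_hold = all(Free(a) and Freedom(a, a) for a in H)

# Claimed conclusion: ∀X, Y in H, Freedom(X, Y)
conclusion_holds = all(Freedom(a, b) for a in H for b in H)

print(premises_hold, conclusion_holds)  # prints: True False
```

Since the premises are true in this model while the conclusion is false, the conclusion cannot be derived from those premises alone; some additional symmetry assumption is needed.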
Overall, I caution you against playing loosely with assumptions about values. Can we even be sure that any two humans share the exact same set of values?
KingJeff314 t1_j9p6kak wrote
Reply to LPT: Get the brave browser if you want to listen to music on YouTube with your screen turned off without having the subscription. Simply get brave, click the ... , go to Settings and enable background music. Now you switch over to other apps or even turn off your screen. iOS/Android by Long8D
Brave also blocks ads, so it's good for iOS, which can't have uBlock afaik