sener87 t1_iwhuzht wrote

Well, technically there are some requirements for consistency, but they mostly boil down to a simple structure. As long as you are able to rank any two experiences relative to each other, the rest is sorted out by transitivity. The exact number of the utility score is not important; any order-preserving transformation of the scale is equivalent for the choice/ranking. The question is therefore simply: can you choose between them? And indifference is allowed.
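
To see that invariance concretely, here's a minimal Python sketch (the experiences and scores are invented for illustration): any strictly increasing transformation of the utility scale leaves every pairwise choice, and hence the whole ranking, unchanged.

```python
import math

# Made-up utility scores for three experiences; only the order matters.
utility = {"good meal": 2.0, "pop song": 0.5, "stubbed toe": -1.0}

def rank(scores):
    """Rank experiences from best to worst by score."""
    return sorted(scores, key=scores.get, reverse=True)

# Apply a strictly increasing (order-preserving) transformation of the scale.
transformed = {k: math.exp(3 * v + 7) for k, v in utility.items()}

# The ranking, and therefore every pairwise choice, is unchanged.
assert rank(utility) == rank(transformed)
print(rank(utility))  # ['good meal', 'pop song', 'stubbed toe']
```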

Multi-faceted experiences make such comparisons more difficult, for pretty much the same reason that comparing experiences across different persons is difficult. There is not that big a conceptual difference between 'better in aspect A (food) but worse in aspect B (pop song)' and 'better for person A (me) but worse for person B (you)'. The one thing that makes the interpersonal setting so much harder is that there is no single actor to make the decision, whereas in the multi-criterion setting we can rely on a single actor to make the choice (even if that choice is indifference).
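
Here's a hedged sketch of that point (aspect names, scores, and weights are all hypothetical): different weightings of the aspects can rank the same two experiences in opposite orders, so the comparison is only settled once a single actor supplies the weights.

```python
# Two experiences scored on two aspects; all numbers are hypothetical.
experiences = {
    "fancy dinner": {"food": 0.9, "music": 0.2},
    "pop concert": {"food": 0.1, "music": 0.8},
}

def aggregate(aspects, weights):
    """Collapse multi-aspect scores into a single number via weights."""
    return sum(weights[a] * v for a, v in aspects.items())

# Two possible actors, each with their own weighting of the aspects.
foodie = {"food": 0.8, "music": 0.2}
music_fan = {"food": 0.2, "music": 0.8}

for name, weights in [("foodie", foodie), ("music_fan", music_fan)]:
    best = max(experiences, key=lambda e: aggregate(experiences[e], weights))
    print(name, "->", best)
# foodie -> fancy dinner, music_fan -> pop concert: the ranking flips
# with the weights, and only a single actor can supply them.
```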


Squark09 OP t1_iwichxa wrote

> As long as you are able to rank any two experiences relative to each other, the rest is sorted out by transitivity.

This is key. I actually recall hearing about some neuroscience research showing that we do these kinds of comparisons all the time and are quite good at distinguishing the relative valence of very mixed experiences.


trinite0 t1_iwih5j5 wrote

I'm more than happy to grant that people use an intuitive form of utilitarian judgment as a heuristic aid in decision-making. That's quite far from claiming, as the original article does, that utilitarianism can form an "ultimate ethical theory," or that conscious valence solves the "is/ought" problem in moral reasoning.

The fact is, the vast majority of the decisions that people make in their day-to-day lives don't really involve any reasoning at all, ethical or otherwise.

As an ethical theory, utilitarianism is, at best, a limited lens through which we can examine certain very simplified, highly circumscribed decisions: situations where we have (or think we have) a far clearer understanding of the most likely consequences of an action than we do in normal circumstances.

This is why, I think, utilitarians seem to like thought experiments so much: it's much easier to formulate a utilitarian reasoning chain to decide dramatic imaginary scenarios than it is to apply it to normal daily behavioral decisions. Utilitarianism might be able to figure out whether it would be ethical to choose to annihilate the human race in nuclear fire, but it has a lot less to say about whether I should tell my kid to stop picking his nose.


Squark09 OP t1_iwiq1rn wrote

As I say in the article, most of the time deontological or virtue ethics are actually a better bet for figuring out how to act. But that's just because they're a more efficient way of reasoning about the best thing to do. In the end, the thing that matters is the sum total of positive conscious experience.


trinite0 t1_iwir4yz wrote

There is no such thing as a "sum total of positive conscious experience." Why do you think there would be?

Or if there is, how could such a thing possibly be accessible to our limited, forgetful, mortal brains?
