Comments

trinite0 t1_iwha8qc wrote

From the article:

>If we accept the reality of experience, it’s hard to deny that the universe has value built in. The good must derive from reducing suffering and promoting bliss, every coherent conception of it must ultimately point to this fact. If we buy the self-evident fact that conscious valence is real, we get an ought from an is.
>
>Given this strong argument for moral realism, I find it hard to imagine an ultimate ethical theory that isn’t based on some form of utilitarianism. Deontological and virtue ethics may provide good tools for achieving good outcomes, but they don’t get to the heart of the matter. Any coherent ethical theory must aim to attain a world-state with less suffering. This ultimately reduces to a form of utilitarianism that is based on measuring the quality of conscious experience, often referred to as valence utilitarianism.1

So the argument here is: if you accept the premise that utilitarianism is the only option, then you must conclude that utilitarianism is the only option.

Okay then.

17

eliyah23rd t1_iwhqujt wrote

My problem is more with the first paragraph you quoted.

>If we buy the self-evident fact that conscious valence is real, we get an ought from an is.

Nope. Nice paragraph, but it's not an argument.

Yes, I feel that valence for myself. It so happens I even want to end suffering for others. But that still doesn't tell me why I should want to end suffering for others.

I'm afraid I'm going to have to remain in the moral skeptic camp.

7

Squark09 OP t1_iwib1es wrote

If you can believe that suffering is objectively bad and that other people can suffer, then you get the "should". It's almost tautological, based on the definition of bad.

6

clairelecric t1_iwk69vs wrote

But it's not objectively bad; it's subjectively bad.

4

phenamen t1_iwnxi1u wrote

It's bad for any subject; therefore it's objectively bad, insofar as subjectivity exists.

3

clairelecric t1_iwoajb4 wrote

That doesn't follow logically at all. For instance, it is mostly bad from the perspective of said subject, but maybe not from a different perspective. Also, suffering can be an important signal to ourselves that something is amiss. So you might say "but then we try to solve the suffering so then we prove that suffering is bad because we want to get rid of it". Well, you could argue that, but you could also argue that without the suffering we wouldn't change for the better. The suffering is a means to an end. Without it we wouldn't know there was some kind of (subjective) problem.

1

phenamen t1_iwoilxr wrote

You anticipated exactly the counterargument I was going to make, but I don't think your rebuttal to it is sound. How else can we understand "change for the better" except as creating personal habits or relationships which more consistently produce happiness than suffering?

1

clairelecric t1_iwoy9fz wrote

Surely you see that many of our ideas of better or worse are constructions? For the person seeking enlightenment, going into refuge and isolation is better. For someone else it might be getting more friends. For another, it's faith in God.

Let's say there's an alien species looking at us and considering us detrimental to the planet and potentially our solar system. Let's say they look at our self destruction with joy and a sense of justice. Does that possibility not prove there is no objective bad?

1

iiioiia t1_iwiaq5g wrote

> But that still doesn't tell me why I should want to end suffering for others.

Arguably it may decrease the likelihood of people harming other people due to anger as a consequence of their suffering...and some day, one of those harmed people could be you or one of your loved ones.

2

eliyah23rd t1_iwm74l9 wrote

Hi. Your reply is a factual argument about how best to implement my own preference.

Value: I don't want to suffer. Fact: letting others suffer increases the chances that they will make me suffer. Plan: decrease their suffering.

I called it a value here, but it is a preference. There is no argument here that I "should" hold to that preference; there is just the description that I do. I could just as easily be a masochist. I know that I have neural modules that implement motivational salience, but that is just a fancy way of saying that I do what I want to do.

Again, moral skepticism does not imply that I prefer only survival or my own pleasure (though advertisers have an interest in my thinking that). I could just prefer helping other people. I get a real buzz when I prevent suffering. Is that a "value"?

2

iiioiia t1_iwmigm3 wrote

I was coming at it from the perspective that most people tend to prefer not being killed (or dying in general).

Considering that: did you get vaccinated for COVID?

2

eliyah23rd t1_iwmk9bd wrote

>Considering that: did you get vaccinated for COVID?

why?

2

iiioiia t1_iwml78f wrote

It may provide insight into your stance on staying alive (which may, at times, be contrary to your self-perception of it).

2

eliyah23rd t1_iwp3p4z wrote

I'm not sure it would.

In your mind (I assume), I am a disembodied online persona. There is no evidence that the sufficient cause of these words is an actual human being.

Say I claim that the person behind this persona has received 5 shots, but that the person spends time every day with their 97-year-old parent. They express concern that becoming infected might endanger their parent's life. That person's brain would seem to have some neurons achieving motivational salience that mark looking after a parent's welfare as a high-priority value. Say an advanced fMRI could confirm this description.

What would that tell you about what you or anyone else in the world "should" do?

3

iiioiia t1_iwqajxu wrote

I think this is a plausible edge case - you could even be suicidal, yet get vaccinated to protect a loved one.

But I don't think this necessarily reaches a "should", as simple preference could be sufficient. In a sense, from certain perspectives, maybe "should" is purely a collective hallucination, like "rights", most of "truth", etc. Adult life is very much like a continuation of "playing house" from childhood.

1

eliyah23rd t1_iwqe368 wrote

Wow! Exactly!

I just did a few posts on the idea of ethics as a game on my Instagram. Check out the comments, where I expand on the aphorism.

3

iiioiia t1_iwqegwu wrote

> ethics as a game

This sounds interesting....but can you link to what you're referring to?

1

eliyah23rd t1_iwqic63 wrote

Sure

https://www.instagram.com/eliyah23rd/

See the last two posts. I am struggling to upload a third but Instagram seems to be having some issue.

2

iiioiia t1_iwqj07t wrote

Ah, I think we've talked about this before, right? As a fan of subversion: some of those are excellent.

Now we just need an implementation.

2

eliyah23rd t1_iwqjtpn wrote

Thank you so much.

Yes, we have spoken about it before.

I need collaborators for an implementation. Also more criticism of the core ideas.

That's why I hang out trying to talk to people.

2

iiioiia t1_iwqrori wrote

What sort of an implementation do you have in mind?

2

eliyah23rd t1_iwqyyo8 wrote

Rather than pour too many words onto the page, can I refer you to the comments/captions on these posts:

https://www.instagram.com/p/Cj4_NgYsxUw/

https://www.instagram.com/p/CkIg-UkMbhw/

https://www.instagram.com/p/CkNx6RkMvQo/

I try to put a lot of the details directly in the descriptions.

2

iiioiia t1_iwqzoo9 wrote

You are onto something.

1

eliyah23rd t1_iwr2f4v wrote

Thank you so much.

3

iiioiia t1_iwr36b0 wrote

If you ever come into some $$$, let me know, and I will do likewise. :)

1

eliyah23rd t1_iwu93ag wrote

I'm not trying to get $$$

I believe in collaboration instead.

Look at the open source movement. They produce far superior code to that of the big corporations with budgets in the billions. Yes, some corporations hire teams of independent-minded open source creators and, IMO, exploit them, because they control the real gold: eyeballs and data.

I want to work without being tied to $$. I want to find the first collaborators and roll on from there.

:heart: > $

3

iiioiia t1_iwuupck wrote

> I'm not trying to get $$$

Why not? What kind of an implementation are you aiming at that wouldn't benefit from $$$?

> I believe in collaboration instead.

They are not mutually exclusive....like not at all. In fact, the opposite is more often true (consider the website we are having this conversation on right now, which runs on money).

> Look at the open source movement. They produce far superior code to that of all the big corporations with budgets in the billions. Yes, some corporations hire teams of independent minded open source creators and exploit them IMO because they control the real gold: eyeballs and data.

True....but if one wants to reach a goal, which is (probabilistically) faster: hoping for talent, or buying talent?

> I want to work without being tied to $$. I want to find the first collaborators and roll on from there.

Money tends to tie/corrupt people, but it is not a necessity.

> :heart: > $

Exactly. But that does not render the value/power of money zero. Both variables could be extremely large, with heart being the much bigger of the two.

As a thought experiment, imagine two scenarios:

a) You and me (as is) vs the world ("the powers that be")

b) You and me (with unrestricted access to $45B in capital) vs the world ("the powers that be")

If it were me, I'd choose option (b).

1

eliyah23rd t1_ix45dij wrote

I am not against other people raising money for a cause. I am not interested in raising money. I don't want to be controlled by the wishes of those who give the money nor am I interested in buying talent and telling other people what I want them to create.

I just want to inspire, be corrected by and collaborate with other people who are trying to achieve the same goal. If we have differing goals but can find some goals in common, then that is fantastic too. If one or more of the team wants to raise money to further the ideas we've worked on, that would be great. It is just not the role I want.

That may all change. For now, I just want to create a forum where people who care can discuss the issues. I've got some ideas and these ideas need criticism. I want to hear the ideas other people have. Once there's some momentum, let's see where we all want to go from there.

1

sener87 t1_iwhnviz wrote

You have a typo there; the end should be Q.E.D.

1

Squark09 OP t1_iwiajr8 wrote

If you accept the premise that experience with positive or negative valence is real and that closed individualism is false, utilitarianism is the only option.

I guess most people have more of an issue with the closed individualism part.

1

trinite0 t1_iwie922 wrote

That conclusion absolutely does not follow from those premises, and the article makes no coherent argument that it does. It smuggles its conclusion into the premises themselves, and asserts it with no argument.

2

Squark09 OP t1_iwij01g wrote

It seems like the conclusion is smuggled in because in a way it's tautological. What we mean by good or bad has to be conveyed by conscious valence, as that's the only way we can know anything.

Then if you reject closed individualism, you have to admit that other people's experiences matter as well.

Hence you get valence utilitarianism.

1

trinite0 t1_iwijwfc wrote

A tautology is not an argument. It does not have a conclusion.

2

Squark09 OP t1_iwikbdg wrote

Although pointing out tautologies can clear up confusion.

What do you mean by good or bad?

1

trinite0 t1_iwikhiz wrote

"Good or bad" what? In what context?

1

Squark09 OP t1_iwioe5t wrote

In the context of ethics

1

trinite0 t1_iwiqlwd wrote

"Ethics" is not a meaningful context.

I assume you mean something like, "How do I assign value to experiences in the course of making choices between different possible courses of action?"

And I'll answer for myself, but with what I think applies to every human being: "Poorly, inconsistently, intuitively, and with very little reflection, reasoning, or conscious judgment in 99.9% of cases."

3

RelativeCheesecake10 t1_iwgvr3y wrote

Here’s a question: you say that the good must derive from reducing suffering and promoting bliss.

Is there anything valuable/morally relevant in people in themselves, independent from their capacity as experiencers of suffering or joy?

If the answer is yes, then it seems like reducing suffering and promoting bliss is not the ultimate good, since we value people in themselves, not just in their capacity to experience bliss. Maybe the implication of valuing a person is that we want to reduce their suffering, but it doesn’t seem obvious that that’s the only morally relevant implication, and utilitarianism therefore gives us an incomplete picture of what it is to attend to human worth.

If the answer is no, then the idea that we should care about suffering and bliss seems silly. We want to reduce suffering, but we don’t actually care about the person suffering, as such? We just have an abstract crusade against suffering qua suffering, but we don’t assign innate value to the people we want to bring out of suffering into bliss?

Utilitarianism is only capable of valuing people insofar as they are experiencers of suffering or joy. That is, it can’t give a morally coherent picture of humans as a whole—of embodied, relational, non-fungible, agentic beings that are morally significant in their own right.

9

trinite0 t1_iwhcolt wrote

Yes, and even if one considers humans simply as experiencers, we still have a whole lot of experiences that are not easily classifiable into either suffering or joy.

When I'm driving my car on an empty highway, not thinking about anything, and not really paying attention to the road, am I experiencing suffering, or joy? What about when I brush my teeth? What about when I'm in a meeting at work that's not exactly boring but not exactly interesting?

And, of course, there are a bunch of experiences that seem to be complicated mixtures of suffering and joy, and I'm not just talking about that Hellraiser stuff. When I read a stupid comment on Twitter that makes me mad but also makes me feel smarter than that dope, am I experiencing more suffering, or more joy? What about when I laugh at an offensive joke, while also feeling ashamed for laughing? What about when I go to a horror movie and get scared for fun?

3

sener87 t1_iwhoexs wrote

I think the term you are looking for is indifference: it's like a zero on the scale between suffering and joy, and not really a problem. Interpersonal comparisons, i.e. how to value my joy relative to yours, are the harder part...

2

trinite0 t1_iwhsp85 wrote

How does that help?

I think all of us would rather have "indifferent" experiences than have no experiences at all, which would imply that humans value experience in its own right, regardless of its "score" on this imaginary joy/suffering scale. In addition, all of these "indifferent" experiences can be distinguished from each other linguistically and cognitively, so they're not equivalent to one another. And people can have preferences from among these "indifferent" experiences along axes that are not obviously related to the concepts of "joy" and "suffering."

The point is that the evaluation and description of human experience is way, way more complicated than any utilitarian account can ever reflect, because people very often do not make their experiential choices based on a neatly articulable account of where they are located on the "joy/suffering" metric.

Let's go a bit further: how do we even distinguish "an experience" from a different "experience"?

Temporally? But we can experience a variety of different things at the same time.

Sensorially? But we can sense a variety of things at once, even with the same sense.

How about in memory? But our memories often are radically simplified narrative constructs of our experiences, with vast amounts of both sensory data and emotional response discarded, or they are inaccurate combinations of different events, or they include imaginary elements that we did not actually experience, etc.

I am eating a delicious steak dinner at Chili's. At the same time, I am listening to an irritating pop song on the radio. At the same time, I am thinking about a funny joke I heard earlier in the day. At the same time, I am looking at the ugly wallpaper in the Chili's. At the same time, I am being told by the waiter that a second plane has hit the Twin Towers. How many experiences am I having?

And then we have the further problem of varying levels of "reality" of experiences. Is remembering something very clearly a form of re-experiencing that event? If I intentionally, deliberately imagine something that did not actually happen, do I experience it? What about vicarious experience, when I watch something happen to someone else, and respond emotionally as though it were happening to me? What if I have a nightmare (that certainly seems like "suffering," but is it a real experience)?

So if we cannot even confidently define what "an experience" is, or how it can be distinguished from other "experiences," or how to assess the relationships between the different levels of reality of experiences, then how in the world can we rate them all on some kind of linear scale and assign points to them?

1

sener87 t1_iwhuzht wrote

Well, technically there are some requirements for consistency, but they mostly boil down to a simple structure. As long as you are able to rank any two experiences relative to each other, the rest is sorted out by transitivity. The exact number of the utility score is not important; any order-preserving transformation of the scale is equivalent for the choice/ranking. The question is therefore simply: can you choose between them? And indifference is allowed.

Multi-faceted experiences make such comparisons more difficult, for pretty much the same reason that comparing between experiences of different persons is difficult. There is not that big a conceptual difference between 'better in aspect A (food) but worse in B (pop song)' and 'better for person A (me) but worse for person B (you)'. The one thing I find much harder about the interpersonal setting is that there is no single actor to make the decision, whereas in the multi-criterion setting we can rely on a single actor to make the choice (even if it is indifference).
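
To make the order-preserving-transformation point concrete, here is a minimal sketch in Python, with valence numbers invented purely for illustration:

```python
import math

# Hypothetical valence scores; only their order matters.
utility = {"steak dinner": 3.0, "brushing teeth": 0.0, "boring meeting": -0.5}

def prefer(a, b, u):
    """Pick the better of two experiences under scale u; ties mean indifference."""
    if u[a] > u[b]:
        return a
    if u[a] < u[b]:
        return b
    return "indifferent"

# Any strictly increasing transformation of the scale...
rescaled = {k: 10 * math.exp(v) for k, v in utility.items()}

# ...produces exactly the same choices.
for a in utility:
    for b in utility:
        assert prefer(a, b, utility) == prefer(a, b, rescaled)

print("Rankings agree under both scales.")
```

The absolute numbers never enter the choice, which is why any order-preserving rescaling is equivalent.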

2

Squark09 OP t1_iwichxa wrote

> As long as you are able to rank any two experiences relative to each other, the rest is sorted out by transitivity.

This is key. I recall hearing about some neuroscience research showing that we do these kinds of comparisons all the time, and are quite good at distinguishing the relative valence of very mixed experiences.

1

trinite0 t1_iwih5j5 wrote

I'm more than happy to grant that people use an intuitive form of utilitarian judgment as a heuristic aid in decision-making. That's quite far from claiming, as the original article does, that utilitarianism can form an "ultimate ethical theory," or that conscious valence solves the "is/ought" problem in moral reasoning.

The fact is, the vast majority of the decisions that people make in their day-to-day lives don't really involve any reasoning at all, ethical or otherwise.

As an ethical theory, utilitarianism is, at best, a limited lens through which we can examine certain very simplified, highly circumscribed decisions, for points at which we have (or think we have) a far clearer understanding of the most likely consequences of an action than we do in normal circumstances.

This is why, I think, utilitarians seem to like thought experiments so much: it's much easier to formulate a utilitarian reasoning chain to decide dramatic imaginary scenarios than it is to apply it to normal daily behavioral decisions. Utilitarianism might be able to figure out whether it would be ethical to choose to annihilate the human race in nuclear fire, but it has a lot less to say about whether I should tell my kid to stop picking his nose.

1

Squark09 OP t1_iwiq1rn wrote

As I say in the article, most of the time deontological or virtue ethics are actually a better bet for figuring out how to act, but that's just because they're a more efficient way of reasoning about the best thing to do. In the end, the thing that matters is the sum total of positive conscious experience.

1

trinite0 t1_iwir4yz wrote

There is no such thing as a "sum total of positive conscious experience." Why do you think there would be?

Or if there is, how could such a thing possibly be accessible to our limited, forgetful, mortal brains?

1

Squark09 OP t1_iwic2sq wrote

I actually think that if you pay closer attention (e.g. by training in meditation) you will see that there are no truly neutral experiences, but also that there is some kind of pleasantness in just existing without suffering.

It is true that the picture can be mixed, though, and it's not obvious how to treat that; see here for example: https://forum.effectivealtruism.org/posts/bvtAXefTDQgHxc9BR/just-look-at-the-thing-how-the-science-of-consciousness

1

Squark09 OP t1_iwicrpf wrote

I don't think "people themselves" really fully exist as independent entities outside of a web of experience. The value in someone's individuality then comes from the experiences associated with that individuality.

This is the point about rejecting closed individualism

2

RelativeCheesecake10 t1_iwie9eh wrote

This is a good response, but I don’t think it quite works. You’re still requiring that the experience of happiness is the only morally significant component of humanness, which doesn’t seem true.

2

Squark09 OP t1_iwig291 wrote

I'm not saying it necessarily has to be happiness; any positive experience counts.

1

Vainti t1_iwj6njz wrote

The notion that caring about suffering and bliss could ever be silly is silly. Imagine a person who doesn’t believe that people are valuable outside of their ability to experience bliss/suffering. Then, imagine trying to convince that person that it’s silly to care if they get tortured and abused or that it doesn’t make sense for them to want what makes them happy. There has never been and will never be a conscious entity that doesn’t care about happiness or suffering.

2

Thedeaththatlives t1_iwl4y7d wrote

> We just have an abstract crusade against suffering qua suffering, but we don’t assign innate value to the people we want to bring out of suffering into bliss?

I don't really see any problem with that?

1

rejectednocomments t1_iwgzn27 wrote

Utilitarianism is not the only option though.

7

Squark09 OP t1_iwib9fu wrote

Utilitarianism is the only option if you believe in the reality of consciousness, believe that it is valenced, and reject closed individualism.

0

rejectednocomments t1_iwibqx8 wrote

What do you mean by consciousness being valenced, and what do you mean by closed individualism?

1

Squark09 OP t1_iwidy43 wrote

Valenced means it can be intrinsically good or bad: suffering is intrinsically bad, joy is intrinsically good.

Closed individualism (nice description from https://qri.org/glossary ): "In its most basic form, this is the common-sense personal identity view that you start existing when you are born and stop existing when you die. According to this view each person is a different subject of experience with an independent existence. One can believe in a soul ontology and be a Closed Individualist at the same time, with the correction that you exist as long as your soul exists, which could be the case even before or after death."

0

Nickesponja t1_iwj1pit wrote

It seems like the majority of people would accept closed individualism and would therefore have no use for this argument.

3

Truenoiz t1_iwqzned wrote

Deontology with non-maleficence as a primary virtue is a stronger option: you don't even need to assume several precepts to make it so.

1

Nameless1995 t1_iwvghxr wrote

  • Even if we assume that conscious valence provides an ought for the conscious being with the valence, it's not clear how you can universalize valence maximization, separating it from particular individuals. Having an ought from my conscious valence doesn't immediately imply that I am obligated to, say, sacrifice my conscious valence for "overall maximization of some utility function accommodating the valence of all beings" or such, if one is not already disposed towards selflessness.
  • Open Individualism may help with the above concern, but it's more controversial and niche than utilitarianism itself. It kind of undermines the whole project when, to support X, you have to rely on something even more controversial than X. Either way, I also don't see why I should care about some "metaphysical unity/lack of separation", which can come down to how we use language. I don't see why the boundedness of consciousnesses (the restriction from accessing others' consciousness unmediated) isn't enough to ground individuation and separation, irrespective of whether all the separate perspectives are united in a higher-dimensional spatial manifold, God, the Neoplatonic One, or what have you. It's unclear to me that such abstract metaphysical unities really matter. We don't individuate things based on them being completely causally isolated and separated from the world.
  • I don't see why a proper normative theory shouldn't be applicable and scalable to hypothetical and idealized scenarios. A theory's lack of robustness to hypotheticals should count as a "bug", and a proper reason should be given as to why it doesn't apply there. Real-life scenarios are complex; before just blindly applying a theory we need some assurance. Idealized scenarios and hypotheticals allow us to "stress test" our theories to gain that assurance. Ignoring them because "they are not realistic" doesn't sound very pleasant.
  • I don't see how logarithmic scaling helps with the repugnant conclusion. The repugnant conclusion comes from the idea that x beings, each with a high-quality life, adding up to total utility y, can always be overtaken by a seemingly less ideal scenario where m beings (m >>> x) exist with total utility z (z > y) but where each of the m beings has a low-quality life. I don't see what changes if each individual's happiness grows logarithmically (you can always adjust the numbers to make it a problem; see the sketch after this list), and I don't see what changes if there is an underlying unitary consciousness behind it all. Is the "same" consciousness having many low-quality experiences really better than it having fewer high-quality experiences?
  • I also don't see the meaning of calling it the "same" consciousness if it doesn't have a single unified experience (solipsistic).
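
To put toy numbers on the logarithmic-scaling worry (the figures below are invented for illustration, not taken from the article): even if each being's valence grows only logarithmically in the resources it receives, splitting a fixed budget across vastly more beings can still win on total utility.

```python
import math

def total_utility(population, budget):
    # Assume each being's valence grows logarithmically in its share of resources.
    share = budget / population
    return population * math.log(1 + share)

BUDGET = 1_000_000  # an arbitrary fixed pool of resources

flourishing = total_utility(population=1_000, budget=BUDGET)       # ~6,909
barely_happy = total_utility(population=1_000_000, budget=BUDGET)  # ~693,147

# The huge population of barely-happy beings dominates on total utility.
print(f"{flourishing:,.0f} vs {barely_happy:,.0f}")
```
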
2

Squark09 OP t1_iwztj7o wrote

Excellent comment!

I'm already pretty committed to open/empty individualism, so this post was really meant to be me thinking through what utilitarianism means in this context. I get that it's controversial, but my own experiences in meditation and thinking about the scientific ambiguity of "self" have convinced me that closed individualism doesn't make sense.

> I don't see how logarithmic scaling helps with repugnant conclusion

You're right that it doesn't make any difference in the pure form of the thought experiment. However, I think it does make a difference when you have limited resources with which to build your world: it's much easier to push someone's conscious valence up than to generate another 1000 miserable beings. The main thing that makes a difference here is open/empty individualism.
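
One way to make that concrete is a toy model with invented numbers and an assumed fixed per-being overhead (an assumption of mine, not something from the article): once sustaining each extra being eats into the budget, total log-scaled utility peaks at a finite population rather than always favoring more beings.

```python
import math

BUDGET = 1_000_000   # arbitrary fixed pool of resources
OVERHEAD = 2_000     # assumed cost just to keep one being alive at neutral valence

def total_utility(n):
    """Total valence if the leftover budget is split evenly among n beings."""
    leftover = BUDGET - OVERHEAD * n
    if leftover <= 0:
        return float("-inf")  # this many beings cannot be sustained at all
    return n * math.log(1 + leftover / n)

best_n = max(range(1, BUDGET // OVERHEAD), key=total_utility)
print(best_n, round(total_utility(best_n)))  # an interior optimum, not the biggest crowd
```

Whether an overhead like that exists is exactly the contested part, but it shows how a resource constraint can block the "just add more beings" move.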

> I don't see why boundedness of consciousnesses (restriction from accessing other's consciousness unmediated) isn't enough to ground for individuation

If you go down this line of reasoning, your future self is separate from your past self; they don't share a bound experience either. It's just that your future self has a bunch of memories that it puts together into a story to relate it to your past self. Most common-sense ethics still tries to reduce suffering for a future self; why is this any different from claiming that you should help others?

> I also don't see the meaning of calling it the "same" consciousness if it doesn't have a single unified experience (solipsistic).

I mean that all consciousness is the same in the sense that two different electrons are the same: all consciousness has the same ontological properties. So if you buy that your own suffering is "real", then so is someone else's.

1

Thedeaththatlives t1_iwl5plb wrote

This whole thing really just seems like a big argument from incredulity. I can very easily imagine a moral system that doesn't care about suffering at all.

1

breadandbuttercreek t1_iwnmoqr wrote

This is a very human-centred argument. You can make the world a lot better without making the experience of any single human any better. Take things like saving the sea floor from trawling, or protecting habitat in remote places: there are a lot of important things we need to do that won't help any individual humans very much, or even groups of humans. I want to live in a better world, which doesn't necessarily mean better (or worse) for any particular humans. I can pick up some litter in a forest; that is an inconvenience for me and doesn't help anyone, but it does make the world a better place.

1

Squark09 OP t1_iwztp1c wrote

I believe most living beings probably have some form of consciousness, so it isn't human-centred to me.

1