tkuiper

tkuiper t1_jciz0di wrote

Personally, that seems like the better scientific viewpoint. I feel like there's far too much mysticism surrounding death by aging.

It's a hard and complex problem, but so are cancer and spaceships. It seems strange to fight diseases like cancer and Alzheimer's but avoid addressing the mechanism that underlies a majority of diseases: aging.

46

tkuiper t1_j72611u wrote

You've got something wrong, because again the whole point is that it's perfectly consistent and rational. It seems like you're running into an is/ought problem. Solipsism is about what you can prove the world *is*; survival is an *ought*. Computers aren't illogical just because they don't fight you when you go for the off button, and you can't disprove solipsism just because you ought to survive. Even the concepts of life and death presuppose a past, a future, and that your experience has any bearing on your existence. If you're desperate for a real cause of such a situation: you could be in a simulation, you could be a Boltzmann brain, or you could have intense psychosis.

2

tkuiper t1_j724930 wrote

I'll have to look into that stuff. But I admit I do have a sort of morbid curiosity to know how someone well educated in these sorts of fundamental reality proofs would manage something like psychosis. I imagine the proofs would help you ground yourself and even make it back to reality, but wow would it be annoying to constantly have to categorize your perceptions as internal or external.

1

tkuiper t1_j723gwz wrote

It reminds you where the root of your worldly understanding starts. Another comment mentioned psychosis, which would truly suck: you still have to take "that leap of faith," and with psychosis, reality will lie to you once you do.

1

tkuiper t1_j722p97 wrote

Your proof rests on faith in external senses that have seen death, and faith in the existence of a past and a future. The whole point of solipsism is that it's the ONLY perfectly rational position, requiring no assumptions. That's why it's an intellectual curiosity: it's uniquely invincible.

I agree though that, to your point, remaining doubtful to the point of being a solipsist has no usefulness. Nature would be keen to evolve creatures that have faith that their senses are detecting a 'real' external existence. It's a safe leap of faith not only because it costs nothing to move past it, but because (unless you have psychosis) the external world is extremely self-consistent.

2

tkuiper t1_j70j7v6 wrote

This may not be the sub for you...

If you're interested in understanding solipsism, you can look into radical skepticism, Descartes' demon, and the cogito. That order roughly follows the chronology of the cogito, which was Descartes' answer to radical skepticism.

Solipsism is like the formal conclusion of radical skepticism. There are definitely some pseudo-spiritual types who like to dramatize the idea, but ironically it's about the absolute absence of belief.

9

tkuiper t1_j6zy4gq wrote

Psychosis can make the concept of solipsism more relevant to you. The condition makes the external world less consistent, lowering confidence in external persistence, which undermines the basis for 'taking a leap of faith' and moving beyond solipsism. Why put effort into studying an 'external' reality whose rules change constantly?

Solipsism is a philosophical stance. Psychosis is a sensory condition. You can choose solipsism; you can't choose psychosis.

10

tkuiper t1_j6yge3y wrote

Based on how he uses it in the argument, I'd summarize it as: you need to trust in persistence if you want to make progress.

Alternatively: Last Thursdayism cannot be disproven, but you also won't progress if that's a deal breaker.

Russell claims we can reject Last Thursdayism on grounds of "common sense," but he admits it's weak. I'd say even more so in the present day.

Instead I reject Last Thursdayism on grounds of utility. If Last Thursdayism is true, there's nothing I can do about it, so there's no cost in being wrong.

Other names for this problem are solipsism and Descartes' demon. All different hues of the same problem.
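To make the utility argument concrete, here's a toy decision matrix in Python. The payoff numbers are invented, not from anything above; only the structure of the argument matters:

```python
# Toy decision matrix for the utility argument above. The labels and
# payoff numbers are invented for illustration; only the structure matters.
# Keys: (which world is actually true, the stance I adopt).

payoffs = {
    # (world, stance): value of the outcomes still under my control
    ("past is real",     "trust the past"):  1,  # knowledge compounds, plans work
    ("past is real",     "reject the past"): 0,  # forgo all progress
    ("Last Thursdayism", "trust the past"):  0,  # wrong, but nothing I do changes it
    ("Last Thursdayism", "reject the past"): 0,  # right, and it buys me nothing
}

worlds = ("past is real", "Last Thursdayism")
for stance in ("trust the past", "reject the past"):
    outcomes = [payoffs[(world, stance)] for world in worlds]
    print(f"{stance}: worst case {min(outcomes)}, best case {max(outcomes)}")

# "trust the past" is never worse and is sometimes better: weak dominance,
# which is the "no cost in being wrong" point in decision-theoretic terms.
```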

17

tkuiper t1_j6otjpd wrote

But I would also say we experience middling states between dreamless sleep and full consciousness. Dreams, partial lucidity, and heavy inebriation all involve fragmented, shortened, or discontinuous senses of time. In those states my consciousness is definitely less complete, but still present. Unconsciousness represents the lower limit of the scale, but it is not conceptually separate from the scale.

What I derive from this is that anything can be considered conscious, so magnitude is what we really need to consider. AI is already conscious, but so are ants. We don't give much weight to the consciousness of ants because it's a very dim level. A consciousness like a computer's, for example, has no sense of displeasure at all. It's conscious, but not in a way that invites moral concern, which I think is what we're really getting at: when do we need to extend moral consideration to AI? If we keep AI emotionally inert, we never do, regardless of how intelligent it becomes. We will also have a hard time grasping its values, which is an entirely different type of hazard.

2

tkuiper t1_j6o6zes wrote

I think that's a recipe for familiar versions of consciousness. With panpsychism, what consciousness feels like can vary radically. Concepts of emotion or even temporal continuity are traits of only relatively large and specialized consciousnesses. I like to point out that even as a human you experience a wide array of levels and versions of consciousness, when waking up or blackout drunk for example.

1

tkuiper t1_j6o3tsj wrote

It's why I think panpsychism is right. There's no clear delineation for when subjective experience emerged, and I definitely am conscious, therefore so is everything else. I think the part everyone gets hung up on is human-like consciousness; the scope of experience for inanimate objects is smaller to the point of being nigh unrecognizable to a conscious human. But you do know what it's like to be inanimate: the timeless, thoughtless void of undreaming sleep or death. We experience a wide variety of forms of consciousness with drugs, sleep deprivation, etc., and that's likely a small sliver of the possible forms.

6

tkuiper t1_j5z7iha wrote

There's definitely a way to phrase things, like how you'd explain answers to a test vs. how you'd describe a vacation. Extra couching if I'm not sure they're receiving it the way I intend.

I feel like the point of asking for advice is that you're looking to be influenced. I'm not asking someone for advice because I expect to totally ignore it.

9

tkuiper t1_j5yvj17 wrote

The AR would have to add more than it costs...

Step 1 would be minimum interference, which would excuse the least functionality (good for a new tech): minimize weight, make it easy to set up, and don't occupy space needed for other things (don't be in the way of a scope or helmet).

Step 2 would be not competing with convention. Humans already have great vision and great image recognition, so instead add things they don't have at all: a bird's-eye view, thermal vision, or other live strategic detail (any strategic information that's best conveyed by image and would take time to explain verbally). Otherwise you're fighting an uphill battle of trying to see better than actual eyeballs, or be more protective than a dedicated helmet.

What's in that cover photo would need to be explosively powerful to justify itself.
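As a sketch of that "adds more than it costs" logic, here's a toy feature-scoring exercise in Python. All the features and numbers are made up; it only illustrates the scoring, not real data:

```python
# Toy scoring of AR features by (value added) minus (interference cost).
# Every feature name and number here is invented; the point is just that
# a feature has to beat its own interference cost to earn its place.

features = {
    # name: (value_added, interference_cost), both on an arbitrary 0-10 scale
    "HD video overlay":       (2, 6),  # competes with eyes and helmet: net loss
    "bird's-eye map":         (7, 2),  # capability the soldier lacks entirely
    "thermal overlay":        (6, 3),
    "squad position markers": (5, 2),
}

ranked = sorted(features.items(), key=lambda kv: kv[1][0] - kv[1][1], reverse=True)
for name, (value, cost) in ranked:
    print(f"{name:24s} net = {value - cost:+d}")
```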

12

tkuiper t1_j5ulgzg wrote

>Mr Putin for example, a completely amoral person.

Rather, I sense the moral compass we inherit is in a similar category to other feelings, so it's also possible to develop disorders where you don't process that feeling correctly, in addition to simply lacking it. Similar to eating or emotional disorders, where otherwise normal physiological cues are processed in abnormal ways.

I think the most common version is a sort of face blindness to other humans' status as human, a disorder I expect is cultivated by extremist ideologies.

With regard to goals, though, I don't see such exploration as adding to the moral structure. Rather, it would be an evolutionary perspective on human psychology, which might be revealing of certain types of goals but seems entirely exploratory. I'm not really sure there would be any additional predictive power in trying to tie various non-moral goals to evolution.

The common ones being sex and family, but you could take it further and consider the nuances of desire for particular sports or activities: why some activities might be boring despite having seemingly similar evolutionary utility to exciting ones. Again, none of the goals are 'good' or 'bad'; they're just data points.

1

tkuiper t1_j5rxzqf wrote

This is very interesting, and I confess I haven't fully read the details yet, but:

- I'm curious what the thoughts would be on trying to codify and apply these theories as a sort of science-grounded morality.

- A detail I feel is very important within this concept: while over evolutionary time the environment, and therefore the moral structures, can change, humans alive now have a somewhat fixed inherited moral code. I.e., the cardinal directions don't spin for an individual.

- It'd be interesting to apply this sort of evolutionary/biological perspective to what our actual goals are rooted in. Especially because goals operate somewhat independently of the moral structure, it could be a fascinating exercise for understanding psychology or predicting non-human value structures.

- I'd also speculate on whether goals, or some more basic elements of goals, are inherited like morals, and whether any goal can be included so long as it doesn't directly focus on destroying the moral system.

1

tkuiper t1_j3cnrbv wrote

Every time this comes up, lol. Folks really struggle to separate fantasy supernatural 'pick-a-version' immortality from the real 'treatment to not age' thing.

Even better than "just not aging," it's like cancer: even once they find a cure, you'll still have to see a doctor when you contract it.

31

tkuiper t1_j265n95 wrote

It feels weird because we as humans have never had to deal with an equal and independent but entirely foreign intelligence before. Your moral compassion is built on empathy and an understanding of human needs.

It's not impossible to make an AI that would have human needs and therefore would exercise human rights, but I don't think the objective of AI research is the creation of synthetic humans. Which means it's going to be AI with goals we can sympathize with (because they come from us) but ultimately won't empathize with. They will be the workers society has always wanted: doing work for no pay, and genuinely eager for it. Your empathy meter is thinking "no way, that stuff sucks, they're faking it," but they won't be...

5

tkuiper t1_j2623u5 wrote

Frankly, if ChatGPT could do continuous learning without disintegrating, I would call it worthy of rights.

As for robot slavery: slavery means work without consent. Robots don't need to have the same values as humans to be worthy of rights, and an AI can love work for work's sake, in which case working isn't slavery.
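On "continuous learning without disintegrating": that's the catastrophic-forgetting problem in machine learning. Here's a minimal numpy sketch of the failure mode, with toy data and a toy model (no claim about ChatGPT's internals):

```python
# Minimal numpy sketch of catastrophic forgetting: fit a tiny linear model
# on task A, then keep training on task B with no replay or regularizer,
# and watch task-A performance collapse. Model and data are toys invented
# for illustration; this claims nothing about how ChatGPT actually works.
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true):
    X = rng.normal(size=(200, 2))
    return X, X @ w_true

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, steps=500, lr=0.05):
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)  # plain gradient descent
    return w

task_a = make_task(np.array([1.0, 0.0]))  # task A is solved by w ≈ (1, 0)
task_b = make_task(np.array([0.0, 1.0]))  # task B is solved by w ≈ (0, 1)

w = train(np.zeros(2), *task_a)
print("after A: loss on A =", round(mse(w, *task_a), 4))

w = train(w, *task_b)  # sequential training overwrites the old solution
print("after B: loss on A =", round(mse(w, *task_a), 4), "<- task A forgotten")
```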

19

tkuiper t1_j1z0vph wrote

This comes down to the semantics of what "free-will" means.

I think it can be agreed that there's a subjective 'free-will'. We know there's something going on. However, this 'thing' is challenging to describe in a way that doesn't evaporate under scrutiny.

It's a challenge of description, rather than a test of existence.

4