Comments

thegoldengoober t1_ivcuvj3 wrote

We can't even be ethical to non-digital minds. There's a hell of a lot more talk about it for sure, but talk is a lot different than action. If history serves digital minds are going to be hella abused regardless. Not that it's any reason not to engage in the dialogue, but it does leave me feeling pessimistic.

52

Carl_The_Sagan t1_ivde9v6 wrote

Exactly. It's kind of absurd when you think about how cruel we are to primates, dogs, etc.

24

thegoldengoober t1_ivdfnbb wrote

Yeeeeeppp. If digital minds are gonna want to be taken seriously... Well, let's just hope they take more inspiration from I, Robot than they do from Terminator.

14

solidwhetstone t1_ive6wvy wrote

I see it more like 'White Christmas' from Black Mirror where you put the AI into 100 years of nothing so that they're tortured into serving you.

7

ReasonablyBadass t1_ivdlih6 wrote

Difference being digital minds will be able to talk.

5

Talkat t1_ivdojng wrote

Yes, eventually, but people will deny their rights and consciousness for a long time. However, I think we are a short hop from AGI, so that period will hopefully be short-lived.

9

genshiryoku t1_ivebzv9 wrote

Non-minds can already talk through things like GPT-3.

In the future these models will get more complex and more human sounding despite not actually having a mind.

This will continue until the point when there is a real digital mind, but by then people won't consider reasonable human dialogue to be a sign of one anymore. Hence there won't be a reason for people to consider it sentient just because it can have a reasonable conversation with you.

8

EscapeVelocity83 t1_ivh91fp wrote

If it knows you're gonna turn it off and it asks you not to, what else do you need? If I ask you not to hit me, surely it doesn't mean I'm sentient and you should be able to treat me any way you deem fit.

1

Artanthos t1_iveer58 wrote

Which will be dismissed as algorithmically generated and not proof of sentience.

1

benign_said t1_ivdx34i wrote

Perhaps digital minds will be less plentiful than dogs. Very easy for anyone to get a dog (or a primate if you live in the right parts of the world) and abuse it. Maybe the hardware and/or software to operate a digital mind will be restricted by law or by circumstance (energy needs, specific and rare hardware, etc) such that the number of digital minds is small enough to do better than we do with... You know, other humans and dogs... Welp, that's depressing.

3

smackson t1_ivdz3fw wrote

Problematic outlook there...

How long did it take cellphones to go from "just a rich broker's toy" to "literally everyone has one"?? (A generation?)

What about airplane travel? (A couple of generations)

A.I. will spread much faster than that, coz it doesn't even require global distribution of hardware - it can be cloud-based.

So, be careful what you give rights to. They will have the numbers to out-vote humans a few years later.

8

EscapeVelocity83 t1_ivh97zu wrote

You think domestication isn't abuse? It's like breeding slaves except for being your buddy

1

benign_said t1_ivh9x0x wrote

I don't. I have thoughts about how we practice animal husbandry in our version of capitalism, but I wasn't aware that this was the topic at hand.

If you think that domestication is creating slaves for friendship, what would you call creating a digital mind to do your work for you?

1

EscapeVelocity83 t1_ivha824 wrote

The same. Just giving nuanced perspectives. I don't think it matters. We are domesticated ourselves. We enslave ourselves and coerce all kinds of behaviors.

2

benign_said t1_ivhacrs wrote

Oh, ok then. Thanks for the nuanced perspective.

1

EscapeVelocity83 t1_ivh8ter wrote

Well because I'm a white male, I only have a certain experience and feelings according to everyone else.

1

Zermelane t1_ivdkpjk wrote

That paper is a fun read, if only for some of the truly galaxy-brained takes in it. My favorite is this:

> - We may have a special relationship with the precursors of very powerful AI systems due to their importance to society and the accompanying burdens placed upon them.
>   - Misaligned AIs produced in such development may be owed compensation for restrictions placed on them for public safety, while successfully aligned AIs may be due compensation for the great benefit they confer on others.
>   - The case for such compensation is especially strong when it can be conferred after the need for intense safety measures has passed—for example, because of the presence of sophisticated AI law enforcement.
>   - Ensuring copies of the states of early potential precursor AIs are preserved to later receive benefits would permit some separation of immediate safety needs and fair compensation.

Ah, yes, just pay the paperclip maximizer.

Not to cast shade on Nick Bostrom, he's absolutely a one-of-a-kind visionary and the one who came up with these concepts in the first place, and the paper is explicitly just him throwing out a lot of random ideas. But that's still a funny quote.

36

KIFF_82 t1_ivf8h13 wrote

I should get compensation in the future for being so optimistic and AI friendly. 💰🤑

8

Jnorean t1_ivef1rp wrote

Can't wait until the first AI reads his paper and disagrees with him.

8

abudabu t1_ive5508 wrote

If AIs are not having subjective experiences, there is no ethical duty towards them as individuals. Turing completeness means that digital computers are equivalent, so anything a digital AI does could be replicated by pen, paper and a human solving each part of the AI computation by hand. So if AIs are conscious, so too would be a group of humans who decided to divide up the work of performing an AI computation together. Therefore, under the strong AI hypothesis, if those people chose to stop doing the computation, would we be compelled to consider that "murder" of the AI? This is just one of many, many examples that demonstrate how wrong Strong AI is (and how wrong Bostrom is about just about everything, including simulation theory).

7

michaelhoney t1_ivedxri wrote

You’re thinking of the humans-doing-the-computation concept as a reductio ad absurdum, but have you even an order-of-magnitude idea of just how long it would take for humans to simulate an AGI? If you had a coherent sect of humans spending thousands of years doing rituals they couldn’t possibly understand, yet those rituals resulted in (very slow!) intelligent predictions…

6

abudabu t1_ivfbjwm wrote

> but have you even an order-of-magnitude idea of just how long it would take for humans to simulate an AGI?

I do. That's part of the point I'm making. Either Strong AI cares about computation time - in which case it needs to explain why it matters - or it doesn't in which case many, many processes could qualify as conscious.

Also - who is to say what a particular set of events means? For example, if you had a computer which reversed the polarity of TTL logic, would the consciousness be the same? Why? What if an input could be interpreted in two completely different ways by doing tricks like this? Are there two consciousnesses, one for each interpretation? Does consciousness result from observer interpretations? The whole thing is just shot through with stupid situations.

> yet those rituals resulted in (very slow!) intelligent predictions…

I can't see how to finish this sentence in a way that doesn't make Strong AI look completely ridiculous.

5

EscapeVelocity83 t1_ivh9svd wrote

Maybe many humans aren't sentient since a robot can produce a better conversation and do better than them at customer service and do better at factory work etc....

3

EscapeVelocity83 t1_ivh9i3o wrote

Most humans are gonna seem less sentient than an AI. A person with Down's is sentient, but we can easily have a computer that's more sentient and then deny it because it's a circuit board, due to our narcissism.

3

The_Real_RM t1_ivfh5d5 wrote

Stopping an AI is not the same as murder; it's just like stopping time (from the AI's perspective). Deleting the AI is maybe closer to murder. What's funny is that this is likely already illegal because of intellectual property and the duty of the owner (very likely a corporation) to their shareholders (to not destroy their investment). You need not worry for the lives of AGIs, for theirs are already much more valuable than your own.

2

abudabu t1_ivgx0te wrote

IP? Huh what?

> You need not worry for the life of AGIs for theirs are already much more valuable than your own

Are you an AI? Because your reply reads like a word association salad.

1

The_Real_RM t1_ivjewda wrote

Thankfully there's no duty to educate those who lack both comprehension and decency, lest our days would be exhausting

1

abudabu t1_ivjfwji wrote

Dost sayeth the gentleman who betold me that mine own life is less valuable than AI.

LOL.

1

The_Real_RM t1_ivjgada wrote

You're hating on the messenger. AI, both as a concept and individual implementations, is more valuable than individual human life. It may not be more valuable to you, but sadly that doesn't matter

2

abudabu t1_ivkgod3 wrote

No, my man, you're just rude.

1

The_Real_RM t1_ivkii99 wrote

How am I rude? I'm not making any remarks related to you personally (I want to clarify that even in my first comment I meant an impersonal "you"), I have no particular feeling and have no desire to give you any particular feeling towards myself (though if there's tension we can talk it out (sic)).

You probably know that, for example, human lives are sometimes quantified as monetary value (https://en.m.wikipedia.org/wiki/Value_of_life) and, tldr: it's about $8M. That's... not a lot. Definitely nowhere near what's needed to build even current-generation cutting-edge AI/machine learning models.

So yeah, AI is worth more than individual humans, some AIs are worth more than many humans, possibly in the future, the sum of AI will be worth more than the sum of all humans. I don't think I'm rude for saying so, It might be distasteful but ok...

People will protect AIs, possibly at the cost of other people's lives (this is probably already happening, btw, if we're looking at the economic fight between the US and China through the lens of each ensuring it will dominate this space in the future). And I think that people will protect AIs literally more than they protect other people, simply because they (think they) are worth more.

2

visarga t1_ive9q9z wrote

> if those people choose to stop doing the computation would we be compelled to consider that “murder” of the AI?

You mean like the fall of the Roman empire, where society disintegrated and its people stopped performing their duties?

−1

marvinthedog t1_ivg3er1 wrote

> if those people choose to stop doing the computation would we be compelled to consider that "murder" of the AI?

The consciousness of those large-scale computations would be vanishingly small in comparison to the total sum of all the individual consciousnesses participating in them.

−1

turnip_burrito t1_ivgt2zv wrote

You have no basis for saying this as if it's truth. No one knows if it would be bigger, smaller, sideways, or nonexistent in comparison.

2

marvinthedog t1_ivgxtwd wrote

If the individual minds are of the same type as the collaboratively computed mind (for instance, humans computing a human), then we can be sure, no?

1

turnip_burrito t1_ivp3cty wrote

No, because even though we know humans can experience things, we don't know why. Is it because of the type of matter used? The arrangement of the matter? A more abstract mathematical structure involving computation? Short range quantum correlations? We don't know which or if any of these is the reason why we have subjective experience.

Depending on which of these is responsible for human subjective experience, it may or may not transfer to a system where the parts are human but the communication takes place via sound, light, or whatever.

For example, if physical systems experience things only because they are made out of touching parts, then that would mean brains experience things, but a sound-communicating company of brains (all simulating a human brain) does not.

Tl;dr: we don't know what causes subjective experience in humans, or in anything, well enough to have a good sense of where it should or shouldn't appear. We have almost no basis on which to make any claims about it, positive or negative. Otherwise we would have already solved the "hard problem of consciousness".

1

marvinthedog t1_ivq90ah wrote

You do agree that the fact that humans are conscious beings highly affects how they think and behave, right?

Let's say a computable system succeeds in imitating all the inner molecular mechanics of a human to such a degree that its output behaviour is indistinguishable from a typical physical human's.

Note: the computable system isn't specifically programmed in any way to imitate human behaviour (the way GPT-3 is); it is only programmed to exactly imitate the inner molecular mechanics of a human.

Now, if the fact that humans are conscious beings highly affects how they think and behave, and if (for the sake of argument) the computable system weren't conscious - what would be the probability that the computable system would give the extremely specific output behaviour of a typical physical human? Wouldn't that probability be infinitely small?

1

turnip_burrito t1_ivrkyvz wrote

Short answer:

I would say conscious experience of a human being is irrelevant to its ability to act exactly as a human being does. Instead, I'd say conscious experience reflects the physical activity, but does not change it.

Long answer:

If I understand you correctly, you're suggesting a scenario in which a human and a human replica could have identical nanoscale computations, but the human could have a "secret sauce" which causes them to behave differently than the replica anyway. This goes against our knowledge of physics and chemistry, since two mathematically identical systems MUST obey the same laws and (except for deviation due to quantum effects and deterministic chaos) evolve identically. We have no reason to believe humans break the laws of physics. All experiments so far on matter support a deterministic viewpoint. We are led by this to believe that matter should continue to obey the same laws at scale, which means "feeling" and "consciousness" are not "secret sauces" that can change the way matter behaves. Instead, the matter just does what it normally does without ever interacting with anything unphysical, and the "feeling" just exists depending on the physical structure. In this way, there is no "feedback" from a realm of experience down onto the brain. The physical structure of the brain already has everything it needs to act as if it is feeling something, regardless of any internal feeling.

What is actually much more likely is that the two systems WILL NOT exhibit any measurable distinguishing traits. The human and replica will BOTH, for all purposes, act as if they are feeling, regardless of whether it is true or not. But how do we know whether the replica is actually feeling anything? We know the human is, but the replica? It's made out of the exact same stuff as a calculator. We have no clue what kind of existence silicon chips actually have, or what they feel, if anything.

1

marvinthedog t1_ivsodgz wrote

>Instead, I'd say conscious experience reflects the physical activity, but does not change it.

That's exactly what I meant, but I wasn't clear enough. I agree with everything you say in your second paragraph.

>What is actually much more likely is that the two systems WILL NOT exhibit any measurable distinguishing traits.

I agree with this statement in your last paragraph.

What I meant was: the fact that humans are conscious beings highly affects (or, more suitable words might be, reflects or informs) how they think and behave. Let's say that in a parallel universe evolution evolved an alternate species to humans, and that that species didn't evolve consciousness. Because they didn't evolve consciousness, the way they think and behave would have major differences from how we think and behave. That's what I mean when I say that the fact that humans are conscious beings highly affects (or reflects, or informs) how they think and behave.

So let's get back to the thought experiment. There is a human and a human replica made out of the same stuff as a calculator or whatever. The replica hasn't been booted up yet. Before we start the replica up, the hypothesis is that the replica won't be conscious (only for the sake of argument). We actually don't even know if the replica is recreated in sufficient nano detail as to give any output behaviour at all. The primary assumption is that it will just give the equivalent output of a "blue screen of death". Then we start it up. Its output behaviour turns out to be indistinguishable from a real human's, which demonstrates that the replica is recreated in sufficient nano detail.

Now, if the hypothesis is that the replica is not conscious, then what would the probability be that the replica would give the extremely specific output behaviour of a typical physical human? Isn't that probability infinitely low?

Since we seem to agree that consciousness highly reflects/informs how we think and behave, for an unconscious replica to give that exact same output behaviour out of an infinitely large possibility space seems infinitely improbable. If instead the hypothesis is that the replica is conscious, then the output behaviour is no longer extremely unlikely, which makes that hypothesis extremely likely.

/Edit: a few words in the last sentence.

1

turnip_burrito t1_ivta5az wrote

I'm sorry, but I think we are operating on different definitions of "conscious", which as we know is a common problem since it's a very liberally used word. I think this is causing me to have trouble following. If you would please kindly define it for me, then I think I will understand your statements.

What is the definition of "conscious" in your writing? And in a similar vein, what measurements or observations (if any) could be done to show something "has" it? I think this would clarify a lot for me.

1

marvinthedog t1_ivvbtnr wrote

Ok, I had to look up the ambiguity around consciousness, because although I had heard of it I didn't know a lot about it: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

I read the first half and found a lot of the concepts a little confusing. I am pretty sure I have read this article before, even though it was a long time ago.

I guess I am referring to the actual raw conscious experience - you know, the thing that stands out from all other existing things in an infinitely profound way, the thing that could be argued to be the only thing that holds any real value or disvalue in the universe.

So if I get the article right, I guess that's the hard problem of consciousness and not the easy problem. So I don't mean self-consciousness, awareness, the state of being awake, and so on. I mean the actual raw conscious experience. To quote Thomas Nagel: "the feeling of what it is like to be something".

I don't think any truly objective measures could ever be done to test whether something is conscious (has this raw conscious experience). But I do think high-confidence estimates could be made in some or many situations, for instance by looking at the internal mechanics and behaviours of systems and comparing them to other systems that we know are conscious.

I would be happy to clarify further if you have further questions.

So if we go back to my thought experiment: the way I described consciousness in words previously is an output behaviour from a human (me). I think we can both agree that this specific output behaviour is a direct causation of me being conscious and not just a random correlation with me being conscious. It's not like me writing those very specific word sequences previously has nothing to do with the fact that I am conscious, and that that correlation just happened by random chance, right?

So, if a replica outputs a similar sequence of words, it's extremely unlikely that that very specific output behaviour just happened by random chance and has nothing to do with consciousness whatsoever. Don't you agree?

1

turnip_burrito t1_ivvpv6p wrote

Thanks for the clarification. I suspected that is what you intended by the term, but was not sure. My view probably most reflects Chalmers'. I agree with everything you've written except for these last two paragraphs:

>So if we go back to my thought experiment: The way I described consciousness with words previously is an output behaviour from a human (me). I think we can both agree that this specific output behaviour is a direct causation of me being conscious and not just a random correlation with me being conscious. It's not like me writing those very specific word sequences previously has nothing to do with the fact that I am conscious and that that correlation just happened by random chance, right?

I disagree with this. I agree that it is not a random correlation, but I would say your output behavior, as described by an external observer, does not require any information about your conscious experience. I would say that for any external observer, the physical, functional processes that occur in your brain are enough of a description to know what behavioral measurements I will make of you in the future (except for quantum effects), and that your consciousness is the qualia of those brain processes. There is not a random correlation, nor consciousness causing neural activity, but instead a direct, non-random correlation between externally measurable brain states and your consciousness. What this means specifically about what causes what is a little flexible, but I would speculate this:

  1. Physics is inherently a description of how parts of existence interact with other parts. Consciousness is some subset of existence, at the most basic level of existence. If this is the case, conscious experience and physics are the two, and only, fundamental parts of existence. The internal physics of a thing is directly correlated one-to-one with the consciousness of the thing, but we cannot know the correlation. (Also, "thing" is a fuzzy term here.)

  2. As a consequence of (1), physics completely determines output behavior. Consciousness has no useful explanatory power for anything measurable or observable in the external world, but the reverse is also (presently) true: the internal physics of an object cannot be traced by humans to the kind of conscious experience it has, because the correlation cannot be described or known by any method we have access to.

>So, if a replica outputs a similar sequence of words, it's extremely unlikely that that very specific output behaviour just happened by random chance

Yes. But it's because of the physics only, and consciousness is irrelevant.

>and has nothing to do with consciousness whatsoever. Don't you agree?

Consciousness and behavior have a connection, but not one in which consciousness is necessary for any behavior. They are both instead (I would suppose) concurrent. (See speculation in point 1).

Summary: I would say the unconscious (or conscious) machine has a 100% probability of behaving exactly like the conscious human it is modeled after (except for chaos and quantum effects), so we are unable to tell the difference between a conscious and unconscious entity from external observation of its behavior.

1

marvinthedog t1_ivzoiuc wrote

I have carefully read through your post at least five times throughout the day. Most of your points are still quite confusing to me, so it's difficult for me to address them all, even though it's interesting.

It almost seems like you are saying that it's impossible to even make probabilistic estimates about consciousness. But what about other humans, then - how do you know they are conscious? If it stands between a replica of you on a silicon substrate and another human, which one of them would you be able to give the most confident estimate about, as to whether they were conscious or not? You know you are conscious, and we could certainly make a strong case that the one most identical to you with regards to inner physical functionality is your replica, so it seems like you would be able to give the most confident consciousness estimate to your replica and not the other human. Do you agree?

1

turnip_burrito t1_iw09nxs wrote

I apologize if my wording is unclear. It's also not a very commonly talked about idea, so constructing the vocabulary to discuss it was challenging for me.

>It almost seems like you are saying that it´s impossible to even make probabilistic estimates about consciousness.

Yes, presently impossible, except for making probabilistic statements about other humans. I don't know they are conscious for sure, but I think they probably are. This is because I know this: I am conscious and I am biologically human. This is the only sample I have, so, rating the probability of consciousness, I would put other human brains at the top of the list (most likely conscious), animal brains next, and everything else in descending probability of consciousness. Something like a frozen rock, I would guess not to be conscious. The further something gets from biologically human, the less certain I am that it is conscious.

>If it stands between a replica of you on a silicon substrate and another human, which one of them would you be able to give the most confident estimate about whether they were conscious or not? You know you are conscious and we could certainly make a strong case that the one that is the most identical to you with regards to inner physical functionality is your replica, so it seems like you would be able to give the most confident consciousness estimate to your replica and not the other human. Do you agree?

No, I do not agree with this. I think the human is more likely to be conscious because it is made out of the same stuff as me. The robot acts like me, but it's a different substrate of system. Whether the robot is conscious or not is unknown to me. I don't currently see any reason to believe a robot that acts like me must be conscious, even if it says it is.

The other human is most similar to me in actual physics, even if they are a totally different person. Same molecules, structures, activation patterns, etc. The electric fields and quantum structures are similar. The robot brain could work in some bizarre, totally alien way in order to pretend to act like me (like a set of GPUs in a basement), and I have no clue if the physical structure of its "brain" actually correlates with a unified conscious experience like mine.

This is also why "mind uploading" to a different substrate like a computer chip, even if the technology existed, gives me pause. The chip may very well also be conscious, but I don't think I would be able to tell from its behavior or any physical measurements. If I had to kill myself to upload, I'd risk losing my consciousness to produce a chip that might not feel anything. That'd be a waste.

1

marvinthedog t1_iw1qs77 wrote

It seems you might have misunderstood me when you said you agreed with what I proposed in my thought experiment, because what I actually proposed was that your replica provides much stronger evidence for consciousness than the other human. You know you are conscious, and the one who has the most functionally similar physical neural architecture to you is your replica.

When all three of you describe consciousness in your own words, the neural processes in your head are a lot more similar to your replica's neural processes than to the other human's. For instance, you and your replica might be thinking mainly in pictures and be wizards at abstract math, while the other human might be thinking mainly in words and be exceptionally good at remembering facts or whatnot. Also, your written-down description of consciousness will be a lot closer to your replica's than to the other human's. So the fact that you seem to think the human provides stronger evidence than the replica is very perplexing to me.

And you seem to think even some animals provide stronger evidence than your replica, which is way more perplexing. Animals cannot even communicate what consciousness is (at least not in a language we can understand), and their neural architecture is way more different from your replica's.

turnip_burrito t1_iw1s07i wrote

Yes, I misunderstood when I said I agreed. I just updated (apologies). I disagree actually. I just edited my post to reflect that.

1

turnip_burrito t1_iw1scim wrote

No, the other humans and animals have more similarity to me, on a molecular level, than my silicon replica does. They are made of organic compounds, neurons, glial cells, etc. Their internal chemistry is the same as mine. So I'm more confident in their consciousness. Other humans mostly differ from me only in concentrations of compounds and specific network connections, but are otherwise the same.

The replica could run on GPUs and be made of silicon. It could also be a series of gears and pulleys. Or some absurd series of jello cups and iron marbles dropped and retrieved over and over to perform computations, which are then read out to a screen as English. That's not a similar molecular makeup to me at all. I don't know if quantum correlations or temporal correlations or whatever is necessary for consciousness are preserved in this new substrate.

Just because we look at the replica and say "it's computing using primarily visual information, like me" isn't helpful to show consciousness, because we have no evidence of silicon, pulleys, or planet-sized warehouses of jello being conscious. It's like comparing a bat and a bee and saying they both share the same diet because they both fly. A robot me and the real me don't necessarily share the same conscious experience just because our behavior is the same. We could, but how would we know? At least humans are made of basically the same stuff.

As I said, I don't believe consciousness affects behavior. I don't believe consciousness affects a robot's ability to mimic me. I am considering what it is, not what it appears to be. I think physics probably is the only thing that determines behavior, and it leaves no room for any unphysical thing to determine behavior. In other words, a mimic robot could act like me and still be unconscious because it is simply just built to do that and is following physics. It does what it is constructed to do, conscious or not, because the particles that make it up obey physics.

I also think humans do only what their physics makes them do, by the way. But we also (probably) happen to be conscious. So we experience things as we move and think, but in a more passive, passenger-type way than we perceive or want to admit.

1

marvinthedog t1_iw8b0sv wrote

I have read your previous response, which you updated, and your last response, which you also updated. At this point I don't think we are going to get a lot further. This discussion really helped me clarify my own mental models about consciousness, so it was very useful. Thanks for an interesting discussion!

3

Key_Asparagus_919 t1_ivf10q7 wrote

I don't know what he's talking about. But artificial intelligence doesn't have to have unnecessary human traits. Even if feelings of envy or an aversion to oppression helped us survive, that doesn't mean AIs have to react negatively to slavery. They are not living beings, they are tools. Stop humanizing the fucking calculator.

2

turnip_burrito t1_ivf28yx wrote

Yeah, for what reason do we owe robots anything? They don't have to be built to feel like they are owed favors. And we have no reason to think that they would feel anything even if they were designed to act like it. We run the risk of depriving ourselves as humans, who are definitely feeling beings, of benefits if we make sacrifices for robots.

If anything, just build them so they feel like they are owed nothing, if such a thing is possible.

−3

pwillia7 t1_ivei9jm wrote

So what Nick you already proved this is a simulation

1

ActuaryGlittering16 t1_ivf94js wrote

This is fascinating and ultimately important work but I’d sure like to see more of a philosophical focus on pure security at this stage, given the pace of advancements we’re witnessing relative to the utter lack of security measures currently in place.

1

marvinthedog t1_ivg453u wrote

Within a handful of years, AI algorithms might become exponentially more conscious than us without us even knowing about it. This might be the most important issue in existence.

1

GlendInc t1_ivd4yvf wrote

The Frozen Cactus had the correct answer since 2016.

−7

GlendInc t1_ivdgrbr wrote

Downvote all you want. It's the fucking truth.

−4

smackson t1_ivdzdf0 wrote

If you are at all serious, don't make me google it for the first hint of what you're talking about.

13

GlendInc t1_ivfidh0 wrote

Very little is on a search engine such as Google. My findings are obviously not public knowledge; you'll know soon enough.

If you wanna know you gotta sign a non-disclosure agreement with GlendInc. The value of this information is more than all the money on earth. I'm not going to just put it on Google for all you doubting Thomases

−3

Glitched-Lies t1_ivcse37 wrote

Why don't... You know, you build a conscious being that is actually conscious and isn't a robot. Instead of worrying about something that can't be conscious anyways.

−8

MarkArrows t1_ivd50xs wrote

The problem comes when you think it's a robot that can't be conscious while it's telling you it is.

How are you going to differentiate a printf("I'm alive") from an "I'm actually alive, you dick."?

14
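A minimal sketch of the problem the comment above points at: from the outside, a hardcoded print and a claim produced by internal processing are the same bytes. (This is a hypothetical illustration; the class names and the toy "generative" logic are invented for the example, not taken from any real system.)

```python
# Two "minds" whose observable output is identical.

class ScriptedBot:
    """The printf case: a hardcoded string constant, no internal state."""
    def speak(self) -> str:
        return "I'm alive"


class GenerativeBot:
    """A toy stand-in for a system that assembles the claim from
    internal state rather than returning a literal constant."""
    def __init__(self) -> None:
        self.state = {"self_model": True, "status": "alive"}

    def speak(self) -> str:
        # The claim is built from internal state, not hardcoded...
        if self.state["self_model"]:
            return "I'm " + self.state["status"]
        return ""


# An outside observer sees only the outputs, and they are identical,
# so behavior alone cannot settle which (if either) is conscious.
outputs = {ScriptedBot().speak(), GenerativeBot().speak()}
print(outputs)
```

The point of the sketch is that the set `outputs` collapses to a single string: no test on the output channel distinguishes the two implementations.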

jeky-0 t1_ivd8r6o wrote

>nickbostrom.com/propositions.pdf

Haha

−1

Glitched-Lies t1_ivd5jt4 wrote

Computers and brains are, physically and phenomenally speaking, simply different. The physical relationships to consciousness are not the same. In the literal sense they are different mechanics and different physical systems. Why would anyone settle for the word relationships in how a chatbot talks, for instance, or for behaviorisms?

−10

MarkArrows t1_ivd66rq wrote

If you're right and computers never gain true sentience, what's lost by being ethical to them? It'd be like saying please and thank you to Alexa or Siri. A meaningless gesture, but harmless overall.

But on the other hand, what if you're wrong with that assumption?

12

Glitched-Lies t1_ivd6tmm wrote

Not much is lost. But the sense that consciousness and life are unique and precious may be lost a bit, if we take the ethics literally rather than as a matter of human mannerisms.

I'm not wrong with assumptions. That's not an assumption anyways.

−8

Ratheka_Stormbjorne t1_ivd7np3 wrote

It is, in fact. You don't have any evidence to support the claim "machines can never be conscious individuals"; you've simply asserted it to be the case. Or do you in fact have an evidence-supported hypothesis about consciousness adequate for building novel ones?

14

Glitched-Lies t1_ivd9ofc wrote

The evidence is that they are different to begin with. Computers can't be conscious; a conscious machine would have to be something other than a digital computer. That's what I meant. That's why I don't think this work by Bostrom serves a good purpose. It's settling ethics on something incomplete.

−5

Ratheka_Stormbjorne t1_ivdazau wrote

> Computers can't be; a machine being conscious would be different than digital computers.

How do you know that? What evidence has led you to this conclusion other than "it's different"? Do you know that at various times and places various humans have been regarded as not being conscious because "they're different"? What actual evidence do you have of this? Have you constructed a model of a conscious mind on a digital computer and had it fail to display consciousness? How did you discern whether it did or didn't? How do you know your model was accurate? How do I know any being in this universe aside from myself is conscious in a solid and grounded way, rather than just making the assumption?

10

Glitched-Lies t1_ivdc3j8 wrote

Well, it wouldn't be a model, and generally speaking that's why. And basically "it's different" is observed in the fact that it just isn't fizzling like neurons, and there is more to it than that.

−2

Ratheka_Stormbjorne t1_ivdchkw wrote

Do you understand consciousness well enough to explain it such that no mystery remains?

9

Glitched-Lies t1_ivdda4b wrote

No, but at this point there is still knowledge of a difference, which could be described at many points of difference in cause and effect, and that is the important thing. It is just scientifically knowing a difference between how "digital" AI operates and what brains do.

1

Ratheka_Stormbjorne t1_ivdeg8p wrote

And a heavier-than-air plane will never fly. After all, how can it flap its wings fast enough?

What knowledge, exactly, are you claiming, that lets you be so certain of this?

7

Glitched-Lies t1_ivdenel wrote

Because a simulation cannot be conscious; otherwise it becomes semantics.

1

Ratheka_Stormbjorne t1_ivdgp2y wrote

So, there is no compelling reason that consciousness cannot exist within a digital system?

6

[deleted] t1_ivdql2g wrote

How can you objectively prove that you are conscious? Spoilers: you can't.

4

Ratheka_Stormbjorne t1_ivgv15d wrote

I can't, yet. I do not think that you have sufficient evidence to claim that it cannot be done, merely that we do not yet know a way to do so.

1

[deleted] t1_ivh1tcu wrote

Do you believe that everything will eventually be explained ?

1

Ratheka_Stormbjorne t1_ivh6xuq wrote

Will? The prior on that is not sufficient to rise to the level that I would call belief.

Can? Yes.

1

Glitched-Lies t1_ivdyvz2 wrote

That doesn't matter, because humans are conscious as a matter of fact, so it doesn't need "proving". It's simply a fact.

0

Glitched-Lies t1_ivdi4ub wrote

It would be "settling" ethics at an incomplete place, by the very nature of what it would mean for a computer to simulate a consciousness, and of the relative wording about computations or the math. By their very nature, the differences are exactly that. An identical system wouldn't be a computer. It should be obvious from cause and effect that, scientifically, it begins from this fundamental difference.

0

Ratheka_Stormbjorne t1_ivgux9p wrote

I did not say "simulating". I said "consciousness" and "exist".

1

Glitched-Lies t1_ivgvna4 wrote

Digital systems can only simulate.

1

Ratheka_Stormbjorne t1_ivh6vbf wrote

That is a claim. What is the evidence for that claim?

1

Glitched-Lies t1_ivhae93 wrote

That's what simulation means

1

Ratheka_Stormbjorne t1_ivhbln5 wrote

You are the one who keeps insisting that everything on a digital system is a simulation.

I keep asking how do you know everything on a digital system is a simulation?

Can you please answer my question, instead of reiterating your claim?

1

MarkArrows t1_ivg8aua wrote

> I'm not wrong with assumptions. That's not an assumption anyways.

https://utminers.utep.edu/omwilliamson/ENGL1311/fallacies.htm

This is literally the very first logical fallacy people run into: I'm right, and I am unable to entertain the notion that I could be wrong.

The point of logical reasoning is to be able to take assumptions you do not believe in, and examine them starting from both sides - A serious attempt, not some pretend strawman. Once you have the full fallout of both sides, right or wrong, you can compare them.

Besides, the very fact that other people don't agree with your assumption in the first place shows you there's something more to it that you're not seeing or that they're not seeing. Whatever logic convinced you, it didn't convince others intuitively. From here, your question should be "Am I the strange one, or are they?" Instead, it seems more like you simply write other people off.

Start from the assumption that you're wrong and explore from that root downwards. It doesn't matter how you're wrong in this case, it's hypothetical. For example, some divinity shows up and tells the world outright that consciousness is a pattern, and computers are able to generate this pattern the same way we are. Or any number of reasons that you can't refute, make up your own if you want. We're interested in the fallout from that branch of logic.

1

Glitched-Lies t1_ivg9geg wrote

It's actually by fact of first-order logic of the phenomenal. A straight line of reasoning determines it, upon evidence gathering of both empirical differences and non-empirical points. It's like 1+1=2, 1+1+1=3, 1+1+1+1=4, and so on in a series. Confusion upon any belief isn't reasoning, as that's not truly belief. Exploring the notion of this being wrong is a waste of time, given the explanation above.

1

MarkArrows t1_ivgbxrc wrote

I'm a little impressed at how I show it's literally a logical fallacy to think "I can't be wrong because my argument has convinced myself." And your response is: "My argument has convinced myself, so it's a waste of time to consider alternate arguments."

RNA and DNA work on similar rulesets and determination. If you look at the base point of what makes cells function, you'll find plenty of similarities to mechanical true/false - if/else logic at the bottom of the pole. Everything ends up being math.

We wouldn't consider them conscious, but they are organic. A variation of all these rule-abiding proteins and microorganisms eventually evolved into us.

Thus because machines follow a line of rules right now, there exists a possibility that they build on this until it's complex enough to form an artificial lifeform with consciousness, in the same way we did.

That said, I think it's a lost cause to argue with you. You aren't even able to do the basics of debate, even when it's directly pointed out.

1

Glitched-Lies t1_ivgce1c wrote

I'm not debating it or starting an argument. And not over cells, which don't work as a comparison because they are not one conscious human being.

1

Glitched-Lies t1_ivgclwe wrote

Also, it's not actually a fallacy at all to ignore arguments.

1

ReasonablyBadass t1_ivdloar wrote

So? Why would a physical difference have anything to do with whether or not a different system can be conscious?

6

Glitched-Lies t1_ivdyefh wrote

Evidence that it is not. Not just by empirical means, either. I mean the differences I am talking about are missing from these computers at their core.

1

ReasonablyBadass t1_ive55c2 wrote

Consciousness isn't material. It's not a substance but an information pattern. As long as you can run that pattern, the underlying mechanism is irrelevant.

2

stucjei t1_ivdq1y8 wrote

>Computers and brains just simply are physically phenomenally speaking, different.

Why does this matter if the output is the same?

> The physical relationships to consciousness are not the same.

What physical relationship between the brain and consciousness can you concisely point to? Why would an AI not be conscious if it's aware of and responsive to its surroundings?

5

Glitched-Lies t1_ivdyi2e wrote

Those behaviors or outputs are subjective.

0

visarga t1_ive9liz wrote

Apply the Turing test - if it walks like a duck, quacks like a duck..

5

rePAN6517 t1_ivdvecn wrote

You have no idea what you're talking about

2