Submitted by AUFunmacy t3_10pwt44 in philosophy
Magikarpeles t1_j6n6fsg wrote
Reply to comment by HEAT_IS_DIE in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
I think the hard problem is more about being unable to prove or disprove someone else's phenomenological experience of being conscious (at least as I understand it). I think that's quite relevant to the discussion about whether or not the AI is "conscious". Unlike humans and animals, the AI isn't constantly processing, thinking, and feeling; it does so only when it's tasked with something.
If consciousness is an emergent property, then it's possible for the AI to be conscious in its own way while it's "thinking". But the point stands that it's not possible to access someone or something's subjective experience, so we can only ever speculate.
HEAT_IS_DIE t1_j6ngdon wrote
I think it is not a problem unless you make it so. Of course we can't know exactly what's going on in someone else's experience, but we know other experiences exist, and that they aren't all drastically different when the biological factors are the same.
I still don't understand what is so problematic about not being able to access someone else's experience. It just seems to be the very point of consciousness that it's unique to every individual system, and that you can't inhabit another living thing without destroying both. Consciousness reflects outwards. It is evident in reactions. For me, arguing about consciousness totally outside reality and real world situations is not the way to understand the purpose and nature of it. It's like thinking about whether AI will ever grow a human body and if we will be able to notice when it does.
jamesj t1_j6obala wrote
It may not be the case that there is a strong correlation between consciousness and outward evidence of consciousness. Your claim that it is obvious which other entities are conscious and which are not is a huge assumption, one that could be wrong.
wow_button t1_j6ogc1e wrote
I like your criteria of need for preservation, reacting to stimuli, and others as necessary, but I'll posit that we can already do that with computers. "Need for preservation" is an interesting phrase, because I can create an evolutionary algorithm that rewards preservation. But "need" implies desire, and we have no idea how to make a computer program desire anything. "React to outside stimuli": this can be emulated on a computer, but there is nothing with any sense of "outside" and "inside". "Others as necessary": see the previous point for the problem with "others". "Necessary" is also problematic, because it implies desire or need.
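The "evolutionary algorithm that rewards preservation" could be sketched as a toy like the one below (the energy model and all names here are my own invented illustration, not anything from the thread). It selects for genomes whose agents keep an energy budget above zero for more steps, which makes the commenter's point concrete: the program optimizes a survival score, but nothing in it "needs" or desires anything.

```python
import random

GENOME_LEN = 20

def fitness(genome):
    """Score 'self-preservation': how many steps the agent keeps its
    energy budget above zero. Each gene is an energy cost per step,
    so cheaper genomes survive longer."""
    energy, steps = 10.0, 0
    for gene in genome:
        energy -= abs(gene)
        if energy <= 0:
            break
        steps += 1
    return steps

def mutate(genome, rate=0.2):
    """Copy a genome, randomly nudging some genes."""
    return [g + random.uniform(-0.5, 0.5) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=60, generations=40):
    """Keep the longest-surviving half each generation; refill the
    population with mutated offspring of the survivors."""
    pop = [[random.uniform(-2, 2) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size // 2)]
    return max(fitness(g) for g in pop)

random.seed(42)
print("best survival steps:", evolve())
```

The selection loop rewards preservation purely as a number to maximize; there is no inner state that wants to survive, which is exactly the gap between behavior and desire the comment is pointing at.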
If you can teach me how to make a computer program feel pain and pleasure, then I agree you can create AI that is sentient. If you can't, then no matter how interesting, complex, or seemingly intelligent the code's behavior is, I don't see how you can consider it conscious.