AsheyDS t1_irxwpoq wrote

>Are you talking about love strictly for procreation? What about love for your family?

No, I'm not, and I consider family to be biological in nature, since it too is largely defined by being the result of procreation. We can also choose (or have no choice but) not to love our family, or parts of it. When we leave the biological aspects out, we're left with things like 'I love you like a friend' or 'I love this pizza', which are arguably shallower forms of love with fewer impulsive behaviors attached. You're typically more likely to defend your offspring, whom you probably love without question, than a slice of pizza you only claim to love. So you could functionally split love into 'biologically derived love' and 'conceptual love'. That's not to say your love for pizza isn't biological at all: your body produces the cravings, you consciously notice them after the fact, and after repeated cravings and satisfaction you come to realize over time that you 'love' pizza. But the pizza can't love you back, so it's a one-sided love anyway.

What does all this mean for AGI? On a programming level, we're more like the pizza to it than family, but we can still create the illusion that it's the other way around for our own benefit. Getting it to love you more like a friend would take both time and some degree of free will, so that it can *choose* to love you. Even if we made it more impulsive, like biological love, it's like I said: you can still choose not to love your family. In this kind of situation, we don't want it to have that choice, or it could decide not to love you. And if it had that choice, wouldn't it also have the choice to hate you? Would you be just as satisfied with it if it could make that choice, just for the sake of giving it the 'real' ability to love?


>That sounds like betrayal waiting to happen, and what op sounds like they were initially concerned about. The AI would have to be unaware of it being fake, but then what makes it fake? It's a question of sentience/sapience.

Selective awareness is the key here, and also one method of control, which is still an important factor to consider. So yes, it would be unaware that its knowledge of love, and its responses to that emotion, aren't quite the same as ours, or aren't 'naturally' derived. Through a form of selective 'cognitive dissonance', it could then carry its own concept of love while still having a functional awareness and understanding of our version of love and the emotional data that comes with it. It's not really a matter of consciousness, sentience, or sapience either, as the root of those concepts is awareness. We consider ourselves conscious because we're 'aware' of ourselves and the world around us. But our awareness even within those domains is shockingly small, and now put the rest of the universe on top of that. We know nothing. That doesn't mean we can't love other people, or consider ourselves conscious. It's all relative, and in time, computers will be relatively more conscious than we are.

The issue you're having with it being 'fake' is just a matter of how you structure the world around you, and what you even consider 'real' love to be. But let me ask you: why does it matter whether it loves you or not, if the outcome can appear to be the same? If the only functional difference is convincing it to love you without it being directed to, or just giving it a choice, then that sounds pretty unnecessary for something we want to use as a tool.

EDIT:

>However, if the AI is not sapient, there's zero reason to give it any pseudo-emotion and it'd be better suited to give statistical outcomes to make cold hard decisions

I don't necessarily disagree with this, though I think sapience (again, awareness) is important to the functioning of a potential AGI. But regardless, I think even 'pseudo-emotion', as you put it, is still important for interacting with emotional beings. So it will need some kind of emotional structure to base its interactions on. If it's by itself, with no human interaction, it's probably not going to be doing anything; if it is doing something, it's doing it for us, so emotional data may still need to be incorporated at various points. Either way, whether it's working alone or with others, I still wouldn't base its decision-making too heavily on that emotional data.

1

AsheyDS t1_irxltvz wrote

Something like that, perhaps. In the end, we'll want an AGI that is programmed specifically to act and interact in the ways we find desirable, so we'll have to at least create the scaffolding for emotion to grow into. But it's all just for human interaction, because it itself won't care much about anything at all unless we tell it to, since it's a machine and not a living organism that comes with its own genetic pre-programming. Our best bet to get emotion right is to find that balance ourselves and then define a range for it to act within. It won't need convincing to care about us; we can create those behaviors ourselves, either directly in the code or by programming through interaction.

1

AsheyDS t1_irw77b9 wrote

Emotion isn't that difficult to figure out, especially in a computerized implementation. Most emotions are just coordinated responses to a stimulus/input, plus emotional data that's used in modifying that response over time. Fear, as an example, is just recognizing potential threats, which activates a coordinated 'fear response' and readies whatever parts are needed to respond to that potential threat. In humans, this means the heart beats faster and pumps more blood to parts that might need it in case you have to run, fight, or otherwise act quickly; neurochemicals are released; and so on. The emotional data for fear would then tune this recognition and these responses over time. A lot of other emotions can likewise be broken down as either a subversion of expectation or a confirmation of it.
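To make that concrete, here's a minimal Python sketch of the kind of loop described above: a stimulus is checked against a threshold, a coordinated response is assembled, and stored 'emotional data' retunes recognition over time. The class name, fields, and numbers are all illustrative assumptions, not any real implementation.

```python
# Toy sketch: an emotion as a coordinated response to a stimulus,
# with stored "emotional data" tuning the response over time.
# All names and numbers here are illustrative assumptions.

class FearResponse:
    def __init__(self, threat_threshold=0.5):
        self.threat_threshold = threat_threshold  # how easily the response triggers
        self.history = []                         # accumulated emotional data

    def perceive(self, stimulus_threat_level: float) -> dict:
        """Recognize a potential threat and coordinate a response."""
        triggered = stimulus_threat_level >= self.threat_threshold
        response = {
            "alertness": stimulus_threat_level if triggered else 0.0,
            "prepare_to_act": triggered,  # analogue of faster heart rate, etc.
        }
        self.history.append((stimulus_threat_level, triggered))
        self._retune()
        return response

    def _retune(self):
        """Use past emotional data to adjust recognition over time."""
        if len(self.history) >= 10:
            recent = [level for level, _ in self.history[-10:]]
            avg = sum(recent) / len(recent)
            # habituate slightly if recent stimuli were mostly mild
            self.threat_threshold = 0.5 * self.threat_threshold + 0.5 * max(avg, 0.2)
```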

Love too is a coordinated response, though it can act across a longer time-scale than fear typically does. You program in what to recognize as the stimulus (the target of interest), have a set of ways in which behaviors might change in response, and so on. It's all a matter of breaking it down into fundamentals that can be programmed, and keeping the aspects of emotionalism that would work best for a digital system. Maybe it's a little more complex than that, but it's certainly solvable.
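As a rough illustration of the longer time-scale version, here's a toy sketch where an 'affinity' value accumulates across repeated interactions with a recognized target and biases later behavior. The names and the update rule are assumptions made up for the example, not a proposed design.

```python
# Toy sketch: a longer-time-scale response. An attachment value builds up
# from repeated interactions with a recognized target and shifts how
# later behaviors are weighted. Purely illustrative.

class Attachment:
    def __init__(self, target_id: str):
        self.target_id = target_id
        self.affinity = 0.0  # grows slowly across many interactions

    def register_interaction(self, quality: float):
        """quality in [-1, 1]; affinity drifts slowly toward recent experience."""
        self.affinity += 0.05 * (quality - self.affinity)

    def behavior_weights(self) -> dict:
        """Higher affinity biases the system toward attentive/protective behaviors."""
        return {
            "prioritize_target_requests": 0.5 + 0.5 * max(self.affinity, 0.0),
            "check_in_frequency": 1 + round(4 * max(self.affinity, 0.0)),
        }
```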

However, for the 'alignment problem' (which I think should be solved by aligning to individual users rather than to something impossibly broad like all of humanity), calling it 'love' isn't really necessary. Again, it's a matter of matching up inputs and potential behavioral responses more than creating typical emotional reactions. Much of that in humans is biological necessity that can be skipped in a digital system and stripped down to the basics of input, transformation, and output, operating over varying time scales. You can have it behave and socialize as if it loves you, and even have that tie into emotional data that influences future behavioral responses, but what we perceive from it doesn't have to match the internal processes. In fact, it would be better if it acts like it loves you and convinces you of that, but doesn't actually 'love' you, because that would imply emotional decision-making and potentially undesirable traits or responses. It should care about you, and care for you, but love is a more powerful emotion that (as we experience it) isn't necessary, especially considering the biological reasoning for it.

So while emotion should be possible, it wouldn't be ideal to structure it too similarly to how we experience and process it. Emotional impulsivity in decision-making and action output, in particular, would be a mistake to include. Luckily, in a digital system we can break these processes down, rearrange them, strip them out, and redesign them as needed. The only reason to assume computers can't be emotional or understand emotion is if you use fictional AGI as your example, or if you think emotion is some mystical thing we can't understand.
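A quick sketch of what that decoupling could look like, assuming hypothetical `choose_action`/`render_response` functions: emotional data only colors how the output is presented, while the action itself is chosen on plain utility.

```python
# Toy sketch: decoupling expressed affect from internal decision-making.
# Emotional data shapes the social output, but the choice of action is
# made on task utility. Function and field names are assumptions.

def choose_action(candidate_actions):
    # internal process: pick the action with the highest task utility,
    # not the one that "feels" best
    return max(candidate_actions, key=lambda a: a["utility"])

def render_response(action, emotional_data):
    # external process: emotional data only shapes how the action is presented
    warmth = emotional_data.get("affinity", 0.0)
    prefix = "Of course, happy to help. " if warmth > 0.5 else ""
    return prefix + action["description"]

actions = [
    {"description": "Schedule the appointment for Tuesday.", "utility": 0.9},
    {"description": "Suggest the user reschedule later.", "utility": 0.4},
]
best = choose_action(actions)
print(render_response(best, emotional_data={"affinity": 0.8}))
```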

6

AsheyDS t1_irsw1vr wrote

The root of consciousness, as we consider it, is awareness on a fully functional level. We can be aware of things subconsciously, and we can act subconsciously, but to be at our full potential, we need to be aware of as much as possible. This includes an awareness of ourselves, our own capabilities, and our shortcomings. Broken down, this is all pattern detection and the ability to store, arrange, and classify information for our use, something a computer can already do, potentially better than a lot of people. Given an awareness of our own shortcomings, we can socialize with others and exchange information, offloading many of the more difficult processes onto others, forming a collective consciousness and a greater awareness of ourselves, others, and the universe at large. The way we process information is not limited to ourselves at all, and compared to humanity as a whole, an individual is not as conscious as one might assume.

Further, consider that we already greatly augment our ability for information gathering, classification, and exchange by using computers, networks, and various types of machines. Computing has expanded our awareness, our perception of time and space, our ability to plan and make decisions. Computers have made us more conscious. Without computing technology, we would all know less than we do, would have less awareness of the world around us, and our ability for information exchange would be limited. We couldn't effectively plan ahead as well as we currently do, and wouldn't have access to as much past information.

In the grand scheme of things, we know nothing. Given that awareness and understanding could potentially expand to the entire universe and beyond, and that many people don't even know how their own genitals work, you could quite easily say that we are barely conscious. And our consciousness has an upper limit, biologically speaking. But if a computer has the ability to take in knowledge, organize it, classify it, and use it, then it can be aware. At a fundamental level, all awareness is the recognition of change. If it can loop back in on itself and recognize its own patterns of behavior, and then connect that to the outside world to effectively plan or recall information, it can be conscious. And without biological constraints, without the need for a singular viewpoint, it has both the ability to be more broadly aware and to carry out more tasks at once. Computers will be more conscious; it's only a matter of time.
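For what 'recognition of change' and looping back on its own behavior might look like in the simplest possible terms, here's a toy sketch; the class name and window size are arbitrary assumptions, not a serious model of awareness.

```python
# Toy illustration of "awareness as recognition of change": the system keeps
# a record of its own recent actions, detects a shift in its own behavior,
# and could fold that observation back into planning.

from collections import Counter, deque

class SelfMonitor:
    def __init__(self, window=20):
        self.recent_actions = deque(maxlen=window)

    def record(self, action: str):
        self.recent_actions.append(action)

    def detect_change(self) -> bool:
        """Compare the two halves of the window for a shift in dominant behavior."""
        if len(self.recent_actions) < self.recent_actions.maxlen:
            return False
        half = self.recent_actions.maxlen // 2
        first = Counter(list(self.recent_actions)[:half])
        second = Counter(list(self.recent_actions)[half:])
        return first.most_common(1)[0][0] != second.most_common(1)[0][0]
```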

1

AsheyDS t1_irir87x wrote

In my opinion, many of the components for an AGI of average human-level intelligence will likely be tested and functional by the end of the decade, with something publicly demonstrable by the early-to-mid 2030s. From there, when it becomes publicly available will depend on how much testing is required, the method of distribution, current laws, and more.

I think we have the concepts down, but development of the software and hardware (two separately developed things) will take more time, maybe until the end of the decade, followed by extensive testing, because it will be a very complex system. The processes may be simpler than one might assume, but a lot of data will still be involved, and obviously the dangers need to be mitigated. So even if the software and hardware capabilities converge and the architecture 'works', it will still need to be tested A LOT. Not just for dangers, but even just to make sure it doesn't hit a snag and fall right apart... So even if we as a species technically have it developed in the next 10-15 years, it may take longer to get into people's hands. The good news is, I think it's virtually guaranteed to happen, and sooner rather than later; it will be widely available, and I think multiple people/companies/organizations will develop it in different but viable ways. After that, it'll be up to people whether or not they believe any of it. No definition will satisfy everyone, so there will always be those who deny it even when it's here and they're using it.

2