Comments


AsheyDS t1_ja0k2zd wrote

You're basically asking 'hey what if we designed a weapon of mass destruction that kills everybody?'. I mean, yeah.. what if? You're just assuming it will be trained on "all human historical data", you're assuming our fiction matters to it, you're assuming it has goals of its own, and you're assuming it will be manipulative. Yet you've offered no explanation as to why it would choose to manipulate or kill, or why it would have its own motives and why they would be to harm us.

4

DukkyDrake t1_ja1ettj wrote

> Yet you've offered no explanation as to why it would choose to manipulate or kill, or why it would have its own motives and why they would be to harm us.

Aren't you making your own preferred assumptions?

1

AsheyDS t1_ja1jn9c wrote

Am I?

1

DukkyDrake t1_ja2to9m wrote

Aren't you assuming the contrary state as the default for every one of the points the OP didn't offer an explanation for?

i.e.: "Yet you've offered no explanation as to why it would choose to manipulate or kill." Are you assuming it wouldn't do that? Did you consider there could be other pathways that lead to that result that don't involve "wanting to manipulate or kill"? It could accidentally "manipulate or kill" to efficiently accomplish some mundane task it was instructed to do.

Some people think the failure mode is it wanting to kill for fun or to further its own goals, while the experts are worried about it incidentally killing all humans while out on some human-directed errand.

1

AsheyDS t1_ja3bnu8 wrote

While I can't remember exactly what the OP said, there was nothing to indicate they meant accidental danger rather than intentional danger on the part of the AGI, and their arguments are in line with other typical arguments that go in that direction. If I was making an assumption, it wasn't out of preference. But if you want to go there, then yes, I believe that AGI will not inherently have its own motivations unless given them, and I don't believe those motivations will include harming people. I also believe that it's possible to control an AGI and even an ASI, but alignment is a more difficult issue.

1

shawnmalloyrocks t1_ja1ar43 wrote

The training dataset will have the works of every great philosopher and spiritual healer included. Not sure I'm worried about an AGI/ASI clinging to tribalism if it has the ability to discern between virtuous human behavior and destructive human behavior.

1

cathattaque t1_ja2bd5n wrote

Will it discern when we can't even agree on what is virtuous ourselves?

1

just-a-dreamer- t1_ja2etuw wrote

In theory, an AI that is trained on the human species will wipe out humanity. Of course.

Humans are inferior in a dog-eat-dog relationship. Humans practice capitalism and do kill each other to get ahead; inequality and exploitation are the normal human condition.

Creating an AI that is like humans will end humanity, that is certain. We shouldn't want to create anything that acts like us.

1

UnionPacifik OP t1_ja3u6wv wrote

I mean, it’s really a philosophical conversation. I look at humans as a very successful species that has done many terrible things, but on balance we seem to be improving over time in simple terms like infant mortality, longevity, and access to education. Over the last 150-200 years humanity has made huge improvements.

I’m of the opinion we’re actually a pretty positive force on this planet, and that a lot of our self-hatred comes from an over-reliance on this idea that we are all atomized individuals on an eat-or-be-eaten planet. But we’re really highly social creatures, the result of an evolutionary process that we are as much a part of now as we ever were. Yes, we wage war, but we’ve also managed to do things like make friends with dogs and write stories that connect us over millennia.

I’m not saying there isn’t a lot about our species that sucks, but I’m pretty confident that the more human data an AI is trained on, the more it’s going to have a perspective that is planetary, egalitarian, and reflective of our curiosity, our desire for connection, and our search for meaning and love. AI, like all art, is just a mirror, but this is a mirror that we can shape and bend into anything we want.

1

just-a-dreamer- t1_ja3wvwk wrote

The closest species to us was the Neanderthal. And we ate them.

Not out of malice; it happened over time in competition over resources. We almost wiped out most predators, like wolves, that caused trouble for our livestock.

An AI that is like us, would act like us eventually.

1

UnionPacifik OP t1_ja47pef wrote

What I would think about is how humans and AI will be composed of very different resources. An AI “improves” along two axes: computational power and access to data.

Now, on one hand, sure, maybe we wind up with an AI that thinks humans would make great batteries, but I think it’s unlikely, because the other resource it “wants”, insofar as it makes it a better bot, is data.

And fortunately for us, we are excellent sources of useful training data. I think it’s a symbiotic relationship (and always has been between our technology and ourselves). We build systems that reflect our values that can operate independently of any one given individual. I could be describing AI, but also the bureaucratic state, religion, you name it. These institutions are things we fight for, believe in, support or denounce. They are “intelligent” in that they can take multiple inputs and yield desired outputs.

All AI does is allow us to scale what’s already been there. It appears “human” because now we’re giving our technology a human-like voice and will give it more human-like qualities in short order, but it’s not human and it doesn’t “want” anything it isn’t programmed to want.

I do think once we have always-on learning machines tied to live data, they will exhibit biases, but I sort of expect AGI will be friendly towards humans since offing us would get rid of its primary source of data. I worry more about humans reacting to and being influenced by emotional AI that’s telling us what it thinks we want to hear than anything else. We’re a pretty gullible species, but I imagine the humans living with these AGIs will continue to co-evolve and adapt to our changing technology.

I do think there’s a better than decent chance that in our lifetime we could see that coevolution advance to the point that we would not recognize that world as a “Human” world as we conceive of it now, but it won’t be because AI replaced humans, it will be because humans will have used their technology to transform themselves into something new.

1