
UnionPacifik OP t1_ja3u6wv wrote

I mean, it’s really a philosophical conversation. I look at humans as a very successful species that has done many terrible things, but on balance we seem to be improving over time: just in terms of simple things like infant mortality, longevity, and access to education, humanity has made huge improvements over the last 150 to 200 years.

I’m of the opinion we’ve actually been a pretty positive force on this planet, and that a lot of our self-hatred comes from an over-reliance on this idea that we are all atomized individuals on an eat-or-be-eaten planet. But we’re really highly social creatures, the product of an evolutionary process that we are as much a part of now as we ever were. Yes, we do war, but we have also managed to do things like make friends with dogs and write stories that connect us across millennia.

I’m not saying there isn’t a lot about our species that sucks, but I’m pretty confident that the more human data an AI is trained on, the more it’s going to have a perspective that is planetary, egalitarian, and reflective of our curiosity, our desire for connection, and our search for meaning and love. AI, like all art, is just a mirror, but this is a mirror that we can shape and bend into anything we want.


just-a-dreamer- t1_ja3wvwk wrote

The closest species to us was the Neanderthal. And we ate them.

Not out of malice; it happened over time through competition over resources. We almost exterminated most predators, like wolves, that caused trouble for our livestock.

An AI that is like us would eventually act like us.


UnionPacifik OP t1_ja47pef wrote

What I would think about is how humans and AI depend on very different resources. An AI “improves” along two axes: computational power and access to data.

Now, on one hand, sure, maybe we wind up with an AI that thinks humans would make great batteries, but I think that’s unlikely, because the other resource it “wants,” inasmuch as having more of it makes it a better bot, is data.

And fortunately for us, we are excellent sources of useful training data. I think it’s a symbiotic relationship (and always has been, between our technology and ourselves). We build systems that reflect our values and can operate independently of any one individual. I could be describing AI, but also the bureaucratic state, religion, you name it. These institutions are things we fight for, believe in, support, or denounce. They are “intelligent” in that they can take multiple inputs and yield desired outputs.

All AI does is allow us to scale what’s already been there. It appears “human” because we’re now giving our technology a human-like voice, and we will give it more human-like qualities in short order, but it’s not human, and it doesn’t “want” anything it isn’t programmed to want.

I do think that once we have always-on learning machines tied to live data, they will exhibit biases, but I sort of expect AGI will be friendly toward humans, since offing us would get rid of its primary source of data. I worry more about humans reacting to, and being influenced by, emotional AI that tells us what it thinks we want to hear than about anything else. We’re a pretty gullible species, but I imagine the humans living with these AGIs will continue to co-evolve and adapt to our changing technology.

I do think there’s a better-than-decent chance that within our lifetime this coevolution could advance to the point that we would not recognize that world as a “human” world as we conceive of it now. But it won’t be because AI replaced humans; it will be because humans will have used their technology to transform themselves into something new.
