
drawkbox t1_j9tl1kv wrote

Yeah, it isn't being human. There really is no such thing if you aren't human. We assign human-like qualities to things, and when there are enough of them, it seems alive. Basically we are Calvin and AI is Hobbes, there is lots of imagination there... even in how we assign life to Calvin and Hobbes, which I just did.

Being human is sort of an irrationality or a uniqueness that AI probably doesn't want; it would be too biased. So assigning human qualities to AI is really people seeing what they wanna see. You can already see people finding bias in it, usually tuned to their own bias.

Though in the end we will have search engines that search many AI datasets that could be seen as "individuals". These "individual" AIs could also research with one another, like a GAN. There will probably be some interesting things happening around polluting or manipulating the datasets of other dataset "individuals". Almost like a real person who meets another person and it changes their thinking or life forever. Some things are immutable: one way, read-only after write.

7

Effective-Avocado470 t1_j9tlca0 wrote

Don't get me wrong, I believe AI may well eventually become conscious, much like Data in Star Trek, but we are still a long way from that.

The scary thing is that these current AIs will synthesize the worst of us into a powerful weapon of ideas and messaging. Combine that with deepfakes and no one will know what the truth is anymore.

7

drawkbox t1_j9tm7pq wrote

Yeah, humans really aren't ready for the manipulation aspect. It won't really be conscious, but it will have so many responses/manipulation points that it will feel conscious and magical, like it is reading minds.

Our evolutionary responses and reactions are being played already.

It was "if it bleeds it leads" but now is "enragement is engagement". The enragement engagement algorithms are already being tuned from supposed neutral algorithms but they already have bias and pump different content to achieve engagement.

With social media being real-time and somewhat of a tabloid, the games with fakes and misinformation will be immense.

We might already be seeing videos of events, protests, or wars, for instance, that are completely fake and are slipping past the Turing test. That is the scary thing: we won't really know when it has gone past that point. Even just for pranks, humans will use this like they use everything. You almost can't trust it already.

6

addiktion t1_j9u7vr0 wrote

I wonder how long until some country is lured into a false flag attack from this. It's gonna get scary not just because of what the AI will be capable of, but because it means zero trust in anything you see, hear, or are exposed to once this happens, which means more authoritarian methods will need to be imposed to ensure authentic communication methods.

2

danielravennest t1_j9unv51 wrote

> authoritarian methods will need to be imposed to ensure authentic communication methods.

No. That's actually one use for blockchains. Record an event. Derive a hash value from the recording. Post the hash value to a blockchain, which time-stamps it. If several people record the same event from different viewpoints independently and have the same timestamps, you can be pretty sure it was a real event.

"People" can be a municipal streetcam, and security cameras on either side of a street, assuming the buildings have different owners. If they all match, it was a real event.

2