FacelessFellow t1_j75215s wrote

So if a human makes an AI, the AI will have the human's biases. But what about when AIs start making AIs? Once that snowball starts rolling, won't future generations of AI be far enough removed from human biases?

Will no AI ever be able to perceive all of reality instantaneously and objectively? When computational power grows so immense that it can track every atom in the universe, won't that help AI see objective truth?

Perfection is a human construct, but flawlessness may be attainable by future AI. With enough computational power it can check and double-check and triple-check and so on, to infinity. Will that not be enough to weed out everything but true reality?


Sad-Combination78 t1_j75312i wrote

you missed the point

the problem isn't humans, it's the concept of "learning"

you don't know something, and from your environment, you use logic to figure it out

the problem is you cannot be everywhere all at once and have every experience ever, so you will always be drawing conclusions from limited experience.

AI does not and cannot solve this; it is fundamental to learning
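The point about drawing conclusions from limited experience is essentially the classic induction problem. A minimal sketch (the swan example and all names here are illustrative, not from the thread) of how a learner generalizing from a partial sample can be confidently wrong:

```python
# The full "world" the learner can never observe in its entirety:
# 95 white swans and 5 black swans.
world = ["white"] * 95 + ["black"] * 5

# The learner only ever encounters the first 10 swans it happens to meet,
# so its conclusions are drawn from that limited sample alone.
sample = world[:10]
conclusion = set(sample)   # the swan colors the learner believes exist

truth = set(world)
print(conclusion)          # {'white'} -- "all swans are white"
print(conclusion == truth) # False: the limited sample missed the black swans
```

No amount of extra compute applied to the same 10 observations fixes this; the gap comes from what was sampled, not from how carefully it was analyzed.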


FacelessFellow t1_j757skq wrote

But I thought AI was computers. And I thought computers could communicate at the speed of light. Wouldn't that mean the AI could have input from billions of devices? Scientific instruments nowadays can connect to the web. Is it far-fetched to imagine a future where all collectible data from all devices could be perceived simultaneously by the AI?