
undefined7196 t1_jab8hoq wrote

Any form of AI will be the product of the mind that creates it. All the basic AI we have carries our biases and beliefs, because AI has to be taught, and it is taught by its creator. We could possibly find a way around this, but I don't see how. I build "AI" models for a living. You have to train the models on something or else they are useless, and the only thing we have to train them on is ourselves.


Porkinson t1_jaewe1s wrote

Maybe in the future you could train an AI purely by predicting what happens in its surroundings, just as you can make an AI that predicts the next token of text.
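Next-token prediction at its simplest is just counting which token tends to follow which. A minimal sketch (a toy bigram model, not how large models actually work, but the same prediction objective):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """For each token, count which tokens were observed to follow it."""
    following = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        following[cur][nxt] += 1
    return following

def predict_next(model, token):
    """Predict the most frequently observed successor of `token`."""
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Real language models replace the counting table with a neural network, but the supervision signal is the same: the environment (the text) provides its own labels, no human annotation needed.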


undefined7196 t1_jaexo9d wrote

Perhaps, but those surroundings would inevitably have human influence. I suppose you could make a simulated world and put simulated AI in it. You would need many entities so they could learn empathy and interaction with other beings. It would work similarly to a GAN (Generative Adversarial Network), where the AI entities compete and that competition is what drives the learning. Then you just don't allow any human interference at all, only AI-vs-AI interactions. That could work.
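The adversarial idea is that two learners improve each other with no external teacher. A minimal sketch of that dynamic (a toy "generator" chasing a toy "discriminator" over a one-dimensional target, with made-up step sizes, not a real GAN training loop):

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real data" distribution the generator tries to imitate

def discriminator_score(x, estimated_real_mean):
    """Higher score means the sample looks more 'real' to the discriminator."""
    return -abs(x - estimated_real_mean)

gen_mean = 0.0       # generator's only parameter, starts far from the target
disc_estimate = 0.0  # discriminator's running estimate of the real mean

for step in range(1000):
    real = random.gauss(REAL_MEAN, 1.0)
    fake = random.gauss(gen_mean, 1.0)

    # Discriminator update: track the real data a little more closely.
    disc_estimate += 0.05 * (real - disc_estimate)

    # Generator update: nudge its parameter in whichever direction the
    # discriminator currently scores as more "real".
    base = discriminator_score(fake, disc_estimate)
    if discriminator_score(fake + 0.1, disc_estimate) > base:
        gen_mean += 0.1
    elif discriminator_score(fake - 0.1, disc_estimate) > base:
        gen_mean -= 0.1

print(round(gen_mean, 1))  # ends up near REAL_MEAN
```

Neither side is given the answer directly; the generator only ever sees the discriminator's judgments, yet it converges toward the target. That is the sense in which pure AI-vs-AI competition can drive learning.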

That said, this could be what we are experiencing right now. We may be those entities, simulated to produce a pure AI in a simulated environment. It would look identical to what we are experiencing, and we ended up manipulative and destructive on our own.