magistrate101 t1_j48ilzr wrote

This completely ignores the ways in which neural networks end up with human biases and bigotry trained into them through interactions with actual humans. And since they're intended to mimic human behavior/results, there's no way to give them safeguards that are an innate part of the system's logic: any safeguard built into the AI's logic is, by your own definition, "human moral bloatware". So your post doesn't even make sense.

7

magistrate101 t1_iqt48ez wrote

So it's an unconscious evolutionary code generator, guided by an internal response to an external assessment. I suppose you could try to use it to generate a better version of itself and maybe come across something that thinks... after years... You'd really have to stress it with a ton of different problem domains to make something that flexible, though (roughly the kind of loop sketched below).
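
A minimal sketch of the kind of loop being described, in Python: candidates get mutated, and an external assessment drives selection. The `evaluate` and `mutate` functions here are hypothetical placeholder stubs, not any real system's API.

```python
import random

def evaluate(candidate: str) -> float:
    """Hypothetical external assessment: score how well the
    candidate solves some task. Stubbed as a placeholder."""
    return random.random()

def mutate(candidate: str) -> str:
    """Hypothetical variation operator: produce a tweaked copy
    of the candidate. Stubbed as a placeholder."""
    return candidate + random.choice("abc")

def evolve(seed: str, generations: int = 100, pop_size: int = 20) -> str:
    """Evolutionary loop: generate variants, let the external
    assessment pick survivors, repeat."""
    population = [seed]
    for _ in range(generations):
        # Generate variants of the current population.
        offspring = [mutate(random.choice(population)) for _ in range(pop_size)]
        # External assessment drives selection: keep the best scorers.
        scored = sorted(population + offspring, key=evaluate, reverse=True)
        population = scored[:pop_size]
    return population[0]

best = evolve("print('hello')")
```

The point of the sketch is just that nothing in the loop is "aware" of anything: it's blind variation plus selection pressure from whatever scoring you bolt on.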

10