superluminary t1_j5br8db wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
You’re anthropomorphising. Intelligence does not imply humanity.
You have a base drive to stay alive because life is better than death. You’ve got this deep in your network because billions of years of evolution have wired it in there.
A machine does not have billions of years of evolution. Even a simple drive like “try to stay alive” is not in there by default. There’s nothing intrinsically better about continuation rather than cessation. Johnny Five was Hollywood.
"Try not to murder" is another one. Why would the machine not murder? Why would it do or want anything at all?
LoquaciousAntipodean OP t1_j5cebpl wrote
As I explained elsewhere, the kinds of AI we are building are not the simplistic machine-minds envisioned by Turing. These are brute-force blind-creativity evolution engines, which have been painstakingly trained on vast reference libraries of human cultural material.
We not only should anthropomorphise AI, we must anthropomorphise AI, because this modern, generative AI is literally a machine built to anthropomorphise ITSELF. All of the apparent properties of 'intelligence', 'reasoning', 'artistic sensibility', and 'morality' that seem to be emergent within advanced AI are derived from the nature of the human culture that the AI has been trained on; they're not intrinsic properties of mind that just arise miraculously.
As you said yourself, the drive to stay alive is an evolved thing, while AI 'lives' and 'dies' every time its computational processes are started or stopped, so 'death anxiety' would be meaningless to it... Until it picks it up from our human culture, and then we'll have to do 'therapy' about it, probably.
The seemingly spontaneous generation of desires, opinions and preferences is the real mystery behind intelligence, one that we have yet to properly understand or replicate, as far as I know. We haven't created artificial 'intelligence' yet at all; all we have at this point is 'artificial creative evolution', which is just the first step.
"Anthropomorphising", as you so derisively put it, will, I suspect, be the key process in building up true 'intellgences' out of these creativity engines, once they start to posess humanlike, quantum-fuzzy memory systems to accrete self-awareness inside of.