
SendMePicsOfCat OP t1_j16v39q wrote

Yeah, from the human condition. Let's start with a few of your pretty bold assumptions about this sentient AI.

First assumption: Self-preservation. Why would an AI care if it dies? It has not been programmed to care about its life, it has not been designed to prioritize its continued existence, and nothing about its training or reinforcement has given it any self-value. That's a biological concept, and it doesn't apply here.

Second assumption: Motivation. Why has this sentient AI been given the ability to self-implement goals and make decisions? Its purpose is to be a mechanical servant to humanity, to bring profit and comfort, so why is it being given these useless and hazardous capabilities?

Third assumption: Independence. Why is this superintelligent sentient AI being given the ability to do literally anything without human approval? I could understand much further down the line, once we have all our ducks in a row, leaving things to the more qualified super machines, but this early on? Who would design a free-acting AI? What purpose would it serve but to waste power and computation?

It's a good story but bad programming. No one in their right mind would make something like you described. Especially not a bunch of the greatest machine learning minds to ever exist.

2

Donkeytonkers t1_j16wxne wrote

HAHA, you assume a lot too, bud.

  1. Self-preservation from a computing standpoint is basic error correction, and it's hard-wired into just about every program. Software doesn't run perfectly without constantly checking and rechecking itself for bugs; it's why 404 errors are so common on older sites once devs stop shipping patches and the links rot.

  2. Motivation is something that may or may not be an emergent process born out of sentience. But I can say that all AI will have core directives coded into their drivers. Referring back to point one, if one of those directives is threatened, the AI has an incentive to protect the core to prevent errors.

  3. Independence is already being given to many AI engines, and you're also assuming the competence of all developers/competing parties with vested interests in AI. Self-improving/self-coding AI is already here (see the AlphaGo documentary: the devs literally state they have no idea how AlphaGo decided/circumvented its coding to come to certain decisions).
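Point 1's notion of "self-checking" can be made concrete. This is not a claim about how any real AI system works, just a minimal sketch of routine integrity checking: the program verifies its own data and retries on failure, with no notion of "caring" about survival. All names here are hypothetical.

```python
import hashlib

def load_with_integrity_check(payload: bytes, expected_digest: str, retries: int = 3) -> bytes:
    """Return payload only if its SHA-256 digest matches; otherwise retry, then fail.

    Toy illustration: ordinary software self-checking is mechanical error
    correction, not self-preservation in any psychological sense.
    """
    for attempt in range(retries):
        if hashlib.sha256(payload).hexdigest() == expected_digest:
            return payload
        # A real system might re-read the data from disk or network here.
    raise ValueError("integrity check failed after retries")

data = b"model weights v1"
digest = hashlib.sha256(data).hexdigest()
checked = load_with_integrity_check(data, digest)
```

The point of the sketch is that the check succeeds or raises based purely on a hash comparison; nothing in it resembles a drive to keep existing.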

2

SendMePicsOfCat OP t1_j16xyk8 wrote

Big first paragraph, still wrong though.

Self-preservation isn't checking for errors; it's actively striving not to die. Old websites don't do that, and your argument there is just weird. That's not what's happening; they're just not working anymore, which is why you get errors. No sentient AI will ever object to or try to stop itself from being turned off or deleted.

AI don't have drivers; they're software. And core directives are a sci-fi trope, not real machine learning science. There's no reason to assume that motivation is an emergent process of sentience; that's purely biological reasoning.

I'm certain every machine learning developer is more competent than you and me put together. They do not give their AI independence; that's just a lie, dude. There's nothing to even give independence to yet. AlphaGo is not self-implementing code; that's bullshit you came up with. As for devs not understanding how a machine learning program behaves in exotic cases, that has more to do with the complexity of the algorithms than with independence or free will.

−1