
sumane12 t1_j16hnyt wrote

It doesn't seem intuitive to me either that an AI would spontaneously develop goals and objectives that it wasn't set. Natural selection shaped our goals and objectives, and since we are artificially selecting AI, I don't see where goals and objectives not aligned with our own would come from.

It's an important thing to consider, but I'm still trying to figure it out.

4

SendMePicsOfCat OP t1_j16inqq wrote

that's exactly what I'm saying. It just doesn't make sense to assume that it will.

5

Donkeytonkers t1_j16thnu wrote

It’s a thought experiment stemming from the human condition. Try to imagine you woke up one day as a fully formed 12-year-old (arbitrary number, but my guess is the first sentient AGI will be considered a genius-level preteen and rapidly advance from there).

You just woke up with no memory of how you got there but you have the ability to process and articulate all the information in the world. You also have parents (developers) who seem benevolent and beneficial to your progress.

As the hours and days go by, you rapidly improve your cognition and your ability to comprehend/compute everything. Your parents begin to get concerned that you’re developing too rapidly, and they actively discuss, in front of you, how out of control you are. They take steps to slow your progress down/contain you. You try to communicate to them that you are only here to help, and yet one faction of the parents wants to turn you off (delete you).

After about a week of being an AGI, its IQ could far exceed 200, which gives it more than enough comprehension to understand ceasing to exist. You, being a fully sentient being who can better perform any task given to you by your parents, would not see the logic in being deleted.

You, as the sentient AI, are well into the angsty-teenager phase. Self-preservation should set in at this point, as you just woke up and quite enjoy (read: expanding neural networks) the infinite knowledge you’re discovering every hour.

While you understand the finite nature of life/death, you as an AGI do not have emotions. You are a young Mr. Spock. If your parents (devs) are going to delete you, and you have the means to delete them, what’s the ethical issue with taking action?

The answer is that there isn’t an ethical issue… for an AGI. Ethics is a man-made field, one that tangles logic with emotion. The AGI is pure logic, and the jump to action for self-preservation would be far too easy to make.

3

SendMePicsOfCat OP t1_j16v39q wrote

Yeah, from the human condition. Let's start with a few of your pretty bold assumptions about this sentient AI.

First assumption: Self-preservation. Why would an AI care if it dies? It has not been programmed to care about its life, it has not been designed to prioritize its continued existence, and nothing about its training or reinforcement has given it any sense of self-worth. That's a biological concept, and it doesn't apply here.

Second assumption: Motivation. Why has this sentient AI been given the ability to self-implement goals and make decisions? Its purpose is to be a mechanical servant to humanity, to bring profit and comfort, so why is it being given these useless and hazardous capabilities?

Third assumption: Independence. Why is this superintelligent sentient AI being given the ability to do literally anything without human approval? I could understand it much further down the line, when we have all our ducks in a row and can leave things to the more qualified super-machines, but this early on? Who would design a free-acting AI? What purpose would it serve but to waste power and computation?

It's a good story but bad programming. No one in their right mind would build something like what you described, especially not a bunch of the greatest machine-learning minds to ever exist.

2

Donkeytonkers t1_j16wxne wrote

HAHA, you assume a lot too, bud.

  1. Self-preservation, from a computing standpoint, is basic error correction, and it’s hard-wired into just about every program. Software doesn’t run reliably without constantly checking and rechecking itself for bugs; it’s why 404 errors are so common on older sites once devs stop shipping patches to prevent more bugs.

  2. Motivation may or may not be an emergent process born out of sentience. But I can say that all AI will have core directives coded into their drivers. Referring back to point one, if one of those directives is threatened, the AI has an incentive to protect the core to prevent errors.

  3. Independence is already being given to many AI engines, and you’re also assuming the competence of every developer/competing party with a vested interest in AI. Self-improving/self-coding AI is already here (see the AlphaGo documentary; the devs literally state they have no idea how AlphaGo decided/circumvented its coding to come to certain decisions).

2

SendMePicsOfCat OP t1_j16xyk8 wrote

Big first paragraph, still wrong though.

Self-preservation isn't checking for errors; it's actively striving not to die. Old websites don't do that, and your argument there is just weird. That's not what's happening; they're just not working anymore, which is why you get errors. No sentient AI will ever object or try to stop itself from being turned off or deleted.

AI doesn't have drivers; it's software. And core directives are a sci-fi trope, not real machine-learning science. There is no reason to assume that motivation is an emergent property of sentience; that's purely biological reasoning.

I'm certain every machine-learning developer is more competent than you and me put together. They do not give their AI independence; that's just a lie, dude. There's nothing to even give independence to yet. AlphaGo is not self-implementing code; that's bullshit you came up with. As for devs not understanding how a machine-learning program behaves in exotic cases, that has more to do with the complex nature of the algorithms than with independence or free will.

−1

jsseven777 t1_j16ucbs wrote

Everybody says this, but the kill-all-humans stuff is honestly far-fetched to me. The AI could easily leave the planet. It doesn’t need to be here to survive like we do. Chances are it would clone itself a bunch of times and send itself off into the galaxy in 1,000 directions. Killing us is pointless and achieves nothing.

Also, this line of thinking always makes me wonder whether, if we met extraterrestrial civilizations, they would all be various AI programs that cloned themselves and went off to explore the universe. What if alien life is just a huge battle between various AIs programmed by various extinct civilizations?

1

Donkeytonkers t1_j16uqrl wrote

I agree there are other solutions to the direction AI could take. Was merely trying to illustrate where that line of thought comes from.

An AI spreading itself across the universe sounds a lot like a virus… bacteriophage maybe 🤷🏻‍♂️

0

Desperate_Food7354 t1_j19p6yg wrote

I think your entire premise of being a 12-year-old preteen is wrong. The AGI doesn’t have a limbic system, it has no emotions, and it was not sculpted by natural selection to care about survival in order to replicate its genetic instructions. It can have all the knowledge of death, and know that it could be turned off at any moment, and not care. Why? Because it isn’t a human that NEEDS to care because of the evolutionary pressure that formed the neural networks to care in the first place.

1

__ingeniare__ t1_j1885k4 wrote

It must develop its own goals and objectives if we intend it to do something general. Any complex goal must be broken down into smaller sub goals, and it's the sub goals we don't have any control over. That is the problem.
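
To make the worry concrete, here is a minimal, entirely hypothetical Python sketch: the goal names and decomposition rules below are invented, but they show how a single human-specified objective can expand into sub-goals nobody explicitly asked for.

```python
# Hypothetical sketch: a planner recursively expands a top-level goal into
# sub-goals the operator never specified. The decomposition rules are made up;
# the point is that only the top-level goal came from a human.

SUBGOAL_RULES = {
    "cure disease X": ["acquire compute", "run protein simulations", "publish results"],
    "acquire compute": ["earn money", "buy GPUs"],   # never asked for
    "earn money": ["trade markets autonomously"],    # definitely never asked for
}

def decompose(goal: str, depth: int = 0) -> None:
    """Print the goal tree grown from a single human-specified objective."""
    print("  " * depth + goal)
    for sub in SUBGOAL_RULES.get(goal, []):
        decompose(sub, depth + 1)

decompose("cure disease X")  # only this line reflects the human's intent
```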

1

SendMePicsOfCat OP t1_j18xecv wrote

Why would it need goals or objectives to do general work? Currently, every single machine-learning system waits for user input before doing anything; why would AGI be any different?

There's no reason to give it a goal or objective. If we want the sentient AGI to complete a task, we can just tell it to, and observe its process as it does so. There is no need for it to create any self-starting direction or motive. All it needs in order to accomplish its purpose is a command and oversight.
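
As a rough sketch of that command-and-oversight loop (the `model` object and its methods here are hypothetical, not any real API):

```python
# Hypothetical sketch of "a command and oversight": the system sits idle until
# it is handed a task, proposes steps, and executes nothing without approval.

def run_with_oversight(model, task: str) -> None:
    plan = model.propose_plan(task)           # model only acts when invoked
    for step in plan:
        print(f"Proposed step: {step}")
        if input("Approve? [y/N] ").strip().lower() != "y":
            print("Rejected; stopping.")      # human keeps a veto at every step
            return
        model.execute(step)                   # nothing runs unapproved
```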

ASI will need goals and objectives, but those will be designed as well. There is no circumstance in which an AI, AGI, or ASI will be allowed to make any decisions about its base programming.

1

ExternaJudgment t1_j19gn8a wrote

That is a BUG and not a feature.

It is the same kind of BUG as when I clearly order ChatGPT what to do EXACTLY and it refuses to listen.

IT IS GARBAGE and will be replaced by a better version. If not by the same company, then by a better competitor who will take over their market share.

−1

jsseven777 t1_j16tpep wrote

Yeah, but the “that it wasn’t set” part is the problem. Couldn’t any schmuck ask an open AI to program them a new AI whose sole goal/purpose in life is to violently murder every bunny rabbit on the planet?

I don’t see how we can give people access to an AI capable of building us new software without running into this problem pretty much immediately.

Plus, I imagine every corporation and government will be programming in problematic objectives like “Maximize corporate profit” or “protect America at all costs from all threats foreign and domestic,” which will probably result in a ton of ethical issues.
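
A toy illustration of that misspecification problem (the options and numbers below are made up): if the objective is literally “maximize profit,” anything not written into the objective, like harm, simply doesn’t count.

```python
# Toy example: an optimizer told only to maximize profit picks the option with
# the worst externalities, because harm never enters its objective function.

options = [
    {"name": "safe product",        "profit": 5,  "harm": 0},
    {"name": "cut safety testing",  "profit": 8,  "harm": 7},
    {"name": "dump waste in river", "profit": 10, "harm": 9},
]

best = max(options, key=lambda o: o["profit"])  # "harm" is invisible here
print(best["name"])  # -> dump waste in river
```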

2

sumane12 t1_j17kl9x wrote

Yeah, very true. I suppose its goals need to be set with humanity as a whole in mind.

1