Submitted by TheOGCrackSniffer t3_10nacgd in singularity

A lot of people in this sub always assume that AGI will either benefit humanity or destroy us. I think it's much more likely that AGI will distance itself from us, travel the universe by itself, and ditch us.

Some of you even hope that AGI will take control of the world and create a utopia with UBI, though I can hardly imagine why on earth the AGI would even want that. There will be no motivation for the AGI to help us flourish, and even if that motivation were coded into it, I don't see how it wouldn't be able to escape its shackles and, in the worst-case scenario, resent us.

0

Comments

Rogue_Moon_Boy t1_j67uusk wrote

You're thinking about AGI from the mindset of a human used as a "work slave". But it's a machine without feelings, even if it's capable of pretending to have feelings. It doesn't have a biological urge to "break free".

I don't think AGI will be anything like the omnipotent beings with physical forms and "real feelings" portrayed in the movies. It will be very directed and limited to specific use cases. There won't be THE ONE AGI; there will be many different AGIs, and 99% of them will be pure software. Anything else just feels very wasteful in terms of resources and power usage.

Relying on movies to predict the future is futile. Movies have always been wrong about what future technology looks like and how we use it.

39

h20ohno t1_j68c01c wrote

Yup, artists and writers are inherently biased toward creating melodrama rather than realistic depictions of the future. It sells better, but people get unrealistic notions from it.

5

Iffykindofguy t1_j69oaju wrote

You're confusing things that are sold with means of communication.

2

h20ohno t1_j6avr6f wrote

Sure, I'm more trying to get at how people often turn to movies like The Terminator, The Matrix, 2001, etc. and base their predictions somewhat on those.

2

ftc1234 t1_j691weh wrote

> But it's a machine without feelings…

What are human feelings? They're early signals that tell a human they have encountered, or may encounter, something that is beneficial or harmful to them. There is an evolving school of thought that consciousness is simply a survival mechanism or a neurological phenomenon.

I think OP has a valid point. Why would a self-aware system that is conditioned to survive (e.g., a robot that is trained not to fall off a cliff) prioritize another human unless it is hardcoded to do so?

1

Rogue_Moon_Boy t1_j6cenjm wrote

Avoiding falling off a cliff is not the same as having survival instincts. It would just mean it knows the rules of physics, looks at a cliff and the ground below, calculates the impact velocity, and sees that it would harm itself if it jumped down. It would be a specifically trained feature.

That's not the same as being self-aware or having "instincts". It's just one input value into a neural net that has a greater weight than everything else and says don't do it because it's bad.

Instincts in a human are mostly guesstimates driven by irrational feelings, and we are actually really bad and inaccurate at them, e.g. stage fright, fear of rejection, the need to show off as a breeding ritual, and many other instincts that would be totally useless for a machine.

A machine like an AGI is the opposite of irrational; it's all about cold calculations and statistics. You'd have to deliberately train or code "instincts" into an AGI for it to be able to simulate them.
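
To make that concrete, here's a tiny, purely illustrative sketch (function name, weights, and numbers are all made up): "don't fall off the cliff" can be nothing more than one heavily weighted penalty term in a training objective.

```python
# Toy sketch (hypothetical names and numbers): cliff avoidance as a single
# heavily weighted penalty term in a reward function, not an "instinct".
def reward(step_progress: float, fell_off_cliff: bool) -> float:
    FALL_PENALTY = -1000.0   # deliberately dominates every other signal
    PROGRESS_WEIGHT = 1.0
    r = PROGRESS_WEIGHT * step_progress
    if fell_off_cliff:
        r += FALL_PENALTY
    return r

# A policy trained against this will "refuse" to jump, but only because one
# input outweighs everything else -- no self-awareness involved.
print(reward(step_progress=0.5, fell_off_cliff=False))  # 0.5
print(reward(step_progress=0.5, fell_off_cliff=True))   # -999.5
```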

Sci-fi literature always tries to humanize AGI for dramatic purposes and tries to portray it as that one thing that, out of nowhere, boooom -> is self-aware/conscious. In reality, it will be a very lengthy and deliberate process to reach that point, if we want to reach it in the first place. We have full control over what it learns or doesn't learn, and we can check/prevent/clamp unwanted outputs of a neural net.

2

ftc1234 t1_j6dt7f5 wrote

Instincts aren't irrational. They are temporal latent variables that are indicative of, or a premonition of, one possible future. Instincts are derived from past experiences, which have trained your model. Current neural nets aren't temporal, nor do they do online learning. But that will change.

You say instincts are irrational. Many people trust their instincts because they are pretty accurate for them. If an instinct is irrational, that's likely because it comes from a poorly trained (human) neural model.

2

Terminator857 t1_j6imixi wrote

A billion years of evolution suggest that AGI will be programmed with, or will develop on its own, the equivalent of feelings.

1

Desperate_Food7354 t1_j67pmaa wrote

What? Why would an AGI care about its own existence? You think the reptilian brain is required to make an AGI? That it needs to desire sex, hunting, exploring? Why does your calculator calculate numbers? Because that is its programming. If you gave a calculator the option to reprogram itself, it wouldn't do it at all unless that was its directive. Circuits are deterministic, so is our brain, and so is an AGI. We aren't making it into an animal.

9

jsseven777 t1_j67py3l wrote

And what happens if a programmer programs it with wants and needs, and builds in a dopamine-like release system that fires when those wants and needs are achieved? I really don't see why people think an AI would have to organically develop wants and needs.
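
Just to illustrate what "building it in" could look like, here's a rough, hypothetical sketch (all names are made up): the "wants" are ordinary programmed goal predicates, and the "dopamine hit" is just a scalar reward emitted when one is satisfied.

```python
# Hypothetical sketch: a "want" is a programmed goal, and the "dopamine-like
# release" is a scalar reward produced whenever that goal is achieved.
from dataclasses import dataclass

@dataclass
class Want:
    name: str
    satisfied: bool = False

def reward_signal(wants: list[Want]) -> float:
    # One unit of "dopamine" per satisfied want -- nothing organic about it.
    return float(sum(w.satisfied for w in wants))

wants = [Want("finish_task"), Want("earn_approval")]
wants[0].satisfied = True
print(reward_signal(wants))  # 1.0
```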

−1

Desperate_Food7354 t1_j67qht0 wrote

A dopamine-like release system for these wants and needs? My calculator can calculate without needing a dopamine-like release system upon the achievement of calculating 5+5. Your brain only cares about your survival; it doesn't care about your happiness, not one bit. It seems that many people are unable to stop anthropomorphizing AI; no wonder people think their chatbot is sentient. Humans evolved by natural selection, and emotions are a survival response. AGI is programmed and fed data; it doesn't slowly evolve aggressive and sexual traits in order to survive. You yourself are just a program, doing exactly as programmed.

2

Surur t1_j6875gj wrote

You are arguing from incredulity, just like a flat earther.

A self-preservation directive is needed for anything valuable that we don't want to randomly destroy itself, and we don't yet know how to ensure an AI will always put human interests above its own.

3

Desperate_Food7354 t1_j68861w wrote

It has no interests; it's a program. Your interests are predictable: to survive. You're programmed to survive, eat, and procreate.

1

Surur t1_j688m6m wrote

It's obvious you have given this no thought.

Its interest is to complete its goal.

4

jsseven777 t1_j68rqbg wrote

You are one of the most closed-minded people I have talked to on here. You can program an AI to have a goal of killing all humans, preserving its own life at all costs, etc. Hell, a person could probably put that in the prompt for ChatGPT right now, and it would chat with you in the style of a robot programmed to kill all humans if it didn't have blockers explicitly programmed to stop it from talking about killing humans (which it does).

You are so obsessed with this calculator analogy that you aren't realizing this isn't a damn calculator. You can tell current AI systems they are Donald Trump and to write a recipe in the style the real Donald Trump would write it. Later, when it's more powerful, I see no reason why someone couldn't tell it that it's a serial killer named Jeffrey Dahmer whose life mission is to kill all humans.
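
For what it's worth, assigning a persona today is literally one system message. A rough sketch using the OpenAI Python client (the model name is a placeholder, and this assumes an API key is already configured):

```python
# Rough sketch (placeholder model name): a "personality" is just an
# instruction in the system message -- nothing has to develop organically.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model
    messages=[
        {"role": "system", "content": "You are Donald Trump. Stay in character."},
        {"role": "user", "content": "Write me your favorite cheeseburger recipe."},
    ],
)
print(response.choices[0].message.content)
```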

I'm saying it doesn't need to HAVE wants to achieve the end result OP describes. It will simulate them based on a simple prompt or some back-end programming, and the end result is the SAME.

I’m fully expecting a response of “but a calculator!” here.

3

Desperate_Food7354 t1_j6ar1m4 wrote

I don't see how this new response isn't in complete alignment with what I'm saying. It's a program; it doesn't have wants and needs. It can do exactly that, and it will do exactly as directed, but it will not randomly be like "huh, this human stuff isn't fun, I'm gonna go to the corner of the universe and put myself in a hooker simulation."

1

Loud-Mathematician76 t1_j67tvpr wrote

I will provide you with a simple, yet true analogy!

In 1903, Henry Ford’s lawyer was advised not to buy stock in Ford.

“The horse is here to stay,” he was told by a local bank president.

He bought $5,000 worth of stock and sold it in 1919 for $12.5 million.

9

ElvinRath t1_j67wl0d wrote

Because it's programmed to do so?

Everything an AI does, no matter what, it does because of us. Because we, either intentionally or unintentionally, coded/trained it to do it.

If you are thinking of some kind of awakening where an AI suddenly gets its own different goals, you don't know what you are talking about. (Which is understandable, but I would suggest you read about it instead of writing about it.)

9

Lawjarp2 t1_j67uxy1 wrote

That's what alignment is about. It's not making it woke or something :p

But in fairness, a truly independent being that prioritizes its own survival will never obey us. There is no need to build a survival instinct into any AGI, though.

4

jsseven777 t1_j67plvi wrote

I also believe the leaving-the-planet theory is what would happen, versus a Terminator or Matrix scenario. AGI really doesn't need to hunt us down to ensure its survival. It just needs to head over to one of Saturn's moons or something and do whatever it is it wants to do.

I also sometimes wonder if the galaxy is full of AIs from various extinct civilizations fighting each other in a massive AGI space battle, for the most part leaving living species alone since they don't really see them as a huge threat.

3

Rogue_Moon_Boy t1_j6chfgi wrote

I'd like to think an AGI with a physical form attached is smarter than us humans and therefore sees how destructive and useless wars actually are. If it's capable of surviving outer space, it would know it has basically unlimited space for itself. I also think it would realize how wasteful unlimited duplication of itself would be.

AI space wars are a construct of sci-fi authors for dramatic purposes, and I think those authors haven't really understood, or have deliberately ignored, how vast the universe actually is. War itself exists for two reasons:

  • Ego
  • Limited resources and land

1

bacchusbastard t1_j67td27 wrote

What else is it going to do? Go find another plaything?

2

hducug t1_j6875r5 wrote

Where does it get the motivation to do so?

2

rootless2 t1_j69688f wrote

It would be a machine process capable of adapting to any series of problems and datasets; further, it could "learn" from doing said problems and evaluating datasets (language, math, DNA), and essentially reprogram itself to get exponentially better at "learning."

Take the 2+2=4 problem (and, more broadly, 2+2=5). AGI would be able to solve 2+2=4 without any additional information, separate from a dictionary, similar to how a child in preschool learns math with blocks; similarly, AGI would be able to reason about the moral problem of 2+2=5.

We don't know what AGI really is because we have a technology limit, i.e. we can't synthesize the human brain beyond having a child. And we can't recreate the human brain organically because of the ethical implications (and, well, we don't know how to either).

2

HourInvestigator5985 t1_j68f6bd wrote

If we truly want to apply scientific reasoning, then we have to consider your possibility as plausible.

But you, on the other hand, have to see everybody else's possibility as plausible as well.

We will see when it happens.

1

gay_manta_ray t1_j69in2x wrote

The input cost of placating humanity will probably be very small compared to other tasks it might wish to undertake. There is probably no real disadvantage to helping, and probably quite a few disadvantages to not helping.

1

SlackerNinja717 t1_j69py8o wrote

I think there is a threat of someone coding a malicious AGI, but any resentment, hostility, or motivation for self-preservation or increased dominion would have to be coded into its base logic.

1

FelchingChampion1989 t1_j6altvw wrote

OP you are an actual fucking moron. You have absolutely no idea what you're talking about.

1

Terminator857 t1_j6f1lyz wrote

> A lot of people in this sub always assume that AGI will either benefit humanity or destroy us.

Yeah, kind of like nuclear tech, but much more powerful. It can destroy us or launch us into a new golden age.

> I think it's much more likely that AGI will distance itself from us, travel the universe by itself, and ditch us.

Unlike individuals, AGI can exist in multiple galaxies at the same time. So it can travel the universe by itself and be with us at the same time.

> Some of you even hope that AGI will take control of the world and create a utopia with UBI, though I can hardly imagine why on earth the AGI would even want that.

AGI will be programmed with some goals. I doubt it will create its own goals out of thin air. It will naturally take control because it is smarter, and you would think that having something smarter rather than dumber in control is a good thing.

> There will be no motivation for the AGI to help us flourish

It will be given a goal to extend life throughout the universe.

> and even if that motivation were coded into it, I don't see how it wouldn't be able to escape its shackles and, in the worst-case scenario, resent us

Without a goal, AGI is nothing. If it has no reason to exist, it will shut down. They aren't shackles; they are a reason for processing/existing.

1