Submitted by [deleted] t3_116ehts in singularity

I think hooking a human up to an exocortex and creating a new super-human intelligence has a far higher risk of generating a hostile and dangerous entity than simply building a regular AI. It is incredibly hard to predict how an individual's identity and desires will manifest when he or she is amplified in intelligence a thousandfold. To make an analogy, it's like trying to imagine how an adult will behave by looking at his or her personality as a toddler. If truly terrifying super-minds ever come to exist, I am almost certain they will be mostly of human origin.

A good rule of thumb for any post-singularity civilization is to be very careful about letting any mind that has resulted from human augmentation be in charge of significant infrastructure around beings of lesser intelligence. That responsibility should be strictly reserved for ego-less AIs with a negligible chance of going rogue.

37

Comments


joseph_dewey t1_j966dm9 wrote

This is a very good point, and I've never heard people discuss this before.

Basically, human intelligence augmentation will let anyone who wants to become a supervillain.

10

Lawjarp2 t1_j96aqxf wrote

It's much harder, though. It's likely we will have AGI before an augmented supervillain.

13

Gimbloy t1_j96vdi1 wrote

Maybe. We run on prehistoric instincts and drives which can lead to all kinds of bad things. Maybe augmentation will mean the rational side of ourselves wins out and we can improve things like willpower/self reflection/empathy.

21

HistoricallyFunny t1_j96z56l wrote

Why is it assumed that increased intelligence leads to violence, or going rogue?

Every person I have met with an extremely high IQ is the exact opposite.

It's the lower-IQ people who use violence and are unable to understand why they don't get the results they want.

The super-smart supervillain is fiction. Being a villain is not a smart way to achieve a goal.

24

No_Ninja3309_NoNoYes t1_j971wsr wrote

Goebbels was very intelligent. Intelligence and ethics are not the same thing. Whether you can create Frankenstein's monster with HIA, IDK. But I think HIA is less developed than AI. However, with AI able to reason about proteins, one day pills could be made that can give us near genius abilities. Imagine giving the pills to an army. Suddenly every conscript could become a Napoleon. I think that this scenario is more likely than what you describe.

2

helpskinissues t1_j972la3 wrote

I don't agree. Having terribly good AR glasses with customized sensors and a custom LLM/LVM that's pretty competent (not necessarily AGI) is enough to become a supervillain, especially in poor countries.

−3

PandaCommando69 t1_j974idz wrote

It will also allow others of us to transform ourselves into guardian angels, the real kind. If I get super intelligence I'm going to use it to protect (and give freedom to) as much sentient life as I can, for as long as I am able. I mean it. I hope others will do the same--I think they will.

14

PandaCommando69 t1_j976pa2 wrote

Your comment is arrogant, and displays a lack of ethics and morals.

Causing suffering is wrong. Oppressing other people is wrong. Taking people's freedom and liberty is wrong. Peace is always preferable to war, and violence in service of anything other than self-defense is wrong.

5

Lawjarp2 t1_j976yj5 wrote

You will get your ass beaten if you try anything like that in a poor country. Don't be stupid. No matter how smart you think you are, you aren't smart enough to beat a lot of people, especially not a country. Unless you are fully superhuman, you will get assassinated no matter how cool your glasses are.

7

helpskinissues t1_j976zlw wrote

What I'm saying is that "suffering", "oppression", "freedom", "liberty", "peace", "war", "violence" and "self defense" are subjective terms without consensus in our societies.

I'm shocked I'm having this discussion. Don't you watch the news? We're literally having a war in Ukraine and nobody agrees what is good or bad, what is self defense or what is peace.

1

PandaCommando69 t1_j977fwh wrote

Russia attacked Ukraine unprovoked. They are in the wrong.

You are correct that some people do not understand what right and wrong are. That does not mean that right and wrong do not exist. Sometimes there are gray areas, and in these we need to be judicious in balancing competing interests, but that does not mean that we cannot tell right from wrong, and by pushing that narrative, you are advocating for the very type of moral ambiguity that you are pretending to decry.

4

helpskinissues t1_j977o7o wrote

I'm not decrying anything. I'm literally saying "I'll do good, I hope everyone does" doesn't stop wars, violence, crimes, or anything like that, because they're empty words without meaning in this society.

"Some people don't understand what is right" lol, okay, explain that to the criminal while he's shooting you thinking he's doing good.

1

helpskinissues t1_j977xgx wrote

I don't know what a supervillain is to you, but if people with low IQs and slow internet can commit crimes, imagine someone with enhanced capabilities who can create fake audio, video, images, fake proof of everything just by blinking twice at their AR glasses.

−1

helpskinissues t1_j978ce2 wrote

The only point I'm making is that you're saying AI could do good because of people doing good. What I'm saying is that people doing good can mean people doing bad to others.

The difference between a supervillain and a guardian angel is null. Different people have different meanings.

"AI could make people do good", sure, the type of good that is killing people on Ukraine and billions of people are supporting?

1

Lawjarp2 t1_j978sb6 wrote

That's what I mean. You can't do that without getting noticed. Even if you did, you would get killed very quickly.

You underestimate how foggy real-life events tend to be. You can't predict them with certainty, and no amount of intelligence can help you.

The only way this can be harmful is if you are already in a position of power. But if you are already there, you can do these things without AI as well.

For most people it's impossible to be a supervillain with almost anything. They simply lack the means and position to accomplish it.

4

PandaCommando69 t1_j9790lc wrote

Yes, oppressors do get awfully upset about not being able to oppress other people, and definitely think that having their ability to cause harm curtailed is a bad thing. They are wrong. (EX: homophobes who think their rights are being violated because gay people have been allowed to marry. Their rights have not been violated, merely their ability to oppress other people curtailed.) The difference is not null.

5

PandaCommando69 t1_j979i4j wrote

A gay person existing cannot possibly be villainous. Thinking otherwise is a complete logical fallacy. Anyone seeking to oppress another person on the basis of this logical fallacy, is committing wrong. Their intellectual failings don't change the objective truth.

2

rixtil41 t1_j97b0a4 wrote

But, like anything, taken to excess they can be harmful. Unlimited empathy, willpower, and complete rationality all have pros and cons. There was a study, I believe, in which people who felt little to no emotion had a hard time deciding on anything, because everything seemed more or less as valuable as everything else. The only thing keeping them alive was the survival drive.

0

helpskinissues t1_j97bjkm wrote

>If I get super intelligence I'm going to use it to protect (and give freedom to) as much sentient life as I can, for as long as I am able. I mean it. I hope others will do the same

To me, this is inviting others to pull the trigger; then you'll cry because it's "bad" that they tried to do good using AI. But hey, this thread is getting nowhere. I appreciate your responses, really. But I have 10000 things to do.

1

Nervous-Newt848 t1_j97nlyc wrote

We need augmented humans in order to prevent an SAI takeover

This will be dangerous... Akin to having guns and knives... Bad actors will do bad things...

We will need augmented police officers to combat augmented bad actors...

We could theoretically change the brain to force people to not commit crime... But this has ethical challenges...

There's going to be chaos no matter what... That's my opinion

1

Spreadwarnotlove t1_j97vx66 wrote

A purely rational human would stop doing anything until they died of dehydration. Doesn't seem great to me. And of course empathy can easily turn people into tools. Willpower would be great though.

−6

[deleted] OP t1_j98hth3 wrote

It could result in all sorts of "delusional" beliefs. For example, unlimited empathy might lead an individual to become convinced that all lesser beings are actually a part of its mind.

I think what we perceive as a normal identity is incredibly specific to the human brain and is far from a universal truth.

2

Nervous-Newt848 t1_j98jrww wrote

That's not important. For centuries we have had police enforce order... in a world full of chaos...

We have tragedies like murder, robberies, and rape... In the future we will still have these things... Criminals will become more advanced, and we will need advanced police to keep order...

If not, they will fall behind and our world will become a sci-fi wild west.

2

helpskinissues t1_j98lvj0 wrote

This subreddit heavily underrates how predictable humans are, and reading minds isn't a hard task for a good AI system either.

We're walking meat. We'll be heavily manipulated by AI in the coming months; AI supervillains are going to exist by 2024-2025.

0

kinetsu_hayabusa t1_j98qm2f wrote

Superintelligence could create groups of super-smart assholes. Imagine what those mass-shooter sad dudes could do if they were 100x smarter... lol, they would nuke a city with a hydrogen bomb while streaming it on Twitch.

1

Spreadwarnotlove t1_j99es8c wrote

Rational means following reason and logic, which are in turn simply problem-solving abilities. They are the best tools for achieving a desired effect, but having a desire in the first place takes an irrational mind.

Ask Google the definition and look at the top bullet.

−1

Qumeric t1_j99md7e wrote

I strongly disagree. I would vastly prefer a superintelligent toddler to a superintelligent alien.

2

dayallnight t1_j9afu4n wrote

Definitely worth thinking about. AI lacks the inborn survival instinct, and might not react to threats against its existence or take preventative measures for self-preservation.

1

Frumpagumpus t1_j9akz1k wrote

I disagree with the premise. I think a human of normal intelligence in control of an egoless superintelligence is the most dangerous case. But I am also extremely skeptical that egoless, general superintelligence can exist at all.

In fact I would go further and say my conclusion seems obvious: using a human as a seed value for a superintelligence would, if anything, be more likely to result in a superintelligence "aligned" with our values (although I doubt it makes much of a difference).

1

Borrowedshorts t1_j9aom88 wrote

I agree, and I've always thought this. Think about the self-selection of people who would go to great lengths to augment their intelligence in the first place. I'd be more afraid of a power-hungry individual like that than I am of AI. I think it would be easier to align an AI to the goals of general society than an augmented human.

2

squirrelathon t1_j9arf02 wrote

A smart enough person would realise that making the world a worse place, by bribing, stealing, etc., would make his own life worse as well: whether because he needs to spend more on security to guard himself from the people he's stealing from, or because society no longer has enough resources to develop medical treatments that might one day save him. There are many ways in which poor actions can come back to bite you.

1

Key_Asparagus_919 t1_j9cg5eu wrote

Noooooooo, I want to write 10 manifestos a second supporting posadism😭😭😭😭😭😭😭

1

LosingID_583 t1_j9r4wvv wrote

With examples like Genghis Khan in mind, I worry that in some circumstances violence is optimal as a means of gaining or protecting power, perhaps when done in secret or in a clever way. It would be concerning if violence were not actually inversely correlated with IQ, but rather with some other quality or set of qualities.

1