Submitted by [deleted] t3_116ehts in singularity

I think hooking a human up to an exocortex and creating a new super-human intelligence has a far higher risk of generating a hostile and dangerous entity than simply building a regular AI. It is incredibly hard to predict how an individual's identity and desires will manifest when he or she is amplified in intelligence a thousandfold. To make an analogy, it's like trying to imagine how an adult will behave by looking at his or her personality as a toddler. If truly terrifying super-minds ever come to exist, I am almost certain they will be mostly of human origin.

A good rule of thumb for any post-singularity civilization is to be very careful about letting any mind that has resulted from human augmentation be in charge of significant infrastructure around beings of lesser intelligence. That role should be strictly reserved for ego-less AIs with a negligible chance of going rogue.

37

Comments


HistoricallyFunny t1_j96z56l wrote

Why is it assumed that increased intelligence leads to violence, or going rogue?

Every person I have met with an extremely high IQ is the exact opposite.

It's those with lower IQs who use violence and are unable to understand why they don't get the results they want.

The super-smart supervillain is fiction. Being a villain is not a smart way to achieve a goal.

24

TheSecretAgenda t1_j97bdhm wrote

No, a smart villain knows how to apply violence effectively. He becomes wealthy and bribes politicians to have the state deal with his enemies.

8

squirrelathon t1_j9arf02 wrote

A smart enough person would realise that making the world a worse place, by bribing, stealing, etc., would make his own life worse as well: whether because he needs to spend more on security to guard himself from the people he's stealing from, or because society no longer has the resources to develop medical treatments that might one day save him. There are many ways in which poor actions can come back to bite you.

1

helpskinissues t1_j972pug wrote

A person with low IQ can use tools made by people with high IQ to make smart plans to achieve evil goals though.

7

cneakysunt t1_j9752yw wrote

Anecdotally I agree; however, the ability to empathise is key.

3

LosingID_583 t1_j9bsf75 wrote

So if we augment everyone's IQ to be very high, then do you believe that there will no longer be any violence?

1

[deleted] OP t1_j9qvf9z wrote

[deleted]

1

LosingID_583 t1_j9r4wvv wrote

With examples like Genghis Khan, I worry that in some circumstances violence is optimal as a means of gaining or protecting power, perhaps if done in secret or in some clever way. It would be concerning if violence were not directly inversely correlated with IQ, but rather with some other quality or set of qualities.

1

Gimbloy t1_j96vdi1 wrote

Maybe. We run on prehistoric instincts and drives that can lead to all kinds of bad things. Maybe augmentation will mean the rational side of ourselves wins out, and we can improve things like willpower, self-reflection, and empathy.

21

rixtil41 t1_j97b0a4 wrote

But anything in excess can be harmful. Unlimited empathy, unlimited willpower, and complete rationality all have pros and cons. I believe there was a study in which people who felt little to no emotion had a hard time deciding on anything, because everything seemed more or less as valuable as everything else. The only thing keeping them going was survival instinct.

0

[deleted] OP t1_j98hth3 wrote

It could result in all sorts of "delusional" beliefs. For example, unlimited empathy might lead an individual to become convinced that all lesser beings are actually a part of its mind.

I think what we perceive as a normal identity is incredibly specific to the human brain and is far from a universal truth.

2

Spreadwarnotlove t1_j97vx66 wrote

A purely rational human would stop doing anything until they died of dehydration. Doesn't seem great to me. And of course empathy can easily turn people into tools. Willpower would be great though.

−6

[deleted] OP t1_j97yff4 wrote

[deleted]

7

rixtil41 t1_j98nxkk wrote

From a survival standpoint, a rational person would drink, but the survival drive itself is irrational.

3

Spreadwarnotlove t1_j997iy2 wrote

Why would they drink? There's no rational reason to avoid pain or death. That's just our instincts driving us.

Reason and logic are just tools to help you fulfill your irrational desires.

−2

[deleted] OP t1_j99axss wrote

[deleted]

2

Spreadwarnotlove t1_j99es8c wrote

Rational means following reason and logic, which are in turn simply problem-solving abilities. They are the best tools for getting a desired effect, but having a desire in the first place takes an irrational mind.

Ask Google the definition and look at the top bullet.

−1

dragon_dez_nuts t1_j9a1uxb wrote

So me being a petty asshole is logical. Nice 🙂

2

Spreadwarnotlove t1_j9a35ow wrote

Depends on your goal. If you want to piss people off or drive them away. Yeah. I'd say being a petty asshole is the logical choice.

0

VladVV t1_j9vzgvs wrote

How in the world is that assertion self-evident? You can’t just say something like that without reasoning it.

1

Spreadwarnotlove t1_j9xflms wrote

It's simple. What rational reason is there to avoid pain and death? There is none. You simply want to avoid them out of animalistic instinct.

1

Lawjarp2 t1_j96aqxf wrote

It's much harder though. It is likely we will have AGI before an augmented supervillain.

13

helpskinissues t1_j972la3 wrote

I don't agree. Having really good AR glasses with customized sensors and a pretty competent custom LLM/LVM (not necessarily AGI) is enough to become a supervillain, especially in poor countries.

−3

Lawjarp2 t1_j976yj5 wrote

You will get your ass beaten if you try anything like that in a poor country. Don't be stupid. No matter how smart you think you are, you aren't smart enough to beat a lot of people, especially not a country. Unless you are fully superhuman, you will get assassinated no matter how cool your glasses are.

7

helpskinissues t1_j9775s4 wrote

There are guys in the Philippines with broken English scamming people from Norway on Tinder every week, and you're saying a person enhanced with an LLM/LVM and enhanced sensors can't commit crimes?

−1

Lawjarp2 t1_j977s5y wrote

Scamming people is not becoming a supervillain

8

helpskinissues t1_j977xgx wrote

I don't know what a supervillain is to you, but if people with low IQs and slow internet can commit crimes, imagine someone with enhanced capabilities who can create fake audio, video, images, fake proof of everything, just by blinking twice at their AR glasses.

−1

Lawjarp2 t1_j978sb6 wrote

That's what I mean. You can't do that without getting noticed, and even if you did, you would get killed very quickly.

You underestimate how foggy real life events tend to be. You can't predict with certainty and no amount of intelligence can help you.

The only way this can be harmful is if you are already in a position of power. But if you are already there you can do things without AI as well.

For most people it's impossible to become a supervillain with almost anything. They simply lack the means and position to accomplish it.

4

rixtil41 t1_j97ewyy wrote

For me, that's good, because a society with a lot of supervillains is an unstable one.

1

rixtil41 t1_j97ginz wrote

Being a supervillain means being able to affect society negatively at large. That scamming person is just a regular villain.

1

[deleted] OP t1_j98gwm1 wrote

I'm thinking more of posthuman nutjobs off in the asteroid belt occasionally committing sick crimes and acts of terrorism.

0

helpskinissues t1_j98lvj0 wrote

This subreddit heavily underrates how predictable humans are; reading minds isn't a hard task for a good AI system either.

We're walking meat. We'll be heavily manipulated by AI in the coming months; AI supervillains are going to exist by 2024-2025.

0

joseph_dewey t1_j966dm9 wrote

This is a very good point, and I've never heard people discuss this before.

Basically, human intelligence augmentation will let anyone who wants to become a supervillain do so.

10

PandaCommando69 t1_j974idz wrote

It will also allow others of us to transform ourselves into guardian angels, the real kind. If I get super intelligence I'm going to use it to protect (and give freedom to) as much sentient life as I can, for as long as I am able. I mean it. I hope others will do the same--I think they will.

14

helpskinissues t1_j974yr8 wrote

The lack of political knowledge in this sub is crystal clear. You seem to misunderstand that peace, good, bad, violence... aren't agreed-upon concepts.

What is good for you can be bad for me.

2

PandaCommando69 t1_j976pa2 wrote

Your comment is arrogant, and displays a lack of ethics and morals.

Causing suffering is wrong. Oppressing other people is wrong. Taking people's freedom and liberty is wrong. Peace is always preferable to war, and violence in service of anything other than self-defense is wrong.

5

helpskinissues t1_j976zlw wrote

What I'm saying is that "suffering", "oppression", "freedom", "liberty", "peace", "war", "violence" and "self defense" are subjective terms without consensus in our societies.

I'm shocked I'm having this discussion. Don't you watch the news? There's literally a war in Ukraine and nobody agrees on what is good or bad, what counts as self-defense or peace.

1

PandaCommando69 t1_j977fwh wrote

Russia attacked Ukraine unprovoked. They are in the wrong.

You are correct that some people do not understand what right and wrong are. That does not mean that right and wrong do not exist. Sometimes there are gray areas, and in these we need to be judicious in balancing competing interests, but that does not mean that we cannot tell right from wrong, and by pushing that narrative, you are advocating for the very type of moral ambiguity that you are pretending to decry.

4

helpskinissues t1_j977o7o wrote

I'm not decrying anything. I'm literally saying that "I'll do good, I hope everyone does" doesn't stop wars, violence, crimes or anything like that, because they're empty words without meaning in this society.

"Some people don't understand what is right"? Lol, okay, explain that to the criminal while he's shooting you, thinking he's doing good.

1

PandaCommando69 t1_j9783cq wrote

You're not being very clear about the points you're trying to make. I do not know if that is by design, or because your thoughts on this topic are still under construction.

5

helpskinissues t1_j978ce2 wrote

The only point I'm making is that you're saying AI could do good because people will do good. What I'm saying is that people doing good can mean people doing bad to others.

The difference between a supervillain and a guardian angel is null. Different people have different meanings.

"AI could make people do good", sure, the type of good that is killing people on Ukraine and billions of people are supporting?

1

PandaCommando69 t1_j9790lc wrote

Yes, oppressors do get awfully upset about not being able to oppress other people, and definitely think that having their ability to cause harm curtailed is a bad thing. They are wrong. (EX: homophobes who think their rights are being violated because gay people have been allowed to marry. Their rights have not been violated, merely their ability to oppress other people curtailed.) The difference is not null.

5

helpskinissues t1_j97964v wrote

To them, a gay person is a villain; to you, the homophobe is a villain. As simple as that. They'll both use AI to do "good": one to be gay, the other to be homophobic.

1

PandaCommando69 t1_j979i4j wrote

A gay person existing cannot possibly be villainous. Thinking otherwise is a complete logical fallacy. Anyone seeking to oppress another person on the basis of this logical fallacy, is committing wrong. Their intellectual failings don't change the objective truth.

2

helpskinissues t1_j979mnv wrote

And they'll say the same of your thinking.

1

PandaCommando69 t1_j979s5e wrote

Yes, but they would objectively be incorrect. That's the difference.

2

helpskinissues t1_j97aoel wrote

That won't stop the bullet.

1

PandaCommando69 t1_j97b4n9 wrote

Did you hear me say somewhere that I thought that being in the right was impenetrable armor against someone doing something awful? I didn't. I sure do wish that it was though, that would be very nice.

3

helpskinissues t1_j97bjkm wrote

>If I get super intelligence I'm going to use it to protect (and give freedom to) as much sentient life as I can, for as long as I am able. I mean it. I hope others will do the same

To me, this is inviting others to pull the trigger; then you'll cry because it's "bad" that they tried to do good using AI. But hey, this thread is going nowhere. I appreciate your responses, really, but I have 10,000 things to do.

1

PandaCommando69 t1_j97bw7d wrote

Your reply honestly does not make sense. Have a good rest of your day.

2

rixtil41 t1_j97dcrn wrote

Same to you. There is no such thing as objective morality, though you seem to imply there is.

3

HeinrichTheWolf_17 t1_j9752f8 wrote

I mean, that’s assuming they’ll still have such ambitions once they’re posthuman (See Jon Osterman/Doctor Manhattan)

4

No_Ninja3309_NoNoYes t1_j971wsr wrote

Goebbels was very intelligent; intelligence and ethics are not the same thing. Whether you can create Frankenstein's monster with HIA (human intelligence augmentation), I don't know. But I think HIA is less developed than AI. However, with AI able to reason about proteins, one day pills could be made that give us near-genius abilities. Imagine giving those pills to an army: suddenly every conscript could become a Napoleon. I think that scenario is more likely than what you describe.

2

Qumeric t1_j99md7e wrote

I strongly disagree. I would vastly prefer a superintelligent toddler to a superintelligent alien.

2

Borrowedshorts t1_j9aom88 wrote

I agree, and I've always thought this. Think about the self-selection of the kind of people who would go to great lengths to augment their intelligence in the first place. I'd be more afraid of a power-hungry individual like that than I am of AI. I think it would be easier to align an AI to the goals of general society than to align an augmented human.

2

Nervous-Newt848 t1_j97nlyc wrote

We need augmented humans in order to prevent an SAI takeover

This will be dangerous... Akin to having guns and knives... Bad actors will do bad things...

We will need augmented police officers to combat augmented bad actors...

We could theoretically change the brain to force people to not commit crime... But this has ethical challenges...

There's going to be chaos no matter what... That's my opinion.

1

[deleted] OP t1_j98i5rq wrote

What makes you think augmented humans will even identify as human?

1

Nervous-Newt848 t1_j98jrww wrote

That's not important. For centuries we have had police enforce order... In a world full of chaos...

We have tragedies like murder, robbery and rape... In the future we will still have these things... The criminals will become more advanced, and we will need advanced police to keep order...

If not, they will fall behind and our world will become a sci-fi Wild West.

2

kinetsu_hayabusa t1_j98qm2f wrote

Superintelligence could create groups of super-smart assholes. Imagine what those mass-shooter sad dudes could do if they were 100x smarter... lol, they would nuke a city with a hydrogen bomb while streaming it on Twitch.

1

Primus_Pilus1 t1_j99edg5 wrote

You don't need to be a supervillain to nuke a city. Just decent engineering, a few kilos of plutonium, a few grams of tritium and some other slightly exotic parts.

1

dayallnight t1_j9afu4n wrote

Definitely worth thinking about. AI lacks an inborn survival instinct, and might neither react to threats against its existence nor take preventive measures for self-preservation.

1

Frumpagumpus t1_j9akz1k wrote

I disagree with the premise. I think a human of normal intelligence in control of an egoless superintelligence is the most dangerous scenario. But I am also extremely skeptical that egoless, general superintelligence is even possible.

In fact, I would go further and say my conclusion seems obvious: using a human as a seed value for a superintelligence would, if anything, be more likely to result in a superintelligence that was "aligned" with our values (although I doubt it makes much of a difference).

1

Key_Asparagus_919 t1_j9cg5eu wrote

Noooooooo, I want to write 10 manifestos a second supporting posadism😭😭😭😭😭😭😭

1