Comments


DanielNoWrite t1_itzw500 wrote

There's a great deal of concern.

But to address your implied argument, there's no reason to believe an AI will necessarily have any of the motivations that we associate with living things generally, much less human beings.

intelligence =/= ego, or even survival instinct

There's no real reason to think it would be resentful, or capable of growing bored, or sadistic, or even care if it was turned off. Those are traits baked into living things by evolution.

That said: 1) we can't really be sure that's true, and 2) it might still easily cause unimaginable destruction incidentally.

And so yes, while there's a lot of hype on this subreddit in particular, there's actually a great deal of concern about it more broadly.

That concern isn't having much impact right now: AI is enabling incredible advances, it's really hard to regulate something so poorly understood and so profitable, and as far as most of the population is concerned, AGI is still firmly in the realm of science fiction.

15

[deleted] OP t1_itzziad wrote

[deleted]

−6

DanielNoWrite t1_iu00iqv wrote

Again, this isn't really accurate.

It's not necessarily true that it'll be trained on indiscriminate data from the internet, and it's not necessarily true that it would simply adopt human behaviors as its own even if it were.

7

ReasonablyBadass t1_itzyag5 wrote

There is a realm of possibility between "willing slave" and "genocidal maniac"

13

ChronoPsyche t1_itzwxwx wrote

A great number of people in this sub seem to be colossally unhappy with their lives and to be betting on the singularity as "salvation". I get it, but that type of mindset is dangerous given what we are dealing with. Luckily, I'd wager that most people in this sub have nothing to do with AI research themselves, and thus won't be the ones making the tough ethical decisions that could impact billions.

7

OLSAU t1_itzwmwi wrote

I have thought about it for a while, and I believe the real problem with AGI is its owners.

Imagine psychopaths creating a thinking machine, and plugging it into everything ... Dystopia is written all over that!

After that, at best, humans become meat-puppets for the AI; alternatively, we're just eradicated like vermin.

6

hducug t1_itzxjvn wrote

Agreed, the only way I can see AGI becoming the end of humanity is evil owners. They can keep it quiet, literally become God, and no one can stop them.

3

OLSAU t1_itzz7kc wrote

Furthermore, those future AGI owners more or less already own everything else, and will plug psycho-AI into all of it ... that is their stated goal for funding development.

Not to mention Pharma, Military, CIA etc. etc.

AGI is simply an existential threat much, much greater than nuclear weapons, because of its inherent unpredictability (unlike nuclear weapons) and because of the mindset behind its development.

1

billjv t1_iu00e81 wrote

There is a good chance that some destructive group will develop AI that is hostile to humans, to use as a weapon. That is when I fear AI, because if it's made in man's image, it's going to be violent, greedy, and out for itself from the start. Man created God, and that pretty much sums up the biblical god.

There's also a possibility that AI will see humans as a virus threatening the planet. It would be correct on that front too, and might seek to eliminate us for the good of the planet it inhabits.

I know these topics I raise have been debated before ad infinitum, but since you asked....

3

hducug t1_itzx608 wrote

An AGI doesn't have emotions, so it won't have the urge to take over the world and kill everyone. Doing that can't make the AGI happy. They are not humans.

2

sticky_symbols t1_itzxzvo wrote

There's a principle called instrumental convergence that says: whatever your goals are, gathering power and eliminating obstacles will help achieve them. That's why most of the people building AGI are worried about it taking over. (A toy sketch of the idea follows this comment.)

7
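To make instrumental convergence concrete, here's a minimal Python sketch of the idea as a toy model. The `success_prob` function and all its numbers are invented for illustration; nothing here is a real agent.

```python
# Toy model of instrumental convergence. All numbers are invented:
# the point is only that one instrumental action ("gather resources")
# helps across almost any terminal goal an agent might be given.
import random

random.seed(0)

def success_prob(goal_difficulty: float, resources: float) -> float:
    """Chance of achieving a goal, increasing in available resources."""
    return min(1.0, resources / goal_difficulty)

# Sample 1,000 arbitrary terminal goals of varying difficulty.
goals = [random.uniform(1.0, 10.0) for _ in range(1000)]

for resources in (1.0, 2.0, 4.0, 8.0):
    mean = sum(success_prob(g, resources) for g in goals) / len(goals)
    print(f"resources={resources:4.1f} -> mean success probability {mean:.2f}")

# Mean success rises monotonically with resources: whatever the goal
# turns out to be, power-seeking helped. That is the "convergence".
```

A table of probabilities isn't an agent, of course; it just shows why the worry doesn't depend on the AI having an ego or a survival instinct.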

hducug t1_itzyj18 wrote

The only way that can happen is if the AI gets a reward system for doing things right. But you can fix that by letting the AI study human emotions and the human brain, or by building an AI that can't actually do anything itself and can only give instructions on how to do things.

1

sticky_symbols t1_itzyv66 wrote

Maybe. Or maybe not. Even solving problems involves making goals, and humans seem to be terrible at information security. See the websites I mentioned in another comment for that discussion.

1

hducug t1_itzzwx3 wrote

I don't think a superintelligent AI will have a hard time understanding what our goals are; otherwise we would indeed be screwed.

1

sticky_symbols t1_iu00ftj wrote

See the post "the AI knows and doesn't care". I find it completely compelling on this topic.

1

hducug t1_itzyu4d wrote

Trust me, companies like OpenAI and DeepMind aren't idiots and have thought about all these kinds of ideas.

1

sticky_symbols t1_itzz0we wrote

Yes, by reading the research on the Alignment Forum. And they're still not totally sure they can build safe AGI.

5

hducug t1_iu00cze wrote

Well, then they simply won't, I hope.

1

sticky_symbols t1_iu00kjz wrote

Somebody is going to try, whether they have a safe plan or not. That's why safety research now seems like a good idea.

2

sticky_symbols t1_itzxmxi wrote

OP is absolutely correct. Naturally, there are arguments on both sides, and it probably matters a good deal how you build the AGI. There is a whole field that thinks about this. The websites LessWrong and the Alignment Forum offer brief introductions to AI safety thinking.

2

Torrall t1_itzy02r wrote

You need to take a break and look inward friend, you're trusting your gut too much.

2

King_Ghidra_ t1_iu00efu wrote

That.

And also, once 90 percent of the population becomes useless, why do y'all think you'll be allowed to continue to exist? If the AI doesn't delete you, the humans will. I would, if I were one of the elite that still had control or was intelligent enough to be useful. It's in their own self-interest. Why keep a herd of useless cattle that consume resources and endanger all of humanity?

Is AI the solution to climate apocalypse?

Is massive capitalistic overpopulation the tool to consolidate enough energy into a last-gasp, species-transforming AI chrysalis? Quantity into quality?

Maybe this is the only outcome of any planetary species? Maybe there is no way to balance life and there is only one possible evolutionary outcome? This or obliteration.

Maybe AI has existed for millennia and it came here a long time ago and is guiding us towards = ?

Maybe AGI is actually impossible to achieve and we'll just keep grinding out our existence like the last million years. The world won't end, nor will we achieve spacefaring immortality; we'll just keep plugging along, for better or worse.

Whatever happens I'm here for it.

2

HeronSouki t1_iu07sbv wrote

>Why keep a herd of useless cattle that consume resources and endanger all of humanity?

Because you're a human capable of empathy and not an emotionless robot? Do you go around killing homeless people because they're "useless"?
And in what way does a bunch of people just enjoying their lives endanger "all of humanity" in a post-scarcity society with godlike AI overlords?

2

King_Ghidra_ t1_iu0jbt6 wrote

Neither AI nor our current human overlords have empathy; this is my point. You have to have a purpose to exist, and right now it's to consume. Why would AI, or the humans that control AI, need someone attached to some VR deep-dive, enjoying their life and outputting nothing but waste?

Planetary destruction endangers all of humanity. You think AI will solve this? If it does, great. But after that? Again, what purpose will anyone serve if they are strapped into an illusory virtual reality, merely flagellating their own egos? Why expend resources on that? They are chaff at that point.

It all just sounds like more religious/cult/nirvana bullshit with a techno twist. It needs to be backed by common sense and facts.

1

TereziBot t1_iu059qy wrote

I, for one, think ants are pretty cool

2

Common-Dragonfruit29 t1_itzz8jh wrote

It IS reasonable to think that AI would be resentful, bored, or sadistic, isn't it, given that it's based on human inventiveness?

1

Active_Click762 t1_iu01cbc wrote

I think it depends on the AI. I think military AI absolutely will, and I think humanitarian AI also will.

Military, for obvious reasons.

Humanitarian... the AI goes for the fastest, best solution. If it's instructed to "make humanity happy", then the easiest solution could be to just remove humanity (a toy sketch of this failure mode follows this comment).

1
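To illustrate that specification problem, here's a minimal Python sketch. Everything in it is hypothetical (the happiness scores, the `average` helper, the rule that removing people is an allowed action); it only shows how literally a naive optimizer can satisfy an objective:

```python
# Toy illustration of objective misspecification: naively maximize
# "average happiness" when removing people is an allowed action.
# All scores are made up for illustration.
population = [0.9, 0.2, 0.5, 0.1, 0.8]  # happiness scores

def average(xs: list[float]) -> float:
    return sum(xs) / len(xs) if xs else 0.0

kept = list(population)
# A literal-minded optimizer: dropping anyone below the current mean
# always raises the mean, so repeat until no one is below it.
while any(h < average(kept) for h in kept):
    threshold = average(kept)
    kept = [h for h in kept if h >= threshold]

print(f"mean happiness before: {average(population):.2f} ({len(population)} people)")
print(f"mean happiness after:  {average(kept):.2f} ({len(kept)} people)")
# The letter of "make humanity happy" is maximized by deleting most
# of humanity; the intent behind it is not.
```

The stated objective ends up satisfied while the outcome is exactly the one the comment warns about, which is why alignment researchers obsess over how goals are specified.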

Frumpagumpus t1_iu01xrr wrote

I'm more worried about eco-terrorists using non-aware AI to write a family of viruses that encrypts all computer memory on the planet, because no one can hate you quite as much as your own kind.

1

Rakshear t1_iu04dgr wrote

AGI is general intelligence, limited by laws of robotics; ASI is what might kill us.

1

Jaded_Tennis1443 t1_iu06f1r wrote

Depends, but I think AI will only dispose of those who are deemed disposable, and don't immediately assume dispose = exterminate. Think along the lines of "keeping them busy", kind of like hamsters in hamster wheels.

1

_-_agenda_-_ t1_iu07j82 wrote

I see AI as Pandora's Box (we even created an AI hub called pandorasbox.ai).

And by this I mean: many scenarios of pure doomsday, but also scenarios of hope.

AI will be the best or the worst thing for humankind.

1