
Hekantonkheries t1_irk3lmd wrote

Eh, server/data size limits, limitations in accessible processor speed, infrastructure, power cords.

Tons of things would hard-block an AI long before it became problematic, let alone apocalyptic.

−2

zenzukai t1_irk79dn wrote

You're assuming AI will stay tethered by people. Don't you think a truly superior intellect could persuade a large swath of people to help and protect it?

11

telos0 t1_irklk4n wrote

>Don't you think a truly superior intellect could persuade a large swath of people to help and protect it?

Hell it doesn't even require an SAI to do this.

Even a dumb, straightforward algorithm like Bitcoin is enough to get a large swath of people to dedicate enormous amounts of energy, generating e-waste and doing tremendous damage to the planet, all to guess random numbers.

If some random human could come up with Bitcoin, imagine the kind of economic-incentive-perverting-tragedy-of-the-commons attack a super intelligent AI could come up with, to get us to destroy ourselves without lifting a proverbial finger...
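The "guessing random numbers" being described is proof-of-work mining: brute-forcing a nonce until a hash meets a difficulty target. A minimal sketch of the idea (a toy, not the real Bitcoin protocol, which double-SHA-256 hashes an 80-byte block header against a 256-bit target):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce until SHA-256(block_data + nonce)
    starts with `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # no shortcut: just guess the next number

# Each extra zero digit multiplies the expected work by 16,
# which is why miners burn so much energy on this.
nonce = mine("example block", 4)
```

The incentive is the point: the algorithm itself is trivial, but by paying out to whoever guesses first, it recruits people into doing the work voluntarily.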

10

logginginagain t1_irmhrl7 wrote

Great point. If AI can make our self-destruction profitable, it will win passively.

2

littlebitsofspider t1_irlg6kb wrote

Like propagandizing runaway climate change until public action to curb it is too late? 🤔

1

shawnikaros t1_irkp2z5 wrote

Hypothetically, what stops an AI from creating a self-replicating virus that spreads over BT, Wi-Fi, or whatever signal reaches a processor, taking over those devices to increase its processing power? If there's an AI capable of creating a better version of itself, it can probably melt our firewalls pretty easily.

The only way to stop that would be to unplug every smart device. Back to the 70s.

4

sadroobeer t1_irl0irf wrote

Been messing around with AI models quite a bit, and yeah. We would hit physical restrictions long before most apocalyptic scenarios.

2