
gaudiocomplex t1_j9czm28 wrote

This is a SPECTACULARLY terrible take. Maybe not #3, but the rest is so bad. 😂

OP: you're talking about AI alignment, and yes, currently there's no way to prevent AI from killing us all if we were to develop AGI. The AI community talks a lot about this at lesswrong.com. I recommend going there instead of listening to idiots here.

Here's a fun one

Favorite part:

>"The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point. My lower-bound model of "how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second".

−3

Fluid_Mulberry394 OP t1_j9d3dcb wrote

Whatever, but that take certainly is the basis of an apocalyptic novel.

7

gaudiocomplex t1_j9d6xqo wrote

That would make a bad novel.

The very point is that it's spectacularly easy to kill us all without any drama or theatrics.

−5

Low-Restaurant3504 t1_j9d4txa wrote

"Your take is bad. My fanfiction proves it."

3


Futurology-ModTeam t1_j9da7vw wrote

Hi, Low-Restaurant3504. Thanks for contributing. However, your comment was removed from /r/Futurology.


> r/iamverysmart bait.


> Rule 6 - Comments must be on topic, be of sufficient length, and contribute positively to the discussion.

Refer to the subreddit rules, the transparency wiki, or the domain blacklist for more information.

[Message the Mods](https://www.reddit.com/message/compose?to=/r/Futurology&subject=Question%20regarding%20the%20removal%20of%20this%20comment%20by%20/u/Low-Restaurant3504&message=I%20have%20a%20question%20regarding%20the%20removal%20of%20this%20comment%20if%20you%20feel%20this%20was%20in%20error.)

1