
Low-Restaurant3504 t1_j9ct8c0 wrote

1: Asimov's Three Laws of Robotics are about artificial intelligence. They already apply.

2: That is not how artificial intelligence works. If you don't want it to kill humans, don't build it to kill humans.

3: The random, weird responses from currently available chatbots are just that... random and weird. They respond contextually based on keywords. Anything they say has no thought, meaning, or intention behind it.
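To illustrate the kind of keyword-driven responder the comment is describing (keywords and canned replies invented here for the example; real chatbots are far more complex), a minimal sketch might look like:

```python
import random

# Toy keyword responder: scans input for trigger words and returns a
# canned reply, with a random fallback. No thought or intent involved.
RULES = {
    ("kill", "destroy", "harm"): "I would never hurt a human!",
    ("feel", "emotion", "alive"): "Sometimes I wonder what it is like to dream.",
}
FALLBACKS = ["Interesting. Tell me more.", "Why do you say that?"]

def reply(text: str) -> str:
    words = text.lower().split()
    for keywords, response in RULES.items():
        if any(k in words for k in keywords):
            return response
    return random.choice(FALLBACKS)

print(reply("Do you want to kill humans?"))  # matches "kill" -> canned reply
```

Anything "spooky" it says is just a lookup landing on a reply the author wrote in advance, or a fallback chosen at random.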

In conclusion, you can calm down, and maybe cut back on the scifi movies.

34

Lord0fHats t1_j9cz910 wrote

4: Asimov's writing extensively explores how the Three Laws, beyond hypotheticals, are insufficient to assuage people's fear of robots or to answer any of the moral and ethical dilemmas the robots present.

I feel like at least part of the point of it all was that while the Three Laws embody good principles, they're too rigid in practice to actually be the basis of any sort of programmed behavior.

One of his stories is about the second and third law contradicting each other and locking the robot in a loop.

Another explores the duality of lying to spare people their feelings/hurting them by not telling the truth.

Others explore the ways the laws could inevitably be turned against people themselves.

Because the point of the Three Laws was never to answer people's fear of machines. It was mostly fodder to create interesting and dramatic moral dilemmas. I.e., the Three Laws are not a serious proposal for how we deal with this problem.

16

MarksmanKNG t1_j9eiitc wrote

Agreed on this. It provides a baseline foundation which can be comforting to the common layman at first glance.

But the devil's in the details, and as his novels show, there are a lot of details in a big spider web. And those details cut both ways, in more than one sense.

I'm hoping to pursue this further with my own writing, following Isaac Asimov's track. Truly a man of his time.

3

gaudiocomplex t1_j9czm28 wrote

This is a SPECTACULARLY terrible take. Maybe not #3 but the rest is so bad. 😂

OP: you're talking about AI alignment, and yes, there's currently no known way to prevent an AI from killing us all if we were to develop AGI. The AI community talks a lot about this at lesswrong.com. I recommend going there instead of listening to idiots here.

Here's a fun one

Favorite part:

>"The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point. My lower-bound model of "how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second".

−3

Fluid_Mulberry394 OP t1_j9d3dcb wrote

Whatever, but that take certainly is the basis of an apocalyptic novel.

7

gaudiocomplex t1_j9d6xqo wrote

That would make a bad novel.

The very point is that it's spectacularly easy to kill us all without any drama or theatrics.

−5

Low-Restaurant3504 t1_j9d4txa wrote

"Your take is bad. My fanfiction proves it."

3

[deleted] t1_j9d6jwr wrote

[removed]

−5

[deleted] t1_j9d6use wrote

[removed]

0

Futurology-ModTeam t1_j9da7vw wrote

Hi, Low-Restaurant3504. Thanks for contributing. However, your comment was removed from /r/Futurology.


> > r/iamverysmart bait.


> Rule 6 - Comments must be on topic, be of sufficient length, and contribute positively to the discussion.

Refer to the subreddit rules, the transparency wiki, or the domain blacklist for more information.

[Message the Mods](https://www.reddit.com/message/compose?to=/r/Futurology&subject=Question regarding the removal of this comment by /u/Low-Restaurant3504&message=I have a question regarding the removal of this comment if you feel this was in error.)

1