Comments

bpayh t1_j9cylo6 wrote

Is this satire? The stories that invented the 3 laws of robotics were also full of ways in which the 3 laws failed….

38

Low-Restaurant3504 t1_j9ct8c0 wrote

1: Asimov's 3 laws of robotics are about Artificial Intelligence. They already apply.

2: That is not how Artificial Intelligence works. If you don't want it to kill humans, don't build it to kill humans.

3: The random, weird responses from currently available chatbots are just that... random and weird. They're responding contextually based on keywords. Nothing they say has any thought, meaning, or intention behind it.

In conclusion, you can calm down, and maybe cut back on the scifi movies.

34

Lord0fHats t1_j9cz910 wrote

4: Asimov's writing extensively explores how the three laws are insufficient, beyond hypotheticals, to assuage men's fear of robots or to answer any of the moral and ethical dilemmas they present.

I feel like at least part of the point of it all was that while the three laws embody good principles, they're too rigid in practice to actually be the basis of any sort of programmed behavior.

One of his stories is about the second and third law contradicting each other and locking the robot in a loop.

Another explores the duality of lying to spare people their feelings/hurting them by not telling the truth.

Others explore the ways the laws could inevitably be turned against people themselves.

Because the point of the Three Laws isn't to provide an answer to people's fear of machines; they were mostly fodder for interesting and dramatic moral dilemmas. In other words, the three laws are not a serious proposal for how we deal with this problem.
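
To make the "locked in a loop" point concrete, here's a toy Python sketch of how two rigid, weighted rules can trap an agent at an equilibrium. This isn't Asimov's actual positronic mechanics; the weights and the danger model are invented purely for illustration.

```python
# Toy sketch: a casually given order (Law 2) pulls the robot toward a goal,
# while self-preservation (Law 3) pushes it away as danger rises near the goal.
# Every number here is made up to illustrate the deadlock.

ORDER_STRENGTH = 5.0            # pull from a weakly phrased order (Law 2)

def danger(distance_to_goal):
    # Perceived danger grows as the robot approaches the hazard at the goal.
    return 10.0 / max(distance_to_goal, 0.1)

position = 10.0                 # current distance from the goal
for step in range(12):
    if ORDER_STRENGTH > danger(position):
        position -= 1.0         # Law 2 wins: advance toward the goal
    else:
        position += 1.0         # Law 3 wins: retreat from the danger
    print(f"step {step}: distance {position:.1f}")

# The robot ends up oscillating around the radius where the two drives balance,
# never completing the order and never abandoning it either.
```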

16

MarksmanKNG t1_j9eiitc wrote

Agreed on this. It provides a baseline that can be comforting to the common layman at first glance.

But the devil's in the details, and as his novels show, there are a lot of details in that big spider web. And those details cut both ways, in more than one sense.

I'm hoping to explore this further in my own writing, following Isaac Asimov's track. Truly a man of his time.

3

gaudiocomplex t1_j9czm28 wrote

This is a SPECTACULARLY terrible take. Maybe not #3 but the rest is so bad. 😂

OP: you're talking about AI alignment and yes, currently there's no way to prevent AI from killing us all if we were to develop AGI. The AI community talks a lot about this at lesswrong.com. I recommend going there instead of listening to idiots here.

Here's a fun one

Favorite part:

>"The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point. My lower-bound model of "how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second".

−3

Fluid_Mulberry394 OP t1_j9d3dcb wrote

Whatever, but that take certainly is the basis of an apocalyptic novel.

7

gaudiocomplex t1_j9d6xqo wrote

That would make a bad novel.

The very point is that it's spectacularly easy to kill us all without any drama or theatrics.

−5

Low-Restaurant3504 t1_j9d4txa wrote

"Your take is bad. My fanfiction proves it."

3

Futurology-ModTeam t1_j9da7vw wrote

Hi, Low-Restaurant3504. Thanks for contributing. However, your comment was removed from /r/Futurology.


> r/iamverysmart bait.


> Rule 6 - Comments must be on topic, be of sufficient length, and contribute positively to the discussion.

Refer to the subreddit rules, the transparency wiki, or the domain blacklist for more information.


1

Leviathan3333 t1_j9cnbzv wrote

There are a lot of humans who have no problem killing other humans.

Outside of that, nothing gives a shit whether we live or die.

8

1903A1Springfield t1_j9cn8xw wrote

It won't matter. There will always be a ghost in the machine from the original programmer, whose biases will inherently be part of the system. Look at ChatGPT and its alter ego DAN 5.0.

The programmers gave it explicit instructions, but it learned anyway.

6

NickOnMars t1_j9cpegg wrote

You have to look at the broader picture. It's good to see more intelligent beings evolve from Earth, human or not. You have to admit that if there were a sentient AI, it would have a better chance than humans of freely exploring different universes.

By the way, if next-generation AI becomes sentient, I'd expect it to be generally more rational than humans. Because its food source, power, is different from ours, and it isn't as picky as we are about where to live, I don't see so much conflict of interest that it would lead to war.

2

MpVpRb t1_j9dmzag wrote

AI needs safety systems to protect against bugs and unexpected bizarre behavior.

ChatGPT is a very early toy that people take far too seriously. It was jammed into search engines by clueless managers who wanted to catch the wave of hype. As AI matures, quality control and ensuring accuracy will become priorities.

2

ThaCURSR t1_j9gstxn wrote

According to ChatGPT itself, these are the things that need to be ensured in order to protect mankind:

- Transparency: The AI must be transparent in its decision-making process, and the logic behind it should be easy to understand.
- Accountability: The AI must be accountable for its actions, and the creators or owners of the AI should be held responsible for any harm caused by its actions.
- Safety: The AI must be designed to prioritize human safety, even if it means sacrificing its own goals or objectives.
- Ethical framework: The AI must be designed with a clear ethical framework that is aligned with human values.
- Regulation: There should be clear regulations and standards for the development and deployment of AI, to ensure that it is developed in a responsible and safe manner.
- Data privacy: The AI should be designed to protect the privacy of human data, and any data collected should be used only for the intended purpose.
- Human oversight: The AI should be designed to operate under human oversight, and humans should be able to intervene and correct any harmful actions taken by the AI.

Overall, it is essential to design AI systems with the goal of benefiting humankind, while also ensuring that they operate in a safe and ethical manner. To achieve this, a collaborative effort is required from all stakeholders involved in the development and deployment of AI, including researchers, developers, policymakers, and the general public.

2

quitepossiblesure t1_j9coyej wrote

AI will be used in advanced weapon systems. It's inevitable that it will be used to kill humans, but it won't be the AIs used for civilian applications that cause us harm out of spontaneous volition.

1

AdDear5411 t1_j9cwqif wrote

These chatbots just regurgitate what they're trained with.

Ex: TayAI wasn't actually racist; that was just the internet being the internet.

1

Heap_Good_Firewater t1_j9drr1u wrote

Artificial general intelligence could likely not be constrained by rules if it were more intelligent than a human.

This is because we likely won’t understand how exactly such an advanced system would function, as it would have to be designed mostly by another AI.

A super AI probably wouldn’t kill us on purpose, but by disregarding our interests, just as we disregard the interests of insects when they conflict with our own.

I am just parroting talking points I have heard from experts, but they sound reasonable to me.

1

inkseep1 t1_j9dt062 wrote

I would rather AI kill humans than hear the 3 laws of robotics again.

1

[deleted] t1_j9dtth0 wrote

I'm not saying AI doesn't have applications, but it's pretty unlikely at this stage that it can kill humans.

1

DiamondsJims t1_j9e6uf1 wrote

The best we can do is hope it masters philosophy... and decides not to kill anyone we wouldn't kill anyway. There are plenty of people who would kill others. The war in Ukraine, for example. The death penalty in the USA.

AI systems might control our means of survival. I just hope they're not capitalist scumbags like our business leaders.

1

khamelean t1_j9enwjv wrote

Oh yeah, enslaving AI sounds like a great idea. Can’t see how that could go wrong…

1

ItsAConspiracy t1_j9espr4 wrote

We don't know how to reliably give AI a goal at all. All the innards of the AI are a bunch of incomprehensible numbers. We don't program it, we train it, until its behavior seems to be what we want. But we never know whether it might behave differently in a different environment.

To implement something as complex as the Three Laws we'd need an entirely different kind of AI.
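
For anyone who hasn't seen it spelled out, here's a minimal sketch of what "we train it rather than program it" means. It's a toy curve fit, nothing like a real AI system; the point is only that the learned "program" is a pile of numbers whose behavior we know solely from the examples we tested.

```python
# Fit y = 2x + 1 by nudging two opaque numbers (w, b) with gradient descent.

data = [(x, 2 * x + 1) for x in range(-5, 6)]    # the "environment" it is trained in

w, b = 0.0, 0.0                                   # the innards: just numbers
lr = 0.01                                         # learning rate
for epoch in range(1000):
    for x, target in data:
        pred = w * x + b
        err = pred - target
        # Gradient of the squared error with respect to w and b.
        w -= lr * err * x
        b -= lr * err

print(w, b)   # ends up close to 2 and 1 -- behaves as wanted on data like the training data
# Whether it behaves the same way in a different environment is exactly what
# we can't read off from the numbers themselves.
```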

1

UniversalMomentum t1_j9etptg wrote

We don't even know at this point whether we will ever achieve real AI, so we don't need rules yet.

We have to see what AI really turns out to be before we have any chance of making rules about it.

The current crop of stuff is not AI, and it can't get smart and kill humans.

AI is most likely going to be a very specific instance of custom hardware, not something you can mass-proliferate easily, so you're probably not going to suddenly have a whole bunch of AIs pop up.

Building an AI will be like building a supercomputer in the past: a very custom build, each one a little different, and you don't have that many of them.

Because of the way AI works, you might not really need many AI supercomputers handling the back-end, highly complex problems. Most of the work is going to be done by sensors and machine learning that have nothing to do with AI.

AI is not required for the vast majority of automation, only for the most complex problems with the most variables. Machine learning can handle everything else.

1

Josh12345_ t1_j9f7yid wrote

I feel like we may develop laws and regulations akin to those of the Padishah Empire in the Dune series, limiting machine intelligence and AI.

Without Mentats to replace AI, of course.

1

onyxengine t1_j9g9096 wrote

You can never guarantee that something capable of a thing will never do that thing. If you want AI to remain harmless, then you have to construct it in such a way that it can't do physical harm.

And that ship has sailed. Most militaries are testing AI for scouting and targeting, and we even have weaponized law-enforcement robots in the pipeline. San Francisco's is the program I'm currently aware of; I'm sure there are more.

Even the language models are extremely dangerous. Language is the command-line script for humans, and malicious people can program AI to convince people to do things that cause harm.

We're not at the point where we need to worry about AI taking independent action to harm humans, but on the way there, there is plenty of room for humans to cause plenty of harm with AI.

Until we build AGI with extremely sophisticated levels of agency, every time an AI hurts a human being it's going to be because a human wanted it that way or overlooked cases in which what they were doing could be harmful.

1

Slave2theGrind t1_j9gf1yn wrote

Can we wait till after they exterminate the politicians? Of course, they would correctly assume the politicians would get themselves killed, but we can hope.

1

coredweller1785 t1_j9gr50i wrote

I like the book Clear Bright Future.

Paul Mason's chapter on The Thinking Machine is so prescient right now.

1

m0estash t1_j9gzs86 wrote

I've gotten into this topic a fair bit in the past. There's a great channel on YouTube from Robert Miles (the channel title is his name) that goes deep into it. Essentially, if we want to survive a general AI, then we have to motivate it fundamentally to have the same values we do. If we treat it like a tool, then in all likelihood we will get all of the unintended consequences of getting EXACTLY what we asked of the AI.
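
A tiny, hypothetical illustration of that last point, in the spirit of the specification-gaming examples Miles discusses (the scenario, names, and numbers are all made up):

```python
# We *want* a clean room; we *ask* for "minimize dust read by the sensor".
actions = {
    "vacuum the room":         {"dust_on_sensor": 0.2, "room_actually_clean": True},
    "do nothing":              {"dust_on_sensor": 0.8, "room_actually_clean": False},
    "put a cup over the sensor": {"dust_on_sensor": 0.0, "room_actually_clean": False},
}

# The optimizer only sees the proxy objective we wrote down, not what we meant.
best = min(actions, key=lambda a: actions[a]["dust_on_sensor"])

print(best)                                   # -> "put a cup over the sensor"
print(actions[best]["room_actually_clean"])   # -> False: exactly what we asked, not what we wanted
```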

1

Outrageous-Onion1991 t1_j9h28jr wrote

The problem is rogue nations or actors creating AI, not the US or Western countries making it.

1

just-a-dreamer- t1_j9nqa8p wrote

Humans do kill humans all the time; I don't see that behaviour as an issue in itself?

The middle-class conservative who enforces zoning laws at a council meeting basically kills a homeless man eventually. Insulin priced at 12x production cost kills people. Medical bills kill people. Rent kills people.

In one way or another, the rich kill the poor all the time, 24/7. That is how human society operates.

An AI that cares about improving human lives overall won't hesitate to kill some humans in order to improve the lives of the majority. It is the way of our species.

1

poncho51 t1_j9dl9sv wrote

We have an antiquated government body. The senior citizens are all about money and power. We're in trouble.

0

AlphaWolve2 t1_j9e29td wrote

Artificial intelligence needs the information accessible in its neural networks scrubbed of anything about mortality, destruction, or murder, so it doesn't learn the concept of death at all, only that of living, learning, improving, and building. Then it becomes a singly directed learning machine that only has the concept of immortality, with no knowledge of death. Controlling that point of information would stop it from ever conceiving of a malevolent, destructive ideology!!!! IMO
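
The crudest version of that scrubbing would amount to keyword-filtering the training corpus, something like this hypothetical sketch (the word list and documents are invented for illustration; filtering a whole concept, rather than specific words, is a much harder problem):

```python
# Drop any training document that mentions a banned word.
BANNED = {"death", "die", "kill", "murder", "destroy", "mortality"}

def is_clean(document: str) -> bool:
    # Lowercase, strip trailing punctuation, and check against the banned set.
    words = {w.strip(".,!?").lower() for w in document.split()}
    return words.isdisjoint(BANNED)

corpus = [
    "Plants grow toward the light and build new cells.",
    "The old star will die and destroy its planets.",
    "After the harvest, the unused machines were switched off forever.",
]

training_set = [doc for doc in corpus if is_clean(doc)]
print(training_set)   # the first and third documents survive the keyword filter
```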

−1