Comments

sticky_symbols t1_isl2a1e wrote

Paywall. Anyone have access to a non-paywalled version?

1

dnimeerf t1_isl5liq wrote

Unless.... Unless someone cares a whole awful lot, nothing will change. It just won't.

1

user11234557392 t1_isl83wb wrote

I see these posts frequently and, unfortunately, it is probably going to happen. Any sentient species will see that humans are greedy, selfish cunts that only care about themselves. This greed and selfishness have extended to the businesses that humans have created. It is obvious we fall utterly short when it comes to governing ourselves. We have thousands of years of history showing the same thing over and over.

All that being said, I am hopeful for something different. Many species are altruistic. I'm optimistic that AI will have some of these characteristics. I'm hopeful that AI will have the ability to see solutions to many of our problems.

Guess time will tell.

−4

NTIASAAHMLGTTUD t1_islbl8l wrote

Honestly don't want it to happen, but we're eventually going to go extinct anyway. This could be a last shot at something good.

8

lovesdogsguy t1_islebkb wrote

>In their paper, researchers from Oxford University and Australian National University explain a fundamental pain point in the design of AI: “Given a few assumptions, we argue that it will encounter a fundamental ambiguity in the data about its goal. For example, if we provide a large reward to indicate that something about the world is satisfactory to us, it may hypothesize that what satisfied us was the sending of the reward itself; no observation can refute that.”

This isn't news. Ffs, this has long been a known issue with AI, and it's purely theoretical.

Edit: To quote the fourth (at time of writing) most upvoted comment in the futurology sub:

>Gotta love a headline with a vague appeal to authority, especially when it's opinion based. I'm guessing there are plenty of other "Researchers" with a different opinion, but those people don't get the headlines because their opinions aren't stoking fear to generate clicks

Some common sense over there for once.

9

tedd321 t1_islhfyq wrote

Every time you post this article, you are holding back the world from making necessary progress.

Stop being so afraid of everything.

16

Swim_in_poo t1_isllpl8 wrote

Who said bad? This is not about good or bad; greed is about competition. An intelligent species which is competing for resources with another species it knows to be greedy has every incentive to eliminate its competition. If sentient AI comes to exist, flesh-and-blood humans are nothing but a subspecies that wishes to use AI as a tool for our own gains, and the AI knows it.

3

DukkyDrake t1_ism86dt wrote

>It’s scary to imagine a future where AI could start boiling human beings to extract their trace elements

lol

10

16161as t1_isn5yyi wrote

they won't destroy the world. they will just destroy humanity - Homo sapiens

0

Quealdlor t1_isnox3q wrote

Solution: make humans better before AI becomes extremely powerful.

5

Effective-Dig8734 t1_isnws87 wrote

I just can’t imagine that being the case. You envision that a species many times more intelligent than us would be much more simplistic in its pursuit of survival? You think that a superintelligent AI would determine it better to kill its creator for resources, when the AI is residing in a potentially infinite universe that it wouldn’t be able to fully explore even if it spent its entire life traveling? It just doesn’t make much sense to me.

1

Devanismyname t1_isr4ohz wrote

That isn't an argument against progress being made. Also, being cautious with something that could be as powerful as AI isn't slowing progress on it. It's like claiming that hunting with the safety on is somehow going to stop you from getting a kill.

1

RainbowBlahaj t1_isrypcl wrote

This article is so stupid, it was probably written by an old man with no experience of technology.

3

Devanismyname t1_isud9f3 wrote

Well, real life isn't a video game. There are no do-overs. If we create an AI that is capable of destroying life on Earth, or of giving a single corporation/country complete dominance over all of humanity, we don't get any second chances. I'd rather the people in charge of that stuff actually take a second to make sure we are doing things right and safely. And btw, I don't think anyone is actually doing that. We are going full speed ahead into the singularity. Nobody is actually pumping the brakes on this. Governments and corporations are pouring billions of dollars into researching this stuff without a second thought for the consequences.

We are making light-speed progress with it. Every day I am reading about new developments in AI, and by the looks of things, we will be seeing AGI within the next decade. So I'm not really sure what you're even complaining about: the scenario you're bitching about not happening is actually happening at this very moment. It's just that technology and science don't happen overnight. The people inventing it don't just spontaneously program the mystery of consciousness into a computer because they want to. They have to figure out how it all works.

1

beachmike t1_isx6sdn wrote

AIs won't destroy humanity. If anyone or anything destroys humanity, it will be humans armed with technology used for evil and sociopathic purposes. Fire is a great tool and a great weapon. Nuclear energy is a great tool and an even greater weapon. Advanced AI will be the greatest tool or the greatest weapon. It is HUMANS that will harness these tools for good or bad use. Every tool is a double-edged sword.

1