
lovesdogsguy t1_islebkb wrote

>In their paper, researchers from Oxford University and Australian National University explain a fundamental pain point in the design of AI: “Given a few assumptions, we argue that it will encounter a fundamental ambiguity in the data about its goal. For example, if we provide a large reward to indicate that something about the world is satisfactory to us, it may hypothesize that what satisfied us was the sending of the reward itself; no observation can refute that.”
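
For anyone who hasn't run into this before, here's a toy sketch (my own illustration, not from the paper) of the ambiguity being described: two hypotheses about what the reward "means" that explain the same training history equally well, so ordinary observations can't distinguish them. The state names and numbers are made up for illustration.

```python
# Toy sketch (not from the paper): two hypotheses about what the reward signal
# "means" that fit the same training history equally well.
# H1: the reward tracks the state of the world we actually care about.
# H2: the reward is just whatever number arrived on the reward channel.
# On past data the two are indistinguishable; they only diverge if the agent can
# influence the channel itself, which is the ambiguity the quote points at.

history = [
    ("room_tidy", 1.0),
    ("room_messy", 0.0),
    ("room_tidy", 1.0),
]

def h1(state):
    """Reward reflects whether the world is actually satisfactory."""
    return 1.0 if state == "room_tidy" else 0.0

def h2(state):
    """Reward is whatever the reward channel delivered for this state."""
    delivered = {"room_tidy": 1.0, "room_messy": 0.0}
    return delivered[state]

# Both hypotheses explain every past observation perfectly, so no further
# observation of this kind can refute either one.
print(all(h1(s) == r for s, r in history))  # True
print(all(h2(s) == r for s, r in history))  # True
```

The point is just that the history alone underdetermines which hypothesis the agent should act on going forward.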


This isn't news. FFS, this has long been a known issue in AI research, and it's purely theoretical.


Edit: To quote the fourth most upvoted comment (at the time of writing) in the futurology sub:


>Gotta love a headline with a vague appeal to authority, especially when it's opinion-based. I'm guessing there are plenty of other "researchers" with a different opinion, but those people don't get the headlines because their opinions aren't stoking fear to generate clicks.


Some common sense over there for once.
