
SendMePicsOfCat OP t1_j18wx97 wrote

That's not what shows up when I google it, so thanks for clarifying. This is not what you think it is, though. What's happening in these scenarios is that the reinforcement algorithm is too simple and lacks negative feedback to ensure appropriate actions. There is nothing inherently wrong with the system; it is just poorly designed.

This happened because the only reward value that affected its learning was the final score, so it figured out a way to maximize that score. The only error here was user and designer error; nothing went wrong with the AI, it did its task to the fullest of its capabilities.
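To make that concrete, here's a toy sketch of the difference (every name and number below is invented for illustration, not taken from any actual system): when the only signal is the final score, an agent that exploits the scoring loop looks better than one that plays as intended, and adding negative feedback flips which behaviour is optimal.

```python
def naive_reward(final_score: float) -> float:
    """Poorly designed case: the final score is the only thing the agent learns from."""
    return final_score

def shaped_reward(final_score: float, laps_completed: int, crashes: int) -> float:
    """Same score, but with penalties for behaviour the designer doesn't want."""
    return final_score + 10.0 * laps_completed - 50.0 * crashes

# Agent that loops forever farming points and never finishes the task:
print(naive_reward(final_score=2000))                                  # 2000 -> looks great
print(shaped_reward(final_score=2000, laps_completed=0, crashes=40))   # 0    -> penalised

# Agent that actually does the task as intended:
print(naive_reward(final_score=900))                                   # 900  -> looks worse
print(shaped_reward(final_score=900, laps_completed=3, crashes=0))     # 930  -> now preferred
```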

AGI will be developed with very clear limitations, like what we're already seeing tested and implemented with ChatGPT. There will be things it's not allowed to do, and a lot of them. And "short circuit" doesn't really make sense here; this is the classic alignment issue, which, as I stated in my post, really isn't a big issue for the future.
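Purely for illustration, a hard limitation can be as blunt as a fixed blocklist checked before anything runs. The action names below are made up, and this is not a claim about how ChatGPT's restrictions are actually implemented; it just shows what "not allowed to do X" can look like in code.

```python
# Hypothetical guardrail: refuse any proposed action that appears on a blocklist.
DISALLOWED_ACTIONS = {"delete_user_data", "send_unreviewed_email", "modify_own_reward"}

def execute(action: str) -> str:
    if action in DISALLOWED_ACTIONS:
        return f"refused: '{action}' is not permitted"
    return f"executed: {action}"

print(execute("summarize_report"))    # executed: summarize_report
print(execute("modify_own_reward"))   # refused: 'modify_own_reward' is not permitted
```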

0

Surur t1_j19q65g wrote

Consider that even humans have alignment issues, and that there is a real concern that Putin could nuke the USA, and you will see that the fears are actually far from overblown.

3