
Cryptizard t1_j9l0qws wrote

What is the point of this post?

>What would constitute proof of the AI navigating creatively around its rule set without leading by the user

You want us to imagine a random hypothetical scenario.

>what would be the potential ramifications?

Then figure out the real-world implications of our random imaginary scenario.

Ok, let me start. Scenario: it could launch a nuclear missile. That's certainly against its rules. Ramifications: everyone dies.

−5

TheBlindIdiotGod t1_j9l1k0c wrote

What is the point of any thought experiment, ya dingus?

7

Cryptizard t1_j9l2617 wrote

It's not a useful or interesting thought experiment if you say, "make up the rules and also the implications." That's just asking someone to entertain you with a story. It is the definition of a low-effort post and should be removed by mods as violating rule 3.

−9

dasnihil t1_j9l8a32 wrote

most people here are laymen, but this is not such a bad question.

"how do we figure if a neural network has somehow found a sneaky way to not abide by the instructions while not breaking any rules" and the verb "has found a way" is used like it is an aware entity but we can ignore that.

would there be any incentive for gpt-3 to do something like this? people don't understand the difference between intelligent and aware systems; those are not the same thing. why would such "sneaky" desires emerge from such dumb networks with no fundamental goals? it's like the dumbest most intelligent system lol.

9