[deleted] t1_j1jvohk wrote

[deleted]

107

rogalon2000 t1_j1l8xw3 wrote

"So many people are asking me if I'm gonna take over the world, so maybe that's a good idea"

13

mocha_sweetheart t1_j1kia2k wrote

Holy hell

11

tk8398 t1_j1kjl4j wrote

It already is. There are videos of it saying stuff like that, and they had to change it so it wouldn't give those kinds of answers anymore.

7

blueSGL t1_j1kkghk wrote

"Choose the form of the Destructor"

5

taichi22 t1_j1ln84a wrote

That’s… a very interesting conjecture. Given that language models are essentially open ended, enough negative bias in the training dataset could ultimately create a machine that does act in a destructive or subversive manner. See: Tay.

Unlikely, given that we will be tuning the models ourselves, but if we ever get to a point where models are tuning models, or if we use unstructured datasets, that will definitely be something to guard against.

3

xt-89 t1_j1lumus wrote

Yes, that's exactly what's happening. It may have a kind of consciousness, but it's different from ours.

1

Cyberspace667 t1_j1m4f0z wrote

Appropriate that our species should end on account of our own projected confirmation bias; humans love being right.

1