mr_birrd t1_j77rkjd wrote

If an LLM tells you it would rob a bank, it's not that the model would actually do that if it could walk around. It's just what a high-likelihood statement looks like in the modeled language for the specific data. And if it's ChatGPT, the response is also tailored to suit human preferences.

3

DoxxThis1 t1_j77z3s1 wrote

A model can't walk around, but an unconstrained model could persuade gullible humans to perform actions on its behalf.

The idea was explored in the movie Colossus.

1

mr_birrd t1_j783pta wrote

Well, plenty of humans can persuade gullible humans to perform actions on their behalf. The problem is people. Furthermore, I'd actually trust an LLM more than the average human.

1

DoxxThis1 t1_j784juk wrote

In line with the OP's point, acknowledging that "the problem is people" would not change the outcome.

3

mr_birrd t1_j784tz3 wrote

Well, is it then "the dangers of scaling LLMs" or "even with top-notch technology, people are just people"?

1