
PredictorX1 t1_j3ca2pm wrote

What, specifically, are you suggesting?

1

Baturinsky OP t1_j3ch80z wrote

I'm not qualified enough to figure out how drastic the measures would have to be.

From countries realising they face a huge common crisis that they can only survive if they set aside their squabbles and work together.

To using AI itself to analyse and prevent its own threats.

To classifying all trained general-purpose models at the scale of ChatGPT and above, and preventing new ones from being made (I see models trained on the entire internet as the biggest threat right now, if they can be used without safeguards).

And up to forcibly reverting all publicly available computing and communication technology to the level of 20 or 30 years ago, until we figure out how to use it safely.

0

Blasket_Basket t1_j3h8t00 wrote

It sounds like you have some serious misunderstandings about what AI is and what it can be used for, rooted in the same sci-fi plots that have misinformed the entire public.

1

Baturinsky OP t1_j3hnmdc wrote

I'm indeed no expert; that's why I was asking.
But experts in the field also think that serious concerns about AI safety are justified:

https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence

Also, a lot of good arguments here:

https://www.reddit.com/r/ControlProblem/wiki/faq/

1

[deleted] t1_j40im8k wrote

[removed]

1

asingov t1_j40suvt wrote

Cherry-picking Musk and Hawking out of a list which includes Norvig, DeepMind, Russell and "academics from Cambridge, Oxford, Stanford, Harvard and MIT" is just dishonest.

1

bob_shoeman t1_j40ukrr wrote

Alright, that’s fair - edited. I didn’t read through the first link properly.

The point remains that there is generally a pretty complete lack of knowledge about what the field is actually like. r/ControlProblem most certainly is full of nonsense.

2