
Baturinsky OP t1_j375886 wrote

Yes, exactly. Which is why it's important not to give access to dangerous things to those who could misuse them with catastrophic consequences.

−7

Duke_De_Luke t1_j376emq wrote

Email and social networks are as dangerous as AI. They can be used for phishing or identity theft.

Not to mention a car, or the chemical compounds used to clean your home, or a kitchen knife.

"AI" is just a buzzword. You restrict certain applications, not the buzzword. Like you restrict the sale of explosives, not chemistry.

8

Baturinsky OP t1_j379g68 wrote

Nothing we have known yet has the danger potential of self-learning AI, even if for now that danger is still only potential.
And it's true that we should restrict only certain applications of it, but that could be a very wide list of applications, requiring very serious measures.

−9

[deleted] t1_j375ru0 wrote

You mean like optimizing algorithms to grab people's attentions and/or feed them ads?

7

Baturinsky OP t1_j37g36w wrote

As far as I can see, whoever is doing it is not doing it very well, be it AI or human.

0

PredictorX1 t1_j3cacld wrote

>Which is why it's important not to give access to dangerous things to those who could misuse them with catastrophic consequences.

What does "give access" mean, in this context? Information on construction of learning systems is widely available. Also, who decides which people "could misuse it"? You?

1

Baturinsky OP t1_j3chu4b wrote

Mostly, access to the trained models themselves, along with denying the possibility of making new ones. I see unrestricted use of big-scale, general-purpose models as the biggest threat, as they are effectively "encyclopedias of everything" and can be used for very diverse and unpredictable things.

Who decides is also a very interesting question. Ideally, public consensus; realistically, those who have the capability to enforce those limitations.

0