
mocny-chlapik t1_j8z3vox wrote

How should we control exposure for people with low cognitive capabilities who might not understand what they are interacting with?

12

BronzeArcher OP t1_j8z7yuo wrote

As in, they wouldn't interpret it responsibly? What exactly is the concern about them not understanding?

0

currentscurrents t1_j8zz4n3 wrote

Look at things like replika.ai that give you a "friend" to chat with. Now imagine someone evil using that to run a romance scam.

Sure, the success rate is low, but it can target millions of potential victims at once. The cost of operation is almost zero compared to human-run scams.

On the other hand, it also gives us better tools to protect against it. We can use LLMs to examine messages and spot scams. People who are lonely enough to fall for a romance scam may compensate for their loneliness by chatting with friendly or sexy chatbots.
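
To make the "use LLMs to spot scams" idea concrete, here is a minimal sketch assuming the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the model name, prompt, and `flag_possible_scam` helper are illustrative choices, not an established scam-detection pipeline:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt: ask the model to classify one chat message.
SYSTEM_PROMPT = (
    "You are a scam-detection assistant. Given a single chat message, "
    "reply with 'SCAM' or 'OK' on the first line, then a one-line reason. "
    "Watch for romance-scam patterns: rushed intimacy, sob stories, "
    "requests for money or gift cards, refusal to video call."
)

def flag_possible_scam(message: str) -> str:
    """Ask an LLM whether a message looks like a romance scam."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    msg = (
        "I know we only met online last week, but I love you. "
        "My wallet was stolen abroad, can you wire me $500?"
    )
    print(flag_possible_scam(msg))
```

In practice you would want this running on the recipient's side (e.g. inside a messaging client), since the scammer controls the sending side; that asymmetry is the whole point of the "better tools to protect" argument.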

6

ilovethrills t1_j90noyx wrote

But that can be said on paper about thousands of things. Not sure if it actually translates into real life. Although there might be some push to label such content as AI-generated, similar to how "Ad" and "Promoted" are labelled in search results.

−1

mocny-chlapik t1_j91uejr wrote

Yeah, I mean people with mental illness (e.g. schizophrenia), people with debilitatingly low intelligence, and similar cases. Who knows how they would interact with seemingly intelligent LLMs.

5