Comments


RepulsiveVoid t1_j3r3ohg wrote

It wouldn't surprise me at all if the people who were targets of this "AI Chatbot" would, in the future, dismiss most help that they felt might be artificial. This was a huge breach of trust, and what makes it worse is that it was used on maybe the most vulnerable people seeking help.

If I found out that my therapy wasn't done by a human and was instead done by an "AI Chatbot", when originally I'd been given the impression that it was indeed a human I was chatting with, I would start to ruthlessly block contacts that feel weird or off.

I have only a minor issue with paranoia. But what about people with much worse paranoia issues? In the worst case, they could isolate themselves from society completely, thinking that everything is an AI bot/lie.

28

nadmaximus t1_j3rglvj wrote

Some people pay good money to feel weird.

4

phdoofus t1_j3sa9es wrote

But we got someone to give us monies and that's the important thing

3

RepulsiveVoid t1_j3w8qsg wrote

I don't mind using these programs and machines to chat with you, or to comment on reddit or other places, as long as there is a human on the other end.

But if such a time comes that I believe most, or all, of my digital contacts are bots and not humans, I see no point in continuing that chat or comment chain, or any of it really. Why would I waste my time with a bot/AI that pretends to be human? There is nothing for me to gain in that exchange, while the program will use all kinds of algorithms to scrape every piece of information about me that it can. It's simply a waste of my time, completely useless to me, to continue such a one-sided relationship.

1

131sean131 t1_j3zxi49 wrote

It felt "ethically troubling to monetize"

1