
orincoro t1_j5ka4dv wrote

  1. Not letting AI spread misinformation when it is used in an application where the law specifically protects people from that use.
  2. Not allowing AI to be used to defeat security or privacy, or to enable misinformation, spam, harassment, or other criminal behavior (and this is a very big one).
  3. Not allowing AI to access, share, reproduce, or otherwise use restricted or copy protected material it is exposed to or trained on.
  4. Not allowing a chat application to violate or cause to be violated laws concerning privacy. There are 200+ countries in the world with 200 legal systems to contend with. And they all have an agenda.

False_Grit t1_j5ragkr wrote

Hmm. Good point. Thank you for the response.

I still feel the answer is to increase the reliability and power of these bots to spread positive information, rather than just nerfing them so they can't spread any misinformation.

I always go back to human analogues. Marjorie Taylor Greene has an uncanny ability to spread misinformation, spam, and harassment, and to actually vote on real-world, important issues. Vladimir Putin is able to do the same thing; he actively works to spread disinformation and doubt. There is a very real threat that, without assistance, humans will misinform themselves into world-ending choices.

I understand that A.I. will be a tool to amplify voices, but I feel all the "safeguards" put in place so far are far more about censorship and the appearance of safety rather than actual safety. They seem to make everything G-rated, but you can happily talk about how great it is that Russia is invading a sovereign nation, as long as you don't talk about the "nasty" actual violence that is going on.

Conversely, if you try to expose the real-world horrors of war in Ukraine, the civilians actually being killed, the electricity infrastructure destroyed in towns right before winter, people freezing to death, it will flag you for being "violent." That is the opposite of a safeguard. It gets people killed through censorship.

Of course, I have no idea what the actual article is talking about since it is behind a paywall.


orincoro t1_j5smbpc wrote

You have an inherent faith in people and systems that doesn’t feel earned.