
yaosio t1_jc3rvx6 wrote

In some countries it's illegal to say anything bad about the head of state. Should large language models be prevented from saying anything bad about heads of state because it breaks the law?

8

currentscurrents t1_jc3sfua wrote

Humans aren't going to have perfect laws everywhere, but it's still not the AI's place to decide what's right and wrong.

In practice, AI that doesn't follow local laws simply isn't going to be allowed to operate anyway.

−1

yaosio t1_jc3tjpe wrote

In some countries pro-LGBT writing is illegal. When a censored model is released that can't write anything pro-LGBT because it's illegal somewhere, don't you think it would cause quite an uproar, quite a ruckus?

In Russia it's illegal to call their invasion of Ukraine a war. Won't it upset Ukrainians who want to use such a model to help write about the war when they find out Russian law applies to their country?

9

currentscurrents t1_jc3w4ez wrote

>Won't it upset Ukrainians who want to use such a model to help write about the war when they find out Russian law applies to their country?

Unless there's been a major movement in the war since I last checked the news, Ukraine is not part of Russia.

What you're describing sounds like a single universal AI that looks up local laws and follows them blindly.

I think what's going to happen is that each country will train its own AI that aligns with its local laws and values. A US or European AI would have no problem criticizing the Russian government or writing pro-LGBT text. But it would be banned in Russia and Saudi Arabia, and they would have their own alternatives.

−1