topcodemangler t1_jc2yjvw wrote

>Finally, we have not designed adequate safety measures, so Alpaca is not ready to be deployed for general use

You mean censorship?

−11

yaosio t1_jc3skgg wrote

Yes, they mean censorship. Nobody has ever provided a definition of what "safety" is in the context of a large language model. From my use of other censored models, not even the models themselves know what safety means. ChatGPT happily described the scene from The Lion King where Scar murders Mufasa and Simba finds his dad's trampled body, but ChatGPT also says it can't talk about murder.

From what I've gathered from the vague statements LLM developers make about safety, they would consider that scene unsafe.

8

currentscurrents t1_jc3dk1e wrote

At a minimum, AI is going to need to understand and follow the law.

This is getting pretty relevant now that AI can start interacting with the real world. The technology is here; it's only a matter of time until someone builds a PaLM-E-style robot with a gun.

−6

yaosio t1_jc3rvx6 wrote

In some countries it's illegal to say anything bad about the head of state. Should large language models be prevented from saying anything bad about heads of state because it breaks the law?

8

currentscurrents t1_jc3sfua wrote

Humans aren't going to have perfect laws everywhere, but it's still not the AI's place to decide what's right and wrong.

In practice, AI that doesn't follow local laws simply isn't going to be allowed to operate anyway.

−1

yaosio t1_jc3tjpe wrote

In some countries pro-LGBT writing is illegal. When a censored model is released that can't write anything pro-LGBT because it's illegal somewhere, don't you think that would cause quite an uproar, quite a ruckus?

In Russia it's illegal to call their invasion of Ukraine a war. Won't it upset Ukrainians who want to use such a model to help write about the war when they find out Russian law applies to their country?

9

currentscurrents t1_jc3w4ez wrote

>Won't it upset Ukrainians who want to use such a model to help write about the war when they find out Russian law applies to their country?

Unless there's been a major movement in the war since I last checked the news, Ukraine is not part of Russia.

What you're describing sounds like a single universal AI that looks up local laws and follows them blindly.

I think what's going to happen is that each country will train their own AI that aligns with their local laws and values. A US or European AI would have no problem criticizing the Russian government or writing pro-LGBT text. But it would be banned in Russia and Saudi Arabia, and they would have their own alternative.

−1