
yaosio t1_jc3skgg wrote

Yes, they mean censorship. Nobody has ever provided a definition of what "safety" is in the context of a large language model. From my use of other censored models, not even the models themselves know what safety means. ChatGPT happily described the scene from The Lion King where Scar murders Mufasa and Simba finds his dad's trampled body, yet ChatGPT also says it can't talk about murder.

From what I've gathered from the vague statements LLM developers have made about safety, they would consider that scene unsafe.
