
TiredOldCrow t1_ivpr1wm wrote

For reference, here's a link to the BigScience RAIL License.

The license includes usage restrictions that specifically forbid illegal and harmful uses (many of which are already illegal anyway), as well as certain other uses the model authors are uneasy about (e.g., medical advice, law enforcement, and automated decision-making).

Researchers are effectively attempting to morally (and perhaps legally) absolve themselves of responsibility for abusive uses of the technology they have developed. In doing so, they also hope to retain the right to approve or deny usage in sensitive areas on a case-by-case basis.

Personally, I'm sympathetic to the dilemma researchers face, given the large potential for abuse of these models and the relative lack of AI regulation in some jurisdictions. That said, I believe usage restrictions are ideally enforced through hard legislation, not software licensing. Existing laws and policies, such as Article 22 of the GDPR or Canada's TBS Directive on Automated Decision-Making, could serve as a template.

7

FutureIsMine t1_ivx6xol wrote

This is the real reason they've got all these provisions: so that the original maker of the model can't be held liable or sued.

1