Viewing a single comment thread. View all comments

ktpr t1_jeco4so wrote

I feel like a lot of folks are missing this point. They retrain on ChatGPT output or LLaMA-related output and assume they can license the result as MIT or some such.


phire t1_jects6y wrote

It gets a bit more complicated.

OpenAI can't actually claim copyright on the output of ChatGPT, so licensing something trained on ChatGPT output as MIT should be fine from a copyright perspective. But OpenAI do have terms and conditions that forbid using ChatGPT output to train an AI... I'm not sure how enforceable that is, especially when people put ChatGPT output all over the internet, making it near impossible to avoid in a training set.

As for retraining the LLaMA weights... presumably Facebook do hold copyright on the weights, which is extremely problematic for retraining them and relicensing them.


pasr9 t1_jecwvck wrote

Facebook do not hold copyright to the weights for the same reason they do not hold copyright to the output of their models: neither the weights nor the output meet the threshold of copyrightability. Both are new works created by a purely mechanical process that lacks direct human authorship and creativity (two of the prerequisites for copyright to apply).

For more information:


phire t1_jed57od wrote

Hang on, that guidance only covers generated outputs, not weights.

I just assumed weights would be like compiled code, which is also produced by a fully mechanical process, but copyrightable because of the inputs... Then again, most of the training data (by volume) going into machine learning models isn't owned by the company.


EuphoricPenguin22 t1_jedhyci wrote

Using training data without explicit permission is (probably) considered fair use in the United States. There are currently active court cases on this exact issue here in the U.S., namely Getty Images (US), Inc. v. Stability AI, Inc. The case is still far from a decision, but it will likely be directly responsible for setting a precedent on this matter. There are a few other cases happening in other parts of the world, and depending on where you are, different laws or regulations may already clarify this specific area of law. I believe there is another case against Stability AI in the UK, and I've heard the EU was considering adding, or has already added, an opt-out provision; I'm not sure.


phire t1_jedo041 wrote

Perfect 10, Inc. v. Amazon.com, Inc. established that it was fair use for Google Images to keep thumbnail-sized copies of images, because providing image search was transformative.

I'm not a lawyer, but thumbnails are way closer to the original than network weights, and AI image generation is arguably way more transformative than providing image search. I'd be surprised if Stability loses that suit.


pm_me_your_pay_slips t1_jee2xtt wrote

Perhaps applicable to the generated outputs of the model, but it’s not a clear case for the inputs used as training data. It could very well end up in the same situation as sampling in the music industry, which is transformative, yet people using samples still have to “clear” them by asking for permission (which usually involves money).


Sopel97 t1_jefa735 wrote

"terms and conditions" means that at worst openai will restrict your access to chatgpt, no?


artsybashev t1_jefp0o2 wrote

Yes, the only thing they can do is ban you from their service.


pasr9 t1_jecwbqm wrote

AI output is not currently copyrightable in the US.


Jean-Porte t1_jedkyhk wrote

Are the users responsible for using a model that was badly licensed?