phire

phire t1_jedo041 wrote

Perfect 10, Inc. v. Amazon.com, Inc. established that it was fair use for Google Images to keep thumbnail-sized copies of images, because providing image search was transformative.

I'm not a lawyer, but thumbnails are way closer to the original than network weights, and AI image generation is arguably way more transformative than providing image search. I'd be surprised if Stability loses that suit.

10

phire t1_jed57od wrote

Hang on, that guidance only covers generated outputs, not weights.

I just assumed weights would be like compiled code, which is also produced by a fully mechanical process but is copyrightable because of its inputs... Then again, most of the training data (by volume) going into machine learning models isn't owned by the company.

15

phire t1_jects6y wrote

It gets a bit more complicated.

OpenAI can't actually claim copyright on the output of ChatGPT, so licensing something trained on ChatGPT output as MIT should be fine from a copyright perspective. But OpenAI do have terms and conditions that forbid using ChatGPT output to train an AI... I'm not sure how enforceable that is, especially when people post ChatGPT output all over the internet, making it nearly impossible to avoid in a training set.

As for retraining the LLaMA weights... presumably Facebook do hold copyright on the weights, which makes retraining and relicensing them extremely problematic.

43

phire t1_jc5kwkh wrote

Prompt: Write some python code to calculate the 900th prime number

from math import sqrt

def is_prime(n):
    if n < 2:
        return False
    for i in range(2, n):
        if n % i == 0:
            return False
    return True

# Calculate the 900th prime number
n = 899

# Iterate from 2 to n and check if the number is prime
while (n > 1):
    if is_prime(n):
       prime_number = n
       break
    n = n - 1

# Print the result
print(prime_number)

Well, it gets points for writing code that runs without syntax errors. And it did successfully create a function that checks if a number is prime.

It loses points for forgetting the optimisation of stopping at the square root of n (despite importing sqrt).
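
For comparison, the prime check with that optimisation would look something like this (my sketch, not the model's output):

from math import sqrt

def is_prime(n):
    if n < 2:
        return False
    # Only need to test divisors up to sqrt(n)
    for i in range(2, int(sqrt(n)) + 1):
        if n % i == 0:
            return False
    return True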

The search itself goes totally off the rails. The implementation finds the first prime less than 900 by starting at 899 and counting down. The comments are more correct, implying it was planning to count upwards. If it had counted upwards and stopped after finding 900 primes, the result would have been correct.
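
A corrected search along those lines would count upwards and stop once 900 primes have been found. A minimal sketch, reusing the is_prime function above:

# Count upwards, stopping once the 900th prime is reached
count = 0
n = 1
while count < 900:
    n = n + 1
    if is_prime(n):
        count = count + 1

print(n)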

TBH, I'm pretty impressed for a 7B parameter model.

17