poo2thegeek t1_j6h8w9u wrote

I mean, yes, an AI model learns from other people’s examples, but isn’t that also what humans do?

4

Hmm_would_bang t1_j6i10j8 wrote

Humans get inspired by their own perception and imperfect memories of other artists and of experiences in their lives; AI models literally take the art and add it to their model.

Regardless, you seem to be proposing we treat AI models as if they were human beings and not products. We aren’t going to do that. It’s a nice philosophical game, maybe, but if you just look at the facts of the matter, you’re dealing with a company taking unlicensed artwork and adding it to its product.

3

poo2thegeek t1_j6i5xxj wrote

AI models take the art and add it to their training inputs.

The model doesn't have a perfect memory of its inputs - this can be demonstrated by the fact that model sizes are significantly smaller than the size of the data used to train them. Similarly, 'own perception' is an interesting idea. What does it actually mean? I'd argue it's analogous to an ML model using some random input when generating, which allows for different outputs given the same prompt (e.g. how ChatGPT can reply differently even if you ask it the exact same thing on two different occasions).
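
To make that concrete, here's a toy sketch of the kind of sampling randomness I mean (the numbers and token names are made up, and real models are vastly bigger, but the mechanism is the same in spirit):

```python
import numpy as np

# Toy illustration: a language model scores possible next tokens, and the
# reply is *sampled* from those scores rather than picked deterministically,
# so the same prompt can produce different outputs on different runs.
logits = np.array([2.0, 1.5, 0.3])    # made-up scores for tokens "A", "B", "C"
temperature = 0.8                     # > 0 injects randomness into the choice

probs = np.exp(logits / temperature)
probs /= probs.sum()                  # softmax over the scores

rng = np.random.default_rng()
for _ in range(3):
    print(rng.choice(["A", "B", "C"], p=probs))  # can differ run to run
```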

I'm not saying we should treat AI models as if they're human beings - I don't think an AI model should be able to hold a copyright, for example - but the company that trained the model should be able to.

Similarly, if the AI model were to output something VERY similar to some existing work, then I think that the company that owns said AI model should be taken to court.

2

oscarhocklee t1_j6i3ohl wrote

See, that's the thing. When humans copy work, we have laws that step in and allow the owner of the work to say "No, you can't do that." Humans could copy anything they see, but there are legal consequences if they copy the wrong thing - especially if they gain financially by doing so. This is very much an argument about whether what these tools do is sufficiently like what a human does for the laws that apply to humans to apply.

If Copilot, for instance, generates code that (were a human to write it) would legally be considered (likely after a long and damaging lawsuit) a derived work of something licensed under the GPL, then that derived work must also legally be licensed under the GPL.

What's more, there is no clear authorial provenance. Say you find a GitHub repo that contains what looks like a near-perfect copy of some code you own and released under a license of your choice. If a human wrote it, that's a legal issue; if a tool generated it, it's not clear who you hold responsible.

Fundamentally, we're arguing here about whether it's okay in a situation like this to say "Oh, no, it's legal because software did it for me." And remember, there's no way to prove how much of a text file was written by a human and how much by software once it's saved.

2

poo2thegeek t1_j6i59po wrote

So, while this is certainly true, for something to fall under copyright it has to be pretty similar to whatever it's copying.

For example, if I want to write a book about wizards in the UK fighting some big bad guy, that doesn't mean I'm infringing on the copyright of Harry Potter.

Similarly, I can write a pop song that discusses, idk, how much I like girls with big asses, and that doesn't infringe on the copyright of the (hundreds of) songs on the same topic.

Now, I do think that if an AI model outputs something that is too similar to some of its training material, and the company that owns said AI goes ahead and publishes it, then yeah, that company should be sued for copyright infringement.

But it is certainly possible for AI to output completely new things. Just look at the AI art that has been generated in recent months - it's certainly making new images based on what it's learnt a good image should look like.

Also, on top of all this, it's perfectly possible to prevent (or at least massively decrease the probability of) a model outputting something similar to its inputs, by 'punishing' the model during training whenever it outputs something too close to a training example.
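
I don't know exactly how any given company implements this, but as a hand-wavy sketch (every function name, weight, and threshold here is made up for illustration), the idea is just an extra penalty term in the training loss:

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of 'punishing' near-copies: on top of the usual
# training loss, add a penalty whenever the model's output is too similar
# (cosine similarity above some threshold) to any stored training example.
def loss_with_copy_penalty(output, target, train_bank,
                           penalty_weight=10.0, threshold=0.95):
    base = F.mse_loss(output, target)                 # ordinary training loss
    sims = F.cosine_similarity(output.flatten().unsqueeze(0),
                               train_bank.flatten(start_dim=1), dim=1)
    penalty = F.relu(sims.max() - threshold)          # fires only on near-copies
    return base + penalty_weight * penalty
```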

All this means that I don't think this issue is anywhere near as clear-cut as a lot of the internet makes it out to be.

3

SerenumUS t1_j6ipa88 wrote

The AI model, presumably machine learning, is not even remotely close to being "like a human". It's called "artificial intelligence" for a reason. The "training" data heavily influences the output.

If you made a machine learning model on a very small scale, such as putting 10 images from artists in as its training data, the produced work would very obviously be just portions of the images you fed it. This is no different from what we are seeing now, just on a bigger scale. The output of the source code generation, or art generation, is quite literally using stolen portions of code/images.
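
As a deliberately crude sketch of what I mean (this is a caricature with made-up shapes and names, not how a real image model works), a generator over-fitted to 10 images can only ever blend those 10 images:

```python
import numpy as np

# Caricature of the small-scale case: a "generator" that has memorised its
# 10 training images and can only output weighted blends of them.
rng = np.random.default_rng(0)
train_images = rng.random((10, 64, 64, 3))   # stand-ins for 10 artworks

def memorised_generate(seed: int) -> np.ndarray:
    """Every 'new' image is just a convex combination of the training set."""
    weights = np.random.default_rng(seed).dirichlet(np.ones(len(train_images)))
    return np.tensordot(weights, train_images, axes=1)

sample = memorised_generate(42)
print(sample.shape)   # (64, 64, 3) -- visibly composed of the inputs
```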

I feel people are looking at the final outcome rather than how it got there.

This is the equivalent of hiring one guy to just copy and paste code from the internet for every feature of a piece of software (with a lot of imperfections, mind you) and giving him a raise because the outcome works.

0

poo2thegeek t1_j6iq0d4 wrote

Yes, but suppose you took a four-year-old child who had never seen a painting before, showed them 10 paintings, and then asked them to make their own painting. Either they'll just scribble on the canvas randomly, because they're not competent enough to do anything else, or they'll end up making something very similar, if not nearly identical, to the examples you've shown them.

You use the example of the programmer taking code off the internet… I’m not sure if you’re a programmer yourself, but you know that’s a meme, right? The joke is that a big part of programming is finding the right Stack Overflow post/blog/tutorial that has code similar enough to what you need, then changing bits of it and incorporating it into your work.

3

SerenumUS t1_j6k9yvq wrote

Comparing a child painting to an AI model using artwork taken without permission to generate art for others is apples and oranges. You still aren't addressing the blatantly obvious point - artwork on the internet is being used without permission. People are selling or using these AI-generated works (by themselves, or as part of a book, etc.). This causes issues.

And I am a software engineer - yes, I know it's a meme, but I'm not referring to that. Good programmers don't copy and paste from the internet constantly. If it's an algorithm, sure, that's fine. But a good programmer can generally develop features on the frontend/backend of software without needing heavy assistance.

1

poo2thegeek t1_j6lr85c wrote

Again, you keep bringing up the same point - “artwork being used without permission” - and I keep arguing that this is no different from a person looking at a piece of art for inspiration.

It’s perhaps more of a philosophical issue, and it also relates to my personal belief that DL models are closer analogues to the brain than a lot of people imagine - but this is purely conjecture.

1