Comments

ChadFuckingThunder t1_ix7y9s4 wrote

I'm probably among the first on the chopping block as a visual artist, and as nice as this news sounds, I don't buy it.

There is no way to stop the technology, especially if it's so hard to prove the sources.

This AIndustrial revolution will suck for so many people, but it has the potential to benefit humankind.

16

Rakshear t1_ix80sjc wrote

Yeah, this will be almost impossible to win. It should hopefully clear up some public domain rights, which have been a mess for decades, though.

6

TemetN t1_ix8q9be wrote

Technically, such cases could be disastrous in the nations where they succeed: this type of case is entirely capable of effectively ending generative AI models wherever such legal precedents are set, but development would simply continue in other nations.

I occasionally wonder whether jumping straight to attempts to block data use is a deliberate attempt to destroy generative models, or just people lashing out. Either way, these cases are potentially very dangerous, but yes, the models and companies would most likely just head elsewhere.

4

humanitarianWarlord t1_ix877eg wrote

How is it difficult to prove the sources? Type in the name of a painting; if it produces artwork based on that piece, then it must have used it as a source.

−5

Reddituser45005 t1_ix8b898 wrote

What if it is based on a genre: surrealism or Impressionism, or specific scenes within a genre, such as landscapes or portraits or tavern scenes? How do you determine whether a generative AI work is copied or original?

5

humanitarianWarlord t1_ix8drxb wrote

Fair enough if it's a genre, but with a good few tools you can pick out a specific painting/artist. That seems fairly obvious to me.

−1

josefx t1_ix8bfdr wrote

Easy to defeat, just don't include the name of the painting in the training data.

Or you can just go the way the developers of GitHub Copilot apparently went when it was caught copy-pasting source code from Quake 3 verbatim: put the words used to get the proof on an internal ban list.
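A rough sketch of that kind of filter (everything here is invented for illustration; it is not Copilot's actual code or term list): instead of removing the offending training data, the generator simply refuses to respond whenever the prompt contains a flagged term.

```python
# Hypothetical "internal ban list" sketch. The terms and function names
# below are invented for illustration only.

BANNED_TERMS = {"quake", "fast inverse square root"}


def is_blocked(prompt: str) -> bool:
    """Return True if the prompt mentions any banned term."""
    lowered = prompt.lower()
    return any(term in lowered for term in BANNED_TERMS)


def complete(prompt: str) -> str:
    """Stub generator: suppress output for banned prompts."""
    if is_blocked(prompt):
        return ""  # hide the evidence rather than fix the model
    return f"// completion for: {prompt}"  # placeholder for a real model call
```

The point of the sketch is that such a filter addresses the proof, not the underlying behavior: the memorized material is still in the model, it just becomes harder to surface.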

3

Fexxvi t1_ix8rs9x wrote

Based on =/= copying. Humans can make paintings inspired by others, and it's legal.

3

MuNuKia t1_ixb1xv1 wrote

Programmers are copying other people’s images to build the AI. That’s the underlying issue.

0

Fexxvi t1_ixbzmbz wrote

No, programmers are using other people's images to teach their AIs, just like you would teach an art student to learn styles and techniques from previous paintings. Once the AI has learned, those images are not stored anywhere in the AI's code.
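A toy illustration of that claim (the numbers and the one-parameter "model" are invented; this has nothing to do with any real image generator): training uses the data to adjust the model's parameters, and only the parameters survive.

```python
# Toy sketch: a one-parameter "model" learns the average brightness of
# some training "images" by gradient descent. The images drive the
# parameter updates, but the images themselves are never stored in the model.

training_images = [[0.2, 0.4], [0.6, 0.8], [0.5, 0.5]]  # stand-ins for pixel data

weight = 0.0  # the model's single learned parameter
for _ in range(200):
    for image in training_images:
        target = sum(image) / len(image)   # mean brightness of this image
        weight -= 0.1 * (weight - target)  # nudge the parameter toward it

# After training, only `weight` remains; the training data can be deleted
# and the model still "knows" roughly the overall mean brightness.
```

Real generative models have billions of parameters rather than one, but the mechanics are the same: the training set shapes the weights and is then discarded. Whether the weights can still reproduce individual training examples (memorization) is exactly what the rest of this thread argues about.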

3

MuNuKia t1_ixcmn06 wrote

Tell me you know nothing about machine learning, without using the term machine learning.

−1

Fexxvi t1_ixcsoqs wrote

What I explained is literally how AIs work; what do you mean?

2

MuNuKia t1_ixcvnjp wrote

Machine learning requires a training set to build the model. The models used to build artwork are still a byproduct of copying other people’s work. Also, there is no comparison between AI learning and people learning new ideas. AI literally has to use copyrighted work to fulfill the query set up by the user. A person is able to think on their own and create new work.

0

Fexxvi t1_ixcyagl wrote

AIs learn how other artists paint so they can paint in the same style. That's what literally every art student does and it's not the same as copying.

“AI literally has to use copyrighted work to build the query setup by the user.”

The AI learned from copyrighted material to produce original results with the given prompts, just like anyone can learn from copyrighted material and make paintings in the style of (not exactly like) said material.

2

MuNuKia t1_ixcyg2h wrote

People can also take inspiration from every other object to build artwork. This machine learning is only using artwork, so its sample size is much smaller, which means it has a limited scope to learn about art.

1

Fexxvi t1_ixczbrd wrote

So? That doesn't refute my point.

2

MuNuKia t1_ixczkro wrote

Yes it does, because the machine learning algorithm is only using copyrighted work to create an output. A human can look at a tree and use that as inspiration. It’s not that hard to comprehend.

1

Fexxvi t1_ixd3emf wrote

My original argument was:

“No, programmers are using other people's images to teach their AIs, just like you would teach an art student to learn styles and techniques from previous paintings. Once the AI has learned, those images are not stored anywhere in the AI's code.”

You said it was wrong, yet your comment

“People can also take inspiration from every other object to build artwork. This machine learning, is only using artwork, so the sample size [...]”

doesn't refute said argument. So either say something that refutes my argument, which according to you is wrong, or stop trying to move the goalposts.

0

MuNuKia t1_ixd3nyn wrote

Yes it does. You are just showing me you don’t know anything about AI.

1

Fexxvi t1_ixd48x4 wrote

OK, then. The argument is:

“Programmers are using other people's images to teach their AIs, just like you would teach an art student to learn styles and techniques from previous paintings. Once the AI has learned, those images are not stored anywhere in the AI's code.”

Now refute it.

2

MuNuKia t1_ixd6l5b wrote

The code in the AI has memory. The memory is updated using the copyrighted works. Then the code will call that memory to build a new image.

1

Fexxvi t1_ixd6vom wrote

“The memory is updated using the copyrighted AI”? Excuse me? Or do you mean “the memory is updated using copyrighted material”?

2

MuNuKia t1_ixd79za wrote

Ya, I updated the comment. However, my point stands. The programmer will set up the code to use the computer’s RAM. When the RAM is updated to build the training data, it’s part of the program.

1

Fexxvi t1_ixd9z7u wrote

I don't understand this. Explain exactly how this refutes my point, in simple words, please.

2

MuNuKia t1_ixdarvy wrote

Code takes data. Code stores data in the computer’s memory; code pulls data to compute the algorithm based on user input. The algorithm’s output is the combination of the user input and the data used to build the model.

The biggest bottleneck in analytics is memory. That’s why Hadoop is becoming a big deal: it lets an analyst use the memory of multiple computers at the same time. That means memory, and the data held in memory, are an aspect of a machine learning program.

1

Fexxvi t1_ixday6t wrote

OK, I think I got it. How does this disprove my comment again?

2

MuNuKia t1_ixdbns8 wrote

Because the AI code is storing data from the training set to build the algorithm.

1

Fexxvi t1_ixdfq2g wrote

“Data” as in the knowledge the AI has gained from the training, yes. Just like an art student remembers the styles and techniques they've learnt.

1

Fexxvi t1_ix8rtet wrote

Based on =/= copying. Humans can make paintings inspired by others, and it's legal.

1

darkstarmatr t1_ixa1rxk wrote

Humans can’t scrape the web for raw data that only exists thanks to actual artists putting their work on it. Not comparable.

1

Fexxvi t1_ixcr51p wrote

Not comparable in terms of results, sure, but the process is still one of learning. Now, should AIs be prevented from learning just because they're faster than humans? That would be another debate.

1

Wiskkey t1_ixaicoa wrote

Trained artificial neural networks usually don't memorize images in the training dataset.

0

Gari_305 OP t1_ix7v48h wrote

From the Article

>A class-action lawsuit filed in a federal court in California this month takes aim at GitHub Copilot, a powerful tool that automatically writes working code when a programmer starts typing. The coders behind the suit argue that GitHub is infringing copyright because it does not provide attribution when Copilot reproduces open-source code covered by a license requiring it.
>
>The lawsuit is at an early stage, and its prospects are unclear because the underlying technology is novel and has not faced much legal scrutiny. But legal experts say it may have a bearing on the broader trend of generative AI tools. AI programs that generate paintings, photographs, and illustrations from a prompt, as well as text for marketing copy, are all built with algorithms trained on previous work produced by humans.

11

nyxnars t1_ix7wyzr wrote

This should be a very tricky lawsuit

9

ilrosewood t1_ix8x3pt wrote

This is an interesting case.

If a person studied open source code for years and then sat down and wrote a new program - would there be a problem if methods and code from open source applications could be found in there? That person didn’t, in this thought experiment, copy and paste code.

I would say no. So I wouldn’t think AI written code would be any different.

But if the answer to the thought experiment is yes, then I would think AI would be the same way.

In other words, let’s take the A(rtificial) out of the equation and just focus on the I(ntelligence).

6

darkstarmatr t1_ixa219d wrote

AI taking actual data to learn from and humans interpreting data are not the same. AI is not human; why do people speak like it is?

−1

ilrosewood t1_ixavxta wrote

From a legal perspective I have to wonder what is the practical difference.

I guess in the abstract I have to wonder what the actual difference is.

It depends on the type of machine learning employed but in some respects, in the abstract, I don’t see much of a difference.

If I look at an OSS project and see a clever way a function was handled, and then I use that method later (not the whole code, just the method), some would say that’s OK and others would say I’d be violating the OSS license.

2

darkstarmatr t1_ixax1gn wrote

There’s nothing abstract about it, though. Someone intentionally designed a program that takes raw data from established artists without their permission, and sure, it isn’t directly copying the art. But it wouldn’t exist without actual human imagination and effort.

AI is not as complex as a human brain, and I could never give it credit for stealing data and generating “art” based on stolen data.

0

ilrosewood t1_ixayl0g wrote

I’d argue art and open source code are different.

Open source code I or a machine should be able to learn from.

Public art I suppose doesn’t have a license attached to it. But if I learned to paint by studying other paintings for years - am I as guilty as the computer in your mind?

(To be clear - I don’t have an answer. I find the topic interesting and I don’t firmly believe anything here yet. I know Reddit is full of trolls and bots and the like so please know I enjoy reading your replies. Thank you.)

2

darkstarmatr t1_ixb3xdc wrote

The difference between a computer taking actual data to learn and a human using their time, effort, and imagination to learn is the key here. Studying other people's art is a respectable way for beginning artists to learn, and it's advice that most professionals would give to a beginner. But that's because humans need patience and practice of their own to learn this way.

I don't consider it the same as say, an algorithm scraping the internet for art, and using that data to generate art in similar styles. Because it's not human, and there's no time, effort or imagination to respect. That's my perspective anyway.

The AI art is kind of cool, somewhat. But the fact that it NEEDS real artists' works to even function is an issue. Artists should have been given a choice to opt into a program if they wanted their art used as data.

0

AceSevenFive t1_ixb8fpn wrote

Let's say I put Copilot on the far end of the scale and you on the close end. At what point on the continuum between you and Copilot does it become morally unacceptable to you?

2

hyletic t1_ix9b165 wrote

What kind of person cares about reuse of a short snippet of their code?

What is the world coming to?

Quick, somebody, go and patent the quicksort algorithm and we can charge people $1.99 per execution.

Or, better yet, somebody patent the for loop.

2

femmestem t1_ix9egoh wrote

The problem could arguably be the opposite. The GNU and MIT licenses allow you to use the content with the understanding that you perpetuate the license in your new work. If you use an AI, you don't know whether you can copyright the output IP or not.

3
