Submitted by TiredOldCrow t3_yli0r7 in MachineLearning

Email announcement from OpenAI below:

> DALL·E is now available as an API

> You can now integrate state of the art image generation capabilities directly into your apps and products through our new DALL·E API.

> You own the generations you create with DALL·E.

> We’ve simplified our Terms of Use and you now have full ownership rights to the images you create with DALL·E — in addition to the usage rights you’ve already had to use and monetize your creations however you’d like. This update is possible due to improvements to our safety systems which minimize the ability to generate content that violates our content policy.

> Sort and showcase with collections.

> You can now organize your DALL·E creations in multiple collections. Share them publicly or keep them private. Check out our sea otter collection!

> We’re constantly amazed by the innovative ways you use DALL·E and love seeing your creations out in the world. Artists who would like their work to be shared on our Instagram can request to be featured using Instagram’s collab tool. DM us there to show off how you’re using the API!  

> - The OpenAI Team

422

Comments


master3243 t1_iuz0d6h wrote

Do they implicitly mean DALLE 2 or do they actually mean 1?

I can't tell anymore, and I feel it's definitely possible they're trying to push the generic name "DALL-E" to refer to their newest model.

I still sometimes jokingly refer to it as "unCLIP" as that is what they called their model in the original paper.

70

nomadiclizard t1_iuz219j wrote

How can you 'own' the generated images? When anyone else, using the same prompt, gets the same image? The only thing that makes sense is that you get a non-exclusive license to use it, but so does everyone else using that prompt.

15

yaosio t1_iuz3b7t wrote

In the US, AI-created art can't be covered by copyright, so it doesn't matter if you own it; anybody can use it. Expect this to change when Disney uses AI to fully create something and demands copyright law changes.

−12

dat_cosmo_cat t1_iuz49g9 wrote

It is easy to read it like an ad for NFTs; we've seen so much bullshit out of that community that I don't blame anyone for getting triggered. The implication behind this seems different, though: it is advertising an opportunity to profit off of free use, rather than scarcity.

1

AuspiciousApple t1_iuzastm wrote

>Do they implicitly mean DALLE 2 or do they actually mean 1?

OpenAI is terrible for this. You should assume that it's some random black box that they can change at any time.

There was recently a minor scandal in the NLP community: the GPT-3 variants served through the API were trained differently from what the papers described, which invalidated tons of papers.

87

AuspiciousApple t1_iuzb7wc wrote

It's still a fair question. Suppose I generate something with a common seed (1234, 42, 69420, whatever) and the default settings of a popular Stable Diffusion UI. Other people might conceivably end up generating a very similar image, or even the same one, if they use the exact same prompt.
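
For what it's worth, here's a minimal sketch of that scenario using the open-source diffusers library; the checkpoint name, prompt, and settings below are just illustrative examples, not anything specific to DALL-E or this thread:

```python
# Sketch: deterministic Stable Diffusion generation with a fixed "common" seed.
# Checkpoint, prompt, and settings are illustrative examples only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a sea otter painted in the style of Vermeer"
generator = torch.Generator(device="cuda").manual_seed(42)  # fixed, widely used seed

# With the same checkpoint, scheduler, seed, prompt, and settings, this call is
# reproducible (at least on the same hardware and library versions), so two
# users running it independently can end up with identical outputs.
image = pipe(
    prompt,
    generator=generator,
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("otter_seed42.png")
```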

In that case, does the first person to generate it hold the copyright? Do they lose it once it's been generated a second time?

12

World177 t1_iuzgmpa wrote

Using the same seed, you'll get the same image with the same prompt. Either way, both are partially a generalization of what those words represent.

> When anyone else, using the same prompt, gets the same image?

Though, to answer them, legal copyright is concerned with human creative effort. Choosing novel and interesting inputs for a final art piece is somewhat comparable to choosing a gradient when using Photoshop. Whether the copyright is enforceable will likely depend on whether a court determines that enough creative effort went into creating the image.


In the following video, a copyright lawyer on YouTube (Lawful Masses) covers the spectrum of fair use and copyright ownership. I think it provides insight into how the legal system might determine whether someone owns the copyright to AI-generated art. They also recently covered AI-generated art in another video, but I think the first video better explains how the law isn't a set of simple binary choices.

19

World177 t1_iuzhhho wrote

Machine learning models of text are generalizations of what the text represents. A generalization being copyrightable seems like a bad idea, though I don't think the legal system has really decided. In my opinion, owning a generalization is like saying Apple should own all color gradients because they used them predominantly in their advertising. It seems to cover too much; that said, Apple probably does own the copyright on finished art pieces that use uncopyrightable gradients to create something.

5

master3243 t1_iuzrd2n wrote

> In the US, AI-created art can't be covered by copyright

What? Literally, the answer was one Google search away.

Kashtanova obtained a US copyright registration for an 18-page comic whose art was created with Midjourney.

Sources:

Artist receives first known US copyright registration for latent diffusion AI art

A New York Artist Claims to Have Set a Precedent by Copyrighting Their A.I.-Assisted Comic Book. But the Law May Not Agree

5

ComplexColor t1_iuzt3zw wrote

I'm not very familiar with generative models: are there explicit or implicit "techniques" that would prevent the model from plagiarizing the training material? Otherwise it seems rather problematic to claim copyright on what could be an existing piece of art.

I realize that the likelihood might be infinitesimal but after billions and billions of generations some unlikely but clearly plagiarized works could be produced.

20

midasp t1_iv0bjwv wrote

The text prompt "chicken" is just the first step. The user still has a mental model of what is considered an acceptable "chicken", and the act of selecting the one image that best matches that mental model from a cluster of AI-generated "chicken" images should also count for something where creativity and copyright are concerned.

2

aidv t1_iv0ec8i wrote

”We don’t want to deal with any legal cases, so we’ll let you deal with them instead”

13

hybridteory t1_iv0f5y5 wrote

Yes, I find it incredibly strange that when speaking about Codex, everyone is worried about the model regurgitating the code it has been trained on, citing the GPL and other licenses; yet this seems to be much less of an issue when it comes to images (judging by the anecdotal evidence in these discussions), even though images carry licenses too. It just goes to show that humans perceive text and images very differently from a creative point of view.

−2

Saytahri t1_iv0jye8 wrote

It's just procedural generation. I don't think that just because someone could follow the same steps as you to produce the same thing, it can't be your intellectual property; otherwise, how would people claim ownership of anything made in software? You could call the mouse and keyboard inputs "prompts" if you wanted; DALL-E is just easier to use.

In fact, you could make a very simple neural network that turns a seed into an image and is capable of generating any image; the seed would just need to be as big as the output. That wouldn't invalidate my ability to hold intellectual property on images just because someone could produce the same thing with the same input.
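
A toy version of that idea (purely illustrative; nothing like how DALL-E or Stable Diffusion actually work) might look like this:

```python
# Toy "generator" whose seed is exactly as large as the output image, so every
# possible image corresponds to some seed. Purely illustrative.
import numpy as np

H, W, C = 512, 512, 3  # output resolution and channels

def toy_generator(seed: np.ndarray) -> np.ndarray:
    """Map a seed of H*W*C bytes to an H x W x C image by simply reshaping it."""
    assert seed.shape == (H * W * C,) and seed.dtype == np.uint8
    return seed.reshape(H, W, C)

# Any target image is reachable: its own flattened bytes are the seed that produces it.
target = np.random.randint(0, 256, size=(H, W, C), dtype=np.uint8)
seed = target.reshape(-1)
assert np.array_equal(toy_generator(seed), target)
```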

We should also expect that as image generators get better, we'll eventually be able to generate pretty much anything with a detailed enough prompt. I don't see why this would affect the ability to own the outputs.

2

pdillis t1_iv0z284 wrote

I've been using AI/neural networks to make art since 2018, and this is the argument that has (very recently) gained a lot of popularity in defense of AI art but baffles me the most. A human artist and a neural network are not the same: the NN is just a tool, which is why the user is still considered the artist. Giving human qualities to the NN whenever convenient is a detriment to the movement as a whole.

1

C0DASOON t1_iv1622m wrote

Stating that a model which uses existing art only to update its parameters should not need special permission to be exposed to that art, and drawing an analogy to how human artists don't need permission to do the same, is not giving human qualities to the model. Unless, that is, your argument is that the only reason humans don't need permission to view or take inspiration from art is that we make a special exception for viewing and inspiration when performed by human beings, and that otherwise all exposure to art requires permission from the copyright holder, which is just as stupid as the existence of copyright in the first place. You do not, and should not, need special permission to use art, or anything else, to update model parameters.

0

FutureIsMine t1_iv1annk wrote

OpenAI is feeling a lot of pressure from Stability AI, and they kinda have to do something now that they've got competition.

6

petseminary t1_iv1b96a wrote

AI does not draw inspiration. Seeing something and being inspired by it is human. Processing lots of photos of artworks to produce similar works rehashes that data in a fundamentally different way.

−2

Bornaia t1_iv1bh1c wrote

What can you do with an AI image? I mean, can you sell it, even though someone else could make the same one? What is AI art used for, then?

1

mgostIH t1_iv1fosb wrote

If you looked at any of the articles instead of stopping at the title, you'd understand that what "AI work can't be copyrighted" refers to is that you can't attribute copyright to the artificial intelligence itself. All of these judgements allow any human who puts even minimal effort into the generation (for example, typing the prompt) to own the copyright for the image instead.

1

Bearmam123 t1_iv1i6bn wrote

What about AIs plagiarizing/stealing art from artists?

2

kaibee t1_iv1kckb wrote

>AI does not draw inspiration. Seeing something and being inspired by it is human. Processing lots of photos of artworks to produce similar works rehashes that data in a fundamentally different way.

So, take Stable Diffusion: the model is 4 GB and can be reduced to 2 GB without much loss in quality. It was trained on ~5 billion images, and 1 gigabyte is a billion bytes, so it is effectively doing something like compressing a 512x512x3-byte image into less than a single byte. This is transformative, so fair use is a valid defense, imo.
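
For reference, the back-of-the-envelope arithmetic behind that, using the approximate figures from this comment:

```python
# Back-of-the-envelope arithmetic using the approximate figures above:
# ~5 billion training images, a ~2 GB reduced model, 512x512 RGB images.
model_bytes = 2e9                     # ~2 GB of weights
num_training_images = 5e9             # ~5 billion images
bytes_per_image = model_bytes / num_training_images   # ~0.4 bytes per image

raw_image_bytes = 512 * 512 * 3       # 786,432 bytes per uncompressed image
ratio = raw_image_bytes / bytes_per_image              # ~2,000,000x

print(f"{bytes_per_image:.1f} bytes of weights per training image")
print(f"~{ratio:,.0f}x smaller than the raw pixels")
```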

3

yaosio t1_iv1krc6 wrote

https://www.smithsonianmag.com/smart-news/us-copyright-office-rules-ai-art-cant-be-copyrighted-180979808/

>The U.S. Copyright Office (USCO) once again rejected a copyright request for an A.I.-generated work of art, the Verge’s Adi Robertson reported last month. A three-person board reviewed a request from Stephen Thaler to reconsider the office’s 2019 ruling, which found his A.I.-created image “lacks the human authorship necessary to support a copyright claim.”

AI-created work cannot be copyrighted because a human must author it. If you want to copyright AI-created work, then you'll need to get the laws changed.

0

petseminary t1_iv1lbgu wrote

It ain't shit without all the human effort that went into creating the training data. To my displeasure, I think the law will see it your way, but I don't think people should be so flippant about marginalizing over so much human creative effort. I have no problem with acquiring the rights to photos to train image generators, because that's the true cost of these products. It has nothing to do with final file size.

1

kaibee t1_iv1rktu wrote

> It ain't shit without all the human effort that went into creating the training data. To my displeasure, I think the law will see it your way, but I don't think people should be so flippant about marginalizing over so much human creative effort. I have no problem with acquiring the rights to photos to train image generators, because that's the true cost of these products. It has nothing to do with final file size.

I'm not sure what you mean by 'marginalizing'. The contribution of the artists is valid and necessary. I know a lot of the "common folk" in the SD community enjoy that some artists are upset by this whole thing, but I think on the whole the community is supportive of artists.

Though, I do have another angle here: copyright is absolutely out of control, and the vast majority of it at this point is accruing to the benefit of Disney, as a result of lobbying on behalf of Disney and others. I think it is fundamentally absurd that children can grow up with beloved characters and die of old age before the copyright on those characters expires. And that's kind of the whole issue here, right? If artists wanted a 20-year copyright term on something, I think that would be good and reasonable. They should be able to exclude their images from training data. I'd even go as far as to say there should be some associated metadata to facilitate that, that the government should enforce compliance, that artists should be able to sue, etc., the whole 9 yards.

But let's say we keep copyright as it is: death of the author + whatever number of decades. Even if you could enforce the law (I can't even imagine how you would, especially in the coming years), all this does is push the problem out for artists until either models get better at learning from less data (so that you can make do with the far more limited amount of training data you buy the rights for) or enough data enters the public domain.

The Luddites weren't wrong. They really did suffer as a result of technological disruption. As with all things, the solution is a basic income funded by a land-value tax.

2

Living-Substance-668 t1_iv22uy0 wrote

That may be, but either way there has been a dramatic transformation of the original works. Copyright is not an infinitely extended ownership right over information. It is a special exception (to free speech and press) that we offer conditionally to encourage people to produce things, by allowing them to profit exclusively from their production. Like patents. Copyright does not prohibit producing a "similar" work to a copyrighted work, or using similar techniques as a copyrighted work, or else every drawing of a soup can would owe royalties to Andy Warhol.

2

petseminary t1_iv26lvl wrote

I agree with you here. I think a reasonable example is the Wayback Machine. Very useful for archiving web content that has disappeared for whatever reason (usually lapse of web hosting). But if site/content creators want their content excluded, the Wayback Machine operators are very responsive and will stop hosting it. I anticipate that asking for your content to be excluded from training sets after the fact will be much less pleasantly received, as the model would have to be retrained, and that is expensive.

1

farmingvillein t1_iv2bbw6 wrote

  1. If there can be a lawsuit, there eventually certainly will be one.

  2. The issues here are--for now--different. The current claim is that Codex is copy-pasting things that need licenses attached. (Whether this is true will of course be played out in court.) For image generation, no one has made the claim--yet--that these systems are emitting straight copies (at any meaningful scale) of someone else's original pictures.

1

hybridteory t1_iv2ebe5 wrote

Codex is not technically copy-pasting; it is generating a new output that is (almost) exactly the same as, or indistinguishable to the eyes of a human from, the input. Sounds like semantics, but there is no actual copying. There are already music-generating algorithms that can generate short samples indistinguishable from their inputs (memorisation). Dall-E 2 is not there yet, but we are close to prompting "Original Mona Lisa painting" and being given back the original Mona Lisa painting with striking similarities. There are already several generative image models that can mostly memorise the inputs used to train them (quick example found using Google: https://github.com/alan-turing-institute/memorization).

0

Western-Hawk4469 t1_iv2rswy wrote

Do you ever get the same image with the same prompt and settings? The answer is no: you will always get a unique image, no matter what. You can get "similar" images and styles, but never the same one, so all results will be different, and all results will be unique.

Also, in my opinion, progress and new inventions have always taken their toll on the present. Technology has been reducing human workloads all over the world for many years now, with machines taking over wherever companies can afford to buy the new robots, machines, and technology. Why should it be any different with art? In the end, art is what you see and feel, and no one sees and feels the same about one piece of art. So if you can make someone feel something and see something they like, does it matter who made it and how? I'm sure plenty of people and companies suffered economic losses when the wheel or the car was invented. If you owned a horse-carriage company at the time, well, do the math. :P

0

farmingvillein t1_iv2vqmx wrote

> Codex is not technically copy-pasting; it is generating a new output that is (almost) exactly the same as, or indistinguishable to the eyes of a human from, the input.

Nah, it is literally generating duplicates. This is copying, in the eyes of the law. Whether this is an actual legal problem remains to be seen.

> Dall-E 2 is not there yet, but we are close to prompting "Original Mona Lisa painting" and being given back the original Mona Lisa painting with striking similarities.

This is confused. Dall-E 2 is "not there yet", as a general statement, because they specifically have trained it not to do this.

1

farmingvillein t1_iv38uzt wrote

That is my point? I'm not sure how to square your (correct) statement with your prior statement:

> Dall-E 2 is not there yet, but we are close to prompting "Original Mona Lisa painting" and being given back the original Mona Lisa painting with striking similarities

1

Florian-Dojker t1_iv3cbjt wrote

Yet I doubt an API is enough. What makes SD move so fast and spawn innovative new ways to use it is that the model can be tinkered with by anyone, because it's all open source. I feel any comparable neural network, even if it performs slightly better, won't be able to compete with SD right now. Maybe when progress slows down and other models can expose all the different things SD does through an API, such a model will be able to compete by simply performing better, but not yet.

2

World177 t1_iv3vbgs wrote

I don’t agree that a sentence of text should grant copyright to a generalization of the meaning of those words. I think doing that could be harmful, and destroy actual creative copyrightable uses like if a developer used the model to rapidly develop a game, or an author used it to help illustrate their book.

Though, I am not sure how much the legal system will value the creation of a sentence as creative input.

1

World177 t1_iv3wojd wrote

I don’t think it should be compared to a collage, because that’s not what the model is doing. It’s taking words, and predicting what humans expect to see when given these words describing the image. This is an attempt at generalization, and should start to look similar between models as they improve in quality.

If you take a course on Duolingo and learn a language using their copyrighted images, you didn't steal Duolingo's content when you applied that knowledge to make creative works for someone in the new language. Though, I think there is some sentiment from people who misunderstand this process and believe that the original owner of the copyrighted content should be entitled to partial ownership too.

1

Oakenshield-468 t1_ivpmbnj wrote

I have used DALL-E online and am looking forward to being able to compare the API version to other competitor models that have been coming out recently.

1