Comments
IntelArtiGen t1_ir88qh5 wrote

>While our internal testing suggest much of explicit and violent content can be filtered out, there still exists social biases and stereotypes which are challenging to detect and filter. We have decided not to release the Imagen Video model or its source code until these concerns are mitigated.

I think they'll never be mitigated, and we'll have to wait for other people to reproduce the results and make them open source.

173

E_Snap t1_ir8fmal wrote

It’s good to see that hand models will have job security for a while yet

42

Unicycldev t1_ir8w2fj wrote

These videos are super trippy. It’s like these algorithms have taken shrooms.

31

waa007 t1_ir8zhmo wrote

It’s coming

0

bobwmcgrath t1_ir927y5 wrote

Think of all the porn it will make.

19

massimosclaw2 t1_ir947iu wrote

Can't wait til this actually starts getting indistinguishable in terms of quality

7

master3243 t1_ir95zxe wrote

It reminds me of image generation in the early days (a few years ago lol) when it wasn't yet super realistic.

Although this is faster than I expected, it's still obviously not at the level of Imagen with image generation.

26

master3243 t1_ir9a5wt wrote

Image generation is by definition an easier task, so video generation will never fully catch up.

But do you not think that at some point in the future, video generation in the year 20XX will be better than image generation in 2022?

Even in the year 2050 or 2100?

20

canyonkeeper t1_ir9nb50 wrote

It’s not research if it’s not replicable /not s

22

RBUexiste-RBUya t1_ir9npts wrote

Our dreams or nightmares, when humans (and other animals) are asleep, are trippier than that :-D

That's how our brain fights against itself to discard impossible things, movements, situations, physics, etc. Dreams and trippy minds are the best neuronal thinking (afterwards, a good discarding of unwanted results is needed, of course).

Do you remember that old cat-face-recognition model that only saw cat faces in supermarkets, stores, etc.? That was very trippy too (and a little schizophrenic).

1

IanMazgelis t1_ir9q097 wrote

This alone is why I have zero interest in proprietary diffusion algorithms. I want to make whatever I can imagine, not what shareholders tell me I'm allowed to think about.

26

LyutsiferSafin t1_ir9tx5e wrote

I made one too. I can’t show it to you guys, just like Google. What’s the point of showcasing something and never giving access to it?

13

FriendlyRope t1_iragqww wrote

He does have a point: if there is no independent verification of an experiment (i.e. replication, or at least independent inspection), the experiment cannot be trusted. For example, the results shown could be cherry-picked, or the test data could be contaminated by training data.

2

ECEngineeringBE t1_irahdyd wrote

Sure, but just because you can't replicate it doesn't mean that nobody can. We already had Facebook's paper on video generation a week ago, and we also have Stability AI saying that they're planning their own model.

And also, just because the results can't be fully trusted (due to the high barrier to replication) does not mean that the publication isn't "research".

5

gwern t1_irassaq wrote

They said they were considering releasing Make-A-Scene but never wound up doing so, even though it's probably not much better than the released SD model and there would seem to be fairly minimal marginal harm from a release. So I don't expect Make-A-Video to be released either, even if they say they might.

6

that_boi_zesty t1_iraxzc0 wrote

could this be used to artificially continue videos like how some image models can "zoom out"?

3

cleverestx t1_irbc5ii wrote

Hopefully Unstable Diffusion gets a hold of this and implements the technology, so people don't have to wait forever to create stuff, as grown-ups should be able to, without anything beyond the most extreme legal restrictions in place.

2

Veedrac t1_irbdxgp wrote

It blows my mind that this sub has basically become ML Flat Earth. There's no legitimate way people here actually think this research can't or won't replicate. “Oh, but I can't personally ~~put my foot on the moon~~ play with this exact model exactly today” is such a fake argument.

7

Gilberto125 t1_irbr5dc wrote

I want to see a movie done only with this. Just with the script as input.

2

xrailgun t1_irc7qic wrote

In case you're serious: physics papers are crammed full of mathematical derivations that logically support their hypotheses, and then include all relevant conditions and parameters such that if/when you get access to the collider and key in the same values, you could replicate them.

In ML, mathematical support still exists to varying degrees, but without the source code being shared, you can't replicate the results even if you had access to Google's/OpenAI's/Nvidia's billion-dollar hardware.

2

yaosio t1_irch8j4 wrote

It burns my bread that they are always worried about explicit scenes and violence. If they were producers for Django Unchained, they would demand all the violence and bad words be removed.

2

eposnix t1_ircvtgw wrote

Wait, what? I mean, maybe not 100%, but there are metric fuckloads of open source implementations of closed source models replicated by just the method in the paper.

1

BalorNG t1_irdv27c wrote

Yea. The problem is never with the model; it is with the people. In a way, a model trained on a huge corpus of data is the most "democratic" way of representing reality, and removing "biases" from it is castrating it. Those who would exploit those biases need to be dealt with on an individual basis.

3

brates09 t1_irhz7ml wrote

Are there examples of recent big-model work that hasn't been replicable in terms of quality? It seems much more likely to be attributable to the conservatism of the companies than to deception about the results.

1

Decent-Possible-9714 t1_irk8q2e wrote

What are the true technical limitations of their model (other than the obvious need to improve training/testing accuracy)? They don't seem to explicitly detail them (Meta explained numerous issues with their text-to-video model).

1