Comments


IntelArtiGen t1_ir88qh5 wrote

>While our internal testing suggest much of explicit and violent content can be filtered out, there still exists social biases and stereotypes which are challenging to detect and filter. We have decided not to release the Imagen Video model or its source code until these concerns are mitigated.

I think they'll never be mitigated, and we'll have to wait for other people to reproduce the results and make them open-source.

173

Erosis OP t1_ir8cdlx wrote

It seems that Google is being very conservative with the release of their diffusion models, even compared to Meta's and OpenAI's closed-source approaches.

Luckily, Stability AI seems to be working on a video generating diffusion model.

77

KeikakuAccelerator t1_ir990ti wrote

Meta (FAIR) has been very open-source.

13

Erosis OP t1_ir9kj9k wrote

I'm referring to their new Make-A-Video model, but I suppose they just announced that a few days ago. Hopefully they fully release that model.

13

gwern t1_irassaq wrote

They said they were considering releasing Make-A-Scene but never wound up doing so, even though it's probably not much better than the released SD model and there would seem to be fairly minimal marginal harm from a release. So I don't expect Make-A-Video to be released either, even if they say they might.

6

dexter89_kp t1_iragyoj wrote

That was trained on Shutterstock data. They can't release it.

−1

gpahul t1_irbobh4 wrote

I noticed MS is also contributing to doc-related AI research!

1

zaptrem t1_ir8ws0l wrote

Don’t worry, all we have to do to make the problem go away is end racism and all other biases in real life!

41

BalorNG t1_irdv27c wrote

Yea. The problem is never with the model - it is with the people. In a way, a model trained on a huge corpus of data is the most "democratic" way of representing reality - removing "biases" from it is castrating it. Those who exploit those biases need to be dealt with on an individual basis.

3

IanMazgelis t1_ir9q097 wrote

This alone is why I have zero interest in proprietary diffusion algorithms. I want to make whatever I can imagine, not what shareholders tell me I'm allowed to think about.

26

EmbarrassedHelp t1_ircm1je wrote

It's certainly harder to trust that closed-source implementations can do what they claim to do.

1

brates09 t1_irhz7ml wrote

Are there examples of recent big-model work that hasn't been replicated in terms of quality? It seems much more likely to be attributable to the conservatism of the companies than to deception about the results.

1

101111010100 t1_ir9axqd wrote

Thank god humanity is still safe. Once there are open-source versions, a lot of people will be harmed. /s

19

sam__izdat t1_irc0229 wrote

Any confluence with the interests of private capital is purely coincidental with our selfless quest for the betterment of man.

3

yaosio t1_irch8j4 wrote

It burns my bread that they are always worried about explicit scenes and violence. If they were producers for Django Unchained they would demand all the violence and bad words be removed.

2

ellioso t1_ir9ncfb wrote

It'll probably never be perfect, but that doesn't mean it won't get released. Google's SafeSearch filter is really good.

2

cleverestx t1_irbc5ii wrote

Hopefully Unstable Diffusion gets a hold of this and implements this technology, so people don't have to wait forever to create stuff, as grownups should be able to, without anything besides the most extreme legal restrictions in place.

2

Decent-Possible-9714 t1_irk8q2e wrote

What are the true technical limitations of their model (other than the obvious training/testing accuracy improvements)? They don't seem to explicitly detail them (Meta explained numerous issues with their text-to-video model).

1

E_Snap t1_ir8fmal wrote

It’s good to see that hand models will have job security for a while yet

42

BalorNG t1_irdv58b wrote

Handjob security personnel... hmm...

1

Unicycldev t1_ir8w2fj wrote

These videos are super trippy. It's like these algorithms have taken shrooms.

31

master3243 t1_ir95zxe wrote

It reminds me of image generation in the early days (a few years ago lol) when it wasn't yet super realistic.

Although this is faster than I expected, it's still obviously not at the level of Imagen with image generation.

26

ThePerson654321 t1_ir98sqw wrote

I find it difficult to believe we will ever achieve the same fidelity in video generation as in image generation.

−1

master3243 t1_ir9a5wt wrote

Image generation is by definition an easier task, so video will never fully catch up.

But do you not think that at some point in the future, video generation in the year 20XX will be better than image generation in 2022?

Even in the year 2050 or 2100?

20

ThePerson654321 t1_ir9adad wrote

Perhaps a few seconds but never a full movie.

−16

master3243 t1_ir9bp3h wrote

What about a coherent 30-second silent clip from a short description that is as difficult to distinguish from real footage as current SOTA image generation is from real images?

6

cleverestx t1_irbcdi0 wrote

Why not? I admit it IS more challenging, but video is only a series of images...

1

RBUexiste-RBUya t1_ir9npts wrote

Our dreams and nightmares, when humans (and other animals) are asleep, are trippier than that :-D

That's how our brain fights against itself to discard impossible things, movements, situations, physics, etc. Dreams and trippy minds are the best neuronal thinking (afterwards, a good discarding of unwanted results is needed, of course).

Do you remember that old cat-face-recognition model that only saw cat faces in supermarkets, stores, etc.? That was very trippy too (and a little schizophrenic).

1

canyonkeeper t1_ir9nb50 wrote

It’s not research if it’s not replicable /not s

22

Veedrac t1_irbdxgp wrote

It blows my mind that this sub has basically become ML Flat Earth. There's no legitimate way people here actually think this research can't or won't replicate. "Oh, but I can't personally ~~put my foot on the moon~~ play with this exact model exactly today" is such a fake argument.

7

ECEngineeringBE t1_ir9q399 wrote

Damn, that must mean that all those experiments they run at CERN aren't research because I can't replicate them in my kitchen.

−11

xrailgun t1_ira22vy wrote

Shit straw man take.

14

ECEngineeringBE t1_irac32j wrote

How so?

I could say the same about that "shit gatekeeping take"

1

xrailgun t1_irc7qic wrote

In case you're serious: physics papers are crammed full of mathematical derivations that first logically support their hypotheses, then include all relevant conditions and parameters, such that if/when you get access to the collider and plug in the same values, you could replicate them.

In ML, mathematical support still exists to varying degrees, but without shared source code, even if you had access to Google's/OpenAI's/Nvidia's billion-dollar hardware, you couldn't replicate it.

2

eposnix t1_ircvtgw wrote

Wait, what? I mean, maybe not 100%, but there are metric fuckloads of open source implementations of closed source models replicated by just the method in the paper.

1

FriendlyRope t1_iragqww wrote

He does have a point: if there is no independent verification of an experiment (i.e. replication, or at least independent inspection), its results cannot be trusted. For example, the results shown could be cherry-picked, or the test data could be contaminated by training data.

2

ECEngineeringBE t1_irahdyd wrote

Sure, but just because you can't replicate it doesn't mean that nobody can. We already had Facebook's paper on video generation a week ago, and we also have Stability AI saying they're planning their own model.

And also, just because the results can't be fully trusted (due to the high barrier to replication) does not mean the publication isn't "research".

5

throwawayguy91 t1_irbfj1m wrote

That's the whole reason ATLAS and CMS work independently of each other.

1

bobwmcgrath t1_ir927y5 wrote

Think of all the porn it will make.

19

nomadiclizard t1_ir9de47 wrote

Yessssss! As soon as it's prised from Google's corporate nanny-state filters and trained up on some proper sources :D

17

LyutsiferSafin t1_ir9tx5e wrote

I made one too. I can’t show it to you guys, just like google. What’s the point of showcasing something and never giving access to it?

13

nraw t1_ira536l wrote

Media talk. Corporate buzzwords and wanting to jump on board.

2

massimosclaw2 t1_ir947iu wrote

Can't wait til this actually starts getting indistinguishable in terms of quality

7

that_boi_zesty t1_iraxzc0 wrote

could this be used to artificially continue videos like how some image models can "zoom out"?

3

Gilberto125 t1_irbr5dc wrote

I want to see a movie made only with this, with just the script as input.

2

waa007 t1_ir8zhmo wrote

It’s coming

0