Comments

LetterRip t1_j43v3yi wrote

This group did such a distillation but didn't share the weights, they got it down to 24 MB.

https://www.reddit.com/r/MachineLearning/comments/p1o2bd/research_we_distilled_clip_model_vit_only_from/

LAION or stability.ai or huggingface might be willing to provide free compute to distill one of the openCLIP models.

Come to think of it, stability.ai should be releasing the distilled Stable Diffusion later this month (a week or two?), and it will presumably include a distilled CLIP.

5

alkibijad OP t1_j462o4r wrote

Cool, I wasn't aware of the distilled diffusion! That could be useful, thanks for sharing!

3

LetterRip t1_j47qjhj wrote

I don't know for certain that the CLIP was distilled as well; that's an assumption on my part. Also, Emad has been fuzzy about exactly when the release will be.

2

suflaj t1_j42i6pu wrote

Nope. The authors experimented with it but said performance is lost. You can try to replace the transformers with a ResNet-50, but you'll have to do it yourself, AFAIK.

3

manOnPavementWaving t1_j42wiwx wrote

Ehm, CLIP actually has a ResNet-50 version. It's still too big, though.
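
For reference, a minimal sketch of loading that variant, assuming the open_clip package and its "RN50" weights under the "openai" pretrained tag:

```python
# Minimal sketch: load the ResNet-50 CLIP variant via open_clip
# (assumes the open_clip package; "RN50" + "openai" is the
# OpenAI-trained ResNet-50 image tower plus text transformer).
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms("RN50", pretrained="openai")
tokenizer = open_clip.get_tokenizer("RN50")

# Rough size check: parameter count of the image encoder alone.
n_params = sum(p.numel() for p in model.visual.parameters())
print(f"image encoder params: {n_params / 1e6:.1f}M")
```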

2

suflaj t1_j4308sf wrote

Ah, I wasn't aware they published the weights. But if that's too big, I'm not aware of anything significantly smaller that would retain most of the performance.

It should be relatively easy to pretrain a significantly smaller network yourself, starting from the pretrained ResNet weights, with good enough sampling and a month or so of training...
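
Something along these lines is what that could look like: a hedged sketch of distilling the CLIP image encoder into a torchvision ResNet-18 by matching normalized image embeddings. The data loader and hyperparameters are placeholders, not a published recipe.

```python
# Hedged sketch: distill CLIP's image encoder (teacher) into a ResNet-18
# (student) by regressing the teacher's L2-normalized image embeddings.
# "unlabeled_loader" is a placeholder you'd build from your own image
# corpus; hyperparameters are illustrative, not tuned.
import torch
import torch.nn as nn
import torchvision
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# Teacher: pretrained CLIP (RN50 variant), frozen.
teacher, _, preprocess = open_clip.create_model_and_transforms("RN50", pretrained="openai")
teacher = teacher.to(device).eval()
for p in teacher.parameters():
    p.requires_grad_(False)

# Infer the teacher's embedding dim from a dummy forward pass.
with torch.no_grad():
    embed_dim = teacher.encode_image(torch.zeros(1, 3, 224, 224, device=device)).shape[-1]

# Student: small ResNet whose head is resized to the CLIP embedding dim.
student = torchvision.models.resnet18(weights=None)
student.fc = nn.Linear(student.fc.in_features, embed_dim)
student = student.to(device)

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

def distill_step(images):
    images = images.to(device)
    with torch.no_grad():
        t = nn.functional.normalize(teacher.encode_image(images), dim=-1)
    s = nn.functional.normalize(student(images), dim=-1)
    loss = (1.0 - (s * t).sum(dim=-1)).mean()  # cosine-distance loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# for images, _ in unlabeled_loader:  # placeholder: your own image dataset
#     distill_step(images)
```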

2

alkibijad OP t1_j462x0f wrote

I was hoping to just fine-tune the model and have the training last days at most. Seems like my best chance is to wait for the distilled Stable Diffusion and use its CLIP encoder, as u/LetterRip mentions.

2

suflaj t1_j46gu2z wrote

I would proceed with caution, because smaller models are generally not that easy to fine-tune. In fact, the whole point of a larger model is that it not only contains a lot of information, but that it is fairly easy to adapt to new tasks, because it has plenty of "space" to restructure itself. A smaller model trying to restructure itself is more likely to diverge, or to not adapt to the task at all.

In that case it would be more viable to run the larger model (layer by layer, if memory is tight), fine-tune it, and then distill it onto a smaller one. That way you use the maximum potential of the larger model to adapt to the different task, and then distill it into whatever you need.
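
A rough skeleton of that two-stage recipe, under the same assumptions as above (open_clip teacher, torchvision student, placeholder data loaders and hyperparameters). As a simplification, the encoder stays frozen and only a task head is fine-tuned here:

```python
# Hedged skeleton of "fine-tune the big model, then distill":
# stage 1 fits a task head on top of the frozen CLIP image encoder,
# stage 2 distills the fine-tuned teacher's logits into a small student.
# "task_loader", num_classes and the hyperparameters are placeholders.
import torch
import torch.nn as nn
import torchvision
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
num_classes = 10  # placeholder for your downstream task

clip_model, _, preprocess = open_clip.create_model_and_transforms("RN50", pretrained="openai")
clip_model = clip_model.to(device).eval()
for p in clip_model.parameters():
    p.requires_grad_(False)
with torch.no_grad():
    embed_dim = clip_model.encode_image(torch.zeros(1, 3, 224, 224, device=device)).shape[-1]

# Stage 1: fine-tune a task head on top of the (frozen) CLIP image encoder.
head = nn.Linear(embed_dim, num_classes).to(device)
opt1 = torch.optim.AdamW(head.parameters(), lr=1e-3)

def teacher_logits(images):
    return head(clip_model.encode_image(images))

# for images, labels in task_loader:  # placeholder labeled data
#     loss = nn.functional.cross_entropy(teacher_logits(images.to(device)), labels.to(device))
#     opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: distill the fine-tuned teacher's soft predictions into a small student.
student = torchvision.models.resnet18(weights=None, num_classes=num_classes).to(device)
opt2 = torch.optim.AdamW(student.parameters(), lr=1e-4)
T = 4.0  # distillation temperature (illustrative)

# for images, _ in task_loader:
#     images = images.to(device)
#     with torch.no_grad():
#         soft = nn.functional.softmax(teacher_logits(images) / T, dim=-1)
#     log_probs = nn.functional.log_softmax(student(images) / T, dim=-1)
#     loss = nn.functional.kl_div(log_probs, soft, reduction="batchmean") * T * T
#     opt2.zero_grad(); loss.backward(); opt2.step()
```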

3