Submitted by shingekichan1996 t3_10ky2oh in MachineLearning

This thread is dedicated to exploring techniques for self-supervised contrastive learning that work with standard batch sizes. I am seeking information on current methods in this field, specifically those that do not rely on large batch sizes.

I am familiar with the SimSiam paper from Meta research, which uses a batch size of 256 across 8 GPUs. However, for individuals with limited resources such as myself, access to a large number of GPUs may not be feasible. As a result, I am interested in learning about methods that can be used with smaller batch sizes and a single GPU, such as those suitable for training on 1024x1024 input images.

Additionally, I am curious about any more efficient architectures that have been developed in this field. This includes, but is not limited to, techniques used in natural language processing that may have applications in other areas of artificial intelligence.

*Posted the same question on the PyTorch forums; reposting here for wider reach.*

73

Comments


melgor89 t1_j5u766t wrote

There is a great paper analyzing the correlation between batch size and accuracy. They propose a loss function that can train SimCLR at a batch size of 256 instead of 4k, so there is some research in this domain: https://arxiv.org/abs/2110.06848

17
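
For reference, that is the decoupled contrastive learning (DCL) idea. A minimal sketch of the core change, assuming a simplified setting where negatives come only from the opposite view (the paper's full loss also uses same-view negatives): standard InfoNCE keeps the positive pair in the softmax denominator; decoupling removes it.

```python
import torch
import torch.nn.functional as F

def dcl_loss(z1, z2, temperature=0.1):
    """Simplified decoupled contrastive loss: InfoNCE with the positive
    pair removed from the denominator. z1, z2: (B, D) embeddings of two
    augmented views, row i of z1 matching row i of z2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature             # (B, B) similarity matrix
    pos = sim.diagonal()                        # positive-pair similarities
    # mask the diagonal so positives never appear among the negatives
    off_diag = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = torch.logsumexp(sim.masked_fill(~off_diag, float('-inf')), dim=1)
    return (neg - pos).mean()
```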

koolaidman123 t1_j5uk2ai wrote

Cache your predictions on each smaller batch, together with the labels, until you reach the target batch size, then run your loss function.

So instead of calculating the loss on every sub-batch and accumulating, as in gradient accumulation, you only calculate the loss once you reach the target batch size.

10
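
A minimal sketch of this caching idea; `encoder` and `micro_batches` (a list of (view1, view2) pairs) are hypothetical stand-ins for your model and data:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # standard in-batch-negatives loss: row i of z1 matches row i of z2
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Embed small micro-batches, cache the embeddings, then compute a single
# loss over the concatenated "big" batch.
cache1 = [F.normalize(encoder(x1), dim=1) for x1, _ in micro_batches]
cache2 = [F.normalize(encoder(x2), dim=1) for _, x2 in micro_batches]
loss = info_nce(torch.cat(cache1), torch.cat(cache2))
loss.backward()  # caveat: backprop still needs the activations of every
                 # micro-batch, so this alone does not save memory -- see
                 # the gradient-caching discussion further down
```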

rapist1 t1_j5xmv9n wrote

How do you implement the caching? You have to cache all the activations to do the backward pass.

3

[deleted] t1_j5w9rbv wrote

[deleted]

−8

koolaidman123 t1_j5wbk37 wrote

That's not the same thing...

Gradient accumulation calculates the loss on each sub-batch. That doesn't work with in-batch negatives, because you need to compare inputs from batch 1 to inputs from batch 2; hence offloading and caching the predictions, then calculating the loss once over the combined batch.

That's why gradient accumulation doesn't work to simulate large batch sizes for contrastive learning, if you're familiar with it.

8

mgwizdala t1_j5tyf1f wrote

If you are willing to trade time for batch size, you can try gradient accumulation.

8

RaptorDotCpp t1_j5u0yxq wrote

Gradient accumulation is tricky for contrastive methods that rely on having lots of negatives in a batch.

13

altmly t1_j5uglpx wrote

I'm confused. Gradient accumulation is exactly equivalent to batching as long as the data is the same, unless you use things like batch norm (you shouldn't).

1

Paedor t1_j5ur6tx wrote

The trouble is that contrastive methods often compare elements from the same batch, instead of treating elements as independent like pretty much all other ML (except batchnorm).

As a simple example with a really weird version of contrastive learning: with a batch of 2N, contrastive learning might use the 4N^2 distances between batch elements to calculate a loss, while with two accumulated batches of N, contrastive learning could only use 2N^2 pairs for loss.

11
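
To make the pair-count arithmetic concrete, a quick sketch (N=128 and the embedding size are arbitrary):

```python
import torch

N = 128
z = torch.randn(2 * N, 64)          # one full batch of 2N embeddings
full = (z @ z.t()).numel()          # (2N)^2 = 4N^2 pairwise similarities

za, zb = z[:N], z[N:]               # two accumulated batches of N
split = (za @ za.t()).numel() + (zb @ zb.t()).numel()   # only 2N^2

print(full, split)                  # 65536 vs 32768: half the pairs are gone
```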

satireplusplus t1_j5v24u2 wrote

If you don't have 8 GPUs, you can always run the same computation 8x in series on one GPU, then merge the results the same way the parallel implementation would. In most cases that's probably going to end up being a form of gradient accumulation. Think of it this way: you compute your distances on a subset of size n, but since there are far fewer pairs of distances, the gradient will be noisy. So you run it a couple of times and average the result to get an approximation of the real thing. Very likely this is what the parallel implementation does too.

1
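
That serial averaging is plain gradient accumulation; a sketch, with `encoder`, `optimizer`, and `loader` as stand-ins. Note that each sub-batch only sees its own negatives, so this averages several noisy small-batch losses rather than reproducing the exact large-batch loss:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

accum_steps = 8                         # simulate 8 GPUs serially
optimizer.zero_grad()
for step, (x1, x2) in enumerate(loader):
    z1 = F.normalize(encoder(x1), dim=1)
    z2 = F.normalize(encoder(x2), dim=1)
    # negatives come only from this sub-batch: an approximation, not the
    # loss an 8x larger batch would give
    (info_nce(z1, z2) / accum_steps).backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```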

koolaidman123 t1_j5ujfpv wrote

contrastive methods require in-batch negatives, you can't replicate that with grad accumulation

7

shingekichan1996 OP t1_j5u22zn wrote

Curious about this; I have not read any related paper. What is its effect on performance (accuracy, etc.)?

1

mgwizdala t1_j5u2mgr wrote

It depends on the implementation. Naive gradient accumulation will probably give better results than small batches, but as u/RaptorDotCpp mentioned, if you rely on many negative samples inside one batch, it will still be worse than large-batch training.

There is also a cool paper about gradient caching, which solves this issue, though again at an additional cost in training speed: https://arxiv.org/pdf/2101.06983v2.pdf

1
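
Roughly, the linked gradient-caching approach does a first no-grad pass to build the full embedding set, differentiates the big-batch loss with respect to those cached embeddings, then re-encodes each micro-batch and chains the cached gradients through the encoder. A sketch under those assumptions (not the paper's reference code; `encoder` etc. are stand-ins):

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def grad_cached_step(encoder, optimizer, view1, view2, micro_bs=64):
    # Pass 1: no-grad forward to cache embeddings for the whole batch
    with torch.no_grad():
        z1 = torch.cat([F.normalize(encoder(x), dim=1)
                        for x in view1.split(micro_bs)])
        z2 = torch.cat([F.normalize(encoder(x), dim=1)
                        for x in view2.split(micro_bs)])
    # Differentiate the big-batch loss w.r.t. the cached embeddings only
    z1c, z2c = z1.clone().requires_grad_(), z2.clone().requires_grad_()
    loss = info_nce(z1c, z2c)
    loss.backward()
    # Pass 2: re-encode each micro-batch with grad enabled and chain the
    # cached embedding gradients into the encoder (the second forward pass
    # is the training-speed penalty mentioned above)
    optimizer.zero_grad()
    for x, g in zip(view1.split(micro_bs), z1c.grad.split(micro_bs)):
        F.normalize(encoder(x), dim=1).backward(gradient=g)
    for x, g in zip(view2.split(micro_bs), z2c.grad.split(micro_bs)):
        F.normalize(encoder(x), dim=1).backward(gradient=g)
    optimizer.step()
    return loss.item()
```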

Purple_noise_84 t1_j5tl26u wrote

How about MoCo v2? That should work on a single GPU.

6
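
The relevant trick in MoCo is the memory queue, which decouples the negative pool from the batch size. A bare-bones sketch of the idea (sizes are arbitrary, and the momentum-updated key encoder the method also uses is omitted):

```python
import torch
import torch.nn.functional as F

class NegativeQueue:
    """Keep the last `size` key embeddings as negatives, independent of batch size."""
    def __init__(self, dim=128, size=4096):
        self.bank = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, keys):            # keys: (B, D), B should divide size
        b = keys.size(0)
        self.bank[self.ptr:self.ptr + b] = keys
        self.ptr = (self.ptr + b) % self.bank.size(0)

def moco_loss(q, k, queue, temperature=0.07):
    # q: query embeddings (with grad), k: key embeddings (no grad)
    q, k = F.normalize(q, dim=1), F.normalize(k.detach(), dim=1)
    l_pos = (q * k).sum(1, keepdim=True)            # (B, 1) positives
    l_neg = q @ queue.bank.t()                      # (B, K) queue negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)          # positive is class 0
```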

IntelArtiGen t1_j5tijjx wrote

I managed to use SwAV on 1 GPU (8GB), batch size 240, 224x224 images, FP16, ResNet18.

Of course it works; the problem isn't just the batch size but the accuracy-vs-batch-size trade-off, and the accuracy was quite bad (still usable for my task though). If 50% top-5 on ImageNet is OK for you, you can do it. But I'm not sure there are many tasks where it makes sense.

Perhaps contrastive learning isn't the best for single GPU. I'm not sure about the current SOTA on this task.

3

shingekichan1996 OP t1_j5tiupg wrote

For 224x224 images, sure. But for larger images, for example satellite imagery, it is hard to fit a 200+ batch size on a single GPU.

1

shingekichan1996 OP t1_j5tjy44 wrote

I think single-GPU SSL contrastive learning is a research direction worth pursuing. I'm not sure if anyone has published papers on it, but if there are none, I'm surprised.

1

Irate_Librarian1503 t1_j5uaqwq wrote

Barlow Twins, maybe? Easy to implement and effective at small batch sizes.

3
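
For reference, the Barlow Twins objective has no in-batch negatives at all, it decorrelates embedding dimensions instead, which is why it tolerates small batches; a minimal sketch (lam follows the paper's reported setting):

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """z1, z2: (B, D) embeddings of two augmented views of the same batch."""
    b = z1.size(0)
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)    # standardize per dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.t() @ z2) / b                          # (D, D) cross-correlation
    on_diag = (c.diagonal() - 1).pow(2).sum()      # invariance: diagonal -> 1
    off_diag = (c - torch.diag(c.diagonal())).pow(2).sum()  # redundancy -> 0
    return on_diag + lam * off_diag
```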

squidward2022 t1_j5vmb95 wrote

(https://arxiv.org/pdf/2106.04156.pdf) This was a cool paper from NeurIPS 2021 which aimed to theoretically explain the success of CL by relating it to spectral clustering. They present a loss with a very similar form to InfoNCE, which they use for their theory. One of the upsides they found was that it worked well with small batch sizes.

(https://arxiv.org/abs/2110.06848) I skimmed this work a while back; one of their main claims is that this approach works with small batch sizes.

2

youngintegrator t1_j61dfqk wrote

Is there any reason you'd like a contrastive algorithm? (intra-class discrimination?)

Barlow Twins was shown to work quite well at small batch sizes (32), and HSIC-SSL is a nice variant on this style of learning if you only care about clusters. I'm sure SimSiam is fine too (avoid BYOL for small batches).

In terms of contrastive approaches, methods that avoid the "coupling" of the negative terms described in DCL will work with smaller batch sizes (contrastive estimates converge to the MLE assuming a large number of noise samples). This is seen in the spectral algorithm or in align-uniform. These work because they skip comparing representations that come from the same base sample. SwAV also does this via contrastive prototypes, which are basically free variables whose gradients don't conflict with any alignment goal. I think it's fair to say that algorithms with LSE (log-sum-exp) transforms are less stable at small batch sizes, since the gradients will be biased toward randomly coupled terms. With sufficiently many terms this coupling matters less.

From what I've noticed, methods that avoid comparing the augmented views of the same base sample require slightly more tuning to get things just right (align + weight * diversity).

Notes: NNCLR is nicer than MoCo imo. VICReg is good but a mess to finetune. I'm assuming you're using a CNN, so I've omitted transformer- and masking-based algorithms.

2
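
The "align + weight * diversity" decomposition above matches the alignment/uniformity formulation of Wang & Isola referenced in the comment; a compact sketch, with alpha and t at the commonly cited defaults:

```python
import torch

def align_loss(z1, z2, alpha=2):
    # pull the two augmented views of each sample together
    return (z1 - z2).norm(dim=1).pow(alpha).mean()

def uniform_loss(z, t=2):
    # push embeddings toward a uniform distribution on the hypersphere
    return torch.pdist(z, p=2).pow(2).mul(-t).exp().mean().log()

# total = align_loss(z1, z2) + weight * (uniform_loss(z1) + uniform_loss(z2)) / 2
```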

kdqg t1_j5xzfx4 wrote

VICReg

1