_Arsenie_Boca_

_Arsenie_Boca_ t1_jd6u2my wrote

This is the first time I've heard of sparse pretraining and dense finetuning. Usually it's the other way around, right, so that you get faster inference speeds? Is it correct that you are aiming for faster pretraining through sparsity here, while keeping normal dense inference speeds?

Also, could you provide an intuition for how Cerebras is able to translate unstructured sparsity into speedups? Since you pretrained a 1.3B model, I assume it runs on GPU, unlike DeepSparse?
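
To make sure I'm picturing the setup correctly, here is a minimal sketch of what I understand by sparse pretraining followed by dense finetuning: a fixed unstructured mask is applied to the weights during pretraining and then simply dropped for finetuning. This is purely my own illustration (the `MaskedLinear` class and the 75% sparsity level are made up), not how Cerebras actually implements it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical illustration of sparse pretraining -> dense finetuning:
# a fixed unstructured binary mask zeroes most weights during pretraining,
# and is dropped when switching to dense finetuning.
class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features, sparsity=0.75):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Random unstructured mask: a fraction `sparsity` of the weights is zeroed.
        mask = (torch.rand(out_features, in_features) > sparsity).float()
        self.register_buffer("mask", mask)
        self.dense = False  # flip to True for dense finetuning

    def forward(self, x):
        weight = self.linear.weight if self.dense else self.linear.weight * self.mask
        return F.linear(x, weight, self.linear.bias)

layer = MaskedLinear(1024, 1024, sparsity=0.75)
x = torch.randn(8, 1024)
y_sparse = layer(x)   # pretraining: masked (sparse) weights
layer.dense = True
y_dense = layer(x)    # finetuning: all weights active
```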

3

_Arsenie_Boca_ t1_jb1wjfi wrote

It does help, but it certainly doesn't make everything clear. I am confident I could run inference with it, but my interest is more academic than practical.

What is the magic number 5 all about? It seems to appear all over the code without explanation.

Are the time-mixing and channel-mixing operations novel, or were they introduced by a citable work?

How does the parallelization during training work?
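
For reference, this is the interpolation that both mixing blocks seem to share, as I currently read the code (simplified and possibly wrong, so please correct me): each position blends its own representation with the previous token's via a learned per-channel coefficient.

```python
import torch
import torch.nn as nn

# My paraphrase of the shared "token shift" interpolation; treat this as my
# possibly-wrong reading of the code rather than a reference implementation.
class TokenShiftMix(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.time_mix_k = nn.Parameter(torch.ones(1, 1, dim) * 0.5)
        self.shift = nn.ZeroPad2d((0, 0, 1, -1))  # shift the sequence right by one step

    def forward(self, x):           # x: (batch, seq_len, dim)
        x_prev = self.shift(x)      # each position sees the previous token
        # learned per-channel interpolation between current and previous token
        return x * self.time_mix_k + x_prev * (1 - self.time_mix_k)

mix = TokenShiftMix(dim=512)
x = torch.randn(2, 16, 512)
print(mix(x).shape)  # torch.Size([2, 16, 512])
```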

5

_Arsenie_Boca_ OP t1_j9gix7q wrote

If I understand you correctly, that would mean bottlenecks are only interesting when

a) you further use the lower-dimensional features as output, as in autoencoders, or
b) you are interested in knowing whether your features have a lower intrinsic dimension.

Neither holds in many cases, such as standard ResNets. Could you elaborate on how you believe bottlenecks act as regularizers?
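
For concreteness, the kind of bottleneck I have in mind is the standard ResNet-style block, where channels are reduced, processed, and expanded again, so the low-dimensional features are purely internal and never exposed as an output. A simplified sketch (channel counts are arbitrary):

```python
import torch
import torch.nn as nn

# Simplified ResNet-style bottleneck block: channels are reduced (the
# "bottleneck"), processed, then expanded back, so the low-dimensional
# features only ever live inside the block.
class Bottleneck(nn.Module):
    def __init__(self, channels=256, reduced=64):
        super().__init__()
        self.reduce = nn.Conv2d(channels, reduced, kernel_size=1)
        self.conv = nn.Conv2d(reduced, reduced, kernel_size=3, padding=1)
        self.expand = nn.Conv2d(reduced, channels, kernel_size=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.reduce(x))
        out = self.relu(self.conv(out))
        out = self.expand(out)
        return self.relu(out + x)  # residual connection

block = Bottleneck()
x = torch.randn(1, 256, 32, 32)
print(block(x).shape)  # torch.Size([1, 256, 32, 32])
```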

2

_Arsenie_Boca_ OP t1_j7miglb wrote

Thanks, good pointer. I am particularly interested in the different mechanisms by which the embeddings might be integrated into LMs. E.g., in PaLI and SimVLM, the external embeddings (here, image encodings) are simply treated as token embeddings. Others use modified attention mechanisms to potentially make better use of the information. Are you aware of any work that directly compares multiple integration mechanisms?
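
To illustrate the first, simpler mechanism I mean: the image encodings are just projected to the LM's hidden size and prepended to the text token embeddings, so the LM treats them like ordinary tokens. A rough sketch with made-up shapes (not taken from either paper's code):

```python
import torch
import torch.nn as nn

# "Treat external embeddings as token embeddings": project image encodings to
# the LM's hidden size and prepend them to the text embeddings. All names and
# shapes here are invented for illustration.
hidden_size, vocab_size = 768, 32000
token_embedding = nn.Embedding(vocab_size, hidden_size)
image_projection = nn.Linear(1024, hidden_size)  # image encoder dim -> LM dim

image_encodings = torch.randn(1, 16, 1024)       # e.g. 16 patch embeddings
text_ids = torch.randint(0, vocab_size, (1, 10))

image_tokens = image_projection(image_encodings)      # (1, 16, 768)
text_tokens = token_embedding(text_ids)               # (1, 10, 768)
lm_input = torch.cat([image_tokens, text_tokens], 1)  # (1, 26, 768) -> fed to the LM
print(lm_input.shape)
```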

1