Submitted by CS-fan-101 t3_11xskuk in MachineLearning

Hey everyone!

Cerebras is excited to share that our sparsity paper is now available on arXiv and has been accepted into the ICLR 2023 Sparsity in Neural Networks workshop!

This research demonstrates the ability to pre-train large GPT models with high levels of sparsity followed by dense fine-tuning to maintain accuracy on downstream tasks.

We achieved this using Cerebras CS-2, a system that accelerates unstructured sparsity and allows exploration of machine learning techniques at a larger scale than previously possible.

We used simple, static sparsity and evaluated model sizes up to GPT-3 XL with 1.3B parameters. We were able to pre-train GPT-3 XL with up to 75% unstructured sparsity and 60% fewer training FLOPs on the Cerebras CS-2. These findings show the promise of sparse training and motivate exploration of more advanced sparse techniques for even larger models.
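As a rough back-of-envelope (illustrative only, not the paper's exact accounting): if most of the training FLOPs sit in the weight matmuls being sparsified and the rest stays dense, 75% weight sparsity works out to roughly 60% fewer FLOPs overall. The 80% matmul share below is an assumed number chosen to make the arithmetic concrete.

```python
# Rough back-of-envelope, not from the paper: how 75% weight sparsity
# can translate into ~60% fewer training FLOPs if only the weight
# matmuls are sparsified and the remaining ops (attention scores,
# softmax, etc.) stay dense. All fractions are illustrative assumptions.

weight_matmul_fraction = 0.8   # assumed share of training FLOPs in weight matmuls
weight_sparsity = 0.75         # fraction of weights pruned (zeroed)

dense_flops = 1.0
sparse_flops = (weight_matmul_fraction * (1 - weight_sparsity)
                + (1 - weight_matmul_fraction))
savings = dense_flops - sparse_flops
print(f"approximate FLOP savings: {savings:.0%}")  # -> 60%
```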

This is the first time a large GPT model has been pre-trained with high sparsity without significant loss in downstream task metrics, and the results are exciting for the industry because they offer a fundamental way to reduce the compute required to train these models.

47

Comments


Carrasco_Santo t1_jd4wigs wrote

I like seeing all these advances that keep making machine learning more efficient. Things will be very interesting in 10 years (and that's being pessimistic), and I sincerely hope neuromorphic processors leave the laboratory and become real; that would advance the field even further.

7

brownmamba94 t1_jd6zyd5 wrote

I totally agree, and I really wonder how the landscape will look in 10 years when it comes to ML model architectures, training strategies, optimization techniques, etc. It'll be very interesting. Although plasticity-based learning, spiking neural networks, and other neuromorphic algorithms that use local learning rules don't get the same kind of attention as gradient-based learning, I do believe mimicking the neural activity of the brain through emulating spiking neural networks could one day be a good solution for inference (in terms of cost and power efficiency). Currently, though, implementing spike-based learning and training has proven to be a challenge. But hey, one thing they have in common is that sparsity is a key enabler for these types of hardware.

3

Carrasco_Santo t1_jd8e34b wrote

Imagine the situation where, after so many studies, some international team manages to optimize the functioning of artificial neurons to the point where they are more efficient than biological neurons. We would automatically be outclassed.

And this is possible. Scientists around the world have studied ways to optimize natural processes for some purpose, for example, reducing the number of steps photosynthesis needs to produce sugar, making the process faster and more economical. The same may happen with the functioning of neurons and their capabilities.

2

brownmamba94 t1_jdawqp9 wrote

That's a pretty interesting thought... it reminds me of this research from MIT that came out last summer, asking "how computationally complex is a single neuron?" Work like this can potentially help advance the field of analog deep learning. I think sparsity will play a role here at both the connection level and the neuron level, potentially further reducing energy consumption and allowing for better resource utilization.

1

kilow4tt t1_jd68jnd wrote

Was there any effort to go from 75% sparsity during pre-training to a lower sparsity level (e.g. 25%) during fine-tuning, rather than strictly going from 75% sparsity to 0%?

6

brownmamba94 t1_jd6j6pt wrote

Hi, this is the first author on the paper. You asked a great question and it's something we are pursuing internally. In this study we kept things simple and switched from sparse to completely dense during fine-tuning. But as for future work, you're right, we can certainly vary the amount of "redensification" as well (e.g., 25%, 50%, or possibly some schedule). This is a very interesting research direction, because the full dense capacity of the model may not be needed to recover performance on the downstream task.
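Just to sketch what a partial redensification step might look like (purely illustrative, not code from the paper; the function name and the random choice of which connections to revive are assumptions): start from the static 75% mask and re-enable pruned connections until the mask reaches the target sparsity before fine-tuning.

```python
import torch

def redensify(mask: torch.Tensor, target_sparsity: float) -> torch.Tensor:
    """Re-enable randomly chosen pruned connections so the binary mask
    ends up at `target_sparsity` (e.g. relax 75% -> 25% before fine-tuning)."""
    flat = mask.flatten().clone()
    pruned = (flat == 0).nonzero(as_tuple=True)[0]     # indices of pruned weights
    n_total = flat.numel()
    n_keep_pruned = int(target_sparsity * n_total)     # zeros we still want
    n_revive = pruned.numel() - n_keep_pruned          # zeros to flip back on
    if n_revive > 0:
        revive_idx = pruned[torch.randperm(pruned.numel())[:n_revive]]
        flat[revive_idx] = 1.0
    return flat.view_as(mask)

# Example: a ~75%-sparse mask relaxed to 25% sparsity for fine-tuning.
mask = (torch.rand(2048, 2048) > 0.75).float()   # ~75% zeros
relaxed = redensify(mask, target_sparsity=0.25)
```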

9

_Arsenie_Boca_ t1_jd6u2my wrote

First time I hear of sparse pretraining and dense finetuning. Usually it's the other way around, right? So that you get faster inference speeds. Is it correct that you are aiming for faster pretraining through sparsity here, while having normal dense inference speeds?

Also, could you provide an intuition on how Cerebras is able to translate unstructured sparsity to speedups? Since you pretrained a 1.3B model, I assume it runs on GPU, unlike DeepSparse?

3

brownmamba94 t1_jd6xm1n wrote

Yes, that's right, usually it's the other way around, and that's because for the average researcher it's computationally expensive to pre-train an LLM from scratch. So, they typically take existing pre-trained LLM checkpoints and fine-tune them on a domain-specific task. Pre-training requires several orders of magnitude more FLOPs than fine-tuning.

In this work, like you said, we're aiming to show that thanks to the Cerebras CS-2, we can achieve faster pre-training with unstructured weight sparsity, and fine-tune dense to recover the performance on the downstream task. The ability to do faster pre-training opens up a lot of potential for new directions in LLM research. Note that an interesting extension of our work is to do sparse pre-training followed by parameter efficient fine-tuning using techniques like LoRA from Microsoft.

There are actually a couple of really nice blogs from Sean Lie, our Co-founder and Chief Hardware Architect, discussing how the Cerebras CS-2 can translate unstructured sparsity into realized gains, unlike traditional GPUs. All the experiments in our paper were done on the CS-2, including the 1.3B GPT-3 XL; there was no GPU training here. I encourage you to check out these blogs:

- Harnessing the Power of Sparsity for Large GPT AI Models
- Cerebras Architecture Deep Dive: First Look Inside the HW/SW Co-Design for Deep Learning

4

geneing t1_jd6016n wrote

Is this a workaround for the weird Cerebras chip architecture? Would mainstream users who train on GPU benefit?

2

CS-fan-101 OP t1_jd649ko wrote

I wouldn't call it a workaround but rather an advantage.

Neural network models are made up of layers of neurons and connections between them. When there are missing connections, represented as zeros in the weight matrices, we refer to the model as sparse.

Sparsity comes in different forms. It is common for sparsity to occur naturally in the model structure itself if the pattern of connections is designed to only connect a subset of the neurons. Often, models are constructed this way intentionally with a predefined pattern and we refer to this as structured sparsity.

It turns out that even fully dense models, such as GPT, can be made sparse by inducing unstructured sparsity. In this form of sparsity, certain weights are set to zero, which effectively prunes the connections within the model. When the pruning is done without a fixed pattern, we refer to this as unstructured sparsity.
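As a minimal sketch of what inducing unstructured sparsity looks like (not Cerebras code; magnitude pruning is used here just as one simple way to pick which connections to drop, while the paper uses simple static sparsity):

```python
import torch

def unstructured_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude,
    with no constraint on where the zeros fall (unstructured sparsity)."""
    k = int(sparsity * weight.numel())
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

w = torch.randn(2048, 2048)
mask = unstructured_mask(w, sparsity=0.75)   # ~75% of entries become zero
w_sparse = w * mask                          # pruned connections are exact zeros
```

The model architecture is unchanged; only the weight values are masked, which is why the original baseline structure is retained.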

A key benefit of unstructured sparsity is that the model retains the original baseline structure, without the need to create a new model architecture. Additionally, the sparse model can provide speedup in both training and inference.

The Cerebras CS-2 is designed to accelerate unstructured sparsity, whereas GPUs are not.

If you are interested in learning more, please check out our blog - https://www.cerebras.net/blog/harnessing-the-power-of-sparsity-for-large-gpt-ai-models

8

maizeq t1_jd6kpnj wrote

> The Cerebras CS-2 is designed to accelerate unstructured sparsity, whereas GPUs are not.

Don't modern NVIDIA GPUs (2000 series+) have strong support for sparsity (maximum theoretical FLOPS are doubled when doing sparse computation)? From their documentation, the type of sparsity they support is also unstructured (e.g. randomly pruned values in tensors). Does the Cerebras chip have higher sparse FLOPS, or does the comparison not make sense?

5

artsybashev t1_jd6l85h wrote

Nvidia has structured sparsity.

3

maizeq t1_jd6u4kb wrote

The sparsity they describe in this link entails randomly pruning weights (i.e. not specific channels like depthwise convolutions), which is what Graphcore is calling "unstructured".

3

osdd_alt_123 t1_jd6ufjz wrote

Nvidia has 2:4 structured sparsity in the Ampere architecture and one or two below as well, if memory serves.

So in a block of 4, you have to have 2 dropped and 2 retained. It's how they claim their 2x throughput at the hardware level.

You can, however, emulate sparsity in a variety of other ways at levels above the hardware. Hope this helps.
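A toy sketch of that 2:4 constraint (assumed helper name, not Nvidia's API; keeping the 2 largest magnitudes per block is just one common way to choose which 2 survive):

```python
import torch

def mask_2_to_4(weight: torch.Tensor) -> torch.Tensor:
    """Toy illustration of 2:4 structured sparsity: in every group of 4
    consecutive weights along the last dim, keep the 2 largest magnitudes
    and drop the other 2."""
    w = weight.reshape(-1, 4)                       # blocks of 4
    idx = w.abs().topk(2, dim=1).indices            # 2 survivors per block
    mask = torch.zeros_like(w).scatter_(1, idx, 1.0)
    return mask.reshape(weight.shape)

w = torch.randn(8, 16)
m = mask_2_to_4(w)
assert (m.reshape(-1, 4).sum(dim=1) == 2).all()     # exactly 2 kept per block of 4
```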

5

maizeq t1_jd76a7x wrote

Ah I see, thank you for the clarification.

2

brownmamba94 t1_jd8lqry wrote

Also, the N:M sparsity structure is much more constrained in terms of mask diversity compared to unstructured sparsity. In Table 1 of the N:M Transposable sparsity paper, they present the mask diversity of different sparsity techniques (both unstructured and structured), and as expected unstructured sparsity achieves the highest. I think this is especially important for dynamic sparse training, because the algorithm then has a much larger search space in which to explore sparse subnetworks. Also, imposing structured sparsity like N:M sparsity tends to reduce the expressivity of a weight matrix at higher sparsity levels, which can be a constraint if you want to get high compression ratios.
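A quick illustration of the mask-diversity gap at 50% density (my own toy numbers on a tiny tile, not the figures from that paper's Table 1):

```python
from math import comb

n = 16                                   # weights in a small tile

unstructured = comb(n, n // 2)           # any 8 of the 16 weights may survive
two_to_four = 6 ** (n // 4)              # each block of 4 allows C(4,2) = 6 patterns

print(unstructured)   # 12870 possible masks
print(two_to_four)    # 1296 possible masks
```

The gap between the two counts grows rapidly with the number of weights, which is why the unstructured search space is so much richer for dynamic sparse training.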

3