Submitted by Singularian2501 t3_z1b2rp in MachineLearning

Paper: https://arxiv.org/abs/2211.10438

Github: https://github.com/mit-han-lab/smoothquant

Abstract:

Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, for LLMs beyond 100 billion parameters, existing methods cannot maintain accuracy or do not run efficiently on hardware. We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs that can be implemented efficiently. We observe that systematic outliers appear at fixed activation channels. Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation. SmoothQuant enables an INT8 quantization of both weights and activations for all the GEMMs in LLMs, including OPT-175B, BLOOM-176B and GLM-130B. SmoothQuant has better hardware efficiency than existing techniques using mixed-precision activation quantization or weight-only quantization. We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy. Thanks to the hardware-friendly design, we integrate SmoothQuant into FasterTransformer, a state-of-the-art LLM serving framework, and achieve faster inference speed with half the number of GPUs compared to FP16. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs. Code will be released at: https://github.com/mit-han-lab/smoothquant in ~2 weeks.
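To make the "migrate quantization difficulty from activations to weights" idea concrete, here is a minimal sketch (not the authors' released code) of the mathematically equivalent rescaling Y = (X diag(s)^-1)(diag(s) W), using the per-channel factor s_j = max|X_j|^alpha / max|W_j|^(1-alpha) with migration strength alpha as described in the paper:

```python
import torch

def smooth_scales(act_absmax, weight, alpha=0.5, eps=1e-5):
    """Per-input-channel smoothing factors s_j = max|X_j|^alpha / max|W_j|^(1-alpha)."""
    # act_absmax: [in_features], per-channel max |activation| from a calibration run
    # weight:     [out_features, in_features], e.g. a torch.nn.Linear weight
    w_absmax = weight.abs().amax(dim=0)  # per input channel
    s = act_absmax.clamp(min=eps).pow(alpha) / w_absmax.clamp(min=eps).pow(1 - alpha)
    return s

def apply_smoothing(x, weight, s):
    """Equivalent reparameterization: Y = (X / s) @ (s * W)^T."""
    x_smooth = x / s                # activation outliers are scaled down
    w_smooth = weight * s           # and absorbed into the weights
    return x_smooth @ w_smooth.t()  # identical output up to float error

# Quick check that the transformation itself changes nothing in float:
x = torch.randn(4, 8)
w = torch.randn(16, 8)
s = smooth_scales(x.abs().amax(dim=0), w, alpha=0.5)
assert torch.allclose(x @ w.t(), apply_smoothing(x, w, s), atol=1e-5)
```

In the actual method the rescaled activations and weights are then quantized to INT8; the float check above only verifies that the reparameterization preserves the output.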



Comments


Acceptable-Cress-374 t1_ixbzdfe wrote

Would this mean it could become feasible to run GPT-NeoX inference on a 3090/4090 with 24 GB of VRAM? That would be huge!
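For rough intuition, a back-of-envelope weight-memory estimate (parameter count rounded to ~20B; KV cache and activation memory are ignored, so this is optimistic):

```python
# Back-of-envelope weight memory for GPT-NeoX-20B (~20.6e9 parameters assumed;
# KV cache and activations would add to this at long sequence lengths).
params = 20.6e9
for name, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    gib = params * bytes_per_param / 2**30
    print(f"{name}: ~{gib:.0f} GiB of weights")
# FP32: ~77 GiB, FP16: ~38 GiB, INT8: ~19 GiB -> only INT8 fits under 24 GB
```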


singularperturbation t1_ixat83k wrote

This is highly relevant to my work; I'm very excited about this!

(Ah n/m I assumed the submitter was one of the authors.)

I saw that you've uploaded activation scales (Equation 4) for a number of models, but when computing these for a new model, how is the calibration dataset used? Do you take the maximum across all calibration samples, or compute the maximum for each sample individually and then average? I see that

> Code will be released at: https://github.com/mit-han-lab/smoothquant in ~2 weeks.

so I guess I may just need to be patient until this is released lol.
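In the meantime, here is a guess at what such a calibration pass could look like: hooks on every nn.Linear input collecting a running per-channel max of |x| over the calibration set (whether the released code uses a running max or averages per-sample maxima is exactly the open question above):

```python
import torch

@torch.no_grad()
def collect_act_scales(model, calib_loader, device="cuda"):
    """Running per-input-channel max of |activation| at the input of every nn.Linear.

    This is a sketch/assumption, not the released SmoothQuant code: it keeps a
    running max over all calibration samples rather than averaging per-sample maxima.
    """
    scales, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            # Flatten batch/sequence dims so we reduce over tokens per channel.
            x = inputs[0].detach().reshape(-1, inputs[0].shape[-1])  # [tokens, channels]
            cur = x.abs().amax(dim=0).float().cpu()
            scales[name] = torch.maximum(scales[name], cur) if name in scales else cur
        return hook

    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Linear):
            hooks.append(module.register_forward_hook(make_hook(name)))

    model.eval().to(device)
    for batch in calib_loader:      # assumes each batch is a tensor of input ids
        model(batch.to(device))

    for h in hooks:
        h.remove()
    return scales
```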


zaptrem t1_ixbniz5 wrote

Could these advancements in quantization be used to train in int8 as well?


CommunismDoesntWork t1_ixcqvfr wrote

What's the theory behind PTQ? As in, if quantization can preserve accuracy and give a massive speedup, why wouldn't you train in int8 to begin with? Speeding up training would let you use even more parameters, or cut costs.


diviramon t1_iydkhtc wrote

Quantization only really works for inference. During training, the gradients are very sensitive to numerical precision, so FP32 is needed to compute them and for training to converge. I haven't seen much training done in INT8.
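A toy illustration of that sensitivity argument (not from the paper, just a made-up example): if weights live on a coarse INT8-style grid, any update smaller than the grid step rounds away to nothing, whereas an FP32 master copy accumulates it:

```python
step = 0.05            # pretend quantization step of an INT8-style weight grid
lr_times_grad = 0.004  # typical tiny per-step update, well below the grid step

w_fp32 = 1.0           # FP32 master weight accumulates every update
w_int8_like = 1.0      # "quantized" weight snaps back to the grid each step

for _ in range(100):
    w_fp32 += lr_times_grad
    w_int8_like = round((w_int8_like + lr_times_grad) / step) * step

print(w_fp32)       # 1.4 -> 100 small updates accumulated
print(w_int8_like)  # 1.0 -> every update was rounded away, the weight never moved
```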


CommunismDoesntWork t1_iydruw8 wrote

Has anyone checked to see if training fundamentally needs all that precision? Intuitively, I can understand why it works better that way, but if a model can be converted to int8 after the fact without taking a huge hit in accuracy, then I don't see why an optimizer couldn't find that int8 network in the first place.


diviramon t1_iydw5aq wrote

Yeah, a quick search showed some attempts on ResNet-50 and MobileNet, but nothing on transformers (not surprising, since INT8 quantization for BERT is very hard). However, it seems like all the INT8 focus is shifting towards FP8, which should be more suitable for training as well.


PaulTheBully t1_ixbuwat wrote

Is it applicable only to LLMs, or to any transformer architecture? (I'm sorry if my question is stupid, I'm new to the field.)
