Submitted by Singularian2501 t3_z1b2rp in MachineLearning
CommunismDoesntWork t1_iydruw8 wrote
Reply to comment by diviramon in [R] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models - Massachusetts Institute of Technology and NVIDIA Guangxuan Xiao et al - Enables INT8 for LLM bigger than 100B parameters including OPT-175B, BLOOM-176B and GLM-130B. by Singularian2501
Has anyone checked to see if training fundamentally needs all that precision? Intuitively, I can understand why it works better that way, but if a model can be converted to int8 after the fact without taking a huge hit in accuracy, then I don't see why an optimizer couldn't find that int8 network in the first place.
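For reference, the "convert to int8 after the fact" step I mean looks roughly like the sketch below, using PyTorch's dynamic post-training quantization. The toy model and layer sizes are made up for illustration; what SmoothQuant does for 100B+ models is considerably more involved.

```python
# Minimal sketch of post-training dynamic INT8 quantization in PyTorch.
# Toy model only; real LLM quantization (e.g. SmoothQuant) is more involved.
import torch
import torch.nn as nn

# Stand-in for a trained FP32 network.
model_fp32 = nn.Sequential(
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
)

# Quantize the Linear weights to INT8 after training;
# activations are quantized dynamically at inference time.
model_int8 = torch.ao.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(4, 512)
# Rough look at how much the outputs move after quantization.
print((model_fp32(x) - model_int8(x)).abs().max())
```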
diviramon t1_iydw5aq wrote
Yeah - a quick search showed some attempts at low-precision training on RN50 and MobileNet, but nothing on transformers (not surprising, since INT8 quantization for BERT is very hard). However, it seems like all the INT8 focus is shifting towards MF8 (edit: FP8), which should be more suitable for training as well.
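(For the curious: those low-precision training attempts generally simulate INT8 during training with "fake quantization" plus a straight-through estimator, roughly like the generic sketch below. This is an illustration of the trick, not the exact recipe from any particular paper.)

```python
# Rough sketch of "fake" INT8 quantization with a straight-through estimator,
# the usual trick behind INT8-aware training experiments. Scale handling and
# ranges are simplified for illustration.
import torch

def fake_quant_int8(x: torch.Tensor) -> torch.Tensor:
    # Symmetric per-tensor quantization to the int8 range [-127, 127].
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127) * scale
    # Straight-through estimator: forward uses the quantized values,
    # backward treats the quantizer as the identity.
    return x + (q - x).detach()

w = torch.randn(256, 256, requires_grad=True)
loss = fake_quant_int8(w).sum()
loss.backward()  # gradients flow to w despite the rounding in the forward pass
print(w.grad.abs().mean())
```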
CommunismDoesntWork t1_iyecw45 wrote
> MF8
I've never heard of this and Google isn't being helpful. Any links?
diviramon t1_iyejg7z wrote
It's the new NVIDIA FP8 data type introduced with the Hopper architecture: https://developer.nvidia.com/blog/nvidia-hopper-architecture-in-depth/
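A quick way to poke at those formats, assuming a recent PyTorch build that exposes the Hopper-style FP8 dtypes (torch.float8_e4m3fn and torch.float8_e5m2): the sketch below just round-trips a tensor to see the precision loss, it is not FP8 training.

```python
# Round-trip a tensor through the two FP8 formats described in the Hopper blog
# post (E4M3: more precision, E5M2: more range). Requires a PyTorch build that
# exposes these dtypes; this only shows the rounding error, not FP8 training.
import torch

x = torch.randn(8) * 10
for dt in (torch.float8_e4m3fn, torch.float8_e5m2):
    x8 = x.to(dt).to(torch.float32)  # cast to FP8, then back to FP32
    print(dt, (x - x8).abs().max().item())
```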