Sylv__ t1_j2yrga7 wrote

Well, you can always debug / try out quantization configs with fake quantization on GPU, and once one is good enough for you, move to TensorRT, although AFAIK quantization support in TRT is quite limited. Of course, fake quantization only lets you benchmark configs for prediction quality, not speedup.
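
If it helps, here is a minimal sketch of the fake-quantization idea with PyTorch's built-in op (per-tensor symmetric int8; the scale choice here is just illustrative):

```python
import torch

# Simulate int8 quantization on GPU: values snap to the int8 grid but stay
# in fp32, so you can measure the accuracy hit of a scale/zero-point config
# without real int8 kernels (hence no speedup).
w = torch.randn(256, 256, device="cuda")
scale = (w.abs().max() / 127).item()  # simple per-tensor symmetric scale
w_fq = torch.fake_quantize_per_tensor_affine(
    w, scale=scale, zero_point=0, quant_min=-128, quant_max=127
)
print("max quantization error:", (w - w_fq).abs().max().item())
```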

Maybe there will be support for quantized kernels in torchinductor? I recall reading about this in a GitHub issue at some point.

Otherwise you could try bitsandbytes and pass the right argument to do all computations in 8-bit.
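
Something like this, if I remember the bitsandbytes API right (Linear8bitLt with has_fp16_weights=False is the LLM.int8() path; treat this as a sketch and check their docs for the exact recipe):

```python
import torch
import bitsandbytes as bnb

# Swap an fp16 Linear for bitsandbytes' 8-bit one. With
# has_fp16_weights=False the matmul itself runs in int8, with outlier
# features above `threshold` handled in fp16.
fp16_linear = torch.nn.Linear(1024, 1024).half()
int8_linear = bnb.nn.Linear8bitLt(
    1024, 1024, has_fp16_weights=False, threshold=6.0
)
int8_linear.weight = bnb.nn.Int8Params(
    fp16_linear.weight.data, requires_grad=False, has_fp16_weights=False
)
int8_linear.bias = fp16_linear.bias
int8_linear = int8_linear.cuda()  # weights get quantized on the move to GPU

x = torch.randn(1, 1024, dtype=torch.float16, device="cuda")
out = int8_linear(x)
```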

The authors of SmoothQuant also implemented torch-int, a wrapper around CUTLASS for int8 GEMM. You can find it on GitHub!
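
Usage looks roughly like this, from memory (the class name and from_float signature may differ, so double-check against the repo; the scales are made-up placeholders you would normally get from calibration):

```python
import torch
from torch_int.nn.linear import W8A8B8O8Linear  # name as I recall it from the repo

# Rough sketch: convert an fp32 Linear to torch-int's int8 GEMM layer
# (CUTLASS underneath). Scales are hypothetical; in practice they come
# from calibration, e.g. SmoothQuant's activation statistics.
fp32_linear = torch.nn.Linear(1024, 1024)
input_scale, output_scale = 0.02, 0.02  # hypothetical calibration values
int8_linear = W8A8B8O8Linear.from_float(
    fp32_linear, input_scale, output_scale
).cuda()

x = torch.randint(-128, 128, (4, 1024), dtype=torch.int8, device="cuda")
y = int8_linear(x)  # int8 in, int8 out
```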
