Submitted by fxmarty t3_z1titt in MachineLearning
fxmarty OP t1_ixgnwin wrote
Reply to comment by Lewba in [P] BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models by fxmarty
Unfortunately, the ONNX export will not work with BetterTransformer. It's a bit unfortunate that model optimization / compression efforts are spread across different (and sometimes incompatible) tools, but then again, different use cases call for different tooling.
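For context, BetterTransformer works by routing inference through PyTorch's native fused "fastpath" in `nn.TransformerEncoderLayer`, which is why the swapped modules don't trace cleanly for ONNX export. A minimal torch-only sketch of that fastpath (illustrative only, not the `optimum` BetterTransformer API itself):

```python
import torch
import torch.nn as nn

# A plain encoder layer; the fused fastpath kernel is used automatically
# when the layer is in eval mode and run under inference mode.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
layer.eval()

x = torch.randn(2, 16, 64)  # (batch, sequence, embedding)
with torch.inference_mode():
    out = layer(x)

print(tuple(out.shape))  # (2, 16, 64)
```

Because this fastpath fuses several ops into a single kernel at runtime, the exported ONNX graph can't represent it the same way, which is the incompatibility being discussed.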
Lewba t1_ixgp0jc wrote
Understandable. I'll just have to choose between ORT optimized and BetterTransformer in the meantime.