Comments


kkchangisin t1_j5gcgbe wrote

Nice work! Triton already looks good, but have you tried optimizing with the Triton Model Analyzer?

https://github.com/triton-inference-server/model_analyzer

For the various models I run on Triton, I've found that the output model formats and configurations it recommends can drastically increase performance, whether that's throughput, latency, or something else.
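
For reference, a profiling run looks roughly like this (a minimal sketch; the repository path and model name are placeholders, not from the post):

```
# Sweep batch sizes, instance counts, and dynamic batching settings,
# then write the best-performing configs to an output repository.
model-analyzer profile \
    --model-repository /path/to/model_repository \
    --profile-models my_model \
    --output-model-repository-path /path/to/output_repository
```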

Hopefully I get some time soon to try it out myself!

Again, nice work!


NovaBom8 t1_j5h30af wrote

Very cool, great work!!

In the context of running .pt files (or any other device-agnostic format), I’m guessing dynamic batching is the reason for Triton’s superior throughput?


kkchangisin t1_j5ijvdy wrote

Looking at the model configs in the repo, there’s definitely dynamic batching going on.

What’s really interesting is that even with the default dynamic batching parameters, the response times are both lower and very consistent.
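
For anyone curious, enabling dynamic batching with all defaults is just an empty block in the model’s config.pbtxt. The tuned variant below (with preferred_batch_size and max_queue_delay_microseconds) is a hypothetical illustration, not the actual config from the repo:

```
# Defaults: Triton picks batch sizes and queue delay on its own
dynamic_batching { }

# Hypothetical tuned alternative: prefer batches of 4 or 8, and wait
# up to 100 microseconds for more requests before forming a batch
dynamic_batching {
  preferred_batch_size: [ 4, 8 ]
  max_queue_delay_microseconds: 100
}
```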
