**big_dog_2k**
OP
t1_iu6yjf3 wrote

Reply to comment by **yubozhao** in **[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?** by **big_dog_2k**

Hi! Can you give the elevator pitch for Bento? When should I use it, and which parts of my model-serving problem will it solve? If it integrates with another serving solution, how much complexity does that add, and how are you thinking about deployment?
