
uMar2020 t1_iwm19fq wrote

Say someone built/trained a neural network with PyTorch on their own machine (w/ GPUs etc.), but someone else with little/no programming experience and a low-end computer needs to use the final network (i.e., supply input, get output). What's the best way to package/ship the model for that person/case? How would one go about minimizing reliance on the original hardware/software environment?

1
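One common way to reduce dependence on the original environment is to export the trained model to TorchScript, which bundles the weights and the computation graph into a single file that can be loaded on a CPU-only machine without the original model's Python class definitions. A minimal sketch (the `Net` module and the `model.pt` filename are placeholders, not details from this thread):

```python
# Sketch: export a trained PyTorch model to TorchScript for CPU-only use.
import torch
import torch.nn as nn

# Stand-in for the actual trained model.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# --- On the original machine (with GPUs) ---
model = Net().eval()
scripted = torch.jit.script(model)   # or torch.jit.trace(model, example_input)
scripted.save("model.pt")            # single self-contained file to ship

# --- On the recipient's low-end machine (CPU only) ---
loaded = torch.jit.load("model.pt", map_location="cpu")
with torch.no_grad():
    out = loaded(torch.randn(1, 4))
print(out.shape)  # torch.Size([1, 2])
```

The recipient only needs a PyTorch install (no CUDA, no training code); wrapping this load-and-infer step in a small script or Docker image, as suggested below, would hide even that from a non-programmer.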

XGDragon t1_iwtot63 wrote

A Docker container, plus an external GPU platform such as Amazon's or another provider's.

1

uMar2020 t1_iwwjt7d wrote

Thank you! I’m aware of Docker, but there may be a slight learning curve for us in using it. Good idea on the external computing; maybe even Google Colab would work for us, since it’s a small research project. Is this a common scenario, and if so, is the method you mentioned standard/common practice?

1