Purple_noise_84 t1_itgynxx wrote

A few more years and it will be as good as PyTorch is today :)

22

BITE_AU_CHOCOLAT t1_ithud6t wrote

Eh... I'm currently training a model with 700M parameters (most of which sit in the input embeddings, not so much the hidden layers themselves), and PyTorch pretty much required at least 50GB per GPU, while TensorFlow was happy to train on 3090s, which were way, wayyyy cheaper to rent than A6000s, even though PyTorch managed better GPU utilization. So I think I'm just gonna stick with TF/Keras and TFLite for now.

−8
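A possible explanation for the memory gap described above (an assumption, not something the thread confirms): Keras propagates sparse gradients (IndexedSlices) through embedding lookups by default, while PyTorch's nn.Embedding defaults to dense gradients, so with Adam a huge table drags along a full-size gradient plus two full-size optimizer state tensors. A minimal sketch, with illustrative sizes:

```python
import torch
import torch.nn as nn

vocab, dim = 5_000_000, 128         # ~640M params, ~2.5 GB in fp32 (illustrative)
emb = nn.Embedding(vocab, dim)      # sparse=False by default

# With dense gradients, one backward pass materializes a full vocab x dim
# gradient tensor, and Adam keeps two state tensors of the same shape,
# so the table effectively costs ~4x its own size.
opt = torch.optim.Adam(emb.parameters())

# Opting into sparse gradients only touches the rows actually looked up,
# but requires an optimizer that accepts sparse grads, e.g. SparseAdam.
emb_sparse = nn.Embedding(vocab, dim, sparse=True)
opt_sparse = torch.optim.SparseAdam(emb_sparse.parameters())
```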

learn-deeply t1_itikofd wrote

PyTorch doesn't inherently use more or less memory than TensorFlow; there's a bug in your code (a common one is sketched below). If it's easier to switch frameworks than to debug, more power to you.

14
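For context, a minimal sketch of the kind of bug learn-deeply may have in mind (hypothetical, not the OP's actual code): accumulating a loss tensor instead of a plain Python number keeps every iteration's autograd graph alive, so memory grows each step.

```python
import torch

model = torch.nn.Linear(1024, 1024)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

total_loss = 0.0
for _ in range(100):
    x = torch.randn(32, 1024)
    loss = model(x).pow(2).mean()

    # BUG: `total_loss += loss` would retain the whole graph each iteration.
    # FIX: detach to a plain float before accumulating.
    total_loss += loss.item()

    opt.zero_grad()
    loss.backward()
    opt.step()
```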

BITE_AU_CHOCOLAT t1_itiofjz wrote

Well, I haven't "switched", since I've been using TensorFlow since the start of the project. I was just curious whether PyTorch could let me squeeze out more juice, and after spending a weekend trying to learn PyTorch's assembly-like syntax, it turns out that yes, but actually no. So yeah, I'm perfectly content with using model.fit and calling it a day for the time being.

Oh, and I also forgot: PyTorch won't train with a distributed strategy from inside a Jupyter notebook. KEK. (A possible workaround is sketched below.)

−11
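On the Jupyter point: vanilla torch.multiprocessing.spawn does tend to fail in notebooks, because the spawned workers re-import the training function, and functions defined in a notebook cell aren't importable. A commonly cited workaround (a sketch assuming the accelerate package, not something the commenter used) is notebook_launcher, which starts the worker processes from inside the notebook:

```python
from accelerate import Accelerator, notebook_launcher

def train_fn():
    # Each launched process builds its own Accelerator, which wires up
    # the distributed process group and assigns ranks/devices.
    accelerator = Accelerator()
    print("process", accelerator.process_index, "of", accelerator.num_processes)

# Launch two worker processes directly from the notebook, e.g. one per GPU.
notebook_launcher(train_fn, num_processes=2)
```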