vprokopev OP t1_j0zu1ke wrote
Reply to comment by Zealousideal_Low1287 in [D] Why are we stuck with Python for something that require so much speed and parallelism (neural networks)? by vprokopev
But I do want to use PyTorch; I like it very much.
I just usually have a lot of specific modifications to make to the data, and I find myself avoiding native Python loops and indexing because they make things way slower.
It would still be slower if I implemented that part in C++ than in PyTorch, but at least not drastically slower, and it would not create a bottleneck.
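To illustrate the kind of gap I mean, here is a minimal sketch (the exact timings are machine-dependent and only illustrative) comparing a per-element Python loop against the vectorized PyTorch equivalent:

```python
import time
import torch

x = torch.randn(1_000_000)

# Native Python loop with per-element indexing: each x[i] builds a
# 0-dim tensor and .item() crosses the Python/C++ boundary every iteration.
t0 = time.perf_counter()
total = 0.0
for i in range(x.shape[0]):
    total += x[i].item() ** 2
t_loop = time.perf_counter() - t0

# Vectorized PyTorch equivalent: the work runs in compiled ATen kernels.
t0 = time.perf_counter()
total_vec = (x * x).sum().item()
t_vec = time.perf_counter() - t0

print(f"python loop: {t_loop:.2f}s  vectorized: {t_vec:.5f}s")
```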
Zealousideal_Low1287 t1_j0zuq39 wrote
I have no idea what you’re suggesting. Use C++ instead of vectorising properly and using PyTorch? Do you currently do much compute ‘outside’ PyTorch?
vprokopev OP t1_j0zy68v wrote
Mostly I use vectorized PyTorch operations.
Sometimes I just use native loops and indexing.
Yes, unfortunately there are specific data preprocessing cases where I have to do stuff outside of PyTorch; it's just more convenient.
And even when mostly using PyTorch, I still want the freedom to use the native functionality of a language without a huge performance hit.
But I know PyTorch vectorized ops will still be faster and are suitable for the majority of tasks.
Zealousideal_Low1287 t1_j0zycf1 wrote
And you feel that if you wrote raw C++ it would be as fast as the PyTorch ops, or do you seek to replace the Python part, or something else?
vprokopev OP t1_j0zzfpg wrote
I just seek to replace the Python part with something that is slower (obviously) than PyTorch vectorized ops, but not drastically slower.
And to have the freedom to use more native data structures, with a bit less thinking about how to vectorize every possible algorithm.
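One way to do that without writing a whole framework is to move just the loop-heavy glue into a small C++ extension. A rough sketch (it assumes PyTorch plus a working C++ toolchain, and `square_loop` is only a made-up placeholder for whatever per-element logic is needed):

```python
import torch
from torch.utils.cpp_extension import load_inline

# JIT-compile a tiny C++ function and expose it to Python.
cpp_source = r"""
#include <torch/extension.h>

// Placeholder per-element loop, written in C++ instead of Python.
torch::Tensor square_loop(torch::Tensor x) {
    auto out = torch::empty_like(x);
    auto x_a = x.accessor<float, 1>();
    auto out_a = out.accessor<float, 1>();
    for (int64_t i = 0; i < x.size(0); ++i) {
        out_a[i] = x_a[i] * x_a[i];
    }
    return out;
}
"""

ext = load_inline(name="loop_ext", cpp_sources=cpp_source, functions=["square_loop"])

x = torch.randn(1_000_000)
y = ext.square_loop(x)  # C++ loop: slower than the vectorized x * x, but nowhere near a Python loop
assert torch.allclose(y, x * x)
```

That keeps the inner loop out of the interpreter while the rest of the code stays in Python.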
Zealousideal_Low1287 t1_j10wgd9 wrote
Cool, just write a deep learning framework.