AerysSk
AerysSk t1_j71kz0d wrote
Reply to comment by the_architect_ai in [D] Understanding Vision Transformer (ViT) - What are the prerequisites? by SAbdusSamad
This is the right attitude. Dive in, and when you hit an obstacle, look it up. That's what makes the learning journey fun: you don't learn just one thing, but many things.
AerysSk t1_j2cgmo4 wrote
Reply to comment by Realistic-Bed2658 in [D] GPU-enabled scikit-learn by Realistic-Bed2658
If you are looking for a GPU version of scikit-learn, I think Nvidia is making one as part of RAPIDS, and they call it cuML. Note that not all algorithms are implemented, and some functions are still missing. A rough sketch of how it mirrors the scikit-learn API is below.
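A minimal sketch, assuming a RAPIDS install and an NVIDIA GPU, of how cuML mirrors the scikit-learn estimator interface. The KMeans task, data shape, and hyperparameters here are made up purely for illustration.

    import numpy as np

    # scikit-learn (CPU) version
    from sklearn.cluster import KMeans as skKMeans
    # cuML (GPU) version -- requires a RAPIDS install and an NVIDIA GPU
    from cuml.cluster import KMeans as cuKMeans

    # Illustrative random data
    X = np.random.rand(10_000, 16).astype(np.float32)

    # CPU fit with scikit-learn
    cpu_model = skKMeans(n_clusters=8, n_init=10).fit(X)

    # GPU fit with cuML; the estimator interface is intentionally similar,
    # so often only the import changes
    gpu_model = cuKMeans(n_clusters=8).fit(X)

    print(cpu_model.cluster_centers_.shape, gpu_model.cluster_centers_.shape)

The catch, as noted above, is coverage: if your pipeline relies on an estimator or helper that cuML has not implemented yet, you end up mixing CPU and GPU code paths anyway.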
However, a note about Apple and AMD GPUs: they are on the rise, but it will be a few more years before they become usable. My lab has only Nvidia GPUs, and we already have plenty of headaches dealing with Nvidia drivers and libraries. For at least the next few years, we have no plan to switch to AMD or Apple.
AerysSk t1_iycz3fs wrote
No, dealing with Nvidia dependencies is already more than enough. My department sticks with Nvidia.
AerysSk t1_jbt5j4l wrote
Reply to [D] Is Pytorch Lightning + Wandb a good combination for research? by gokulPRO
My main problem is Lightning itself. I don't find it as flexible as plain PyTorch. I tried migrating an old codebase, gave up along the way, and am still using plain PyTorch now. A rough sketch of what that migration involves is below.
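A minimal sketch, with a toy regression model and made-up hyperparameters, of what moving a plain PyTorch training loop into PyTorch Lightning looks like: the loop body goes into `training_step`/`configure_optimizers`, and the `Trainer` owns the loop, which is exactly where the flexibility trade-off shows up.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl

    # Toy data and loader, purely illustrative
    X = torch.randn(512, 10)
    y = torch.randn(512, 1)
    loader = DataLoader(TensorDataset(X, y), batch_size=32)

    class LitRegressor(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

        def training_step(self, batch, batch_idx):
            # This replaces the body of the hand-written training loop
            xb, yb = batch
            loss = nn.functional.mse_loss(self.net(xb), yb)
            self.log("train_loss", loss)  # hooks into loggers such as wandb
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

    # Lightning drives the loop; any custom control flow has to be expressed
    # through its hooks and overrides rather than written inline
    trainer = pl.Trainer(max_epochs=3, logger=False, enable_checkpointing=False)
    trainer.fit(LitRegressor(), loader)

If your existing code has a lot of non-standard control flow inside the loop, pushing it through Lightning's hooks is the part that gets painful, which is why I gave up on the migration.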