
Realistic-Bed2658 OP t1_j2by65z wrote

Thanks for the links, but I disagree for the most part.

DBSCAN and LOF would most likely benefit. Even their own MLP model would inherently benefit from it (though I do believe anybody willing to train a neural network would most likely use PyTorch or TF anyway).
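To illustrate that last point, here is a rough PyTorch sketch of what an MLPClassifier-style model looks like once you want it on a GPU; the layer sizes, hyperparameters, and dummy data are just placeholders, not anything from scikit-learn itself:

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A small two-layer MLP, roughly what sklearn's MLPClassifier builds internally.
mlp = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 2),
).to(device)

opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch just to show the training loop; real data would go here.
X = torch.randn(1024, 20, device=device)
y = torch.randint(0, 2, (1024,), device=device)

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(mlp(X), y)
    loss.backward()
    opt.step()
```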

Also, the fact that non-DL ML is mainly CPU-based today doesn't mean that won't change five years from now. Personal opinion here, though.

−5

AerysSk t1_j2cgmo4 wrote

If you are looking for a GPU version of scikit-learn, I think Nvidia is making one called cuML. Note that not all algorithms are implemented, and some functions are missing as well.
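For anyone curious, a minimal sketch of how cuML's scikit-learn-style API is meant to be used (this assumes RAPIDS cuML is installed and a supported Nvidia GPU is available; the data and DBSCAN parameters are placeholders):

```python
import numpy as np
from cuml.cluster import DBSCAN  # GPU counterpart of sklearn.cluster.DBSCAN

# Random host data just for illustration; cuML moves it to the GPU for you.
X = np.random.rand(100_000, 16).astype(np.float32)

# Same interface as scikit-learn: -1 in the output marks noise points.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)
```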

However, a note about the Apple and AMD GPU thing: they are on the rise, but it will be a few years before they become truly usable. My lab has only Nvidia GPUs, and we already have a lot of headaches dealing with Nvidia drivers and libraries. At least for the next few years, we have no plans to switch to AMD or Apple.

11

Realistic-Bed2658 OP t1_j2cld4m wrote

Totally understandable. I only use Nvidia at work too.
Thanks for the info about the Nvidia package!

2