fcharras t1_j2c2ec6 wrote

It's not available yet, but scikit-learn devs are doing exploratory work on the matter. I'm a dev specifically tasked with this work myself.

The path currently being explored toward a possibly GPU-enabled scikit-learn is a plugin-based system that would let users benefit from hardware accelerators, including GPU-accelerated computing. The hardware-specific optimizations would be maintained and distributed in separate plugins rather than in the main scikit-learn package.
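
To make the plugin idea a bit more concrete, here is a minimal, self-contained Python sketch of the dispatch pattern being described. None of the names in it (`register_engine`, `squared_norm`, the `engine` argument) are scikit-learn API; they are invented purely for illustration of how a separately distributed plugin could register an accelerated implementation that users opt into.

```python
# Minimal sketch of the plugin-dispatch idea; nothing here is scikit-learn API.
_ENGINES = {"default": lambda X: sum(x * x for x in X)}  # plain CPU implementation

def register_engine(name, fn):
    """A separately distributed plugin would call this at import time."""
    _ENGINES[name] = fn

def squared_norm(X, engine="default"):
    """Dispatch to whichever engine the user opted into."""
    return _ENGINES[engine](X)

# A hypothetical GPU plugin package would register its accelerated kernel here;
# this lambda is just a stand-in for such a kernel.
register_engine("gpu", lambda X: sum(x * x for x in X))

print(squared_norm([1.0, 2.0, 3.0]))                # default CPU path
print(squared_norm([1.0, 2.0, 3.0], engine="gpu"))  # opt-in accelerated path
```

The point is only that the accelerated implementation lives entirely outside the main package and is used when the user explicitly opts in, leaving the default implementation untouched.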

Many people seem to be interested in this idea, including people at Nvidia (for CUDA-enabled plugins) and Intel (for optimizations targeting Intel GPUs). I myself am more specifically tasked with using a SYCL-based software stack, which for now is mostly designed with compatibility with Intel hardware in mind but has interoperability with Nvidia and AMD hardware as a core design goal.

More about this:

I wouldn't recommend trying to use those experimental branches yet, but 2023 might see announcements and releases related to this if the project is successful. In the meantime, early users, feedback, and contributions would be very welcome.
