
chatterbox272 t1_j2c9gnj wrote

>I do feel that Apple’s gpu availability and the popularity of AMD demand a more thorough coverage.

Apple's GPUs are 2 years old, and although you didn't mention them, Intel's dGPUs are <1 year old. Both account for a relatively small portion of users and effectively zero percent of deployment/production.

Most non-deep ML techniques aren't built on a crapload of matmul/add operations, which is what GPUs are good at and why we use them for DL. So relatively few components of sklearn would benefit from GPU support, and I'd be deeply surprised if those parts weren't already implemented for accelerators in other libraries (or transformable via hummingbird, see the sketch below). Contributing to those projects would be more valuable than another reimplementation, lest you fall into the 15-standards problem.
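
To illustrate what I mean by "transformable via hummingbird": a rough sketch of converting an already-fitted sklearn model into tensor ops that can run on a GPU with Microsoft's hummingbird-ml. The model, dataset, and backend choice here are just placeholders, and it assumes hummingbird-ml plus a CUDA-enabled PyTorch install.

    # Hypothetical example: run a fitted sklearn forest on GPU via hummingbird
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.datasets import make_classification
    from hummingbird.ml import convert

    # Train an ordinary sklearn model on CPU
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Convert the trained model into PyTorch tensor operations
    gpu_model = convert(clf, "pytorch")
    gpu_model.to("cuda")  # move to GPU (any device PyTorch supports)

    preds = gpu_model.predict(X)  # same predict() interface as sklearn

Point being, the accelerator path for the tree/ensemble stuff already exists outside sklearn itself.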
