
5death2moderation t1_iumr1v1 wrote

As someone who actually owns an M1 and runs large models in the cloud for a living: it's not nearly as bad as I was expecting. MPS support in PyTorch is growing every day; most recently I've been able to finetune various sentence-transformers models and GPT-J at reasonable speeds (before pushing to GPUs in the cloud). If I were choosing the laptop I'd obviously go with Linux + GPU, but our mostly clueless executive chose the M1. The upside of the M1 is that I can use the 64 GB of unified system memory for loading models, whereas the most GPU memory I could get in an Nvidia laptop is 16–24 GB.
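For reference, the basic setup is roughly the following (a minimal sketch, not my exact pipeline; the `PYTORCH_ENABLE_MPS_FALLBACK` env var tells PyTorch to run any op that has no MPS kernel yet on the CPU instead of erroring out, and the tiny `Linear` model is just a stand-in for whatever you're finetuning):

```python
import os

# Opt in to CPU fallback for ops without an MPS kernel yet.
# Must be set before torch is imported.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

# Use the Apple-GPU backend when available, otherwise plain CPU,
# so the same script runs on Linux boxes too.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Any nn.Module moves over the usual way; a toy model as a placeholder.
model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(2, 16, device=device)
out = model(x)
print(out.shape, device)
```

On an M1 this puts the whole model in unified memory, which is exactly why the 64 GB config is useful for models that won't fit in a laptop GPU's 16–24 GB.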


C0hentheBarbarian t1_iutruwo wrote

Hey, I was running into issues with sentence transformers on an M1 (some layers not implemented for MPS). Could you tell me how you're getting around that?
