Comments

colugo t1_j1y5k71 wrote

I kind of feel like the answer is, if you are doing the kind of work that needs more RAM, you'd know.

In deep learning in particular, RAM affects your maximum batch size, which can limit how you train models. I'm not sure which particular hard limits you'd come up against in other kinds of machine learning. More RAM is helpful, sure, but you can usually get by with less using more efficient code.
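As a rough illustration of why batch size is usually the first wall you hit, here is a back-of-envelope sketch; the per-sample activation count and bytes-per-value are made-up assumptions, not measurements from any real model.

```python
# Back-of-envelope estimate: activation memory grows linearly with batch size.
# The numbers below are illustrative assumptions, not measured values.

def activation_memory_gb(batch_size, activations_per_sample, bytes_per_value=4):
    """Rough lower bound on activation memory for one forward pass, in GB."""
    return batch_size * activations_per_sample * bytes_per_value / 1e9

# e.g. a hypothetical network holding ~50M activation values per sample in float32
for batch_size in (16, 64, 256):
    print(batch_size, round(activation_memory_gb(batch_size, 50_000_000), 1), "GB")
```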

9

peno8 OP t1_j1y5vrf wrote

Hey, thanks for the reply.

I know using a MacBook for DL is kind of unusual, so for DL I will use Google Colab or buy a desktop. The laptop will only handle feature calculation, so batch size won't be a problem for me.

3

barvazduck t1_j1yl9s5 wrote

I work on LLMs and this is my setup (I have an M1 with 32 GB, though it has little influence since everything is done via Colab/server jobs).

The local device matters more if you plan to use the laptop for testing/inference with a Colab-trained model.

1

Artgor t1_j1y6emf wrote

It depends on your data. If your datasets aren't large, it will be fine.

At my job, we were issued M1 MacBook Pros with 16 GB of RAM. Sometimes (but not often) I hit the RAM limit.

3

peno8 OP t1_j1y6yyr wrote

Hello, thanks for the reply.

May I ask which area you are in, or roughly what kind of data it is?

2

Artgor t1_j1y757h wrote

It is 4-7 million rows. Sometimes I hit memory limits when I try to create ~100 columns or when calculating complex rolling features. Also, I have multiple programs running, like PyCharm, so they contribute to the problem.
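For context, a minimal pandas sketch of the kind of per-group rolling feature that causes those spikes; the frame size, column names, and window are hypothetical, and downcasting to float32 is one common way to buy back headroom.

```python
import numpy as np
import pandas as pd

# Hypothetical frame in the size range described: a million-plus rows.
n = 1_000_000
df = pd.DataFrame({
    "group": np.random.randint(0, 1000, size=n),
    "value": np.random.rand(n),
})

# Downcasting float64 -> float32 halves the footprint of numeric columns.
df["value"] = df["value"].astype("float32")

# Rolling features per group; computing them one at a time keeps peak memory
# lower than materialising all ~100 engineered columns at once.
df["value_roll_mean_100"] = df.groupby("group")["value"].transform(
    lambda s: s.rolling(100, min_periods=1).mean()
)

print(round(df.memory_usage(deep=True).sum() / 1e6), "MB total")
```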

2

bokuWaKamida t1_j1yhrka wrote

I would want 32 GB regardless of ML; 16 is just too low for me nowadays... but I'm also the type of guy that has 100 Chrome tabs, 5 IntelliJ projects, and a bunch of Docker containers open at the same time...

2

voiser t1_j1yfpf3 wrote

My experience has not been very positive. DL tools rely on CUDA and MKL, and while there are alternatives that use Metal, I've run into dependency conflicts that ended in systematic segfaults.

At the moment I wouldn't go for a Mac for those things.

1

ZachVorhies t1_j1yagxw wrote

Macs are not good for machine learning. What's good for machine learning is an NVIDIA graphics card with 8 GB of VRAM or more, so that's going to be Windows or Linux only.

With a Mac you'll only be able to run in CPU mode. And right now many models don't support Mac at all, and in some cases they require specific packages compiled for the M1. It's kind of a nightmare.

Edit: Downvote me all you want, haters. I use ML APIs to make apps that run on Linux, Mac, and Windows; Macs only have CPU inference, not CUDA acceleration, and are therefore 10x slower. Your downvotes are a giant fanboy cope.

https://github.com/zackees/transcribe-anything

−3

peno8 OP t1_j1yc0lr wrote

Hm, I've seen many people doing ML on a MacBook, and they say it's doable...

Are you talking about something like this? https://scikit-learn.org/stable/install.html#installing-on-apple-silicon-m1-hardware

I know PyTorch is not available for Apple Silicon at the moment, and that's OK with me because, if I'm not wrong, PyTorch is more about DL, and I will do that on Colab or a dedicated desktop.

It would be great if you could show me an example of what other kinds of models don't support Mac; then I can check whether it's a deal breaker for me.

2

ZachVorhies t1_j1zxoyr wrote

PyTorch does have preview builds that run on the Mac M1 CPU. But they can never offer CUDA acceleration, because these Macs don't have NVIDIA graphics cards.
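For what it's worth, recent PyTorch builds expose Apple's Metal backend as an `mps` device rather than CUDA; a minimal device-selection sketch, assuming a PyTorch version new enough to ship `torch.backends.mps`:

```python
import torch

# Pick the best available backend: CUDA on an NVIDIA box, Apple's Metal
# ("mps") on an M1/M2 Mac with a recent PyTorch build, otherwise the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

model = torch.nn.Linear(16, 1).to(device)
x = torch.randn(8, 16, device=device)
print(device, model(x).shape)
```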

1

twohusknight t1_j1yg7md wrote

I regularly do ML professionally and as a hobby on a 2015 MacBook Pro, where I can run computer vision DL inference on the CPU. It's plenty fast enough for PGMs, SVMs, decision trees, etc. Not every problem requires a GPU, but if I'm doing anything at scale, or anything that requires power and speed, I'll just set up a cloud instance.
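As a small illustration of that kind of workload, a scikit-learn sketch that runs comfortably on a laptop CPU; the dataset and model choice here are just examples, not any particular project of mine.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A classical model like this trains in seconds on a laptop CPU, no GPU involved.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(gamma="scale").fit(X_train, y_train)
print("test accuracy:", round(clf.score(X_test, y_test), 3))
```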

I also have an M1 Mac mini sitting on my desk that I log in to remotely and have used for GPU-based DL training and inference. It's great for small experiments and writing code, but cloud instances are the way to go until the US silicon fabs start opening and prices drop.

1

ZachVorhies t1_j1zx7ci wrote

If your target is small ML models, then they will run on anything. If you want to do serious stuff like Whisper, you'll need an NVIDIA graphics card or you'll be running 10x slower.

0