pommedeterresautee t1_iyza0g2 wrote

I know very little about CPUs, but I'm wondering why you think more cache would help?

Intuitively, I would think that would be the case if training were memory-bandwidth limited most of the time, but the issue with CPUs (vs. GPUs) is that during training the model is compute bound.


PresentGrapefruit451 OP t1_iyzemhl wrote

I thought so because more L3 cache can keep more instructions and data close to the CPU, and fast CPU processing might help in the preprocessing and data loading stages. Though I'm not sure whether the improvement would be significant.
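To illustrate the preprocessing point: the CPU's main job during GPU training is the input pipeline (decode, augment, batch), and that work parallelizes across cores, so core count and clocks tend to matter more than extra L3 cache. A minimal stdlib-only sketch, where `preprocess` is a hypothetical stand-in for per-sample work:

```python
# Hypothetical sketch: per-sample preprocessing spread across CPU threads,
# the kind of work the CPU does while the GPU trains. This parallelizes
# across cores; extra L3 cache helps it far less than more cores would.
from concurrent.futures import ThreadPoolExecutor

def preprocess(sample):
    # stand-in for decode/resize/augment work done per image
    return [x * 2 for x in sample]

samples = [[1, 2, 3], [4, 5, 6]]

with ThreadPoolExecutor(max_workers=4) as pool:
    batch = list(pool.map(preprocess, samples))

print(batch)  # [[2, 4, 6], [8, 10, 12]]
```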


AnnualDegree99 t1_iyzpxid wrote

Wouldn't you be using a GPU to train your model? That would probably be hundreds of times faster.


PeterIanStaker t1_iz268nu wrote

If you're training neural networks, I'm not sure; I would wait for reviews of the 7950X3D (though in this case, you should almost always prefer spending on a mid-tier GPU like a 3060 Ti instead).

If you're doing something like random forests or SVMs, you're going to find yourself limited by SSD or even RAM speed long before the difference between a 7950X3D and a 7950X ever becomes relevant.


PresentGrapefruit451 OP t1_iz3a2h1 wrote

I am mainly looking at retraining models like Inception, YOLO, etc., using TensorFlow and Keras.


PeterIanStaker t1_iz3gh0k wrote

In that case, don't even worry about the CPU; you'd really be fine with a 5600X equivalent. If the 7950X3D winds up being better value for money, then fine, but otherwise your limiting factor will be your GPU, if not your data pipeline.
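On the data-pipeline point: the usual fix in TensorFlow is `dataset.prefetch(tf.data.AUTOTUNE)`, which prepares the next batch on the CPU while the GPU runs the current step. A stdlib-only sketch of that producer/consumer idea (the batch contents and buffer size are illustrative, not from any real pipeline):

```python
# Hypothetical sketch of what prefetching does: a background thread
# prepares batches while the consumer (standing in for GPU training
# steps) drains them, so the accelerator isn't left waiting on the CPU.
import queue
import threading

def producer(batches, q):
    for b in batches:
        q.put(b)      # CPU-side batch prep would happen here
    q.put(None)       # sentinel: no more batches

q = queue.Queue(maxsize=2)  # small buffer, like prefetch(2)
threading.Thread(target=producer, args=([[0], [1], [2]], q)).start()

consumed = []
while (batch := q.get()) is not None:
    consumed.append(batch)  # stand-in for one GPU training step

print(consumed)  # [[0], [1], [2]]
```

If the producer can't keep the queue non-empty, the consumer stalls; that stall is exactly the "data pipeline as the limiting factor" case, and it's usually fixed with more parallel workers, not a bigger CPU cache.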