Submitted by simorgh12 t3_zcchr4 in deeplearning

My use case will be scientific machine learning on my desktop, specifically training neural networks. 4090s only seem to be available at scalper prices, and a used 3090 seems to be a better value than a new 4080. Still, which performs better: a 4080 or a 3090?

11

Comments

notgettingfined t1_iyvqw00 wrote

I would go with the 3090 for the 24 GB of VRAM. I don’t believe the performance difference will be that big of an issue, but not having enough VRAM will make some things very difficult.

24

suflaj t1_iyvwqn7 wrote

The 4080 is slightly faster, but after the 4090 the 3090 is the best bang for the buck in DL. VRAM is invaluable; raw speed generally is not.
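
As a rough sketch of why VRAM dominates the decision (my own back-of-envelope, assuming plain fp32 training with Adam; activation memory comes on top of this):

```python
# Training footprint per parameter in fp32 with Adam:
# 4 B weights + 4 B gradients + 8 B optimizer moments = 16 B/param.
def training_vram_gb(n_params: float, bytes_per_param: int = 16) -> float:
    return n_params * bytes_per_param / 1e9

print(training_vram_gb(350e6))  # ~5.6 GB before activations
print(training_vram_gb(1e9))    # ~16 GB -- past a 16 GB 4080 once activations count
```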

12

incrediblediy t1_iyy49kh wrote

The 4080 16 GB really should have been the 4070 Ti.
The 3090 24 GB would be the better choice, especially for the VRAM, and you can also get a used card, which is much cheaper.

3

animikhaich t1_iyzan51 wrote

3090, hands down. The VRAM benefit is invaluable. However, if you are getting a 3090 at nearly the 1K USD price point, then I recommend checking out the 3090 Ti. The Founders Edition 3090 Ti is in stock on Nvidia’s official store for 1,100 USD + tax.

3

allanmeter t1_iyz8yy4 wrote

One more consideration: if you’re using CUDA, check CUDA version compatibility for the 4000 series, and check cuDNN compatibility as well. Sometimes the newest cards are more of a pain than the incremental value is worth.
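
A quick way to sanity-check this from Python (a minimal sketch, assuming a PyTorch install; the 40-series is compute capability 8.9, which needs CUDA 11.8 or newer):

```python
import torch

print(torch.cuda.is_available())            # False usually means a driver/CUDA mismatch
print(torch.version.cuda)                   # CUDA version PyTorch was built against
print(torch.backends.cudnn.version())       # bundled cuDNN version
print(torch.cuda.get_device_capability(0))  # (8, 9) on Ada / 40-series cards
```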

2

magicview t1_j02aund wrote

I am facing much the same issue. I partly want to try some moderate deep learning projects, but I'm also interested in other uses of the GPU, for example video editing and GPU acceleration for data processing (Matlab, Python, etc.). I just got a computer with an i7-12700K and am looking to upgrade the GPU.

Right now the 3090 Ti FE is available at $1,099, while the 4080 costs about $1,199 (but with less availability). The 4080 seems better than the 3090 Ti in almost every respect except deep learning performance, which is unknown (I didn't find any DL benchmark/comparison).

So I'm not sure how to choose. Is it worth trading VRAM/deep learning capability for the other performance?

2

[deleted] t1_iyxc83z wrote

[deleted]

0

incrediblediy t1_iyy4mua wrote

I am not sure about this; even my GTX 1060 3 GB was somewhat faster than the K80 on Google Colab. Also think about storage size/speed, internet upload speed, security/restrictions on your data, the 12-hour session limit, etc.

3

[deleted] t1_iyzkdpg wrote

[deleted]

1

incrediblediy t1_iyzkww4 wrote

> K80

Yes, I meant that I got a K80 and was running some CNN/BERT workloads. I just checked: a K80 (single unit) has similar TFLOPS to a GTX 1060 3GB, so with the other overheads in the cloud (slow CPU, Drive storage, etc.), Colab could well be slower.
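
If you want to compare directly rather than trust spec sheets, here's a crude throughput probe (my sketch, assuming PyTorch with a CUDA device; run it locally and on a Colab runtime):

```python
import time
import torch

torch.backends.cuda.matmul.allow_tf32 = False  # keep true fp32 on Ampere+

def matmul_tflops(n: int = 4096, iters: int = 50) -> float:
    # Time n x n fp32 matmuls and convert to TFLOPS (2*n^3 FLOPs each).
    a = torch.randn(n, n, device="cuda")
    b = torch.randn(n, n, device="cuda")
    for _ in range(5):  # warm-up
        a @ b
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return 2 * n**3 * iters / (time.time() - t0) / 1e12

print(f"{matmul_tflops():.1f} TFLOPS")
```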

Now I have a PC with a dual-GPU setup (RTX 3090 + RTX 3060) and access to GPU servers at uni, so no more Colab :)

> have a 1650 which is no slouch but colab trained in 5s what took my GPU 10 minutes.

Is that a laptop GPU?

1