
Creepus_Explodus t1_j2zwyoz wrote

Radeon Software has hardware monitoring built in. If your junction temperature reaches 110 degrees C, your card is thermal throttling. When that happens, the GPU's clock speed gets cut back significantly, which shows up as stuttering in gameplay.

46

Blastoxic999 t1_j2zx75q wrote

So, if it gets too hot too fast and it lags, that means it's thermal throttling?

19

Creepus_Explodus t1_j2zy0eq wrote

Essentially, yes. It's a protection mechanism for the GPU to prevent overheating, so it slows down to reduce temperature. Stuttering can have other causes though, so it's just an indicator that thermal throttling may be occurring. To actually validate it, you need some way to monitor the GPU temperature under load. You can enable an overlay in Radeon Software that shows your hardware stats during gameplay, or use other software like GPU-Z or HWiNFO.
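
If you'd rather script the monitoring yourself on Linux, the amdgpu driver exposes temperature sensors through hwmon in sysfs. A minimal sketch along those lines is below; the card0 path and the hwmon directory name are assumptions that vary per machine, so check your own system:

```python
# Minimal sketch: poll an AMD GPU's temperature sensors via the Linux
# amdgpu hwmon interface. The card0/hwmonN paths are assumptions --
# verify them on your own system.
import glob
import time

# Find the hwmon node for the first GPU (card0); adjust if you have several.
hwmon_dirs = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*")
if not hwmon_dirs:
    raise SystemExit("No hwmon node found for card0 -- is this an amdgpu card?")
hwmon = hwmon_dirs[0]

def read_temps(hwmon_path):
    """Return {label: degrees C} for every tempN_input in the node."""
    temps = {}
    for input_file in glob.glob(f"{hwmon_path}/temp*_input"):
        label_file = input_file.replace("_input", "_label")
        try:
            with open(label_file) as f:
                label = f.read().strip()  # e.g. "edge", "junction", "mem"
        except FileNotFoundError:
            label = input_file.rsplit("/", 1)[-1]
        with open(input_file) as f:
            temps[label] = int(f.read()) / 1000  # sysfs reports millidegrees
    return temps

while True:
    temps = read_temps(hwmon)
    flag = "  <-- throttling range!" if temps.get("junction", 0) >= 110 else ""
    print(temps, flag)
    time.sleep(1)
```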

30

Blastoxic999 t1_j2zygn4 wrote

So what's the problem here with those AMD GPUs? The GPU overheats and gets damaged? The GPU throttles too fast even though it wasn't hot?

7

Creepus_Explodus t1_j2zzgyz wrote

GPU junction temperatures are meant to sit between 70 and 90°C on these air coolers. Thermal throttling should never occur unless something is seriously wrong. The card is designed to maintain peak performance at all times while staying within its power and thermal limits. When a card thermal throttles, it is exceeding its thermal limits and must reduce its performance to maintain stability. The shutoff temperature is 115°C; your card will actually just turn off to prevent damage when it reaches that point.
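
To put the ladder in one place, here's an illustrative sketch using the thresholds above. It's a simplification for clarity, not AMD's actual firmware logic (real hardware adjusts clocks and voltage continuously):

```python
# Illustrative only: a simplified model of the thermal protection ladder
# described above. Real firmware throttles gradually; this just labels
# the regimes using the thresholds from the comment.
def thermal_state(junction_c: float) -> str:
    if junction_c >= 115:
        return "SHUTDOWN: card powers off to prevent damage"
    if junction_c >= 110:
        return "THROTTLING: clocks cut back to shed heat"
    if junction_c <= 90:
        return "NORMAL: full boost within power/thermal limits"
    return "WARM: still in spec, approaching the throttle point"

for t in (75, 95, 110, 116):
    print(f"{t}°C -> {thermal_state(t)}")
```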

22

Masters_1989 t1_j310sir wrote

Great series of responses.

That was very well said, and very helpful information for someone who doesn't know about GPUs, like the person you replied to. Nice job - was a pleasure to see.

7

onecrazyguy1 t1_j30pk04 wrote

I usually just drip water on some of the metal components and am careful to avoid the PCB.

−5

JaggedMetalOs t1_j31avgw wrote

The problem is you get terrible performance because the cooler isn't working, so the chip gets hot and slows itself right down to prevent damage.

The current theory is that there isn't enough liquid in the vapor chamber, so it stops working when the chip heats up.

1

JCastin33 t1_j30139g wrote

From a few videos I've watched about it, it looks as though one part, specifically the vapor chamber, is not well designed. That said, I don't think AMD has come out and said what the issue is yet.

Anyway, it seems like the vapor chamber gets locked up and stops cooling properly, resulting in temps climbing to 110°C, and then the GPU slows itself down to try to drop the temps, which fails, so it just keeps slowing itself further.

0

ChrisFromIT t1_j30v9pa wrote

I don't think it's the junction temperature that's reaching 110 degrees C; it's the GPU hotspot reaching 110 degrees C.

3

Comander_K33N t1_j310bmt wrote

110°C!! Holy moly. My old 3090 Ti would run at 70°C, which was normal. At that temp my case was putting out some serious heat. Can’t even imagine 110°C.

1

JaggedMetalOs t1_j31b1ay wrote

The problem is the cooler isn't actually moving the heat to the fins, so the heat is trapped on the chip and the exhaust temperature is probably cool...

8

DeBlackKnight t1_j32ihr9 wrote

So for one, the heat coming out of your case is solely due to the amount of wattage being used. A card can run at 60°C peak and still pump out 40°C+ air if it's drawing 400-500 W.

For two, we are talking about junction (or hotspot, in Nvidia's case) temp, not edge temp. I do not believe for a second that anything other than a watercooled 3090 Ti is running at a 70°C junction temp. If you're comparing a watercooled card's temps to a reference card's temps, I don't know what to tell you.

I believe that the AMD GPUs in question actually maintain fairly decent edge temps while actively thermal throttling due to junction temps.
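
As a back-of-the-envelope check on the first point, the steady-state temperature rise of the exhaust air is roughly ΔT = P / (ṁ·c_p), which depends on watts and airflow but not on die temperature. A quick sketch (the 100 CFM airflow figure is an arbitrary assumption for illustration):

```python
# Back-of-the-envelope: at steady state, all board power ends up in the
# exhaust air, so the air's temperature rise depends on wattage and
# airflow -- not on how hot the die itself runs.
# The 100 CFM airflow figure is an arbitrary assumption for illustration.
AIR_DENSITY = 1.2          # kg/m^3, air at room temperature
AIR_CP = 1005.0            # J/(kg*K), specific heat of air
CFM_TO_M3S = 0.000471947   # 1 cubic foot per minute in m^3/s

def exhaust_delta_t(watts: float, cfm: float) -> float:
    mass_flow = cfm * CFM_TO_M3S * AIR_DENSITY   # kg/s of air moved
    return watts / (mass_flow * AIR_CP)          # temperature rise in K

for watts in (200, 450):
    rise = exhaust_delta_t(watts, 100)
    print(f"{watts} W through 100 CFM -> exhaust ~{rise:.1f} °C above intake")
```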

2

sonoma95436 t1_j377yov wrote

It also has to do with efficiency. Die shrinks generally help efficiency, and heat is wasted energy. In a perfectly optimized scenario, a chip would produce minimal heat.

1

BobisaMiner t1_j3lxt92 wrote

In computer chips, pretty much all energy ends up as heat. But it's also not wasted energy the way it would be in an internal combustion engine.

1

sonoma95436 t1_j3ndd43 wrote

Why not? In an ICE you can recover some waste heat with a turbocharger. How do you recover waste heat from a CPU? In fact, you have to use more energy to cool it.

1

BobisaMiner t1_j3ohwj6 wrote

Sorry, I wasn't clear. My point was that heat in a CPU is a by-product by design, and yeah, it's always going to be 100% wasted. I guess it heats our rooms; that's something.

But in an ICE, heat (thermal energy) is what gets converted into movement.

1

sonoma95436 t1_j3oy1wy wrote

In an ICE, expanding heated gas from combustion is converted to mechanical energy, but the waste heat is inefficiency. A more direct heat-to-energy conversion would be a steam engine, which is external combustion, although the released steam is itself waste heat.

1

BobisaMiner t1_j3lxk6d wrote

I'm pretty sure your 3090 Ti puts out more heat, since it can eat a lot more power (450 W TDP). How hot the card gets has little effect on its heat output.

1