
2Punx2Furious t1_iv7lc68 wrote

Yeah, but what about quantum tunneling?

8

Down_The_Rabbithole t1_iv9g8vz wrote

Quantum tunneling has been a problem since the 32 nm node. The solution is to have hardware perform the same calculation multiple times to make sure a bit didn't get flipped; the result that comes up most often is assumed to be the correct one.

Jim Keller has an entire talk about how to manage quantum tunneling bit flips statistically.

Sadly it means more and more of the actual silicon is spent on redundancy like this instead of on normal computing.
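The voting scheme described above can be sketched in software. This is a toy illustration, not how the redundancy is actually laid out in silicon; `redundant_compute`, `flaky_add`, the run count, and the bit-flip probability are all invented for the example.

```python
import random
from collections import Counter

def redundant_compute(op, runs=3):
    """Run the same calculation several times and majority-vote the results.

    Toy model of the hardware approach: repeat the operation, then
    assume the most frequent result is the correct one.
    """
    results = [op() for _ in range(runs)]
    value, _count = Counter(results).most_common(1)[0]
    return value

def flaky_add(a, b, flip_chance=0.2):
    """Adder that occasionally flips a random low bit, standing in
    for a tunneling-induced bit flip. Parameters are made up."""
    result = a + b
    if random.random() < flip_chance:
        result ^= 1 << random.randrange(8)  # flip one of the low 8 bits
    return result

# With 5 runs, an occasional flipped result gets outvoted.
print(redundant_compute(lambda: flaky_add(2, 3), runs=5))
```

The cost is visible right in the sketch: five executions to get one trustworthy answer, which is the redundancy overhead the comment below asks about.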

We can clearly see this: a CPU from 2008 (i7-920) and a CPU from 2022 (i7-13900K) differ by almost 100x in transistor count, yet the 13900K is "only" 5-10x faster.

10

2Punx2Furious t1_iv9jgib wrote

> The solution is to have hardware perform the same calculation multiple times to make sure a bit didn't get flipped; the result that comes up most often is assumed to be the correct one.

So we have to do the same calculation multiple times, effectively negating any gains from smaller transistors? Or is it still worth it even counting the additional calculations? I assume the latter, since we're still doing it.

> We can clearly see this: a CPU from 2008 (i7-920) and a CPU from 2022 (i7-13900K) differ by almost 100x in transistor count, yet the 13900K is "only" 5-10x faster.

Ah, there's the answer. Thanks.

3