
swisstraeng t1_j08a2s9 wrote

Well, they didn't. In reality, the 2nm process has an expected gate pitch of around 45nm.

That doesn't mean they aren't finding cool ways to make chips even more compact. Lots of lesser-known terms like GAAFET (a kind of vertical 3D transistor).

But the main issue with all of this is that the price to manufacture a single chip keeps getting higher and higher, since it's no longer just a matter of size, but also of fabrication complexity and time.

If I were to guess, we'll slowly get stuck around the 2025-2030 era with our current technology. I think that's when we'll need to turn to alternatives like the more power-efficient ARM architecture, which is what Apple is already using for its M1 and M2 chips.

163

orincoro t1_j08c6qx wrote

Yeah, I thought I read about this: the obvious next step is to just build the wafers in a 3D architecture, but it's super complicated to fabricate.

57

IlIIlllIIlllllI t1_j08pupi wrote

heat is a bigger problem

41

Hodr t1_j0bvehy wrote

Heat is more of a materials issue. Once they hit the wall they can move to GaAs or other semiconductors.

The only reason we still use silicon is the existing infrastructure and the relative abundance of the element.

3

swisstraeng t1_j09a5ge wrote

Yeah, and the main issue is that when you stack layers on top of layers, the surface gets less and less flat. At some point you're off by a whole layer, so you have to run long and expensive planarization steps to flatten the thing out again.

Cooling is partially an issue, but that's also because CPU/GPU manufacturers push their chips to the limit to make them look better. They end up selling stuff like the RTX 4090, which is clocked way too high and ends up eating 600W when it could deliver 90% of the performance at 300W. But hey, they're not the ones paying the power bill.

32

orincoro t1_j0c7tzb wrote

I wonder how much electricity globally is consumed by needlessly overclocked GPUs.

1

swisstraeng t1_j0ei32s wrote

Surprisingly not much, if we only look at industry-grade hardware. Consumers? Yeah, a lot is wasted there.

All server and industrial stuff is actually not too bad. For example, the chip used in the RTX 4090 is also used in a Quadro card.

It's the AD102 chip, used in the RTX 6000 Ada GPU, which has only a 300W TDP, compared to the RTX 4090's 450W, sometimes pushed to 600W. Or worse, a rumored 800W in the RTX 4090 Ti.

We're talking about the same chip and a 300W versus 800W difference.

Anyone using an RTX 4090 Ti would be wasting 500W for a bit of extra computing power.

But hey, a kWh costs about 0.25 euros in the EU, depending on where you live. That means an RTX 4090 Ti burns an extra euro every 8 hours of use, money that could be saved by downclocking the card.
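A quick back-of-the-envelope sketch of that math (the wattages and electricity price are the rough figures from above, not measured values):

```python
# Cost of the extra power draw, using the ballpark figures above:
# 300W vs. 800W for the same AD102 chip, at ~0.25 EUR/kWh.
extra_watts = 800 - 300      # extra draw vs. the downclocked card
price_per_kwh = 0.25         # EUR, varies by country
hours = 8

extra_kwh = extra_watts / 1000 * hours   # 0.5 kW * 8 h = 4 kWh
cost = extra_kwh * price_per_kwh
print(f"{extra_kwh:.1f} kWh wasted over {hours} h -> {cost:.2f} EUR")
# 4.0 kWh wasted over 8 h -> 1.00 EUR
```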

1

SneakyCrouton t1_j09z32d wrote

See, that's just a marketing name for it. It's actually just a 2D transistor, but they draw a little D on there for legality purposes.

5

Jaohni t1_j09jjfj wrote

PSA: ISA =/= implementation.

While it was common in the late 90s and early 2000s to draw a strong distinction between CISC and RISC styles of architecture, with CISC offering a wide variety of purpose-built instructions to accomplish specific tasks quickly, and RISC avoiding the bloated instruction sets that leave transistors sitting around doing nothing (idle transistors do still consume some power, btw), in reality modern ISAs mix CISC and RISC philosophies. More important than a core being ARM or x86 is the way that core is implemented.

In fact, if you look at a variety of ARM core implementations, there isn't as big an efficiency improvement gen over gen as you'd expect. The Snapdragon 865, 870, 888, and 8 Gen 1 all perform relatively closely in longer tasks (though they benchmark quite differently in tests built around very short bursts of work), and they're not that out of line with certain x86 chips, such as a 5800X3D (extrapolating from a 5800X power-limited to wattages similar to the Snapdragon SoCs), or, say, a Ryzen 6800U power-limited to 5W.

That's not to say there isn't ARM IP out there that can help performance at lower power draw, but I'd just like to highlight that a lot of the improvements you see in Apple Silicon aren't necessarily down to it being ARM. They come from it being highly custom, and from Apple having varying degrees of control over A) the hardware, B) the drivers / OS / software stack, and C) the actual apps themselves. If you can optimize your CPU architecture for specific APIs, programming languages, use cases, and operating systems, there are a lot of unique levers you can pull as a whole ecosystem, as opposed to, say, a platform-agnostic CPU vendor.

Another thing to note: while Apple saw a very respectable increase when jumping from Intel to their in-house M1 chips, it's not an entirely fair comparison between x86 and ARM as instruction sets, because the Intel chips were built on a fairly inferior node (14 nanometer, IIRC) while the M1 series is on a 5nm-family node or better. Taking that into account, when comparing the Intel and M1 Macs you may want to attribute anywhere from 80 to 120% of the performance-per-watt improvement to the node alone, with whatever is left being a combination of the various ecosystem controls Apple has available.
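As a toy illustration of that accounting (the 3.0x and 2.5x factors below are invented for the example, not measurements of any real chip):

```python
# Toy split of a perf/watt gain into "node" and "everything else".
# Both factors are invented numbers for illustration only.
total_gain = 3.0   # hypothetical measured perf/watt gain, Intel Mac -> M1
node_gain = 2.5    # hypothetical gain expected from the node jump alone

# Gains multiply, so the residual (architecture + ecosystem) factor is:
arch_gain = total_gain / node_gain
print(f"Gain not explained by the node: {arch_gain:.2f}x")
# Gain not explained by the node: 1.20x
```

If the node factor equals or exceeds the total gain (the "100 to 120%" end of the range), the residual drops to 1x or below, i.e. the node explains everything.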

Compared to carefully undervolted Raptor Lake chips, or equally carefully managed Zen 4 processors, the Apple SoCs, while respectable in what they do (and respectable for many reasons not owing to their ARM ISA), aren't alien tech or anything; they're simply well-designed chips.

14

frozo124 t1_j09mnnx wrote

It’s true. I work for ASML and things keep getting smaller

4

Ultra-Metal t1_j093vt3 wrote

Well, you have to do that for the gate until they come up with something better. Quantum tunneling is very much a thing at this size.

0

Mango1666 t1_j08v67v wrote

idk if gaafet will come to consumers in the capacity finfet and mosfet have reached, gaa is a very supply limited substance in comparison!

−3

jjayzx t1_j08zqx2 wrote

What do you mean, substance? GAAFET (Gate-All-Around FET) is a design, not a material.

6

swisstraeng t1_j09bv7d wrote

True that it's not a material, BUT there is a valid point there: such 3D ways of building transistors are expensive to manufacture.

And we, consumers, don't like expensive things. We want performance/price most of the time.

Not a lot of us would be ready to pay $4000 for a CPU if it meant 30% better performance over a $900 CPU.
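A quick sketch of that trade-off, using the made-up prices above:

```python
# Perf-per-dollar for the hypothetical $900 vs. $4000 CPUs above.
cheap_perf, cheap_price = 1.0, 900    # baseline performance
fancy_perf, fancy_price = 1.3, 4000   # 30% faster, ~4.4x the price

ratio = (cheap_perf / cheap_price) / (fancy_perf / fancy_price)
print(f"The cheap CPU delivers {ratio:.1f}x more performance per dollar")
# The cheap CPU delivers 3.4x more performance per dollar
```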

1

jjayzx t1_j09ddkj wrote

Different designs are how things have been moving forward and how manufacturers have been targeting the performance/price ratio. If a device doesn't require much processing power, there are other processors still made on older nodes for a lower price point. The majority of the pricing is in the machines, wafers, and yields.

1

dreamwavedev t1_j0abb8c wrote

I think you might mean GaN, which is a different semiconductor material they're using in some power supplies.

GAA stands for gate-all-around and describes the geometry of the transistor (the gate surrounds the channel on all sides, where FinFET only surrounded it on three sides), not what it's made of.

1