
Thrawn89 t1_j6mvpr4 wrote

It's a great explanation, but there are a few issues with the metaphor's correctness.

The kids are all working on the exact same step of their individual problem at the same time. The classroom next door is on a different step for their problems. The entire school is the GPU.

Also, replace the kids with undergrads; they don't work on 1+1 problems, they work on the exact same kind of problems the CPU does.

To translate: the reason they are undergrads and not mathematicians is that GPUs are clocked lower than CPUs, so they don't do the individual work as fast. However, the gap between a mathematician and kids was a few too many orders of magnitude.

Also, they do work on problems of the same complexity. GPUs have been heterogeneous compute platforms rather than strictly graphics processors since the programmable shader model was introduced, which made them Turing complete. Additionally, the GPU's ALU and shader model can run code as complex as a C program these days.

The classroom in this analogy is what DX calls a wave, and each undergrad is a lane.

In short, there is no large difference between a GPU and a CPU, besides the fact that the GPU uses what is called a SIMD (single instruction, multiple data) architecture, which is what this analogy was trying to convey.

Programs, whether CPU machine code or GPU machine code, are basically a list of steps to perform. A CPU runs a program by going through each step and executing it on a single instance of state. A GPU, however, runs the same step on multiple instances of state at the same time before moving on to the next step. An instance of state could be a pixel, a vertex, or just a generic compute instance.
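For example, here's roughly what that looks like as a CUDA kernel (a minimal hypothetical sketch, not from any real codebase):

```cuda
#include <cstdio>

// Every thread executes this same instruction stream, each on its own
// element of the data - single instruction, multiple data.
__global__ void addOne(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; // this thread's lane
    if (i < n)
        data[i] += 1.0f; // same step, different instance of state
}

int main()
{
    const int n = 1024;
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; i++) data[i] = (float)i;

    // 4 blocks of 256 threads; the hardware groups them into waves
    // (warps of 32 on NVIDIA) that run in lockstep.
    addOne<<<4, 256>>>(data, n);
    cudaDeviceSynchronize();

    printf("%f %f\n", data[0], data[n - 1]); // 1.000000 1024.000000
    cudaFree(data);
    return 0;
}
```

Every thread runs the same step; the hardware just points each one at a different piece of data.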

27

espomatte t1_j6n3owq wrote

Sir, this is an ELI5

51

Thrawn89 t1_j6n6gt9 wrote

Sir, read rule 4.

−2

ap0r t1_j6nbo15 wrote

As a person who has over ten years of experience building and repairing computers, I understood what you meant, but I also can see how a layperson would not understand anything you wrote.

27

Yancy_Farnesworth t1_j6nladg wrote

> In short, there is no large difference between a GPU and a CPU, besides the fact that the GPU uses what is called a SIMD (single instruction, multiple data) architecture, which is what this analogy was trying to convey.

The GPU is heavily geared towards floating point operations, while the CPU is less so. CPUs used to rely on a separate FPU chip. Transistors got small enough that the FPU could fit on the CPU itself. Then the need for dedicated floating point performance skyrocketed with the rise of 3D games, which ultimately called for a separate dedicated chip that could do absurd numbers of floating point operations in parallel, resulting in the GPU.

This floating point performance is why GPUs are a great tool for AI/ML and why Nvidia came to dominate hardware dedicated to AI/ML applications.

4

Thrawn89 t1_j6no21t wrote

GPUs are not better at floating point operations; they are just better at doing them in parallel, per SIMD, just like any other operation that benefits from SIMD.

In fact, floating point support is generally not quite as good as on a CPU. Some GPUs do not even natively support double precision, or all floating point operations. Then there's denorm behavior and rounding modes, which are scattered across implementations. Many GPUs take shortcuts by not implementing a full FPU internally and convert to fixed point instead.
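You can actually query that gap on NVIDIA hardware (a minimal CUDA sketch, assuming a CUDA-capable device 0; consumer cards commonly report 32:1 or 64:1):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int ratio = 0;
    // Throughput ratio of single-precision to double-precision ops;
    // large on consumer cards, small on HPC parts with full FP64 units.
    cudaDeviceGetAttribute(&ratio,
                           cudaDevAttrSingleToDoublePrecisionPerfRatio, 0);
    printf("FP32:FP64 throughput ratio = %d:1\n", ratio);
    return 0;
}
```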

−1

BobbyThrowaway6969 t1_j6p2not wrote

Double precision is the black sheep of the family. It was just thrown in for convenience. GPUs don't have double precision because, what do you care if a vertex is a millionth of a pixel off or a billionth? Graphics has no use for double precision, so why make the chip more expensive to produce?

Compute programming might need it, but not for the general public.
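To put a number on "a millionth of a pixel" (a quick host-side illustration):

```cuda
#include <cstdio>
#include <cfloat>

int main()
{
    // FLT_EPSILON is the gap between 1.0 and the next representable float:
    // about 1.2e-7, already finer than a millionth.
    printf("float epsilon:  %g\n", (double)FLT_EPSILON); // ~1.19209e-07
    printf("double epsilon: %g\n", DBL_EPSILON);         // ~2.22045e-16
    return 0;
}
```

For a normalized coordinate near 1.0, a float is already accurate to better than a millionth, so doubles buy graphics essentially nothing.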

3

Thrawn89 t1_j6p76g9 wrote

Agreed, which is why it's wrong to say that GPUs are better at floating point operations than a CPU.

1

BobbyThrowaway6969 t1_j6pcmfu wrote

Depends on how you look at it. Their circuitry can handle vector math more efficiently.

2

Thrawn89 t1_j6pdf0b wrote

No, most GPUs haven't had vector instructions for maybe a decade. Modern GPUs use SIMD waves for parallelization with scalar instructions.
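To make that concrete (a hypothetical CUDA-flavored sketch): even a "vector" type like float4 compiles down to scalar operations per lane; the parallelism comes from the lanes of the wave running in lockstep, not from vector ALUs.

```cuda
__global__ void scale(float4 *v, float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; // this thread's lane
    if (i < n) {
        // Looks like vector math, but each lane executes four independent
        // scalar multiplies; the wave provides the parallelism.
        v[i].x *= s;
        v[i].y *= s;
        v[i].z *= s;
        v[i].w *= s;
    }
}
```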

2

BobbyThrowaway6969 t1_j6p131z wrote

I left "1+1 math problems at the same time" pretty vague on purpose. Math in my analogy isn't referring to processor arithmetic; it refers to "stuff" a processor can do. They don't all have to be on the same task. Some can handle vertices while others handle pixels.

>they work on the exact same kind of problems the CPU does.

They can do arithmetic the same way, sure, but you wouldn't exactly expect to be able to communicate with a mouse & keyboard using one of the cores in a GPU.

The instruction set for a GPU (based around arithmetic) is definitely nothing like the instruction set of a CPU lol. That's what I meant by 2nd grader vs mathematician.

3

Thrawn89 t1_j6p8b42 wrote

Each wave can only work on a single task. You can't process vertices and pixels in the same wave (classroom). Other cores (classrooms) can be working on other tasks, though, which is what I said above.

1