
luxmesa t1_j6lba7a wrote

If we’re talking about a 3D game, the information the CPU passes to the GPU is stuff like the shapes of the objects in a scene, what color or texture each object has, and where they are located. The GPU turns that into a picture your monitor can display. Going from a bunch of shapes and colors to a picture involves a lot of matrix multiplication, which is something a GPU can do much faster than a CPU.
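
As a rough sketch of what that matrix math looks like, here's a minimal CUDA kernel that applies one projection matrix to every vertex in a scene. The kernel name, buffer layout, and row-major convention are all illustrative, not any real engine's code:

```
// One thread per vertex: multiply a 4x4 matrix by a 3D position (w = 1),
// the core transform a GPU applies when projecting a 3D scene to the screen.
__global__ void transformVertices(const float* m,    // 4x4 row-major matrix
                                  const float3* in,  // object-space positions
                                  float3* out,       // transformed positions
                                  int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 v = in[i];
    float x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3];
    float y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7];
    float z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11];
    float w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15];
    out[i] = make_float3(x / w, y / w, z / w);  // perspective divide (assumes w != 0)
}
```

Every vertex gets the exact same arithmetic, which is why the GPU can run thousands of these threads side by side.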

6

Iz-kan-reddit t1_j6m4buu wrote

To dumb it down some more, the CPU tells the GPU to draw a 45-degree line from A (pixel 1,1) to B (pixel 1000,1000).

The GPU puts a pixel at A, then adds 1 to each coordinate and puts a pixel there (at point 2,2). It repeats this 999 times until it gets to B.

In this case, the math is really simple. X+1, Y+1. Rinse and repeat.
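
A toy CUDA version of that line, just to show why the GPU wins: instead of stepping 999 times, each thread places one pixel independently. The kernel name and framebuffer layout are made up for illustration:

```
// Thread i draws point (1+i, 1+i), so all 1000 pixels of the diagonal
// land in parallel instead of one at a time in a loop.
__global__ void drawDiagonal(unsigned char* framebuffer, int width, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int x = 1 + i;
    int y = 1 + i;
    framebuffer[y * width + x] = 255;  // light the pixel
}
```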

A CPU can do that simple math, but a GPU can do even that simple math faster. The more complicated the calculations are, the more advantage the GPU has, as the CPU is a jack of all trades, while a GPU is a math wizard.

4

WeirdGamerAidan OP t1_j6lbxyk wrote

Ah, so essentially (probably oversimplified) the CPU gets a bunch of values for objects and the GPU interprets those values into an image, kinda like decoding Morse code?

1

luxmesa t1_j6lcxv3 wrote

Yeah, sort of. Another way of thinking about it is that the CPU is giving the GPU a bunch of legos and instructions because the GPU is faster at building legos than the CPU.

6

Mayor__Defacto t1_j6lr02x wrote

To add to what FenderMoon said, think of being assigned to write out a sentence on a blackboard 50 times. A CPU (that's you) can only write one letter at a time, because you only have one writing hand. You can think of a GPU as having, basically, 50 hands, so it's able to write out all 50 lines at once, as long as they're all doing simple tasks. So the CPU tells the GPU what letter to write next, rather than spending its own time writing out letters.

5

FenderMoon t1_j6lg2w0 wrote

Yea, the CPU is basically giving the GPU commands, but the GPU can take those and execute them far faster than the CPU can.

GPUs are very good at anything that involves tons of parallel calculations. E.g. "take this texture, apply it over this region, and shade it with this shader." A CPU would sit there and calculate all of that one pixel at a time, whereas the GPU has the hardware to load the entire texture and process tons of pixels in parallel.
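
A hedged sketch of that per-pixel work as a CUDA kernel; the names, the uchar4 texel format, and the brightness "shader" are stand-ins for illustration:

```
// Clamp a scaled color channel back into byte range.
__device__ unsigned char scale(unsigned char c, float k)
{
    float v = c * k;
    return (unsigned char)(v > 255.0f ? 255.0f : v);
}

// One thread per pixel: sample the texture and apply a trivial
// brightness "shader" across the whole region at once.
__global__ void shadeRegion(const uchar4* texture, uchar4* target,
                            int width, int height, float brightness)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;   // region bounds check

    uchar4 t = texture[y * width + x];       // fetch the texel
    target[y * width + x] =
        make_uchar4(scale(t.x, brightness), scale(t.y, brightness),
                    scale(t.z, brightness), t.w);
}
```

The CPU version of this is a nested loop over x and y; the GPU version is the loop body alone, run for every pixel simultaneously.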

It's not that the CPU couldn't do these same calculations, but it'd be way slower at it. GPUs are specifically designed to do this sort of thing.

4

echaa t1_j6lckyo wrote

Basically the CPU figures out what math needs to be done and tells the GPU to go do it. GPUs are then designed to be especially good at the types of math that computer graphics use.

2

Thrawn89 t1_j6myw1u wrote

The explanation you are replying to is completely wrong. GPUs haven't been optimized for vector math in about 20 years. They all operate on what's called a SIMD architecture, which is why they can do this work faster.

In other words, they can do the exact same calculations as a CPU, except they run each instruction on like 32 shader instances at the same time. They also have multiple shader cores.

The CUDA core count Nvidia quotes is that 32 multiplied by the number of shader cores, in other words how many parallel ALU calculations the chip can do simultaneously. For example, the 4090 has 16384 CUDA cores, so it can do 512 unique instructions on 32 pieces of data each.

Your CPU can do maybe 8 unique instructions on a single piece of data each.

In other words, GPUs are vastly superior when you need to run the same calculations on many pieces of data. This fits well with graphics where you need to shade millions of pixels per frame, but it also works just as well for say calculating physics on 10000 particles at the same time or simulating a neural network with many neurons.
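
To make the "same calculations on many pieces of data" idea concrete, here's a minimal CUDA sketch of that particle case. The struct and kernel are hypothetical, not from any real physics engine; every thread executes identical instructions, just on its own particle:

```
struct Particle { float3 pos, vel; };

// One thread per particle: the whole batch advances one timestep at once.
__global__ void stepParticles(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    p[i].vel.y -= 9.81f * dt;       // apply gravity
    p[i].pos.x += p[i].vel.x * dt;  // integrate position
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

// Host side: launch one thread per particle, e.g. for 10000 particles:
// stepParticles<<<(10000 + 255) / 256, 256>>>(d_particles, 10000, 0.016f);
```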

CPUs are better at calculations that only need to be done on a single piece of data, since they are clocked higher and have no setup latency.

2

Zironic t1_j6nh61n wrote

>Your CPU can do maybe 8 unique instructions on a single piece of data each.

A modern CPU core can run 3 instructions per cycle on 512 bits of data, making each core equivalent to about 96 basic shaders. Even so, you can see how a 20-core CPU can't keep up with even a low-end GPU in raw parallel throughput.

>CPUs are better at calculations that only need to be done on a single piece of data since they are clocked higher and no latency to setup.

The real benefit isn't the clock rate; if that were the main difference, we wouldn't be using CPUs anymore, because the clocks aren't that far apart.

What CPUs have that GPUs do not is branch prediction and very, very advanced data pipelines and instruction queues, which allow per-core performance a good order of magnitude better than a shader's for anything that involves branches.

1

Thrawn89 t1_j6nkd1y wrote

True, SIMD is absolutely abysmal at branches, since it (usually) needs to execute both the true and false cases for the entire wave. There are optimizations GPUs do, so it's not always terrible, though.
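
A small CUDA illustration of that wave/warp divergence (hypothetical kernel, just to show the mechanism): when lanes in one 32-wide warp disagree on a branch, the hardware runs both paths and masks off the inactive lanes each time.

```
// Even and odd threads in the same warp take different sides of the
// branch, so the warp executes BOTH bodies back to back, with half
// the lanes idle during each one.
__global__ void divergent(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (i % 2 == 0)
        data[i] = sqrtf(data[i]);     // even lanes run, odd lanes wait
    else
        data[i] = data[i] * data[i];  // then odd lanes run, even lanes wait
}
```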

It sounds like you're describing a 512-bit vector instruction set, which is very much specialized for certain tasks such as memcpy and not much else? That's just an example of small-scale SIMD on the CPU.

1

Zironic t1_j6nzase wrote

>It sounds like you're discussing vector processing instruction set with 512 bits which are very much specialized for certain tasks such as memcpy and not much else? That's just an example of a small SIMD on the CPU.

The vector instruction set is primarily for floating-point math, but it also does integer math. It's only specialized for certain tasks insofar as those tasks are SIMD-shaped: it takes advantage of the fact that doing a math operation across an entire 512-bit register is as fast as doing it on a single word.
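
A host-side sketch of that behavior (plain C++ with AVX-512 intrinsics, assuming a CPU with AVX-512F; the function is illustrative): one vector instruction adds 16 floats in the same time a scalar add handles one.

```
#include <immintrin.h>

// Add two float arrays; the main loop processes 16 elements per instruction.
void addArrays(const float* a, const float* b, float* out, int n)
{
    int i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);  // load 16 floats
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));  // 16 adds at once
    }
    for (; i < n; ++i)  // scalar tail for leftover elements
        out[i] = a[i] + b[i];
}
```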

In practice, most programs don't lend themselves to vectorisation, so it's mostly used for physics simulations and the like.

1