
Yancy_Farnesworth t1_j6nladg wrote

> In short, there is no large difference between GPU and CPU besides that the GPU uses what is called a SIMD (single instruction, multiple data) architecture, which is what this analogy was trying to convey.

The GPU is heavily geared towards floating point operations, while the CPU is less so. CPUs used to need a separate FPU chip; transistors eventually got small enough that the FPU could fit on the CPU die. Then the rise of 3D games made dedicated floating point performance critical, which ultimately led to a separate chip that could do absurd numbers of floating point operations in parallel: the GPU.

This floating point performance is why GPUs are a great tool for AI/ML and why Nvidia came to dominate hardware dedicated to AI/ML applications.
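To make "absurd numbers of floating point operations in parallel" concrete, here's a minimal CUDA SAXPY sketch (illustrative, not from the thread): each thread does one scalar multiply-add, and the throughput comes from launching a million of them at once.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread performs one scalar fused multiply-add; the GPU's
// floating point throughput comes from running thousands of these
// threads concurrently.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // ~1M elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // ~4096 blocks of 256 threads
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 5.0
    cudaFree(x); cudaFree(y);
    return 0;
}
```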

4

Thrawn89 t1_j6no21t wrote

GPUs are not better at floating point operations; they are just better at doing them in parallel, per SIMD, the same way any other operation benefits from SIMD.

In fact, floating point support is generally not quite as good as on a CPU. Some GPUs do not natively support double precision, or do not natively support all floating point operations. Denormal behavior and rounding modes also vary across implementations. Many GPUs take shortcuts by not implementing a full FPU internally, converting to fixed point instead.
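One way to see the uneven double precision story on real hardware: the CUDA runtime exposes the device's FP32-to-FP64 throughput ratio as a queryable attribute. A minimal sketch, assuming device 0 exists:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int ratio = 0;
    // How many FP32 ops the device can issue per FP64 op; on consumer
    // parts this is often 32:1 or 64:1, i.e. double precision is there
    // for correctness, not speed.
    cudaDeviceGetAttribute(&ratio,
        cudaDevAttrSingleToDoublePrecisionPerfRatio, /*device=*/0);
    printf("FP32:FP64 throughput ratio = %d:1\n", ratio);
    return 0;
}
```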

−1

BobbyThrowaway6969 t1_j6p2not wrote

Double precision is the black sheep of the family; it was thrown in for convenience. Most GPUs skimp on double precision because what do you care if a vertex is a millionth of a pixel off or a billionth? Graphics has no use for double precision, so why make the chip more expensive to produce?

Compute workloads might need it, but the general public doesn't.
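The "millionth of a pixel" point checks out numerically. A quick host-side sketch (the 1920.0 coordinate is just an illustrative screen width) showing the spacing between adjacent 32-bit floats at a typical vertex coordinate:

```cuda
#include <cstdio>
#include <cmath>

int main() {
    // Gap between adjacent representable floats near x = 1920: about
    // 2^-13 = 0.000122 -- a tiny fraction of a pixel, so FP32 is
    // plenty of precision for rasterization.
    float x = 1920.0f;
    float ulp = nextafterf(x, 2.0f * x) - x;
    printf("float spacing at %.1f = %g\n", x, ulp);
    return 0;
}
```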

3

Thrawn89 t1_j6p76g9 wrote

Agreed, which is why it's wrong to say that GPUs are better at floating point operations than CPUs.

1

BobbyThrowaway6969 t1_j6pcmfu wrote

Depends how you look at it. Their circuitry can handle vector math more efficiently.

2

Thrawn89 t1_j6pdf0b wrote

No, most GPUs haven't had vector instructions for maybe a decade. Modern GPUs use SIMD waves for parallelization with scalar instructions.
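A hedged sketch of what this means in CUDA terms: the kernel below contains no vec4 instructions at all; each thread issues plain scalar float ops, and the SIMD width comes from the warp of 32 threads executing the same instruction stream in lockstep.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// No vector instructions here: each thread executes scalar float ops,
// and parallelism comes from the wave/warp running 32 threads in
// lockstep on the same instruction.
__global__ void scale(float s, float *data) {
    data[threadIdx.x] = s * data[threadIdx.x];  // one scalar multiply per thread
}

int main() {
    float *d;
    cudaMallocManaged(&d, 32 * sizeof(float));
    for (int i = 0; i < 32; ++i) d[i] = (float)i;
    scale<<<1, 32>>>(2.0f, d);  // one warp: 32 scalar lanes
    cudaDeviceSynchronize();
    printf("d[31] = %f\n", d[31]);  // expect 62.0
    cudaFree(d);
    return 0;
}
```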

2