
BroscientistsHateHim t1_j6len76 wrote

A matrix is a bunch of numbers arranged in a rectangle that is X numbers wide and Y numbers tall.

So if X is 10 and Y is 10, you have a 10-by-10 square filled with numbers (random or not, it doesn't matter). A total of 100 numbers fill the matrix.
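A minimal sketch of that 10-by-10 square, using NumPy (my choice of library, not something the comment specifies):

```python
import numpy as np

# A 10-by-10 matrix: 10 numbers wide, 10 numbers tall,
# filled with random values (the values don't matter).
matrix = np.random.rand(10, 10)

print(matrix.shape)  # (10, 10)
print(matrix.size)   # 100 numbers in total
```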

If you tell the CPU you want to add 1 to all of the numbers, it does them one by one: left to right, top to bottom, one at a time. Let's say adding two numbers together takes 1 second; then this takes 100 seconds, one for each number in our square.
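The CPU-style walk can be sketched as an explicit loop; the `steps` counter (my addition, for illustration) counts the sequential additions:

```python
import numpy as np

matrix = np.zeros((10, 10))

# CPU-style: visit each entry left to right, top to bottom,
# performing one addition per step.
steps = 0
for row in range(matrix.shape[0]):
    for col in range(matrix.shape[1]):
        matrix[row, col] += 1
        steps += 1

print(steps)  # 100 sequential additions for a 10x10 matrix
```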

If you instead tell a GPU you want to add 1 to all of the numbers, it adds 1 to all of them simultaneously, and you get your result in 1 second. How can it do that? Well, it has 100 baby CPUs in it, of course!
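In array libraries, the "all at once" version is a single expression. Here it's sketched with NumPy on the CPU; the point is that GPU array libraries (CuPy, for example) accept the same one-liner and spread it across many cores, which is an assumption about tooling rather than anything the comment names:

```python
import numpy as np

matrix = np.zeros((10, 10))

# One expression covers every entry at once. On a GPU array library
# with the same syntax, this single line would be distributed across
# many small cores instead of looping.
matrix = matrix + 1

print(matrix.sum())  # 100.0
```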

So, as others have said, a CPU can do what a GPU can do, just slower. This crude example is accurate in the sense that a GPU is particularly well suited to matrix operations, but otherwise it's a very incomplete illustration.

You might wonder: why doesn't everything go through the GPU if it's so much faster? There are a lot of reasons for this, but the short answer is that the CPU can do anything the baby CPUs in a GPU can, while the opposite is not true.

4

aspheric_cow t1_j6lfvew wrote

Exactly this, but also: GPUs are optimized for FLOATING-POINT matrix calculations, as opposed to integer ones. To oversimplify, floating-point numbers are like scientific notation for numbers.
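The scientific-notation analogy can be made concrete: a float is stored as a significand times a power of two, and Python's standard-library `math.frexp` recovers both parts:

```python
import math

# Floats work like scientific notation in base 2:
# value = mantissa * 2**exponent.
mantissa, exponent = math.frexp(6.0)

print(mantissa, exponent)       # 0.75 3
print(mantissa * 2**exponent)   # 6.0 (reassembled)
```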

3