Comments

BobbyThrowaway6969 t1_j6lg0ft wrote

The CPU is a mathematician that sits in the attic working on a new theory.

The GPU is hundreds of thousands of 2nd graders working on 1+1 math all at the same time.

These days the CPU is more like 8 mathematicians sitting in the attic, but you get the point.

They're both suited for different jobs.

The CPU could update the picture that you see on the display, but that's grunt work.

Edit: I don't mean the cores in a GPU are stupid; it's that their instruction set isn't as complex & versatile as a CPU's, which is what I meant.

663

TheRomanRuler t1_j6m5ezh wrote

That is actually a good visualization, thank you.

120

Dysan27 t1_j6nwk50 wrote

No no. This is a good visualization.

72

JackSartan t1_j6or86g wrote

I knew exactly what that was before clicking on it. Mythbusters and paintball is a good mix. If anyone is confused, that's the video to watch.

21

Icolan t1_j6ou77p wrote

Wow, I don't know how I have not seen that before.

4

Thrawn89 t1_j6mvpr4 wrote

It's a great explanation, but there are a few issues with the metaphor's correctness.

The kids are all working on the exact same step of their individual problem at the same time. The classroom next door is on a different step for their problems. The entire school is the GPU.

Also replace kids with undergrads, and they don't work on 1+1 problems, they work on the exact same kind of problems the CPU does.

To translate: the reason they are undergrads and not mathematicians is that GPUs are clocked lower than CPUs, so they don't do the individual work as fast. However, the gap between a mathematician and kids was a few too many orders of magnitude.

Also, they do work on problems of the same complexity. GPUs have been heterogeneous compute platforms rather than strictly graphics processors since the programmable shader model was introduced, making them Turing complete. Additionally, the GPU's ALU and shader model are as complex as a C program these days.

The classroom analogy is what DX calls a wave and each undergrad is a lane.

In short, there is no large difference between a GPU and a CPU, besides the fact that the GPU uses what is called a SIMD (single instruction, multiple data) architecture, which is what this analogy was trying to convey.

Programs, whether CPU machine code or GPU machine code, are basically a list of steps to perform. A CPU runs the program by going through each step and running it on a single instance of state. A GPU, however, runs the same step on multiple instances of state at the same time before moving on to the next step. An instance of state could be a pixel, a vertex, or just a generic compute instance.
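
To make the SIMD idea concrete, here's a rough Python/NumPy sketch (the wave width of 32 and the two "steps" are just illustrative assumptions, not how any particular GPU is laid out):

```python
import numpy as np

# 1,000 "instances of state" (think pixels or vertices), all needing the same steps.
data = np.arange(1000, dtype=np.float32)

# CPU-style: one instance at a time, step by step.
cpu_result = np.empty_like(data)
for i, x in enumerate(data):
    y = x * 2.0          # step 1
    y = y + 1.0          # step 2
    cpu_result[i] = y

# SIMD/wave-style: every lane in a 32-wide wave runs the same step together.
WAVE = 32  # assumed wave width; real hardware uses 32, 64, etc.
simd_result = np.empty_like(data)
for start in range(0, len(data), WAVE):
    wave = data[start:start + WAVE]
    wave = wave * 2.0    # step 1 applied to all lanes at once
    wave = wave + 1.0    # step 2 applied to all lanes at once
    simd_result[start:start + WAVE] = wave

assert np.allclose(cpu_result, simd_result)  # same answers, different execution shape
```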

27

espomatte t1_j6n3owq wrote

Sir, this is an ELI5

51

Thrawn89 t1_j6n6gt9 wrote

Sir, read rule 4.

−2

ap0r t1_j6nbo15 wrote

As a person who has over ten years of experience building and repairing computers, I understood what you meant, but I also can see how a layperson would not understand anything you wrote.

27

Yancy_Farnesworth t1_j6nladg wrote

> In short there is no large difference between GPU and CPU besides the GPU uses what is called SIMD (single instruction, multiple data) architecture which is what this analogy was trying to convey.

The GPU is heavily geared towards floating point operations, while the CPU is less so. CPUs used to have to use a separate FPU chip. Transistors got small enough where they could fit the FPU on the CPU. Then the need for dedicated floating point performance skyrocketed with the rise of 3D games, which ultimately required a separate dedicated chip that could do absurd numbers of floating point operations in parallel, resulting in the GPU.

This floating point performance is why GPUs are a great tool for AI/ML and why Nvidia came to dominate hardware dedicated to AI/ML applications.

4

Thrawn89 t1_j6no21t wrote

GPUs are not better at floating point operations; they are just better at doing them in parallel, per SIMD, just like any other operation benefiting from SIMD.

In fact, floating point support is generally not quite as good as on a CPU. Some GPUs do not even natively support double precision, or don't natively support all floating point operations. Then there's denorm behavior and rounding modes, which vary across implementations. Many GPUs take shortcuts by not implementing a full FPU internally and convert to fixed point instead.

−1

BobbyThrowaway6969 t1_j6p2not wrote

Double precision is the black sheep of the family. It was just thrown in for convenience. GPUs don't have double precision because what do you care if a vertex is a millionth of a pixel off or a billionth? Graphics has no use for double precision so why make the chip more expensive to produce?

Compute programming might need it but not for the general public.

3

Thrawn89 t1_j6p76g9 wrote

Agreed, which is why it's wrong to say that GPUs are better at floating point operations than CPU.

1

BobbyThrowaway6969 t1_j6pcmfu wrote

Depends how you look at it. Their circuitry can handle vector math more efficiently

2

Thrawn89 t1_j6pdf0b wrote

No, most GPUs haven't had vector instructions for maybe a decade. Modern GPUs use SIMD waves for parallelization with scalar instructions.

2

BobbyThrowaway6969 t1_j6p131z wrote

I left "1+1 math problems at the same time" pretty vague on purpose. Math in my analogy isn't referring to processor arithmetic, it refers to "stuff" a processor can do. They don't all have to be on the same task. Some can handle vertices while others handle pixels.

>they work on the exact same kind of problems the CPU does.

They can do arithmetic the same way, sure, but you wouldn't exactly expect to be able to communicate with a mouse & keyboard using one of the cores in a GPU.

The instruction set for a GPU (based around arithmetic) is definitely nothing like the instruction set of a CPU lol. That's what I meant by 2nd grader vs mathematician.

3

Thrawn89 t1_j6p8b42 wrote

Each wave can only work on the same task. You can't process vertices and pixels in the same wave (classroom). Other cores (classrooms) can be working on other tasks though which is what I said above.

1

Ancient-Ad6958 t1_j6mkl09 wrote

this made me chuckle. someone please draw this

10

AndarianDequer t1_j6mrm0z wrote

Yes, and please draw the rest of the characters around the house that do everything else.

2

Horndog2015 t1_j6n2tbo wrote

I can see the GPU walking around pushing the characters. Some of the fans waving a giant leaf at parts to cool them. The heat sink is talking to the CPU, saying to put the heat in 'em. The keyboard is delegating duties to the CPU, and the GPU is saying "alright then." The RAM is auditing everything, and the HDD is like "Wow! This is an all-you-can-eat buffet!" The OS is just saying "I'll throw some papers on your desk. Figure it out." The MoBo is preaching to everyone about how it brings everyone together. The disk player just says "Whatever! I'm calling in today!"

3

Ancient-Ad6958 t1_j6n6ong wrote

And make it cartoonish enough so we can use it to teach kids how computers work

2

Agifem t1_j6mqhze wrote

>but that's grunt work.

Excellent ELI5 explanation, through and through to the last word.

7

BigDisk t1_j6mu8to wrote

You just made me try to work out whether hundreds of thousands of 2nd graders really are more expensive than 8 mathematicians.

I still could not come up with an answer.

EDIT: I'm getting downvoted and "um, ackshually"'d because of a dumb joke. Never change, Reddit.

4

king_27 t1_j6mvk9n wrote

I honestly have no idea what a good salary is for a mathematician, but according to Google the median salary is around $100k p/a in the US.

Let's say we're paying the 2nd graders in cookies and juice boxes, even if you're only spending $1 per child, that's still $100k per day as a minimum. The math checks out.

6

RhynoD t1_j6nkzsz wrote

  1. It may be thousands of cores in the GPU, but probably not hundreds of thousands.

  2. The GPU is almost its own complete computer system that comes with everything else needed to run, where the CPU is only the CPU and maybe an OK heat sink and fan. The GPU unit comes with its own cooling solution, its own RAM, essentially its own motherboard to control the chips and interface with the actual motherboard, etc.

So to take the analogy way too far: the CPU is just the PhD mathematician, but you still have to pay for his office and air conditioning and all the paper he needs to do the work and the whole campus full of TAs and whatnot.

The GPU is like paying for the entire elementary school complete with teachers, cafeteria, and supplies, and you drop that entire school onto your campus next to the PhD's office.

3

rob_allshouse t1_j6o83cf wrote

These are incorrect.

The GPU and CPU are similar. A “graphics card” has all of these things. A GPU in an SoC would have similar limitations to a CPU.

But consumers don’t buy GPUs, companies like MSI do and integrate them into a graphics card. They do buy CPUs.

2

LiamTheHuman t1_j6o9di5 wrote

GPU is often used to refer to the graphics card as a whole

5

Zombieattackr t1_j6ohnr9 wrote

Yep, no reason a CPU couldn't output video, and in fact some do! And I'm not even referring to integrated graphics in APUs, I'm talking about Arduinos and stuff. Hook them up to little LED matrices, 32x128 OLED displays, or whatever else you desire, and boom, you have a CPU giving a display output. These are just very low resolution, low refresh rate, and often black and white. You could obviously do better with a real CPU, but it would still be really bad.

1

SoulWager t1_j6lc1v9 wrote

There's no calculation a GPU does that a CPU cannot, and in the very old days the CPU just wrote to a particular location in memory when it wanted something to show up on screen. The reason you need a GPU is that displays have millions of pixels which need to get updated tens to hundreds of times per second, and GPUs are optimized to do a whole lot of the operations needed to render images all at the same time.

It's sort of like asking why we need container ships when aircraft exist that can carry cargo, and the answer is that the container ship can move a whole lot more cargo at once, even if it has some restrictions on where it can take that cargo.
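
As a toy illustration of the "CPU just wrote to a particular location in memory" era mentioned above, here's a minimal software-framebuffer sketch in Python (the resolution and RGB layout are assumptions made for the example):

```python
import numpy as np

WIDTH, HEIGHT = 320, 200                                    # assumed old-school resolution
framebuffer = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)  # one RGB triple per pixel

def put_pixel(x, y, color):
    """The CPU 'displays' something by writing bytes at a particular memory location."""
    framebuffer[y, x] = color

# Fill the top half of the screen with blue, one pixel at a time (pure CPU work).
for y in range(HEIGHT // 2):
    for x in range(WIDTH):
        put_pixel(x, y, (0, 0, 255))

# A display controller would then scan this buffer out to the monitor tens of times
# per second; a GPU exists to do these per-pixel writes (and the math behind them)
# massively in parallel instead.
```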

176

TheLuteceSibling t1_j6la8wn wrote

The CPU is really good at task-switching, doing a bunch of things basically all at once.

The GPU is designed to configure itself to ONE task and to do the same thing bazillions of times per second.

It's like comparing a tractor to a sports car. They're fundamentally different machines.

108

Easy_Reference6088 t1_j6lappn wrote

To add onto this: the CPU could be the GPU as well. It would just be painfully slow. It's called "software rendering."

Edit: The CPU doing software rendering would not technically be a GPU in name, but it's acting as a really slow one.

58

WeirdGamerAidan OP t1_j6lb06f wrote

Yes, it's annoying when that happens. Sometimes my computer will forget it has integrated graphics when I load a game; then it renders with the CPU and gives me like 1 fps.

12

kanavi36 t1_j6ligt3 wrote

Small correction, integrated graphics is usually the name for the CPU doing the graphics workload. The GPU would be dedicated or discrete graphics.

−12

rob_allshouse t1_j6lkimz wrote

No. There is a GPU in “integrated graphics” provided by Intel or AMD. It is comparatively weak to a discrete GPU, and often uses shared memory instead of dedicated memory, but it’s definitely a GPU.

14

Mayor__Defacto t1_j6lpxre wrote

Integrated graphics is not the CPU doing the graphics workload, it’s the name for a CPU designed with a GPU component and shared memory between the CPU and GPU. It’s a sort of SOC.

10

WeirdGamerAidan OP t1_j6ljzn3 wrote

In that case why does windows recognize them separately in task manager (with separate status windows and everything)?

5

ThatGenericName2 t1_j6lmf86 wrote

It doesn't, the person that replied to you is incorrect.

Some CPUs have a low-power GPU integrated into them, hence "integrated GPU", and not all CPUs have one.

These integrated GPUs are very weak and are meant for the bare minimum of graphics processing: enough to draw the interface of a Word document or other programs, and that's about it. Attempting to do anything complicated, such as 3D rendering, even of really simple scenes, will start to strain them.

Despite this, these integrated GPUs are still much better at graphics processing than software renderers that actually use the CPU.

27

salvodan t1_j6m0a82 wrote

And before integrated GPUs, or even discrete low-powered GPUs, the user interface was rendered using the CPU itself (software rendering). This was a long time ago, back in the old days of text interfaces, CGA, and simple 2D sprites (1980s).

6

Bensemus t1_j6nrvtz wrote

This is more true of Intel iGPUs. AMD APUs were actually designed for light gaming as their integrated GPUs actually had some power.

1

AdiSoldier245 t1_j6lp924 wrote

An integrated GPU is a separate GPU inside the processor; it's not using the CPU to compute graphics. If you look at internal layouts of a processor with an iGPU, you'll see the GPU as a separate block. So displaying doesn't slow down the CPU part of the processor (that much; there could be bandwidth issues).

A discrete GPU is what goes into a PCIe slot and is a GPU outside of the processor. This is what most games require, as the iGPU is mostly only enough for displaying and maybe processing video.

Software rendering is using the CPU cores themselves to do graphics tasks.

5

kanavi36 t1_j6lnygb wrote

After posting my reply I considered I might have been incorrect in what I assumed from your comment, so I apologise. I assumed you were talking about when a game doesn't select the dedicated graphics and defaults to the integrated graphics on the CPU, which is quite common and would lead to low frame rates. I've never actually heard of a CPU skipping the integrated graphics and directly rendering a game, which sounds interesting. What does it appear as in your task manager?

2

WeirdGamerAidan OP t1_j6miuo3 wrote

When it happens, as soon as I load up the game cpu usage skyrockets to 100 and gpu stays low, and the game runs at like 1 fps. Happens the most with Roblox and Superliminal. I usually have to reload those games several times before it works properly.

1

WeirdGamerAidan OP t1_j6miy85 wrote

I don't recall if in task manager it shows it's using the gpu

1

kanavi36 t1_j6o4db3 wrote

Interesting, and also quite strange. What CPU do you have?

2

WeirdGamerAidan OP t1_j6oevbx wrote

Uuh I think it's an i7 but I'm not at home rn so I can't check. I'll try to find the laptop online and see what the cpu is. If it helps task manager displays the integrated graphics as "intel hd graphics 620"

1

WeirdGamerAidan OP t1_j6ogx1l wrote

Online search was partly successful. It is either an i7-7500U, an i5-8250U, or an i5-7200U. IIRC it's the i5-8250U.

1

TheSkiGeek t1_j6p31rr wrote

“Integrated graphics” or an “integrated GPU” these days almost always refers to a small(er)/weak(er) GPU that is included in the CPU itself.

From the perspective of the operating system, a ‘discrete’ GPU and the ‘integrated’ GPU are both rendering devices that it can access. In a laptop with discrete graphics, both of these are usually able to output to the built in display, so a game or other application can choose to render to either one. That’s usually where you see things getting confused, as the BIOS or OS might be configured with the integrated graphics chip as the first/default rendering device.

It’s also possible to do pure software rendering using only the CPU. Nobody actually wants to do this these days for real time applications, since it is painfully slow. But it is an option.

1

Sneak-Scope t1_j6lrgbb wrote

It's been a minute, but is this just incorrect? The CPU is much worse at task switching than the GPU.

The CPU is meant to be a generalist and so is bereft of the purpose-built hardware to excel at any one thing, whereas the GPU is built to slam numbers together in a disgustingly parallel fashion.

I have been in airports for twenty hours now so I'm sorry if that's wrong!

10

ExtremelyGamer1 t1_j6mh9z7 wrote

Yes you’re right the GPU is better at task switching which is why it is good at parallel tasks. It can switch between threads so that it never has to wait too long on it to finish.

5

psycotica0 t1_j6mmonh wrote

I think it depends on what they meant by task switching. I think they meant "do a bit of game, then do a bit of web browser, then read some files, then back to game".

The GPU is good at doing the same "task", but a billion times, often with a huge amount of parallelism. So it's obviously good at switching from doing that task on one thing to doing that task on the next thing, but in the end it's still the same task.

4

Sneak-Scope t1_j6nptjl wrote

I guess so, though I think it's being used wrong in that context. In the context of CPU/GPU, 'task switching' describes the hardware's ability to park a thread and start a different one executing.

In the case of the CPU, it has to dump cache to main memory, load new information and continue. Where the GPU, usually, is just like 'lol pause play!'

2

TheSkiGeek t1_j6p44cp wrote

That’s because GPUs don’t really cache anything, they’re running a program that streams data from one part of VRAM, transforms it, and writes it back to another part of VRAM.

If the OS wants to change what the CPU is doing it just jumps it to another block of code in RAM. Programs can spin up their own threads in real time. With a GPU there’s a whole process that has to be gone through to load or unload shaders, map and allocate VRAM, etc. — it’s much less flexible, and the latency of swapping from one kind of calculation to another is much higher.

2

spectacletourette t1_j6luakc wrote

Just to add…

> and to do the same thing bazillions of times per second.

That’s why GPUs are also used for cryptographic calculations, even though that’s not the task they were originally intended for.

9

WeirdGamerAidan OP t1_j6laroy wrote

Yes, but wouldn't the GPU not know what to display if the CPU didn't tell it what to display? Or is it more like the CPU tells it "throw this object here and this object here, figure out how to put it on a screen" (albeit much more complex and many more objects)?

3

lygerzero0zero t1_j6lhvyy wrote

The CPU hands the recipe to the GPU, and the GPU actually cooks it. Knowing the recipe is not the time-consuming part, it’s the actual cooking.

33

neilk t1_j6lf4hb wrote

The CPU is like "here are all the 30 thousand triangles that represent this thing, and here is the angle from which I would like to view it. Please do the complex mathematical transformations that a) rotate all 30 thousand triangles in space b) project all 30 thousand triangles from 3D space into 2D triangles on a screen"

There's also stuff to figure out what parts of the model are hidden from view, reflections, textures, shadows, etc, but you get it.

24

TheLuteceSibling t1_j6lbu0x wrote

There are different techniques, but you can think of it like the CPU figuring out where all the objects are and then handing it to the GPU.

The GPU applies color, texture, shadow, and everything else. Putting chess pieces on a board is easy. Making them look like marble is much more intense.

9

RSA0 t1_j6lt8mn wrote

The programmer decides which tasks will be performed on the CPU and which on the GPU. The GPU is a full-capability processor that runs its own program, so in theory there is no limit on what it can do; the only limit is time. The programmer must write a program for the GPU as well as for the CPU. Programs for the GPU are called "shaders".

If we talk about games, the GPU usually does at least 3 jobs: converting all 3D models to screen-relative coordinates, sampling colors from texture images, and calculating light and shadow. However, more tasks get moved to the GPU over time: modern games use it for simple physics simulation (hair, clothes, fire, smoke, rain, grass) and for post-processing (color correction, blur).
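
For a very rough idea of what one of those per-pixel "shader" jobs computes, here's a toy diffuse-lighting calculation in Python (the normal, light direction, and color are invented for the example; a real shader runs logic like this for millions of pixels per frame on the GPU):

```python
import numpy as np

def shade_pixel(surface_normal, light_dir, base_color):
    """Toy diffuse shading: brightness depends on the angle between surface and light."""
    n = surface_normal / np.linalg.norm(surface_normal)
    l = light_dir / np.linalg.norm(light_dir)
    brightness = max(float(np.dot(n, l)), 0.0)   # facing away from the light -> black
    return tuple(int(c * brightness) for c in base_color)

# One pixel of a surface tilted toward the light, with a reddish texture color.
print(shade_pixel(np.array([0.0, 1.0, 0.0]),
                  np.array([0.5, 1.0, 0.2]),
                  (200, 80, 60)))
```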

GPU can also be used in tasks unrelated to graphics. Many scientific physics simulators and machine learning tools have an option to run on GPU.

4

Xanjis t1_j6lf95a wrote

It's the second. The GPU takes all the information for the scene (the location/size/rotation of objects in the scene) and then calculates what color each pixel of the screen should display. That calculation needs to be done millions of times per second, but it's a very simple calculation, so the GPU is suited for the task because it has a huge number of weak cores, whereas a CPU has a small number of incredibly powerful cores.

3

Sevenstrangemelons t1_j6lcdrq wrote

Generally yes, but the GPU is the one actually executing the instructions containing the calculations that would otherwise be really slow on the CPU.

2

LSF604 t1_j6lkljr wrote

GPUs are better at parallel processing than CPUs, but are much more special-case.

3

led76 t1_j6lxf6p wrote

It might be eli5 to say that the GPU is really good at multiplying grids (matrices) of numbers together. Lots of them. At the same time. And grids of numbers are great at representing things in 3D, like in games.

A CPU can do it, but when there are bajillions of them to multiply together, best to go with the thing that can do hundreds at a time instead of a handful.
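
For a feel of what "multiplying grids of numbers" looks like in practice, here's a NumPy sketch that rotates a batch of 3D points with a single matrix multiply (the angle and points are made up; a real game does this sort of thing for millions of vertices per frame):

```python
import numpy as np

# A grid of numbers describing geometry: N points in 3D space, one row per point.
points = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])

# Another grid: a rotation of 90 degrees around the Z axis.
theta = np.radians(90)
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])

# One matrix multiply transforms every point at once -- exactly the kind of
# operation a GPU is built to churn through in bulk.
rotated = points @ rotation.T
print(rotated)
```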

2

DMCer t1_j6m3wzz wrote

If rendering were squats: The CPU is the brain, the GPU = the leg muscles.

1

neilk t1_j6lcux6 wrote

Think of it this way.

The CPU is like a chef in a restaurant. It sees an order coming in for a steak and potatoes and salad. It gets to work cooking those things. It starts the steak in a pan. It has to watch the steak carefully, and flip it at the right time. The potatoes have to be partially boiled in a pot, then finished in the same pan as the steak.

Meanwhile, the CPU delegates the salad to the GPU. The GPU is a guy who operates an entire table full of salad chopping machines. He can only do one thing: chop vegetables. But he can stuff carrots, lettuce, cucumbers, and everything else, into all the machines at once, press the button, and watch it spit out perfect results, far faster than a chef could do.

Back to the programming world.

The CPU excels at processing the main logic of a computer program. The result of one computation will be important for the next part, so it can only do so many things at once.

The GPU excels at getting a ridiculous amount of data and then doing the same processing on ALL of it at the same time. It is particularly good at the kind of math that arranges thousands of little triangles in just the right way to look like a 3D object.

28

Gigantic_Idiot t1_j6ldrul wrote

Another analogy I've seen: Mythbusters explained the difference by making a painting with paintball guns. A CPU is like shooting the same gun 1000 times, but a GPU is like shooting 1000 guns all at once.

19

HappyGick t1_j6oszfu wrote

Oh my god, that's actually a genius analogy, it's as close to real life as you can get without details

4

iamamuttonhead t1_j6latp4 wrote

Computers don't "need" GPUs. It's just that if you have the CPU doing all of the processing for images then there is a whole lot less CPU "time" available to do all the general-purpose stuff a computer does and everything would be slower - including the graphics. GPUs are designed to mathematical processing very quickly and can do graphics processing while the CPU is doing other general-purpose stuff. There are lots of chips on a motherboard doing special purpose stuff so that the CPU doesn't have to do it (that's why phones now have SoC - they put a bunch of special purpose shit on the same die as the CPU).

26

luxmesa t1_j6lba7a wrote

If we’re talking about a 3D game, the information that the CPU passes to the GPU is stuff like the shape of the objects in a scene, what color or what texture that object has and where they are located. The GPU will turn that into a picture that your monitor can display. The way you go from a bunch of shapes and colors to a picture involves a matrix multiplication, which is something that a GPU can do a lot faster than a CPU.

6

Iz-kan-reddit t1_j6m4buu wrote

To dumb it down some more, the CPU tells the GPU to draw a 45-degree line from A (pixel 1,1) to B (pixel 1000,1000).

The GPU puts a pixel at A, then adds 1 to each coordinate and puts a pixel there (at point 2,2). It repeats this 999 times until it gets to B.

In this case, the math is really simple. X+1, Y+1. Rinse and repeat.

A CPU can do that simple math, but a GPU can do even that simple math faster. The more complicated the calculations are, the more advantage the GPU has, as the CPU is a jack of all trades, while a GPU is a math wizard.
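
A literal Python version of that "X+1, Y+1, rinse and repeat" loop might look like this (purely illustrative; real GPUs rasterize lines with more general algorithms such as Bresenham's):

```python
def draw_diagonal(start=(1, 1), end=(1000, 1000)):
    """Plot a 45-degree line by adding 1 to each coordinate per step."""
    x, y = start
    pixels = [(x, y)]                  # the pixel at A
    while (x, y) != end:
        x, y = x + 1, y + 1            # X+1, Y+1, rinse and repeat
        pixels.append((x, y))
    return pixels

line = draw_diagonal()
print(len(line))   # 1000 pixels from A to B
```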

4

WeirdGamerAidan OP t1_j6lbxyk wrote

Ah, so essentially (probably oversimplified) the cpu gets a bunch of values for objects and the gpu interprets those values into an image, kinda like decoding Morse code?

1

luxmesa t1_j6lcxv3 wrote

Yeah, sort of. Another way of thinking about it is that the CPU is giving the GPU a bunch of legos and instructions because the GPU is faster at building legos than the CPU.

6

Mayor__Defacto t1_j6lr02x wrote

To add to what FenderMoon said, think of being assigned to write out a sentence on a blackboard 50 times. A CPU, you, can only write one letter at a time, because you only have one writing hand. You can think of a GPU as having basically, 50 hands, so it’s able to write out all 50 lines at once, as long as they’re all doing simple tasks. So the CPU instead tells the GPU what letter to write next, rather than spending its time writing out letters.

5

FenderMoon t1_j6lg2w0 wrote

Yea, the CPU is basically giving the GPU commands, but the GPU can take those and execute them far faster than the CPU can.

GPUs are very good at things that involve tons of parallel processing calculations. E.g. "Take this texture and apply it over this region, and shade it with this shader." CPUs would sit there and just calculate all of that out one pixel at a time, whereas the GPU has the hardware to look at the entire texture, load it up, and do tons of pixels in parallel.

It's not that the CPU couldn't do these same calculations, but it'd be way slower at it. GPUs are specifically designed to do this sort of thing.

4

echaa t1_j6lckyo wrote

Basically the CPU figures out what math needs to be done and tells the GPU to go do it. GPUs are then designed to be especially good at the types of math that computer graphics use.

2

Thrawn89 t1_j6myw1u wrote

The explanation you are replying to is completely wrong. GPUs haven't been optimized for vector math since like 20 years ago. They all operate on what's called a SIMD architecture, which is why they can do this work faster.

In other words, they can do the exact same calculations as a CPU, except they run each instruction on like 32 shader instances at the same time. They also have multiple shader cores.

The Nvidia CUDA core count they give is this: 32 × the number of shader cores. In other words, how many parallel ALU calculations they can do simultaneously. For example, the 4090 has 16384 CUDA cores, so it can do 512 unique instructions on 32 pieces of data each.

Your CPU can do maybe 8 unique instructions on a single piece of data each.

In other words, GPUs are vastly superior when you need to run the same calculations on many pieces of data. This fits well with graphics where you need to shade millions of pixels per frame, but it also works just as well for say calculating physics on 10000 particles at the same time or simulating a neural network with many neurons.

CPUs are better at calculations that only need to be done on a single piece of data, since they are clocked higher and there's no setup latency.

2

Zironic t1_j6nh61n wrote

>You CPU can do maybe 8 unique instructions on a single piece of data each.

A modern CPU core can run 3 instructions per cycle on 512 bits of data, making each core equivalent to about 96 basic shaders. Even so you can see how even a 20 core CPU can't keep up with even a low end GPU in raw parallel throughput.

>CPUs are better at calculations that only need to be done on a single piece of data since they are clocked higher and no latency to setup.

The real benefit isn't the clock rate; if that were the main difference we wouldn't be using CPUs anymore, because they're not that far apart.

What CPUs have which GPUs do not is branch prediction and very, very advanced data pipelines and instruction queues, which allow per-core performance a good order of magnitude better than a shader for anything that involves branches.

1

Thrawn89 t1_j6nkd1y wrote

True, SIMD is absolutely abysmal at branches since it needs to take both true and false cases for the entire wave (usually). There are optimizations that GPUs do so it's not always terrible though.

It sounds like you're discussing a vector processing instruction set with 512 bits, which is very much specialized for certain tasks such as memcpy and not much else? That's just an example of a small SIMD on the CPU.

1

Zironic t1_j6nzase wrote

>It sounds like you're discussing vector processing instruction set with 512 bits which are very much specialized for certain tasks such as memcpy and not much else? That's just an example of a small SIMD on the CPU.

The vector instruction set is primarily for floating point math but also does integer math. It's only specialized for certain tasks insofar as those tasks are SIMD; it takes advantage of the fact that doing a math operation across an entire register's worth of data is as fast as doing it on just a single word.

In practice most programs don't lend themselves to vectorisation so it's mostly used for physics simulations and the like.

1

zachtheperson t1_j6lngib wrote

The CPU is really smart, but each "core" can only do one thing at once. 4 cores means you can process 4 things at the same time.

A GPU has thousands of cores, but each core is really dumb (basic math, and that's about it), and is actually slower than a CPU core. Having thousands of them though means that certain operations which can be split up into thousands of simple math calculations can be done much faster than on a CPU, for example doing millions of calculations to calculate every pixel on your screen.

It's like having 4 college professors and 1000 second graders. If you need calculus done, you give it to the professors, but if you need a million simple addition problems done you give it to the army of second graders and even though each one does it slower than a professor, doing it 1000 at a time is faster in the long run.

6

BroscientistsHateHim t1_j6len76 wrote

A matrix is a bunch of numbers arranged in a rectangle that is X numbers wide and Y numbers long

So if X is 10 and Y is 10, you have a 10 by 10 square filled with random (doesn't matter) numbers. A total of 100 numbers fill the matrix.

If you tell the CPU you want to add +1 to all of the numbers, it does them one by one: left to right, top to bottom. Let's say adding two numbers together takes 1 second, so this takes 100 seconds, one for each number in our square.

If you instead tell a GPU you want to add +1 to all of the numbers, it adds +1 to all the numbers simultaneously and you get your result in 1 second. How can it do that? Well, it has 100 baby-CPUs in it, of course!
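
In code, the contrast described above could be sketched roughly like this (NumPy's vectorized add standing in for the "100 baby-CPUs"):

```python
import numpy as np

matrix = np.random.rand(10, 10)     # the 10-by-10 square of numbers

# "CPU-style": visit each of the 100 numbers one at a time.
cpu_result = matrix.copy()
for row in range(10):
    for col in range(10):
        cpu_result[row, col] += 1

# "GPU-style": conceptually, add 1 to all 100 numbers at once.
gpu_result = matrix + 1

assert np.allclose(cpu_result, gpu_result)   # identical results, very different speed at scale
```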

So as others have said a CPU can do what a GPU can do, just slower. This crude example is accurate in the sense that a GPU is particularly well-suited for matrix operations... But otherwise it's a very incomplete illustration.

You might wonder - why doesn't everything go through a GPU if it is so much faster. There are a lot of reasons for this but the short answer is the CPU can do anything the baby-CPU/GPU can, but the opposite is not true.

4

aspheric_cow t1_j6lfvew wrote

This exactly, but also, GPUs are optimized for FLOATING POINT matrix calculations, as opposed to integers. To over-simplify, floating point numbers are like scientific notation for numbers.

3

FellowConspirator t1_j6lmx2z wrote

A computer doesn’t need a GPU.

What a GPU is good at is performing the same task on a bunch of pieces of data at the same time. You want to add 3.4 to a million numbers? The GPU will do it much faster than a CPU can. On the other hand, it can’t do a series of complex things as well as a CPU, or move stuff in and out of the computer’s memory or from storage. You can use the GPU’s special abilities for all sorts of things, but calculations involving 3D objects and geometry is a big one — it’s super useful in computer graphics (why it’s called a Graphics Processing Unit) and games. If you want stunning graphics for games, the GPU is going to be the best at doing that for you.

The CPU talks to a GPU using a piece of software called a “driver”. It uses that to hand data to the GPU, like 3D shapes and textures, and then it sends commands like “turn the view 5 degrees”, “move object 1 left 10 units”, and stuff like that. The GPU performs the necessary calculations and makes the picture available to send to the screen.

It’s also possible to program the GPU to solve math problems that involve doing the same thing to a lot of pieces of data at the same time.

3

cataraqui t1_j6luhhq wrote

Many many years ago, there were only CPUs, and no GPUs.

Take the ancient Atari 2600 games console as an example. It did not have a GPU. Instead, the CPU would have to make sure that the screen is drawn, at exactly the right moment.

When the TV was ready to receive the video signal from the games console, the CPU would have to stop processing the game so that it could generate and start the video signal that would be drawn on the screen. Then, the CPU would have to keep doing this for the entire screen frame's worth of information. Only when the video signal got to the bottom opposite corner of the screen could the CPU actually do any game mechanics updates.

This meant that the CPU of the Atari 2600 could, from memory, only spend about 30% of its power doing game processing, with the remaining 70% entirely dedicated to video updates, as the CPU would literally race the electron beam in the TV.

So later on, newer generations of computers and game consoles started having dedicated circuitry to handle the video processing. They started out as microprocessors in their own right, eventually evolving into the massively parallel processing behemoths they are today.

3

squigs t1_j6lv93k wrote

They don't need a GPU. Until the 1990s, computers didn't have one at all, unless you count as a GPU a fairly simple device that reads from a chunk of RAM and interprets it as video data. Early 3D games like Quake ran perfectly fine on these systems and did all the work on the CPU.

What the CPU sends is a lot of textures, and a bunch of triangles, plus information on how to apply the textures to the triangles.

The CPU could do this but the GPU is a lot faster at certain tasks. Drawing triangles being one such task. Twisting the textures around being another.

Early GPUs just did the texturing. That's the most CPU intensive task.

3

Affectionate_Hat_585 t1_j6lec1r wrote

One explanation I like is comparing the CPU with Superman and the GPU with 1000 normal people. The CPU is powerful and can perform lots of instructions, just like Superman can lift heavy things easily. The GPU can perform simple calculations in parallel, just like 1000 children who are taught to do math calculations can outperform even Superman (or the CPU) if you can divide up a task. The pixels need individual calculation. The CPU is slow at this because it is a single worker, or its cores can be counted on your hands. The GPU, on the other hand, has a lot of small micro-CPUs with a high core count. A 1050 Ti has about 768 CUDA cores.

2

DragonFireCK t1_j6lfyqb wrote

The key difference is how the two processors function. A GPU is designed to do the same calculation lots of times at once, though with differing values, while a CPU is designed to do lots of different calculations quickly.

A simple way to think about this logic is that a single object on the screen in a game will be on multiple pixels of the screen at once, and each of those pixels will generally need to do the exact same set of calculations with just different input values (think like a*b+c with differing values for a, b, and c). The actual rendering process does the same idea at multiple levels, where you are typically going to position and rotate the points (vertices) of each object in the same way. It also turns out that this same style of calculation is useful for a lot of other stuff: physics calculations*, large math problems*, and artificial intelligence*, to name a few.

However for general program logic you aren't repeating the same calculations over and over with just different data, but instead need to vary the calculations constantly based on what the user is trying to do. This logic often takes the form of "if X do Y else do Z".

Now, modern CPUs will have some hardware designed to function like a GPU, even if you discount any embedded GPU. Using this is very good if you just need to do a small amount of that bulk processing, such that the cost of asking the GPU to do it and receiving the result would be too expensive; however, it's nowhere near as fast as the full capabilities of a GPU.

Beyond those design differences which are shared between dedicated and embedded GPUs, a dedicated GPU has the benefit of having its own memory (RAM) and memory bus (the link between the processor and memory). This means both the CPU and GPU can access memory without stepping on each other and slowing each other down. Many uses of a GPU can see massive benefits from this, especially games using what is known as "deferred rendering" which requires a ton of memory.

As a note, there is no reason you couldn't just do everything with one side, and, in fact, older games (e.g. Doom) did everything on the CPU. In modern computers, both the CPU and GPU are what is known as Turing complete, which means they can theoretically perform every possible calculation. It's just that each is optimized to perform certain types of calculations, at the expense of other kinds.

* As a note, artificial intelligence heavily relies on linear algebra, as does computer rendering. Many other math problems can be described as such, converting the problem into a set of matrix operations, which is specifically the specialization of GPUs.

2

lappyg55v t1_j6li09b wrote

Most modern CPUs can do exactly what you are stating, as they have graphical processing capability right on the chip itself. This is enough for most office or school computers, as well as low-powered laptops. Generally, GPUs are needed for more difficult tasks that are beyond what a CPU can handle. For example, a very detailed game or computer graphical design will almost always require a separate GPU in the PC. However, there are some CPU models where the "graphics chip" is missing, or is present but disabled by the manufacturer; these are generally cheaper models of a CPU.

With that being said, if someone is building a budget PC for very light gaming, they can just get a modern CPU capable of onboard graphics for low-end performance.

Although I feel like your question may be as to why a CPU itself would need *any form* of a graphics unit, even if it is "on the chip" as with many modern CPUs. The CPU, strictly speaking, does computations in and of itself, at a very fast rate. However, it does not have the ability to do the output signal conversions needed for the standards that make the PC capable of connecting to HDMI, VGA, DisplayPort, etc. It is the same notion that necessitates having system RAM, a hard disk, and other peripherals attached to do what you want to do.

2

Ts_kids t1_j6lnakt wrote

A CPU does lots of simple but wildly different tasks; a GPU does complex tasks that it is purpose-built for.

2

Semyaz t1_j6lnfnr wrote

To put this into perspective, a relatively low resolution monitor is 1920x1080 pixels. That is over 2 million pixels, each of which potentially needs to be sent 3 numbers (red, green, and blue values) for every frame. One gigahertz is 1 billion operations per second. Rendering 60 frames per second is 60 frames * 3 color values * 2 million pixels = 360 million operations per second -- 1/3 of 1 GHz. Even further, graphics depend on tons of other operations like rendering, lighting, and antialiasing that need to happen for every frame that is displayed.
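
Spelling out that back-of-the-envelope arithmetic (using the comment's own rough figures):

```python
pixels = 1920 * 1080        # ~2.07 million pixels
color_values = 3            # red, green, and blue per pixel
frames_per_second = 60

ops_per_second = pixels * color_values * frames_per_second
print(f"{ops_per_second:,}")   # 373,248,000 -- in the same ballpark as the
                               # ~360 million estimate above, a third of a 1 GHz core,
                               # before any rendering, lighting, or antialiasing
```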

It becomes clear that raw speed is not going to solve the problem. We like fast processors because they are more responsive, just like our eyes like higher frame rates because it is smoother. To get smooth, high frame rate video, we need specialized processors that can render millions of pixels dozens of times a second. The trick with GPUs is parallelization.

GPUs have relatively low clock speeds (~1 GHz) compared to CPUs (3-4 GHz), but they have thousands of cores. That's right, thousands of cores. They also use a larger instruction size: usually 256 bits compared to CPUs' 64 bits. What this all boils down to is boosting throughput. Computing values for those millions of pixels becomes a whole lot easier when you have 2,000 "slower" cores doing the work all together.

The typical follow up question is “why don’t we just use GPUs for everything since they are so fast and have so many cores?” Primarily because GPUs are purpose built for the task they were designed for. Although that doesn’t prevent the possibility of general computing on GPUs, we humans like computers to be super snappy. Where CPUs can juggle dozens of tasks without a hiccup, GPUs are powerhouses for churning through an incredible volume of repetitive calculations.

PS: Some software takes advantage of the GPU for churning through data. Lots of video and audio editing software can leverage your GPU. Also CAD programs will use the GPU for physics simulations for the same reason.

2

WeirdGamerAidan OP t1_j6lnvtx wrote

1920×1080 is over 2 million, not billion, BUT I see what you mean

2

Semyaz t1_j6lp2tn wrote

Woops. Thanks for catching that. Will edit

2

Oclure t1_j6lrq3d wrote

The CPU is a handful of college math majors: they are skilled in handling a wide variety of problems and in general are much faster than most at doing those calculations. The GPU is a gymnasium full of 5th graders: don't ask them to handle advanced calculus, but give them a few thousand basic algebra questions and that mob of students is going to be done way faster than those couple of math majors.

Less ELI5: in general the CPU is deciding what happens on the screen and the GPU is in charge of saying what that looks like, as the one takes on a lot of varied calculations and the other is more specialized at just drawing shapes and applying textures to them, but doing it with a ton of cores at once.

When it comes to games, the CPU is running the game itself, saying what entity is where and where things are headed. The GPU is constantly trying to draw what the CPU says is there: it loads all the physical assets into its own memory, does all the calculations for how the scene is lit, and dumps its result onto the screen. Once it's done with all of that it asks the CPU where everything is again and starts all over.

The CPU contains only a handful or so of very powerful general-purpose cores to do the work; a modern GPU, on the other hand, has thousands of less flexible, dumber cores that can brute-force their way through all the work it takes to generate a frame in a modern game. Plus, having much faster memory on board the card itself helps when the GPU is constantly referencing large texture files and storing information for the current frame it's working on.

2

rupertavery t1_j6luky7 wrote

It could, but then it would have to do everything the GPU does on top of everything it already has to do, and that would slow it down. This is called software rendering.

Furthermore, the GPU and the CPU aren't the same when it comes down to it. Sure, they are both made from millions of transistors (digital switches) that turn on and off billions of times a second, but GPUs are like lots of small houses in the suburbs, while the CPU is like tall city skyscrapers; they're built to do different things.

The GPU has a highly specialized set of processors and pipelines that are really good at doing almost the same thing to a set of data really fast and in parallel, whereas the CPU has a more generalized processor that is built to be able to do more than just shader, texture and vertex calculations (those things that are really important to what makes 3D graphics amazing, when properly done).

The CPU does everything else: run a program, interact with the user via inputs, and communicate with everything else, like the sound device, the network device, disk storage, and memory.

Before, "GPUs" were usually just integrated into the motherboard, they were called "framebuffers" and they did mostly that, "buffer" or store one or two "frames" of data long enough for the scanlines to "draw" the image in the buffer to the screen.

But then people wanted more from video games. They wanted effects, like blending two colors together to make transparency effects from when half of your character was in water.

Sure, you could do it in software, but it could be slow, and as resolutions increased, the time to render stuff on screen became longer. So engineers thought, well, why not make a specialized card to do those effects for you? This meant that the framebuffer would now have to have its own processor, to take care of stuff like transparencies and all, and now the CPU was free to do other stuff, like handle more things to be displayed on the screen.

Soon technology became fast enough so that you could send vertexes (3D points instead of 2D pixels) to the video card, and tell the video card to fill in the 3D points (usually triangles) with color (pixels) that would be displayed on screen, instead of telling it to fill a screen with many flat (2D) pixels. And it could do this 60 times per second.

Then, instead of just filling in triangles with a single color, you could now upload a texture (an image, say of a barn wall) to the video card, and tell the video card to use the image to paint in the triangles that make up the wall in your scene. And it could do this for hundreds of thousands, and then millions, of triangles.

All this while, the CPU is free to focus on other parts of the game, not waiting for data to load into the video card (by using direct memory access for example), or doing any actual pixel filling. Just sending data and commands over on what to do with that data.

Now imagine if the CPU had to do everything: it would be huge and complex and expensive and run hot, and if it broke you'd have to replace your GPU AND your CPU.

2

Cross_22 t1_j6luwcz wrote

Screens have millions of pixels and need to be updated at least 50 times per second. It is possible to connect a CPU directly to an HDMI cable (I have done that) but that doesn't really leave much time for the CPU to do any other work.

For that reason computers have had dedicated graphics chips for a very long time. In the early days those were fairly simple chips that just shared memory with the CPU. The CPU would put instructions like "blue 8x8 pixel square goes here", "Pac-Man goes there" into memory and then the graphics chip would send the right amount of electricity to the monitor at the right time.

These graphics chips have become more and more advanced and about 25-ish years ago were rebranded as GPUs. Nowadays they are quite generic and can run complicated calculations at phenomenal speeds.

2

DarkWhovianRises t1_j6m059m wrote

As said above, GPUs are centred specifically on video processing tasks. This is why, even if you don't want a discrete GPU, you will need a CPU capable of handling integrated graphics. The AMD APU series comes to mind.

2

Tazavoo t1_j6m05a6 wrote

>What information is the CPU sending to the GPU that it can't just send to a display

Very much simplified, you can think of the CPU sending information like this:

  • There are 3 vertices (points) at the x, y, z coordinates (340, 239, 485), (312, 285, 512), (352, 297, 482) that form a triangle.
  • The vertices have these and those colors, textures, bump maps, reflective values, opacities etc.
  • The camera looks at them from position (112, 756, 912) with this and that angle, viewport, zoom.
  • There is a spotlight at (567, 88, 45) with this angle, shape, color, intensity. There is another one at (342, 1274, 1056).

And the GPU will come up with

  • What is the RGB color of pixel 1234, 342 on the display.

As others have answered, the CPU could do this, but the CPU is optimized for doing a bit of everything, and the GPU is optimized for doing a lot of floating point (decimal value) calculations in parallel.
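
To hint at the kind of math hiding in that hand-off, here's a minimal perspective-projection sketch in Python (the focal length is an invented parameter and the camera is assumed to look down its -Z axis; a real GPU pipeline also handles rotation, clipping, rasterization, shading, and much more):

```python
import numpy as np

# Triangle vertices in world space, roughly like the coordinates above.
vertices = np.array([[340.0, 239.0, 485.0],
                     [312.0, 285.0, 512.0],
                     [352.0, 297.0, 482.0]])

camera_pos = np.array([112.0, 756.0, 912.0])
focal_length = 800.0      # assumed value; controls the field of view

def project(vertex):
    """Project one 3D point into 2D screen coordinates."""
    x, y, z = vertex - camera_pos       # position relative to the camera
    return (focal_length * x / -z,      # perspective divide: farther points
            focal_length * y / -z)      # land closer to the screen center

for v in vertices:
    print(project(v))
```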

2

Loki-L t1_j6m1u4r wrote

The CPU is the brain of your computer. It can do everything.

The GPU is a specialized idiot savant. It can only do one type of thing, but it can do it really well.

The GPU is good at a certain type of math problem that is needed to create 3D images.

The CPU can do that sort of math too, but since it isn't specialized for it, it isn't as good at it. The CPU isn't as fast at that sort of thing.

The type of math the GPU does well is sometimes useful for other things too, like mining Crypto or certain types of simulations.

2

Isogash t1_j6mb40j wrote

CPU cores each run a different program.

GPU cores all run the same program at the same time, but each core operates on different data. They are much slower and more basic than a CPU core, but also much smaller (because they don't need all of the same parts) so you can have thousands of them. Because of that, you can use them to crunch a large amount of repeated calculations very quickly. This is used for mostly for the per-pixel calculations for your display, but they can be also used for other things too like AI training (or Bitcoin mining if you hate the environment.)

2

HunterSTL t1_j6met5i wrote

The greatest analogy I heard in that regard is to see the CPU as a couple of professors, each calculating difficult equations, while the GPU is a large group of toddlers each coloring a square.

Of course the professors could also color the squares, but they would just waste their potential since they are able to perform much more difficult tasks. It's simply more efficient to let the couple of professors do the hard work, while the large group of toddlers works on coloring the squares.

2

XJDenton t1_j6mf235 wrote

A CPU is a dremel: can be used for a lot of different tasks and pretty good at most of them.

A GPU is a chainsaw: far more narrow in what its useful for but far more efficient for that what it is good at.

And while you could cut down a tree with a dremel, there are good reasons to use a chainsaw, especially if you need to cut down a certain number of trees per hour.

2

MeepTheChangeling t1_j6mf9lz wrote

Nothing. Your CPU can do everything your GPU can. At like, 1:100000000th the speed. Do you want to play your games at about 1 frame every 3 minutes? Then use your CPU.

Then why have a CPU? Why not just a GPU? Because to make the GPU do the type of math that is needed to draw images very very very very fast, it has to be made poopy at other kinds of math. This isn't a software problem either, it's a hardware problem. IE the little squiggles we etch into crystals to zot electricity through to make the crystal think for us need to be drawn in certain ways to work how we want, and we can't have two doodles overlapping each other.

A CPU can do almost anything, but it's slow because of it. A GPU can do graphics (and certain types of AI work) very very very fast, but can't do anything else quickly (or in some cases, at all). So you need both. Unless you don't give a hoot about fancy graphics and are okay with your computer being able to produce graphics that are on par with 80s and VERY early 90s PC graphics only.

2

IgnorantGenius t1_j6mggg8 wrote

Good question! They should just put some CPUs on GPUs to bypass this and render even faster!

2

fluorihammastahna t1_j6mgwod wrote

Another point is that even if the GPU did very little computing, you would still need something able to communicate between your screen and your computer. This is similar to external sound cards and network cards, although those are not so common these days because for most users the integrated ones are fine.

2

SarixInTheHouse t1_j6mh16t wrote

TLDR: the GPU is specialized for the tasks needed for rendering. That way the CPU doesn't have to do all the work and can instead do other tasks that the GPU isn't capable of doing.

My best eli5:

You have two workers. One can do everything, but not particularly fast. The other can draw really well but can't do anything else.

Of course you could have the first worker do everything, but that would be slow. Instead you have the first one do all the story and background, and the second guy do just the drawings, so that the first has time for other stuff.

A bit more technical:

In a really rudimentary CPU you have a component that adds things together. Now let’s say you want to multiply two numbers. You could do that by simply adding several times.

If you do that you block the component that adds for quite a while. So instead you make a new component dedicated to multiplying. Now you can simultaneously add something while something else is being multiplied.

So you’re dedicating a component to a specific task in order to have more performance. And this goes way further. Anything you gpu does your cpu can too. But it’s like the multiplying: if the cpu does everything the gpu does Them is occupied a lot and can’t work on other tasks.

The GPU has components for tasks that are very common for rendering. So you take the entire workload of rendering an image away from the cpu and shove it onto the gpu. Now your cpu is free to do other things that the gpu can’t.

So yes, you can run a computer with just a cpu. But this cpu would constantly be occupied with rendering, and while its doing that it can’t do other stuff. You end up with a lot less performance.

2

stevecam t1_j6mmcf0 wrote

The CPU is capable, but it contains a larger set of instructions; this slows things down, creating wait times. The GPU has a handful of simple instructions that come in the form of shader programs. It's able to process them and reduce wait times due to that efficiency.

2

hatsuseno t1_j6mt2v3 wrote

Computers don't need GPUs. Older computers from the 80s sometimes didn't have any GPU, and the CPU was responsible for redrawing what you see on the monitor.

The problem with that is that it costs a lot of CPU time to do so, redrawing all the pixels 50 or 60 times per second.

GPUs started out as nothing more than a helper chip(set) for the CPU, so it wouldn't be doing the pixel pushing, but could do other stuff at the same time.

As what we wanted to see on the screen became more complex GPUs consequently also became more complex. First it was 2D acceleration to improve drawing windows and sprites on the screen, later 3D acceleration for obvious uses.

Or said in one line, CPUs are generalists so they can do the 'anything' computers are known for, GPUs are specialists so the CPU can continue doing generalist stuff, like instructing the GPU to 'draw a rectangle there', 'move the sprite from here to here over X frames', or 'add a sphere to the 3D scene'.

2

Any-Broccoli-3911 t1_j6mxpss wrote

The CPU sends the meshes (set of vertices) and textures (non-uniform colors that go in the area between vertices) to the GPU when the software is loaded. Those are saved in the GPU memory. They can be updated from time to time, but they are changed as rarely as possible.

For each frame, the CPU sends commands to show those meshes, by sending their position, axis, and scale, and those textures, by sending between which vertices they should appear. The GPU gets all those objects from memory, puts them in the right position, axis, and scale in RGB arrays, and combines them. Combining them includes keeping only the ones in front if they are opaque, and doing some addition if they have some transparency. GPUs can also compute the effects of lights, in particular using ray tracing, to determine the brightness of each pixel.

Here is some extra information: https://computergraphics.stackexchange.com/questions/12110/what-data-is-passed-from-the-cpu-to-the-gpu-each-frame

2

Batfan1939 t1_j6nwsok wrote

Computers technically don't "need" a GPU. In fact, early home computers didn't have them. The American NES and Japanese Famicom game consoles were among the first to have GPUs (then called the PPU).

The main advantages of this are…

1.) The GPU can process graphics information while the CPU handles other tasks, speeding up processing by acting as a digital carpool lane. In systems without this, programmers had to decide how much of a program's processing went to executing code, and how much went to graphics and sound. Time spent on one of these was essentially taken away from the other two.

A common side effect of this is that older programs (particularly games) would frequently only use part of the screen, since fewer pixels or tiles meant less time needed for processing. Some early games on, for example, the Atari, would even remove lines on the left and right sides to further reduce graphics requirements.

2.) Because it only handles graphical data, the GPU can be optimized in ways the CPU can't, allowing it to perform these calculations much faster than simply having, say, a second CPU running. This comes at the cost of being inefficient at, or even unable to perform, other calculations.

I remember reading a webpage/blog post where a 3D render was 30× faster when done with the GPU vs the CPU. This was ten or fifteen years ago.

TL;DR? It allows more data to be processed at once, and optimizes the processing of the particularly complex graphics calculations.

2

gromm93 t1_j6ocudm wrote

The CPU is a general-purpose computer. It's strength lies in being able to do any kind of calculation. A GPU is a specialized computer that's optimised for the specific task of rendering 3D graphics, and does its job much faster as a result.

2

Ilookouttrainwindow t1_j6oe9sg wrote

CPU can do everything. It's one really smart and hard working guy just going off of endless instructions.

The GPU is a collection of smart, fast, hard-working guys who are all given portions of specific instructions and are told to start working at the same time.

Another analogy: the CPU is me remodeling a bathroom step by step in an apartment building. When done, I move to the next one.

The GPU is a bunch of guys, each assigned a small task, all located in their designated bathrooms in the apartment building.

Which one is overall faster?

2

blue_nylon t1_j6owwzd wrote

Say you want to draw a circle with radius R in location X,Y on your screen. To do this in the CPU you would need to do all the math to figure out which pixels should light up and which should not in order to make the circle, and then send that pixel data to the monitor. This takes a lot of CPU resources and will generally be very slow.
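
As a rough sketch of what "all the math" means for that circle, here's a naive Python rasterizer (a real GPU or graphics library does this far more cleverly, but the idea is the same):

```python
def circle_pixels(cx, cy, r):
    """Return the (x, y) pixels inside a circle of radius r centered at (cx, cy)."""
    lit = []
    for y in range(cy - r, cy + r + 1):
        for x in range(cx - r, cx + r + 1):
            if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
                lit.append((x, y))
    return lit

# Example: a small circle at (100, 80) with radius 5 -- the CPU would then copy
# these pixel values into the frame that gets sent to the monitor.
print(len(circle_pixels(100, 80, 5)))
```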

What a GPU does is simplify common operations like this for the CPU. Instead of doing all the math, the CPU can send a command to the GPU to draw the circle with the same parameters, and the GPU will handle it automatically. This will free up the CPU to do other stuff. Additionally the GPU will have dedicated circuits in the chip that can do all the math and draw to the screen much faster than the CPU.

So technically yes the CPU could send direct to the display, but a separate GPU will accelerate a lot of the common math needed to draw the images to the screen.

2

Lorien6 t1_j6p17nm wrote

Compare the author to the painter.

One inspires the other to create a visual representation of the numbers and inputs.

2

IMovedYourCheese t1_j6lcd0m wrote

A GPU is essentially a second CPU for your computer. The difference is that while a CPU is good at any task you can throw at it, a GPU is really good at exactly one thing – performing the kind of complex mathematical calculations that are used to render graphics. A CPU could technically perform these operations, but it would just be a lot slower at it.

When you are playing a game that has fancy HD graphics and needs to run at a high FPS, the CPU can offload the rendering to the GPU and the GPU sends the final frames to the display directly, resulting in much faster performance.

1

PckMan t1_j6n10hl wrote

A GPU is almost like a separate computer inside your computer, suited to one specific task. It's like having a gaming console attached to your motherboard. A CPU can more or less do anything. After all, an "integrated GPU" is really just the CPU doing the job of the graphics card as well as its own. The problem is that for most types of use for a PC, you don't need a GPU at all. Office computers, casual users just browsing the web and watching movies, store computers, etc. don't really need a GPU, because their tasks do not require lots of processing power. Conversely, there are some tasks/activities, like gaming, rendering, CAD/CAM software and others, that do require a lot of processing power, a disproportionate amount compared to most other things. So the solution is to have a "separate" computer inside your computer, with its own processors and its own memory, dedicated to those tasks specifically, and since software is written around this industry convention, the GPU will perform those tasks more efficiently. Something like a server, used for different tasks, won't have a GPU at all, but it will have multiple CPUs and tons of storage space, because that's the kind of resources it needs for its tasks.

1
