rupertavery wrote:
It could, but then it would have to do everything the GPU does on top of everything it already has to do, and that would slow it down. This is called software rendering.
Furthermore, a GPU and a CPU aren't built the same way under the hood. Sure, they're both made from millions of transistors (digital switches) that turn on and off billions of times a second, but a GPU is like lots of small houses in the suburbs, while a CPU is a tall skyscraper downtown. They're built to do different things.
The GPU has a highly specialized set of processors and pipelines that are really good at doing almost the same thing to a huge set of data, really fast and in parallel, whereas the CPU has a more generalized processor built to do more than just shader, texture, and vertex calculations (the things that make 3D graphics look amazing when done properly).
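To make that "same thing to lots of data" idea concrete, here's a rough sketch in C. It isn't real GPU code; the names `shade_pixel` and `shade_on_cpu` are made up for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for a GPU fragment shader: one tiny, uniform operation
   applied to every pixel of an 8-bit grayscale image. */
static uint8_t shade_pixel(uint8_t in)
{
    return in / 2; /* e.g., darken by half */
}

/* On a CPU, one core walks the pixels one after another... */
void shade_on_cpu(uint8_t *pixels, size_t count)
{
    for (size_t i = 0; i < count; i++)
        pixels[i] = shade_pixel(pixels[i]);
}

/* ...while a GPU effectively runs shade_pixel() on thousands of
   pixels at once: its many small cores all execute the same step in
   parallel, so the loop disappears and the hardware supplies the
   "for". */
```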
The CPU does everything else: running your programs, handling user input, and communicating with everything else, like the sound device, the network device, disk storage, and memory.
Early on, "GPUs" were usually just simple circuits integrated into the motherboard called "framebuffers," and that's mostly what they did: "buffer" (store) one or two "frames" of pixel data long enough for the display's scanlines to draw the image in the buffer onto the screen.
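A framebuffer really is that simple: a block of memory with one slot per pixel on the screen. A minimal sketch, with the dimensions and the `plot()` helper invented for illustration:

```c
#include <stdint.h>

#define WIDTH  320
#define HEIGHT 240

/* One screen's worth of pixels, one 32-bit color per pixel. */
static uint32_t framebuffer[WIDTH * HEIGHT];

/* Drawing a pixel is nothing more than writing to the right array
   slot; the display hardware scans this memory out, row by row. */
void plot(int x, int y, uint32_t color)
{
    if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
        framebuffer[y * WIDTH + x] = color;
}
```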
But then people wanted more from video games. They wanted effects, like blending two colors together to make transparency, for when half of your character is underwater.
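That blending is simple arithmetic. One common approach (a sketch, with a made-up function name) mixes the two colors by a weight called "alpha":

```c
#include <stdint.h>

/* Classic alpha blend of one color channel (red, green, or blue,
   8 bits each): mix a source color over a destination color.
   alpha = 0 keeps the background; alpha = 255 fully covers it. */
uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha)
{
    return (uint8_t)((src * alpha + dst * (255 - alpha)) / 255);
}
```

For the "half in water" look you'd blend the water color over the character with, say, alpha around 128. A few multiplies per channel, per pixel, per frame: trivial in hardware, but it adds up fast in software.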
Sure, you could do it in software, but it was slow, and as resolutions increased, rendering took longer and longer. So engineers thought, why not make a specialized card to do those effects for you? This meant the framebuffer would now have its own processor to take care of things like transparency, and the CPU was freed up to handle more of what gets displayed on the screen.
Soon the technology became fast enough that you could send vertices (3D points instead of 2D pixels) to the video card and tell it to fill in the shapes those points describe (usually triangles) with colored pixels, instead of telling it to fill the screen with flat 2D pixels yourself. And it could do this 60 times per second.
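"Filling in" a triangle just means deciding, for each pixel, whether it falls inside the three points, and coloring it if so. Here's a minimal flat-color sketch using the edge-function method; it builds on the `plot()` helper from the framebuffer sketch above, and assumes the triangle has already been projected to 2D:

```c
#include <stdint.h>

/* Signed area test: which side of the edge from (ax,ay) to (bx,by)
   does the point (px,py) lie on? Three such tests tell us "inside". */
static int edge(int ax, int ay, int bx, int by, int px, int py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* Fill a triangle with one flat color. Assumes the vertices are
   given counter-clockwise; plot() is defined in the sketch above. */
void fill_triangle(int x0, int y0, int x1, int y1, int x2, int y2,
                   uint32_t color)
{
    /* Only scan the triangle's bounding box, not the whole screen. */
    int minx = x0 < x1 ? (x0 < x2 ? x0 : x2) : (x1 < x2 ? x1 : x2);
    int maxx = x0 > x1 ? (x0 > x2 ? x0 : x2) : (x1 > x2 ? x1 : x2);
    int miny = y0 < y1 ? (y0 < y2 ? y0 : y2) : (y1 < y2 ? y1 : y2);
    int maxy = y0 > y1 ? (y0 > y2 ? y0 : y2) : (y1 > y2 ? y1 : y2);

    for (int y = miny; y <= maxy; y++)
        for (int x = minx; x <= maxx; x++)
            if (edge(x0, y0, x1, y1, x, y) >= 0 &&
                edge(x1, y1, x2, y2, x, y) >= 0 &&
                edge(x2, y2, x0, y0, x, y) >= 0)
                plot(x, y, color);
}
```

A GPU runs essentially this inside test for millions of pixels in parallel; the CPU version above does them one at a time.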
Then, instead of just filling in triangles with a single color, you could upload a texture (an image, say, of a barn wall) to the video card and tell it to use that image to paint the triangles that make up the wall in your scene. And it could do this for hundreds of thousands, then millions, of triangles.
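Texturing swaps the flat color for a lookup: each pixel asks "which part of the image lands here?" A tiny sketch of that lookup, with the layout and names assumed for illustration:

```c
#include <stdint.h>

/* Sample a texture at coordinates u,v in [0,1). The texture is just
   another array of pixels, like the framebuffer. */
uint32_t sample_texture(const uint32_t *texture, int tex_w, int tex_h,
                        float u, float v)
{
    int tx = (int)(u * tex_w);
    int ty = (int)(v * tex_h);
    return texture[ty * tex_w + tx]; /* nearest-neighbor lookup */
}
```

In `fill_triangle()` above, you'd compute u,v for each covered pixel (interpolated from the vertices) and use this lookup instead of one fixed color.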
All this while, the CPU is free to focus on other parts of the game: it isn't waiting for data to load into the video card (thanks to direct memory access, for example) and it isn't doing any of the actual pixel filling. It just sends the data, plus commands on what to do with that data.
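That division of labor often looks like a queue of commands: the CPU writes small "draw this" records and moves on, while the GPU drains the queue at its own pace. A toy sketch, with the struct and names invented for illustration:

```c
#include <stdint.h>

/* One recorded draw request: which vertices, how many triangles,
   which previously uploaded texture. */
typedef struct {
    int      vertex_offset;
    int      triangle_count;
    uint32_t texture_id;
} DrawCommand;

#define QUEUE_SIZE 256
static DrawCommand queue[QUEUE_SIZE];
static int queue_len = 0;

/* The CPU's whole job per draw call: record the command and return
   immediately, without waiting for any pixels to be filled. */
void submit_draw(int vertex_offset, int triangle_count,
                 uint32_t texture_id)
{
    if (queue_len < QUEUE_SIZE) {
        queue[queue_len].vertex_offset  = vertex_offset;
        queue[queue_len].triangle_count = triangle_count;
        queue[queue_len].texture_id     = texture_id;
        queue_len++;
    }
}
```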
Now imagine if the CPU had to do everything: it would be huge, complex, expensive, and hot, and if it broke you'd have to replace your GPU and your CPU at the same time.