Comments

TheLuminary t1_jaa1qz7 wrote

Rendering more information than what can be seen gives the system more room to decide where edges should fade and how different combinations of colours should blend.

Then, once you squish all that information back down to your monitor's resolution, it can be merged into a smoother and more accurate representation. So instead of each pixel getting either a twig colour or a sky colour, it can get a shade in between.

21

TrollErgoSum t1_jaa2qv9 wrote

It's called Supersampling and basically gives the computer more options for what to render for any given pixel.

If you "render" at a higher resolution so that each final pixel is 4 pixels in the supersample, then the computer gets 4 reference points for what color that pixel could be instead of just one. It can then average those 4 values and get a cleaner value for that final pixel.

When you have high contrast areas (black against white for example) the computer can pick a cleaner average between the two areas (shades of gray) instead of only choosing between white and black.
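
A rough sketch of that averaging, assuming each final pixel simply takes the mean of its 4 supersampled RGB values (a simplification of what real hardware does):

    # Average a 2x2 block of supersampled RGB samples into one final pixel.
    def average_block(samples):
        n = len(samples)
        return tuple(round(sum(s[i] for s in samples) / n) for i in range(3))

    # Two black and two white samples along a hard edge...
    block = [(0, 0, 0), (0, 0, 0), (255, 255, 255), (255, 255, 255)]
    print(average_block(block))  # ...average out to a mid gray: (128, 128, 128)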

272

DragonFireCK t1_jaa8z2o wrote

It's easier to understand if you remove color and shades and just consider a black-and-white image. With this, the screen is just a bunch of dots in a grid. If you have something going diagonally across them, such as a triangle, you have to decide which dots to turn on and which to turn off. This means you end up with a choppy-looking pattern:

   ■
  ■■
 ■■■
■■■■

If you can also show gray colors, you can render that line at a higher resolution and average the values of each small block, resulting in gray near the line and a much smoother image. The more averaging you do, the better the end result will be, and this is why 8x looks better than 4x and so forth.
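
A small sketch of that idea, assuming a made-up 8x8 black-and-white render (1 = filled, 0 = empty) being averaged down 2x into a 4x4 grayscale result:

    # Downsample a black/white grid by averaging each 2x2 block; blocks that
    # straddle the edge come out as a fraction between 0 and 1, i.e. gray.
    def downsample_2x(grid):
        out = []
        for y in range(0, len(grid), 2):
            row = []
            for x in range(0, len(grid[y]), 2):
                block = (grid[y][x] + grid[y][x + 1]
                         + grid[y + 1][x] + grid[y + 1][x + 1])
                row.append(block / 4)
            out.append(row)
        return out

    # A diagonal edge rendered at 2x the target resolution.
    hi_res = [
        [0, 0, 0, 0, 0, 0, 0, 1],
        [0, 0, 0, 0, 0, 0, 1, 1],
        [0, 0, 0, 0, 0, 1, 1, 1],
        [0, 0, 0, 0, 1, 1, 1, 1],
        [0, 0, 0, 1, 1, 1, 1, 1],
        [0, 0, 1, 1, 1, 1, 1, 1],
        [0, 1, 1, 1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1, 1, 1, 1],
    ]
    for row in downsample_2x(hi_res):
        print(row)  # edge blocks come out as 0.75 (gray) instead of a hard 0 or 1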

This is known as super sample antialiasing (SSAA). There are other methods to achieve similar results with various benefits and drawbacks:

One is temporal antialiasing (TAA), which averages across multiple frames rendered with slightly jittered camera positions. This one was made well known by Unreal Engine 4.

Another is called Fast Approximate Antialiasing (FXAA), which uses some image recognition-style techniques (a severe simplification) to find the jagged lines and blur them.

15

zachtheperson t1_jaa9ynd wrote

Picture a 3x3 grid of pixels like this:

O O O
O O O
O O O

How would you color in those pixels to draw a complex shape like the letter "P"? You really can't. No matter which way you choose to draw it, you'll either be missing the bottom of the stem, the hole in the middle, or something else. Similarly, if we were to try to draw a diagonal line "\", it would have to be stair-stepped.

Computers try to get around this by doing something called "anti-aliasing," which adds translucent pixels around the edges. But it's more of a band-aid for the problem, since it relies on the computer guessing at missing information, and it often results in trade-offs: edges get smoother, but tiny details vanish because they were too small to be rendered in the first place. By rendering at a higher resolution, we can capture all the extra detail that needs to be there, and then scale down to get an "accurate" image.

8

anengineerandacat t1_jab0ffh wrote

Video games are very "noisy" in terms of image output: tons and tons of approximations, plus high-precision but only limitedly "accurate" solutions, are used to render out the result. There are so many factors to consider that it's hard to give you one "exact" answer, but a very, very simple one is the natural anti-aliasing you get as a result of downsampling.

Much of this has to do with the fact that we are rendering out to "pixels": geometry doesn't always fit them perfectly, so filtering is needed to clean up the artifacts.

If your game is running at say Full-HD (1920x1080) and you have absolutely nothing else being done to the image (no anti-aliasing, no mip-mapping, no anisotropic-filtering, etc.) you'll have a bit of a pixel soup of a scene (some textures will be blurry, others will be pixelated, and you'll have "jaggies" across most non-transparent objects in the game).

You can easily solve a lot of these problems by simply using a buffer that is 4x larger than your target output resolution, drawing your game scene to this and then sampling it down to the target resolution.

This is usually what we call "super-sampling," and there are several ways to do it. When you downsample, you merge or discard information on the way down, and much of the sub-pixel detail from the source is folded into the final image.

If you have a camera at home you can effectively do this yourself in the real world: take a photo, open it in, say, Photoshop, and resize it down by 33%. You'll notice the image quality of the resized photo is usually improved (this is for a variety of reasons, but mostly because ISO noise is heavily reduced).
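
If you want to try that in code rather than Photoshop, here's a minimal sketch with the Pillow library (the file names are just placeholders):

    # Downscale a photo by 33% (to 2/3 of its size) with a high-quality filter.
    from PIL import Image

    img = Image.open("photo.jpg")  # placeholder file name
    smaller = img.resize((img.width * 2 // 3, img.height * 2 // 3), Image.LANCZOS)
    smaller.save("photo_small.jpg")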

It's just not widely utilized because it's computationally expensive, and it isn't always the case that all of a game's shaders scale with the increased resolution (or, in rarer cases, certain targets can't be created because some shaders could already be overdrawing).

If you want just overall a... "glimpse" into how complicated it is to render a game scene I highly recommend this post by Adrian Courrèges which will give you a deep but IMHO high level explanation from a semi-modern game: https://www.adriancourreges.com/blog/2015/11/02/gta-v-graphics-study/

5

TheLuminary t1_jabj0t4 wrote

The game does.... But it uses these techniques to do it. That's why you have lots of antialiasing options, including super sampling.

Rendering at a higher resolution than your display but turning anti-aliasing off is just kind of manually doing what's built in.

3

JaggedMetalOs t1_jabnco7 wrote

Mip-mapping is slightly different - it's automatically using smaller versions of textures for far away objects, which speeds up rendering and makes the distant textures look better as the GPU isn't good at scaling textures down more than 2x.

What OP is talking about is better described as antialiasing.

5

JaggedMetalOs t1_jabnvfm wrote

It can, but some now-common graphical special effects and lighting techniques don't work, because they need to know exactly which object on screen each pixel belongs to, while that gray pixel is part of both the black and the white object.

It's kind of complex, but it's generally thought that the overall look with those effects and without antialiasing is better than without the effects but with antialiasing.

There are workarounds that give something similar to real antialiasing and still work with those effects, or, if you have lots of GPU power but a low-res monitor, you can do what OP asks and render a larger image that you then scale down.

1

iTwango t1_jabpcrh wrote

If you want to draw a colour, but only have a red crayon and a green crayon, it's a lot harder to make something that looks nice than if you have ten different red crayons and ten green crayons in between.

29

WjeZg0uK6hbH t1_jabsgtu wrote

Assuming the time between frames is constant, there is no advantage. Having the GPU output more frames than the monitor can show just means the monitor will ignore the extra frames. The GPU will do more work and heat your room up faster, so if your room is chilly it might be an advantage, depending on what other kind of heating you are using. Most games have a frame limit, vsync, or freesync setting, which in their own ways limit the frame rate to something appropriate.

−1

AetherialWomble t1_jaby9nq wrote

That's fundamentally wrong. With more frames, the information displayed on your screen will be newer.

For the sake of simplicity, let's say you had a 1 Hz screen and a GPU producing 1 fps. By the time a frame appears on your screen, it's already 1 second old. You get to see a frame that was generated 1 second ago.

Now, if you had 4fps, the frame you see would only be 0.25 seconds old.

Linus had a video a while back, comparing 60hz and 60fps vs 60hz and 240fps. The difference is MASSIVE.

https://youtu.be/OX31kZbAXsA

10

masagrator t1_jac1rxm wrote

This is actually true only in specific cases. If you use a resolution that doesn't break down into equal square blocks that can be averaged evenly (how this works was explained by other people), you can actually get a less crisp image than at native resolution.

So if you have a 1920x1080 monitor but use a resolution lower than 3840x2160, you will get fewer jagged edges, but the image may be less crisp than plain 1920x1080 because of the "approximation". Since we lack the information to scale the image down cleanly, an algorithm has to figure out the best possible result. How well that works varies; with a small resolution difference the image is noisier, which means it ends up less sharp. It's more visible on displays with lower resolution: a 900p image on a 720p display will look visibly less sharp than a 1440p image on a 1080p display.
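
A tiny sketch of why non-integer ratios are harder, assuming the usual pixel-centre convention for mapping output pixels back to source coordinates:

    # Where each output pixel's centre lands in the source image.
    def source_positions(dst_size, src_size):
        scale = src_size / dst_size
        return [(x + 0.5) * scale - 0.5 for x in range(dst_size)]

    print(source_positions(4, 8))  # 2.0x ratio: [0.5, 2.5, 4.5, 6.5] - every centre
                                   # sits exactly between two source pixels
    print(source_positions(4, 5))  # 1.25x ratio: [0.125, 1.375, 2.625, 3.875] - each
                                   # centre needs a different blend of neighbours

With a clean 2x ratio every output pixel is just the average of a 2x2 block; with 1.25x each pixel needs its own interpolation weights, which is where the softness comes from.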

It's most visible with text, if you set the higher resolution for the whole game instead of using an internal scaler that renders only the 3D objects at higher res while the 2D HUD stays at native resolution.

1

MINIMAN10001 t1_jac3glg wrote

Let's just make up some numbers.

Imagine your screen refreshes at 100 Hz, i.e. 10 ms per frame. With a perfect sync, your input will be delayed by that 10 ms.

But what if you ran at 200 fps, i.e. 5 ms per frame? Well, now your input is only delayed by 5 ms, because a frame is being drawn every 5 ms and your GPU only holds on to the newest frame, created every 5 ms, to submit to the monitor.

This latency is additive with any latency from the mouse/keyboard to the computer, as well as your monitor's processing time, known as input latency and tagged as "Lag" in TFT Central reviews.

For example, a TFT Central review of the Acer Nitro XV273 X noted that for that particular monitor they could only estimate potentially 0.5 ms of input latency, but marked it as 0 since the estimate was not an actual measurement, plus 2 ms of grey-to-grey response time, giving it a total input latency of 2 ms. The average range one may see goes from 3 to 8 ms.
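
Putting those pieces together, a back-of-the-envelope sum of the latency budget; the 1 ms for the mouse is just a made-up placeholder, not a measured value:

    # Rough additive latency budget using the numbers above.
    frame_wait_ms = 1000 / 200  # ~5 ms waiting on the newest frame at 200 fps
    monitor_ms = 2              # input latency + grey-to-grey from the review
    mouse_ms = 1                # placeholder peripheral latency
    print(f"total = {frame_wait_ms + monitor_ms + mouse_ms:.0f} ms")  # 8 ms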

Also worth noting that a processing time of 100 ms on a television isn't unusual, which is why TVs are generally not recommended for gaming.

2

SifTheAbyss t1_jaeyvp7 wrote

Input lag will be decreased by up to 1 frame of what the monitor can display.

Say you have a 60 Hz monitor and render at 120 fps: the completed images are ready up to 0.5 frames earlier (counting with 60 fps as the base). If you render at 300 fps, the images arrive in 1/5th of the original 60 fps frame time, so you win up to the remaining 0.8 frames.
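
The same numbers as a quick back-of-the-envelope calculation (worst-case age of the newest completed frame when a 60 Hz refresh happens, ignoring sync alignment):

    # Worst-case frame age at refresh time, and the saving versus 60 fps,
    # expressed in 60 Hz frames.
    MONITOR_HZ = 60
    refresh_ms = 1000 / MONITOR_HZ  # ~16.7 ms between refreshes

    for fps in (60, 120, 300):
        worst_age_ms = 1000 / fps
        saved = (refresh_ms - worst_age_ms) / refresh_ms
        print(f"{fps} fps: frame up to {worst_age_ms:.1f} ms old, "
              f"saves up to {saved:.1f} of a frame")
    # 60 fps: 16.7 ms, 0.0 | 120 fps: 8.3 ms, 0.5 | 300 fps: 3.3 ms, 0.8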

As you can guess, this is most of the time not worth it though, as in that last example the GPU still does 5 times the work just for a marginal increase.

If you have a monitor that has Freesync/Gsync, the ready frames get sent immediately, so no need to render above the monitor's refresh rate.

2