Comments

its-a-throw-away_ t1_j6ova1n wrote

It's all a bit physicsy. But the gist is that each pixel in a camera sensor acts like a little capacitor that can do two things:

  1. When energized, its capacitance changes based on the amount of light it receives; and

  2. It can transfer its capacitance value to an adjacent pixel.

The first function makes sense, but what does the latter have to do with anything?

Well, after the sensor is exposed, the camera's logic starts reading pixel values. Instead of trying to route traces from every single pixel to a memory bus, the camera logic reads the value of the last pixel in the sequence. Once read, this value is discarded, and the value of the next-to-last pixel is transferred into the last pixel and read. All of the other pixel values are transferred to their immediate neighbors at the same time, shifting the whole row along by one pixel. This read/shift/read/shift sequence continues until all the pixels are read into the camera's memory, creating the final image.
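If it helps, here's a rough Python sketch of that bucket-brigade readout. Everything here is made up for illustration; the real hardware shifts analog charge, not numbers in a list:

```python
# Toy model of CCD-style readout: each pixel hands its value to its
# neighbor, and only the last pixel in the row is ever actually read.
def read_out_row(pixels):
    row = list(pixels)          # pretend these are charges after exposure
    memory = []
    for _ in range(len(row)):
        memory.append(row[-1])  # read the last pixel in the sequence
        row = [0] + row[:-1]    # shift every other value along by one pixel
    return memory[::-1]         # values arrive last-pixel-first, so reverse

print(read_out_row([10, 42, 7, 99]))  # -> [10, 42, 7, 99]
```

The point is that only one pixel ever needs a wire to the readout circuitry; every other pixel just needs to talk to its neighbor.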

2

dimonium_anonimo t1_j6owo34 wrote

A solar panel is one type of device that converts incident photons into electrical energy. A similar device sits inside the camera and is generally referred to as "the sensor." It's slightly different, though: its elements are tuned to specific colors/wavelengths of light. The sensor is split into pixels, and each pixel has a sub-pixel for red, green, and blue light. If a red, green, or blue photon hits one of the appropriate sub-pixels, it generates a small electrical current which can be measured.

Just like human eyesight, the sub-pixels are not extremely particular about what wavelength excites them; each has a fairly wide response curve. So for instance, yellow light sits somewhere between red and green and will excite both the red and green sensors, but to a slightly lesser degree, meaning a slightly weaker signal (lower current/voltage).
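To put rough numbers on that, here's a toy Python model with completely made-up Gaussian response curves (real sensors have measured, lopsided curves), showing how a yellow-ish wavelength excites both the red and green sub-pixels, but neither one fully:

```python
import math

# Hypothetical peak wavelengths (nm) and curve width; real sensors differ.
PEAKS = {"red": 600, "green": 540, "blue": 460}
WIDTH = 40.0

def response(wavelength_nm):
    """Relative signal each sub-pixel would produce for a single wavelength."""
    return {color: math.exp(-((wavelength_nm - peak) / WIDTH) ** 2)
            for color, peak in PEAKS.items()}

print(response(575))  # yellow-ish: partial red, partial green, almost no blue
```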

What's really key to making an image instead of a poor-efficiency solar panel is the lens, which very finely aims rays of light based both on where they hit the lens and on what angle they arrive from. The lens bends light because it has a different refractive index than air (feel free to ask additional questions on this... or any of these concepts, really). The bent light rays hit very precise locations on the sensor, so only the intended pixel is affected.
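Since refractive index came up, here's a tiny Snell's law sketch in Python, with an assumed glass index of 1.5 purely for illustration:

```python
import math

def refraction_angle(incidence_deg, n_air=1.0, n_glass=1.5):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2)."""
    theta1 = math.radians(incidence_deg)
    return math.degrees(math.asin(n_air * math.sin(theta1) / n_glass))

print(refraction_angle(30))  # a ray arriving at 30 degrees bends to ~19.5 degrees inside the glass
```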

The electronics scan the pixels, typically an entire row or column at a time, then work their way down the entire sensor. This means that if something moves very quickly relative to the speed at which the electronics scan the sensor, it can end up slightly warped due to something called rolling shutter.
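Here's a tiny Python sketch of why that warps things, assuming a made-up scene: a vertical bar that slides one pixel to the right for every row the (simulated) readout finishes:

```python
# Toy rolling-shutter demo: the scene is a vertical bar that moves right
# while the "sensor" reads one row at a time, top to bottom.
HEIGHT, WIDTH, SPEED = 8, 16, 1   # SPEED = pixels the bar moves per row read

image = []
for row in range(HEIGHT):
    bar_x = 2 + row * SPEED       # where the bar is when this row gets read
    image.append("".join("#" if x == bar_x else "." for x in range(WIDTH)))

print("\n".join(image))           # the vertical bar comes out as a diagonal
```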

These files end up fairly large, so the camera actually stores the square root of the excitation (the current is first scaled to a number from 0 for no current up to 255 for max current). Not only does this make the numbers much smaller, it also captures small changes at low light levels better than the same change in very bright conditions, which is also how human vision works: one light bulb switched on when only one other is already on doubles the amount of light, but if there are 100 on and you turn one more on, it's only 1% more light. So we notice contrast better in low-light conditions. Your computer should square the numbers back to a scale of 0-255 before displaying the picture. fun, relevant video
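A minimal Python sketch of that square-root trick (a simplification; real cameras use gamma curves and more bits, but the idea is the same):

```python
import math

def encode(level_0_255):
    """Store the square root: the 0-255 range squeezes down to roughly 0-16."""
    return math.sqrt(level_0_255)

def decode(stored):
    """Square it back to the 0-255 scale before displaying."""
    return stored ** 2

# Dim scene: 4 -> 5 is a big relative jump and survives encoding clearly.
# Bright scene: 200 -> 201 barely changes the stored value at all.
print(encode(4), encode(5))       # 2.0 2.236...
print(encode(200), encode(201))   # 14.142... 14.177...
print(decode(encode(100)))        # 100.0
```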

1

Glad_Significance778 OP t1_j6p18ib wrote

Thanks for your explanation. So basically the sensor is made out of sub-pixels, and when light hits it, those sub-pixels generate an electrical current. The lens then filters the light based on its wavelength and bends it so that it hits the exact pixel. Then the sub-pixels are combined into pixels, which are scanned in rows or columns and then saved?

1

czbz t1_j6pbd0x wrote

Afaik digital cameras don't generally use subpixels. Each pixel is, roughly speaking, only able to detect one of red, green, or blue light, because it's covered by a filter that blocks the other colours. So if a pixel only detects the red part, how can we see whether or not there was a green thing there when we look at that pixel in the image? A computer has to guess what the colour is in that precise spot by using information from the neighbouring pixels.

That guessing is called 'debayering'. It means that effectively the image captures black and white textures in much higher resolution than variations in colour. Generally that fits well enough with what we want to look at and how we see things.

Our eyes are more sensitive to green light than to anything else, so they make cameras to match. Half the pixels are sensitive to green, one quarter to red, and one quarter to blue.
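If you're curious what the guessing looks like, here's a rough Python sketch with a tiny made-up RGGB mosaic and the crudest possible method, averaging whichever neighbours actually measured each colour; real debayering algorithms are much smarter than this:

```python
# Tiny Bayer mosaic (RGGB): each cell holds ONE measured value, and which
# colour it measured depends only on its position in the repeating 2x2 tile.
#   even rows:  R G R G ...
#   odd rows:   G B G B ...
mosaic = [
    [120,  60, 130,  65],
    [ 55,  30,  58,  32],
    [125,  62, 135,  68],
    [ 57,  31,  60,  33],
]

def colour_at(row, col):
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def demosaic(mosaic):
    """Crude debayer: fill in each missing colour with the average of the
    surrounding pixels (including diagonals) that actually measured it."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[None] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            seen = {"R": [], "G": [], "B": []}
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        seen[colour_at(rr, cc)].append(mosaic[rr][cc])
            out[r][c] = tuple(round(sum(v) / len(v))
                              for v in (seen["R"], seen["G"], seen["B"]))
    return out

for row in demosaic(mosaic):
    print(row)  # each pixel is now a guessed (R, G, B) triple
```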

1