KingRandomGuy
KingRandomGuy t1_ja74wh7 wrote
Reply to comment by DPanzer17 in I shot over 3600 one-second exposures to get my sharpest image of a galaxy to date by J3RRYLIKESCHEESE
So the downside to long shutter speeds is that if any motion happens during the exposure, you will get blur in your image. This is why photos of people in low light often aren't sharp - people moving even a little bit during a long exposure leaves visible blur.
For astrophotography, the problem is that the sky appears to slowly rotate (because the Earth is spinning). This is barely perceptible to us, but it's enough rotation that a 10-second exposure may produce star trails - stars will appear as streaks instead of points.
So, as you've correctly identified, we want to maximize our exposure time, but you can't go too long or you'll get blurry images. Star trackers can alleviate this, but that's an extra piece of equipment. One rule of thumb that helps is the 500 rule, which states that any exposure longer than 500/f seconds, where f is your lens's focal length in millimeters (full-frame equivalent), will show motion blur. Do note that you'll often need to stay well under this number to avoid blur entirely, but it's a nice starting point.
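The 500 rule is easy to sketch in a few lines. The focal lengths below are just made-up example lenses, not anything from the original comment:

```python
# Rule-of-thumb max exposure before star trails: 500 / focal_length
# (focal length in full-frame-equivalent millimeters)

def max_exposure_500_rule(focal_length_mm):
    """Return the 500-rule exposure limit in seconds."""
    return 500 / focal_length_mm

for f in (14, 50, 200):  # hypothetical wide, normal, and telephoto lenses
    print(f"{f}mm lens: about {max_exposure_500_rule(f):.1f}s before trails")
```

As the comment notes, treat these numbers as upper bounds - sharp results usually need noticeably shorter exposures.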
Clear skies!
KingRandomGuy t1_ja6fsxk wrote
Reply to comment by throughawaeladdie in I shot over 3600 one-second exposures to get my sharpest image of a galaxy to date by J3RRYLIKESCHEESE
No problem! I'm happy to answer any other questions too.
Astrophotography is quite a technical hobby but it's really rewarding IMO. I'd recommend giving it a try if you have the time and resources.
KingRandomGuy t1_ja6fed5 wrote
Reply to comment by throughawaeladdie in I shot over 3600 one-second exposures to get my sharpest image of a galaxy to date by J3RRYLIKESCHEESE
No, they would have less noise. A longer exposure means that more light hits the sensor, so you collect more signal while the sensor's fixed read noise stays the same (shot noise does grow, but only as the square root of the signal). In this case, assuming that the ISO (sensitivity) of the camera remained constant, doubling the exposure time would double the brightness (signal) while barely increasing the noise. In turn, the pre-stacked images would have a higher signal-to-noise ratio.
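A quick simulation makes this concrete. All the numbers here (photon rate, read noise) are invented for illustration, under a simple Poisson-shot-noise-plus-Gaussian-read-noise model:

```python
import numpy as np

rng = np.random.default_rng(0)
read_noise = 5.0      # electrons RMS of read noise per frame (made-up value)
photon_rate = 100.0   # electrons per second from the target (made-up value)

def simulated_snr(exposure_s, n_trials=100_000):
    # Each trial: Poisson shot noise on the signal plus Gaussian read noise
    signal = rng.poisson(photon_rate * exposure_s, n_trials)
    measured = signal + rng.normal(0.0, read_noise, n_trials)
    return measured.mean() / measured.std()

print(simulated_snr(1.0))  # signal 100, total noise ~sqrt(100 + 25)
print(simulated_snr(2.0))  # signal doubles, but noise grows more slowly
```

Doubling the exposure roughly takes the SNR from about 9 to about 13 in this toy model - better than a single short frame, though not a full 2x because shot noise also grows.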
KingRandomGuy t1_ja4x3uu wrote
Reply to comment by throughawaeladdie in I shot over 3600 one-second exposures to get my sharpest image of a galaxy to date by J3RRYLIKESCHEESE
Whenever you take a photo, the image sensor is exposed to light for some amount of time. The duration of this period where the sensor is capturing light is called the "exposure time" or "shutter speed." By increasing this duration, the sensor receives more light, which is useful for photographing dim objects (for example, a 2-second exposure gathers twice as much light as a 1-second exposure). However, this has a tradeoff - if the camera or subject moves before the exposure finishes, the image will be blurred. This is why photos taken in low light tend to be blurrier, since there is more of a chance that the camera will shake in your hands or the subject will move.
When you take photos in daytime, you usually use a shutter speed somewhere from 1/8000th of a second to 1/30th of a second. At night, however, these kinds of shutter speeds will not give you any reasonable signal from dim objects - more or less just a black frame. As such, you take exposures of 1 second or more to gather as much light as possible without incurring blur from the sky's rotation (tracking mounts help with this).
However, for dim objects like galaxies, even 1-second exposures don't capture enough signal compared to the overall noise level of the camera. As such, we can take many 1-second exposures and stack them together by aligning the frames (done by registering the stars in each image) and then combining the pixel values. Stacking N frames improves the signal-to-noise ratio by roughly √N, allowing us to make a far more detailed image once properly edited.
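The √N improvement can be sketched with a single simulated pixel. The signal and noise values below are hypothetical - chosen so the target is buried in noise per frame, mirroring the 3600-exposure scenario:

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 10.0   # hypothetical per-frame brightness of a faint target
noise_sigma = 20.0   # hypothetical per-frame noise - larger than the signal!

# Simulate 3600 noisy one-second "exposures" of a single pixel
frames = true_signal + rng.normal(0.0, noise_sigma, size=3600)

single_snr = true_signal / noise_sigma
stacked_value = frames.mean()  # stacking aligned frames ~ averaging pixel values
stacked_snr = true_signal / (noise_sigma / np.sqrt(len(frames)))

print(f"single-frame SNR: {single_snr:.2f}")   # 0.50 - target buried in noise
print(f"stacked estimate: {stacked_value:.2f}")
print(f"stacked SNR: {stacked_snr:.2f}")       # 30.00 = 0.50 * sqrt(3600)
```

So 3600 frames buy a 60x SNR improvement over one frame, which is why the stacked galaxy image shows detail that no single exposure contains.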
KingRandomGuy t1_j9k363j wrote
Reply to comment by activatedgeek in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
> CNNs provide the inductive bias to prefer functions that handle translation equivariance
There's a neat body of work on inductive biases in CNNs, such as "Making Convolutional Networks Shift-Invariant Again". Really interesting stuff!
KingRandomGuy t1_iza65df wrote
Reply to comment by ThisIsMyStonerAcount in [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
This lines up with ImageNet, but I'd probably drop in AlexNet as well.
KingRandomGuy t1_ja82qp1 wrote
Reply to comment by Total-Oil2289 in I shot over 3600 one-second exposures to get my sharpest image of a galaxy to date by J3RRYLIKESCHEESE
It's a similar solution to lucky imaging, but lucky imaging specifically requires that your exposures are short. You can still stack very long exposures for deep sky objects and get a great result (assuming you are tracking).
The concept is similar, though: in both cases you're stacking to increase the signal-to-noise ratio, and you should throw out bad frames.
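The frame-rejection step can be sketched like this. Everything here is hypothetical - in practice tools like lucky-imaging stackers score each frame by a sharpness metric (e.g. star FWHM) and keep only the best fraction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-frame quality scores: star FWHM in pixels (lower = sharper),
# standing in for 200 short exposures taken through turbulent seeing
fwhm = rng.normal(3.0, 0.8, size=200)

# Lucky-imaging-style selection: keep only the sharpest 25% of frames
keep_fraction = 0.25
threshold = np.quantile(fwhm, keep_fraction)
kept = fwhm[fwhm <= threshold]

print(f"kept {len(kept)} of {len(fwhm)} frames (FWHM <= {threshold:.2f}px)")
```

Only the kept frames would then be aligned and stacked; the rest are discarded so that blurry frames don't drag down the final image.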