Submitted by ToTa_12 t3_yvtelj in MachineLearning
How much does it matter what settings (ISO, f-stop, exposure time) are used in datasets? Of course there are some specific cases, like imaging in dark conditions where the ISO obviously needs to be large and the noise has to be handled. But in the more general case, it seems like many datasets are acquired with auto settings, and quality assessment seems to be based on sharpness and relevance. Are there any papers on how camera settings and lighting solutions affect dataset quality or usability?
PredictorX1 t1_iwg6a0u wrote
It depends on the problem being addressed. Consistency can be hurt by using automatic settings, since small changes in the scene will provoke dramatically different image settings from most cameras. If, for instance, one wanted to detect diseases of the skin, it would probably be helpful to establish (as best possible) uniform lighting conditions and fixed camera settings (shutter speed, ISO, lens f-stop, and any ancillary settings such as color temperature adjustments). If, on the other hand, the goal was to identify individuals by their faces from arbitrary cameras, then a range of camera settings and image quality levels would be a more realistic representation of what the ultimate technical solution will be exposed to during deployment.