A Methodology for Evaluating Illumination Artifact Removal for Corresponding Images
Abstract
Robust stereo and optical flow disparity matching is essential for computer vision applications operating under varying illumination conditions. Most robust disparity matching algorithms rely on computationally expensive normalized variants of the brightness constancy assumption to compute the matching criterion. In this paper, we reinvestigate the removal of global and large-area illumination artifacts, such as vignetting, camera gain, and shading reflections, by directly modifying the input images. We show that this significantly reduces violations of the brightness constancy assumption while maintaining the information content of the images. In particular, we define metrics and perform a methodical evaluation to, first, quantify the loss of information in the images and, second, determine the reduction of brightness constancy violations. Third, we experimentally validate that modifying the input images yields robustness against illumination artifacts for optical flow disparity matching.
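To make the core idea concrete, the following is a minimal sketch (not the paper's method) of how a global multiplicative illumination artifact such as camera gain violates brightness constancy between corresponding frames, and how directly modifying the input images — here, a simple global mean normalization, chosen for illustration — can remove that violation. The synthetic scene and the gain factor of 1.6 are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene observed twice: the second frame carries a global
# camera-gain artifact (hypothetical multiplicative factor of 1.6).
scene = rng.random((64, 64))
frame_a = scene
frame_b = 1.6 * scene  # brightness constancy is violated everywhere


def bc_violation(a, b):
    """Mean absolute brightness-constancy residual between two frames."""
    return float(np.mean(np.abs(a - b)))


def normalize_gain(img):
    """Divide by the global mean to cancel a multiplicative gain."""
    return img / img.mean()


raw = bc_violation(frame_a, frame_b)
fixed = bc_violation(normalize_gain(frame_a), normalize_gain(frame_b))
print(fixed < raw)  # prints True: normalization reduces the residual
```

A purely multiplicative global gain is cancelled exactly by this normalization; spatially varying artifacts such as vignetting would require a local (e.g., windowed) variant of the same idea.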