Object-Aware NIR-to-Visible Translation

Yunyi Gao, Lin Gu, Qiankun Liu, Ying Fu*

Abstract

While near-infrared (NIR) imaging is essential for assisted driving and safety monitoring systems, its monochromatic nature hinders its broader application, which prompts the development of NIR-to-visible translation tasks. However, the performance of existing translation methods is limited by the neglected disparities between NIR and visible imaging and the lack of paired training data. To address these challenges, we propose a novel object-aware framework for NIR-to-visible translation. Our approach decomposes visible image recovery into object-independent luminance sources and object-specific reflective components, processing them separately to bridge the gap between NIR and visible imaging under various lighting conditions. Leveraging prior segmentation knowledge enhances our model's ability to identify and understand the separated object reflection. We also collect the Fully Aligned NIR-Visible Image Dataset, a large-scale dataset comprising fully matched pairs of NIR and visible images captured with a multi-sensor coaxial camera. Empirical evaluations demonstrate our method's superiority over existing methods, producing visually compelling results on mainstream datasets. Code is accessible at: https://github.com/Yiiclass/Sherry

Related Material

[pdf] [DOI]