Infrared images distinguish targets from the background by differences in thermal radiation and generally work under any lighting condition. Visible images, in comparison, provide abundant detail consistent with the human visual system. In this study, a novel approach for the fusion of infrared and visible images is proposed, built on a dual-discriminator generative adversarial network with an attention module (DDGANA). Our approach establishes adversarial training between one generator and two discriminators. The generator aims to output a fused result that carries both contrast information and texture details, while the two discriminators aim to increase the similarity between the generated image and the infrared and visible source images, respectively. After successive rounds of adversarial training, DDGANA outputs fused images that preserve the high contrast of the infrared image and the abundant texture detail of the visible image. Experiments on the TNO dataset show that our approach obtains improved performance over competing approaches.
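The dual-discriminator scheme described above can be sketched as follows. This is a minimal NumPy illustration assuming standard binary cross-entropy adversarial losses; the abstract does not specify the paper's exact loss formulation, and the discriminator score values here are placeholders.

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy on sigmoid probabilities
    eps = 1e-12
    return -np.mean(target * np.log(pred + eps)
                    + (1 - target) * np.log(1 - pred + eps))

# Hypothetical discriminator outputs: probability that the input is a
# real source image rather than the generator's fused image.
d_ir_real,  d_ir_fake  = np.array([0.9]), np.array([0.2])  # infrared discriminator
d_vis_real, d_vis_fake = np.array([0.8]), np.array([0.3])  # visible discriminator

ones, zeros = np.ones(1), np.zeros(1)

# Each discriminator pushes its source image toward "real" (1)
# and the fused image toward "fake" (0).
loss_d_ir  = bce(d_ir_real, ones)  + bce(d_ir_fake, zeros)
loss_d_vis = bce(d_vis_real, ones) + bce(d_vis_fake, zeros)

# The generator tries to fool BOTH discriminators at once, so the fused
# image must stay similar to the infrared and the visible image.
loss_g = bce(d_ir_fake, ones) + bce(d_vis_fake, ones)
```

Because the generator's loss sums terms from both discriminators, it cannot satisfy one at the expense of the other, which is what drives the fused output to keep infrared contrast and visible detail simultaneously.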