Journal of Atmospheric and Environmental Optics ›› 2023, Vol. 18 ›› Issue (5): 469-478.

Infrared and visible image fusion with spatial multi-scale residual networks

ZHANG Yimen 1, LIN Weiguo 2*   

  1. Beijing System Design Institute of Electro Mechanic Engineering, Beijing 100005, China; 2. College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
  • Received: 2022-02-17  Revised: 2022-04-08  Online: 2023-09-28  Published: 2023-10-11
  • Contact: LIN Weiguo  E-mail: linwg@mail.buct.edu.cn

Abstract: To fully extract and fuse the typical features of infrared and visible images, an image fusion algorithm based on a spatial multi-scale residual network is proposed. First, the source images are fed into an encoder network composed of spatial multi-scale residual modules, which is trained on an image-reconstruction task so that it learns to extract salient features automatically. Then, a feature pyramid and channel self-attention are introduced: the base-layer and detail-layer outputs of the encoder are fused to suppress scale noise, and the fused image is reconstructed by the decoder. Finally, qualitative and quantitative experiments on public datasets demonstrate that the proposed algorithm outperforms the alternatives in highlighting infrared targets and preserving the texture details of visible images. Compared with the DDcGAN algorithm, the standard deviation and average gradient of the proposed algorithm are improved by 12.91% and 47.41%, respectively.
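
The two building blocks named in the abstract, a spatial multi-scale residual module and channel self-attention applied before fusion, can be illustrated with a minimal PyTorch sketch. The module names, kernel sizes, channel counts, and reduction ratio below are illustrative assumptions for exposition, not the authors' published implementation.

# Minimal sketch, assuming 64-channel feature maps; all hyperparameters are hypothetical.
import torch
import torch.nn as nn


class SpatialMultiScaleResidualBlock(nn.Module):
    """Extract features at several receptive fields and add a residual skip."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Parallel branches with different kernel sizes capture multi-scale spatial detail.
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat(
            [self.act(self.branch3(x)), self.act(self.branch5(x)), self.act(self.branch7(x))],
            dim=1,
        )
        return self.act(self.fuse(multi) + x)  # residual connection


class ChannelSelfAttentionFusion(nn.Module):
    """Reweight concatenated infrared/visible feature channels before merging them."""

    def __init__(self, channels: int = 64, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
        cat = torch.cat([feat_ir, feat_vis], dim=1)
        weights = self.mlp(self.pool(cat))  # per-channel attention weights in (0, 1)
        return self.merge(cat * weights)


if __name__ == "__main__":
    # Toy usage: fuse 64-channel encoder features from two 256x256 source images.
    ir, vis = torch.randn(1, 64, 256, 256), torch.randn(1, 64, 256, 256)
    block = SpatialMultiScaleResidualBlock(64)
    fusion = ChannelSelfAttentionFusion(64)
    fused = fusion(block(ir), block(vis))
    print(fused.shape)  # torch.Size([1, 64, 256, 256])

In such a design, the parallel convolution branches play the role of the spatial multi-scale extraction, while the pooled attention weights let the network emphasize infrared target channels or visible texture channels before the decoder reconstructs the fused image.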

Key words: image fusion, auto-encoder, spatial multi-scale residual module, channel self-attention
