In recent years, infrared and visible image fusion has been actively explored owing to its advantages in various vision-based applications. However, existing fusion methods use only the output of the last layer of the encoding network, so much of the important information in the intermediate layers is lost and fusion quality suffers. In this paper, a meta-upscale-based end-to-end deep framework for infrared and visible image fusion is proposed. The meta-upscale module can increase the resolution of multi-scale features without retraining the whole model for each scale factor. Firstly, in the proposed Multi-scale Feature Extraction Module (MFEM), multi-scale deep features of each source image are extracted and up-scaled by the meta-upscale module. Secondly, in the Multi-scale Feature Fusion Module (MFM), an L1-norm strategy-based feature fusion method is developed to fuse the features at each scale. Thirdly, the Multi-scale Feature Compensation Module (MFCM) employs dense skip connections, which preserve significant amounts of information from the input data from a multi-scale perspective. In addition, a new content retention loss is proposed to further improve contrast enhancement. Experiments demonstrate that the proposed method achieves significant results compared with state-of-the-art algorithms.
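As a rough illustration of the meta-upscale idea (in the spirit of Meta-SR, which the abstract's description resembles), the sketch below uses a small MLP to predict a per-output-pixel convolution filter from the pixel's relative offset and the scale factor, so a single module can upscale feature maps by arbitrary factors. The module name, layer sizes, and coordinate encoding are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaUpscale(nn.Module):
    """Illustrative meta-upscale module (a Meta-SR-style assumption):
    an MLP predicts a k x k conv filter for every output pixel from its
    relative offset and the scale factor, so one trained module handles
    arbitrary scale factors without per-scale retraining."""

    def __init__(self, in_ch, out_ch, k=3, hidden=256):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        self.weight_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, out_ch * in_ch * k * k),
        )

    def forward(self, feat, scale):
        b, c, h, w = feat.shape
        H, W = int(h * scale), int(w * scale)
        dev = feat.device
        # Map every output pixel back to input coordinates; the fractional
        # part is its offset inside the source pixel.
        ys = torch.arange(H, device=dev).float() / scale
        xs = torch.arange(W, device=dev).float() / scale
        pos = torch.stack([
            (ys - ys.floor())[:, None].expand(H, W),
            (xs - xs.floor())[None, :].expand(H, W),
            torch.full((H, W), 1.0 / scale, device=dev),
        ], dim=-1)                                            # (H, W, 3)
        weights = self.weight_mlp(pos.view(-1, 3))            # (H*W, oc*ic*k*k)
        weights = weights.view(H * W, self.out_ch, c * self.k ** 2)
        # Gather the k x k input patch under each output pixel.
        patches = F.unfold(feat, self.k, padding=self.k // 2)   # (b, c*k*k, h*w)
        iy = ys.floor().long().clamp(max=h - 1)
        ix = xs.floor().long().clamp(max=w - 1)
        idx = (iy[:, None] * w + ix[None, :]).reshape(-1)       # (H*W,)
        patches = patches[:, :, idx]                            # (b, c*k*k, H*W)
        # Apply the predicted per-pixel filters.
        out = torch.einsum('bcn,noc->bon', patches, weights)
        return out.view(b, self.out_ch, H, W)
```

Usage would look like `up = MetaUpscale(64, 64); hr = up(feat, 1.5)`; because the filters are predicted from the scale-dependent coordinates rather than learned per scale, the same weights serve non-integer and unseen factors.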
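For the L1-norm fusion strategy in the MFM, a minimal sketch of the common formulation (channel-wise L1 activity maps, locally averaged and normalized into per-pixel weights, as popularized by DenseFuse) is given below; the window size and the normalization scheme are assumptions, since the abstract does not fix these details.

```python
import torch
import torch.nn.functional as F

def l1_norm_fuse(feat_ir, feat_vis, win=3, eps=1e-8):
    """Sketch of an L1-norm activity-based fusion rule: the channel-wise
    L1 norm serves as a per-pixel activity measure, local averaging
    smooths it, and the normalized activities weight the two sources."""
    act_ir = feat_ir.abs().sum(dim=1, keepdim=True)      # (b, 1, h, w)
    act_vis = feat_vis.abs().sum(dim=1, keepdim=True)
    act_ir = F.avg_pool2d(act_ir, win, stride=1, padding=win // 2)
    act_vis = F.avg_pool2d(act_vis, win, stride=1, padding=win // 2)
    w_ir = act_ir / (act_ir + act_vis + eps)             # per-pixel weight
    return w_ir * feat_ir + (1.0 - w_ir) * feat_vis
```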