Lung cancer is an abnormal growth of cells that proliferate uncontrollably. In a medical diagnostic system, the precise identification of lung cancer is crucial. Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) are the most common imaging modalities used for diagnosis. Because of the limited sensitivity of border pixels in PET and MRI images, detecting lung cancer from either modality alone can be difficult. Image fusion was therefore developed to combine several modalities so that the disease can be identified and treated effectively. However, merging images from multiple modalities has long been problematic in medicine because the fused image can contain distorted spectral information. To circumvent this issue, this research presents an optimal pixel-level image fusion approach for merging lung cancer images obtained from several modalities. The proposed methodology comprises four phases: pre-processing, multi-modality image fusion, feature extraction, and classification. PET and MRI images are first collected and pre-processed. The optimal pixel-level fusion method then merges the PET and MRI images; here, an adaptive tree seed optimization (ATSO) algorithm is used to select the fusion parameter of the approach so as to improve the fusion model. After fusion, texture features are extracted from the fused image. A deep extreme learning machine (DELM) classifier then uses the extracted features to label an image as normal or abnormal. The effectiveness of the proposed methodology is assessed on a variety of criteria and compared with previous state-of-the-art studies.
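The core idea above can be illustrated with a minimal sketch: pixel-level fusion as a weighted blend of the two modalities, with the fusion weight chosen to maximize a quality score. The function names, the standard-deviation quality score, and the grid search (standing in for the paper's ATSO metaheuristic) are all illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def fuse_pixel_level(pet, mri, alpha):
    """Weighted pixel-level fusion: alpha blends PET and MRI intensities."""
    return alpha * pet + (1.0 - alpha) * mri

def fusion_quality(img):
    """Stand-in quality score: standard deviation (contrast) of the fused image.
    Real fusion work would use richer metrics (entropy, mutual information, etc.)."""
    return float(np.std(img))

def search_alpha(pet, mri, candidates):
    """Pick the fusion weight that maximizes the quality score.
    A plain grid search is used here in place of the ATSO optimizer."""
    return max(candidates,
               key=lambda a: fusion_quality(fuse_pixel_level(pet, mri, a)))

# Toy 4x4 "scans" (random data, not real PET/MRI images).
rng = np.random.default_rng(0)
pet = rng.random((4, 4))
mri = rng.random((4, 4))

alpha = search_alpha(pet, mri, np.linspace(0.0, 1.0, 11))
fused = fuse_pixel_level(pet, mri, alpha)
```

In the paper's pipeline, the fused image would then feed the texture-feature extraction and DELM classification stages; this sketch only shows how a single scalar fusion parameter can be tuned by an outer optimizer.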