Several deep learning (DL) models and machine learning methods have been used in the literature to diagnose breast cancer types. Rakhlin et al., 2018 presented a method using a deep convolutional neural network (CNN) to analyze histological images of breast cancer. By integrating several deep neural network architectures with an optimized decision tree classifier, the images were classified into four groups with an accuracy of 87.2% [11]. Platania et al., 2017 presented a method for automated breast cancer diagnosis using deep learning and region-of-interest detection (BC-DROID), which provides automated region-of-interest detection and diagnosis in mammography and MRI using a CNN with a classification accuracy of 93.5%. To achieve higher accuracy, the CNN was trained on regions annotated by physicians; the proposed system could then classify the selected areas as benign or malignant in one step. For rapid breast cancer recognition with neural networks, features were first reduced by independent component analysis (ICA) and a support vector machine (SVM) was then used as the classifier, which improved classification efficiency while lowering computational cost. The SVM performance was also compared with other classifiers such as artificial neural networks (ANN), K-nearest neighbors (KNN), and the radial basis function network (RBFN). The best performance was achieved with the RBFN, with a classification accuracy of 90.49% using the reduced features obtained from ICA [12]. Rasti et al., 2017 reported a novel computer-aided diagnosis (CAD) system for breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) [13]. The CAD system is based on a mixed ensemble of convolutional neural networks (ME-CNN) to discriminate benign from malignant tumors. Their algorithm was a modular, image-based ensemble that stochastically partitions the high-dimensional image space through simultaneous and competitive learning of its modules. The system was evaluated on a database of 112 DCE-MRI studies containing solid breast masses, using various classification measures, and the ME-CNN model achieved an accuracy of 96.39%. Yurttakal et al., 2019 showed that CNNs outperform feature-based methods in image classification. A CNN was employed to classify lesions as malignant or benign using MRI images, achieving a classification accuracy of 98.33% using only pixel information and a multi-layer CNN architecture [14]. Deep-learning-based techniques have also been developed for suspicious region-of-interest (ROI) segmentation and classification using MRI modalities [15, 16].
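To make the feature-reduction step concrete, the following minimal sketch (an illustrative Python example with scikit-learn, not the code of [12]) applies ICA to a synthetic feature matrix and classifies the reduced features with an SVM; the array shapes, component count, and kernel choice are placeholder assumptions.

# Minimal sketch (not the cited authors' code): ICA feature reduction
# followed by an SVM classifier, assuming scikit-learn and a generic
# feature matrix X (n_samples x n_features) with binary labels y.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder image-derived features
y = rng.integers(0, 2, size=200)      # placeholder benign/malignant labels

clf = make_pipeline(
    StandardScaler(),
    FastICA(n_components=10, random_state=0),  # reduce features with ICA
    SVC(kernel="rbf"),                         # classify the reduced features
)
print(cross_val_score(clf, X, y, cv=5).mean())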
Drukker et al., 2020 evaluated a long short-term memory (LSTM) network to analyze breast cancer progression after neoadjuvant chemotherapy at two and five years post-surgery. After segmenting breast cancers in MR images, features describing the kinetic curves were extracted. The area under the ROC curve for prediction at two years post-surgery was 0.80 [17]. Fusco et al., 2020 studied 45 patients who underwent DCE-MRI before and after treatment. Among 11 extracted semi-quantitative parameters and 50 texture features, the standardized shape index gave the best results, with a ROC-AUC of 0.93 for distinguishing pathological from non-pathological responders [18]. Elanthirayan et al., 2021 examined breast MR slices to delineate the tumor region. They used a hybrid imaging procedure combining the Brain Storm Optimization algorithm, Shannon's entropy thresholding, and Active Contour (AC) for tumor segmentation. Experiments on 150 2D breast MRI slices yielded an average AC segmentation accuracy above 93% [19]. Typically, DL-based lesion classification is performed without precise segmentation. Antropova et al., 2016 combined a pre-trained CNN with an SVM for breast MR classification (see the sketch below) [20]. Beyond their results, other studies confirmed that fine-tuning all layers for breast MRI classification is necessary to attain high performance [14, 16, 21–25].
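The pre-trained-CNN-plus-SVM strategy can be illustrated with the following minimal sketch, assuming PyTorch/torchvision, a downloadable ImageNet ResNet-18 backbone used as a fixed feature extractor, and random placeholder ROIs; none of these specifics come from the cited work [20].

# Minimal sketch (assumptions: PyTorch/torchvision available, an ImageNet
# pre-trained ResNet-18 used as a fixed feature extractor, followed by an
# SVM on the extracted features).
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the ImageNet classification head
backbone.eval()

def extract_features(batch):        # batch: (N, 3, 224, 224) lesion ROIs
    with torch.no_grad():
        return backbone(batch).numpy()

# Hypothetical data: grayscale ROIs replicated to three channels to match
# the RGB input expected by the pre-trained network.
rois = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,)).numpy()

svm = SVC(kernel="linear").fit(extract_features(rois), labels)
print(svm.predict(extract_features(rois[:4])))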
Because pre-trained CNNs are trained on three-channel RGB ImageNet images while MR images are grayscale, various strategies have been proposed for composing the three-channel input to the pre-trained CNN, such as the three-time-points (3TP) method [22]; the precontrast, first postcontrast, and second postcontrast frames [17]; or DCE-MRI, T2-weighted MR, and diffusion-weighted imaging (DWI) [22, 23, 26]. Incorporating both the temporal and spatial features of breast MR is challenging for DL networks, especially with 2D images. In this regard, Antropova et al., 2018(a) employed maximum intensity projection (MIP) to incorporate volumetric information while preserving the enhancement evidence [27]. Hu et al., 2019 introduced a pooling layer that reduces dimensionality at the feature level rather than at the image level, as in the MIP approach [25]. Recurrent neural networks, such as long short-term memory (LSTM), have also proven practical for lesion classification [28–30]. For each ROI, morphological features extracted by a CNN at different time points can be used to train an LSTM that predicts the outcome from the entire DCE-MRI sequence. Some proposed CNN architectures address the 4D nature of DCE-MRI by exploiting 3D convolutional layers [31–34] and by extracting DCE-MRI features at multiple scales [32].
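The input-composition strategies above can be illustrated with a short sketch, assuming NumPy and a hypothetical DCE-MRI array shaped (time points, slices, height, width); the 3TP-style channel stacking and the MIP over a subtraction volume follow the general ideas cited above [22, 27], not the authors' exact implementations.

# Minimal sketch (illustrative, not from the cited works): forming a
# three-channel input for a pre-trained CNN from DCE-MRI time points, and a
# maximum intensity projection (MIP) over slices to fold volumetric
# information into a single 2D image. Array names and shapes are assumptions.
import numpy as np

# Hypothetical DCE-MRI volume: (time points, slices, height, width)
dce = np.random.rand(5, 40, 256, 256).astype(np.float32)

# 3TP-style stacking: precontrast, first and second postcontrast frames
# become the three channels expected by an RGB pre-trained CNN.
three_channel = np.stack([dce[0], dce[1], dce[2]], axis=-1)  # (slices, H, W, 3)

# MIP over the slice axis of a subtraction image (postcontrast - precontrast),
# keeping the strongest enhancement from the whole volume in one 2D image.
subtraction = dce[1] - dce[0]
mip = subtraction.max(axis=0)                                # (H, W)

print(three_channel.shape, mip.shape)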