Yan et al. [16] developed a computer-aided detection model for breast cancer, pairing a purpose-built algorithm with a carefully labeled dataset for model training. The authors present a feature-sensitive deep convolutional neural network for breast cancer diagnosis that is trained end to end. The first step of the procedure extracts salient image characteristics using a pre-trained model enhanced with custom layers. A feature fusion module then determines the weighting of each feature vector. By allowing the different instances of each case to influence the classifier to different degrees, this adaptive feature weighting technique ultimately improves the accuracy of breast cancer identification.
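The adaptive weighting idea can be sketched as a softmax-scored fusion of per-instance feature vectors. This is only an illustrative simplification: the scoring vector and feature dimensions below are placeholders, not the learned attention layer of [16].

```python
import numpy as np

def fuse_features(instances, score_w):
    """Weight each instance's feature vector by a relevance score and fuse.

    instances: (n_instances, d) feature matrix for one case
    score_w:   (d,) scoring vector (stands in for a learned attention layer)
    """
    scores = instances @ score_w            # one relevance score per instance
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax: weights sum to 1
    return weights, weights @ instances     # weighted fusion -> (d,) case vector

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))             # 5 instances, 8-dim features
w, fused = fuse_features(feats, rng.normal(size=8))
```

Instances with higher scores dominate the fused case-level representation, which is what lets some occurrences of a case matter more than others.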
Xiang Yu et al. [17] implemented a new data augmentation methodology called SCDA (Scaling and Contrast-Limited Adaptive Histogram Equalization Data Augmentation), which can improve the accuracy and robustness of machine learning models by introducing more variation into the data. Overall, SCDA is a new data augmentation technique that can improve the performance of machine learning models, particularly in computer vision tasks involving image processing.
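A minimal sketch of the two SCDA ingredients, scaling and contrast-limited equalization, might look as follows. Note the simplifications: the equalization here is global (one tile, so the "adaptive" tiling of CLAHE is omitted), and the scale factor and clip limit are illustrative values, not those of [17].

```python
import numpy as np

def clipped_hist_equalize(img, clip_frac=0.01, n_bins=256):
    """Global contrast-limited histogram equalization (simplified CLAHE:
    a single tile, no adaptive neighbourhoods). img: uint8 2-D array."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0, 256))
    limit = max(1, int(clip_frac * img.size))
    excess = np.clip(hist - limit, 0, None).sum()       # clip tall bins...
    hist = np.minimum(hist, limit) + excess // n_bins   # ...and redistribute
    cdf = hist.cumsum()
    lut = np.round((cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255)
    return lut.astype(np.uint8)[img]

def scda(img, scale=1.2):
    """SCDA-style augmentation: nearest-neighbour scaling, then equalization."""
    h, w = img.shape
    rows = (np.arange(int(h * scale)) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(int(w * scale)) / scale).astype(int).clip(0, w - 1)
    return clipped_hist_equalize(img[np.ix_(rows, cols)])

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
aug = scda(img)
```

Clipping the histogram before building the mapping is what bounds the contrast amplification, the property that distinguishes contrast-limited equalization from plain histogram equalization.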
Xin Shu et al. [18] developed an innovative approach for classifying mammograms without the need for mask ground-truth labels. Convolutional neural networks (CNNs) are used in their end-to-end full-image mammography classification technique, which performs better than previous state-of-the-art classifiers and detection techniques that rely on segmentation annotations. This breakthrough has the potential to improve mammographic image analysis and enhance breast cancer detection.
Ritabrata Sanyal et al. [19] presented a Hybrid Ensemble Framework for Patch-wise Classification of High-Resolution Breast Histopathology Images using Multiple Fine-tuned CNNs and XGBoost. In this system, fine-tuned convolutional neural network (CNN) architectures serve as supervised feature extractors, and extreme gradient boosting trees (XGBoost) act as the top-level classifier. Experimental results show that the proposed strategy outperforms state-of-the-art methods in patch-wise classification of high-resolution breast histopathology images.
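The "CNN features in, boosted trees on top" pattern can be sketched as below. This is an assumption-laden stand-in: synthetic vectors replace the CNN-extracted patch features, and scikit-learn's GradientBoostingClassifier substitutes for XGBoost, which implements the same gradient-boosted-trees idea.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in for features produced by fine-tuned CNN backbones: two classes
# whose synthetic "deep features" are separated by a mean shift.
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 32))
y = (rng.random(400) < 0.5).astype(int)
X[y == 1] += 1.5                                   # make the classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)                                # boosted trees as top-level classifier
acc = clf.score(X_te, y_te)
```

Decoupling the feature extractor from the final classifier is what makes such a framework "hybrid": either half can be swapped out independently.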
Q. Abbas [20] introduced DeepCAD, a cutting-edge CAD solution that uses a four-phase technique to address a variety of problems. The technique retrieves descriptors using Local Binary Pattern Variance (LBPV) and Speeded-Up Robust Features (SURF) measures. These descriptors are then converted into invariant features, ensuring their stability and reliability for further analysis. DeepCAD subsequently learns Deep Invariant Features (DIFs) with a multilayer deep-learning architecture, in both supervised and unsupervised ways, to further improve performance, attaining sensitivity, specificity, accuracy, and AUC of 92%, 84.2%, 91.5%, and 0.91, respectively.
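The LBPV ingredient pairs local binary pattern codes with a local variance measure; a simplified 8-neighbour version is sketched below. This is only an illustration of the descriptor family, not the exact pipeline of [20].

```python
import numpy as np

def lbp_and_variance(img):
    """Basic 8-neighbour Local Binary Pattern codes plus the local
    variance (VAR) term that LBPV pairs with the pattern histogram."""
    c = img[1:-1, 1:-1].astype(float)               # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    neigh = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(float)
                      for dy, dx in offsets])        # (8, h-2, w-2) neighbours
    # Each neighbour >= centre contributes one bit of the 8-bit code.
    codes = sum((neigh[k] >= c).astype(int) << k for k in range(8))
    var = neigh.var(axis=0)                          # local contrast measure
    return codes, var

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
codes, var = lbp_and_variance(img)
```

In LBPV proper, the variance map weights each pixel's vote when the pattern histogram is accumulated, so high-contrast patterns count for more.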
Sha et al. [21] presented an approach that applies the Grasshopper Optimisation Algorithm to image noise reduction, to convolutional-neural-network-based image segmentation, and to feature extraction and selection. The suggested method offers a practical solution for localizing cancerous areas in mammography images, enhancing precision and lowering computing costs through these strategies. The technique aims to automatically identify and categorize malignant areas in breast images, and it performed admirably, with sensitivity, specificity, and accuracy rates of 96%, 93%, and 92%, respectively.
Sheetal Rajpal et al. [22] proposed a ground-breaking deep-learning framework for biomarker detection and breast cancer subtype diagnosis. The framework consists of three stages: in the first stage, an autoencoder is used to obtain a compact representation of the gene expression data; the second stage categorizes breast cancer subtypes using a supervised feed-forward neural network; and in the third stage, an algorithm called the Biomarker Gene Discovery Algorithm (BGDA) is proposed to determine the significance of individual genes. The framework obtained a mean accuracy of 0.899 with a confidence interval of 0.04 using 10-fold cross-validation.
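The first stage, compressing expression profiles with an autoencoder, can be sketched with a tiny linear autoencoder trained by gradient descent. Everything here is illustrative: the data are synthetic, the network is linear for brevity, and the dimensions (20 genes to 5 features) are not those of [22].

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 20))              # toy stand-in for expression profiles
X -= X.mean(axis=0)                         # centre the data

d, k = X.shape[1], 5                        # compress 20 "genes" to 5 features
W_enc = rng.normal(scale=0.1, size=(d, k))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, d))  # decoder weights

init_mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
lr = 0.01
for _ in range(500):                        # plain gradient descent on MSE
    Z = X @ W_enc                           # compact representation (the useful output)
    err = Z @ W_dec - X                     # reconstruction error
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)
final_mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
```

After training, `Z` is the compact representation that a downstream classifier (stage two) would consume; reconstruction error only serves as the training signal.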
Mondol et al. [23] used an adversarial auto-encoder (AAE) within a dual-stage neural network architecture called AFExNet to extract features from high-dimensional genetic data. Using a publicly available RNA-Seq breast cancer dataset, they tested the effectiveness of their model with twelve alternative supervised classifiers. AFExNet performed consistently well across all twelve classifiers and all performance measures, and the feature extractor itself is classifier-independent. They also created a technique called "TopGene" to locate highly weighted genes in the latent space, which may help identify cancer biomarkers. AFExNet thus offers considerable potential for properly and efficiently extracting features from biological data.
Guangli Li et al. [24] proposed a new model called Multi-View Attention-Guided Multiple Instance Detection Networks, which divides each histopathology image into instances to exploit high-resolution information fully. The algorithm's Multi-View Attention (MVA) mechanism identifies and localizes lesion patches within the image by focusing on particular instances. Combining this with an MVA-guided Multiple Instance Learning (MIL) pooling technique allows instance-level characteristics to be aggregated for subsequent bag-level classification. The model achieved better localization results without sacrificing classification performance, making it highly practical.
Amin Ul Haq et al. [25] proposed supervised and unsupervised techniques for selecting relevant features from a dataset, using the PCA and Relief algorithms. These features are closely connected with accurate breast cancer diagnosis. The suggested approach obtained 99.91% accuracy on the features chosen by the Relief algorithm, achieving excellent results in terms of accuracy.
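The Relief idea, rewarding features that separate a sample from its nearest neighbour of the other class while penalizing those that separate it from its nearest same-class neighbour, can be sketched as follows. The distance metric, iteration count, and synthetic data are assumptions for illustration.

```python
import numpy as np

def relief_weights(X, y, n_iter=100, rng=None):
    """Basic Relief feature weighting: nearest 'hit' (same class) pulls a
    feature's weight down, nearest 'miss' (other class) pushes it up."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)        # L1 distance to all samples
        dists[i] = np.inf                           # exclude the sample itself
        same = y == y[i]
        hit = np.where(same, dists, np.inf).argmin()
        miss = np.where(~same, dists, np.inf).argmin()
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

rng = np.random.default_rng(5)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 6))
X[:, 0] += 3 * y                                    # feature 0 is informative
w = relief_weights(X, y, rng=rng)
```

Feature selection then simply keeps the features with the highest weights, and a classifier is trained on that reduced set.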
Yasin Yari et al. [26] proposed deep transfer learning models that leverage pre-trained DCNNs to enhance binary and multiclass classification in breast cancer detection. By utilizing pre-trained weights from ResNet50 and DenseNet on the ImageNet dataset, the models were fine-tuned with a deep classifier and data augmentation techniques to identify malignant and benign tissues. The system achieved impressive accuracies of up to 98% in multiclass classification and up to 100% in binary classification.
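The core transfer-learning pattern here, freezing a pre-trained feature extractor and training only a new classifier head, can be sketched without any deep-learning framework. Everything below is a toy stand-in: a fixed random projection plays the role of the frozen ResNet50/DenseNet backbone, and labels are drawn from a hidden linear rule so the example is learnable.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 50))                 # stand-in for flattened input images

W_backbone = 0.1 * rng.normal(size=(50, 16))   # frozen "pre-trained" backbone
feats = np.tanh(X @ W_backbone)                # fixed feature extractor (never updated)

w_true = rng.normal(size=16)                   # hidden rule generating toy labels
y = (feats @ w_true > 0).astype(float)

w, b = np.zeros(16), 0.0                       # only this new head is trained
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b))) # logistic classifier head
    grad = p - y
    w -= 0.1 * feats.T @ grad / len(y)         # gradient step on the head only
    b -= 0.1 * grad.mean()

acc = float((((feats @ w + b) > 0) == (y > 0.5)).mean())
```

Fine-tuning proper would additionally unfreeze some backbone layers at a small learning rate, which is where the pre-trained ImageNet weights pay off.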
Zhiqiong Wang et al. [27] developed a technique for identifying breast masses using CNN deep features in combination with unsupervised Extreme Learning Machine (ELM) clustering. By combining deep features with morphological, texture, and density characteristics, they subsequently created a complete feature set. Finally, the suggested technique for mass detection and breast cancer classification has undergone extensive testing to demonstrate its accuracy and effectiveness.
Irum Hirra et al. [28] introduced Pa-DBN-BC, a novel approach for breast cancer detection and classification in histopathology images. Their method, based on Deep Belief Networks (DBNs), automatically extracts features from image patches using unsupervised pre-training and supervised fine-tuning. The proposed approach has the potential to enhance breast cancer analysis, as evidenced by its accuracy of 86% on a variety of datasets, which is better than both earlier deep learning techniques and conventional methods.
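The patch-wise workflow, splitting a whole image into patches and scoring each one to form a probability matrix, can be sketched as below. A toy intensity-based scorer stands in for the trained DBN, and the patch size is an arbitrary choice for illustration.

```python
import numpy as np

def patch_probability_map(img, patch=8, score=None):
    """Split an image into non-overlapping patches and score each one,
    yielding a patch-level probability matrix (a toy mean-intensity
    scorer stands in for a real patch classifier)."""
    if score is None:
        score = lambda p: p.mean() / 255.0           # placeholder "classifier"
    h, w = img.shape
    rows, cols = h // patch, w // patch
    probs = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            probs[i, j] = score(img[i*patch:(i+1)*patch, j*patch:(j+1)*patch])
    return probs

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
probs = patch_probability_map(img)
```

The resulting matrix doubles as a coarse localization map: high-probability cells indicate which regions of the slide drove the whole-image decision.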
Farnoos Azoor et al. [29] combined EfficientNet with other pre-trained models. By using pre-trained CNN-based models, they enhanced accuracy, notably achieving greater performance with fewer parameters, and network resilience was increased further using ensemble learning. Their tests showed encouraging test accuracies of 96.05% for abnormality-type classification and 85.71% for pathology diagnostic classification using 10-fold cross-validation. This study demonstrates how pre-trained models and ensemble learning are beneficial for correctly classifying medical images.
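The ensembling step can be sketched as soft voting, averaging class probabilities across models. The three toy "model outputs" below are random placeholders for the predictions of the different pre-trained CNNs; the class count and model count are arbitrary.

```python
import numpy as np

def soft_vote(prob_stacks):
    """Average per-class probabilities from several models (soft voting)."""
    return np.mean(prob_stacks, axis=0)

rng = np.random.default_rng(8)
# Three toy "models", each emitting a (10 samples x 3 classes) probability table.
models = [rng.dirichlet(np.ones(3), size=10) for _ in range(3)]
avg = soft_vote(models)
pred = avg.argmax(axis=1)                    # final ensemble decision per sample
```

Averaging probabilities rather than hard labels lets a confident model outvote two uncertain ones, which is typically where the resilience gain comes from.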
Daniel et al. [30] presented an EfficientNet-based convolutional network on the CBIS-DDSM dataset. The suggested model attained an AUC of 0.93 and an accuracy of 85.13% using a 5-fold cross-validation approach. Compared with VGG and ResNet, they found that the modern EfficientNet-based model performed well.
H. U. Khan et al. [31] created a breast cancer classification algorithm for histopathology images that makes use of a multi-scale feature fusion strategy. Their model is made up of two blocks, each containing three lightweight sub-models, for a total of six sub-models. This architecture is designed to gather information from input images at different scales, improving its capacity to correctly classify cases of breast cancer. Evaluation with different performance measures shows that this model produces better results than existing approaches.
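The fusion idea, extracting features at several scales and concatenating them, can be sketched as below. Simple average pooling stands in for the six lightweight sub-models, and the scales chosen are illustrative.

```python
import numpy as np

def multiscale_features(img, scales=(1, 2, 4)):
    """Pool the image at several scales and concatenate the results --
    the basic shape of multi-scale feature fusion (average pooling
    stands in for learned sub-model features)."""
    feats = []
    for s in scales:
        h, w = img.shape[0] // s, img.shape[1] // s
        # Block-average pooling with an s x s window.
        pooled = img[:h*s, :w*s].reshape(h, s, w, s).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)             # fused multi-scale feature vector

rng = np.random.default_rng(9)
img = rng.random((8, 8))
f = multiscale_features(img)                 # 64 + 16 + 4 = 84 values
```

A classifier trained on the concatenated vector sees both fine detail (small pooling windows) and coarse context (large ones), which is the motivation for fusing scales.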