[1] C. Huang et al., “Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China,” Lancet, vol. 395, no. 10223, pp. 497–506, 2020.
[2] “COVID Live - Coronavirus Statistics - Worldometer.” https://www.worldometers.info/coronavirus/ (accessed Feb. 24, 2022).
[3] Y. Fang et al., “Sensitivity of Chest CT for COVID-19: Comparison to RT-PCR,” Radiology, p. 200432, Feb. 2020.
[4] X. Xie, Z. Zhong, W. Zhao, C. Zheng, F. Wang, and J. Liu, “Chest CT for Typical 2019-nCoV Pneumonia: Relationship to Negative RT-PCR Testing,” Radiology, p. 200343, Feb. 2020.
[5] C. S. Guan et al., “Imaging Features of Coronavirus disease 2019 (COVID-19): Evaluation on Thin-Section CT,” Acad Radiol, vol. 27, no. 5, pp. 609–613, May 2020.
[6] A. Das, “Adaptive UNet-based Lung Segmentation and Ensemble Learning with CNN-based Deep Features for Automated COVID-19 Diagnosis,” Multimed Tools Appl, pp. 1–35, Dec. 2021.
[7] S. Park et al., “Multi-task vision transformer using low-level chest X-ray feature corpus for COVID-19 diagnosis and severity quantification,” Med Image Anal, vol. 75, p. 102299, Jan. 2022.
[8] A. Dosovitskiy et al., “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale,” arXiv:2010.11929 [cs], Jun. 2021.
[9] A. K. Mondal, A. Bhattacharjee, P. Singla, and A. P. Prathosh, “xViTCOS: Explainable Vision Transformer Based COVID-19 Screening Using Radiography,” IEEE J Transl Eng Health Med, vol. 10, p. 1100110, Jan. 2022.
[10] Z. Liu et al., “Swin Transformer: Hierarchical Vision Transformer using Shifted Windows,” arXiv:2103.14030 [cs], Aug. 2021.
[11] V. Perumal, V. Narayanan, and S. J. S. Rajasekar, “Detection of COVID-19 using CXR and CT images using Transfer Learning and Haralick features,” Appl Intell, vol. 51, no. 1, pp. 341–358, Jan. 2021.
[12] M.-R. Lascu, “Deep Learning in Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs on CXR and CT Images,” J Med Biol Eng, pp. 1–9, Jun. 2021.
[13] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv:1409.1556 [cs], Sep. 2014.
[14] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 770–778.
[15] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” arXiv:1608.06993 [cs], Jan. 2018.
[16] C. Szegedy et al., “Going deeper with convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1–9.
[17] F. Chollet, “Xception: Deep Learning with Depthwise Separable Convolutions,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul. 2017, pp. 1800–1807.
[18] A. G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv:1704.04861 [cs], Apr. 2017.
[19] M. Tan and Q. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” in Proceedings of the 36th International Conference on Machine Learning, May 2019, pp. 6105–6114.
[20] L. Brunese, F. Mercaldo, A. Reginelli, and A. Santone, “Explainable Deep Learning for Pulmonary Disease and Coronavirus COVID-19 Detection from X-rays,” Comput Methods Programs Biomed, vol. 196, p. 105608, Nov. 2020.
[21] A. Narin, C. Kaya, and Z. Pamuk, “Automatic Detection of Coronavirus Disease (COVID-19) Using X-ray Images and Deep Convolutional Neural Networks,” Pattern Anal Applic, vol. 24, no. 3, pp. 1207–1220, Aug. 2021.
[22] G. Wang et al., “A deep-learning pipeline for the diagnosis and discrimination of viral, non-viral and COVID-19 pneumonia from chest X-ray images,” Nat Biomed Eng, vol. 5, no. 6, pp. 509–521, Jun. 2021.
[23] A. Shamila Ebenezer, S. Deepa Kanmani, M. Sivakumar, and S. Jeba Priya, “Effect of image transformation on EfficientNet model for COVID-19 CT image classification,” Mater Today Proc, Dec. 2021.
[24] S. Sabour, N. Frosst, and G. E. Hinton, “Dynamic Routing Between Capsules,” arXiv:1710.09829 [cs], Nov. 2017.
[25] P. Afshar, S. Heidarian, F. Naderkhani, A. Oikonomou, K. N. Plataniotis, and A. Mohammadi, “COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images,” Pattern Recognit Lett, vol. 138, pp. 638–643, Oct. 2020.
[26] S. Toraman, T. B. Alakus, and I. Turkoglu, “Convolutional capsnet: A novel artificial neural network approach to detect COVID-19 disease from X-ray images using capsule networks,” Chaos Solitons Fractals, vol. 140, p. 110122, Nov. 2020.
[27] Y. Dong, J.-B. Cordonnier, and A. Loukas, “Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth,” arXiv:2103.03404 [cs], Mar. 2021.
[28] D. Shome et al., “COVID-Transformer: Interpretable COVID-19 Detection Using Vision Transformer for Healthcare,” Int J Environ Res Public Health, vol. 18, no. 21, p. 11086, Oct. 2021.
[29] K. Zhang et al., “Clinically Applicable AI System for Accurate Diagnosis, Quantitative Measurements, and Prognosis of COVID-19 Pneumonia Using Computed Tomography,” Cell, vol. 181, no. 6, pp. 1423-1433.e11, Jun. 2020.
[30] W. Ning et al., “Open resource of clinical data from patients with pneumonia for the prediction of COVID-19 outcomes via deep learning,” Nat Biomed Eng, vol. 4, no. 12, pp. 1197–1207, 2020.
[31] M. Rahimzadeh, A. Attar, and S. M. Sakhaei, “A fully automated deep learning-based network for detecting COVID-19 from a new and large lung CT scan dataset,” Biomedical Signal Processing and Control, vol. 68, p. 102588, Jul. 2021.
[32] K. Clark et al., “The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository,” J Digit Imaging, vol. 26, no. 6, pp. 1045–1057, Dec. 2013.
[33] J. Ma et al., “COVID-19 CT Lung and Infection Segmentation Dataset.” Zenodo, Apr. 20, 2020.
[34] S. G. Armato III et al., “Data From LIDC-IDRI.” The Cancer Imaging Archive, 2015.
[35] “Radiopaedia.org, the wiki-based collaborative Radiology resource,” Radiopaedia. https://radiopaedia.org/ (accessed Feb. 06, 2022).
[36] S. P. Morozov et al., “MosMedData: Chest CT Scans With COVID-19 Related Findings Dataset.” 2020. Accessed: Feb. 06, 2022.
[37] X. Li, Y. Zhou, P. Du, G. Lang, M. Xu, and W. Wu, “A deep learning system that generates quantitative CT reports for diagnosing pulmonary Tuberculosis,” Appl Intell, vol. 51, pp. 1–12, Jun. 2021.
[38] F. Zhuang et al., “A Comprehensive Survey on Transfer Learning,” Proceedings of the IEEE, vol. 109, no. 1, pp. 43–76, Jan. 2021.
[39] Y. Brima, M. Atemkeng, S. Tankio Djiokap, J. Ebiele, and F. Tchakounté, “Transfer Learning for the Detection and Diagnosis of Types of Pneumonia including Pneumonia Induced by COVID-19 from Chest X-ray Images,” Diagnostics (Basel), vol. 11, no. 8, Aug. 2021.
[40] Y. Gu, Z. Piao, and S. J. Yoo, “STHarDNet: Swin Transformer with HarDNet for MRI Segmentation,” Applied Sciences, vol. 12, no. 1, Art. no. 1, Jan. 2022.
[41] O. Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, Dec. 2015.
[42] B. Heo, S. Yun, D. Han, S. Chun, J. Choe, and S. J. Oh, “Rethinking Spatial Dimensions of Vision Transformers,” arXiv:2103.16302 [cs], Aug. 2021.
[43] H. Fan et al., “Multiscale Vision Transformers,” arXiv:2104.11227 [cs], Apr. 2021.
[44] B. Wang, Q. Xie, J. Pei, P. Tiwari, Z. Li, and J. Fu, “Pre-trained Language Models in Biomedical Domain: A Systematic Survey,” arXiv:2110.05006 [cs], Oct. 2021.
[45] H. Farhat, G. E. Sakr, and R. Kilany, “Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19,” Mach Vis Appl, vol. 31, no. 6, p. 53, 2020.
[46] M. T. McCann, K. H. Jin, and M. Unser, “A Review of Convolutional Neural Networks for Inverse Problems in Imaging,” IEEE Signal Process. Mag., vol. 34, no. 6, pp. 85–95, Nov. 2017.
[47] E. Irmak, “COVID‐19 disease severity assessment using CNN model,” IET Image Process, vol. 15, no. 8, pp. 1814–1824, Jun. 2021, doi: 10.1049/ipr2.12153.
[48] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems, vol. 25, 2012.
[49] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul. 2017, pp. 2261–2269. doi: 10.1109/CVPR.2017.243.
[50] Z. Tao, H. Bingqiang, L. Huiling, Y. Zaoli, and S. Hongbin, “NSCR-Based DenseNet for Lung Tumor Recognition Using Chest CT Image,” Biomed Res Int, vol. 2020, p. 6636321, Dec. 2020.
[51] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, “Squeeze-and-Excitation Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 8, pp. 2011–2023, Aug. 2020.
[52] P. Ramachandran, B. Zoph, and Q. V. Le, “Searching for Activation Functions,” arXiv:1710.05941 [cs], Oct. 2017.
[53] L. van der Maaten and G. Hinton, “Visualizing Data using t-SNE,” Journal of Machine Learning Research, vol. 9, pp. 2579–2605, Nov. 2008.
[54] M. Shorfuzzaman, M. Masud, H. Alhumyani, D. Anand, and A. Singh, “Artificial Neural Network-Based Deep Learning Model for COVID-19 Patient Detection Using X-Ray Chest Images,” J Healthc Eng, vol. 2021, p. 5513679, Jun. 2021.
[55] B. Melit Devassy and S. George, “Dimensionality reduction and visualisation of hyperspectral ink data using t-SNE,” Forensic Science International, vol. 311, p. 110194, Jun. 2020, doi: 10.1016/j.forsciint.2020.110194.
[56] S. Kullback, Information Theory and Statistics. New York: Wiley, 1959.
[57] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization,” in 2017 IEEE International Conference on Computer Vision (ICCV), Oct. 2017, pp. 618–626.
[58] G. Wang et al., “A deep-learning pipeline for the diagnosis and discrimination of viral, non-viral and COVID-19 pneumonia from chest X-ray images,” Nat Biomed Eng, vol. 5, no. 6, pp. 509–521, Jun. 2021.