| Ref. | Year | Classifiers / Methods | Dataset Used | No. of Selected Attributes | Inferences / Benefits | Drawbacks |
|------|------|-----------------------|--------------|----------------------------|-----------------------|-----------|
| [7] | 2007 | Feed-forward backpropagation network | Not specified | 13 | Unlabelled data were fed to obtain accurate classification; 100% accuracy achieved. | Small dataset (78 records); no performance metric is evaluated; human involvement in testing is preferred. |
| [8] | 2007 | TAN, STAN, C4.5, CMAR and SVM | Not specified | 8 | SVM showed the best accuracy (90.9%) among all. | Small dataset (193 records); the accuracy of each classifier varies across three different recumbent positions. |
| [9] | 2009 | Neural networks using LM, SCG, and CGP algorithms | Cleveland | 13 | Classification accuracy is 89.01%; specificity is 95.91%. | No other performance metric is evaluated to standardize the results; high space complexity. |
| [10] | 2010 | NB, DL, KNN | Kaggle Framingham | 14 | Large dataset (4,240 instances); Naïve Bayes performed best, achieving an accuracy of 52.33%. | The obtained accuracy (52.33%) is very low; no other performance metric is evaluated. |
| [11] | 2013 | Backpropagation network | Cleveland | 13 | Achieved 92% accuracy on the 10th run of the algorithm with different seed numbers. | Small training (116 records) and testing (50 records) sets; no feature extraction process. |
| [12] | 2014 | NB, DT-GI, SVM | Cleveland | 13 | Accuracy of the majority-voting ensemble is 81.82%; specificity is 92.86%. | The type of feature selection and the number of attributes selected for the ensemble process are not mentioned. |
| [13] | 2017 | Multiple feature selection with an ensemble approach | Z-Alizadeh Sani | 34 | Obtained accuracy is 93.70 ± 0.48%; F1-score (95.53%) and recall (97.63%) give the best results. | High processing time; no significance test is considered; high space complexity. |
| [14] | 2017 | Bagged tree, AdaBoost, and RF | Statlog | 7 | A feature selection technique is utilized; bagged tree with PSO achieves a classification accuracy of 100%; recall and specificity are 100%. | Small dataset; not compared with other datasets to standardize the obtained result. |
| [15] | 2017 | Adaptive weighted fuzzy system using GA + MDMS-PSO | Cleveland, Hungarian & Switzerland | 7 | Three different statistical methods are used to identify the risk-level factors; experiments on multiple datasets achieved accuracies of 92.31% (Cleveland), 95.56% (Hungarian), 89.47% (Switzerland), 91.8% (Long Beach), and 92.68% (Heart). | The suggested model gives preferable results for only one statistical method; generalization is not guaranteed, as few performance metrics are evaluated; no significance tests are performed. |
| [16] | 2018 | LR, KNN, ANN, SVM, NB, DT | Cleveland | 6 | Three different feature selection algorithms are used and compared across the classifiers; the best accuracy is 89%, obtained with LR and the Relief algorithm. | Despite using several metrics, the best algorithm varies from metric to metric; a single dataset is used and no comparisons are made with other datasets. |
| [17] | 2018 | KNN, RF, SVM, NB & ANN | Statlog | 7 | The FCBF feature selection method is used alongside two optimization approaches, PSO and ACO; achieved an accuracy of 99.65% using KNN. | Each algorithm performed poorly in some situations; a single dataset is used and no comparisons are made with other datasets. |
| [18] | 2019 | NB, GLM, LR, DL, DT, RF, GBT & SVM | Cleveland | 13 (8 subsets) | A hybrid algorithm is implemented with RF and LM; achieved the best accuracy of 88.47%. | A single dataset is used and no comparisons are made with other datasets; the age factor is excluded from the modelling. |
| [19] | 2019 | NuSVM, LinSVM and SVC | Z-Alizadeh Sani | 29 | Two-level optimization is performed using GA and PSO; the highest accuracy is obtained with NuSVM (93.08%). | No significance test is conducted to standardize the proposed approach; the accuracy is lower than that obtained on the same dataset (refer to [10]). |
| [20] | 2019 | NB, BN, RF & MLP | Statlog | 11 (6 subsets) | A maximum accuracy increase of 7% is achieved by the majority-vote ensemble (a minimal sketch follows the table); the computational time of the ensemble techniques is determined; an ensemble of BN, NB, RF, and MLP achieves an accuracy of 85.48%. | The total complexity is not determined; the age factor is excluded from the model; no standardization across different datasets is proposed. |
| [21] | 2019 | model + Deep Neural Network | Cleveland | 11 | The system is evaluated using six different performance metrics; comparisons are made between conventional neural networks (ANN, DNN) and the proposed variants; underfitting and overfitting problems are resolved; achieved a testing accuracy of 93.33%. | A single dataset with a small sample size is used to test the system; the time complexity is not determined; a search strategy is needed to select the optimal hidden-layer width in ANN and DNN. |
| [22] | 2019 | Random search + Random Forest | Cleveland | 7 | Time complexity is reduced as the number of features is reduced; achieved a testing accuracy of 93.33%; the overfitting problem is resolved (a sketch of the random-search idea follows the table). | A single dataset with a small sample size is used to test the system; the specific processing time is not mentioned. |
| [23] | 2019 | model + Gaussian NB | Cleveland | 9 | Six evaluation metrics are used for the Cleveland dataset; achieved a testing accuracy of 93.33%. | The age factor is excluded from the analysis; time complexity is not determined. |
| [24] | 2020 | BiLSTM-CRF | Cleveland | - | Analyses the data in both directions and captures the linear relationships between attributes; achieved a good classification accuracy of 90.04% for the Cleveland dataset; the method is tested on four different datasets. | The number of attributes selected for the prediction is not clear; average accuracy is reported instead of individual accuracies for the different datasets; no significance test is performed. |
| [25] | 2020 | CHI-PCA with Random Forest | Cleveland, Hungarian & Cleveland-Hungarian | 13 | Attained accuracies of 98.7% (Cleveland), 99.0% (Hungarian), and 99.4% (Cleveland-Hungarian); both feature selection and feature extraction are implemented (a pipeline sketch follows the table). | The system is analysed only with the feature types shared across the different datasets; important parameters such as age, RestECG, and ST depression (slope) are excluded from the model for the Cleveland dataset. |
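
To make the ensemble rows concrete, here is a minimal scikit-learn sketch of the majority-vote idea used in [12] and [20]. The synthetic data, the base-learner mix (GaussianNB, RandomForestClassifier, MLPClassifier; the BN learner of [20] is omitted because scikit-learn has no Bayesian-network classifier), and all hyperparameters are illustrative assumptions, not the papers' settings.

```python
# Minimal majority-vote ("hard" voting) ensemble sketch; data and
# hyperparameters are stand-ins, not taken from [12] or [20].
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a 13-attribute heart-disease table.
X, y = make_classification(n_samples=300, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Hard voting: each base learner predicts a class and the majority wins.
ensemble = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```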
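
The random search + Random Forest entry [22] pairs a randomized search over feature subsets with an RF classifier. A sketch of that idea under assumed settings follows; the 50-trial budget, the subset-size range, and the synthetic data are assumptions, not values from the paper.

```python
# Random search over feature subsets, scored with a Random Forest; the
# trial budget and subset-size range are assumptions, not from [22].
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=13, random_state=0)

best_score, best_subset = 0.0, None
for _ in range(50):                                    # assumed trial budget
    k = rng.integers(3, X.shape[1] + 1)                # random subset size
    subset = rng.choice(X.shape[1], size=k, replace=False)
    score = cross_val_score(RandomForestClassifier(random_state=0),
                            X[:, subset], y, cv=5).mean()
    if score > best_score:                             # keep the best subset
        best_score, best_subset = score, sorted(subset.tolist())
print("best subset:", best_subset, "CV accuracy:", round(best_score, 3))
```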
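
Finally, a hedged sketch of a CHI-PCA-style pipeline in the spirit of [25]: chi-square feature selection, followed by PCA feature extraction and a Random Forest. The MinMaxScaler (chi-square requires non-negative inputs), k=10, and n_components=5 are illustrative assumptions, not the paper's configuration.

```python
# Chi-square selection -> PCA extraction -> Random Forest, in the spirit
# of the CHI-PCA + RF pipeline of [25]; k and n_components are assumptions.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=300, n_features=13, random_state=0)

pipe = Pipeline([
    ("scale", MinMaxScaler()),            # chi2 requires non-negative inputs
    ("select", SelectKBest(chi2, k=10)),  # feature selection step
    ("extract", PCA(n_components=5)),     # feature extraction step
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
print("5-fold CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```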