This section discusses the categorization of epilepsy using the proposed and existing approaches. The Bern-Barcelona EEG database contains non-focal and focal channels recorded at a 1024 Hz sampling rate from individuals with epilepsy. The database contains 3750 pairs of signals from the recorded EEG channels, and the recordings were divided into ten-second windows, each containing 10240 samples. Fifty non-focal and focal subjects were randomly selected from this publicly accessible EEG database for this investigation [21]. The experiment is carried out in Matlab on a machine with 8.00 GB of RAM and a 2.30 GHz CPU. The performance measures of the classifiers are expressed in terms of True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN) counts, defined below. In this work, positive denotes a person having the disease and negative denotes a person not having the disease. The abnormal and normal EEG signals are shown in Figs. 3 and 4.
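To make the windowing step concrete, a minimal Python sketch is given below; it splits a single channel sampled at 1024 Hz into non-overlapping ten-second windows of 10240 samples each. The synthetic channel and the helper name segment_signal are illustrative assumptions only, not the preprocessing code used in the experiments.

```python
import numpy as np

FS = 1024                      # sampling rate of the Bern-Barcelona recordings (Hz)
WINDOW_SEC = 10                # window length used in this work (s)
WINDOW_LEN = FS * WINDOW_SEC   # 10240 samples per window

def segment_signal(signal, window_len=WINDOW_LEN):
    """Split a 1-D EEG channel into non-overlapping windows of window_len samples."""
    n_windows = len(signal) // window_len
    return np.reshape(signal[:n_windows * window_len], (n_windows, window_len))

# Surrogate channel standing in for one recorded EEG channel (one minute of data)
channel = np.random.randn(FS * 60)
windows = segment_signal(channel)
print(windows.shape)           # (6, 10240)
```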
- True Positive (TP) indicates that a diseased person is correctly identified as diseased.
- False Positive (FP) indicates that a normal person is incorrectly classified as diseased.
- True Negative (TN) indicates that a normal person is correctly classified as normal.
- False Negative (FN) indicates that a diseased person is incorrectly classified as normal.
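These four counts follow directly from the predicted and true class labels. A minimal sketch is given below; it assumes binary label vectors with 1 denoting the diseased (focal) class and 0 the normal class, and the helper name confusion_counts is illustrative rather than part of the implementation used here.

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """Return (TP, FP, TN, FN) for binary labels where 1 = diseased, 0 = normal."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tp, fp, tn, fn

# Toy example: 1 = diseased (focal), 0 = normal (non-focal)
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(confusion_counts(y_true, y_pred))   # (2, 1, 2, 1)
```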
Based on these values, the performance measures accuracy, sensitivity, specificity, and precision are calculated, as expressed in the equations below.
Accuracy: The accuracy of a measurement shows how close the result is to the true value; perfect classification corresponds to 100% accuracy, in which case the error rate is zero because no false positives or false negatives occur. The numerical outcome is given in Table 1 and Fig. 5. The expression for the accuracy is shown below.
$$Accuracy=\frac{TRP+TRN}{TRP+TRN+FLN+FLP}$$
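To make the evaluation concrete, the sketch below computes the accuracy above together with the sensitivity, specificity, and precision defined in the following subsections. The helper name performance_measures and the counts in the example are arbitrary and serve only to illustrate the arithmetic.

```python
def performance_measures(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, and precision from the confusion counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    precision   = tp / (tp + fp)   # positive predictive value
    return accuracy, sensitivity, specificity, precision

# Arbitrary counts chosen only to illustrate the arithmetic
print(performance_measures(tp=8, fp=2, tn=7, fn=3))
# accuracy = 0.75, sensitivity ~ 0.727, specificity ~ 0.778, precision = 0.8
```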
Table 1
Comparison of Accuracy
Iteration | SVM (Existing) | ANN (Existing) | NFF (Proposed) |
200 | 69.3 | 58 | 83 |
300 | 70.5 | 58.7 | 83.9 |
400 | 74.7 | 59.7 | 84.6 |
500 | 76.8 | 60 | 85.1 |
600 | 78.2 | 61.7 | 85.3 |
The approach with the highest accuracy is considered the most effective one. From the experimental outcome in Table 1 and Fig. 5, it is identified that the proposed approach is effective. The accuracy of NFF for 200 iterations is higher by {13.7%, 25%} than {SVM, ANN}, for 300 iterations higher by {13.4%, 25.2%}, for 400 iterations higher by {9.9%, 24.9%}, for 500 iterations higher by {8.3%, 25.1%}, and for 600 iterations higher by {7.1%, 23.6%} than {SVM, ANN}. The proposed approach has the highest accuracy for every count of iterations.
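The gains quoted above are absolute differences, in percentage points, between the tabulated accuracies. The short sketch below reproduces them from the Table 1 values; the same calculation is applied to the sensitivity, specificity, and precision comparisons that follow.

```python
# Accuracy values from Table 1 (percent), keyed by iteration count
accuracy = {
    200: {"SVM": 69.3, "ANN": 58.0, "NFF": 83.0},
    300: {"SVM": 70.5, "ANN": 58.7, "NFF": 83.9},
    400: {"SVM": 74.7, "ANN": 59.7, "NFF": 84.6},
    500: {"SVM": 76.8, "ANN": 60.0, "NFF": 85.1},
    600: {"SVM": 78.2, "ANN": 61.7, "NFF": 85.3},
}

# Gain of the proposed NFF over each existing technique, in percentage points
for it, vals in accuracy.items():
    gain_svm = round(vals["NFF"] - vals["SVM"], 1)
    gain_ann = round(vals["NFF"] - vals["ANN"], 1)
    print(f"{it} iterations: +{gain_svm} over SVM, +{gain_ann} over ANN")
```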
Sensitivity: Sensitivity and specificity are statistical indicators of the effectiveness of a binary classification. Sensitivity is the ability to correctly identify the true positive rate of interest, and it is expressed in the equation below, while specificity measures the rate of negatives that are correctly identified. A perfect classifier would have 100% sensitivity and 100% specificity; in practice, every classifier exhibits at least a minimal error. The numerical outcome is given in Table 2 and Fig. 6.
$$Sensitivity=\frac{TRP}{TRP+FLN}$$
Table 2
Comparison of Sensitivity
Iteration | SVM (Existing) | ANN (Existing) | NFF (Proposed) |
200 | 89 | 87 | 91 |
300 | 91 | 87.5 | 93 |
400 | 92 | 90 | 95 |
500 | 92.5 | 91 | 96 |
600 | 93 | 92 | 97 |
The approach with the highest sensitivity is considered the most effective one. From the simulation outcome in Table 2 and Fig. 6, it is identified that the proposed approach is effective. The sensitivity of NFF for 200 iterations is higher by {2%, 4%} than {SVM, ANN}, for 300 iterations higher by {2%, 5.5%}, for 400 iterations higher by {3%, 5%}, for 500 iterations higher by {3.5%, 5%}, and for 600 iterations higher by {4%, 5%} than {SVM, ANN}. The proposed approach has the highest sensitivity for every count of iterations.
Specificity: Specificity is the ability of a test to correctly identify the true negative rate, i.e., the subjects without the disorder. A highly specific test only rarely reports a negative case as positive, so a positive result from such a test indicates a high likelihood that the disease is present. The numerical outcome is given in Table 3 and Fig. 7.
$$Specificity=\frac{TRN}{TRN+FLP}$$
Table 3
Comparison of Specificity
Iteration | SVM (Existing) | ANN (Existing) | NFF (Proposed) |
200 | 89 | 86 | 92 |
300 | 92 | 87 | 93.5 |
400 | 92 | 90 | 95 |
500 | 92.5 | 92 | 96 |
600 | 93.5 | 92 | 97 |
The approach with the highest specificity is considered the most effective one. From the simulation outcome in Table 3 and Fig. 7, it is identified that the proposed approach is effective. The specificity of NFF for 200 iterations is higher by {3%, 6%} than {SVM, ANN}, for 300 iterations higher by {1.5%, 6.5%}, for 400 iterations higher by {3%, 5%}, for 500 iterations higher by {3.5%, 4%}, and for 600 iterations higher by {3.5%, 5%} than {SVM, ANN}. The proposed approach has the highest specificity for every count of iterations.
Precision: Precision is the proportion of correctly predicted positive observations to all observations predicted as positive. A low false positive rate corresponds to high precision. The numerical outcome is given in Table 4 and Fig. 8.
$$Precision=\frac{TRP}{TRP+FLP}$$
Table 4
Comparison of Precision
Iteration | SVM (Existing) | ANN (Existing) | NFF (Proposed) |
200 | 69.8 | 68 | 88 |
300 | 70.6 | 68.7 | 88.9 |
400 | 74.7 | 69.7 | 84.6 |
500 | 76.8 | 60 | 86.1 |
600 | 78.2 | 61.7 | 86.8 |
The approach with the highest precision is considered the most effective one. From the simulation outcome in Table 4 and Fig. 8, it is identified that the proposed approach is effective. The precision of NFF for 200 iterations is higher by {18.2%, 20%} than {SVM, ANN}, for 300 iterations higher by {18.3%, 20.2%}, for 400 iterations higher by {9.9%, 14.9%}, for 500 iterations higher by {9.3%, 26.1%}, and for 600 iterations higher by {8.6%, 25.1%} than {SVM, ANN}. The proposed approach has the highest precision for every count of iterations.
The performance of the classifiers discussed above is evaluated using accuracy, sensitivity, specificity, and precision. The same set of training inputs was given to all the classifiers, and the performance measures were computed and reported at every stage. The average performance measures of the classifiers, shown in the tables above, indicate that the proposed NFF classifier achieves higher accuracy, precision, sensitivity, and specificity than the existing SVM and ANN classifiers across all iteration counts.