DLHO: An Enhanced Version of Harris Hawks Optimization Using Dimension Learning-Based Hunting for Breast Cancer and Other Serious Disease Detection

In biomedical science various diseases are serious and prevalent causes of death among humans worldwide, of which breast cancer is one of the most serious. Mammography is the initial screening assessment for breast cancer. Swarm intelligence techniques play an important role in the solution of such problems. However, due to shortcomings of these methods such as slow convergence, premature convergence and weak local-optima avoidance, various complexities are faced. In this study, an enhanced version of the Harris Hawks Optimization (HHO) approach, known as DLHO, has been developed for biomedical datasets. This approach is introduced by integrating the merits of the dimension learning-based hunting (DLH) search strategy with HHO. The main objective of this study is to alleviate the lack of crowd diversity and premature convergence of HHO and the imbalance between exploration and exploitation. The DLH search strategy uses a different approach to construct a neighborhood for each search member, in which neighboring information can be shared among search agents. This strategy helps in maintaining diversity and the balance between global and local search. To test the performance of the proposed technique, different sets of experiments have been performed and the results are compared with various recent metaheuristics. First, the performance of the optimizer is analysed using the 29 CEC-2017 test functions. Second, to demonstrate the robustness of the proposed technique, results have been taken on five biomedical datasets: XOR, Balloon, Iris, Breast Cancer and Heart. All the results are in favour of the proposed technique.


Introduction
Mammograms are medical pictures that are hard to interpret, hence a pre-processing stage is required in order to enhance the picture quality and make the segmentation results more precise. Mammography is the most widely used modality for detecting and characterizing breast cancer. It is a medical imaging technique that uses a low-dose X-ray system to see inside the breasts. A mammography exam, called a mammogram, helps in the early detection and diagnosis of breast diseases. It has high sensitivity and specificity, due to which small tumours and microcalcifications can be detected on mammograms. During mammography, two views of each breast [1] are recorded (see Figure 1):
• Craniocaudal (CC) view: one of the two standard projections in screening mammography. It is a top-to-bottom view that must show the medial part as well as the external lateral portion of the breast as much as possible.
• Mediolateral Oblique (MLO) view: a side view taken at an angle. The presence of the pectoral muscle on the MLO view is a key component in the acceptability of the film. The amount of visible pectoral muscle plays an important role in reducing false negatives and thereby increasing sensitivity.
Radiologists visually search mammograms for specific abnormalities. Some of the important signs of breast abnormalities that radiologists look for are:
• Calcifications

• Masses
• Architectural distortions
Calcifications are tiny mineral (calcium) deposits scattered throughout the mammary gland, or occurring in clusters. They appear as small bright spots on the mammogram. A concentration of microcalcifications in one place is in favor of malignancy, while scattered calcifications are usually benign. Two major types of calcifications are found in breast tissue: microcalcifications and macrocalcifications.
A mass is defined as a space-occupying lesion. Masses are areas that look abnormal and can be many things, including cysts (non-cancerous, fluid-filled sacs) and non-cancerous solid tumors (such as fibroadenomas), but may sometimes be a sign of cancer.
• Cysts: These are fluid-filled masses in the breast (known as simple cysts) or sometimes partially solid (known as complex cystic and solid masses). Simple cysts are benign (not cancerous) and don't need to be biopsied.
• Solid Tumours: If a mass is not a simple cyst, i.e. it is partly solid (such as a fibroadenoma), then more imaging tests may be needed. Most solid masses are nodules or lumpy areas composed of fibrocystic tissue, which is very dense.
Architectural distortion occurs when the normal architecture of the breast is distorted with no definite mass visible. This includes spiculations radiating from a point, and focal retraction or distortion of the edge of the parenchyma. It appears as a distortion in which surrounding breast tissues appear to be "pulled inward" into a focal point.
Large-dataset problems are among the most well-known in the field of biomedical science, and metaheuristics play a big role in their solution. In the last few decades, many enhanced versions have been applied to these problems to obtain the best outcomes. But due to their complexity no single algorithm is sufficient, so more enhanced versions are needed to tackle these issues. Some of the work already done by researchers is described below.
Tariq Sadad et al. [2] proposed a CAD system in which a segmentation technique, FCMRG (fuzzy C-means region growing), is applied to obtain the mass in the image. Different features are extracted, such as LBP-GLCM and LPQ (local phase quantization), and feature selection is done on the basis of the mRMR (minimum-redundancy maximum-relevancy) algorithm. Finally, classification is carried out using different classifiers such as Decision Tree (DT), Support Vector Machine (SVM), Logistic Regression (LR), Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN) and an ensemble classifier to differentiate benign tumors from malignant ones, achieving an accuracy of 98.2%. Abdelali Elmoufidi et al. [3] proposed a method to segment and detect the boundaries of different breast tissue regions in mammograms by using a dynamic K-means clustering algorithm and Seed-Based Region Growing (SBRG) techniques. The Mammographic Image Analysis Society (MIAS) database is used for evaluation.
Mellisa Pratiwi et al. [4] compared two classification methods, a Radial Basis Function Neural Network (RBFNN) and a Backpropagation Neural Network (BPNN), using Gray-Level Co-occurrence Matrix (GLCM) texture features. Computational experiments performed on the MIAS database show that RBFNN is better than BPNN for breast cancer classification. Chun-Chu Jen et al. [5] proposed an efficient abnormality detection classifier (ADC) for mammogram images. Firstly, preprocessing is performed, which includes global equalization transformation, image denoising, binarization, breast object extraction, determination of breast orientation and pectoral muscle suppression. On the obtained segmented images, gray-level quantization is performed; five features are then extracted from the ROI and PCA is applied for determining feature weights.
Washington W. Azevedo et al. [6] worked on the IRMA database of mammograms, which contains four types of tissue: fatty, fibroid, dense and extremely dense. In this work, a Morphological Extreme Learning Machine is proposed with a hidden-layer kernel based on the morphological operators dilation and erosion for classifying masses as benign or malignant. Danilo Cesar Pereira et al. [7] present an abnormality detection method for the CC and MLO views of mammograms. Preprocessing includes an artifact removal algorithm followed by image denoising and enhancement based on the wavelet transform and Wiener filter. For segmentation of masses three techniques are used: multiple thresholding, wavelet transforms and a genetic algorithm. The DDSM database is used for experimentation; the area overlap metric (AOM) achieved by the proposed method is 79%.
Min Chen et al. [8] proposed a clustering approach based on a combination of fuzzy C-means clustering (FCM) with PSO. FCM has some drawbacks: the number of clusters needs to be specified in advance, knowledge of the ground truth is needed, and data points in overlapping areas cannot be correctly categorized; to overcome these drawbacks PSO is used in combination with FCM. The results show that the algorithm can automatically find the optimal number of clusters. P. Shanmugavadivu et al. [9] proposed an intuitive segmentation technique to separate the microcalcification regions from the mammogram. This mechanism first enhances the input image using an image-dependent threshold value and binarizes it to obtain an enhanced image. Then the pixels constituting the edges of microcalcification regions are grown in the enhanced image with respect to the neighbourhood pixels. Lastly, the edge intensities of the enhanced image are remapped into the original image, using which the regions of interest are segmented. The results are compared with the ground truth of the sample images obtained from the MIAS database.
R.V. Rao et al. [10] proposed a new optimization method known as Teaching Learning-Based Optimization (TLBO), which is based on the influence of a teacher on the learners. It is a population-based method that uses a population of solutions to reach a global solution. Rangaraj M. Rangayyan et al. [11] present an overview of various digital image processing and pattern analysis techniques for several areas of breast cancer diagnosis, including contrast enhancement, detection and analysis of calcifications, masses and tumors, asymmetric shape analysis and detection of architectural distortion. Seyedali Mirjalili [12] utilized the GWO algorithm for training an MLP, evaluating the accuracy and classification rate on five dataset problems. On the basis of the experimental outcomes, it was proved that the GWO strategy is competent in giving superior-quality outcomes in terms of enhanced local-optima avoidance. Subsequently, various authors have utilized many recent enhanced and hybrid techniques on these datasets for effective solutions [13]- [14].
For locating the cancerous region in mammogram images, a comprehensive algorithm was utilized by Zijun Sha [15]. This strategy was applied to image noise reduction and optimal image segmentation for feature selection and extraction, thereby reducing the evaluation cost and enhancing precision. The proposed algorithm was evaluated on the MIAS and DDSM databases, and its performance was validated against ten recent optimizers. The tabulated outcomes prove that the presented approach achieves 96% sensitivity, 93% specificity, 85% PPV, 97% NPV, 92% accuracy and better efficiency compared to the others. Shaikh et al. [16] introduced a new hybrid version merging the features of harmony search (HS) and simulated annealing (SA) for precise and accurate breast malignancy detection. In this work two breast datasets were utilized: i) the benchmark BCDR-F03 dataset and ii) a local mammographic dataset. The tabulated outcomes prove the robustness of the proposed approach on these datasets.
This work presents an enhanced HHO strategy that merges the features of the dimension learning-based hunting (DLH) search strategy for dataset issues. In DLHO, the DLH search strategy plays an important role in maintaining diversity, avoiding premature convergence and enhancing the balance between the exploitation and exploration phases of the HHO algorithm. Experimental outcomes have been compared with the most recent optimizer strategies to verify the effectiveness of the proposed algorithm. To sum up, the main contributions of this work are:
• A new enhanced version, namely DLHO, that includes features from the DLH search strategy is proposed.
• A new structure of the multilayer perceptron (MLP) neural network, trained by the DLHO algorithm, is used to segment the cancerous region from mammograms.
• The DLHO algorithm is also utilized for disease detection in other biomedical datasets.
• Statistical and qualitative experimental analyses to assess the robustness and effectiveness of the DLHO algorithm as compared to recent algorithms.
The rest of the paper is organized as follows. Section 2 presents the basics of the HHO algorithm. Section 3 explains the new enhanced DLHO approach. Section 4 discusses the robustness of the proposed approach based on numerical as well as statistical results. Section 5 briefs the proposed technique for breast cancer detection. Section 6 discusses the DLHO-based MLP trainer. Section 7 demonstrates the use of the proposed technique on other biomedical datasets such as XOR, Balloon, Iris, Breast Cancer and Heart. Finally, Section 8 provides the conclusions of this research work.

Harris Hawks Optimizer Algorithm (HHO)
The Harris Hawks Optimizer was recently developed by Heidari et al. [17]. It is a population-based method inspired by the intelligence of a crowd; its main inspiration is the cooperative behavior and chasing style of Harris' hawks in nature, called the surprise pounce. During the hunt, the search agents (hawks) jointly pounce on the prey from different positions. This methodology is implemented for finding the best target in a complex search space. Fig. 2 indicates the phases of the HHO strategy.

Exploration stage
On the basis of the following equations, each search member randomly perches at a location and waits to detect a target:

X(t + 1) = X_rand(t) − r1 |X_rand(t) − 2 r2 X(t)|,                q ≥ 0.5        (1)
X(t + 1) = (X_rabbit(t) − X_m(t)) − r3 (LB + r4 (UB − LB)),       q < 0.5

where X(t + 1) is the position vector of a search agent in the next generation, X_rabbit(t) is the position of the rabbit (prey), X(t) is the current position of the search agent, X_rand(t) is a randomly selected hawk, r1, r2, r3, r4 and q are random numbers in (0, 1), and LB and UB are the lower and upper bounds of the variables. The mean position of the crowd X_m(t) is given by Eq. (2):

X_m(t) = (1/N) Σ_{i=1}^{N} X_i(t)        (2)

where X_i(t), t and N illustrate the position of each search agent, the current iteration and the total number of search agents respectively.
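As an illustration, the exploration update of Eqs. (1)-(2) can be sketched in Python. This is a minimal sketch only: the function name `hho_exploration` and the NumPy vectorized representation are our own choices, not part of the original HHO description.

```python
import numpy as np

def hho_exploration(X, X_rabbit, lb, ub, rng):
    """One exploration-phase update for every hawk (sketch of Eqs. 1-2).

    X        : (N, d) array of current hawk positions
    X_rabbit : (d,) current best (prey) position
    lb, ub   : scalar lower/upper bounds of the search space
    """
    N, d = X.shape
    X_mean = X.mean(axis=0)                      # Eq. 2: mean position of the crowd
    X_new = np.empty_like(X)
    for i in range(N):
        q, r1, r2, r3, r4 = rng.random(5)
        if q >= 0.5:
            # Perch based on a randomly selected hawk's position
            X_rand = X[rng.integers(N)]
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
        else:
            # Perch relative to the prey and the mean position of the crowd
            X_new[i] = (X_rabbit - X_mean) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)                # keep agents inside the bounds
```

Clipping to the bounds after the update is a common implementation choice rather than part of Eq. (1) itself.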

Transition from exploration to exploitation
During this phase, the escaping energy of the prey is modelled by Eq. (3):

E = 2 E0 (1 − t/T)        (3)

where E, T and E0 illustrate the escaping energy of the target, the total number of generations and the initial state of its energy (a random number in (−1, 1)) respectively.
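Eq. (3) can be sketched directly; the helper name `escaping_energy` is our own, and we draw E0 uniformly from (−1, 1) as is standard in HHO implementations.

```python
import numpy as np

def escaping_energy(t, T, rng):
    """Prey escaping energy of Eq. 3: E = 2*E0*(1 - t/T), with E0 ~ U(-1, 1).

    |E| >= 1 keeps the hawks in the exploration phase;
    |E| <  1 switches the search to exploitation.
    """
    E0 = 2.0 * rng.random() - 1.0      # initial energy in (-1, 1)
    return 2.0 * E0 * (1.0 - t / T)    # decays linearly to 0 as t -> T
```

The linear decay means late generations are always exploitative, regardless of E0.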

Exploitation phase
Soft besiege. The behavior of the search agents is obtained through Eqs. (4)-(5):

X(t + 1) = ΔX(t) − E |J X_rabbit(t) − X(t)|        (4)
ΔX(t) = X_rabbit(t) − X(t)        (5)

where ΔX(t) is the difference between the position vector of the rabbit and the current position at generation t, J = 2(1 − r5) is the random jump strength of the rabbit throughout the escaping procedure, and r5 is a random number in (0, 1). The value of J changes randomly in every iteration to simulate the nature of rabbit motions.
Hard besiege. In this phase, the present positions of all agents are updated by Eq. (6):

X(t + 1) = X_rabbit(t) − E |ΔX(t)|        (6)

Soft besiege with progressive rapid dives. The next move of a search member is evaluated through Eq. (7), and the dive is performed by Eq. (8):

Y = X_rabbit(t) − E |J X_rabbit(t) − X(t)|        (7)
Z = Y + S × LF(D)        (8)

where D is the dimension of the function, S is a random vector of size 1 × D and LF is the levy-flight function, calculated through Eq. (9):

LF(x) = 0.01 × (u × σ)/|v|^(1/β),   σ = [Γ(1 + β) sin(πβ/2) / (Γ((1 + β)/2) × β × 2^((β−1)/2))]^(1/β)        (9)

where u and v are random values and β is a default constant set to 1.5. So, in this phase, the positions of all members in the soft besiege stage are updated by Eq. (10):

X(t + 1) = Y if F(Y) < F(X(t));  Z if F(Z) < F(X(t))        (10)

where Y and Z are obtained using Eqs. (7) and (8).

Hard besiege with progressive rapid dives. The following rule is performed in the hard besiege condition:

X(t + 1) = Y if F(Y) < F(X(t));  Z if F(Z) < F(X(t))        (11)

where Y and Z are obtained using the new rules in Eqs. (12) and (13):

Y = X_rabbit(t) − E |J X_rabbit(t) − X_m(t)|        (12)
Z = Y + S × LF(D)        (13)

where X_m(t) is obtained using Eq. (2).
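The levy-flight step LF of Eq. (9) can be sketched in Python using Mantegna's algorithm, which is the standard realization of this formula. The function name `levy_flight` and the use of a NumPy random generator are our own assumptions.

```python
import math
import numpy as np

def levy_flight(d, rng, beta=1.5):
    """Levy-flight step of Eq. 9 (Mantegna's method), used in the rapid dives.

    d    : problem dimension (length of the returned step vector)
    beta : stability parameter; 1.5 is the default constant in HHO
    """
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, d)   # numerator samples with scale sigma
    v = rng.normal(0.0, 1.0, d)     # denominator samples
    return 0.01 * u / np.abs(v) ** (1 / beta)
```

The heavy-tailed steps let a diving hawk occasionally make a long jump, which is what gives the rapid-dive phases their extra exploration.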

Enhanced DLHO Algorithm
Complex optimization applications are a challenge for meta-heuristic optimizers. According to the literature, no single optimization method is able to find the best solution for all types of complex problems. Every algorithm has some drawbacks, and due to these weaknesses it may fail to solve certain complex functions.
Cancer-related issues are a big challenge for researchers in the biomedical field. Due to their complexity, no single optimization method is competent to tackle them; therefore, more robust optimizer methods are always needed for future demands. With this motivation, a technique named DLHO has been developed by combining the merits of HHO and the dimension learning-based hunting (DLH) search strategy. In DLHO, the exploitation and exploration phases of HHO are enhanced by the DLH method. The DLH search strategy uses a different approach to construct a neighborhood for each search member, in which neighboring information can be shared among search agents. This strategy helps in maintaining diversity and the balance between global and local search. The key motivation for these modifications in HHO is to help the process evade premature convergence and to steer the search toward promising regions of the search space in a faster manner.
Through this mechanism, a hawk is able to escape from local optima, and the accuracy of the outcome is improved with faster convergence. The search members thus explore large search areas to find a superior outcome. This procedure is repeated until the termination conditions are fulfilled.
Mathematical phases of the DLHO algorithm are as follows:
• Parameters: the crowd size (n = 30), maximum generations (Mg = 500) and problem dimensions from 10 to 100 were fixed during the implementation of the optimizer methods.
• Crowd Initialization: Firstly, the crowd is initialized randomly in the exploration area. The optimizer allocates a random d-dimensional position to the i-th hawk, H_i (i = 1, 2, ..., n), where n and d denote the number of hawks and the dimension of the problem respectively.
The fitness of each hawk is evaluated by Eq. (15), where f_h_i illustrates the fitness outcome of the i-th hawk during the search procedure in the exploration space. In addition, the following two distinct matrices can be formulated for the target as in Eqs. (16)-(17), where f_t_i denotes the fitness outcome for the final target.
• Fitness Evaluation: The fitness value of each hawk is calculated through Eqs. (18)-(19), where F_best and F_worst illustrate the best and worst fitness values of the functions.
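The initialization and fitness-evaluation steps above can be sketched as follows. This is a sketch under our own assumptions: the function name `initialize_crowd`, the uniform sampling inside the bounds, and minimization as the goal are illustrative choices (the paper's Eqs. (14)-(19) are not reproduced verbatim here).

```python
import numpy as np

def initialize_crowd(n, d, lb, ub, objective, rng):
    """Random crowd initialisation plus fitness evaluation (sketch).

    Each of the n hawks receives a random d-dimensional position inside
    [lb, ub]; the best (lowest-fitness) hawk is returned as the initial prey.
    """
    H = lb + rng.random((n, d)) * (ub - lb)          # n random positions
    fitness = np.apply_along_axis(objective, 1, H)   # fitness of every hawk
    best = np.argmin(fitness)                        # minimisation assumed
    return H, fitness, H[best].copy()
```

Returning a copy of the best position keeps the prey vector stable even when the crowd matrix is updated in place later.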
• Drive stage: During this phase, DLH helps the search members specifically chase the prey or goal in the search domain. DLH is used to construct a local neighbourhood for each search member in which the best neighbouring information can be shared between search members. The next phases describe how the DLH and canonical HHO phases generate two different candidate positions.
• Canonical HHO phase: In this phase, the positions of all members in the search domain are updated by Eqs. (1)-(13). Finally, the first candidate for the new position of hawk X_i(t), named X_i-HHO(t + 1), is generated by Eq. (11).
• DLH phase: In DLH, the position of a search agent is modified by an equation in which the search agent learns from its different neighbors and from a randomly chosen search member of the population. Besides X_i-HHO(t + 1), the DLH phase creates another candidate for the new position of hawk X_i(t), named X_i-DLH(t + 1). To do this, a radius R_i(t) is evaluated by applying the Euclidean distance between the present position of hawk X_i(t) and the candidate position X_i-HHO(t + 1), by Eq. (20).
Then, the neighbours of X_i(t), denoted by N_i(t), are determined by Eq. (21) with respect to Eq. (20). Here D_i denotes the Euclidean distance between X_i and X_j.
After that, multi-neighbours learning is executed by the mathematical formulation of Eq. (22), where d denotes the dimension.
• Position update stage: In this phase, the best candidate is selected by comparing the fitness outputs of the two candidates X_i-HHO(t + 1) and X_i-DLH(t + 1) by Eq. (23). If the fitness of the chosen candidate is less than that of X_i(t), then X_i(t) is replaced by the chosen candidate; otherwise, X_i(t) remains unchanged in the crowd.
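The DLH candidate construction and the selection rule of Eqs. (20)-(23) can be sketched as below. This sketch is adapted from the DLH strategy as originally proposed for GWO; the names `dlh_candidate` and `select_best`, and the fallback to the whole crowd when the neighbourhood is empty, are our own assumptions.

```python
import numpy as np

def dlh_candidate(X, i, X_hho_new, rng):
    """DLH candidate for hawk i (sketch of Eqs. 20-22).

    X         : (N, d) current positions of the crowd
    X_hho_new : (d,) the canonical-HHO candidate X_i-HHO(t+1)
    """
    N, d = X.shape
    R = np.linalg.norm(X[i] - X_hho_new)          # Eq. 20: neighbourhood radius
    dists = np.linalg.norm(X - X[i], axis=1)
    neighbours = np.where(dists <= R)[0]          # Eq. 21: hawks inside the radius
    if neighbours.size == 0:
        neighbours = np.arange(N)                 # fall back to the whole crowd
    X_dlh = np.empty(d)
    for k in range(d):                            # Eq. 22: per-dimension learning
        n = rng.choice(neighbours)                # a random neighbour
        r = rng.integers(N)                       # a random hawk from the crowd
        X_dlh[k] = X[i, k] + rng.random() * (X[n, k] - X[r, k])
    return X_dlh

def select_best(X_i, f_i, X_hho, X_dlh, objective):
    """Eq. 23: keep whichever candidate improves on the current fitness f_i."""
    f_new, X_new = min([(objective(X_hho), X_hho), (objective(X_dlh), X_dlh)],
                       key=lambda c: c[0])
    return (X_new, f_new) if f_new < f_i else (X_i, f_i)
```

Because each dimension borrows from a possibly different neighbour, the DLH candidate mixes information from several nearby hawks rather than copying a single leader.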

• Stopping Condition
Under this stage, the stopping criteria are checked after modifying the positions of the hawks in the search domain. This process is repeated until the termination conditions are fulfilled.

Pseudocode of DLHO
The pseudocode of DLHO algorithm is reported in Algorithm 1.

Analysis and Discussions
The robustness of the new method has been evaluated on the 29-function CEC-2017 test suite and compared against recent methods such as MFO [18], SCA [19], Chimp [20], SBPSO [21], AOA [22] and SMA [23]. In addition, the accuracy, robustness and effectiveness [24] of the new technique are discussed in the following subsections:

Test Suites & Constants
All methods have been coded in MATLAB R2018a and tested on a system with 8 GB RAM, a 64-bit operating system and an 8th-generation Core i3 processor to evaluate the robustness of the algorithms. For the implementation of these methods, various constant settings have been fixed: search agents (n = 30), problem dimensions from 10 to 100, and upper and lower bounds of 100 and −100 respectively.
The performance of the proposed method has been tested on the 29 CEC functions shown in Table 1 [25]. Three-dimensional plots of the CEC functions are illustrated in Figure 3. These suites can be divided into four groups: uni-modal, simple multi-modal, hybrid and composition functions.

Algorithm 1 Pseudo-code of DLHO algorithm
Inputs: the constants, crowd size (n) and maximum number of generations (Mg)
Output: the position of the target and its fitness score
Initialize the locations of all hawks H_i (i = 1, 2, ..., n)
while the stopping criterion is not met do
    Evaluate the fitness of each hawk
    Set the best position found so far as the prey for all search members
    for each hawk (X_i) do
        Update the initial energy (E0 = 2 × rand() − 1) and jump strength (J = 2(1 − r5))
        Update the energy E by Eq. (3)
        if (|E| ≥ 1) then
            Update the position vector by Eq. (1)
        if (|E| < 1) then
            if (rand ≥ 0.5 and |E| ≥ 0.5) then
                Update the position vector by Eq. (4)
            else if (rand ≥ 0.5 and |E| < 0.5) then
                Update the position vector by Eq. (6)
            else if (rand < 0.5 and |E| ≥ 0.5) then
                Update the position vector X_i-HHO(t + 1) by Eq. (10)
            else if (rand < 0.5 and |E| < 0.5) then
                Update the position vector X_i-HHO(t + 1) by Eq. (11)
        Compute R_i(t) by Eq. (20)
        Build the neighbourhood of X_i with radius R_i by Eq. (21)
        for d = 1 to D do
            Apply multi-neighbours learning by Eq. (22)
        end for
        Select the best of (X_i-HHO(t + 1), X_i-DLH(t + 1)) by Eq. (23)
        Update the crowd
    end for
    g = g + 1
end while
Return the best outcome (X_b)

Testing and Evaluation
To confirm the effectiveness, robustness and accuracy of the proposed optimizer, it has been run on the 29-function CEC test suite. The numerical and statistical results prove that the proposed method is able to give highly effective and accurate solutions in terms of best and worst objective scores, mean scores and standard deviations. The results of the optimizers are reported in Tables 2-5 and Figures 5-8.
The following subsections describe the results analysis and discussion in detail.

Discussions on Outcomes
Generally the test suites can be split into four groups, and each group is utilized to prove the robustness of the optimizer at a different level:
• Uni-modal: These are utilized to assess the exploitation capability of the optimizers.
• Multi-modal: These functions involve various local optima and are applied to examine the capability of the optimizers to handle many optima.
• Hybrid and Composite: These functions are used to assess the exploration capability of the optimizers.

Exploitation competence
Uni-modal functions involve a single global optimum and are normally used for evaluating the exploitation performance of the optimizers. The results in Tables 2-4 illustrate that DLHO gives the best exploitation performance in the search space compared to the other optimizers. The experiments prove that the DLHO method can handle uni-modal functions easily and gives the best and most accurate scores on these functions. Hence it can be concluded that the enhancement of the HHO method allows the search to reach the global optimum, so the DLHO method can tackle high-dimensional and complex functions easily.
As specified previously, the CEC test suites are suitable functions for evaluating the robustness of optimizers. All simulations prove that DLHO is extremely functional, and the results give strong evidence that DLHO provides the most effective and accurate optimal solutions for complex-domain functions.

Competence valuation
Multi-modal functions are more suitable for testing the optimizer since they involve many local optima, whose number grows exponentially with the dimension of the function compared to uni-modal functions. The results show that the proposed method exhibits strong exploration behavior.

Balance Competence valuation
Hybrid and composite functions involve various local and global optima and are normally utilized for evaluating the exploration robustness of the optimizers. In addition, these suites are utilized for evaluating and verifying the balance between the exploration and exploitation phases of the optimizers. The results show that DLHO is competent to strike a strong balance between exploration and exploitation. Through this enhancement, effective solutions can be found for complex functions, which illustrates the strong exploitation and exploration behavior of the DLHO method. Furthermore, this modification helps to generate a new, better-fitness position for each search agent, which helps to amend their present locations. The experimental measurements illustrate the effective search behavior of DLHO in finding the best scores within the fewest generations. Since hybrid and composite suites involve more complex search spaces, these experiments demonstrate the robustness of the new method and show that it reaches the best optima through highly effective search behavior.

Accuracy
In this phase the accuracy behavior of the proposed optimizer is discussed. In general, the lowest average score indicates the accuracy of the optimizer with respect to the best score. The best average scores of the optimizers are reported in Table 6 in two categories, best (B) and worst (W), assigned from the outcomes of Table 4. The results of the table show that DLHO obtains the best score with the lowest average score; hence it can be said that the new enhanced version yields the best and most accurate outcomes for complex issues in comparison to the others.

Stability
In general, standard deviation scores lying near 0 illustrate the stability of an optimizer at the global optimum in the search space. The standard deviation scores of Table 5 are plotted in Figure 4, where it can easily be seen that the DLHO method finds the best global optimum for each function with the lowest standard deviation. These results show better stability of the new version on all functions. Additionally, a low standard deviation also reflects the convergence speed of the optimizers. On the basis of all outcomes of Table 5 and Figure 4, it can be concluded that DLHO finds the best outcomes for complex issues much faster without losing its track.

Convergence Graphs Analysis and Discussion
The convergence graphs of the methods, plotted as the best outcome so far against the number of generations, are shown in Figures 5-8.
Van den Berg et al. [26] assure that an optimizer ultimately converges to a goal and catches the best outcomes in the search area. DLHO improves the fitness outcome of every member and guarantees better goals as the generations increase; this happens due to the enhancement of the HHO method. Each member travels from higher to lower optima, and with this strategy all members amend their locations in every generation.
With this strategy, the best outcomes are stored for searching the next best location for each member of the crowd. All these graphs show that DLHO obtains the best outcome for each function within the fewest generations, which illustrates the fast convergence speed of the DLHO algorithm.

Proposed technique(DLHO) for breast cancer detection
In this work, firstly the mammography pictures are fed into an MLP neural network (NN) for pre-processing to categorize the nonlinearly separable structures of the pictures. Secondly, the preprocessed output is passed to DLHO. The structure of the proposed methodology is illustrated in the following subsections:

Picture pre-processing
Mammograms occasionally contain slight noise due to fluctuations and random variations in the measured signals. Noise is a serious issue for image-processing operations, especially when boundaries within the images have to be recognized, which requires segmentation. Segmentation amplifies the effect of high-frequency pixels, which include noise. Therefore a median filter is utilized in the pre-processing procedure before image segmentation; it preserves edges while eliminating noise.
It is a lowpass filter that needs a longer processing time than other filters. The median filter replaces a particular pixel with the median of the neighbouring pixels. Mathematically it is evaluated as follows:

M(m, n) = median{ f(i, j) : (i, j) ∈ δ }

where f(i, j) are the pixel intensities, M denotes the median and δ denotes the neighbourhood around the position (m, n) in the picture.
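The median filtering step can be sketched as follows. This is a minimal sketch with our own choices: a square k × k window, replicate padding at the borders, and the function name `median_filter`.

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter: each pixel is replaced by the median of its
    neighbourhood; image edges use replicate ('edge') padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    H, W = img.shape
    for m in range(H):
        for n in range(W):
            # median of the k x k window centred at (m, n)
            out[m, n] = np.median(padded[m:m + k, n:n + k])
    return out
```

An isolated bright (salt) pixel is removed entirely, while a straight edge survives, which is why the median filter is preferred over mean filtering before segmentation.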

MLP Neural Networks (NNs)
NNs are one of the great developments in the domain of artificial and computational intelligence. They mimic the neurons of the human brain and are frequently used to solve large datasets related to biomedical science. Every link between neurons is assigned a weight, which specifies the amount of influence of one neuron's output on the input of the subsequent neuron. Usually, a neuron also has its individual weight, referred to as a bias, which defines the influence of the neuron on itself [27]. The basic structure of the MLP neural network is illustrated in Figure 9. The outcome of every node is calculated in two phases. First, the weighted sum of the inputs is calculated by the following equation:

s_j = Σ_i w_ij x_i + β_j

where x_i, w_ij and β_j represent the input, the weight connecting input i to hidden neuron j, and the bias of hidden neuron j. Secondly, an activation function (AF) is utilized to evaluate the output of the neuron. In this study, the sigmoid function has been utilized as the AF:

f(s_j) = 1/(1 + e^(−s_j))

Finally, the outcome of each output neuron is calculated by applying the same weighted sum and activation to the hidden-layer outputs.
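The two-step node computation above can be sketched for a single-hidden-layer MLP. The layer sizes, the matrix representation of the weights, and the name `mlp_forward` are illustrative assumptions; the paper's exact network topology is defined by its training setup.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a single-hidden-layer MLP: weighted sum (s = W x + b)
    followed by a sigmoid activation, in both the hidden and output layers.

    x  : (n_in,) input vector
    W1 : (n_hidden, n_in) input-to-hidden weights, b1 : (n_hidden,) biases
    W2 : (n_out, n_hidden) hidden-to-output weights, b2 : (n_out,) biases
    """
    sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
    h = sigmoid(W1 @ x + b1)      # hidden-layer outputs
    return sigmoid(W2 @ h + b2)   # network outputs
```

In the DLHO-based trainer, all entries of W1, b1, W2 and b2 are flattened into one position vector that the optimizer searches over.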

DLHO Algorithm and MSE
In this stage of implementation the proposed DLHO method has been applied to detect breast cancer. Breast cancer detection is one of the major well-known challenging issues in the biomedical field. For the last few decades, scientists from different fields have been trying to solve this issue with various robust optimizers. Due to the complexity of this issue, a robust technique is needed to tackle this problem. By merging the merits of two powerful techniques, HHO and the DLH search strategy, we present a new method for tackling this issue so that the best solution can be generated.
The mean square error (MSE) metric is used to evaluate the difference between the desired and actual outcomes for each sample or object. MSE is calculated by

MSE = (1/s) Σ_{k=1}^{s} Σ_{j=1}^{m} ( y_j(k) − y*_j(k) )²,

where y_j(k) and y*_j(k) denote the actual and desired outcomes. The main objective of this study is to use the new hybrid version as a tool for breast cancer identification.
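The MSE metric above reduces to a one-line computation; this helper is a straightforward sketch (the averaging convention is a standard assumption, since the paper's normalisation term is not shown).

```python
import numpy as np

def mse(actual, desired):
    """Mean square error between actual outcomes y_j(k) and
    desired outcomes y*_j(k), averaged over all entries."""
    actual = np.asarray(actual, dtype=float)
    desired = np.asarray(desired, dtype=float)
    return float(np.mean((actual - desired) ** 2))
```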

Post-processing
In this work, Kapur's entropy-based strategy is used for thresholding. Let L denote the number of gray levels of an image with m pixels. The frequency of occurrence of gray level i is evaluated as

P_i = k(i) / m,

where k(i) is the number of pixels with gray level i in the picture. To segment the picture into c classes we need c − 1 thresholds, so the range of gray levels for each class with respect to the optimal thresholds is [z_{k−1}, z_k), where z_0 = 0 and z_{c−1} = L − 1. On this basis the cumulative probability w_k of each class is calculated as

w_k = Σ_{i = z_{k−1}}^{z_k − 1} P_i,

where w_k denotes the cumulative probability of the gray levels P_i in class k. Hence, for a single threshold z, the image is segmented into two regions R(0, z) and R(z, L), each contributing the entropy

H(R) = − Σ_{i ∈ R} (P_i / w_R) ln(P_i / w_R).

On the basis of this fitness function (the sum of the regions' entropies, to be maximised), the optimum threshold is obtained.
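For the bi-level case, Kapur's criterion can be sketched as follows. This is an illustrative implementation of the standard entropy criterion described above, not the authors' code; the small epsilon guard against empty bins is an assumption.

```python
import numpy as np

def kapur_fitness(hist, z):
    """Kapur entropy for a single threshold z, splitting the gray
    levels into regions R(0, z) and R(z, L)."""
    p = hist / hist.sum()                    # P_i = k(i) / m
    eps = 1e-12                              # guard for empty bins (assumption)
    w0 = p[:z].sum() + eps                   # w_k: class probability
    w1 = p[z:].sum() + eps
    h0 = -np.sum((p[:z] + eps) / w0 * np.log((p[:z] + eps) / w0))
    h1 = -np.sum((p[z:] + eps) / w1 * np.log((p[z:] + eps) / w1))
    return h0 + h1                           # fitness to be maximised

def best_threshold(hist):
    """Exhaustive search over thresholds; DLHO replaces this search
    in the multilevel case."""
    L = len(hist)
    return max(range(1, L), key=lambda z: kapur_fitness(hist, z))
```

In the multilevel setting the optimizer searches the c − 1 thresholds jointly instead of enumerating them, which is where DLHO is applied.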

Dataset Report
To verify the applicability of the proposed method for breast cancer detection, experiments have been performed on the following datasets: • First database: the MIAS dataset, taken from the Pilot European Image Processing Archive (PEIPA) at the University of Essex [28]. It includes 322 digitized mammography images (202 normal and 120 abnormal), each of size 1024 × 1024. Samples of these images are illustrated in figure 10.
• Second database: taken from the University of California at Irvine (UCI) Machine Learning Repository [29]. The weight and bias range has been fixed to [−10, 10] for each dataset. A population size n = 200 and a maximum number of generations M_g = 250 have been used in this work. The details of the input values are given in table 11.

Results and Discussion
Statistical and numerical computations are done on the first database, which includes 322 digitized mammographic images (202 normal and 120 abnormal) of size 1024 × 1024. This work has been implemented in Matlab R2018a on a 64-bit operating system with an 8th-generation Core i3 processor and 8 GB RAM.
To verify the robustness of the proposed method, the obtained outcomes are compared with recent algorithms such as BP [30], GA [31], PSO [32], GWO [12] and WOA [33] in terms of correct detection rate (CDR), false rejection rate (FRR), false acceptance rate (FAR), mean and standard deviation. The CDR, FRR and FAR are calculated as follows:

D_r = (C_p / T_p) × 100,

where D_r, C_p and T_p denote the correct detection rate, the number of pixels correctly classified and the total number of pixels in the database, respectively;
F_a = (H_p / T_p) × 100,

where F_a, H_p and T_p denote the false acceptance rate, the number of healthy pixels classified as breast cancer and the total number of pixels in the database;
F_r = (C_p / T_p) × 100,

where F_r, C_p and T_p here denote the false rejection rate, the number of cancer pixels classified as healthy and the total number of pixels in the database. Experimental outcomes of the algorithms on the MIAS database are illustrated in table 7 and figure 11; the graph in figure 11 compares the cancer detection rate (%) of the algorithms. During the implementation of the proposed algorithm on the second breast cancer dataset, the input parameters are set as: number of attributes (9), number of training objects (599), number of test objects (100) and number of classes (2). The algorithm is run iteratively to find the best optimum. Tabulated outcomes of the methods are given in table 9, and the classification rates of the methods on this dataset are drawn in figure 14. Figure 13 depicts the detection of left and right breast cancer in eight mammogram pictures using the DLHO algorithm.
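The three rates defined above can be computed together; the parameter names below are illustrative, while the formulas follow the definitions in the text.

```python
def detection_rates(correct, false_accept, false_reject, total):
    """CDR, FAR and FRR as percentages.

    correct      : pixels correctly classified (C_p)
    false_accept : healthy pixels classified as cancer (H_p)
    false_reject : cancer pixels classified as healthy
    total        : total number of pixels in the database (T_p)
    """
    cdr = 100.0 * correct / total        # D_r
    far = 100.0 * false_accept / total   # F_a
    frr = 100.0 * false_reject / total   # F_r
    return cdr, far, frr
```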
Again, the values in tables 7-9 show the superiority of the proposed method. The statistical scores show that the DLHO method has a high ability to escape local optima and to approximate superior global-optimum outcomes for the biases and weights. This problem has the highest complexity, compared with the previously discussed datasets, in terms of biases, weights and training objects. The experimental solutions give strong evidence of the suitability of the DLHO method for training multi-layer perceptrons: the proposed method shows strong local-optima avoidance, while the local search around the global optimum and the exploitation remain high. Noticeably, the cancer detection rate (%) of most optimizers on this problem is very poor due to the complexity of this dataset. However, all the methods have been tested under the same parameter settings so that the robustness of the algorithms can be compared fairly. The tabulated values strongly show that the proposed method solves this problem with the best optima, the highest accuracy and the highest rate of cancer detection compared with the others. The computational times of the algorithms for cancer detection are reported in tables 8-10; these values prove that the proposed method detects the cancer with the highest detection rate and the least computational time. Figures 12-15 show the mean square error of the algorithms on the first and second databases. As stated, the metaheuristics were run several times to confirm robustness; these graphs show that the proposed method solves these datasets with the least mean square errors.
From the experiments it can be concluded that a major drawback of the recent algorithms is that weaknesses such as getting stuck in local optima, slow convergence, a weak balance between exploration and exploitation, and premature convergence cause them to fail on complex problems in certain cases (see figures 12-15).

DLHO based MLP trainer
The multilayer perceptron (MLP) plays an important role in the field of biomedical science. The optimizer is first applied to these datasets, in which the weights and biases are the most important decision variables.
Normally these variables are used to obtain superior prediction accuracy, approximation and classification. The DLHO method accepts these decision variables in the form of a vector, which can be written as

v = { W_{1,1}, …, W_{i,j}, …, θ_1, …, θ_j, … },

where n, W_{i,j} and θ_j denote the number of input nodes, the connection weight between node i and node j, and the bias of the j-th node. Further, the mean square error for the k-th training sample is obtained as

E_k = Σ_{i=1}^{m} ( o_i^k − d_i^k )²,

where m, d_i^k and o_i^k denote the number of outcomes, the desired outcome and the actual outcome of the i-th input. The accuracy of the MLP is then calculated through the mean square error over all training trials:

E̅ = Σ_{k=1}^{s} Σ_{i=1}^{m} ( o_i^k − d_i^k )² / s,

where s and m denote the number of training trials and the number of outcomes, respectively. Finally, the fitness function is obtained from the above equations as

min f(v) = E̅.    (41)
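The encoding above (all weights and biases flattened into one decision vector v) and the fitness of equation (41) can be sketched as follows. The vector layout and the single-hidden-layer shape are assumptions for illustration, matching the 22-45-1 style structures used later in this work.

```python
import numpy as np

def unpack(v, n, h):
    """Split the decision vector v into MLP weights and biases.

    Assumed layout: input-hidden weights W1 (h x n), hidden biases b1,
    hidden-output weights W2 (1 x h) and one output bias b2.
    """
    i = 0
    W1 = v[i:i + n * h].reshape(h, n); i += n * h
    b1 = v[i:i + h]; i += h
    W2 = v[i:i + h].reshape(1, h); i += h
    b2 = v[i:i + 1]
    return W1, b1, W2, b2

def fitness(v, X, y, n, h):
    """min f(v) = mean MSE over all s training samples (eq. 41)."""
    W1, b1, W2, b2 = unpack(v, n, h)
    sig = lambda s: 1.0 / (1.0 + np.exp(-s))      # sigmoid AF
    out = sig(W2 @ sig(W1 @ X.T + b1[:, None]) + b2[:, None])
    return float(np.mean((out - y) ** 2))
```

The optimizer only ever sees v, so any metaheuristic (HHO, DLHO, GWO, …) can be plugged in unchanged: it proposes vectors and receives fitness values.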

Robustness of DLHO for other biomedical datasets
In this part of the research, four different biomedical problems, XOR, Balloon, Iris and Heart, are taken from the University of California at Irvine (UCI) Machine Learning Repository [29]. Various population-based optimizers [12, 34, 35, 36] are applied to these datasets to evaluate the classification rate and the accuracy of the outcomes. The weight and bias range has been fixed to [−10, 10] for each dataset. The crowd size is set to 50 for XOR and Balloon and 200 for the remaining datasets, and a maximum of 250 generations is used during the implementation.
The input values for these datasets are given in table 11. Each method has been run several times to generate the best outcomes for each dataset. The outcomes of the optimizers are presented in terms of average, standard deviation and classification rate (%). Noticeably, the lowest mean and standard deviation of the mean square error at the end of the generations indicate superior performance and accuracy.
The normalization used in this work is min-max normalization, the most important pre-processing stage of the MLP when the attributes of a dataset have different sizes or ranges. It is evaluated as

x' = (x − min) / (max − min),

where min and max denote the minimum and maximum values of the attribute.
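Applied per attribute column, the normalization above looks like this. The target range [0, 1] is an assumption, since the paper does not state it; the guard for constant attributes is also added here for safety.

```python
import numpy as np

def min_max_normalize(X, lo=0.0, hi=1.0):
    """Min-max normalisation of each attribute column into [lo, hi]:
    x' = (x - min) / (max - min), rescaled to the target range."""
    X = np.asarray(X, dtype=float)
    mn = X.min(axis=0)
    span = X.max(axis=0) - mn
    span[span == 0] = 1.0            # guard against constant attributes
    return lo + (X - mn) / span * (hi - lo)
```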

XOR
The XOR dataset has been tested by the metaheuristics with the following settings: number of attributes (3), number of training objects (8), number of test objects (8) and number of classes (2). The tabulated outcomes of the optimizers are reported in table 12 and the classification rates of the optimizers are drawn in figure 16. These outcomes are verified in terms of mean, standard deviation and classification rate. The results of table 12 illustrate that the DLHO algorithm classifies the XOR dataset with 100% accuracy and the lowest mean and standard deviation; such low statistical scores normally indicate a superior ability to escape local optima, significantly better than the other metaheuristics. The GA, GWO and MGWO algorithms are robust metaheuristics famous for their high levels of exploration, yet the simulations show that the proposed method gives more effective outcomes for this dataset. On the other side, except for DLHO, MGWO, GWO and GA, all other algorithms perform very poorly on this dataset. From these facts we can say that the DLHO algorithm is well suited to this type of problem.
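The settings above (3 attributes, 8 training objects, 2 classes) match the 3-bit parity version of XOR; assuming that interpretation, the dataset can be generated as follows.

```python
import numpy as np

# 3-bit XOR (parity): all 8 combinations of three binary attributes,
# with the class label equal to the parity of the inputs.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
y = X.sum(axis=1) % 2        # class = parity of the three inputs
```

Because the 8 training and 8 test objects coincide, the benchmark measures purely how well the optimizer fits the non-linearly separable mapping.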

Balloon
The Balloon dataset has been tested by the algorithms with the following settings: number of attributes (4), number of training objects (16), number of test objects (16) and number of classes (2). Experimental outcomes of the algorithms are reported in table 13.

Iris
The Iris dataset is one of the most famous problems in the literature, and researchers have tried to present the best outcomes for it over the last few decades. For this dataset the number of attributes (4), number of training objects (150), number of test objects (150) and number of classes (3) have been taken. Hence the multi-layer perceptron structure for this problem is 4-9-3, giving a problem with 75 decision variables. Experimental outcomes for the Iris dataset are illustrated in table 14, and the classification rates of the algorithms are drawn in figure 17.
The results of table 14 illustrate that the DLHO method outperforms the other metaheuristics in terms of lowest average, lowest standard deviation and highest accuracy. Since the complexity of the Iris problem and its multi-layer structure is high, the tabulated outcomes are strong evidence for the efficiency of DLHO in training multi-layer perceptrons. Hence the outcomes affirm that the DLHO optimizer has better local-optima avoidance and accuracy simultaneously.

Table 14 (excerpt): mean, standard deviation and classification rate (%) on the Iris dataset.

     [12]   0.0229     0.0032     91.33
PSO  [12]   0.228680   0.057235   37.33
GA   [12]   0.089912   0.123638   89.33
ACO  [12]   0.405979   0.053775   32.66
ES   [12]   0.314340   0.052142   46.66
PBIL [12]   0.116067   0.036355   86.66

Heart
Lastly, the Heart dataset has been solved by the optimizer methods with number of attributes (22), number of training objects (80), number of test objects (187) and number of classes (2). Multi-layer perceptrons with the structure 22-45-1 have been trained by the metaheuristics. The obtained outcomes are reported in table 15 and the classification rates of the algorithms on the Heart dataset are plotted in figure 18.
Experimental outcomes reveal that DLHO is capable of giving the superior classification rate with the best statistical outcomes. It can be concluded that the proposed method is the most effective at approximating the Heart dataset. The lowest mean and standard deviation reveal the better local-optima avoidance of the proposed method, and the simulations also illustrate that it fits this problem with the lowest error, which validates the accuracy and best performance of the DLHO method as well.

Table 15 (excerpt): mean, standard deviation and classification rate (%) on the Heart dataset.

           [14]   0.0765     0.0376     75.14
MGBPSO-GSA [13]   0.10442    0.002041   73.33
GWO        [12]   0.122600   0.007700   75
PSO        [12]   0.188568   0.008939   69.75
GA         [12]   0.093047   0.022460   58.75
ACO        [12]   0.228430   0.004979   00
ES         [12]   0.192473   0.015174   71.25
PBIL       [12]   0.154096   0.018204   45

Figure 18: Classification Rate (%) of algorithms on Heart dataset.

Conclusion
Breast cancer, heart disease and various other diseases in biomedical science are among the most interesting topics for researchers from different domains; breast cancer in particular is the second deadliest kind of cancer. Young scientists are trying to play their best role in enhancing the detection rate of these diseases. However, due to the complexity of these diseases, a single technique is unable to provide an effective detection rate. Swarm intelligence algorithms are famous for their exploration and exploitation abilities, and such methods are normally used to solve real-world applications in different domains. Still, no single algorithm can tackle highly complex dataset problems, owing to slow convergence, trapping in local optima, premature convergence and a poor balance between the exploration and exploitation phases. Motivated by this, a new enhanced version has been introduced in this work to avoid these weaknesses. This version is developed by integrating the merits of the Harris Hawks Optimization (HHO) algorithm and the dimension learning-based hunting (DLH) search strategy, and is called the DLHO algorithm. In this modification the DLH search strategy is used to enhance the exploration and exploitation stages of the HHO algorithm so that the above weaknesses are removed. To verify its effectiveness, the DLHO method has been evaluated on the 29 CEC-2017 benchmark functions and on the biomedical datasets. Experimental outcomes prove that the proposed method finds effective optimum outcomes for the biases and weights, in terms of local-optima avoidance and the detection or classification rate of the biomedical problems, compared with the others. This work also classifies and discusses the explanations for the strong and weak performances of the other optimization methods.
In future work we shall present a further enhanced version for biomedical and engineering applications. We hope this research will inspire young scientists who are currently working on metaheuristics and engineering applications.