An enhanced donkey and smuggler optimization algorithm for choosing the precise job applicant

Throughout the last few decades, Nature-Inspired Algorithms (NIA) have become very popular for solving real-world problems by taking inspiration from nature. This work suggests the Modified Donkey and Smuggler Optimization (MDSO) algorithm for solving the selection problem of choosing suitable job applicants for a specific position. The original Donkey and Smuggler Optimization (DSO) algorithm has two modes: the smuggler mode and the donkey mode (non-adaptive and adaptive). In the smuggler mode, the algorithm tries to find the best solution, that is, the best path along which to send the donkey to the destination; once the best path is found, the smuggler sends the donkey through the selected path. The donkey's actions start when the smuggler mode's best solution is no longer the best one. Certain modifications have been made in the smuggler mode, including replacing the original fitness with a new fitness equation so that fitness can be computed more accurately. The Human Resource (HR) department in Korek-telecom has been used as a resource to obtain real-world data to test the original and modified DSO. Using MDSO, organizations will not only be able to choose suitable job applicants more accurately but will also manage to do so at a faster pace.


Introduction
NIAs are widely used in solving real-world problems by taking inspiration from nature. Most of them are based on swarm intelligence, such as Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) [1]. Swarm intelligence (SI) refers to the collective behavior of decentralized, self-organized systems. In this work, the MDSO algorithm, which originated from DSO, is proposed; MDSO is used to eliminate the complexity of the selection problem that human resources staff face when recruiting new employees. Although DSO has shown its reliability when compared to other well-known algorithms through benchmark test functions, there still exist areas where DSO will not fit as is and will need modifications. One of these areas is the employee selection problem: the difficulty of selecting the best candidate to fill a certain vacancy. The employee selection problem is the process of choosing the most suitable candidate for a particular position by interviewing the applicants and assessing whether their qualities meet the requirements of the job [8,9]. The recruitment and selection processes are time-consuming, highly costly, and require considerable effort. Besides, organizations are always at risk of losing valued job applicants to other organizations; therefore, a more accurate and less time-consuming way of recruiting is required [10]. Compared to DSO, MDSO substantially shortens the recruitment process in addition to selecting the best-fitting candidates for the proposed job vacancy based on the recruitment parameters. DSO requires some enhancement in finding the fitness of the solutions, since its procedure for calculating the parameters is neither accurate nor suitable for candidate selection in the recruitment process.
In this work, DSO and MDSO are executed on the same dataset, which has been obtained from a local telecommunication company named Korek-telecom. The dataset consists of the parameters used by the company's HR department to evaluate candidates, as well as the weight of each parameter; see Table 1. In addition, Tables 2, 3, 4, and 5 show the real-life data obtained from the company. The method compares the two algorithms on the same real-life example, the employee selection problem, in which the difficulty of selecting the best candidate to fill a certain vacancy is tackled. The main difference is illustrated by the results obtained from both algorithms: MDSO substantially shortens the recruitment process, whereas the DSO procedure for calculating the parameters is not accurate. As demonstrated in Sect. 4, MDSO provides better results than the original DSO in solving the employee selection problem: it provides a wider range of solutions as well as more accurate ones. In MDSO, it is possible to allocate a weight to each parameter, while in DSO there is no weight for the parameters, which means all parameters have the same priority. For instance, a candidate's experience should be given more value than a candidate's communication skills when recruiting for a technical position, while in DSO all parameters have the same value. Therefore, it is safe to claim that the MDSO equation is more accurate and flexible. The main contributions of this paper can be summarized as follows:

1. Modifying the fitness function used in the first part of the DSO algorithm, which evaluates all possible solutions to find the best one. In MDSO, each parameter is squared first, then multiplied by its weight, and finally all the weighted parameters are summed. The original equation for finding the fitness of solutions is shown in Eq. (1), and the modified one is shown in Eq. (2).

2. Designing and developing software, according to Korek-telecom's style of recruitment and selection, to apply the DSO and MDSO algorithms to the dataset, using C#.
The remainder of this paper is organized as follows. Section 2 presents the works related to job-applicant selection with optimization techniques. Section 3 presents the proposed Modified Donkey and Smuggler Optimization algorithm for solving the job applicant problem. Experiments and results are presented in Sect. 4. Finally, the conclusion and future work are given in Sect. 5.

Literature review
Saadaldin and Rashid developed DSO in 2019; it is a meta-heuristic algorithm and has been adapted to different problems such as packet routing, ambulance routing, and the traveling salesman problem. The DSO algorithm has two modes: the donkey and smuggler modes, or adaptive and non-adaptive modes. In the first mode, the algorithm finds the best solution. When the best solution has been found, the second mode starts. The second mode, which has three phases (run, face & support, and face & suicide), will either maintain the best solution or return to the best solution when the relevant conditions occur [7]. There still exist areas where DSO will not fit as is and will require modifications, such as the employee selection problem. Dorigo developed ACO in 1992; it is an algorithm that imitates the social behavior of ants. Ants excel at finding the best path between their nest and a food source. They achieve this by depositing pheromone to mark the path and attract other ants to follow the same path to the food source [11]. ACO is easily coupled with other algorithms and performs excellently in addressing complicated optimization problems. The fundamental problem with ACO is that its convergence speed is hard to control as the number of iterations increases [12].
In 1995, Kennedy and Eberhart developed PSO, one of the most famous nature-inspired algorithms. The algorithm was inspired by the behavior of flocking birds and schooling fish. PSO is a population-based algorithm and has so far gone through many modifications by researchers. Furthermore, the PSO algorithm has been adapted to a huge number of applications [13]. PSO's benefits include simple implementation and few parameters that need to be tuned. The main drawbacks of PSO are that it easily falls into a local optimum in high-dimensional spaces and that its iterative process has a low convergence rate; also, it is hard to maintain the balance between exploration and exploitation [14]. In 2006, Yang developed the Firefly Algorithm (FA) at Cambridge University, which originated from the behavior of fireflies. Fireflies exhibit flashing activity, and they use this behavior to communicate, attract each other, and warn predators of risk [15]. Fireflies are unisexual, and they attract their partners through brightness; the attractiveness is directly proportional to an individual's brightness level [16]. Although this algorithm is easy to implement, it needs a huge number of iterations [17]. In 2009, Yang and Deb suggested the Cuckoo Search (CS) optimization algorithm, inspired by the social behavior of some cuckoo species. One of the behaviors of this type of cuckoo is leaving its eggs in other host birds' nests and also removing existing eggs that belong to the host birds [18]. CS is a meta-heuristic algorithm with various benefits, including being simple to use and requiring few tuning parameters. However, it has been shown to have a slower rate of convergence and to fall very quickly into local optima [19]. The Artificial Bee Colony (ABC) algorithm is one of the most famous SI algorithms and is derived from the natural life of bees as they search for essential food sources.
The bees are categorized into three groups: scouts, employees, and onlookers. The scouts' mission is to explore for the most promising search areas to find a source of food, while the employees and onlookers exploit promising solutions [20]. The ABC algorithm is easy to implement in real-life applications and is flexible with regard to modifications. However, it is slow to obtain accurate solutions, and additional fitness tests are required for the new algorithm parameters [21]. Another nature-inspired algorithm is the Bat Algorithm (BA), developed by Xin-She Yang in 2010. BA is inspired by the social behavior of bats, whose two most crucial behaviors are predation and navigation. Bats use echolocation to determine the distance between themselves and their prey [22]. The algorithm is simple to use and capable of searching both locally and globally, and it may be utilized to solve a variety of optimization problems. However, it has several parameters that may require fine-tuning [23].
Grey Wolf Optimization (GWO) is an SI algorithm inspired by the social life of the grey wolf. GWO was developed by Mirjalili in 2014 and imitates the leadership style wolves use for group hunting. In the GWO algorithm, a group of wolves is classified into four categories: alpha, beta, delta, and omega. The leader, known as the alpha, is responsible for decision-making in different situations. The betas act as assistants to the alpha wolves. The delta and omega wolves have a lower ranking than the previous ones and are responsible for obeying the others' commands [24]. GWO is a simple algorithm with few parameters and a good balance between local and global search. However, GWO has problems with stability and accuracy when it is used as a meta-heuristic [25]. Another algorithm, which imitates the social life of cats, is Cat Swarm Optimization (CSO), suggested by Bouzidi and Riffi in 2013. Seeking and tracing are its two operating modes, and it is only appropriate for small-scale population optimization; the convergence rate slows as the population grows [26]. The first mode, the seeking mode, models the status of the cat; the second mode, the tracing mode, represents the cat's behavior when tracing a target [27]. There is no universal algorithm that can be applied to solve every problem in the real world; in other words, an algorithm can be used for some problems, but no algorithm can work for all types of problems that exist. Thus, we offer MDSO to solve a selection problem in the recruitment process [28].

The proposed method
This section consists of two subsections. The first describes the data obtained and how they were prepared for use in MDSO. The second describes MDSO in terms of the modifications made to DSO. Figure 1 shows the MDSO pseudocode.

Dataset
The data have been obtained from Korek-telecom's HR department. The selection process in Korek-telecom has three resources: external, internal, and hybrid. The external resource recruits employees from outside the organization and comes in three stages, the first being the phone screening. In this stage, two questions are asked: first, where is the job applicant's current location? And second, how much do they expect to be paid? The second stage, the pre-employment test, includes an IQ test and an English test. Job applicants who pass this stage become the input for the third and last stage. The third stage is the interview, which has five parameters: experience, education qualification, motivation, communication skills, and organizational and cultural fit. The second resource is the internal resource, where the applicant is already an employee of the organization. In this case, the assessment has one stage, the interview, whose parameters are motivation, qualification and education, experience, communication skills, and organizational and cultural fit. The third resource is the hybrid resource, used when the organization decides to recruit for a specific job from both internal and external resources. As per Korek-telecom's standards, each applicant is assessed and recruited according to how highly they score on these parameters. For scoring, a range between 1 and 5 is used, representing Very Bad to Very Good for each parameter, as shown in Tables 2, 3, 4, and 5. Furthermore, the values, or weights, of the parameters were received from the HR department of the subject company and applied in MDSO; this has been added to the dataset section, see Table 1. To adapt DSO to the selected application, we defined numerator and denominator parameters for the interview. Lastly, we were restricted in obtaining the job applicants' information, such as their CVs and test results. Tables 2, 3, 4, and 5 show the dataset, which has been obtained from Korek-telecom and used to compare DSO and MDSO.

The adaptive-mode actions shown in Fig. 1 are as follows:

Face and Suicide: choosing the second-best solution as the best solution when the first-best solution is no longer the best one, using Eq. (3).

Face and Support: this action takes place when there is an overload on the first-best solution. The second-best solution is used to support the current best solution and reduce the overload; once the overload is reduced, the second-best solution returns to its original location. Equations (3) and (4) together represent the face and support action in the adaptive mode: in Eq. (3) the algorithm finds the second-best solution, and in Eq. (4) the algorithm joins the first-best and second-best solutions to reduce the overload on the first-best solution.

Modified Donkey and Smuggler Optimization Algorithm
As mentioned in the previous sections, the modification to the DSO algorithm has been made in the smuggler (non-adaptive) mode, which is used to find the fitness of the solutions. The original equation for finding the fitness of a solution is:

f(x_i) = \frac{\sum_{j} x_{ij}}{\sum_{z} x_{iz}},  (1)

where x_i is solution number i, j indexes the directly proportional parameters of the solution, z indexes the reversely proportional parameters, x_{ij} is the j-th directly proportional parameter of solution i, and x_{iz} is the z-th reversely proportional parameter of solution i. In MDSO, the fitness equation has been changed completely:

f(x_i) = \sum_{j} w_j \, x_{ij}^2,  (2)

where x_{ij} is the j-th parameter of solution i and w_j is the weight of parameter j. In MDSO, each parameter is squared first, then multiplied by its weight; the last step is summing the weighted parameters. Equations (3) and (4) are used in the second part of the algorithm, the donkey (adaptive) mode. Equation (3) chooses the second-best solution as the new best solution when the first-best solution is no longer the best choice:

second\text{-}bestSolution = \arg\min_{i} \left( f(bestSolution) - f(x_i) \right),  (3)

where i ranges over the possible solutions other than the current best. The equation subtracts the fitness of each possible solution from the fitness of the best solution, and the solution that gives the least difference is considered the new best solution.
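The two fitness equations can be sketched in a few lines of Python. This is a minimal illustration: the parameter scores and weights below are hypothetical, not Korek-telecom's actual values.

```python
# Sketch of the DSO and MDSO fitness functions (Eqs. (1) and (2)).
# Scores and weights are illustrative, hypothetical values.

def dso_fitness(direct_params, inverse_params):
    """Eq. (1): sum of the directly proportional parameters divided by
    the sum of the reversely proportional parameters (unweighted)."""
    return sum(direct_params) / sum(inverse_params)

def mdso_fitness(params, weights):
    """Eq. (2): each parameter is squared, multiplied by its weight,
    and the weighted terms are summed."""
    return sum(w * p ** 2 for p, w in zip(params, weights))

scores = [4, 3, 5]           # e.g. salary, location, speaking skills (1-5 scale)
weights = [0.5, 0.25, 0.25]  # hypothetical weights summing to 1
print(dso_fitness([5], [4, 3]))       # 5 / 7 ≈ 0.714
print(mdso_fitness(scores, weights))  # 0.5*16 + 0.25*9 + 0.25*25 = 16.5
```

The squaring in Eq. (2) amplifies high individual scores, so a weighted strong parameter dominates the fitness rather than being averaged away.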
The first- and second-best solutions are joined using Eq. (4):

joinedSolution = bestSolution + second\text{-}bestSolution,  (4)

where bestSolution is the current best solution and second-bestSolution is the second-best solution chosen by Eq. (3).
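The adaptive-mode actions can be sketched as follows, assuming fitness values have already been computed for every applicant; the concrete values are hypothetical, and the joint in Eq. (4) is rendered here as the sum of the two fitness values.

```python
# Sketch of the adaptive (donkey) mode actions, Eqs. (3) and (4).
# Fitness values are hypothetical.

def second_best(fitnesses, best_index):
    """Eq. (3): the solution whose fitness differs least from the
    current best solution's fitness becomes the new best."""
    best_f = fitnesses[best_index]
    candidates = [i for i in range(len(fitnesses)) if i != best_index]
    return min(candidates, key=lambda i: best_f - fitnesses[i])

def face_and_support(fitnesses, best_index):
    """Eq. (4): join the first- and second-best solutions to share
    the overload on the first-best solution."""
    return fitnesses[best_index] + fitnesses[second_best(fitnesses, best_index)]

fits = [16.5, 14.0, 9.25]         # hypothetical MDSO fitness values
print(second_best(fits, 0))       # 1: smallest difference from the best
print(face_and_support(fits, 0))  # 16.5 + 14.0 = 30.5
```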

Experiments and results
As shown in Fig. 1, MDSO is a two-part algorithm: in the first part, the algorithm finds the best solution, and the second part reacts to any changing event to maintain the best solution. These parts are explained below, respectively. Furthermore, the results of comparing MDSO with DSO are evaluated in Sect. 4.2.

Part I: non-adaptive mode
The non-adaptive part of the algorithm is used to find the job applicants' fitness, using Eq. (2) for MDSO and Eq. (1) for DSO. All three recruitment methods (external, internal, and hybrid) are explained as follows:

External resource
This resource has three stages (phone screening, pre-employment test, and interview), where each stage's output becomes the input for the next stage. When the data pass to the next stage, the job applicants are shortlisted. This is done by adding all job applicants' fitness values and dividing by the number of applicants, as shown in Eq. (5).
The output of the shortlist is:

\text{shortlist threshold} = \frac{\sum_{n=1}^{N} f(x_n)}{N},  (5)

where f(x_n) is the fitness of job applicant n, \sum_{n=1}^{N} f(x_n) is the summation of the fitness of all job applicants, and N is the total number of job applicants. The following are the stages of the external resource.

Phone screening The first stage in the external resource is the phone screening, which consists of three parameters: location, salary, and speaking skills. Table 6 shows the data and the output of adapting MDSO in the first stage of the selection problem.
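The shortlisting rule of Eq. (5) can be sketched as follows. Note that the paper defines only the threshold itself; treating "fitness at least the average" as the pass criterion, and the fitness values below, are assumptions for illustration.

```python
# Sketch of the shortlisting rule in Eq. (5). The pass criterion
# (fitness >= average) is an assumption; fitness values are hypothetical.

def shortlist_threshold(fitnesses):
    """Eq. (5): sum of all applicants' fitness divided by their count."""
    return sum(fitnesses) / len(fitnesses)

def shortlist(fitnesses):
    """Indices of applicants who meet or exceed the threshold."""
    t = shortlist_threshold(fitnesses)
    return [i for i, f in enumerate(fitnesses) if f >= t]

fits = [16.5, 9.25, 12.0, 20.0]
print(shortlist_threshold(fits))  # 57.75 / 4 = 14.4375
print(shortlist(fits))            # [0, 3]
```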
The DSO algorithm has been adapted to the same stage with the same data, as shown in Table 7. In DSO, the numerator holds the directly proportional parameters and the denominator holds the reversely proportional parameters. Thus, salary and location are denominator parameters, while speaking skills is a numerator parameter. The procedure is described in Table 8.
As shown in Fig. 2, there is a huge difference between DSO and MDSO in choosing the right person in the first stage: MDSO selects applicant G in the phone screening stage, while DSO prefers applicant H. Comparing the scores of applicants G and H, both have the same score for location. Job applicant H asked for a salary in the range of Good, but job applicant G asked for an ideal salary, in the range of Very Good. Furthermore, job applicant H has a higher English speaking skills score; see Tables 6 and 7.
Considering Korek-telecom's standards, applicant G is more suitable, which is the choice of MDSO. This difference between DSO and MDSO is due to the weight coefficients allocated in the MDSO algorithm. Were we to judge both job applicants without considering the weights, we might choose job applicant H, because adding all parameter scores gives job applicant H a higher fitness, as shown in Table 7. In MDSO, applicant G is preferred because the salary parameter carries 50% of the parameters' total weight, while location and speaking skills together carry the other 50%. Even though job applicant H has a higher English speaking skills score, that score is not as influential as the salary parameter, as stated in Table 1.

Pre-employment test
The second stage of the external resource is the pre-employment test, which consists of two parameters: an IQ test and an English spelling & grammar test. Table 9 shows the data and results of adapting DSO and MDSO in the second stage of the external resource. According to the results, DSO selects B, D, and H, while MDSO prefers D as the best candidate. Applicant D is preferred because the IQ test parameter carries 75% of the parameters' total weight, and the English spelling & grammar test carries the remaining 25%. Since it is not possible in DSO to allocate weight coefficients to each parameter, DSO finds three candidates as the best solution, while MDSO selects only one. This stage shows the importance of allocating weight coefficients to the parameters.

Interview The third and last stage in the external resource of the selection process is the interview. It has five parameters: Motivation, Qualification & Education, Experience, Organizational & cultural fit, and Communication skills. Table 10 shows the collected data and the output of DSO and MDSO in the third stage of the external resource. Furthermore, Fig. 4 illustrates the difference between DSO and MDSO in choosing the right person for a specific position in the last stage of the external resource. The issue with DSO is that it produces the same fitness for more than one job applicant; for example, applicants H and B have the same fitness. In MDSO, applicant B is preferred because this applicant has ideal scores in the Qualification & Education and Experience parameters, which have the highest weight coefficients among all parameters, as shown in Table 10. In MDSO, by contrast, the fitness values of all the applicants are different. Furthermore, according to the data obtained from the HR department of Korek-telecom, job applicant B was indeed recruited as an employee there, matching the choice of MDSO.
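The tie-breaking effect described above can be illustrated with a small sketch. The 0.75/0.25 weights follow the text, while the applicant scores are hypothetical.

```python
# Why an unweighted fitness can tie while the weighted MDSO fitness does not.
# Scores are hypothetical; the weights 0.75 (IQ) / 0.25 (English) follow the text.

def mdso_fitness(params, weights):
    return sum(w * p ** 2 for p, w in zip(params, weights))  # Eq. (2)

a = [5, 3]  # applicant with the higher IQ score
b = [3, 5]  # applicant with the higher English score

# The unweighted totals tie, so an unweighted fitness cannot separate them.
assert sum(a) == sum(b)

w = [0.75, 0.25]
print(mdso_fitness(a, w))  # 0.75*25 + 0.25*9  = 21.0
print(mdso_fitness(b, w))  # 0.75*9  + 0.25*25 = 13.0
```

With the weights applied, the applicant who is strong on the heavily weighted parameter clearly wins, which mirrors why MDSO selects a single best candidate where DSO reports three.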

Internal resource
The second recruitment method in Korek-telecom's hiring policy is internal recruitment, which has five parameters: Motivation, Qualification & Education, Experience, Organizational & cultural fit, and Communication skills. Table 11 shows the data and results of adapting DSO and MDSO to the selection problem. As shown in Fig. 5, the DSO algorithm selects three job applicants as the best solution: applicants B, D, and E. However, MDSO selects only one first-best solution, job applicant J.

MDSO outperforms DSO in selecting the right job applicant: MDSO chose applicant J as the first-best solution, the applicant with the highest scores on the most important parameters according to Korek-telecom's standards, as stated in Table 1. Moreover, the HR department in Korek-telecom hired the same job applicant (J) as an employee.

Hybrid resource
The hybrid resource is the third recruitment method in Korek-telecom's hiring policy. To use this resource, the solutions for both the internal and external resources have to be obtained first. The first-best solutions of the two resources together then become the best solution of the hybrid resource. For instance, using MDSO as an example, all results of the external and internal resources are entered into the hybrid resource: applicant B is the first-best solution of the external resource and applicant J is the first-best solution of the internal resource, so these two together are selected as the best solution of the hybrid resource.
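The hybrid combination can be sketched as follows; the applicant labels follow the paper's example, while the fitness values are hypothetical.

```python
# Sketch of the hybrid resource: the first-best solutions of the external
# and internal resources together form the hybrid best solution.
# Applicant labels follow the paper's example; fitness values are hypothetical.

def best(applicants):
    """Applicant with the highest fitness in one resource."""
    return max(applicants, key=applicants.get)

external = {"B": 21.0, "H": 18.5}
internal = {"J": 22.0, "D": 17.0}

hybrid_best = {best(external), best(internal)}
print(sorted(hybrid_best))  # ['B', 'J']
```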

Part II: adaptive mode
Once the best solution is obtained in the first part, the second part of the algorithm, the adaptive or donkey mode, takes place. The adaptive part of the algorithm has the same options and processes in all resources (external, internal, and hybrid), so we describe only the external resource of MDSO as an example. As shown in Fig. 4, for MDSO, the first option is a permanent replacement: if the first-best solution, employee B, leaves the job permanently, the algorithm replaces employee B with employee H. The second option is a provisional replacement: the algorithm uses this option when the first-best solution leaves the job temporarily, and it likewise replaces employee B with employee H, using Eq. (3). The third option is the support employee, taken when there is an overload on employee B; in such cases, the algorithm joins employee B with job applicant H simultaneously, using Eqs. (3) and (4). The DSO algorithm fails in this second (adaptive) mode because, as mentioned for the internal and external resources, it produces two first-best solutions, and the organization cannot decide which job applicant is the best solution. Otherwise, the adaptive part of the algorithm has the same process for both DSO and MDSO.

Evaluation of both DSO and MDSO
This section gives information about the evaluation of both DSO and MDSO when they have been adapted to the selection problem, as shown in Table 12.

Conclusion
This paper suggests the MDSO algorithm, which originated from the DSO algorithm. The DSO algorithm requires some enhancements in terms of finding the best solution. Furthermore, the DSO algorithm has been adapted to only a small number of real-world applications; thus, it should be adapted to different real-world problems. Moreover, without modification, the original DSO could not be adapted to the recruitment process. Using MDSO, all the above restrictions have been addressed. In MDSO, we have modified the DSO algorithm in its first part, the non-adaptive mode. The modification aims to make the algorithm more accurate so as to reduce the risk of choosing inappropriate job applicants in organizations. MDSO shows more accuracy in finding the best solution due to the modifications made to DSO, which has been demonstrated by applying both DSO and MDSO to the same application with the same data. To reduce the probability of selecting the wrong people in organizations, we have developed software that performs the selection process using the MDSO algorithm.
In future work, we would like to adapt MDSO to different applications, such as the traveling salesman problem, ambulance routing, and packet routing, to ensure that it can be applied to other applications. Moreover, the MDSO algorithm can be assessed with more test functions, such as the CEC 2019 test functions. Furthermore, the MDSO algorithm can be hybridized with the Fitness-Dependent Optimizer (FDO) algorithm, because both algorithms are swarm-intelligence-based meta-heuristics, and both are flexible and can be modified easily. MDSO should also be evaluated against newer algorithms such as the FOX optimizer and the Ant Nesting Algorithm [29-31].
Data availability Data sets were collected by UKH from Korek-telecom's HR department. Derived data supporting the findings of this study are available from the corresponding author [Nazir] on request.

Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.