3.1. System Model
The HetNet architecture consists of macro base stations (BSs), each overlaid with three small cells, and is composed of many multi-gigabit-per-second 5G small-cell base stations (SBSs). The three sectors of each macro BS are independent cells, whereas the small cells are omnidirectional single-sector cells. An example HetNet placement with three small cells on a single macro cell is shown in Fig. 1. The macro- and small-cell radii are R and r, respectively.
All cells operate in the same frequency band with a frequency reuse factor of one. The sets of macro and small cells are denoted \({N}_{k}=\left\{1,\dots ,{N}_{m}\right\}\) and \({N}_{l}=\left\{1,\dots ,{N}_{n}\right\}\), respectively. The network's users follow a random mobility model and form the set \({N}_{u}=\left\{1,\dots ,U\right\}\), with the U users arbitrarily dispersed throughout the network. Requested traffic is delivered to the UEs by either macro or small cells. A distributed self-organizing network accumulates HO information and optimises the HCPs in each small and macro cell. The HO operation is initiated whenever a user equipment (UE) switches from one network node to another, whether within the same tier or across tiers. The serving cell decides whether to begin the HO procedure towards a target cell based on the UE's measurement report (MR). Eq. (1) expresses the path-loss model for various urban bands between a BS and the user:
$${PL}_{u,k,l}={20log}_{10}\left(\frac{4\pi {r}_{0}}{{\lambda }_{l}}\right)+{20log}_{10}\left(\frac{{d}_{u,k}}{{d}_{0}}\right)+X$$
1
where:
$$BS=\left\{\begin{array}{c}small cell if l=1\\ macro cell otherwise\end{array}\right.$$
2
where \({d}_{0}\) is a reference distance, assumed to be 50 m \(({d}_{u,k}\ge {d}_{0})\), \({d}_{u,k}\) is the distance between user u and base station k, and \({\lambda }_{l}\) is the wavelength of the carrier frequency of cell type l. X is a zero-mean Gaussian shadowing term with a variance of 2.
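As an illustrative sketch (not the authors' implementation), Eq. (1) can be evaluated as follows; the 3.5 GHz carrier frequency, the example distance, and the function name are assumptions for illustration only:

```python
import math
import random

def path_loss_db(d_uk, freq_hz, d0=50.0, shadow_sigma=math.sqrt(2.0), rng=None):
    """Urban path loss of Eq. (1): free-space loss at the reference
    distance d0, plus log-distance decay, plus Gaussian shadowing X."""
    rng = rng or random.Random(0)
    c = 3e8                            # speed of light (m/s)
    lam = c / freq_hz                  # carrier wavelength lambda_l
    X = rng.gauss(0.0, shadow_sigma)   # zero-mean shadowing term
    return (20 * math.log10(4 * math.pi * d0 / lam)
            + 20 * math.log10(d_uk / d0)
            + X)

# Illustrative: 3.5 GHz small cell, user 200 m from the BS
pl = path_loss_db(200.0, 3.5e9)
```

Doubling the distance adds roughly 6 dB, which is the expected behaviour of the 20·log10 distance term.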
The RLF is minimised subject to the highest possible quality-of-service (QoS) requirements. For the QoS to be satisfied, each UE u must fulfil a minimum data-rate criterion. Eq. (3) models the SINR of a UE as a function of its location and channel:
$${SINR}_{u,k,l}=\frac{{p}_{u,k,l}{g}_{u,k,l}{b}_{u,k}}{\sum _{i\in K\backslash \left\{k\right\}}\sum _{j\in U\backslash \left\{u\right\}}{p}_{i,j}{g}_{i,j}+{P}_{AWGN}}$$
3
where \({p}_{u,k,l}\) is the received signal power and \({g}_{u,k,l}\) its channel gain. \({b}_{u,k}\) is the binary association indicator for user u: \({b}_{u,k}=1\) indicates that user u is associated with BS k, and \({b}_{u,k}=0\) otherwise. \({p}_{i,j}\) is the received power of the interfering signals at the UE, and \({P}_{AWGN}\) is the power of the additive white Gaussian noise.
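Eq. (3) amounts to dividing the serving link's received power by the aggregate interference plus noise. A minimal sketch (the function name and the numeric values are illustrative assumptions, all powers in watts):

```python
def sinr(p_serv, g_serv, interferers, noise_w):
    """SINR of Eq. (3): serving received power times channel gain over
    the sum of interfering received powers plus AWGN power.
    `interferers` is a list of (p_ij, g_ij) pairs from non-serving BSs."""
    interference = sum(p * g for p, g in interferers)
    return (p_serv * g_serv) / (interference + noise_w)

# Illustrative numbers: one serving link, two interfering links
s = sinr(1e-9, 0.8, [(2e-10, 0.5), (1e-10, 0.3)], 1e-11)
```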
3.2. Functioning
Details about the heterogeneous network are gathered from the core network for pre-processing. The collected data undergoes pre-processing steps such as normalization. The normalized data is sent to KGMO to estimate the initial kinetic energy (KE) and initial velocity of the users. Based on the initial velocity of the users, a random particle is generated to calculate the fitness function for the evaluation of the technique. The entire process for yielding an appropriate fitness function is depicted in Fig. 2.
Once the fitness function is calculated, the best fitness is selected to update the KE and velocity and, consequently, to generate a new random particle. This process, performed by the KGMO algorithm, is repeated until the best fitness is obtained. The resultant data is then given to the data collection unit so that the network functions more accurately, as shown in Fig. 3. The network performance is calculated in terms of throughput, BER, and latency based on user location and mobility management.
The chief goal of this work is to solve the fast-convergence issue of KGMO; to this end, the ALO technique is proposed for modifying the inertia weight. First, the mathematical formulation of KGMO is given as follows:
3.3. KGMO – The proposed algorithm
KGMO is an algorithm that uses kinetic energy as a measure of performance, with gas molecules as the agents in the search space. The gas molecules keep moving in the container until its lowest temperature and energy are reached. Gas molecules attract one another through Van der Waals forces, and their positive and negative charges give rise to electrical pressure. Each gas molecule in KGMO has four properties: position, kinetic energy, velocity, and mass. Each molecule's kinetic energy influences its speed and location. The goal of a gas molecule is to reach the coldest possible position in the search space by travelling through it.
Consider a system of N agents (gas molecules). The position of the ith agent is defined as:
$${X}_{i}=\left({X}_{i}^{1},..,{X}_{i}^{d},\dots {X}_{i}^{n}\right), for(i=\text{1,2},\dots ,N)$$
4
where \({X}_{i}^{d}\) signifies the position of the ith agent in the dth dimension.
The velocity of the ith agent is given by
$${V}_{i}={(v}_{i}^{1},\dots {v}_{i}^{d},\dots {v}_{i}^{n}),for(i=\text{1,2},\dots ,N)$$
5
in which \({v}_{i}^{d}\) is the ith agent's velocity in the dth dimension.
The kinetic energy of the gas molecules is related to their velocity and can be computed using the Boltzmann distribution; it represents movement in the environment:
$${k}_{i}^{d}\left(t\right)=\frac{3}{2}Nb{T}_{i}^{d}\left(t\right),{K}_{i}=\left({k}_{i}^{1},..{k}_{i}^{d},\dots {k}_{i}^{n}\right), for (i=\text{1,2}..N)$$
6
where b is the Boltzmann constant and \({T}_{i}^{d}\left(t\right)\) is the temperature of the ith agent in the dth dimension at time t.
The molecule's velocity is updated at each time step as
$${v}_{i}^{d}\left(t+1\right)={T}_{i}^{d}\left(t\right)w{v}_{i}^{d}\left(t\right)+{C}_{1}{rand}_{i}\left(t\right)\left({gbest}^{d}-{X}_{i}^{d}\left(t\right)\right)+{C}_{2}{rand}_{i}\left(t\right)\left({pbest}_{i}^{d}\left(t\right)-{X}_{i}^{d}\left(t\right)\right)$$
7
As time goes on, \({T}_{i}^{d}\) decreases exponentially for the converging molecules and is calculated as
$${T}_{i}^{d}\left(t\right)=0.95\times {T}_{i}^{d}(t-1)$$
8
Here, \({pbest}_{i}=\left({pbest}_{i}^{1},\dots ,{pbest}_{i}^{n}\right)\) is the best previous position of the ith gas molecule, and \(gbest=\left({gbest}^{1},\dots ,{gbest}^{n}\right)\) is the best previous position found by all gas molecules in the container. The initial velocity and position of each particle are determined by random vectors. The velocity of the gas molecules is bounded to \([{v}_{min},{v}_{max}]\): if \(\left|{v}_{i}\right|>{v}_{max}\), then \(\left|{v}_{i}\right|={v}_{max}\). w is the inertia weight, \({rand}_{i}\left(t\right)\) is a random variable uniformly distributed in the interval [0,1] at time t, which gives the search method a stochastic quality, and \({C}_{1}\) and \({C}_{2}\) are acceleration coefficients.
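The velocity update of Eq. (7) can be sketched per dimension as follows; the clipping to \([{v}_{min},{v}_{max}]\) follows the bound described above, while the function name, default coefficients, and bounds are illustrative assumptions:

```python
import random

def update_velocity(v, x, pbest, gbest, T, w, c1=2.0, c2=2.0,
                    v_min=-1.0, v_max=1.0, rng=None):
    """KGMO velocity update of Eq. (7), applied per dimension,
    with the result clipped to [v_min, v_max]."""
    rng = rng or random.Random(0)
    new_v = []
    for d in range(len(v)):
        r = rng.random()                  # rand_i(t) in [0, 1]
        vd = (T * w * v[d]
              + c1 * r * (gbest[d] - x[d])
              + c2 * r * (pbest[d] - x[d]))
        new_v.append(max(v_min, min(v_max, vd)))  # velocity bound
    return new_v
```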
As long as the container holds only one type of gas, the mass m of the gas molecules is drawn randomly from the range \(0<m\le 1\); once drawn, this value remains constant throughout the algorithm's execution. Using a random value allows the technique to mimic a variety of gases.
The molecule's position can be determined from the equations of motion used in physics.
$${X}_{i}^{d}\left(t+1\right)=\frac{1}{2}{a}_{i}^{d}\left(t+1\right){t}^{2}+{v}_{i}^{d}\left(t+1\right)t+{X}_{i}^{d}\left(t\right)$$
9
where \({a}_{i}^{d}\) denotes the ith agent's acceleration in the dth dimension.
Using the acceleration formula, we can derive
$${a}_{i}^{d}=\frac{\left(d{v}_{i}^{d}\right)}{dt}$$
10
From the kinetic-energy relation of the gas molecules, we can also conclude that
$$d{k}_{i}^{d}=\frac{1}{2}m{\left(d{v}_{i}^{d}\right)}^{2}\Rightarrow d{v}_{i}^{d}=\sqrt{\frac{2\left(d{k}_{i}^{d}\right)}{m}}$$
11
So, from Eqs. (10) and (11), the acceleration is defined as
$${a}_{i}^{d}=\frac{\sqrt{\frac{2\left(d{k}_{i}^{d}\right)}{m}}}{dt}$$
12
Over a time interval \(\varDelta t\), Eq. (12) can be written as
$${a}_{i}^{d}=\frac{\sqrt{\frac{2\left(\varDelta {k}_{i}^{d}\right)}{m}}}{\varDelta t}$$
13
Taking a unit time interval (\(\varDelta t=1\)), the acceleration becomes
$${a}_{i}^{d}=\sqrt{\frac{2\left(\varDelta {k}_{i}^{d}\right)}{m}}$$
14
Then, from Eqs. (9) and (14), the position of the molecule is computed by
$${X}_{i}^{d}\left(t+1\right)=\frac{1}{2}{a}_{i}^{d}\left(t+1\right)\varDelta {t}^{2}+{v}_{i}^{d}\left(t+1\right)\varDelta t+{X}_{i}^{d}\left(t\right)⟹$$
$${X}_{i}^{d}\left(t+1\right)=\frac{1}{2}\sqrt{\frac{2\left(\varDelta {k}_{i}^{d}\left(t+1\right)\right)}{m}}\varDelta {t}^{2}+{v}_{i}^{d}\left(t+1\right)\varDelta t+{X}_{i}^{d}\left(t\right)$$
15
To keep things simple, the mass m is randomly generated once in each run of the algorithm and is the same for all molecules. Taking \(\varDelta t=1\) gives
$${X}_{i}^{d}\left(t+1\right)=\frac{1}{2}\sqrt{\frac{2\left(\varDelta {k}_{i}^{d}\left(t+1\right)\right)}{m}}+{v}_{i}^{d}\left(t+1\right)+{X}_{i}^{d}\left(t\right)$$
16
The minimum fitness is found using
$${pbest}_{i}={X}_{i}, if f\left({X}_{i}\right)<f\left({pbest}_{i}\right)$$
$$gbest={X}_{i}, if f\left({X}_{i}\right)<f\left(gbest\right)$$
17
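The position update of Eq. (16) and the best-keeping rule of Eq. (17) can be sketched as below for a minimization objective; the helper names and the one-dimensional form are illustrative assumptions:

```python
import math

def update_position(x, v_next, dk, m):
    """Position update of Eq. (16) for one dimension (unit time step):
    x(t+1) = 0.5 * sqrt(2*dk/m) + v(t+1) + x(t)."""
    return 0.5 * math.sqrt(2.0 * dk / m) + v_next + x

def update_bests(x, fx, pbest, pbest_f, gbest, gbest_f):
    """Minimum-fitness bookkeeping of Eq. (17): keep the personal and
    global bests whenever a lower objective value is found."""
    if fx < pbest_f:
        pbest, pbest_f = x, fx
    if fx < gbest_f:
        gbest, gbest_f = x, fx
    return pbest, pbest_f, gbest, gbest_f
```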
The inertia-weight change in Eq. (8) is obtained using the ALO algorithm's best fitness. The ALO mathematical model is summarised as follows:
3.3.1. Operators of the ALO procedure
The ALO algorithm simulates the interaction between antlions and ants in a trap: ants roam the search space while antlions hunt them and become fitter using traps. Because ants move stochastically in nature when foraging, the following random walk is used to mimic their movement:
$$X\left(t\right)=\left[0,cumsum\left(2r\left({t}_{1}\right)-1\right),cumsum\left(2r\left({t}_{2}\right)-1\right),\dots ,cumsum\left(2r\left({t}_{n}\right)-1\right)\right]$$
18
where cumsum computes the cumulative sum, n is the maximum number of iterations, t denotes the step of the random walk, and r(t) is a stochastic function defined as:
$$r\left(t\right)=\left\{\begin{array}{c}1 if rand>0.5\\ 0 if rand \le 0.5\end{array}\right.$$
19
where rand is a random number drawn uniformly from [0,1] at each step of the random walk.
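Eqs. (18)–(19) describe a cumulative walk whose steps are ±1. A minimal sketch (function name and seed are illustrative):

```python
import random

def random_walk(n_iter, rng=None):
    """Cumulative random walk of Eqs. (18)-(19): each step adds +1 if
    rand > 0.5 and -1 otherwise (i.e., 2*r(t) - 1 with r in {0, 1})."""
    rng = rng or random.Random(42)
    walk = [0.0]                       # the walk starts at 0
    for _ in range(n_iter):
        step = 1.0 if rng.random() > 0.5 else -1.0
        walk.append(walk[-1] + step)
    return walk
```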
In the following matrix, the position of ants is kept and used during optimization:
$${M}_{Ant}=\left[\begin{array}{cccc}{A}_{1,1}& {A}_{1,2}& \cdots & {A}_{1,d}\\ {A}_{2,1}& {A}_{2,2}& \cdots & {A}_{2,d}\\ ⋮& ⋮& \ddots & ⋮\\ {A}_{n,1}& {A}_{n,2}& \cdots & {A}_{n,d}\end{array}\right]$$
20
in which each ant's position is stored as a row of M_Ant, A_{i,j} is the value of the j-th variable (dimension) of the i-th ant, and M_Ant therefore has n × d entries.
Ants are analogous to particles in PSO or individuals in GA: an ant's position represents the parameters of a candidate solution. The matrix M_Ant saves the positions of all ants (the variables of all solutions) during optimization.
Optimization is used to evaluate each ant's fitness (objective function), which is stored in the subsequent matrix:
$${M}_{OA}=\left[\begin{array}{c}f\left(\left[{A}_{1,1},{A}_{1,2},\cdots ,{A}_{1,d}\right]\right)\\ f\left(\left[{A}_{2,1},{A}_{2,2},\cdots ,{A}_{2,d}\right]\right)\\ ⋮\\ f\left(\left[{A}_{n,1},{A}_{n,2},\cdots ,{A}_{n,d}\right]\right)\end{array}\right]$$
21
where n is the number of ants, A_{i,j} is the value of the jth dimension of the ith ant, f is the objective function, and M_OA is the matrix that preserves each ant's fitness.
We assume that the antlions are also lurking somewhere in the search space. The following matrices are used to keep track of their positions and fitness values:
$${M}_{Antlion}=\left[\begin{array}{cccc}{AL}_{1,1}& {AL}_{1,2}& \cdots & {AL}_{1,d}\\ {AL}_{2,1}& {AL}_{2,2}& \cdots & {AL}_{2,d}\\ ⋮& ⋮& \ddots & ⋮\\ {AL}_{n,1}& {AL}_{n,2}& \cdots & {AL}_{n,d}\end{array}\right]$$
22
where M_Antlion keeps track of the positions of all antlions and AL_{i,j} is the value of the j-th dimension of the i-th antlion.
$${M}_{OAL}=\left[\begin{array}{c}f\left(\left[{AL}_{1,1},{AL}_{1,2},\cdots ,{AL}_{1,d}\right]\right)\\ f\left(\left[{AL}_{2,1},{AL}_{2,2},\cdots ,{AL}_{2,d}\right]\right)\\ ⋮\\ f\left(\left[{AL}_{n,1},{AL}_{n,2},\cdots ,{AL}_{n,d}\right]\right)\end{array}\right]$$
23
where n is the number of antlions, f is the objective function, and M_OAL is the matrix that stores each antlion's fitness.
For optimization, the following rules are observed:

- Ants explore the search space using various random walks.
- Random walks are applied to all dimensions of the ants.
- The antlions' traps affect the ants' random walks.
- Fitter antlions build larger pits (the higher the fitness, the larger the pit).
- Antlions with larger pits are more likely to trap ants.
- In each iteration, every ant can be caught by an antlion, and the fittest antlion (the elite) influences the movement of all ants.
- The range of the random walk shrinks adaptively as ants slide towards the antlions.
- If an ant becomes fitter than an antlion, it has been caught and is pulled under the sand.
- After each hunt, an antlion relocates to the position of its latest prey and rebuilds its pit to increase its chances of catching another victim.
3.3.1.1. Random walks of ants
All random walks are based on Eq. (18), and ants update their positions with a random walk at every step of optimization. However, Eq. (18) cannot be used directly to update the positions of ants because every search space has boundaries. To keep the random walks inside the search space, they are normalized using the following equation:
$${X}_{i}^{t}=\frac{\left({X}_{i}^{t}-{a}_{i}\right)\times \left({d}_{i}-{c}_{i}^{t}\right)}{\left({d}_{i}^{t}-{a}_{i}\right)}+{c}_{i}$$
24
where \({a}_{i}\) is the minimum of the random walk in the i-th variable, \({b}_{i}\) is the maximum of the random walk in the i-th variable, \({c}_{i}^{t}\) is the minimum of the i-th variable at iteration t, and \({d}_{i}^{t}\) is the maximum of the i-th variable at iteration t.
To ensure that random walks occur within the search space, Eq. (24) should be used in each iteration.
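A common implementation reading of Eq. (24) is a min-max rescaling that maps the raw walk from its own range [a_i, b_i] into the current bounds [c_i^t, d_i^t]; the function name is an illustrative assumption:

```python
def normalize_walk(walk, a_i, b_i, c_it, d_it):
    """Min-max normalization in the spirit of Eq. (24): rescale a raw
    random walk from [a_i, b_i] into the current bounds [c_it, d_it]."""
    return [(x - a_i) * (d_it - c_it) / (b_i - a_i) + c_it for x in walk]
```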
3.3.1.2. Trapping in antlion’s pits
As discussed previously, antlion traps affect the random walks of ants. The following equations model this assumption mathematically:
$${C}_{i}^{t}={Antlion}_{j}^{t}+{C}^{t}$$
25
$${d}_{i}^{t}{=Antlion}_{j}^{t}+{d}^{t}$$
26
where \({c}^{t}\) is the minimum of all variables at the t-th iteration, \({d}^{t}\) is the maximum of all variables at the t-th iteration, \({c}_{i}^{t}\) and \({d}_{i}^{t}\) are the minimum and maximum of all variables for the i-th ant, and \({Antlion}_{j}^{t}\) is the position of the selected j-th antlion at that iteration.
3.3.1.3. Building trap
A roulette wheel is used to simulate the antlions' hunting abilities. During optimization, the ALO procedure must use a roulette wheel operator to select antlions based on fitness. Fitter antlions have a better chance of taking down ants thanks to this process.
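A roulette wheel selects an antlion with probability proportional to a positive weight. A minimal sketch (the function name is an assumption; for a minimization objective the caller would pass weights such as 1/(1 + f)):

```python
import random

def roulette_select(weights, rng=None):
    """Roulette-wheel selection: the i-th antlion is chosen with
    probability weights[i] / sum(weights)."""
    rng = rng or random.Random(0)
    pick = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= pick:
            return i
    return len(weights) - 1   # guard against floating-point round-off
```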
3.3.1.4. Sliding ants towards antlion
Since antlions build traps proportional to their fitness, the mechanisms presented so far let ants move at random. However, once an antlion detects an ant in its trap, it shoots sand outwards from the pit's centre. This behaviour drags down the ant that is trying to escape. To model it mathematically, the radius of the ants' random-walk hyper-sphere is decreased adaptively. In light of this, the following equations are proposed:
$${C}^{t}=\frac{{C}^{t}}{I}$$
27
$${d}^{t}=\frac{{d}^{t}}{I}$$
28
where \({c}^{t}\) is the vector containing the minimum of all variables at the t-th iteration, \({d}^{t}\) is the vector containing the maximum of all variables at the t-th iteration, and I is a shrinking ratio.
In Eqs. (27) and (28), \(I={10}^{w}t/T\), where t is the current iteration and T is the maximum number of iterations. The exponent w is set based on the current iteration: w = 2 when \(t>0.1T\), w = 3 when \(t>0.5T\), w = 4 when \(t>0.75T\), and w = 6 when \(t>0.95T\). The constant w fine-tunes the level of exploitation accuracy. With these equations, the ants' positions are updated within a progressively smaller radius, approximating the sliding motion of an ant inside the pit, so that the promising region of the search space is exploited thoroughly.
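The shrinking schedule of Eqs. (27)–(28) can be sketched as follows; the reading of the ratio as I = 10^w · t/T, the guard against I < 1 in early iterations, and the function names are assumptions of this sketch:

```python
def shrink_ratio(t, T):
    """Shrinking ratio I with the iteration-dependent exponent w
    given in the text (w = 2, 3, 4, 6 at the stated thresholds)."""
    w = 1
    if t > 0.1 * T:
        w = 2
    if t > 0.5 * T:
        w = 3
    if t > 0.75 * T:
        w = 4
    if t > 0.95 * T:
        w = 6
    return (10 ** w) * t / T

def shrink_bounds(c, d, t, T):
    """Eqs. (27)-(28): divide the current bounds by the ratio I."""
    I = max(1.0, shrink_ratio(t, T))   # avoid enlarging bounds early on
    return c / I, d / I
```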
3.3.1.5. Catching prey and re-building the pit
The hunt concludes when an ant reaches the bottom of the pit and is caught by the antlion's jaws, after which the antlion pulls the ant under the sand and consumes its body. To emulate this process, catching prey is assumed to occur when an ant becomes fitter than its corresponding antlion (i.e., it is dragged into the sand). To increase its chances of catching new prey, the antlion then updates its position to the most recent position of the hunted ant. The following equation is proposed in this regard:
$${Antlion}_{j}^{t}={Ant}_{i}^{t} if f\left({Ant}_{i}^{t}\right)>f\left({Antlion}_{j}^{t}\right)$$
29
where t is the current iteration, \({Antlion}_{j}^{t}\) is the position of the selected j-th antlion at the t-th iteration, and \({Ant}_{i}^{t}\) is the position of the i-th ant at the t-th iteration.
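Eq. (29) states the rule in terms of fitness; under the minimization objective of Eq. (17), "fitter" corresponds to a lower objective value, which this sketch assumes (the function name is illustrative):

```python
def catch_prey(ant_pos, ant_fit, antlion_pos, antlion_fit):
    """Eq. (29): the antlion jumps to the caught ant's position when the
    ant becomes fitter (here: a lower objective value, for minimization)."""
    if ant_fit < antlion_fit:
        return ant_pos, ant_fit
    return antlion_pos, antlion_fit
```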
3.3.1.6. Elitism
Elitism allows evolutionary algorithms to preserve the best solution(s) obtained at any stage of the optimization process. Here, the best antlion obtained in each iteration is saved and treated as the elite. Since the elite is the fittest antlion, it should influence the movement of all ants during subsequent iterations. Therefore, every ant is assumed to walk randomly around an antlion selected by the roulette wheel and around the elite simultaneously:
$${Ant}_{i}^{t}=\frac{{R}_{A}^{t}+{R}_{E}^{t}}{2}$$
30
where \({R}_{A}^{t}\) is the random walk around the roulette-wheel-selected antlion at the t-th iteration, \({R}_{E}^{t}\) is the random walk around the elite at the t-th iteration, and \({Ant}_{i}^{t}\) is the position of the i-th ant at the t-th iteration.
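Eq. (30) is a per-dimension average of the two walks. A minimal sketch (function name is an assumption):

```python
def elitism_position(walk_antlion, walk_elite):
    """Eq. (30): the ant's new position is the per-dimension average of
    its walk around a roulette-selected antlion and around the elite."""
    return [(a + e) / 2.0 for a, e in zip(walk_antlion, walk_elite)]
```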
3.3.1.7. ALO algorithm
To approximate the global optimum of an optimization problem, the ALO algorithm is modelled as a three-tuple function:
$$ALO\left(A,B,C\right)$$
31
where A is a function that generates the random initial solutions, B is a function that manipulates the initial population provided by A, and C is a function that returns true when the end criterion is satisfied and false otherwise. The functions A, B, and C are defined as follows:
$$\varnothing \underrightarrow{A}\{{M}_{Ant},{M}_{OA},{M}_{Antlion},{M}_{OAL}\}$$
32
$$\left\{{M}_{Ant},{M}_{Antlion}\right\}\underrightarrow{B}\{{M}_{Ant},{M}_{Antlion}\}$$
33
$$\left\{{M}_{Ant},{M}_{Antlion}\right\}\underrightarrow{C}\{true, false\}$$
34
where M_Ant is the ant position matrix, M_Antlion is the antlion position matrix, M_OA is the ant fitness matrix, and M_OAL is the antlion fitness matrix. The M_OAL values serve as the input to Eq. (8) in order to tackle the rapid-convergence problem of KGMO. The results of the proposed method are then compared with those of the existing procedures.