This research work proposes a CRN architecture that is secure against PUEA. The framework comprises an online ANN-based PUEA detection model whose hyperparameters are optimally tuned using the MGA.

## 3.4. Proposed Online Adaptive Memory-based Genetically Optimized Artificial Neural Network for PUEA Detection

This framework ensures security against PUEA in a multihop CRN environment, as an extension of our previous works [16, 17]. Any transmission that imitates a PU transmission falls under the ambit of PUEA. Since the only requirement of this attack is that normal SUs unwittingly sense it as a PU signal, the attack can be carried out in several ways. For instance, an attacker could position itself as close as possible to a PU station and launch an attack. Alternatively, it could emulate the PU signal (e.g., the PU's transmission power). Profiling PUEA behavior is therefore challenging. Profiling PU behavior, by contrast, is comparatively easy, since a PU does not change its form over long periods of time: most PU stations are stationary (e.g., a TV tower) and do not generally change their transmission characteristics over long periods. In conclusion, the idea is to build an online training dataset consisting of PU signal energy data (in dB) and use it to train an artificial-intelligence-based classifier online to detect the presence of PUEA.

With reference to [15], the relationship between the transmitted and the received signal power is modelled using log-normal shadowing of the communication channel and the path loss model. With path-loss exponent \(\delta\), the received signal energy \({E}_{r}\) is proportional to \({d}^{-\delta }\), where \(d\) is the distance between the transmitter and the receiver. In addition, \({E}_{r}\) is proportional to the shadowing random variable \(S\), which can be expressed as,

$$S=\text{exp}\left(\alpha \beta \right)$$

1

where \(\alpha =\frac{\ln 10}{10}\), and \(\beta\) follows a normal distribution. The energy received at the receiver's end can be expressed as,

$${E}_{r}={E}_{t}{d}^{-\delta }S$$

2

where \({E}_{t}\) is the energy used by the transmitter to transmit the signal. For every cycle of \(N\) simulation instances, these received-energy values are processed to build the online adaptive training dataset for PUEA detection. Once trained, the classifier distinguishes a PU signal from a PUEA signal across the continuing simulation states of the network, keeping the CRN environment safe and secure for data transmission.
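As a minimal sketch of this energy model, the snippet below draws received-energy samples according to Eqs. (1) and (2). The shadowing standard deviation, distance, path-loss exponent, and cycle length \(N\) are illustrative assumptions, not values from this work.

```python
import math
import random

def received_energy(e_t, d, delta, sigma_beta=4.0, rng=None):
    """E_r = E_t * d^(-delta) * S, with log-normal shadowing
    S = exp(alpha * beta), where beta ~ N(0, sigma_beta) in dB
    (Eqs. 1 and 2; sigma_beta is an illustrative assumption)."""
    rng = rng or random.Random(0)
    alpha = math.log(10) / 10.0
    beta = rng.gauss(0.0, sigma_beta)
    s = math.exp(alpha * beta)
    return e_t * d ** (-delta) * s

# One training cycle: N received-energy samples converted to dB.
rng = random.Random(42)
N = 5  # illustrative cycle length
samples_db = [10 * math.log10(received_energy(1.0, 100.0, 3.0, rng=rng))
              for _ in range(N)]
```

In a simulation, each cycle of such samples would be labelled (PU vs. PUEA) and appended to the online training set.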

The PUEA detection model of OAM-GANN consists of an ANN whose hyperparameters, such as the learning rate, number of epochs, and batch size, are optimally tuned using the Memory-based Genetic Algorithm (MGA).

The ANN consists of an input layer, hidden layers, and an output layer. The input layer accepts the input data and passes it to the hidden layer of neurons for learning. The hidden layer comprises several layers of hidden units that learn from the incoming data by computing the weighted sum of the inputs and a bias. This computation can be expressed as follows,

$$h\left(X\right)=sigmoid\left(\sum _{i=1}^{n}{w}_{i}{x}_{i}+b\right)$$

3

where \(h\) is the hidden-state output activated by the \(sigmoid\) activation function, \(X\) is the input data vector, \({x}_{i}\) is an element of the input vector, \({w}_{i}\) is the corresponding weight, and \(b\) is the bias. The output layer is the final layer of the ANN, transforming the hidden-layer output into a classification output.
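Equation (3) for a single hidden unit can be sketched as follows; the input and weight values are arbitrary placeholders.

```python
import math

def sigmoid(z):
    """Logistic activation: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def hidden_unit(x, w, b):
    """Eq. (3): h(X) = sigmoid(sum_i w_i * x_i + b)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

h = hidden_unit([0.5, -1.2], [0.8, 0.3], 0.1)  # a value in (0, 1)
```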

The hyperparameters of the ANN model are tuned by MGA-based multi-objective optimization using the following procedure:

**Population Initialization**

A pool of possible values of the learning rate, batch size, and number of epochs for the ANN is initialized. Each hyperparameter acts as a gene, a set of genes forms a chromosome, and the set of chromosomes constitutes the population,

$$C=\left\{{C}_{1},{C}_{2},\dots ,{C}_{n}\right\}$$

4

where each chromosome candidate \({C}_{i}\) can be expressed as,

$${C}_{i}=\{{g}_{l,i},{g}_{b,i},{g}_{e,i}\}$$

5

Here, \({g}_{l,i}\) is the learning-rate gene of candidate \(i\), \({g}_{b,i}\) is the batch-size gene, and \({g}_{e,i}\) is the number-of-epochs gene.
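Population initialization can be sketched as below; the gene ranges and population size are hypothetical values chosen only for illustration.

```python
import random

def init_population(n, rng):
    """Eqs. (4)-(5): each chromosome C_i = {g_l, g_b, g_e}
    (learning rate, batch size, epochs); ranges are illustrative."""
    return [{"lr": rng.uniform(1e-4, 1e-1),
             "batch": rng.choice([16, 32, 64, 128]),
             "epochs": rng.randint(10, 200)}
            for _ in range(n)]

population = init_population(20, random.Random(1))
```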

**Fitness Function**

The fitness function considered here is the training error of the classifier, i.e., the Mean Squared Error (MSE) between the actual and predicted outputs. The multi-objective fitness function proposed in this work can be expressed as,

$${F}_{i}=f\left({C}_{i}\right)=\left\{f\left({g}_{l,i}\right),f\left({g}_{b,i}\right),f\left({g}_{e,i}\right)\right\}$$

6

where,

$$f=\frac{1}{m}\sum _{i=1}^{m}{\left({y}_{i}-{\widehat{y}}_{i}\right)}^{2}$$

7

\({y}_{i}\) is the actual output value,

\({\widehat{y}}_{i}\) is the predicted output value,

\(m\) is the data length.
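The MSE of Eq. (7) is straightforward to compute; the sample outputs below are arbitrary.

```python
def mse(y, y_hat):
    """Eq. (7): mean squared error over m samples."""
    m = len(y)
    return sum((yi - yhi) ** 2 for yi, yhi in zip(y, y_hat)) / m

err = mse([1.0, 0.0, 1.0], [0.9, 0.2, 0.8])
```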

**Selection**

The selection phase chooses candidates to reproduce offspring. The selected candidates of the current generation are arranged in pairs for reproduction, and these candidates pass their genes to the next generation. Selection is performed using the Roulette Wheel technique.
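Roulette Wheel selection can be sketched as follows. Since a lower MSE is better here, each candidate's wheel slice is weighted by the inverse of its fitness; this inverse weighting is an illustrative choice, not a detail specified in this work.

```python
import random

def roulette_select(population, fitness, rng):
    """Pick one parent: lower MSE -> larger slice of the wheel
    (inverse-fitness weighting, an illustrative assumption)."""
    weights = [1.0 / (f + 1e-12) for f in fitness]
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for cand, w in zip(population, weights):
        acc += w
        if acc >= r:
            return cand
    return population[-1]

rng = random.Random(7)
pop = ["A", "B", "C"]
fit = [0.5, 0.05, 0.9]  # candidate B has the lowest training error
parents = [roulette_select(pop, fit, rng) for _ in range(6)]
```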

**Reproduction**

The stage that creates the children is termed the reproduction stage. In this stage the GA applies two variation operators to the parent population: crossover and mutation.

**Crossover**

A crossover point is chosen at random within the genes. The crossover operator then swaps the genetic information of two parents from the current generation to create new candidates representing the offspring: the parents exchange genes up to the crossover point. The newly produced offspring are added to the current population.
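Single-point crossover over the three hyperparameter genes can be sketched as follows; the gene names and their ordering are assumptions for illustration.

```python
import random

def crossover(p1, p2, rng):
    """Single-point crossover over the gene order (lr, batch, epochs):
    genes before the point come from one parent, the rest from the other."""
    keys = ["lr", "batch", "epochs"]
    point = rng.randint(1, len(keys) - 1)  # interior point: 1 or 2
    c1 = {k: (p1[k] if i < point else p2[k]) for i, k in enumerate(keys)}
    c2 = {k: (p2[k] if i < point else p1[k]) for i, k in enumerate(keys)}
    return c1, c2

p1 = {"lr": 0.01, "batch": 32, "epochs": 50}
p2 = {"lr": 0.10, "batch": 64, "epochs": 120}
c1, c2 = crossover(p1, p2, random.Random(3))
```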

**Mutation**

The mutation operator randomly alters genes in the offspring to preserve population diversity; it can be performed by flipping values among the genes of a chromosome. Mutation enhances diversification and helps resolve the issue of premature convergence.
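Mutation can be sketched as a random re-draw of individual genes, one common realization of the operator; the mutation rate and gene ranges below are illustrative assumptions.

```python
import random

def mutate(chrom, rate, rng):
    """With probability `rate` per gene, re-draw that gene at random
    to preserve diversity (rate and ranges are illustrative)."""
    c = dict(chrom)
    if rng.random() < rate:
        c["lr"] = rng.uniform(1e-4, 1e-1)
    if rng.random() < rate:
        c["batch"] = rng.choice([16, 32, 64, 128])
    if rng.random() < rate:
        c["epochs"] = rng.randint(10, 200)
    return c

child = mutate({"lr": 0.01, "batch": 32, "epochs": 50}, 0.1, random.Random(5))
```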

**Termination**

The chromosome that yields the minimum fitness value in the updated generation (the population including the initial and reproduced candidates) is taken as the final best set of hyperparameters (learning rate, batch size, and number of epochs) for the ANN. The algorithm terminates when the fitness value converges and saturates.
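The convergence-and-saturation criterion can be sketched as a stagnation check over the best fitness of recent generations; the window size and tolerance are illustrative assumptions.

```python
def converged(history, window=5, tol=1e-4):
    """Stop when the best MSE has stagnated: its spread over the
    last `window` generations falls below `tol` (illustrative values)."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol

# Best MSE per generation (hypothetical run): improvement then saturation.
best_per_gen = [0.30, 0.12, 0.05, 0.05, 0.05, 0.05, 0.05]
stop = converged(best_per_gen)
```

When `converged` returns `True`, the chromosome that achieved the minimum fitness is returned as the final hyperparameter set.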