Artificial neural networks are powerful tools for establishing the relation between input and output parameters under highly nonlinear conditions. An artificial neural network captures its synaptic weights from the training data set. Here, artificial neural network modeling is used to synthesize the metamaterial unit cell. Among ANN training techniques, backpropagation is a fast learning method that reduces the computer's processing time and provides good results when the relationship between input and output is nonlinear.

The network has six inputs, i.e., the inner length of the outer SRR (l1), the inner length of the inner SRR (l2), the inner width of the outer SRR (w1), the inner width of the inner SRR (w2), the gap of the SRR (g), and the width of the thin wire (p), and one predicted output, the resonant frequency (f).

The neural network architecture is designed in the following steps. The six input values are fed to the neurons of the first layer. The outputs of the input-layer neurons are fed to the hidden-layer neurons; in our case there are 16 neurons in the hidden layer. The hidden-layer neurons map the nonlinearity of the input onto the output, and most of the processing is done there. Every input neuron connects to every hidden-layer neuron, forming a dense network. Each connection is associated with a weight, and these weights are updated at every epoch.

In the output layer, the neuron with the highest value fires and determines the output. Error is calculated at each layer of the neural network, and both forward propagation and backpropagation take place during the training process. Most of the data processing is carried out in the hidden layer, and each hidden neuron also carries a bias term; the bias can be loosely compared to the IQ level of the human brain. The outputs of the hidden neurons are fed to the output stage, where the network produces its estimate of the actual output. The difference between the actual and estimated outputs is the error, and the mean square error (MSE) is the performance criterion of the ANN. To minimize the error, we use the steepest gradient descent optimization method: the network adjusts its weights, and this process is repeated until the minimum error is reached. In every epoch the weights are updated and the output error is reduced; this process is called learning. In our network, the learning method is Levenberg-Marquardt (LM) backpropagation, a fast algorithm that reduces computation time and computational cost.
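The training loop described above (forward pass, MSE, backpropagated gradients, weight update per epoch) can be sketched with plain gradient descent. This is a toy illustration on synthetic data, not the paper's LM optimizer or data set; the smooth target function below simply stands in for simulated resonant frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: 200 samples of the 6 geometric inputs and a
# scalar target (a smooth toy mapping, not real simulation data).
X = rng.uniform(0.1, 5.0, size=(200, 6))
y = np.sin(X.sum(axis=1, keepdims=True))

W1 = rng.standard_normal((6, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.1
b2 = np.zeros(1)
lr = 0.01

errors = []
for epoch in range(500):
    # Forward pass through the 6-16-1 network
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    errors.append(np.mean(err ** 2))    # MSE performance criterion

    # Backpropagation: gradients of the MSE w.r.t. weights and biases
    n = X.shape[0]
    dpred = 2 * err / n
    dW2 = h.T @ dpred
    db2 = dpred.sum(axis=0)
    dh = (dpred @ W2.T) * (1 - h ** 2)  # tanh derivative
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Gradient-descent weight update; one pass over these lines is one epoch
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The recorded MSE shrinks over the epochs, which is the learning behavior the text describes; LM backpropagation follows the same loop but replaces the plain gradient step with a second-order (Jacobian-based) update, which is why it converges faster.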

Figure 3 shows the ANN architecture used to predict the resonant frequency. The detailed parameters of the ANN architecture are as follows:

Number of inputs: 6

Number of outputs: 1

Number of hidden layers: 1

Number of neurons in the hidden layer: 16

Training algorithm: LM backpropagation

Learning function: Gradient descent

Performance measure: Mean square error (MSE)
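As a rough check of this specification, the same topology can be assembled with scikit-learn's MLPRegressor on synthetic stand-in data. Note the hedges: scikit-learn offers no Levenberg-Marquardt solver, so the `lbfgs` solver (another second-order method) substitutes for LM backpropagation here, and the data below is randomly generated, not the paper's simulated SRR data set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic stand-in for the (geometry -> resonant frequency) data set
X = rng.uniform(0.1, 5.0, size=(300, 6))   # columns: l1, l2, w1, w2, g, p
y = np.sin(X.sum(axis=1))                  # placeholder target, not real f

# 6 inputs -> one hidden layer of 16 neurons -> 1 output, trained on MSE.
# 'lbfgs' stands in for LM backpropagation (not available in scikit-learn).
model = MLPRegressor(hidden_layer_sizes=(16,), activation='tanh',
                     solver='lbfgs', max_iter=2000, random_state=0)
model.fit(X, y)
pred = model.predict(X)
```

This mirrors the table above: 6 inputs, 1 hidden layer of 16 neurons, 1 output, MSE as the performance measure.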