CIRM-SNN: Certainty Interval Reset Mechanism Spiking Neuron for Enabling High Accuracy Spiking Neural Network

Spiking neural networks (SNNs), based on sparse, event-driven information processing, offer ultra-low power consumption and hardware friendliness. As a new generation of neural networks, SNNs have attracted wide attention. At present, the most effective way to realize deep SNNs is conversion from a trained artificial neural network (ANN). Compared with the original ANN, the converted SNN suffers a performance loss. This paper adjusts the spike firing rate of spiking neurons to minimize this conversion loss. We map the ANN weights to the corresponding SNN after continuous normalization, which keeps the spike firing rate of the neurons in a normal range. We propose a certainty interval reset mechanism (CIRM), which effectively reduces the loss of membrane potential and avoids the problem of neuronal over-activation. In the experiments, we add a modulation factor to the CIRM to further adjust the spike firing rate of neurons. The accuracy of the converted SNN on CIFAR-10 is 1.026% higher than that of the original ANN. The algorithm not only achieves lossless conversion of the ANN but also reduces the network's energy consumption. It also effectively improves the accuracy of a deep SNN (VGG-15) on CIFAR-100 and reduces the network delay. This work is of significance for developing high-accuracy deep SNNs.


Introduction
Image processing plays an important role in the military, astrophysics, and aerospace fields. High-resolution images contain a great deal of information, and processing them requires a powerful computing platform [1]. Given hardware limitations, developing a hardware-friendly, low-power neuromorphic chip is an effective solution to this problem [2]. SNN differs from the current ANN and is closer to the information processing mode of the brain [3]. As a new generation of neural networks, SNN has a more powerful information processing ability than the traditional ANN based on frequency-coded information [4]. SNN uses discrete spike trains to transfer information instead of analog quantities, which is more suitable for hardware implementation and rapid information processing [5]. Therefore, image processing instruments and systems based on SNN neuromorphic chips should show excellent performance in complex task processing.
The study of SNN is fundamental to the design of neuromorphic chips. At the same time, SNN-based neuromorphic chips require high-performance SNN algorithms [6]. This paper focuses on improving the performance of SNN algorithms. The output of an ANN neuron is an analog value on a continuous interval, while the output and input of neurons in the biological nervous system are discrete spike information [7]. Research shows that information coding and processing mechanisms based on spike trains are very important for the study of brain-like intelligence [8]. Therefore, SNN composed of spiking neurons with biological interpretability has become an indispensable tool for the study of brain-like systems [9].
ANN only processes the image in the spatial dimension, while SNN uses spike trains to represent and process many kinds of information [10]. It integrates multiple dimensions of the input, such as time, space, frequency, and phase. Because SNN processes discrete spike information, it cannot train the network through backpropagation (BP) the way ANN does [11]. Although SNN has advantages in processing perceptual information, the training of its network models has not been unified [12]. SNN has developed rapidly in the past few years, and its model training remains an active area of research. At present, the algorithms used for SNN training are mainly divided into direct training and indirect training. Direct training includes spike-timing-dependent plasticity (STDP) and surrogate gradient learning [13].
STDP is a local, spike-driven weight correction rule; its synaptic connection strength depends only on the spike activity of pre- and postsynaptic neurons [14,15]. In 2000, Song et al. [16] proposed the original STDP according to the Hebb rule and gave the corresponding functional representation. The original STDP has an exponential kernel in its function expression, which is very inefficient to use directly. Simplified and improved STDP algorithms have since been proposed to optimize the network parameters, improving the performance of SNN and reducing the computational cost of the model [17,18]. Inspired by biology, Legenstein et al. [19] proposed RM-STDP based on the dopamine reward mechanism, which achieved behavior-related adaptive changes in a self-organized way in complex SNN [20]. Biological STDP can improve the performance of the network by introducing nonlinearity. The computation of SNN is much less than that of ANN, but SNN is more sensitive to parameter changes [21]. Adaptive threshold methods have been used to reduce the impact of parameters on the network, making SNN easier to apply in practice [22]. STDP is biologically plausible, but its lack of global information hinders the convergence of large models. SNN based on STDP is currently limited to shallow network learning (generally no more than 5 layers), and its accuracy on complex tasks is far lower than that of ANN [23].
The surrogate gradient learning algorithm is derived from ANN and solves the problem that the discontinuous spike function in SNN is non-differentiable [24,25]. The essence of surrogate gradient learning is to use approximately differentiable functions to fit the activation of spiking neurons. In [26-28], the error backpropagation of the models is based on the membrane potential of a single time step, which completely ignores the time dependence of the input data. To solve this problem, and unlike previous spatial backpropagation [1,2], Wu et al. [29] first proposed spatio-temporal backpropagation (STBP) to guide SNN training and achieved state-of-the-art classification accuracy on MNIST and N-MNIST. In 2021, Zheng et al. [30] proposed a threshold-dependent batch normalization method (STBP-tdBN) to solve the problem of exploding and vanishing gradients in SNN. Models based on STBP report better classification accuracy on MNIST, CIFAR-10, and other datasets with lower delay [31]. STBP needs to backpropagate the error of the model at each time step, so the algorithm has a high computational cost during training [24]. Therefore, the direct training of SNN based on surrogate gradient learning is quite complex, which limits its application in deep SNN.
The indirect training method converts an ANN into the corresponding SNN [32]. This method is an effective way to realize deep SNN. Hunsberger and Eliasmith used the biologically more plausible LIF neuron model instead of the earlier IF model for SNN conversion [6,32]. They found that a high firing rate of neurons can produce performance equivalent to that of the original network, while a low firing rate causes a great loss of performance [6]. In 2015, Cao et al. [32] successfully converted a deep SNN with two orders of magnitude lower power consumption than the original network by training and tailoring a CNN. Compared with the original network, the accuracy of the converted SNN on CIFAR-10 was reduced by 1.69%. Han et al. [33] proposed using soft-reset spiking neurons (also called residual membrane potential, RMP), appropriate layer-wise threshold initialization, and constrained ANN training to achieve almost lossless ANN-SNN conversion. The conversion from ANN to SNN causes a loss of network performance, and researchers' work focuses on how to reduce this loss [34].
ANN-SNN conversion has been proven to be an effective way to realize deep SNN. The converted SNN can obtain sufficiently high accuracy on complex tasks [35]. Based on this, we propose a continuous weight normalization (CWN) algorithm for the network to ensure a normal spike firing rate of neurons. The certainty interval reset mechanism (CIRM) proposed in this paper solves the problems of both hard reset and soft reset. By adding a modulation factor (MF) to the CIRM, the spike firing rate of neurons is further adjusted to ensure the performance of the network. For different tasks, the MF only needs to be adjusted appropriately to achieve lossless conversion of the network. By analyzing the spike firing rate, we explain the advantages of our algorithm.
The specific contributions of our work are as follows:
1. We monitor and analyze the spike firing rate of the model and propose a modulation method to adjust it and improve network performance.
2. We propose the CWN to ensure that the spike firing rate of spiking neurons stays in the normal range. Addressing the deficiencies of hard reset and soft reset, our CIRM effectively reduces the membrane potential loss of hard reset and avoids the over-activation problem of soft reset. We add the MF to the CIRM to further adjust the spike firing rate of neurons. The algorithm not only reduces the energy consumption of the model but also reduces the network delay.
3. On MNIST and CIFAR-10, the algorithm in this paper achieves lossless conversion of SNN. The algorithm also has good applicability to a deep network (VGG-15) and CIFAR-100.
The first section introduces the background and development of SNN. The second section introduces the relevant theory of ANN-SNN conversion, on the basis of which the optimization algorithm of the SNN model is proposed. Then, experiments on our proposed algorithm are carried out. Finally, the algorithm is discussed and analyzed in detail.

ANN to SNN
The conversion of ANN to SNN requires a series of operations, including neuron replacement, weight normalization, threshold allocation, and selection of an appropriate reset mechanism [36]. The purpose of these operations is to improve the performance of the SNN model and reduce the performance loss during conversion.

Neuron Model
The spiking neuron is the basic unit of SNN. Its main function is to integrate and transmit the spike train information of the network. The integrate-and-fire (IF) model has long been widely used in the field of neural computing [37]. The IF neuron is selected because it is a simple linear model. The analytical expression of the membrane potential of the IF model can not only be used to study the properties of neurons quantitatively but also to simulate SNN accurately by an event-driven method [38]. The membrane potential of the IF neuron at time t is calculated by Eq. (1):

V_m(t) = V_m(t-1) + Σ_j W_{j,i} S_j(t),    (1)
where V_m(t-1) represents the membrane potential of the neuron at time t-1, W_{j,i} represents the weight of the connection between neuron i of the current layer and neuron j of the previous layer, and S_j represents the spike train of neuron j (if neuron j spikes at time t, S_j(t) = 1; otherwise, S_j(t) = 0). The membrane potential of the neuron at time t is the sum of the membrane potential at time t-1 and the spike input received at time t. When the membrane potential at time t exceeds the threshold V_thr, the neuron emits a spike and its membrane potential is reset [39].
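As a minimal illustration (our own sketch, not the authors' code; v_thr = 1.0 is an arbitrary placeholder), the update of Eq. (1) for a single IF neuron can be written as:

```python
import numpy as np

def if_step(v_prev, weights, spikes_in, v_thr=1.0):
    """One IF time step: integrate weighted input spikes as in Eq. (1);
    a spike is signalled when the membrane potential crosses v_thr."""
    v = v_prev + weights @ spikes_in  # V_m(t) = V_m(t-1) + sum_j W_ji * S_j(t)
    fired = v >= v_thr                # spike condition; the reset rule applied
    return v, fired                   # afterwards is the subject of the next section
```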
The ANN to be converted to SNN is trained with ReLU as the activation function, whose nonlinearity is described by Eq. (2):

y = max(0, Σ_j W_{i,j} x_j + b),    (2)
where y is the output of the ReLU activation function, W_{i,j} is the weight of the connection between neuron i of the current layer and neuron j of the previous layer, x_j is the activation value of neuron j, and b is the bias term [40]. ReLU varies linearly with positive input, which can be approximately simulated with IF neurons. Therefore, when the ANN is converted to SNN, the neuron nodes in the ANN are replaced by IF neurons. To make the conversion more effective, b is usually set to 0.

Reset Mechanism
When the membrane potential exceeds its threshold, the neuron emits a spike, triggering the corresponding reset mechanism. Next, we introduce the most commonly used hard reset and soft reset; on this basis, we propose the certainty interval reset mechanism.

Hard Reset
At present, the hard reset is a common method in SNN research. At a spike instant, the membrane potential is "hard reset" to 0, regardless of how much it exceeds the threshold V_thr. If the membrane potential does not exceed V_thr, its value remains unchanged [41]. Equation (3) shows the hard reset rule:

V_m(t) = 0 if V_m(t) ≥ V_thr; V_m(t) unchanged otherwise.    (3)
Ignoring the residual membrane potential above V_thr disturbs the expected linear relationship between input and output. Suppose the weighted input sums received by an IF neuron (as shown in Fig. 1) in three consecutive time steps are 1.3 V_thr, 1.2 V_thr, and 0.5 V_thr, for a total of 3 V_thr. The neuron would need to generate three spikes to maintain an exact linear relationship between input and output. Because hard reset discards the residual potential above the threshold at the spike instant, the neuron produces only two spikes in the three time steps. The hard reset therefore causes a loss of neuronal membrane potential.
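The lost spike can be reproduced with a small sketch (our illustration, using the 1.3/1.2/0.5 V_thr inputs from the example above and V_thr = 1):

```python
def hard_reset_run(inputs, v_thr=1.0):
    """Simulate an IF neuron with hard reset: V is cleared to 0 on a spike."""
    v, spikes = 0.0, 0
    for x in inputs:
        v += x
        if v >= v_thr:
            spikes += 1
            v = 0.0  # residual potential above V_thr is discarded
    return spikes

print(hard_reset_run([1.3, 1.2, 0.5]))  # -> 2 spikes, although 3 V_thr was injected
```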

Soft Reset
The soft reset mechanism of spiking neurons effectively solves the membrane potential loss of the hard reset [33]. Equation (4) shows its reset rule:

V_m(t) = V_m(t) - V_thr if V_m(t) ≥ V_thr; V_m(t) unchanged otherwise.    (4)
At time t, if the neuronal membrane potential V_m(t) < V_thr, the membrane potential remains unchanged. Similarly, suppose the weighted input sums received by an IF neuron (as shown in Fig. 2) in three consecutive time steps are 1.3 V_thr, 1.2 V_thr, and 0.5 V_thr. Soft reset retains the membrane potential beyond the threshold, and the accumulated potential generates a spike in each of the three time steps (as shown in Fig. 2b). However, if the membrane potential V_m(t) ≥ 2 V_thr at time t, the reset membrane potential is still greater than the threshold V_thr, so the neuron emits a spike at time t + 1 even if it receives no input (as shown in Fig. 2c). When V_m(t) is far greater than 2 V_thr, the soft-reset neuron emits a spike at every time step of the coding time window, resulting in over-activation. In a fixed time window, the spike firing rate of such a neuron differs little whether it receives a large or a small input. A neuron with this over-activation problem loses the ability to distinguish input data, which degrades the performance of the network.
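A corresponding soft-reset sketch (again our illustration) shows both the recovered third spike and the over-activation problem:

```python
def soft_reset_run(inputs, v_thr=1.0):
    """IF neuron with soft reset: V_thr is subtracted on each spike."""
    v, spike_train = 0.0, []
    for x in inputs:
        v += x
        if v >= v_thr:
            spike_train.append(1)
            v -= v_thr  # residual potential is retained
        else:
            spike_train.append(0)
    return spike_train

print(soft_reset_run([1.3, 1.2, 0.5]))  # -> [1, 1, 1]: three spikes, no loss
print(soft_reset_run([3.5, 0.0, 0.0]))  # -> [1, 1, 1]: spikes even with no input
```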

Certainty Interval Reset
To solve the membrane potential loss caused by hard reset and the over-activation caused by soft reset, we propose the CIRM. Biological neurons rest (generate no action potential) without external stimulation and are activated only when stimulated [42]. Based on this, we make the following setting for neurons: when a spiking neuron receives no spike input, it emits no spike. The membrane potential reset of CIRM is given by Eq. (5):

V_m(t) = V_m(t)           if V_m(t) < V_thr,
V_m(t) = V_m(t) - V_thr   if V_thr ≤ V_m(t) < 2 V_thr,    (5)
V_m(t) = 0.99 V_thr       if V_m(t) ≥ 2 V_thr.
Assuming the membrane potential of the neuron at time t is V_m(t), the adjustment under the CIRM is as follows: 1. V_m(t) < V_thr: the membrane potential remains unchanged and S(t) = 0. 2. V_thr ≤ V_m(t) < 2 V_thr: the membrane potential is reset to V_m(t) - V_thr and S(t) = 1; in this case, CIRM solves the membrane potential loss caused by hard reset. 3. V_m(t) ≥ 2 V_thr: the membrane potential is reset to 0.99 V_thr and S(t) = 1. Resetting to 0.99 V_thr effectively reduces the membrane potential loss of hard reset, while ensuring that if the neuron receives no input at time t + 1, its membrane potential V_m(t + 1) = 0.99 V_thr < V_thr and it emits no spike. For example, if the weighted input sum of the neuron at time t is 2.1 V_thr, the neuron emits a spike and resets its membrane potential to 0.99 V_thr; at time t + 1, if the neuron receives no input, it generates no spike, as shown in Fig. 3.
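The three cases can be sketched as follows (our reconstruction of Eq. (5); the 0.99 V_thr reset value comes from the text above):

```python
def cirm_run(inputs, v_thr=1.0):
    """IF neuron with the certainty interval reset mechanism (CIRM)."""
    v, spike_train = 0.0, []
    for x in inputs:
        v += x
        if v < v_thr:            # case 1: below threshold, no spike
            spike_train.append(0)
        elif v < 2 * v_thr:      # case 2: soft-style reset keeps the residual
            spike_train.append(1)
            v -= v_thr
        else:                    # case 3: clamp to 0.99 * V_thr
            spike_train.append(1)
            v = 0.99 * v_thr
    return spike_train

print(cirm_run([2.1, 0.0]))  # -> [1, 0]: one spike, then silence without input
```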

Pooling Operation
The pooling layer generally follows the convolution layer of a convolutional neural network to reduce the size of the convolution output map. Max pooling and average pooling are the two most popular pooling mechanisms. SNN deals with spike train information rather than analog values, and taking the maximum over spike trains would lead to a serious loss of information. Therefore, our pre-trained network uses average pooling as the pooling mechanism [43].

Normalized Weight
The ANN saves its network weights after full training. If the ANN weights are mapped directly to the corresponding SNN without any further operation, network performance degrades greatly [44]. The first possible cause of this loss is weights that are too large relative to the threshold of the spiking neurons, so that a neuron would need to fire multiple spikes in a single time step. The second possible cause is that neurons do not receive enough input, or the threshold is too high, leading to a low spike firing rate; in this case, the model does not generate enough spikes to drive the deeper layers, resulting in an even greater performance loss [33,43]. To minimize the conversion loss, we propose the CWN algorithm. The pseudo-code of the weight normalization method is given in Algorithm 1. The method adjusts the weights twice in succession, according to the maximum activation value and the maximum weight value of the neurons, to avoid the loss of network performance (see the supplementary materials for the detailed CWN procedure). Using the network in this paper, we tested the weight normalization method of Diehl [43]; that normalization leaves our model without enough spikes to drive the deep layers, resulting in insufficient activation of deep neurons. Our proposed CWN algorithm effectively solves this problem (we tested the spike firing rate of neurons for the different methods, as shown in Fig. 10 in Sect. 4.2.2). The CWN method is suitable for the case of short delay and high accuracy, and the results show that it is very effective in improving network performance.
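Since the detailed procedure of Algorithm 1 is deferred to the supplementary materials, the sketch below is only our assumption-laden reading of the stated idea (layer-wise rescaling driven by the maximum activation and maximum weight of each layer, carrying the factor forward), not the authors' exact algorithm:

```python
import numpy as np

def cwn(layer_weights, layer_max_activations):
    """Hypothetical sketch of continuous weight normalization (CWN):
    successive adjustments based on each layer's maximum recorded
    activation and maximum weight magnitude (our assumption)."""
    prev_scale = 1.0
    normalized = []
    for w, a_max in zip(layer_weights, layer_max_activations):
        w_max = np.abs(w).max()        # maximum weight magnitude of this layer
        scale = max(a_max, w_max)      # normalize by whichever is larger
        normalized.append(w * prev_scale / scale)  # carry the previous layer's factor
        prev_scale = scale
    return normalized
```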

Input Encoding for SNN
We use Poisson coding to encode the input images of the network. A Poisson event generation process is used to generate the input spike train [45]. Each time step of the SNN operation is associated with the generation of a random number whose value is compared with the corresponding input amplitude (image pixel value). If the generated random number is smaller than the corresponding pixel intensity, a spike is triggered. Over a sufficiently large time window, the average number of spikes fed to the network is approximately proportional to the magnitude (pixel intensity) of the original ANN input.
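This generation process can be sketched as follows (our illustration; pixel intensities are assumed normalized to [0, 1]):

```python
import numpy as np

def poisson_encode(image, time_window, rng=None):
    """Rate coding: at each time step, a pixel emits a spike when a uniform
    random number falls below its normalized intensity."""
    rng = rng or np.random.default_rng()
    pixels = image.reshape(-1)                     # one value per input neuron
    rand = rng.random((time_window, pixels.size))  # one draw per neuron per step
    return (rand < pixels).astype(np.uint8)        # spike trains, shape (T, N)
```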
The size of the Poisson coding time window is very important for the coding of neural information. The time window cannot be selected arbitrarily but depends on the dynamic characteristics of the stimulus signal. We randomly select 8 samples from CIFAR-10 (as shown in Fig. 4a) to compare the impact of different time windows on image coding. In the experiment, different time windows (blue dashed box; window sizes: 64, 128, 256, 374, 512, 768, and 1024) are used to encode the selected samples. The coding frequency is fixed at 1000 Hz in all our experiments. Figure 4b shows the coding of a sample at a certain time (black dashed box), and Fig. 4c shows the image reconstructed after Poisson coding of the sample (red dashed box). Comparing the reconstructed images, we see that a small Poisson coding time window leads to a large loss of reconstructed image information, while a large coding time window preserves the image information without serious loss. However, the larger the time window, the more computation the network requires and the longer its delay. The algorithm we design should therefore reduce the amount of computation and shorten the network delay while ensuring network performance.

Spike Firing Rate Modulation
The firing rate of spiking neurons is an important factor affecting the performance of SNN.
In this section, we analyze the spike firing rate of neurons from the perspective of reducing the conversion error. On this basis, we give the mathematical expression of the spike firing rate and the theoretical explanation of the conversion error.

Analysis of Neuronal Membrane Potential
A spike firing rate that is too high or too low will hurt the performance of SNN, so an appropriate spike firing rate is very important. Sengupta et al. [46] adjust the spike firing rate of the network by changing the threshold. There are two problems with this method: 1. Increasing the threshold of spiking neurons decreases the spike firing rate, making it even more difficult for inactive neurons to emit spikes. 2. Decreasing the threshold increases the firing rate, but spiking neurons may then be over-activated.
We avoid the above problems by adding a modulation factor η to the CIRM to adjust the neuron spike firing rate. The adjustment by η is given in Eq. (6): when V_m(t) ≥ V_thr, the neuron emits a spike (S(t) = 1) and the membrane potential is reset to

V_m(t) = V_m(t) - η V_thr.    (6)
Assuming that the membrane potential of the neuron at time t is V_m(t), the membrane potential after the MF-modulated reset falls into three cases, as shown in Eq. (7):

V_m(t) = 0                  if V_m(t) - η V_thr < 0,
V_m(t) = V_m(t) - η V_thr   if 0 ≤ V_m(t) - η V_thr < V_thr,    (7)
V_m(t) = 0.99 V_thr         if V_m(t) - η V_thr ≥ V_thr.

The value of the MF affects the membrane potential reset in the following three cases: 1. η > 1 may cause the reset value to become negative; to avoid loss of membrane potential, we then reset V_m(t) to 0. 2. If 0 ≤ V_m(t) < V_thr after the subtraction, V_m(t) remains unchanged. 3. If V_m(t) ≥ V_thr after the subtraction, to avoid over-activation of the neuron at time t + 1, V_m(t) is reset to 0.99 V_thr.
A larger η decreases the spike firing rate of neurons that are easy to activate, while having little effect on neurons that are hard to activate. A smaller η increases the spike firing rate of neurons that are hard to activate, without affecting neurons that are easy to activate. Adding η thus allows the spike firing rate to be adjusted flexibly, avoiding the problems caused by changing the threshold.
The algorithm in this paper does not adjust the threshold of neurons; in the experiments, the threshold is a fixed constant. The modulation factor only adjusts the membrane potential when the neuron is reset, and the adjustment is simple: when the neuron performs the reset operation, the modulation factor (scaled by the threshold) is subtracted from the membrane potential.
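Combining Eq. (6) with the three cases of Eq. (7), the MF-modulated reset can be sketched as follows (our reconstruction; it assumes the subtracted quantity is η scaled by V_thr, as described above):

```python
def cirm_mf_reset(v, eta, v_thr=1.0):
    """Reset rule of CIRM with modulation factor eta, applied after a spike:
    subtract eta * V_thr, then clamp into the certainty interval."""
    v = v - eta * v_thr   # Eq. (6): MF scales the subtracted potential
    if v < 0:             # case 1: eta > 1 can push v below zero
        return 0.0
    if v < v_thr:         # case 2: keep the residual potential
        return v
    return 0.99 * v_thr   # case 3: avoid over-activation at t + 1
```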

Error Analysis
There is an approximately proportional relationship between the firing rate and the activation value of a neuron. Following the relationship between the spike firing rate r_i^1(t) and the activation value a_i^1 of the neurons in the first layer of the network [47], we derive Eq. (8), which gives this relationship under the CIRM:

r_i^1(t) = a_i^1 · r_max - V_i^1(t) / (t · V_thr),    (8)
where r_max represents the maximum spike firing rate, a_i^1 represents the activation value of neuron i in the first layer of the ANN, r_i^1(t) represents the spike firing rate of neuron i in the first layer of the SNN, and V_i^1(t) represents the membrane potential of neuron i in the first layer after reset. The relationship between r_i^1(t) and a_i^1 is thus not strictly proportional; there is an error term V_i^1(t)/(t · V_thr). When we apply the MF (η) to the membrane potential V_i^1(t), the relationship between r_i^1(t) and a_i^1 becomes Eq. (9):

r_i^1(t) = a_i^1 · r_max - V'_i^1(t) / (t · V_thr),    (9)
where V'_i^1(t) represents the membrane potential of the neuron after MF modulation. After adding η, the error term becomes V'_i^1(t)/(t · V_thr). According to the size of V'_i^1(t), there are three cases, corresponding to Eq. (7), and in each case the residual membrane potential, and hence the error term, is reduced. Therefore, the MF (η) can effectively reduce the error of the converted model.

Experiment
In the previous sections, we theoretically derived and analyzed the algorithm. In this section, the algorithm is tested experimentally.

Network Structure
Here, we use a network with the structure shown in Fig. 5 to test our algorithm on MNIST and CIFAR-10.
MNIST is trained for 100 epochs with a batch size of 256 and a learning rate of 1. CIFAR-10 is trained for 240 epochs, also with a batch size of 256; the initial learning rate is 0.1 and is divided by 10 at epochs 81 and 142. All experiments are run on a GPU (RTX 3090).
The dropout parameter is uniformly set to 0.5, and the weight decay and momentum are 0.0001 and 0.9, respectively. All convolution kernels in the network are 3 × 3. All experimental results in this paper are averages over five independent runs of the network. Trained in this way, the network reaches a classification accuracy of 99.39% (deviation (%): +0.03/−0.05) on MNIST and 86.00% (deviation (%): +0.05/−0.02) on CIFAR-10.

Results
Next, the above networks are converted into the corresponding SNNs. We first save the weights in preparation for the conversion. The saved weights are then continuously normalized, and all nodes of the network are replaced with IF neurons (except the output layer). A coding layer is added at the network input to Poisson-encode the input image. In this way, the ANN is converted to an SNN, and we test its performance on the datasets.

MNIST
Based on different coding time windows (sizes: 32, 64, 128, 256, 374, and 512), we test the classification performance of SNN on the MNIST dataset under hard reset, soft reset, and CIRM (the results are shown in Fig. 6). In our experiment, when the coding time window is 32, the accuracy of our proposed CIRM is 99.1%, while the hard reset and soft reset methods reach only 64.89% and 98.79%. When the coding time window is 512, hard reset, soft reset, and CIRM achieve their best performance, with accuracies of 99.38%, 99.27%, and 99.36%, respectively. A small coding time window loses image information, so the model does not have enough spikes to drive the deeper layers; a large coding time window avoids serious information loss, gives the model enough spikes to drive the deeper layers, and achieves better classification. Thus, under all three reset mechanisms, the classification accuracy of SNN increases with the coding time window. However, a larger coding time window greatly increases the computation and delay of the network. Comparing these results, we find that SNN is well configurable: if computing time matters most, the CIRM can be chosen, which brings the network performance to the same level as the ANN with a small time window; if accuracy matters most, a large time window can be used to improve performance.
The CIRM shows good performance in the above experiments. Next, we add the modulation factor η to the CIRM to adjust the spike firing rate of neurons and further improve network performance. The impact of different values of η on network performance is shown in Fig. 7. With a time window of 512 and η = 1.5, the SNN achieves its best classification accuracy of 99.48%, which is 0.09% higher than the original ANN. The experimental results show that our algorithm achieves lossless conversion from ANN to SNN. This may be because the thresholding and discretization introduced by the conversion reduce possible overfitting of the ANN, so the accuracy of the converted SNN is slightly higher than that of the ANN [48]. Choosing an appropriate η not only improves the performance but also reduces the delay of the network.
Table 1 compares the best performance of SNN under the different reset mechanisms and also compares it with previous work (note that the network topologies used in other works differ from ours). All accuracy results in this paper are averages over 5 independent experiments. We calculated the deviation of each experimental result from the average; the maximum and minimum deviations are shown in the table (the accuracy deviation due to training the model is very small compared with the accuracy improvement of the algorithm). The comparison shows that adding the MF to the CIRM effectively improves the performance of SNN.

CIFAR-10
As in the above experiments, we tested the classification performance of SNN on CIFAR-10 under hard reset, soft reset, and CIRM (time window sizes: 64, 128, 256, 374, 512, 768, 1024, and 2048). The results are shown in Fig. 8. Under the three reset mechanisms, the accuracy again increases with the coding time window. When the coding time window is 2048, hard reset, soft reset, and CIRM achieve their best performance, with accuracies of 86.49%, 86.63%, and 86.13%, respectively. Compared with the ANN, all three reset mechanisms achieve lossless conversion, which shows that our weight normalization method is effective for network conversion and has good applicability. Next, we add η to the CIRM to adjust the spike firing rate of neurons; the impact of different values of η on network performance is shown in Fig. 9. With an input coding time window of 2048 and η = 1.5, the classification accuracy of the SNN reaches 87.01%; with η = 1.1, the SNN performs best, with a classification accuracy of 87.03%, which is 1.03% higher than the original ANN. The optimal MF of the network is η = 1.5 on MNIST and η = 1.1 on CIFAR-10, so the optimal modulation factor differs across datasets. The experimental results show that the best η can be selected flexibly for different tasks to improve the performance of SNN.
The focus of research on ANN-to-SNN conversion is to reduce the conversion loss. The CWN algorithm proposed in this paper performs excellently with all three reset methods: it adjusts the spike firing rate of neurons into an appropriate range so that the classification accuracy of the converted SNN exceeds that of the original ANN. In a four-layer fully connected network, the data-based normalization algorithm has achieved good results on MNIST [43]. We compare the spike firing rate of that algorithm with our CWN algorithm at each neuron layer of the network (time window: 256, neuron reset mode: hard reset), as shown in Fig. 10.
As can be seen from the figure, when we do not process the ANN weights and convert the network directly, the neurons in the early layers of the network are over-activated while the neurons in the later layers are under-activated. We also tested the data-based normalization algorithm [43]; it causes insufficient activation of neurons, and the figure shows that it cannot drive the deep layers of the network. Both over-activation and under-activation of neurons cause a loss of SNN performance; the spike firing rate of neurons is the fundamental factor affecting SNN performance. The spike firing rate of each network layer under CWN is better than under the above two methods. The CWN algorithm proposed in this paper keeps the neuron spike firing rate in the normal range, so the performance of the converted SNN is not reduced.
Our CWN algorithm makes a first adjustment to the spike firing rate to ensure normal firing of the neurons. Addressing the shortcomings of hard reset and soft reset, we propose the CIRM, and the spike firing rate is further adjusted by adding the MF to the CIRM. The spike firing rate of neurons in each layer of the SNN with the MF added to the CIRM is shown in Fig. 11. The CIRM resets the membrane potential of spiking neurons to a reasonable range, which reduces the membrane potential loss of hard reset and solves the over-activation problem of soft reset; the MF then further adjusts the reset membrane potential, and its value can be chosen according to the task. We experimentally select MF = 1.1 to achieve the optimal performance of SNN (time window: 2048, CIFAR-10), improving the accuracy by 1.026% over the ANN. An appropriately valued MF keeps the spike firing rate in a better range, improving the performance of SNN.
In the experiments, we also tested converting the ANN directly without the CWN algorithm. The accuracy of the converted SNN (soft reset) on MNIST and CIFAR-10 is then lower than 10%: this causes a serious loss of SNN performance and the conversion fails. Table 2 compares the best performance of SNN on CIFAR-10 under the different reset mechanisms and also compares it with previous work (again, the network topologies of other works differ from the model in this paper). The comparison shows that previous SNN conversion algorithms incur a loss of network performance, whereas the classification performance of SNN based on our algorithm improves on the original network. Our algorithm therefore has advantages in SNN conversion.

Average Spike Firing Rate
SNN processes information, and hence consumes energy, only when a spike occurs. This event-driven characteristic of SNN improves the energy efficiency of neuromorphic networks. We propose the average spike firing rate (R_ASR) to indirectly reflect the energy consumption of the SNN model; it is defined in Eq. (10).
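Since Eq. (10) is not reproduced above, the sketch below computes what we assume R_ASR to be: the total spike count normalized by the number of neurons and time steps (our assumption, not the paper's verbatim definition):

```python
import numpy as np

def average_spike_firing_rate(spike_record):
    """Assumed form of R_ASR: the fraction of (time step, neuron) pairs
    that carried a spike; spike_record has shape (T, N) with 0/1 entries."""
    t_steps, n_neurons = spike_record.shape
    return spike_record.sum() / (t_steps * n_neurons)
```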
We plot the average firing rate of the different reset mechanisms in Fig. 12. From the resulting graph, it can be seen that the average firing rate (∼1%) yields a disproportionate benefit as the inference time steps double. Although the energy consumption of the hard reset model is low, its accuracy cannot meet the requirements. The soft reset mechanism solves the membrane potential loss of hard reset and improves the accuracy of the network, but the average firing rate increases significantly, meaning the energy consumption of the soft reset model is higher than that of hard reset. Our CIRM reduces the membrane potential loss of hard reset while lowering the energy consumption relative to soft reset.
As can be seen from Fig. 12, the average firing rate of CIRM is not much smaller than that of soft reset. To further reduce the energy consumption of the model while maintaining accuracy, we add the MF to the CIRM; the average firing rate curves for different modulation factors are also drawn in Fig. 12. With MF = 1.1, the SNN performs best (time window: 2048), with accuracy 1.026% higher than the ANN, and the energy consumption of the model is further reduced compared with plain CIRM. As the MF increases, the average firing rate of the model gradually decreases (and with it the energy consumption), but the accuracy can no longer meet the requirements. Adding an appropriate MF to the CIRM thus improves accuracy and reduces energy consumption at the same time.

CIFAR-100
The above sections empirically analyze and optimize the proposed algorithm on CIFAR-10. We now apply the same conclusions and settings to CIFAR-100 with a deeper network (VGG-15). To verify the effectiveness and generalization of our algorithm, the VGG-15 and training settings used in this experiment are consistent with Lu [53]. CIFAR-100 is trained for 200 epochs with a batch size of 256; the initial learning rate is 0.05 and is divided by 10 at epochs 81 and 122. All experiments are run on a GPU. The dropout parameter is uniformly set to 0.1, and the weight decay and momentum are 0.0001 and 0.9, respectively. The classification accuracy of VGG-15 on the CIFAR-100 test set is 64.9% (deviation (%): +0.01/−0.03).
Next, we convert the trained VGG-15 to the corresponding SNN and test its performance on CIFAR-100. The test results of SNN models based on the different algorithms are shown in Fig. 13. From the test curves, we obtain the performance comparison in Table 3; the models compared in the table use the same network topology. Compared with the hard reset and soft reset methods, the SNN (VGG-15) model based on the CWN and CIRM algorithms not only improves the network accuracy but also reduces the network delay. Adding the MF to this model further improves performance: with MF = 1.5, the classification accuracy reaches 64.63%, only 0.27% below the original ANN. Compared with the other methods, the accuracy loss is the smallest and the required inference time steps are the fewest. CIRM and MF therefore also apply well to deep networks and large datasets.
We propose a conversion algorithm that reduces energy consumption. Our algorithm achieves lossless conversion of models with sparser event-driven operations, so the energy consumption of the model decreases while the performance of the SNN still meets the requirements. The algorithm in this paper is limited in how much it improves the accuracy of the model, but its improvement in the energy consumption of the model is obvious. Compared with soft reset, our algorithm achieves lossless model conversion with low energy consumption.

Conclusion
In this work, we propose a continuous weight normalization technique for ANN-to-SNN conversion. This technique ensures a normal spike firing rate of the spiking neurons, so there is almost no loss of network performance after conversion. We also propose a certainty interval reset mechanism for the membrane potential, which reduces the membrane potential loss of hard reset and solves the over-activation problem of soft reset. Within this reset method, we introduce a modulation factor to further regulate the spike firing rate of the neurons. For different tasks, lossless conversion of the network can be realized by appropriately adjusting the modulation factor. In our experiments, the performance of the network with modulation factor on MNIST and CIFAR-10 exceeds that of the original ANN. Moreover, the proposed algorithm is also practical for deep networks and large datasets.
In follow-up work, we will develop a general SNN algorithm in close combination with the design requirements of SNN neuromorphic chips. Image processing systems and instruments based on SNN have great advantages in both speed and power consumption.

Fig. 1 Hard reset mechanism of neuronal membrane potential

Fig. 2 Soft reset mechanism of neuronal membrane potential

Fig. 4 Poisson coding of an image. Different time windows (blue dashed box) are selected to encode the samples. a Original images. b The coding of a sample at a certain time (black dashed box). c The image reconstructed after Poisson coding of the sample (red dashed box). (Color figure online)

Fig. 5 Network structure. The network includes one input coding layer, six convolution layers, three pooling layers, and two fully connected layers. C^m_{i,j} indicates that the i-th convolution layer has m neurons and stride j. P^{n×n}_{i,j} indicates that the i-th pooling layer has stride j and kernel size n × n

Fig. 8 The classification performance of SNN on CIFAR-10 under hard reset, soft reset and CIRM respectively

Fig. 13 The test results of SNN models based on different algorithms on CIFAR-100

Table 3 SNN (VGG-15) performance comparison (CIFAR-100)