An Improved Resampling Particle Filter Algorithm Based on Digital Twin

The problem of weight degradation is inevitable in particle filtering algorithms, and resampling is an important method for reducing the particle degradation phenomenon. To solve the problem of particle diversity loss in existing resampling methods, this paper proposes a new digital twin-based resampling algorithm, built on the traditional resampling algorithm, to improve the accuracy of particle filter estimation. The digital twin-based resampling algorithm continuously improves the resampling process through data interaction between the data model and the physical model, and provides a real-time particle weight correction capability that traditional resampling methods lack. The calibration rules of the new algorithm are divided according to the size of the particle weights: particles with large weights are retained, and particles with small weights are selectively processed. Compared with the traditional resampling algorithms, the new resampling algorithm reduces the mean square error of the particle filter estimates by 16.62%, 16.49%, and 13.86%, and improves the computing speed by 7.67%, 2.25%, and 7.54%, respectively, in simulation experiments on a nonlinear system with a univariate nonstationary growth model. The algorithm is experimentally demonstrated to accurately track a moving person in an indoor building in a non-rigid target tracking application, which illustrates the effectiveness and reasonableness of the digital twin-based resampling algorithm.


Introduction
With the development of information technology, multi-sensor information systems have been applied in many fields. Taking the digital twin (DT) as an example, realizing a system's real-time online response, a visual representation based on online data, and the analysis and prediction of data for assisted decision-making requires combining sensors with different characteristics, such as radar, photoelectric sensors, image sensors, and acoustic sensors, to complete the organic fusion of multifaceted information.
The digital twin technical specification is still evolving, but a digital twin essentially consists of a regulated system and a model that replicates the behavior of that system under multi-sensor information [1]. The real-time data streams generated by the system during continuous operation feed the model, so the model can be simulated in real time and its parameters adjusted to improve operational efficiency [2]. The model of the system can be either data-based or physics-based. A common approach to model calibration starts with a physics-based model and incorporates data to calibrate it, ensuring that it stays as close to reality as possible within the range of available data. Manual calibration is time-consuming, and the dynamic change of model parameters inflates the dimensionality of the parameter space and widens the discrepancy between the model output and the data; digital twin technology can automate the calibration of the system model, which helps solve these problems.
A particle filter is essentially a recursive Bayesian estimation method based on Monte Carlo simulation, in which the posterior probability density function required by a real problem is represented by a set of random samples of the system state with associated weights; this set of samples is the so-called particles.
Monte Carlo methods were proposed for application in statistics in the 1950s [3,4] and were further applied to practical problems through sequential importance sampling (SIS). In the following years, Handschin et al. introduced the method to the field of automatic control [5,6], and Zaritskii, Akashi, et al. extended it to other fields [7,8]. Due to the limited computational power of computing devices at that time and the complexity and degradation of the SIS computation, the method suffered from inefficiency, long computation times, and inaccurate results. In 1993, Gordon et al. proposed a Bayesian bootstrap filtering method [9] that introduced a resampling step on top of sequential importance sampling, reducing the effect of particle degradation and producing the SIR particle filter algorithm. The name particle filter (PF) was formally proposed in the literature [10]. Subsequently, Monte Carlo methods have been developed in various fields, including bootstrap, survival of the fittest, condensation, sequential Monte Carlo methods, and the regularized particle filter [11], which are now collectively known as particle filtering [12,13].
Regarding resampling algorithms for mitigating the particle degradation problem, Kong et al. showed that the variance of the importance weights in particle filtering increases with time [14]: as the iterations proceed, a small number of particles acquire significant weights while the weights of most remaining particles become almost negligible, a phenomenon called weight degradation. The purpose of resampling is to replace the particles with small weights by duplicating the particles with large weights. Representative resampling algorithms include Multinomial Resampling [15], Stratified Resampling [16], Residual Resampling [17], Systematic Resampling [18], and Parallel Resampling [19]. However, resampling destroys the parallelism of the sequential importance sampling algorithm, frequent resampling reduces robustness to outliers in the measured data, and the resampled particles are no longer statistically independent of each other, introducing additional variance into the estimation results. Further resampling methods include improved intelligent particle filtering resampling strategies based on genetic algorithms and approaches that address particle impoverishment by fusing the cumulative distribution function with regularized resampling steps [20,21]. Since the resampled particle set contains multiple duplicate particles, the resampling process may lead to a loss of particle diversity, so none of the above resampling algorithms can fundamentally solve the particle weight degradation problem.
Since the weight degradation problem in particle filters is inevitable, and to overcome the problems that resampling introduces while alleviating it, this paper proposes a new resampling algorithm that effectively trades off between increasing particle diversity and reducing the number of small-weight particles. The algorithm is based on digital twin technology: the particle filtering process is modeled as a digital twin, the particle data set is mapped in real time to the twin system through data interaction to obtain the particle weight state information, and particles are selectively preprocessed to mitigate the local optimal solution problem.
The new resampling algorithm provides more accurate results than the existing resampling algorithms. At the end of the paper, experimental data and simulation results are provided to further illustrate the effectiveness and accuracy of the algorithm.
The paper is structured as follows. Section 2 reviews the basic concepts and theoretical foundations of the particle filtering algorithm and points out the role that the resampling algorithm plays in it. Section 3 examines four selected resampling algorithms, points out their shortcomings and respective characteristics using numerical examples, then proposes a new improved resampling algorithm based on the digital twin and explains its method and procedure. Section 4 illustrates the effectiveness and rationality of the method through simulation experiments. Section 5 gives examples of practical applications of the new resampling algorithm in target tracking. Section 6 summarizes the entire text.

The theoretical basis of the particle filter algorithm
A particle filter is a method for achieving recursive Bayesian estimation through nonparametric Monte Carlo simulation for any state space model, including nonlinear models with non-Gaussian noise, with an accuracy that approximates the optimal estimate. To introduce the particle filtering algorithm, Bayesian estimation theory must be analyzed first.

Bayesian estimation theory
The state space model is a time-domain model describing the evolution of a dynamic system. The state transition equation and the observation equation of a nonlinear dynamic system are usually defined as:

x_k = f(x_{k-1}, u_{k-1})    (1)

z_k = h(x_k, v_k)    (2)

where x_k denotes the state at moment k and z_k is the observed value at moment k. f(·) and h(·) can be linear or nonlinear, representing the state transition function and the observation function, respectively. u_{k-1} is the process noise in the state transition process, and v_k is the observation noise in the observation process. u_{k-1} and v_k are independent of each other.
In this paper, we use X_k = x_{0:k} = {x_0, x_1, ..., x_k} to denote the system state values from moment 0 to k, and Z_k = z_{0:k} = {z_0, z_1, ..., z_k} to denote the system observation values from moment 0 to k. The observation z_k is determined only by the state x_k and is independent of the other state values.
Bayesian recursive estimation includes two steps, prediction and update, which can be expressed as follows:

(1) Prediction: If the state sequence follows a first-order Markov process, then p(x_k | Z_{k-1}) is obtained from p(x_{k-1} | Z_{k-1}). They satisfy the following relationship:

p(x_k, x_{k-1} | Z_{k-1}) = p(x_k | x_{k-1}, Z_{k-1}) p(x_{k-1} | Z_{k-1})    (3)

When the state x_{k-1} is known, the state x_k and the observations Z_{k-1} are independent of each other, so from Eq. (3) we can obtain:

p(x_k, x_{k-1} | Z_{k-1}) = p(x_k | x_{k-1}) p(x_{k-1} | Z_{k-1})    (4)

By integrating both sides of Eq. (4) over x_{k-1}, we obtain the Chapman-Kolmogorov equation:

p(x_k | Z_{k-1}) = ∫ p(x_k | x_{k-1}) p(x_{k-1} | Z_{k-1}) dx_{k-1}    (5)

(2) Update: Obtain p(x_k | Z_k) from p(x_k | Z_{k-1}).
According to Bayes' theorem, the posterior probability at moment k is as follows:

p(x_k | Z_k) = p(Z_k | x_k) p(x_k) / p(Z_k)    (6)

Treating z_k as a new observation separated out from Z_k, i.e.:

p(Z_k | x_k) = p(z_k | x_k) p(Z_{k-1} | x_k)    (7)

Thus, we get:

p(Z_k) = p(z_k | Z_{k-1}) p(Z_{k-1})    (8)

Bringing Eq. (7) and Eq. (8) into Eq. (6), and considering that the current moment's observation is independent of the previous moments' observations, we can obtain:

p(x_k | Z_k) = p(z_k | x_k) p(x_k | Z_{k-1}) / p(z_k | Z_{k-1})    (9)

where

p(z_k | Z_{k-1}) = ∫ p(z_k | x_k) p(x_k | Z_{k-1}) dx_k

Thus, the essence of the Bayesian problem can be summarized as follows: at each moment k, the posterior probability density function p(x_k | Z_k) of the state x_k is obtained using the actual measurements Z_k = {z_0, z_1, ..., z_k}. The state estimate x̂_k is then:

x̂_k = E[x_k | Z_k] = ∫ x_k p(x_k | Z_k) dx_k    (10)

Eqs. (5), (9), and (10) constitute the general form of Bayesian estimation. While Bayesian estimation theory provides a clear and complete set of recursive estimation methods for state-optimal estimation problems, the method requires integral operations.

Particle filter algorithm based on Monte Carlo simulation
Monte Carlo simulation treats the integral operation as the mathematical expectation of a random variable, and converts the integral into a sample expectation by drawing a series of samples that obey the posterior probability distribution of the state. Assuming that N samples {x_k^i}_{i=1}^N are drawn independently from the posterior probability distribution p(x_k | z_{0:k}), the posterior probability density of the state can be approximated as follows:

p̂(x_k | z_{0:k}) = (1/N) Σ_{i=1}^N δ(x_k − x_k^i)    (11)

where δ(·) is the Dirac function and p̂(x_k | z_{0:k}) denotes the approximation of the posterior probability density function p(x_k | z_{0:k}). The estimated value of the state in Eq. (10) can then be approximated as follows:

x̂_k ≈ (1/N) Σ_{i=1}^N x_k^i    (12)

Furthermore, for any function g(x_k) of the state x_k, its mathematical expectation can be expressed as follows:

E[g(x_k)] ≈ (1/N) Σ_{i=1}^N g(x_k^i)    (13)

Bayesian estimation theory and the Monte Carlo integration idea constitute the basic mathematical framework of the particle filter algorithm, as shown in Fig. 1. According to the law of large numbers, as the number of samples N increases, the sample expectation Ê(x_k) gets closer to the true expectation of the state E(x_k).
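As a concrete illustration of the Monte Carlo idea above, the following minimal Python sketch approximates an expectation by the sample mean. The Gaussian stand-in for the posterior p(x_k | z_{0:k}) is an illustrative assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_expectation(g, samples):
    """Approximate E[g(x)] by the sample mean (law of large numbers)."""
    return float(np.mean(g(samples)))

N = 200_000
# Stand-in posterior: N(2, 1). Any distribution we can sample from works.
samples = rng.normal(loc=2.0, scale=1.0, size=N)

est_mean = mc_expectation(lambda x: x, samples)       # approximates E[x] = 2
est_second = mc_expectation(lambda x: x**2, samples)  # approximates E[x^2] = 5
```

As N grows, both estimates converge to the true moments, which is exactly the convergence property invoked in the text.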

Bayesian importance sampling
The particle filter algorithm based on Monte Carlo simulation approximates the posterior probability density of a state by a set of discrete samples and replaces the integral operation with the expectation of the samples.However, there are difficulties in the implementation of this method, because the state posterior probability density function p (x k |z 0:k ) is usually difficult to sample directly [22].The difficulty of computing mathematical expectations needs to be solved by Bayesian importance sampling.
Introducing a known, easily sampled probability density function q(x_{0:k} | z_{0:k}), the mathematical expectation E(x_k) of x_k can be written as:

E(x_k) = ∫ x_k [p(x_{0:k} | z_{0:k}) / q(x_{0:k} | z_{0:k})] q(x_{0:k} | z_{0:k}) dx_{0:k}    (14)

Let w_k(x_{0:k}) be the unnormalized importance weight:

w_k(x_{0:k}) = p(x_{0:k} | z_{0:k}) / q(x_{0:k} | z_{0:k})    (15)

Substituting Eq. (15) into Eq. (14) gives:

E(x_k) = ∫ x_k w_k(x_{0:k}) q(x_{0:k} | z_{0:k}) dx_{0:k}    (16)

Since q(x_{0:k} | z_{0:k}) is a probability density function, Eq. (16) can be expressed as follows:

E(x_k) = E_q[x_k w_k(x_{0:k})]    (17)

where E_q(·) represents the expected value computed under the probability density function q(x_{0:k} | z_{0:k}). Using Eq. (17), a set of N samples {x_k^i}_{i=1}^N can be drawn from q(x_{0:k} | z_{0:k}) and the expected value of the state calculated indirectly:

E(x_k) ≈ Σ_{i=1}^N w̃_k^i x_k^i    (18)

where

w̃_k^i = w_k^i / Σ_{j=1}^N w_k^j

As can be seen from Eq. (18), importance sampling solves the integral operation problem in Bayesian estimation and makes the solution process easier to implement.
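A small numerical illustration of Eqs. (14)-(18): samples are drawn from an easy proposal q and reweighted by w = p/q to estimate an expectation under p. The particular densities (target N(1, 0.5^2), proposal N(0, 1)) are illustrative assumptions chosen only to make the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

def norm_pdf(x, mu, sigma):
    """Gaussian density, used here for both the target p and the proposal q."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

N = 200_000
x = rng.normal(0.0, 1.0, size=N)                   # samples from the proposal q
w = norm_pdf(x, 1.0, 0.5) / norm_pdf(x, 0.0, 1.0)  # importance weights, Eq. (15)
w_norm = w / w.sum()                               # normalized weights
est = float(np.sum(w_norm * x))                    # estimate of E_p[x], Eq. (18)
```

The weighted sum recovers the target mean (close to 1) even though no sample was ever drawn from p itself.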

Sequential importance sampling
Although the importance sampling approach solves the problem that the posterior probability density of the states p(x_{0:k} | z_{0:k}) is difficult to sample, p(x_{0:k} | z_{0:k}) contains observations from all past moments, and the computational effort grows as the number of observations increases.
To control the amount of computation at each step, the sequential importance sampling (SIS) method was proposed. Its essence is that the posterior probability density of the desired state is expressed by the weighted sum of a series of random samples, from which the estimated value of the state is obtained. The importance density function is written in the following recursive form:

q(x_{0:k} | z_{0:k}) = q(x_{0:k-1} | z_{0:k-1}) q(x_k | x_{0:k-1}, z_{0:k})    (19)

If the posterior probability density p(x_{0:k-1} | z_{0:k-1}) at moment k − 1 is known, we can get the posterior probability density p(x_{0:k} | z_{0:k}), i.e.:

p(x_{0:k} | z_{0:k}) ∝ p(z_k | x_k) p(x_k | x_{k-1}) p(x_{0:k-1} | z_{0:k-1})    (20)

According to Eq. (15), we can obtain:

w_k^i = p(x_{0:k}^i | z_{0:k}) / q(x_{0:k}^i | z_{0:k})    (21)

Substituting Eq. (19) and Eq. (20) into Eq. (21), we can obtain:

w_k^i ∝ w_{k-1}^i · p(z_k | x_k^i) p(x_k^i | x_{k-1}^i) / q(x_k^i | x_{0:k-1}^i, z_{0:k})    (22)

In practical applications, q(x_k^i | x_{0:k-1}^i, z_{0:k}) = p(x_k^i | x_{k-1}^i) is usually chosen, at which point the recursive formula for the importance weights simplifies to:

w_k^i ∝ w_{k-1}^i p(z_k | x_k^i)    (23)
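One SIS step under the common choice q = p(x_k | x_{k-1}) can be sketched as follows, so that the weight update reduces to w_k ∝ w_{k-1} · p(z_k | x_k) as in Eq. (23). The scalar model (f, h, Gaussian noise levels) is a placeholder assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def sis_step(particles, weights, z, f, h, q_std, r_std):
    """One SIS step with the transition prior as proposal (Eq. (23))."""
    # Propagate each particle through the state equation plus process noise.
    particles = f(particles) + rng.normal(0.0, q_std, size=particles.shape)
    # Multiply the old weights by the (Gaussian) likelihood p(z_k | x_k).
    likelihood = np.exp(-0.5 * ((z - h(particles)) / r_std) ** 2)
    weights = weights * likelihood
    weights = weights / weights.sum()  # renormalize
    return particles, weights

N = 1000
particles = rng.normal(0.0, 1.0, size=N)
weights = np.full(N, 1.0 / N)
particles, weights = sis_step(particles, weights, z=0.5,
                              f=lambda x: x, h=lambda x: x,
                              q_std=0.5, r_std=0.5)
```

Repeating this step is all SIS does; the weight polarization discussed next emerges after many such iterations.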

Particle degradation and resampling strategies
After several iterations of the SIS algorithm, the weights of a few particles become large while the weights of the remaining particles become smaller and smaller, producing a serious polarization of the weights. This phenomenon is known as particle degeneracy.
The effective particle number is usually used to measure the degree of weight degradation, and it is defined as:

N_eff = N / (1 + var(w_k^{*i}))    (24)

where w_k^{*i} denotes the true weight of the corresponding particle and var(w_k^{*i}) denotes the variance of the particle weights. It is difficult to compute the value of N_eff exactly in practical applications, so its approximate estimate is usually used:

N̂_eff = 1 / Σ_{i=1}^N (w̃_k^i)^2    (25)

where w̃_k^i is the normalized weight. The smaller N_eff is, the larger the variance of the particle weights. The ideal case is w̃_k^i = 1/N with identical normalized weights, for which N_eff = N. The extreme undesirable case is when one weight is 1 and the rest are zero, for which N_eff = 1. Usually, a threshold N_thresh is set, and when N_eff falls below N_thresh, a resampling policy is applied.
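The approximate effective particle number of Eq. (25) is a one-line computation on the normalized weights; the sketch below shows it together with its two limiting cases.

```python
import numpy as np

def effective_sample_size(weights):
    """Approximate N_eff = 1 / sum(w_i^2) for normalized weights, Eq. (25)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()            # normalize, as Eq. (25) assumes
    return 1.0 / np.sum(w ** 2)

uniform = effective_sample_size([0.25, 0.25, 0.25, 0.25])   # ideal case: N = 4
degenerate = effective_sample_size([1.0, 0.0, 0.0, 0.0])    # worst case: 1
```

In a filter loop this value would be compared against a threshold N_thresh to decide whether to trigger resampling.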
The resampling process is shown in Fig. 2. In Fig. 2, the dots with uneven sizes represent the particles before resampling, and the dots with uniform sizes represent the particles after resampling.The diameter of the dots is proportional to the particle weight.
Bayesian estimation, Monte Carlo integration, sequential importance sampling, and resampling constitute the complete standard particle filter algorithm. Fig. 3 is a schematic diagram of the standard sampling importance resampling (SIR) particle filter algorithm.
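The four building blocks just listed can be combined into a compact SIR loop. This is a minimal sketch, not the paper's implementation: the Gaussian likelihood and the linear test model at the bottom are assumptions made only so the example runs end to end.

```python
import numpy as np

def sir_particle_filter(z_seq, f, h, q_std, r_std, x0, N, rng):
    """Sketch of the standard SIR loop: predict, weight, estimate, resample."""
    particles = np.full(N, float(x0))
    estimates = []
    for z in z_seq:
        # Predict: propagate through the state equation plus process noise.
        particles = f(particles) + rng.normal(0.0, q_std, size=N)
        # Update: weight by a Gaussian likelihood (assumption), then normalize.
        w = np.exp(-0.5 * ((z - h(particles)) / r_std) ** 2) + 1e-300
        w /= w.sum()
        # Estimate: weighted posterior mean of the state.
        estimates.append(float(np.sum(w * particles)))
        # Systematic resampling with one shared random offset.
        c = np.cumsum(w)
        c[-1] = 1.0                                   # guard against round-off
        positions = (np.arange(N) + rng.uniform()) / N
        particles = particles[np.searchsorted(c, positions)]
    return np.array(estimates)

# Illustrative use: track a constant state of 5 through a linear model.
rng = np.random.default_rng(3)
est = sir_particle_filter(z_seq=np.full(30, 5.0),
                          f=lambda x: x, h=lambda x: x,
                          q_std=0.5, r_std=0.5, x0=0.0, N=2000, rng=rng)
```

Starting far from the truth, the estimate climbs toward the observed level within a few steps and then stays there, which is the qualitative behavior Fig. 3 describes.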

Comparison of basic resampling algorithms
In particle filter, the basic resampling algorithms include four types: Multinomial Resampling, System Resampling, Stratified Resampling, and Residual Resampling.
(1) Multinomial Resampling

Multinomial Resampling was proposed by Gordon and laid the foundation for general-purpose resampling algorithms. J. D. Hol optimized the generation of the random numbers and achieved O(N) complexity [23]. First, N random numbers are drawn independently from U(0, 1] and sorted. Then each particle x_k^i is copied n_i times, where n_i equals the number of u_j that fall in the interval (Σ_{s=1}^{i−1} w̃_k^s, Σ_{s=1}^{i} w̃_k^s]. The N ordered uniformly distributed random numbers are extracted as follows:

u_N = ũ_N^{1/N},  u_j = u_{j+1} ũ_j^{1/j},  j = N − 1, ..., 1    (26)

where ũ_j is a random number seed obeying a uniform distribution, ũ_j ~ U(0, 1], and the generated set of random numbers {u_j}_{j=1}^N is ordered on the interval (0, 1].

(2) Stratified Resampling

The Stratified Resampling algorithm was proposed by Carpenter [10], and mainly uses the idea of stratified statistics.
Assuming that the total number of sampled particles is N, the Stratified Resampling algorithm divides the whole interval (0, 1] into N equal, independent subintervals ((j − 1)/N, j/N], j = 1, ..., N, where the jth subinterval is the jth stratum. A random number is drawn within each stratum according to Eq. (27). Then each particle x_k^i is copied n_i times, where n_i equals the number of u_j that fall in the interval (Σ_{s=1}^{i−1} w̃_k^s, Σ_{s=1}^{i} w̃_k^s]:

u_j = (j − 1 + ũ_j) / N    (27)

In Eq. (27), ũ_j is a random number that obeys a uniform distribution on the interval (0, 1].

(3) System Resampling

The System Resampling algorithm was proposed by Kitagawa in 1996 [24]. It is the same as Stratified Resampling in terms of sample space processing and random number generation: the interval (0, 1] is divided into N equally spaced strata, and the random numbers are obtained from Eq. (27), except that a single draw ũ is shared by all strata.
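The two uses of Eq. (27) can be contrasted in a few lines: Stratified Resampling draws an independent uniform number in every stratum, while System Resampling reuses one uniform draw for all strata, so consecutive points differ by exactly 1/N. This is an illustrative sketch of the point generation only, not of a full resampler.

```python
import numpy as np

rng = np.random.default_rng(4)

def stratified_points(N):
    # Stratified: u_j = (j - 1 + u~_j) / N, one independent draw per stratum.
    return (np.arange(N) + rng.uniform(size=N)) / N

def systematic_points(N):
    # Systematic: u_j = (j - 1 + u~) / N, a single draw shared by all strata.
    return (np.arange(N) + rng.uniform()) / N

s = stratified_points(8)
y = systematic_points(8)
```

Both constructions keep exactly one point per stratum; only the dependence between the points differs.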
The pseudo-code of the System Resampling algorithm is as follows:

Algorithm 1 System Resampling algorithm
1: Generate a random number r ~ U[0, 1/N] (assign a uniformly distributed random number to the initial position)
2: W_cum = 0 (W_cum denotes the cumulative sum of the weights w_k^i, with an initial value of 0)
3: for n = 1 : N do
4:   i(n) = 0
5:   W_cum = W_cum + w_k^n
6:   while W_cum > r do
7:     i(n) = i(n) + 1
8:     r = r + 1/N
9:   end while
10:  (i(n) denotes the number of copies finally made of particle n; i(n) = 0 means the original particle is discarded)
11: end for

The System Resampling algorithm mainly compares the magnitudes of W_cum and r in a loop, and finally determines the number of times each original particle is duplicated. As can be seen from the pseudo-code, the algorithm suffers from high complexity, large storage space, and long computation time.
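A runnable counterpart of Algorithm 1 is sketched below. Instead of the explicit while loop, the cumulative weights are searched once against the shared grid of positions, which yields the same systematic selection; the helper name and array-based layout are assumptions of this sketch.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Return the indices of the particles kept after systematic resampling."""
    w = np.asarray(weights, dtype=float)
    N = len(w)
    positions = (np.arange(N) + rng.uniform()) / N  # one shared random offset
    c = np.cumsum(w)
    c[-1] = 1.0                                     # guard against round-off
    # For each position, find the first particle whose cumulative weight
    # exceeds it; large-weight particles capture several positions.
    return np.searchsorted(c, positions)

idx = systematic_resample([0.5, 0.5, 0.0, 0.0], np.random.default_rng(0))
```

Here the two zero-weight particles are discarded and each half-weight particle is duplicated twice, matching the i(n) counts Algorithm 1 would produce.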
(4) Residual Resampling

The Residual Resampling algorithm first multiplies each weight by the number of particles N and rounds down to obtain the initial number of copies of each particle [25]. The fractional parts left after rounding are then resampled systematically to obtain the additional numbers of copies. The two parts are added to give the final number of copies of each particle.
The pseudo-code for the Residual Resampling algorithm is as follows:

Algorithm 2 Residual Resampling algorithm
1: Generate a random number r ~ U[0, 1/N]
2: for n = 1 : N do
3:   i(n) = ⌊N · w_k^n⌋ (⌊·⌋ is the round-down operation; the larger the weight, the greater the number of copies. When the weight is 0, i(n) = 0)
4:   Adjust r according to the residual weight w_k^n − i(n)/N
5: end for
6: Allocate the remaining N − Σ_n i(n) copies by resampling the residual weights

Multinomial Resampling mitigates the weight degradation problem to some extent; its main drawback is that the generated uniformly distributed random numbers are unordered. Stratified Resampling confines the particles to different subintervals: because the random numbers differ across strata, the final random numbers are independent of each other. The difference in System Resampling is that the random number occupies the same position within every stratum, i.e., the random numbers in the subintervals are no longer independent of each other but differ by a fixed offset. In Residual Resampling, the running time is smaller when most particle weights are 0, i.e., when a few particles carry large weights. Compared with System Resampling, which compares and loops over every particle indiscriminately, Residual Resampling abandons particles whose weight rounds down to 0, resulting in fewer loop iterations. The comparison results are given in Table 1.
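Algorithm 2 can be sketched in runnable form as follows. Each particle first receives its floor(N · w_i) guaranteed copies; the leftover copies are then allocated from the residual weights, here via a multinomial draw (one common choice for the residual stage, assumed for this sketch).

```python
import numpy as np

def residual_resample(weights, rng):
    """Return resampled particle indices via residual resampling."""
    w = np.asarray(weights, dtype=float)
    N = len(w)
    counts = np.floor(N * w).astype(int)   # deterministic copies, step 3
    n_rest = N - counts.sum()              # copies still to allocate
    if n_rest > 0:
        residual = N * w - counts          # fractional parts after rounding
        residual = residual / residual.sum()
        counts = counts + rng.multinomial(n_rest, residual)
    return np.repeat(np.arange(N), counts)

idx = residual_resample([0.5, 0.25, 0.25, 0.0], np.random.default_rng(0))
```

With these weights the residuals are all zero, so the result is fully deterministic: two copies of particle 0, one each of particles 1 and 2, and particle 3 discarded.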

Improved resampling algorithm based on digital twin
Having analyzed the principles of the four basic resampling algorithms, this section proposes a new improved resampling algorithm based on the digital twin. Built on the traditional resampling algorithm, it mitigates the local optimal solution problem of traditional resampling by establishing a digital twin model that selectively preprocesses the particle weights: before resampling, particles are selectively preprocessed according to the magnitude of their weights. The framework of the new resampling algorithm is shown in Fig. 4.
The framework comprises an entity layer, an information layer, and a system layer. The entity layer is the complete particle filtering process, which produces the particle weight data set. The information layer performs the digital twin mapping of the entity layer's state and completes the processing and updating of particles according to the particle preprocessing rules; it is the bridge for information interaction between the system layer and the entity layer. The system layer realizes particle degradation prediction through real-time information interaction between the twin system and the information layer, including detection of the weight distribution, simulation of the results, and control of the degree of particle preprocessing. The processing rules are divided according to the size of the particle weights: particles with larger weights are kept, and particles with smaller weights are processed according to the following rules.

The rules for processing small-weight particles are as follows:

(1) The particle set {x_k^i, w_k^i}_{i=1}^N is divided according to the weights into a set with larger weights and a set with smaller weights, as shown in Eq. (28):

{x_k^m, w_k^m}_{m=1}^M with w_k^m ≥ W,  {x_k^l, w_k^l}_{l=1}^{N−M} with w_k^l < W    (28)

where the threshold W is usually selected from the set {w_k^i}_{i=1}^N according to the N_eff value of the resampling strategy, serving as the criterion for distinguishing the size of the particle weights and thus effectively identifying the degraded particles in the set.
(2) The particles with smaller weights are processed, and the processed particle is denoted by x̃_k^l, as shown in Eq. (29):

x̃_k^l = (1 − r) x_k^m + r x_k^l    (29)

where l = {1, 2, ..., N − M}, x_k^m is a particle drawn at random from the set {x_k^m, w_k^m}_{m=1}^M, and r ∈ (0, 1) denotes a random number in the interval 0 to 1.

(3) To achieve particle diversity, an update strategy is designed for the processed particles, as shown in Eq. (30).
x̂_k^l = { (1 − r) x_k^m + r x_k^l,  u ≤ P_S;  x_k^m,  u > P_S }    (30)

where x̂_k^l is the updated particle, u is a random number obeying u ~ U(0, 1), and P_S is the particle pretreatment degree factor in the range [0, 1]. From Eq. (30), as the value of P_S is increased gradually from 0, the number of particles processed by (1 − r)x_k^m + r x_k^l increases gradually. The resampling result is therefore adjusted by adjusting the size of P_S.
Theorem: The processed particle x̃_k^l takes values between x_k^m − x_k^l and x_k^m + x_k^l, and so does the updated particle x̂_k^l.

Proof: From Eq. (29), with r ∈ (0, 1) and x_k^m > x_k^l, it is easy to show that x̃_k^l lies between x_k^m − x_k^l and x_k^m + x_k^l. For Eq. (30), we prove its rationality under two conditions. (1) When u ≤ P_S, x̂_k^l = x̃_k^l, which by the above lies between x_k^m − x_k^l and x_k^m + x_k^l, since r ∈ (0, 1) and x_k^m − x_k^l > 0. (2) When u > P_S, x̂_k^l = x_k^m, which also lies between x_k^m − x_k^l and x_k^m + x_k^l. Considering these two cases together, it follows that x̂_k^l lies between x_k^m − x_k^l and x_k^m + x_k^l. The theorem is therefore proved.

Compared with the traditional particle filtering algorithm, the DT-based resampling algorithm updates the particles with smaller weights without changing the values of the particles with large weights, reduces the number of resampling operations, and reduces the particle degradation phenomenon.
(b) The new particle set is systematically resampled, and from the resampled particle set the state estimate at moment k is output. The selection of particle diversity is achieved by adjusting the size of P_S, which in turn adjusts the resampling results. The processing and updating of the particles with small weights is a deliberately designed step that reduces particle scarcity and increases particle diversity. When r = 0.5 and P_S = 0 are selected, the process directly replaces the particles with smaller weights by particles with larger weights, which can be regarded as a complement to the traditional resampling algorithm.
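The preprocessing rules above can be sketched as follows. This is a reconstruction under stated assumptions, not the paper's implementation: particles are scalars, `w_thresh` stands in for the threshold W of Eq. (28), and each small-weight particle is either blended with a randomly drawn large-weight particle (with probability P_S, Eqs. (29)-(30)) or replaced by it outright.

```python
import numpy as np

def dt_preprocess(particles, weights, w_thresh, ps, rng):
    """Sketch of the small-weight preprocessing rule (Eqs. (28)-(30))."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    large = np.where(w >= w_thresh)[0]   # kept unchanged, Eq. (28)
    small = np.where(w < w_thresh)[0]
    out = np.array(particles, dtype=float)
    if len(large) == 0:
        return out                       # no large-weight particle to borrow
    for l in small:
        m = rng.choice(large)            # random large-weight particle x_m
        r = rng.uniform()                # r ~ U(0, 1), Eq. (29)
        if rng.uniform() <= ps:          # Eq. (30): blend with probability PS
            out[l] = (1 - r) * out[m] + r * out[l]
        else:                            # ... otherwise replace directly
            out[l] = out[m]
    return out

new = dt_preprocess([10.0, 0.0], [0.9, 0.1], w_thresh=0.5, ps=1.0,
                    rng=np.random.default_rng(5))
```

With P_S = 1 every small-weight particle is blended, so the second particle moves strictly between its old value and the large-weight particle, while the large-weight particle itself is untouched.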

Simulation results and experimental analysis

Simulation comparison analysis of traditional resampling and the new algorithm
To demonstrate the effectiveness of the new resampling algorithm, the effect of each resampling algorithm in particle filtering is compared by simulation.
The model of the literature [12] is used as the simulation model in this paper, as shown in Eq. (35) and Eq. (36). The model is a univariate nonstationary growth model that is widely used in particle filter tests because its highly nonlinear and bimodal nature makes it difficult to estimate.

x_k = x_{k-1}/2 + 25 x_{k-1} / (1 + x_{k-1}^2) + 8 cos(1.2(k − 1)) + u_{k-1}    (35)

z_k = x_k^2 / 20 + v_k    (36)
where k denotes the time, u_{k-1} is the process noise of the state equation for x_k, and v_k is the observation noise of the observation equation for z_k.
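The growth model of Eqs. (35)-(36) can be simulated in a few lines; the sketch below uses the noise variances given in the experimental setup (process variance 10, observation variance 1).

```python
import numpy as np

rng = np.random.default_rng(6)

def step_state(x_prev, k, q_std):
    """State equation, Eq. (35)."""
    return (x_prev / 2.0
            + 25.0 * x_prev / (1.0 + x_prev ** 2)
            + 8.0 * np.cos(1.2 * (k - 1))
            + rng.normal(0.0, q_std))

def observe(x, r_std):
    """Observation equation, Eq. (36)."""
    return x ** 2 / 20.0 + rng.normal(0.0, r_std)

T = 100                                  # 100 time sampling points
x = np.empty(T)
z = np.empty(T)
x[0] = 0.1                               # initial state x_1 = 0.1
z[0] = observe(x[0], 1.0)
for k in range(2, T + 1):                # simulation starts at moment k = 2
    x[k - 1] = step_state(x[k - 2], k, np.sqrt(10.0))
    z[k - 1] = observe(x[k - 1], 1.0)
```

Because the observation is quadratic in the state, z_k cannot distinguish x_k from −x_k, which is the bimodality that makes this model a hard test case.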
The simulation environment is as follows: The CPU is Intel(R) Core (TM) i5-8259U CPU @ 2.3 GHz; RAM is 8 GB; The operating system is Windows 64-bit.
Initial parameter settings: First, the process noise and the observation noise are simulated; the process noise variance is set to 10 and the observation noise variance to 1, and both follow zero-mean Gaussian distributions. Then, the initial state value is set to x_1 = 0.1, the initial PF estimate is 1, and the number of time sampling points k is 100. The simulation is initialized with the parameters at moment 1 and starts at moment k = 2. Simulations are performed for particle numbers N of 10, 100, and 500, respectively. The particle pretreatment degree factor is P_S = 0.81.
Table 2 shows the mean square error (MSE) between the particle filter estimates and the state equation values obtained from the Stratified Resampling, Multinomial Resampling, System Resampling, Residual Resampling, and new resampling simulations for particle numbers N = {10, 100, 500} and resampling counts S = {1, 20, 100}.
As can be seen from Table 2, as the number of resampling operations and the number of particles increase, the MSE calculated by the four traditional resampling algorithms shows a decreasing trend. However, when S = 100 and N = 500, the MSE is larger than when S = 100 and N = 100, indicating that particle degradation occurs in conventional resampling in this case.
For the resampling algorithm proposed in this paper, the error decreases continuously as resampling continues, and the error keeps decreasing as the number of particles increases, without particle degradation. Further, the data in Table 2 show that at S = 20, N = 10, the new resampling algorithm reduces the error by 16.62% compared with Multinomial Resampling, the traditional resampling algorithm with the lowest MSE under this condition; at S = 20, N = 100, it reduces the error by 16.49% compared with Residual Resampling, the traditional algorithm with the lowest MSE under this condition; and at S = 20, N = 500, the reduction is 13.86%. This demonstrates that the new resampling algorithm is feasible and its results are better than those of the four traditional resampling algorithms. Fig. 5 shows the particle filtering estimation errors of the four algorithms compared with the new algorithm when 50 sampling points are selected, the number of resampling operations is S = 100, and the number of particles is N = 5000. The same random number seed is used when generating particles in all four comparison results, hence the peak between 20 and 30. As can be seen from Fig. 5, the simulation results of the four traditional resampling algorithms are almost identical, and their mean error values are larger than those of the new resampling algorithm.
The simulation results in Table 2 and Fig. 5 show that the MSEs obtained by the four traditional resampling algorithms are almost the same as the number of particles N increases, indicating that the four algorithms perform similarly. Moreover, as the number of resampling operations S and the number of particles N increase, the MSEs of the four resampling algorithms first decrease and then increase, producing the particle degradation phenomenon.
This indicates that, across multiple simulation experiments, the large-weight particles identified and replicated in each resampling operation may be the particles with the largest local weights rather than the largest global weights. The experimental data show, however, that the MSE of the new resampling algorithm does not rebound, which demonstrates that the algorithm does not produce particle degradation.
To verify the convergence speed of the proposed algorithm, the average RMSE of the one-dimensional nonlinear system under different numbers of resampling operations was obtained for each algorithm by running multiple simulations of the same system model under the same noise conditions. The number of particles is set to 600, the noise is Gaussian, and the numbers of resampling operations are 10, 20, 30, ..., 100; for each resampling setting, 10 simulations are run for each algorithm and the average is taken. The simulation results are shown in Fig. 6, from which it can be seen that the RMSE of the new algorithm stabilizes when the number of resampling operations reaches 80.

Performance analysis of particle filter
Considering the accuracy of the simulation results, this paper compares the DT-based resampling algorithm with the four traditional resampling algorithms. The simulation model and the environment and program parameters are set as in the previous subsection. The state model of the simulation is Eq. (35), the observation model is Eq. (36), the initial state value is x_1 = 0.1, the initial PF estimate is 1, the initial variance of the PF error is 5, the number of time sampling points is 100, and P_S is set to 0.81. The ideal simulation result is that the estimated signal of the particle filter completely covers the noisy state signal. For N = 5000 particles, the simulation results of the DT-based resampling algorithm for the noisy state, the observed signal, and the estimated signal of the particle filter are shown in Fig. 7. It can be seen from Fig. 7 that the noisy state value is very close to the particle filter estimate, indicating that the particle filter algorithm performs well.

Fig. 7 The simulation results of the resampling algorithm for the noisy state, the observed signal, and the particle filter estimate

To further verify the accuracy of the particle filter's state estimates, the noisy state values and the particle filter estimates of the DT-based resampling algorithm were simulated together with a 95% confidence interval; the result is shown in Fig. 8. The particle filter estimate lies entirely within the confidence interval, which is a very satisfactory result. Fig. 9 shows the particle-filtered signal together with the errors of the observed and state signals for the DT-based resampling algorithm. The red line in Fig. 9 indicates the MSE between the particle filter estimates and the actual state values, and the black line indicates the MSE between the observations and the actual state values. The MSE between the particle filter estimates and the actual state fluctuates around zero; compared with the observations, the particle filter estimates yield a smaller MSE and lie closer to the actual state values.
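The 95% confidence interval of Fig. 8 can be derived from the particle cloud itself. One common approach, shown here as an illustrative sketch rather than the paper's exact procedure, takes weighted empirical quantiles of the particles:

```python
import numpy as np

def particle_confidence_interval(particles, weights, level=0.95):
    """Weighted empirical quantiles of the particle cloud at (1-level)/2 and 1-(1-level)/2."""
    p = np.asarray(particles, dtype=float)
    w = np.asarray(weights, dtype=float)
    order = np.argsort(p)             # sort particles, carry weights along
    p, w = p[order], w[order]
    cdf = np.cumsum(w / w.sum())      # weighted empirical CDF
    alpha = (1.0 - level) / 2.0
    lo = p[np.searchsorted(cdf, alpha)]
    hi = p[np.searchsorted(cdf, 1.0 - alpha)]
    return lo, hi
```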

Resampling efficiency analysis
A good resampling algorithm can shorten the running time of the particle filtering algorithm even as the number of resampling steps increases. Therefore, the running time of resampling particle filtering was recorded in the experimental simulations to judge the running efficiency of the DT-based resampling algorithm. With the hardware conditions and simulation parameters unchanged, the average time over 100 time sampling points was measured for particle numbers N of 5000, 7500, and 10000, respectively. The number of resampling steps for both the DT-based resampling algorithm and the four conventional resampling algorithms was set to 100, and the mean particle filtering run times were recorded, as shown in Table 3. As the number of particles increases, the computation time is essentially proportional to the number of resampled particles, but the DT-based resampling algorithm retains a time advantage, improving the running time by 7.67%, 2.25%, and 7.54%, respectively, over the minimum time of the other four resampling algorithms. In Fig. 10, particles with a resampling count of 0 are low-weight particles that are discarded; their positions are filled by large-weight particles with high resampling counts. The particle distribution after resampling is shown in Fig. 11. In Fig.
11, the particle distribution locations before resampling are indicated in blue, and those after resampling in red. The particle distribution is more concentrated after resampling, which reduces the variance and yields better particle filter results. The experimental results in this section show that the DT-based resampling algorithm meets the real-time requirements for tracking a moving target in the specified area, effectively solving the problems faced by the particle filter in real-time tracking, such as low efficiency, long computation time, and inaccurate results, and satisfying the resampling algorithm design requirements.

Conclusion
To address the problem of particle diversity loss, this paper proposes a new digital twin-based resampling algorithm, which updates the particles with smaller weights without changing the values of the large-weight particles, reduces the number of resampling steps, and alleviates the particle degradation phenomenon. Four traditional resampling algorithms were compared with the proposed algorithm, and the comparison shows that the new resampling algorithm improves operational efficiency and estimates the particle filtering results very accurately. In addition, the practical application demonstrates that the algorithm meets the real-time requirements for tracking a moving target in the specified area, effectively solving the problems of low efficiency, time-consuming computation, and inaccurate results faced by the particle filter in real-time tracking, and meeting the design requirements of a resampling algorithm.

Fig. 1
Fig. 1 Flow of particle filter algorithm

Fig. 4
Fig. 4 Framework diagram of digital twin-based resampling algorithm

1: Particle set initialization, k = 0: for i = 1, 2, ..., N, generate the sampled particles {x_0^i}_{i=1}^N from the prior p(x_0).
2: for k = 1 : N do
3: Importance sampling: for i = 1, 2, ..., N, generate the sampled particles {x̃_k^i}_{i=1}^N from the importance probability density, calculate the particle weights w_k^i = w_{k-1}^i p(z_k | x̃_k^i), and normalize the weights: w̃_k^i = w_k^i / Σ_{j=1}^N w_k^j.
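The initialization and importance-sampling steps listed above correspond to a bootstrap particle filter. A minimal runnable sketch, assuming a Gaussian likelihood and with the model functions f and h supplied by the caller, is:

```python
import numpy as np

def bootstrap_pf(z, n_particles, f, h, q, r, seed=0):
    """Bootstrap PF: propagate particles through f, update weights with the Gaussian
    likelihood of z_k under h (w_k^i = w_{k-1}^i * p(z_k | x_k^i)), normalize,
    and report the weighted-mean state estimate at each step."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)              # sample from prior p(x_0)
    w = np.full(n_particles, 1.0 / n_particles)
    estimates = np.empty(len(z))
    for k, zk in enumerate(z, start=1):
        x = f(x, k) + q * rng.standard_normal(n_particles)   # importance sampling
        w = w * np.exp(-0.5 * ((zk - h(x)) / r) ** 2)        # weight update
        w = w / w.sum()                                       # normalization
        estimates[k - 1] = np.sum(w * x)                      # weighted-mean estimate
    return estimates
```

Resampling is deliberately omitted here, since the choice of resampling scheme is exactly what the rest of the paper addresses.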

Fig. 5
Fig. 5 Comparison results of four resampling algorithms with the new algorithm

Fig. 6
Fig. 6 RMSE mean curve of each algorithm under different resampling times

Fig. 8
Fig. 8 State signal with noise, particle filter estimated signal, and its confidence interval

Fig. 10
Fig. 10 The results of 100 particle resampling

Fig. 12
Fig. 12 Tracking results in different frames

Table 1
Comparison of basic resampling algorithms

Table 2
MSE calculated by the resampling algorithms

Table 3
The mean values of different particle filters running times