Image-based finite-time visual servoing of a quadrotor for tracking a moving target

This paper proposes an image-based visual servoing control method for a quadrotor UAV (QUAV) tracking a moving target. Firstly, the dynamic image model containing the moving target's parameters is established based on image moment features in the virtual camera plane. To handle the unpredictability of the moving target in space, we use a high-order differentiator to estimate the state parameters of the moving target. To solve the problem of unknown image depth caused by a monocular camera, we derive a nonlinear finite-time linear velocity observer in the virtual image plane, which not only estimates the linear velocity of the QUAV but also avoids the measurement of image depth. Based on the above information, we design a global finite-time controller and use Lyapunov theory to prove the finite-time stability of the system. Finally, numerical simulations verify the convergence of the proposed control scheme, and ROS Gazebo simulations demonstrate its improved tracking error performance.


Introduction
The quadrotor UAV (QUAV) has been widely used in autonomous detection [1,2], payload transportation [3], target tracking [4], and other missions because of its vertical takeoff and landing capability. In these missions, the QUAV needs to fly autonomously, and the primary task in achieving autonomous flight is obtaining its spatial position information. The traditional QUAV combines a low-cost GPS (Global Positioning System) receiver and a low-cost IMU (inertial measurement unit) to obtain its position. However, this positioning method cannot provide reliable position feedback when the QUAV operates in indoor, low-altitude, or complex urban environments [5]. The most common way to solve this problem is to combine it with visual technology. Under this scheme, the QUAV usually carries a low-cost monocular camera. The rich visual information provided by the camera can help achieve QUAV target tracking [6,7].
Using images to obtain the position and attitude information of the robot is called visual servoing [8].
Visual servoing technology has two branches: position-based visual servoing (PBVS) and image-based visual servoing (IBVS). PBVS requires complete spatial data for 3D reconstruction, which demands high computational power and is unsuitable for small QUAVs. IBVS calculates position and attitude information from selected image features. IBVS does not require accurate camera calibration and is more robust to calibration errors and image noise [9], so it is suitable for small QUAVs. It should be pointed out that, due to the underactuated characteristics of the QUAV, its dynamics and the image dynamics are coupled, so the IBVS of a QUAV is more challenging than that of a fully actuated system [10].
The earliest IBVS of a QUAV was proposed by Nicolas [9], who realized image-based visual servoing of static targets using optical flow features. However, optical flow features increase the coupling degree of the QUAV visual servoing system [11]. Researchers therefore began to use invariant features, such as image moments [12,13]. Image moments can be computed from various primitives, such as point features and line features. In addition, image moments are scale invariant. Taking advantage of this property and introducing the concept of a virtual camera plane, Hamed and Giuseppe [14] eliminated the coupling of the pitch and roll angles, reducing the complexity of controller design, and designed an adaptive controller. Subsequently, image moment features in the virtual camera plane have been widely used [15][16][17].
Because most QUAVs use low-cost monocular cameras for image-based visual servoing, image depth information cannot be obtained accurately. To solve this problem, Hamed Jabbari and Jungwon estimated the linear velocity of the QUAV in the virtual camera plane [16]. This method uses an observer to compensate for the missing image depth information. After that, Zheng and Wang discussed the IBVS of a QUAV under slope conditions [17]. They designed a nonlinear velocity observer to estimate the linear velocity of the QUAV; a linear velocity observer derived in the virtual plane avoids acquiring depth information. They then used the backstepping method to design the controller and proved global asymptotic stability for the first time.
The above methods are designed for static targets. However, the QUAV has an urgent need for tracking moving targets. Due to the lack of accurate prior parameters of moving targets, tracking unpredictable targets is more challenging. S. Masoud and A. Abdollah designed an indirect adaptive neural controller for the QUAV using a radial basis function (RBF) neural network to enable it to track moving targets [18]. After that, Masoud and Hamed Jabbari [19] used an artificial neural network to estimate the speed of moving targets and considered the influence of external disturbance. Ning [20] used desired compensation robust integral of the sign of the error (DCRISE) feedback control to design a visual servoing controller for moving targets, which improves the system's robustness. Zhiqiang and Xuchao [21] used a higher-order differentiator to estimate the motion parameters of the moving target and fed these parameters forward to the controller design stage, which eliminates the coupling of the controller to the motion parameters. Moreover, it should be noted that the QUAV often needs a fast system response when facing autonomous flight missions [22].
In current research, the discussion of controller convergence is mainly about asymptotic convergence: as time tends to infinity, the system state reaches the equilibrium point. Alexis [23] proposed a finite-time controller for position control, which realized hover control of a QUAV in finite time. Tian et al. [24] applied a multivariable super-twisting algorithm to realize the attitude control of a QUAV and proved its stability through theory and experiments. Harshavarthini [25] used a Lyapunov-Krasovskii functional to design a finite-time fault-tolerant controller for the attitude of the QUAV. Gajbhiye [26] considered the finite-time control of slung payload transportation by a QUAV and proposed a geometric finite-time inner-outer loop control strategy. Wenwu [27] used homogeneous theory to design finite-time position and attitude controllers so that the QUAV can realize hovering control in finite time. Guanglei and Guangbo [28] designed a finite-time controller using a nonsingular terminal sliding mode to realize finite-time control of QUAV image-based visual servoing. We summarize the above research work in Table 1, comparing these methods along three dimensions: the convergence rate of the controller, whether the depth information of targets is considered, and whether external disturbances are discussed.
We can see that no existing control scheme simultaneously satisfies the requirements of tracking a moving target, finite-time convergence, consideration of the target's depth information, and verification under disturbance. The main contributions of this paper are as follows: (1) We propose a global finite-time control scheme for the image-based visual servoing of a moving target for the first time. In this scheme, we use a nonlinear tracking differentiator to estimate the parameters of the moving target. Then a global finite-time nonlinear linear velocity observer is designed using the backstepping method. Finally, we use the backstepping method to design the global finite-time controller of the QUAV. (2) To solve the problem of acquiring image depth information with a monocular camera in the image-based visual servoing of a moving target, we propose a global finite-time nonlinear linear velocity observer. The depth of the QUAV relative to the target is estimated by estimating the linear velocity along the altitude axis in the virtual camera plane. (3) The numerical simulation and ROS Gazebo simulation results show fast error convergence and better tracking control performance.
The rest of this paper is organized as follows. In Sect. 2, we introduce the QUAV and the image moment dynamics. In Sect. 3, we introduce the design of the controller. In Sect. 4, the simulation experiments of the proposed control scheme are carried out. Finally, our conclusions are given in Sect. 5.


System modeling
In this section, we will introduce the dynamic modeling of the QUAV and the image moment feature dynamics.

Quadrotor model
This section uses two frames to describe the translational and rotational motion of the QUAV (see Fig. 1). The inertial frame I = {O i , X i , Y i , Z i } is assumed to be fixed at a point on the earth. We assume that the mass center of the quadrotor UAV is the origin of the body-fixed frame B. Moreover, we assume that the quadrotor UAV is always a rigid body. The position of the origin of the body frame B in the inertial frame is expressed as ζ = (x, y, z) T . We use three Euler angles φ, θ, and ψ to represent the rotation between the two frames, and R : B → I is the rotation matrix between them.
The mass of the QUAV is m and the inertia matrix is J = diag{J xx , J yy , J zz }. The linear velocity and angular velocity of the QUAV in the body-fixed frame are expressed as V ∈ R 3 and ω = (ω 1 , ω 2 , ω 3 ) T ∈ R 3 . According to [29][30][31], the kinematics and dynamics equations describing the QUAV with disturbance are expressed as follows: where sk(ω) is the skew-symmetric matrix such that sk(a)b = a × b for arbitrary vectors a ∈ R 3 and b ∈ R 3 , and × denotes the vector cross product. g is the gravitational acceleration, F ∈ R 3 is the force, τ ∈ R 3 is the torque, and U 1 is the total thrust.
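The rigid-body model described above can be sketched numerically. The following is a minimal Euler-integration sketch under the stated assumptions (NED axes with body thrust along -z, illustrative function names and parameters of our own, not the paper's code):

```python
import numpy as np

def sk(w):
    """Skew-symmetric matrix so that sk(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rigid_body_step(zeta, R, V, omega, U1, tau, m, J, dt, g=9.81):
    """One Euler step of a standard quadrotor rigid-body model.

    zeta: inertial position; R: body-to-inertial rotation matrix;
    V, omega: body-frame linear and angular velocity;
    U1: total thrust along the body -z axis (NED, z down); tau: body torque.
    """
    e3 = np.array([0.0, 0.0, 1.0])
    zeta_dot = R @ V                                    # kinematics
    R_dot = R @ sk(omega)                               # attitude kinematics
    # m V̇ = -ω × (m V) + m g Rᵀ e3 - U1 e3  (thrust opposes gravity)
    V_dot = -sk(omega) @ V + g * (R.T @ e3) - (U1 / m) * e3
    omega_dot = np.linalg.inv(J) @ (tau - sk(omega) @ (J @ omega))
    return (zeta + dt * zeta_dot, R + dt * R_dot,
            V + dt * V_dot, omega + dt * omega_dot)
```

At hover, setting U1 = m·g with zero angular rates leaves the velocities unchanged, which is a quick sanity check on the sign convention.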

Image dynamics
This section uses the image moment features based on a virtual camera. First, we assume a camera frame C = {O C , X C , Y C , Z C } that coincides with the body-fixed frame of the QUAV. We assume that the camera is fixed at the mass center of the quadrotor and has a downward field of view. The virtual camera frame ν has the same origin as the actual camera frame; however, the roll and pitch angles of the virtual camera frame are always zero, and its yaw angle is consistent with the actual orientation of the quadrotor (see Fig. 2). We assume that there is a fixed point I P = ( I x, I y, I z) T in the inertial frame, represented as C P = ( C x, C y, C z) T in the camera frame and as ν P = ( ν x, ν y, ν z) T in the virtual camera frame. Therefore, we get where R T ψ is the rotation matrix describing rotation around the Z-axis and ψ denotes the yaw angle. Then, we get where Ȯ ν denotes the linear velocity of the camera and virtual frames in the inertial frame, v = ( ν v x , ν v y , ν v z ) T is the linear velocity of the camera frame expressed in the virtual frame, and the remaining term is the velocity vector of a moving point in the virtual frame.
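The virtual-frame construction above can be illustrated in a few lines. This is a sketch under the stated assumption that the virtual frame shares the camera origin and only undoes the yaw rotation; `Rz` and `to_virtual_frame` are hypothetical helper names:

```python
import numpy as np

def Rz(psi):
    """Rotation matrix about the Z axis by the yaw angle psi."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def to_virtual_frame(P_inertial, O_cam, psi):
    """Express an inertial-frame point in the virtual camera frame:
    translate to the camera origin, then undo only the yaw rotation
    (roll and pitch of the virtual frame are zero by construction)."""
    return Rz(psi).T @ (np.asarray(P_inertial, float) - np.asarray(O_cam, float))
```

With zero yaw the transform reduces to a pure translation, which matches the definition of the virtual frame.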
According to the perspective projection model, the projection of the point P onto the virtual camera plane can be expressed as where λ is the focal length and ( ν u, ν n) are the point coordinates in the virtual camera plane. From (7) and (8), one obtains Suppose that there are N stationary points in a level plane of the inertial frame, which are subject to the following assumptions.

Assumption 2
The observed target is a planar object that lies in a level plane of the inertial frame, and its binary image is obtained by a segmentation algorithm.

Assumption 3
Image points always remain in the field of view (FOV) of the camera.

Assumption 4
The sensors can provide accurate measurement values in the controller design stage.

Remark 1 Assumptions 3 and 4 are difficult to guarantee in the natural environment. These two assumptions are added here only to ensure that, in the simulation experiments, the image features always remain in the camera FOV and each sensor can give accurate feedback.
Then, according to [13], the image moment features are as follows: where ν u g = (1/N ) Σ N k=1 ν u k and ν n g = (1/N ) Σ N k=1 ν n k , and ν u k and ν n k are the two image coordinates of the kth point.
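The projection of (7)-(8) and the centroid terms of the image moments can be sketched together. The helper names are our own, and only the centroid part of the feature construction is shown:

```python
import numpy as np

def project(P_virtual, lam):
    """Perspective projection of a virtual-frame point onto the
    virtual image plane: u = lam*x/z, n = lam*y/z."""
    x, y, z = P_virtual
    return lam * x / z, lam * y / z

def centroid_features(points_virtual, lam):
    """Centroid (u_g, n_g) of the N projected points, i.e.
    u_g = (1/N) * sum(u_k), n_g = (1/N) * sum(n_k)."""
    uv = np.array([project(P, lam) for P in points_virtual])
    return uv[:, 0].mean(), uv[:, 1].mean()
```

For a symmetric pattern of points centered below the camera the centroid is zero, which is the desired hover condition for the lateral features.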
Based on (9) and (10), the dynamics of the image features are defined as follows [14]: where q = (q x , q y , q z ) T and z * is the desired altitude.
In order to control the yaw motion of the quadrotor UAV, we select the image feature q ψ to describe the corresponding motion according to [12], which is defined as follows: (12) where the time derivative of q ψ is q̇ ψ = −ψ̇ + ψ̇ t , and ψ̇ t is an unknown term that indicates the velocity of the target in the yaw direction.

Controller design
This section first gives the closed-loop error equation of the system, then the design of the target trajectory observer and of the finite-time linear velocity observer in the virtual camera plane. Finally, we give the design process of the global finite-time controller. Figure 3 shows the block diagram of the whole system.
The following lemmas are useful to derive our main results.

Lemma 1 [32] Consider the nonlinear system ẋ = f (x).
where c > 0 and α ∈ (0, 1). Then, the system is finite-time stable. In addition, the finite convergence time T (x) satisfies that

Lemma 2 [33] For any real numbers x i , i = 1, . . . , n, and b ∈ (0, 1], the following inequality holds:

Lemma 3 [34] For real variables x, y, and any positive constants a, b, c, the following inequality is true.
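The inequality bodies of Lemmas 1-3 are elided in the extracted text. In the standard forms these results take in the cited literature (a hedged reconstruction, to be checked against [32-34]), they read:

```latex
% Lemma 1: if V(x) is positive definite and
\dot V(x) \le -\, c\, V(x)^{\alpha}, \qquad c > 0,\ \alpha \in (0,1),
% then the system is finite-time stable with settling time
T(x_0) \le \frac{V(x_0)^{\,1-\alpha}}{c\,(1-\alpha)}.
% Lemma 2: for real x_i and b \in (0,1],
\Big( \sum_{i=1}^{n} |x_i| \Big)^{b} \le \sum_{i=1}^{n} |x_i|^{b}.
% Lemma 3: for real x, y and positive a, b, c,
|x|^{a}\,|y|^{b} \le \frac{a}{a+b}\, c\, |x|^{a+b}
  + \frac{b}{a+b}\, c^{-a/b}\, |y|^{a+b}.
```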

Image feature error dynamics
We define the desired image features as follows: Therefore, the image moment feature errors of the translational motion are as follows: Taking the derivative of the above formula and using (11), we obtain the following dynamics:

Trajectory observer
Before designing the controller, we need to estimate the trajectory parameters. The standard method uses a high-order differentiator, but high-order differentiators are sensitive to noise. Therefore, we use the nonlinear tracking differentiator proposed by Han and Wang, whose stability has been proven. The general form of the nonlinear tracking differentiator with input v(t) is as follows [35]: where x 1 tracks the original signal v(t) and x i+1 (i > 0) is the estimate of the ith-order derivative. R is a coefficient that determines the rate of convergence of (17). The function f (·) suggested by Han and Wang is as follows [35]: where α i is a coefficient reflecting the degree of nonlinearity, and α i = 1 corresponds to the linear case. δ determines the linear interval, which prevents vibration when the system is in the neighborhood of the origin. For our system, the velocity of the target object is d, its estimate is d̂, and the estimation error is d̃. Therefore, the following observer is designed: where
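A second-order instance of this kind of nonlinear tracking differentiator can be sketched as follows. This is only one common variant of the structure in (17)-(18): the power-law/linear-interval gain function and the gains R, alpha, delta are illustrative choices of ours, not the paper's tuning:

```python
import numpy as np

def fal(e, alpha, delta):
    """Nonlinear gain in Han's style: power law |e|^alpha * sign(e)
    outside the linear interval |e| <= delta, linear inside it
    (the linear interval prevents chattering near the origin)."""
    if abs(e) > delta:
        return np.sign(e) * abs(e) ** alpha
    return e / delta ** (1.0 - alpha)

def tracking_differentiator(v_samples, dt, R=50.0, alpha=0.5, delta=0.01):
    """Second-order nonlinear TD: x1 tracks the input v(t) and
    x2 estimates its first derivative."""
    x1, x2 = float(v_samples[0]), 0.0
    x1_hist, x2_hist = [], []
    for v in v_samples:
        e = x1 - v
        x2_dot = -R * R * (fal(e, alpha, delta) + fal(x2 / R, alpha, delta))
        x1 += dt * x2
        x2 += dt * x2_dot
        x1_hist.append(x1)
        x2_hist.append(x2)
    return np.array(x1_hist), np.array(x2_hist)
```

Fed a ramp, x2 settles near the ramp's slope, which is the behavior the target velocity observer (19) relies on.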

Finite-time velocity observer
In order to compensate for the unknown depth information of the monocular camera, estimate the linear velocity in the virtual plane, and guarantee the convergence performance of the observer, we design a finite-time linear velocity observer (FTO).

Theorem 1
The velocity observer and the corresponding update law are defined as follows: where the corresponding estimation errors are defined as follows: This linear velocity observer is globally finite-time stable.
Proof We first take the time derivatives of (23) and (24) and then substitute (13) and (14), respectively. We can get the following expressions: Now we choose a Lyapunov candidate function Taking the time derivative of (27) and substituting (21) and (22) into it, we can get the following: where r is a design parameter. (28) is negative definite. Inspired by [17] and [33], we can select a set of design parameters as follows: where n = 3 represents the order of the system. Then, using the parameters (29) in (28), we can get So far, we have proved that the observer is globally asymptotically stable (GAS). We continue to prove that it is globally finite-time stable (GFTS). Scaling (27), we can get Combining (33) and (34) and using Lemma 2, we can get where r = 2α. Then, we select a function as follows: The result means that L̇ ≤ −(β 1 /4)L α holds. That is, we have found a Lyapunov function satisfying Lemma 1. Therefore, the proposed velocity observer is globally finite-time stable (GFTS).

Finite-time controller
We mainly use the backstepping scheme to design the IBVS controller of QUAV, and we need to use the proposed velocity observer.
We define the first Lyapunov function: By substituting (13), we can get the time derivative of the Lyapunov function. We treat v as a virtual control input and choose v d as follows: Updating V̇ 1 in (37) using (38), we then define the second Lyapunov function Since we need the time derivative of (40), we first need the time derivative of q 2 . Taking the time derivative of (38) and then substituting (13) and (22) into it, we can get where q 2 is the second error term. Taking the time derivative of (40) and then substituting (39), (41), and (33) into it, we can get We can then prove that (42) is also GFTS.

Theorem 2 Consider f as a virtual control input and design f as follows:
Then V̇ 2 is negative definite, and the system is GFTS.
Remark 2 If we use the small-angle assumption, we can get the desired attitude of the QUAV through the following equation. However, this paper does not adopt this assumption, so it is necessary to continue the backstepping design to obtain the desired angular velocity.
We continue to define the third error term Updating V̇ 2 in (42) using (43), we then define the third Lyapunov function Using (50) and (49), we can get the time derivative of V 3 , where (1/c 1 )ḟ d is expressed as follows:

Theorem 3 Considering ḟ as the control input, design ḟ as follows: Then V̇ 3 is negative definite, and the system is GFTS.
Proof Substituting (54) into (52), we can get Using Lemma 3 to scale (55), we can get Using Lemma 2 for (56) and scaling (51), we can obtain The result means that V̇ 3 satisfies Lemma 1. So far, we have obtained the controller, but we also need to obtain the angular velocity and thrust that can directly control the QUAV. In the virtual camera frame, the force f of the QUAV is expressed as By taking the time derivative of (60) and substituting (15), we can get Finally, we only need to combine (61) and (54) to obtain the desired thrust U 1 and the desired angular velocities ω 1 and ω 2 , respectively.

Remark 3
This paper only derives the controller for the translational motion obtained from the image moment features in the virtual camera plane. Once we get the desired angular velocity, a PD or PID controller can be used in the inner loop to track it, thereby realizing the translational control of the QUAV in the horizontal and altitude directions.
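The PD inner loop mentioned in Remark 3 can be sketched minimally. The gains and the finite-difference derivative here are illustrative placeholders, not tuned values from the paper:

```python
def pd_rate_loop(omega_des, omega, err_prev, dt, kp=0.1, kd=0.002):
    """Minimal PD loop on angular velocity: produces a torque command
    that tracks the desired rate coming from the outer IBVS loop."""
    err = omega_des - omega
    tau = kp * err + kd * (err - err_prev) / dt  # P + finite-difference D
    return tau, err
```

The returned error is carried to the next call so the derivative term can be formed without an explicit differentiator.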
After the above process, we have the controller for the translational motion; we also need a controller for the yaw motion. The image feature error is defined as where q d ψ is the desired value. According to (12), we can get

Theorem 4 Design the control input as
with k 5 > 0; then the image feature error q 4 will converge to zero in finite time.
Proof The relationship between the time derivative of the Euler angles and the angular velocity is as follows: Now we choose a Lyapunov function Then we take the time derivative of (66) and substitute (63) and (65) into it, obtaining By scaling (66) and combining the result with (67), we can get The result means that V̇ ψ ≤ −(k 5 /4)V α ψ holds, and the system is GFTS. Finally, q 4 will converge to zero in a finite time.

Remark 4
In the actual QUAV system, yaw motion control is usually regarded as an independent channel. Therefore, according to the characteristics of its dynamic model and image moment error model, the design parameters of the yaw motion controller are selected as follows:

Remark 5 After the above controller design process, we can use Lemma 1 to quantitatively describe the convergence time of the system. Recall the convergence time T (x): Then the convergence times of the translational motion and the yaw motion can be expressed as follows: where q 1 (0) and q 4 (0) represent the corresponding initial states of the system, respectively.

Remark 6
In the controller design stage, the rotor of QUAV is assumed to be ideal. For the control allocation of the actuator, since the four rotors can produce a single thrust U 1 and a full torque vector τ = (τ 1 , τ 2 , τ 3 ) T for rotation, we can use the following equation to obtain the desired angular velocity of the four rotors.
where n 1 , n 2 , n 3 , and n 4 denote the angular velocity of the front, right, rear, and left rotor, respectively. b and d are the thrust and drag factors. l is the distance between each rotor center and the center of mass of the QUAV.
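The control allocation described above can be sketched by inverting the '+'-configuration actuator map. The sign convention below is one common choice and may differ from the paper's; the closed form is exact for the stated map:

```python
import numpy as np

def rotor_speeds(U1, tau, b, d, l):
    """Invert the '+'-configuration actuator map
        U1 = b(n1^2+n2^2+n3^2+n4^2),  tau1 = l*b*(n4^2 - n2^2),
        tau2 = l*b*(n1^2 - n3^2),     tau3 = d*(n2^2+n4^2-n1^2-n3^2),
    for the rotor speeds (front, right, rear, left)."""
    t1, t2, t3 = tau
    n_sq = np.array([
        U1 / (4 * b) + t2 / (2 * l * b) - t3 / (4 * d),  # front
        U1 / (4 * b) - t1 / (2 * l * b) + t3 / (4 * d),  # right
        U1 / (4 * b) - t2 / (2 * l * b) - t3 / (4 * d),  # rear
        U1 / (4 * b) + t1 / (2 * l * b) + t3 / (4 * d),  # left
    ])
    # clip guards against commands that would require negative thrust
    return np.sqrt(np.clip(n_sq, 0.0, None))
```

Substituting the returned speeds back into the forward map recovers the commanded thrust and torques, which is a convenient unit test for the mixer.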

Simulation
In order to verify the effectiveness of the proposed controller, we set up four groups of simulations. In the first group, we apply the proposed method to a static target to ensure that our method works in static scenarios. In the second group, we apply the proposed method to a moving target. In the third group, we compare the proposed method with previous methods. In the fourth group, we conduct a simulation comparison in the Robot Operating System (ROS) Gazebo environment. The model parameters of the QUAV and the camera are shown in Table 2.
The first three groups are numerical simulations conducted in MATLAB R2019b on Windows 10 x64. The fourth group is the ROS Gazebo moving-target simulation.

Numerical simulation of the stationary target
The control parameters of the controller are shown in Table 3. It should be noted that the desired image moments are obtained when the QUAV is at position (0, 0, −4) m in the inertial frame I with attitude (0, 0, 0) rad. Therefore, the desired height is z * = −4 m. The numerical simulation is divided into two groups: one without external disturbance and the other with external disturbance. Figure 4 shows the performance of the proposed control method on a static target without external disturbance. Figure 4a shows that the QUAV finally hovers at the position (0, 0, −4) m, and Fig. 4b shows that the attitude of the QUAV is (0, 0, 0) rad. Figure 4c shows the convergence of the image moment feature errors in the virtual camera plane. It can be seen that the proposed control method makes the system reach the desired state, and the yaw feature shows that the system is stable in finite time. Figure 4d shows the linear velocity in the virtual plane. We can see that the designed linear velocity observer accurately estimates the actual linear velocities of the QUAV. Figure 4e, f shows the trajectories of the feature points in the virtual camera plane and the actual camera plane, respectively. From the axes of these trajectories, we can conclude that a low-cost 1080p camera is, in theory, sufficient for the current visual servoing task. Figure 4g, h shows the thrust input and torque input of the system, respectively. Figure 4i, j, k shows the observation states of the target motion observer along the three axes, respectively. Figure 4l shows the spatial trajectory of the QUAV. We can see that the target observer accurately estimates the motion state of the target. Figure 5 shows the performance of the proposed control method on a static target with external disturbance. We can see that the system is robust to the imposed external disturbances.

Numerical simulation of the moving target
Firstly, we give the motion constraint equation of the target; its trajectory in space is a square in the xOy plane. The simulation results are shown in Fig. 6.
The motion constraint equation is as follows: The target is always kept in the xOy plane during movement, i.e., v z = 0 m/s. At the same time, the target does not spin about any axis, that is, ω z = 0 rad/s. The numerical simulation is divided into two groups, one without external disturbance and the other with external disturbance. We focus on the following aspects of this simulation: (1) Image moment feature error. (2) Finite-time linear velocity observer. (3) Target trajectory observer.
Figure 6c shows the convergence of the image moment feature errors. We can see some weak fluctuations in the figure as the QUAV tracks the target through the inflection points, but the errors quickly converge to the desired values. Figure 6d shows the linear velocity in the virtual camera plane. We can see that the proposed finite-time linear velocity observer also accurately estimates the linear velocity of the QUAV when the target is moving. Figure 6i, j, k shows the behavior of the target trajectory observer. We can see that, with a moving target, the target trajectory observer still accurately estimates the motion parameters of the target. Figure 7 shows the performance of the proposed control method on a moving target with external disturbance. We can again see that the system is robust to external disturbances. Finally, Fig. 7l shows the spatial trajectory of the QUAV. We can find that the proposed control method tracks the moving target without time delay.
To further illustrate the effectiveness of our proposed method, we also carried out a group of nonlinear target tracking simulations. The target trajectory parameters are as follows: (76) Figure 8 shows the numerical simulation results for the nonlinear moving target. Because the nonlinear tracking differentiator estimates the motion trajectory parameters of the target online, the system adapts to nonlinear moving targets. This verifies the effectiveness of the proposed control method for tracking nonlinear moving targets.

Comparative simulation of moving targets
In order to better illustrate the performance of the proposed method, we conducted a set of comparative experiments. The methods involved in the comparison include the artificial neural network method proposed by Masoud [19], denoted as RBFNN, and the method using the target observer proposed by Zhiqiang [21], denoted as VE-backstepping. The methods selected here all use the same dynamic model and image moment features in the virtual camera plane. At the same time, to quantitatively illustrate the performance of the proposed method, we use the four indices proposed by Jing and Qiang [36,37] to analyze the control methods. Our method is denoted as VE-FTO-FTC. The four indices are as follows: (1) Integrated Absolute Error (IAE). It is used to measure the tracking performance of the control method.
IAE = ∫ |e(t)| dt. (2) Integrated Square Deviation of Error (ISDE). It is used to measure the fluctuation of the error: ISDE = ∫ (e(t) − ē) 2 dt, where ē is the mean of the error. (3) Integrated Absolute Control (IAU). It is used to measure the accumulated control effort: IAU = ∫ |u(t)| dt. (4) Integrated Square Deviation of Control (ISDU). It is used to measure the fluctuation degree of the control signal: ISDU = ∫ (u(t) − ū) 2 dt, where ū is the mean of the control input.
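The four indices can be computed from discrete samples as simple Riemann sums. Following the descriptions in Remark 7, ISDE and ISDU are taken here as integrated squared deviations from the mean; this is our reading of the (partially elided) definitions:

```python
import numpy as np

def performance_indices(e, u, dt):
    """Discrete-time approximations of the four comparison indices:
    IAE  = integral of |e|,        ISDE = integral of (e - mean(e))^2,
    IAU  = integral of |u|,        ISDU = integral of (u - mean(u))^2."""
    e = np.asarray(e, dtype=float)
    u = np.asarray(u, dtype=float)
    iae = np.sum(np.abs(e)) * dt
    isde = np.sum((e - e.mean()) ** 2) * dt
    iau = np.sum(np.abs(u)) * dt
    isdu = np.sum((u - u.mean()) ** 2) * dt
    return iae, isde, iau, isdu
```

A constant error signal gives zero ISDE (no fluctuation) but nonzero IAE, which matches the intended distinction between the two error indices.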
Remark 7 IAE represents the accumulation of the absolute error; the smaller this index, the better the tracking performance. ISDE represents the variance of the error and describes its fluctuation; the more robust the control effect, the smaller the value. Similarly, IAU describes the accumulated control effort, and ISDU is the variance of the control signal; the larger these values, the stronger the control intervention. Figures 9 and 10 show the image moment feature errors and the control inputs under the three control methods, respectively. Figure 10 shows the control effect of our proposed method. However, the superiority of our proposed control method cannot be fully demonstrated by the data plots alone. Therefore, we use the above four indices and obtain Tables 4 and 5.
As shown in Table 4, our control method has smaller IAE and ISDE, which means that our method can make the system reach the desired value as soon as possible.
In addition, Table 5 shows that our proposed control method has larger IAU and ISDU, which means that our method applies a stronger control effect.

Simulation experiment in ROS gazebo
The simulation parameters in ROS Gazebo are shown in Table 6, and Table 7 shows the control parameters. Since the ROS Gazebo environment is close to the actual environment, we suggest selecting relatively small control parameters. The simulation environment we use is the Prometheus framework [38], which is developed based on MAVROS. We built our own scenario in this environment, as shown in Fig. 9a. For the requirements of the visual servoing task, we designed the ground markers shown in Fig. 9b.
In ROS Gazebo, the sensor measurements of the QUAV are provided by the IMU module of Gazebo. It should be noted that the module can add Gaussian white noise to simulate the measurement noise. Table 8 shows the parameters of the sensors.
In the simulation process, we first make the QUAV fly to the initial position and maintain the initial attitude. The QUAV then hovers in this position and waits for the visual servoing command. After receiving the start command, the QUAV enters the IBVS task autonomously. At the same time, the system records the current data of the QUAV for subsequent analysis. The motion constraint equation is as follows: The target is always kept in the xOy plane during movement, i.e., v z = 0 m/s. At the same time, the target does not spin about any axis, that is, ω z = 0 rad/s. As shown in Fig. 12c, the proposed control method can also converge the image feature errors to the expected values in ROS Gazebo. Figure 12d shows that the designed finite-time observer estimates the linear velocity of the QUAV accurately. Figure 12g shows the spatial trajectory of the QUAV. It can be seen that the control effect is ideal. It should be noted that the attitude angle in Fig. 12b oscillates, but its range is within 2 degrees. Therefore, we consider the QUAV to be in hovering mode.
We also conducted comparative experiments in ROS Gazebo; the comparison methods are still RBFNN and VE-backstepping. Figure 13 shows the variation of the image moment feature errors with time under the three control schemes. We also use IAE and ISDE to analyze the performance of the controllers. Table 9 shows the results.
The above simulation experiments show that the proposed control method has better convergence, and the designed finite-time observer accurately estimates the linear velocity in the virtual image plane of the QUAV. However, parameters such as the system mass, inertia, and camera focal length are not easy to obtain in practice. Therefore, we need to discuss these problems further in the future.

Conclusion
In this paper, we propose a novel image-based visual servoing control scheme combining a target motion differentiator, a finite-time observer, and a finite-time controller, and we apply the scheme to the moving target tracking of a QUAV. We use the differentiator to estimate the target motion parameters and pass them to the controller to ensure that the system can track the target in real time. To compensate for the unknown camera depth information and the linear velocity of the QUAV, we design a finite-time observer. Finally, we design the global finite-time controller of the system using the simplified backstepping method. We have made assumptions about the visibility of the target, but in practice, the target may leave the field of view of the camera. At the same time, we have not conducted real-flight experiments. Therefore, we plan to consider the target visibility constraint and carry out real-flight verification in future research.