Adaptive fault-tolerant visual control of robot manipulators using an uncalibrated camera

In this paper, we propose an adaptive fault-tolerant visual control scheme for robotic manipulators with possible actuator failures in an uncalibrated environment. Most existing visual control approaches for robot systems do not take actuator failures into account, although such failures may prominently degrade the transient performance of the system in practice. To moderate the adverse effects of actuator failures on the system, a new adaptive algorithm is proposed to compensate for stuck-type failures occurring in actuators. Moreover, by proposing a decoupling method, the uncertain parameter model of the actuator failure is successfully separated from the dynamics model, whereas the two models are coupled in most existing results [e.g., (Rugthum and Tao in Robotica 34(7):1529–1552, 2016; Rugthum in: International conference on engineering, applied sciences, and technology (ICEAST), pp 1–4, 2018)]. The stability of the dynamic system and the convergence of the image error are proved by the Lyapunov analysis method. Finally, the effectiveness of the proposed control scheme is verified by comparing and analysing the tracking performance of a 3-DOF manipulator under different failure parameters.


Introduction
In recent years, visual servo control has attracted considerable attention for its various applications such as positioning control, tracking control, and target grasping. Many researchers have made tremendous efforts on handling the visual regulation problem for robot manipulators and produced many remarkable results [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17]. Among classic visual servoing methods, the position-based and image-based visual servoing methods utilize the current image and desired image to generate pose errors [7] and image errors [8], respectively. In the hybrid visual servoing solution, image features and rotational motion information extracted from the desired and current images are considered simultaneously [9]. In [10,11], by computing the interaction matrix, the photometric information of the entire image was utilized to achieve visual control instead of using conventional point features, which was proved to be more robust to occlusions and specular scenes. Moreover, in [12,13], the dense depth maps [12] and the pixel intensities [13] were utilized to enhance system robustness. To handle internal uncertainties, some uncalibrated visual servoing methods have been proposed in [14][15][16][17]. In [14], by designing a new depth-independent interaction matrix, the unknown camera parameters were successfully linearized and estimated online. Subsequently, Chien et al. [15] further presented a novel parameter update law to deal with the unknown time-varying depth information, and the decoupling of the Euclidean homography was not required. Furthermore, Zeyu et al. [16] proposed an uncalibrated visual servoing method based on projective homography, in which a novel task function based on the elements of the projective homography was devised to realize visual servoing. To make visual servoing more intelligent, Baoquan Li et al. 
[17] proposed a novel monocular visual servoing strategy, which can drive a wheeled mobile robot to the desired pose without a prerecorded desired image. Although research on visual servoing methods has made considerable achievements, the results mentioned above did not take into account the effect of actuator failures on control performance. In practice, high reliability and safety have become necessary requirements for the further application of robots. Thus, actuator failure compensation has become one of the active topics in the field of robotics.
Generally speaking, actuator failures can be roughly divided into two categories: partial loss of effectiveness (PLOE) and total loss of effectiveness (TLOE). In the PLOE case, the actuator output retains only part of the effectiveness of the input. Meanwhile, in the TLOE case, the output is stuck at an unknown value regardless of the value of the input. Over recent decades, some interesting fault-tolerant control approaches have been presented, including multiple model control in [18,19], fault detection and diagnosis design in [20,21], and sliding mode control in [22][23][24][25]. Besides these, adaptive control is also a promising technique [26][27][28][29][30][31][32][33]. A remarkable feature of adaptive control is that the unknown fault-related information and system uncertainties can be estimated online by adaptively adjusting the controller parameters. In [29], a direct adaptive state feedback control method was proposed to compensate for actuator failures in single-output or multiple-output linear systems. For nonlinear systems, in [31][32][33], a series of adaptive fault-tolerant controllers were designed by utilizing the backstepping-based iteration method. As a specific application of nonlinear systems, the study of robot fault-tolerant control has also achieved remarkable results in [34][35][36]. In [34], by combining multiple individual failure compensators with the adaptive control technique, a new controller was developed for a cooperative robotic system. Yet, one problem that should be noted is that, with a complete parameterization of the failure pattern, the number of estimated parameters increases exponentially as the number of failure patterns increases, which may severely affect the system's transient response. To remove this restriction, in [35], a dynamic controller structure was newly proposed to reduce the number of possible actuator failure patterns such that the adaptation of uncertain parameters can be more efficient. Furthermore, Kececi et al. 
[36] systematically investigated the fault-tolerant control problem of redundant manipulator systems. However, the above studies [34][35][36] are not applicable to robot systems based on visual control. In fact, up to now, the fault-tolerant visual control problem of robots in an uncalibrated environment remains open.
Motivated by the observations above, in this paper we present a new adaptive fault-tolerant visual controller for robot manipulators with total loss of effectiveness of actuators. One of the major difficulties is how to separate the control signal from the input signal and the actuator failure disturbance. To overcome this problem, a novel decoupling-based method is first proposed in this paper. Based on the Lyapunov analysis method, it is proved that the image tracking error asymptotically converges to an arbitrarily small region around the origin. In summary, the study has the following main contributions: • Compared with the previous control scheme in the literature, e.g., [6], the proposed one additionally contains quadratic feedback of the tracking errors and an adaptive actuator failure compensation mechanism. In particular, the nonlinear scaling term corresponding to the depth can be compensated online by the quadratic feedback. Moreover, the adaptive actuator failure compensation mechanism is newly designed to cancel large failure disturbance errors, and the development of this mechanism does not depend on any prior knowledge of the actuator failures. Therefore, for applications in an unknown environment, the proposed control scheme is more feasible than traditional ones. • The separation of the uncertain parameter model of the actuator failure from the dynamic model is achieved by a novel decoupling-based method, while the two models are coupled in most existing results (e.g., [34,35]). Subsequently, an adaptive mechanism is developed to compensate for the degrading effect caused by actuator failures. As a result, the proposed controller is more robust than existing ones. • Taking the nonlinear dynamics into account, a new Lyapunov positive-definite function is constructed, based on which it is proved that the image tracking error converges asymptotically to a small neighborhood of the origin, without stringent assumptions on the image position tracking errors.
The rest of the paper is organized as follows. In Sect. 2, we introduce the model of a visual servoing manipulator based on an uncalibrated camera and formulate the control problem. In Sect. 3, a decoupling-based method is proposed to separate the control signal from the failure signal. In Sect. 4, a novel adaptive controller and adaptive scheme are proposed to realize the control objective, and the stability of the system is proved by the Lyapunov analysis method. Finally, in Sect. 5, the proposed controller is applied to a simulation model of a 3-DOF manipulator to verify its stability and effectiveness.
Notations and definitions In this paper, a bold capital letter denotes a matrix and a lowercase letter denotes a scalar or vector; a scalar, vector, or matrix followed by (t) indicates that its value changes with time. Besides, I_{k×k} and 0_{m×n} represent the k × k identity matrix and the m × n zero matrix, respectively.

Problem statement and preliminaries
In this section, we formulate the studied control problem, and review some basic knowledge on visual servoing systems.

Robot and camera model description
In this paper, we consider the robot model and vision system shown in Fig. 1. The robot joints are driven by concurrent actuators, which are described in detail in the third section of this paper. The camera is fixed near the robot, and a feature point marked on the end-effector can be tracked by the vision system; its position on the image plane is denoted by y(t). To clearly describe the control problem, we make the following assumptions: 1. The internal parameters of the camera are not calibrated. 2. The external parameters of the camera, i.e., the transformation matrix between the robot and the camera, are unknown. 3. Unknown actuator failure disturbances may exist in the input mechanism of the robot system.
Control problem Under the assumptions mentioned above, design an adaptive fault-tolerant visual controller such that the projection position y(t) of the feature point on the image plane asymptotically approaches the desired projection position y_d.

Robot kinematics and visual model
As shown in Fig. 1, we set up three coordinate frames, namely the robot base frame, the end-effector frame, and the camera frame, to represent the relationship between the robot motion and the visual system. The joint angle of the manipulator is expressed by an n × 1 vector q(t), where n denotes the number of DOFs. Denote the homogeneous coordinates of the feature point with respect to the robot base frame by a 4 × 1 vector x(t). From the forward kinematics of the robot, we have ẋ(t) = J(q(t)) q̇(t), where J(q(t)) denotes the Jacobian matrix of the robot. The homogeneous transformation matrix between the camera frame and the robot base frame is an unknown constant since the camera is fixed. Thus, the coordinates of the feature point in the camera frame can be expressed as ᶜx(t) = T x(t), where ᶜx(t) denotes the coordinates of the feature point with respect to the camera frame and T denotes the homogeneous transformation matrix from the robot base frame to the camera frame. Note that the matrix T can be written as T = [R p; 0 1], where R is a 3 × 3 rotation matrix and p is a 3 × 1 translation vector. As a result, the matrix T represents the external parameters of the camera. Let y(t) = (u(t), v(t), 1)ᵀ, where u(t) and v(t) represent the pixel coordinates of the feature projection on the camera image plane.
Under the perspective projection model, the projection of the feature point satisfies y(t) = (1/ᶜz(q(t))) Π ᶜx(t), where Π is a 3 × 4 matrix determined by the camera's internal parameters, α and γ are scalar factors on the image plane, ϕ is the angle between the u and v axes, and (u₀, v₀) represents the position of the principal point of the camera. ᶜz(q(t)) is the depth of the feature point with respect to the camera frame. Denote by M the product of the internal-parameter matrix Π and the homogeneous transformation matrix T, that is, M = Π T, where rᵢᵀ represents the ith row vector of the rotation matrix R and (p_x, p_y, p_z) are the coordinates of p. The matrix M is called the perspective projection matrix and its dimension is 3 × 4. Note that M depends only on the internal and external parameters and is independent of the position of the feature point. Then (5) can be written as y(t) = (1/ᶜz(q(t))) M x(t). The depth of the feature point is given by ᶜz(q(t)) = m₃ᵀ x(t), where m₃ᵀ is the 3rd row vector of the perspective projection matrix M.
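As a quick numerical illustration of the projection model above, the following sketch projects a point through M = ΠT. All intrinsic and extrinsic values here are made up for illustration, not taken from the paper:

```python
import numpy as np

# Illustrative internal-parameter matrix Pi (alpha, gamma, beta, u0, v0 are
# made-up values) and extrinsic transform T; M = Pi @ T is the 3x4
# perspective projection matrix.
alpha, gamma, beta, u0, v0 = 800.0, 5.0, 700.0, 320.0, 240.0
Pi = np.array([[alpha, gamma, u0, 0.0],
               [0.0,   beta,  v0, 0.0],
               [0.0,   0.0,  1.0, 0.0]])

T = np.eye(4)
T[:3, 3] = [0.1, -0.2, 1.5]              # base -> camera translation p (R = I here)

M = Pi @ T                               # perspective projection matrix, 3 x 4

def project(M, x):
    """Project a homogeneous world point x (4-vector) to image coordinates.

    The depth is c_z = m3^T x (third row of M) and y = M x / c_z,
    so the third component of y is 1 by construction.
    """
    return M @ x / (M[2] @ x)

x = np.array([0.3, 0.4, 0.5, 1.0])       # homogeneous feature-point coordinates
y = project(M, x)                        # y = (480.5, 310.0, 1.0) for these values
```

Note that the same depth ᶜz = m₃ᵀx appears in the denominator regardless of whether the projection is computed via Π and T separately or via M directly.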
When the intrinsic parameters and extrinsic parameters are not calibrated, the perspective projection matrix M should be estimated by the coordinates of the feature point and their projections. In addition, the following important properties should be noted [37].

Property 1 Given a sufficient number of world coordinates of the feature point and their projections, the matrix M can be determined up to a scale factor.
The perspective projection model can be written as ᶜz(q(t)) y(t) = M x(t). As a result, if M is a solution of (8), then for any nonzero β, the matrix βM is also a solution of (8). To remove this scale ambiguity, one element of M is fixed, and the remaining 11 unknown elements are collected into the parameter vector of Eq. (11), where m_ij represents the unknown element of the ith row and jth column of the matrix M.

Property 2
The rank of the perspective projection matrix M is 3. The proof can be found in [14].
To map robot motion from joint space to image space, differentiating (8) yields the following velocity mapping relation: ᶜz(q(t)) ẏ(t) = A(y(t)) ẋ(t) = A(y(t)) J(q(t)) q̇(t), where A(y(t)) = M − y(t) m₃ᵀ is a 3 × 4 matrix. Since the depth factor 1/ᶜz(q(t)) does not appear in the matrix A(y(t)), we call it the depth-independent interaction matrix [14]. Furthermore, since the components of the matrix A(y(t)) are linear in those of the matrix M, it can be proved that A(y(t)) has rank 2. The proof can be found in [14].
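Taking A(y) = M − y m₃ᵀ (a reconstruction consistent with [14] and with the rank-2 and linearity claims above; the matrix values below are illustrative), the stated properties of the depth-independent interaction matrix can be checked numerically:

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.0, 1.0],      # illustrative perspective projection matrix
              [0.0, 1.0, 3.0, 2.0],
              [0.5, 0.0, 1.0, 2.0]])

x = np.array([0.3, 0.4, 0.5, 1.0])       # homogeneous feature point
cz = M[2] @ x                            # depth c_z = m3^T x
y = M @ x / cz                           # projection; y[2] == 1

# Depth-independent interaction matrix A(y) = M - y m3^T: no 1/c_z factor appears.
A = M - np.outer(y, M[2])

assert np.allclose(A[2], 0.0)            # third row vanishes since y[2] == 1 ...
assert np.linalg.matrix_rank(A) == 2     # ... so the rank is (generically) 2

# Velocity mapping c_z * ydot = A(y) * xdot, checked by finite differences.
dx = np.array([0.1, -0.2, 0.3, 0.0])     # homogeneous velocity (4th component 0)
eps = 1e-6
y2 = M @ (x + eps * dx) / (M[2] @ (x + eps * dx))
ydot = (y2 - y) / eps
assert np.allclose(cz * ydot, A @ dx, atol=1e-3)
```

The entries of A are affine in those of M for fixed y, which is the linearity used by Property 3 below.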

Property 3
For any homogeneous vector s, the product A(y(t))s can be linearized as A(y(t)) s = W(s, y(t)) θ, where W(s, y(t)) is a regression matrix independent of the camera parameters and θ is the parameter vector consisting of the components of M.
To estimate the perspective projection matrix M, we need to select at least five different positions of the feature point. Consider five positions x(t_j) of the feature point and the corresponding projections y(t_j) at time instants t_j (j = 1, 2, 3, 4, 5). Define the following time-varying error vector: e(t_j, t) = (m̂₃ᵀ(t) x(t_j)) y(t_j) − M̂(t) x(t_j), where M̂ denotes the estimate of the matrix M. Note that e(t_j, t) ∈ R^{3×1} is a vector whose third component is always zero. From Eqs. (8), (9), the true parameters satisfy (m₃ᵀ x(t_j)) y(t_j) − M x(t_j) = 0. Then, from Property 3, Eq. (15) can be rewritten as e(t_j, t) = W(x(t_j), y(t_j)) θ̃(t), where θ̃ = θ̂ − θ represents the estimation error and θ̂ is the vector of estimated parameters corresponding to the estimated matrix M̂. The matrix W(x(t_j), y(t_j)) ∈ R^{3×11} is independent of the unknown parameters. To simplify the notation, let W_{t_j} = W(x(t_j), y(t_j)). Proposition 1 Assume that the estimated parameter vector θ̂ leads to e(t_j, t) = 0 at five different positions x(t_j) (j = 1, 2, 3, 4, 5) of the feature point. If no three of the five images y(t_j) corresponding to these positions are collinear, the rank of the estimated projection matrix M̂ is 3. The proof can be found in [14].
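Under the reconstruction e(t_j, t) = (m̂₃ᵀ x(t_j)) y(t_j) − M̂ x(t_j) (an assumption following [14]; all numeric values illustrative), e vanishes when M̂ = M, and its third component is identically zero because y(t_j) has a 1 in its third entry:

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.0, 1.0],      # "true" projection matrix (illustrative)
              [0.0, 1.0, 3.0, 2.0],
              [0.5, 0.0, 1.0, 2.0]])

def observe(M, x):
    return M @ x / (M[2] @ x)            # projection y, with y[2] == 1

def error_vec(M_hat, x, y):
    # e(t_j, t) = (m3_hat^T x) y - M_hat x : linear in the entries of M_hat - M
    return (M_hat[2] @ x) * y - M_hat @ x

xs = [np.array([0.3, 0.4, 0.5, 1.0]),
      np.array([-0.2, 0.1, 0.6, 1.0]),
      np.array([0.5, -0.3, 0.2, 1.0])]
ys = [observe(M, x) for x in xs]

# A perfect estimate gives zero error at every position.
for x, y in zip(xs, ys):
    assert np.allclose(error_vec(M, x, y), 0.0)

# For any estimate, the third component of e is identically zero,
# because y[2] == 1 makes the third rows cancel.
M_hat = M + 0.3
for x, y in zip(xs, ys):
    assert abs(error_vec(M_hat, x, y)[2]) < 1e-12
```

Since e is linear in M̂ − M, it is linear in θ̃, which is exactly the relation e(t_j, t) = W_{t_j} θ̃(t) used by the adaptive laws later.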
Remark 1 It should be noted that the estimated projection matrix M̂ does not necessarily satisfy Eq. (10), so we can define the time-varying error vector in Eq. (15) directly. When e(t_j, t) = 0, it can be seen from Eq. (20) and Property 1 that the estimated matrix M̂ is determined up to a scale factor of the matrix M.
Visual control problem Given the desired image position y_d and the real-time perspective projection y(t) of the feature point on the image plane, design a proper dynamic torque τ such that the image tracking error y(t) − y_d asymptotically converges to zero even in the presence of actuator failures: lim_{t→∞} (y(t) − y_d) = 0.
3 Dynamics of manipulator with unknown actuator failure

Conventional manipulator dynamics
It is well known that the conventional dynamics of a manipulator can be described as H(q(t)) q̈(t) + (½ Ḣ(q(t)) + C(q(t), q̇(t))) q̇(t) + g(q(t)) = τ, where H(q(t)) ∈ R^{n×n} is a positive definite inertia matrix, C(q(t), q̇(t)) is a skew-symmetric matrix representing the Coriolis and centrifugal forces, g(q(t)) denotes gravity, and τ ∈ R^{n×1} denotes the joint input of the manipulator.

Property 4 ([2]) For the inertia matrix H(q(t)), there exist positive constants m₁ and m₂ such that, for any vector s, m₁‖s‖² ≤ sᵀ H(q(t)) s ≤ m₂‖s‖².

Property 5 ([2])
For any homogeneous vector s, the following equation holds: sᵀ C(q(t), q̇(t)) s = 0.
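Property 5 reflects the familiar fact that Ḣ − 2C_conv is skew-symmetric for the conventional Coriolis matrix C_conv, so that in the paper's grouping C = C_conv − ½Ḣ is itself skew-symmetric. A numerical spot check for a two-link planar arm (link parameters are illustrative, not those of the simulated 3-DOF robot):

```python
import numpy as np

# Illustrative two-link planar arm (parameters are not those of the paper's robot).
m1, m2, l1, lc1, lc2, I1, I2 = 1.0, 1.0, 1.0, 0.5, 0.5, 0.1, 0.1
a = I1 + I2 + m1 * lc1**2 + m2 * (l1**2 + lc2**2)
b = m2 * l1 * lc2
d = I2 + m2 * lc2**2

def H(q):                                  # inertia matrix
    c2 = np.cos(q[1])
    return np.array([[a + 2 * b * c2, d + b * c2],
                     [d + b * c2,     d        ]])

def C_conv(q, dq):                         # conventional Coriolis/centrifugal matrix
    s2 = np.sin(q[1])
    return np.array([[-b * s2 * dq[1], -b * s2 * (dq[0] + dq[1])],
                     [ b * s2 * dq[0],  0.0                     ]])

q, dq = np.array([0.3, 0.7]), np.array([0.4, -0.2])

eps = 1e-7                                 # Hdot along the motion, by finite differences
Hdot = (H(q + eps * dq) - H(q)) / eps

# In the grouping H qddot + (1/2 Hdot + C) qdot + g = tau,
# C = C_conv - 1/2 Hdot is skew-symmetric, hence s^T C s = 0.
C = C_conv(q, dq) - 0.5 * Hdot
assert np.allclose(C, -C.T, atol=1e-5)
s = np.array([1.3, -0.8])
assert abs(s @ C @ s) < 1e-5
```

This identity is what allows the Coriolis term to drop out of the Lyapunov derivative in the stability analysis of Sect. 4.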

Dynamics of manipulator with unknown actuator failure
In the case of concurrent actuation, m_i actuators are concurrently connected at the ith joint, i = 1, 2, ..., n, and the number of actuators m_i may differ from joint to joint. Therefore, the torque input at the ith joint is given by τ_i = Σ_{j=1}^{m_i} τ_ij, where τ_ij is the torque applied by the jth actuator of the ith joint. The state of an actuator with a TLOE-type failure can be described by σ_ij, which denotes the failure state of the jth actuator of the ith joint, j = 1, 2, ..., m_i (σ_ij = 1 if the actuator has failed and σ_ij = 0 otherwise). A widely used actuator failure model [36] is τ_ij(t) = τ̄_ij for t ≥ t_ij, where τ̄_ij represents the unknown constant torque value generated by the failed actuator and t_ij is the unknown failure time; this model is used in the design and analysis of the adaptive controller. A basic existence assumption of the adaptive compensation scheme for unknown system and failure parameters is as follows: • At most m_i − 1 actuators at the ith joint may fail simultaneously, and without knowledge of the failure parameters, the remaining actuators can still adaptively adjust their controls to achieve the desired objective.
In the absence of failure, the torque input of the jth actuator of the ith joint of the manipulator can be written as τ_ij = v_ij, where v_ij represents the control input. A meaningful actuation design is that all actuators of a joint share the same control signal, i.e., the equal actuation scheme v_ij = v_i. Combining the failure model with the joint input of the robot in (21), we have τ_i = Σ_{j=1}^{m_i} [σ_ij τ̄_ij + (1 − σ_ij) v_i]. Introduce the parameter matrix B and the vector h accordingly. From Eqs. (22), (23), we express the control signal τ(t) as τ(t) = B v(t) + h(t), where B ∈ R^{n×n} is a positive gain matrix, v is the n × 1 joint input signal to be designed, and h(t) denotes the unknown constant torque generated by the failed actuators, which is an n × 1 constant vector.
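A sketch of the concurrent-actuation torque under TLOE (stuck-type) failures, assuming the reconstruction τ_i = Σ_j [σ_ij τ̄_ij + (1 − σ_ij) v_i] with the equal-actuation scheme (function and variable names are illustrative):

```python
import numpy as np

def joint_torque(v_i, sigma, tau_bar):
    """Total torque at one joint with concurrent actuators.

    v_i     : common control signal (equal-actuation scheme)
    sigma   : 0/1 flags, sigma[j] == 1 if actuator j is stuck (TLOE)
    tau_bar : unknown stuck torque values tau_bar[j]
    """
    sigma = np.asarray(sigma, dtype=float)
    tau_bar = np.asarray(tau_bar, dtype=float)
    return float(np.sum(sigma * tau_bar + (1.0 - sigma) * v_i))

# Healthy joint with 2 actuators: torque is 2 * v_i  (B_ii = 2, h_i = 0).
tau_healthy = joint_torque(1.5, [0, 0], [0.0, 0.0])   # 3.0

# One actuator stuck at tau_bar = 0.7: torque = v_i + 0.7  (B_ii = 1, h_i = 0.7).
tau_failed = joint_torque(1.5, [0, 1], [0.0, 0.7])    # 2.2
```

In this reading, B_ii counts the remaining healthy actuators at joint i and h_i collects the stuck torques, which matches the form τ(t) = B v(t) + h(t) above.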
To simplify the design of the control input, an unknown parameter vector λ is introduced, where λ satisfies B diag(λ) = I_{n×n}, i.e., λ_i = 1/B_ii. Let λ̂ be the estimate of λ, so that λ̃_i = λ_i − λ̂_i is the estimation error of λ_i. Moreover, a new vector v̄(t) will replace v(t) as the control input to be designed, with v(t) = diag(λ̂(t)) v̄(t). Substituting (23), (25) into (24), the control signal v̄(t) is separated as follows: τ(t) = B diag(λ̂) v̄(t) + h(t) = v̄(t) − B diag(λ̃) v̄(t) + h(t). Therefore, the control input with actuator failure can be rewritten as τ(t) = v̄(t) + r(t), where r(t) = −B diag(λ̃(t)) v̄(t) + h(t) can be regarded as a system input disturbance. Substituting (26) into (18), the manipulator dynamics equation takes the following form:

H(q(t)) q̈(t) + (½ Ḣ(q(t)) + C(q(t), q̇(t))) q̇(t) + g(q(t)) = v̄(t) + r(t).   (27)
Remark 2 According to (27), the control inputs of the manipulator system with unknown actuator failures are simplified into v̄(t), h(t), and r(t). The uncertain parameter B is not required to lie within a known interval, and the term h(t) is bounded and unknown.

Adaptive image-base visual servoing controller for manipulator with actuator failure compensation
In this section, to drive the manipulator such that the feature point asymptotically approaches the desired position, a novel visual servo controller with actuator failure compensation is designed. To handle the parameter uncertainties, adaptive schemes are proposed to estimate the unknown parameters online. Moreover, the stability of the proposed controller is theoretically proved by the Lyapunov analysis method.

Adaptive fault-tolerant visual controller
The desired projection position of the feature point is defined as y_d, which is a known constant vector. Define the image error vector Δy(t) = y(t) − y_d, where y(t) denotes the current projection position and Δy(t) is a 3 × 1 vector whose third component is always zero. Figure 2 presents the block diagram of the image-based visual servoing closed-loop control with actuator failure. In the control process, the embedded sensors first collect the state variables of the robot, such as the joint velocity, joint angle, and image coordinates. Then, the adaptive algorithm updates the unknown parameters according to the values of the collected state variables, so that the controller constantly updates its output. Next, the controller output performs the estimated actuator failure compensation and finally drives the projection of the feature point (the output of the controlled object) to track the desired time-varying trajectory.
To ensure the control performance of the system, we propose the controller (29). The first term counteracts the gravity of the manipulator. The second term is a velocity feedback in the joint space, where K₁ ∈ R^{n×n} is a positive definite velocity gain matrix. The third term is the image error feedback of the system, where Â(t) is the estimate of the depth-independent interaction matrix A(t), m̂₃(t) is the estimate of the 3rd row of the perspective projection matrix, and B ∈ R^{3×3} is a positive definite position gain matrix. The last term is designed to compensate for the actuator failure, where ĥ is the estimate of the unknown constant torque h. Substituting (29) into (27), the closed-loop dynamics of the system are organized as follows

where h̃ = ĥ − h represents the estimation error of h. From Property 3, it can be concluded that the parameter-dependent terms in the closed-loop dynamics can be linearized as Y(q(t), y(t)) θ̃(t), where θ̃ = θ̂ − θ represents the estimation error of the unknown parameter θ, and Y(q(t), y(t)) is a regression matrix independent of the unknown parameters.
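The exact form of controller (29) is not fully recoverable from the text; the following is only a structural sketch of the four described terms (gravity compensation, velocity feedback, image-error feedback, failure compensation), with the quadratic depth-compensation term omitted and all names (vbar, K1, K2) illustrative:

```python
import numpy as np

def vbar(g, dq, J, A_hat, dy, h_hat, K1, K2):
    """Sketch of a four-term fault-tolerant visual control law.

    term 1: g           gravity compensation
    term 2: -K1 @ dq    joint-space velocity feedback
    term 3: image-error feedback mapped to joint space through the
            robot Jacobian J (4 x n) and the estimated depth-independent
            interaction matrix A_hat (3 x 4)
    term 4: -h_hat      compensation of the estimated stuck torque
    """
    return g - K1 @ dq - J.T @ A_hat.T @ (K2 @ dy) - h_hat

n = 3                                    # number of joints
g = np.array([0.5, 1.2, 0.1])            # gravity torque g(q) (illustrative)
dq = np.zeros(n)                         # manipulator at rest
J = 0.1 * np.ones((4, n))                # placeholder Jacobian
A_hat = np.zeros((3, 4))                 # placeholder interaction-matrix estimate
dy = np.zeros(3)                         # zero image error
h_hat = np.zeros(n)                      # no estimated failure torque

# With zero errors and zero failure estimate, only gravity compensation remains.
out = vbar(g, dq, J, A_hat, dy, h_hat, np.eye(n), np.eye(3))
```

Dimensionally, Jᵀ (n × 4) composed with Âᵀ (4 × 3) maps the 3 × 1 image error into joint torques, which is how the image feedback reaches the actuators without any depth factor.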

Remark 3
It should be noted that the depth information 1/ᶜz(q(t)) does not appear in the controller. The quadratic term in Δy(t) in Eq. (29) compensates for the effect caused by the removal of the depth information. By utilizing the depth-independent interaction matrix and the actuator failure compensation technique, the controller is more robust than other existing ones [14][15][16][17].

Estimation of the unknown parameters
In this subsection, adaptive algorithms are proposed to estimate the unknown parameters online. To illustrate the superiority of our method, some comparative simulations on control performance against the Slotine-Li algorithm [14] have been conducted.
As shown in Fig. 3, we select m positions on the trajectory of the feature point, such that m equations like Eq. (17) can be obtained. The update law of the unknown parameter θ is given in (32), where Γ ∈ R^{11×11} and K₃ ∈ R^{3×3} are positive definite diagonal gain matrices. Note that the first term is designed by the Slotine-Li algorithm and the second term performs online minimization of the errors based on the gradient descent algorithm. Furthermore, for actuator failure compensation, the adaptive laws (33), (34) of the unknown parameters λ̂ and ĥ can be designed by the Slotine-Li algorithm, where P ∈ R^{n×n} and Q ∈ R^{n×n} are positive definite diagonal gain matrices.
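The gradient-descent part of the update law can be sketched in discrete time: since e(t_j) = W_{t_j} θ̃ is linear in the estimation error, the step θ̂ ← θ̂ − η Σ_j W_{t_j}ᵀ e(t_j) shrinks Σ_j ‖e(t_j)‖². The regression matrices and gain below are synthetic and well-conditioned for illustration, and the Slotine-Li term of (32) is omitted:

```python
import numpy as np

# Synthetic, well-conditioned regression matrices W_{t_j} (3 x 11 each), standing
# in for W(x(t_j), y(t_j)); the parameter vector has 11 entries, matching the
# 11 unknown elements of M.
Wbig = np.vstack([np.eye(11), 0.5 * np.eye(11)[:4]])
Ws = np.split(Wbig, 5)

theta = np.linspace(1.0, 2.0, 11)        # "true" parameters (illustrative)
theta_hat = np.zeros(11)                 # initial estimate
eta = 0.5                                # gradient gain (plays the role of Gamma, K3)

for _ in range(200):
    # e(t_j) = W_j @ (theta_hat - theta) is linear in the estimation error,
    # so the gradient of sum_j ||e_j||^2 is 2 * sum_j W_j^T e_j.
    grad = sum(W.T @ (W @ (theta_hat - theta)) for W in Ws)
    theta_hat -= eta * grad              # online minimization of the errors
```

With sufficiently rich (full-rank) regressors, θ̂ converges to θ; in the paper the continuous-time version is combined with the Slotine-Li term and runs alongside the controller.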

Remark 4
The purpose of introducing the last term on the right of Eq. (32) is to ensure the full rank of the estimated perspective projection matrix M̂ and to guarantee the asymptotic stability of the system. According to Proposition 1, the feature point positions must be selected carefully so that the matrix M̂ is of full rank, i.e., no three projections of the five feature point positions should be collinear.

Stability analysis
Now we prove that when the developed adaptive fault-tolerant controller is applied to the plant (27), closed-loop signal boundedness and asymptotic image tracking are guaranteed.

Theorem 1
The controller (29), updated by the adaptive laws (32)–(34), when applied to the robot model in Fig. 1, guarantees closed-loop signal boundedness as well as asymptotic output tracking, lim_{t→∞} Δy(t) = 0, despite the parameter uncertainties in the system in addition to the unknown failure indices, failure times, and failure values.
Proof Assume that one or more actuators fail at the time instants t_k, k = 0, 1, 2, . . . , N, with t₀ = 0 and t_N = ∞, so that (t_k, t_{k+1}) are the time intervals on which the actuator failure pattern σ is fixed. By using a positive definite function on the interval (t_k, t_{k+1}), over which the actuator failure pattern σ is unchanged, the adaptive mechanism can be designed. Multiplying the closed-loop dynamics (30) on the left by q̇ᵀ(t) yields an expression for the energy balance of the system, and a further relation is derived from Eqs. (9) and (12). Multiplying the adaptive law (32) on the left by θ̃ᵀ(t), differentiating the function in (35), and using (1) and (9), then combining (33)–(37) with the adaptive rules, we arrive at (41). From (41), on each time interval [t_k, t_{k+1}], V is a non-increasing function. Thus, V is bounded and all closed-loop signals in V are bounded. Then, the joint acceleration q̈(t) is bounded from the closed-loop dynamics (30), and the conclusion follows from Barbalat's lemma. When W_{t_j} θ̃(t) = 0, the rank of the matrix M̂(t) is 3. From the closed-loop dynamics (30), the following equation holds
From (43), it is obvious that D(θ̂(t), y(t)) B Δy(t) = 0 when the rank of the Jacobian matrix J(q(t)) is greater than or equal to 3. Note that, since the rank of M̂(t) is 3, it can be proved that the rank of D is 2, and then Δy(t) = 0 is obtained. Consequently, the image error converges to zero as time approaches infinity, which completes the proof of Theorem 1.
Remark 5 It should be pointed out that the convergence of the image errors does not mean that the end-effector converges to the desired position. This is because all feature points on a ray in 3D space share the same image coordinates on the 2D image plane, which may result in no change of the projection position of the feature point while the end-effector moves. Therefore, it is necessary to select multiple feature points to achieve real-time visual control.

Simulation study
To demonstrate the effectiveness of the proposed control scheme and parameter updating laws, in this simulation study we implement the adaptive failure compensation scheme on a 3-DOF manipulator. Specifically, five positions of the feature point and the corresponding projections are chosen as follows, where x(t_j) (j = 1, 2, . . . , 5) represent the five world coordinates of the feature point and y(t_j) (j = 1, 2, . . . , 5) denote its projections. Furthermore, the fixed element is p_z = 0.9899.

Simulation conditions
In this simulation, a feature point is selected at the origin of the end-effector frame, and the simulation model of the robot is shown in Fig. 4. The physical parameters of the robot manipulator are listed in Table 1, and the camera intrinsic parameters are listed in Table 2. Moreover, we assume the initial and desired coordinates of the feature point on the image plane are y₀ = (860, 602, 1)ᵀ and y_d = (692, 196, 1)ᵀ, respectively. The gain parameters in the controller are initialized as B = 0.015 I_{2×2} and K₁ = diag{10, 380, 80}, and the gain parameters in the adaptive laws are set as follows: K₃ = 0.00001 I_{2×2}, Q = diag{10, 1, 0.03}, P = diag{500, 500, 500}.

Simulation results
In this simulation study, it is assumed that each joint of the manipulator is jointly driven by two actuators. We consider the following actuator failure mode for the robot: within the simulation time T = 500 s, the failure of one actuator in the third joint occurs every T* = 50 s. The resulting trajectory of the feature point is shown in Fig. 6, where the red and green pentagrams represent the initial and final positions of the feature point, respectively, while the evolutions of the image tracking error, joint velocity, and control inputs are displayed in Figs. 7, 8 and 9, respectively. It is seen that with the proposed scheme the image tracking errors are driven into a small residual set around zero. Moreover, the boundedness of the closed-loop signals is guaranteed.
Comparing Fig. 7 with Fig. 10, the results show that the value of the adaptive gain affects the convergence speed of the image error: the larger the adaptive gain, the faster the convergence of the image error, and the accuracy is also improved.

Remark 6
It should be pointed out that the selection of the experimental parameters is very important for improving the performance of the system, especially the choices of the gain parameters in the controller and the adaptive parameters in Eqs. (29) and (34). In this experiment, the adaptive gain parameters are mainly obtained by the control variable method. Some guidelines are summarized as follows: (1) the matrices designed in the updating laws (32)–(34) are required to be positive definite diagonal gain matrices; (2) if the elements of the matrices Γ and Q are set smaller while the matrix P is chosen larger, then the tracking error can be made smaller.

Remark 7
In this experiment, the physical parameters of the robot in Table 1 are selected based on the DH parameters of the Puma560 manipulator. Note that the physical parameters of the robot do not affect the results of this experiment, since the uncertainties of the robot can be effectively compensated by the proposed adaptation mechanism.

Conclusion and future work
In this paper, we have proposed an adaptive visual tracking control method for manipulators. The proposed controller utilizes the depth-independent Jacobian matrix so that the unknown camera parameters appear linearly in the closed-loop dynamics, and a novel adaptive algorithm is developed to estimate these unknown camera parameters. The uncertainty caused by actuator failures has also been taken into account and handled by the adaptation laws, upon which an adaptive visual servoing controller with actuator failure compensation is established to drive the movement of the feature point. The asymptotic convergence of the image tracking error to zero is proved by the Lyapunov method. Furthermore, the effectiveness and superiority of the proposed control scheme have been illustrated by simulation. However, achieving time-varying fault-tolerant control of robot manipulators on a physical platform is still a challenging problem, which may be regarded as a possible future research topic.

Data availability
The datasets generated during the current study are available from the corresponding author on reasonable request.

Conflict of interest The authors declare no conflict of interest.