Advanced Optimal Control-Based Design of a Gough-Stewart Platform

Abstract: Designing a robot with the best possible accuracy is a long-standing goal in the robotics community. In order to create a Gough-Stewart platform with guaranteed accuracy performance for a dedicated controller, this paper describes a novel optimal design methodology: control-based design. This method takes the positioning accuracy of the controller into account in the design process in order to obtain the optimal geometric parameters of the robot. Three types of visual servoing controllers are applied to control the motions of the Gough-Stewart platform: leg-direction-based visual servoing, line-based visual servoing and image-moment visual servoing. For each controller, the positioning error model accounting for the camera observation error, together with the controller singularities, is analyzed. Next, optimization problems are formulated in order to obtain the optimal geometric parameters of the robot and the placement of the camera for each type of controller. We then perform co-simulations on the three optimized Gough-Stewart platforms in order to assess the positioning accuracy and the robustness with respect to manufacturing errors. It turns out that the control-based design methodology yields both the optimal design parameters of the robot and the best performance of the pair {robot + dedicated controller}.


Introduction
Parallel robots are becoming more and more attractive due to their better performance compared with classical serial robots in terms of speed, acceleration, payload, stiffness and accuracy [1]. Nevertheless, the control of parallel robots is traditionally troublesome because of their highly nonlinear input/output relations.
It can be found in [2] that a large number of works have focused on the control of parallel robots. Generally, the only way to ensure the high accuracy of a parallel robot with a model-based controller is to model the robot in as much detail as possible. However, in reality, due to factors such as manufacturing and assembly errors, even detailed models still suffer from inaccuracy. Therefore, more and more research focuses on alternative controllers that sidestep the complex kinematic architecture of the robot and reach a better positioning accuracy than classical model-based controllers. Sensor-based control is an efficient approach that estimates the pose of the end-effector with external sensors [3,4]. Visual servoing is a sensor-based control technique that takes one or several cameras as external sensors and closes the control loop with the vision information obtained from the camera. With the development of image processing and image acquisition technology, a large number of works have focused on controlling parallel robots with visual servoing [4][5][6][7][8]. It has been proven that the end-effector pose can be estimated effectively through direct observation by vision [9,10] or indirect observation [11,12]. In addition, the choices of image features applied in visual servoing of parallel robots are numerous, such as image moments [13,14] when the camera can observe the end-effector directly, or the observation of the robot legs when directly observing the end-effector is difficult (as in machine tools) [6].
When a vision-based controller is applied to control a parallel robot, the positioning accuracy is one of the most important internal performances, and the positioning error comes from the error in the observation of the image features [15]. The types and number of cameras used, together with the kind of image features, all influence the observation error [15]. In addition, the geometric parameters of the robot and the camera position also affect the positioning accuracy, since they change the interaction models [16,17]. One problem that should be mentioned is that the mapping between the image feature space and the Cartesian space is not free of singularities [18]. The existence of singularities in the interaction model has a great influence on the accuracy performance of the parallel robot [19]. In conclusion, in order to ensure the best accuracy performance for the pair {robot + controller} throughout its workspace, the robot geometric parameters and the camera position should be optimized in advance.
The optimal design of a robot aims at finding the geometric parameters that minimize a given objective under constraints. As shown in [20], when visual servoing is applied to the control of parallel robots, the controller singularities and the internal performance (especially the positioning accuracy) should be taken into account in advance. However, the visual servoing controller has never been considered in the optimal design process before. Therefore, in this work, a "control-based design" methodology considering the controller performance is developed: the positioning accuracy, together with the singularities of the corresponding controllers, is taken into account during the design process in order to obtain the optimal geometric parameters of a Gough-Stewart platform for a dedicated controller with the best accuracy performance, while avoiding the instability issues that can appear during control. Three types of vision-based controllers are considered: • Leg-direction-based visual servoing (LegBVS) [14], • Line-based visual servoing (LineBVS) [21], • Image-moment-based visual servoing (IMVS) of a feature mounted on the platform [13].
To the best of our knowledge, this is the first time that a spatial 6-DOF parallel robot is designed with the optimal control-based design methodology. This paper is organized as follows: Section 2 presents the robot architecture, the design requirements and the specifications of the visual servoing controllers. The concepts of visual servoing applied to the control of the Gough-Stewart platform are reviewed in Section 3. In Section 4, the controller accuracy performance (the error model relating the camera observation error to the positioning error of the robot) and the controller singularities, which lead to instability of the robot, are discussed. The optimal design procedure based on the visual servoing controllers is introduced and solved in Section 5. Then, in Section 6, the co-simulations between Simulink and ADAMS are described together with the result analysis. Finally, conclusions are drawn in Section 7.

Robot Architecture and Specification
In this paper, we optimize the geometry of the Gough-Stewart platform controlled by visual servoing in order to obtain an excellent performance of the pair {robot + controller}. The Gough-Stewart platform, also called hexapod, is a parallel robot with 6 degrees of freedom (DOF): its moving platform translates along and rotates around the three axes of space with respect to the fixed base [22]. The Gough-Stewart platform designed here is a 6-UPS robot (Fig. 1(a)). The moving platform is linked to the fixed base by six individual chains BiPi (i = 1, …, 6). Each chain is connected to the base by a U joint located at Bi, attached to the end-effector by an S joint located at Pi, and a prismatic actuator changes the length of the link BiPi (Fig. 1(b)). The base and the moving platform of the considered Gough-Stewart platform are symmetric hexagons (Fig. 1(c)). The radius of the circumcircle of the base is rb and the radius of the circumcircle of the moving platform is ra; the angular positions of the joints are parameterized by the angles α0, α1 and α2 (Fig. 1(c)). The complete workspace of the Gough-Stewart platform is a six-dimensional space: we must consider both the 3D location and the orientation of the moving platform. In [23], the tilt-and-torsion (T&T) angles were proposed in order to represent the orientation workspace of the Gough-Stewart platform; it was proven that the T&T angles take full advantage of a mechanism's symmetry. The rotation matrix of the T&T angles is defined as follows (see details in [23]):

R(ϕ, θ, σ) = Rz(ϕ) Ry(θ) Rz(ψ) =
[ cϕcθcψ − sϕsψ    −cϕcθsψ − sϕcψ    cϕsθ ]
[ sϕcθcψ + cϕsψ    −sϕcθsψ + cϕcψ    sϕsθ ]   (1)
[ −sθcψ             sθsψ              cθ   ]

where cφ = cos φ, sφ = sin φ and ψ = σ − ϕ. In the T&T angles, ϕ is called the azimuth, θ the tilt and σ the torsion.
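The T&T rotation above is just a product of elementary rotations. A minimal sketch (assuming the convention R = Rz(ϕ)·Ry(θ)·Rz(σ − ϕ) from [23]; the function names are ours):

```python
import numpy as np

def rot_z(a):
    """Elementary rotation about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    """Elementary rotation about the y-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def tt_rotation(phi, theta, sigma):
    """Tilt-and-torsion rotation matrix: R = Rz(phi) @ Ry(theta) @ Rz(sigma - phi)."""
    return rot_z(phi) @ rot_y(theta) @ rot_z(sigma - phi)
```

A useful sanity check of the convention: with zero tilt (θ = 0), the azimuth cancels and the result reduces to a pure torsion rotation Rz(σ).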
Based on the T&T angles, a 3D workspace subset named the maximum tilt workspace was proposed in [24]. This workspace is defined as the set of positions that the center of the moving platform can attain with any direction of its z-axis making a tilt angle lower than a given value. Therefore, the orientation workspace of the Gough-Stewart platform can be kept symmetrical. The configuration of the Gough-Stewart platform can then be defined by the vector x = [xt; yt; zt; ϕ; θ; σ], where [xt; yt; zt] represents the 3D location of the center of the moving platform and [ϕ, θ, σ] are the T&T angles. The ranges of the azimuth, tilt and torsion are defined respectively as ϕ ∈ (−π; π], θ ∈ [0; π/12] and σ ∈ [0; π/12] in this case. The requirements that must be achieved by the Gough-Stewart platform are given in Tab. 1; they were fixed after discussion with some of our industrial partners. First of all, the maximum tilt workspace of the Gough-Stewart platform should cover a cube of side length l0 = 100 mm with the range of T&T angles ϕ ∈ (−π; π], θ ∈ [0; π/12] and σ ∈ [0; π/12]. In this workspace, several performances should be guaranteed; this cube will thus be called the regular dexterous workspace (RDW) of the robot.
Additionally, for practical reasons (space saving), the footprint of the robot must be as small as possible.
The optimized Gough-Stewart platform ought to satisfy the following geometric and kinematic constraints throughout the RDW: • The RDW should be free of singularities (both of the Gough-Stewart platform and of the visual servoing controllers applied in this case), • The robot positioning error should be lower than 1 mm, • The robot orientation error should be lower than 0.01 rad, • Some distances are constrained in order to avoid collisions or impractical designs: the distance rb between the origin of the base frame O and the U joint positions Bi, the distance ra between the origin of the platform frame Pc and the S joints Pi, the radius of the cross-section of the prismatic actuators BiPi, denoted as R′, and, finally, the camera frame location (Fig. 1(b)). These constraints will be further detailed in Section 5.
In order to reach the desired 1 mm positioning accuracy and 0.01 rad orientation accuracy specified in Tab. 1, we propose to apply visual servoing. A single camera is chosen as the external sensor and mounted on the ground in order to control the motions of the Gough-Stewart platform. The resolution of the camera is 1920 × 1200 pixels and its focal length is 10 mm. The most direct way is to observe image features attached to the moving platform with the camera. However, in some cases, such as milling operations, it is difficult to observe the end-effector. Alternative features proposed in [14] are the cylindrical legs of the robot's prismatic actuators. Therefore, three types of classical visual servoing approaches (LegBVS [14], LineBVS [21] and IMVS [13]) will be tested.
The first two controllers take the image features extracted from the observation of the robot legs, while the last one observes the platform directly. The optimal design parameters of the Gough-Stewart platform will be found for each type of controller and, based on the analysis of the obtained results, the best pair {robot + controller} will be determined.
In addition, several comments should be made here. First, no dynamic criterion appears in these specifications. Indeed, for visual servoing, high-speed motion is not the purpose, except in a few specific scenarios [25,26]. Therefore, only the geometric and kinematic performance of the robot will be considered. Besides, a repeatability of 1 mm and an orientation accuracy of 0.01 rad could also be obtained with a standard encoder-based controller. However, this paper does not aim to prove that visual servoing achieves a better accuracy than standard encoder-based control. It aims to prove that, when controlling a robot with visual servoing (or any other type of sensor-based controller), it is essential to optimize the robot and the controller at the same time during the design process in order to guarantee the accuracy.
In the next section, some brief recalls on visual servoing are given before presenting the optimization problem formulation.

Recalls on Visual Servoing
In this section, a brief review of visual servoing is presented. Then, we provide some recalls on the three considered approaches [13,14,21].

Basics of visual servoing
Visual servoing takes a camera as the external sensor and applies computer vision data in the servo loop to control the robot [3]. With the help of the so-called interaction matrix L [4], the mapping from the time derivative ṡ of the image features s to the spatial relative camera-object kinematic screw τ is obtained through the relationship:

ṡ = L τ   (2)

The components of the interaction matrix L are highly nonlinear functions of both the visual features s and the robot end-effector configuration x, which is itself a function of s, i.e. x = x(s), x(s) also being highly nonlinear. We should mention that, in simulation, the value of x can be obtained directly by creating a virtual sensor measuring the simulated robot configuration. In practice, the vector x must be rebuilt from the measurement s. Several strategies for dealing with this problem are detailed in [3,4].
As with other control methods, we define an error e(t) to be minimized by the vision-based controller [4]:

e(t) = s(m(t), a) − s*   (3)

where the vector m(t) is a set of image measurements (depending on the chosen type of visual servoing controller). The image features s are computed from these image measurements, and a represents a set of additional parameters of the system, such as the intrinsic parameters of the camera or the models of the observed objects. The vector s* is the desired value of the image features.
Combining equations (3) and (2), the relationship between the relative camera-object velocity and the time derivative of the error is obtained:

ė = L τ   (4)

Imposing an exponential decoupled decrease of the error, ė = −λe, leads to the classical controller:

τ = −λ L̂⁺ e   (5)

where λ is a positive gain and L̂⁺ is the pseudoinverse of an estimate of the interaction matrix; τ can be related to the motor velocities q̇ through the robot inverse kinematic Jacobian. From (2), a simple visual servoing error model can be derived:

Δx = L⁺ Δs   (6)

where Δs is a small error in the observation of the features s and Δx the resulting error in the configuration of the robot end-effector. As presented above, the components of the interaction matrix L are highly nonlinear functions of both s and x. A controller singularity appears when the interaction matrix is rank deficient [27][28][29]: a small observation error Δs then leads to a large robot positioning error Δx. Controller singularities strongly affect the stability of the control process and should be avoided. The positioning error models based on (6) and the controller singularities of the visual servoing approaches [13,14,21] will be further detailed in Section 4. We now make some recalls about the features observed in the three types of controllers [13,14,21].
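Equations (5) and (6) can be sketched in a few lines. This is a minimal illustration (not the paper's implementation): the diagonal interaction matrix below is a made-up example whose last, nearly-zero entry mimics proximity to a controller singularity, where a small observation error is strongly amplified.

```python
import numpy as np

def vs_control_law(L_hat, e, lam=1.0):
    """Classical visual servoing law (5): tau = -lambda * pinv(L_hat) @ e."""
    return -lam * np.linalg.pinv(L_hat) @ e

def positioning_error(L, delta_s):
    """First-order error model (6): delta_x = pinv(L) @ delta_s."""
    return np.linalg.pinv(L) @ delta_s

# Illustrative near-singular interaction matrix: the small singular value
# amplifies a 0.5-pixel observation error into a huge configuration error.
L_near_singular = np.diag([1.0, 1.0, 1.0, 1.0, 1.0, 1e-3])
dx = positioning_error(L_near_singular, np.full(6, 0.5))
```

Here the last component of dx is 0.5/1e-3 = 500, illustrating why the design process must keep the controller away from interaction-matrix singularities.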

Visual servoing image features
In this section, the three different types of image features observed by the three visual servoing controllers [13,14,21] are presented. We do not repeat the detailed computation of the interaction matrices here; interested readers can refer to [3,4,13,14,21].

Leg-direction-based and line-based visual servoing
The legs of parallel robots are usually slim cylindrical rods. For the Gough-Stewart platform, the camera observes the cylindrical links PiBi of the prismatic actuators (Fig. 2). As shown in Fig. 3, the projection of a robot leg PiBi in the image plane consists of two lines l¹i and l²i, defined as the intersections of the image plane with the two planes of normals ᶜn¹i and ᶜn²i passing through the camera frame origin C and tangent to the observed cylinder. In what follows, the superscript "c" denotes the camera frame (equivalent to the world frame in this paper). For each cylinder, the normal vectors ᶜnᵏi (k = 1, 2) can be extracted from the camera observation [6]. Therefore, when observing n cylinders, the vector s of the image features is defined as:

s = (ᶜn¹₁, ᶜn²₁, …, ᶜn¹ₙ, ᶜn²ₙ)   (7)

For leg-direction-based visual servoing [14], the vectors ᶜnᵏi are used to rebuild ᶜui, the direction of the cylinder axis (Fig. 3). The function relating the time derivative of ᶜui to the spatial relative camera-object kinematic screw τc can then be established [6,14]:

ᶜu̇i = Mᵤᵢᵀ τc   (8)

where Mᵤᵢᵀ is the interaction matrix of the robot leg i for leg-direction-based visual servoing. It has been proved in [30,31] that, in order to fully control the motion of the Gough-Stewart platform, a minimum of three independent line directions ui must be known; the full interaction matrix Mᵤᵀ is obtained by stacking the matrices Mᵤᵢᵀ. For line-based visual servoing [21], the vectors ᶜnᵏi are used to rebuild the Plücker coordinates (ui, hi) of the axis Li of the cylindrical link PiBi (Fig. 3), with hi = ᶜBi × ᶜui. Differentiating hi and using equation (8), its time derivative can be written in matrix form as

ḣi = Mhi τc   (9)

where Mhi is the interaction matrix related to hi and involves the antisymmetric matrix associated with the cross product [32]. Therefore, for a line Li, we have

[ᶜu̇i; ḣi] = Muhi τc   (10)

where Muhi is the interaction matrix related to the Plücker coordinates of Li.
Similarly, it can be proved that Muhi is of rank 2; thus, in order to fully control the 6 DOFs of the Gough-Stewart platform, observing a minimum of three independent legs is necessary. The full interaction matrix for LineBVS is obtained by stacking the matrices Muhi. For the Gough-Stewart platform, LegBVS and LineBVS may lead to the same controller performance, at least in terms of controller singularities (this will be proven in Section 4). For LegBVS and LineBVS, the interaction matrix relating the configuration of the robot platform to the image features s involves the radii R′ of the observed cylinders (Fig. 3), together with the camera pose and the robot geometric parameters (see [6,14]). All these parameters will be optimized later during the design optimization process in order to get a good control performance.
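The Plücker coordinates used by LineBVS are straightforward to build from the leg attachment points. A minimal sketch (our own helper, not from the paper): u is the unit leg direction and h = B × u its moment about the origin; h is invariant to the point chosen on the line, and u · h = 0 always holds.

```python
import numpy as np

def plucker_line(B, P):
    """Plücker coordinates (u, h) of the line through attachment points B and P:
    u is the unit direction of the leg, h = B x u its moment about the origin."""
    B, P = np.asarray(B, float), np.asarray(P, float)
    u = (P - B) / np.linalg.norm(P - B)
    h = np.cross(B, u)
    return u, h

# Example leg: base point B and platform point P
B = np.array([1.0, 2.0, 3.0])
P = np.array([1.0, 2.0, 5.0])
u, h = plucker_line(B, P)
```

Sliding the reference point along the line leaves h unchanged, since cross(B + t·u, u) = cross(B, u).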

Image moment visual servoing
IMVS is an approach based on the observation of a target T mounted on the moving platform of the robot (Fig. 2). The image moments can be extracted from the image plane through the camera observation [13]. The target T can be a dense object defined by a set of closed contours or a discrete set of m image points [33]. The definition of the image moments, as well as the interaction matrix associated with any moment, is provided in [13]. For a Gough-Stewart platform with six DOFs, a set of six independent moments should be selected as the image features. In this work, T is a discrete model composed of three points (A1, A2, A3) (Fig. 4). The six independent moments are: the coordinates xg, yg of the center of gravity of the discrete model, the area a of the triangle A1A2A3 in the image plane, the orientation α of the discrete model in the image, and c1, c2, two combinations of moments invariant to scale (see definitions in [33]). Then we have:

ṁ = Lm τ   (11)

where τ is the twist of the moving platform of the Gough-Stewart platform. The coordinates of the three target points are involved in the interaction matrix Lm, as well as the camera pose. The values of these parameters will be optimized later during the design optimization process. It should be noticed that, even though the robot geometric parameters do not appear explicitly in the interaction model of the image moment visual servoing controller, they still influence its performance, since they define the location of the robot workspace: if the distance between the workspace and the camera is large, the accuracy will be worse than if the workspace is closer to the camera. Accordingly, we still need to optimize the robot geometric parameters in order to optimize the overall robot accuracy.
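Four of the six moment features are easy to compute directly for a three-point target. The sketch below (our own helper, following the usual moment definitions of [33]; c1 and c2 are omitted for brevity) returns the centroid, the triangle area and the orientation derived from the second-order central moments.

```python
import numpy as np

def point_target_moments(pts):
    """Centroid (xg, yg), triangle area a, and orientation alpha of a
    discrete 3-point target, from its (central) image moments."""
    pts = np.asarray(pts, float)
    xg, yg = pts.mean(axis=0)
    (x1, y1), (x2, y2), (x3, y3) = pts
    # Signed-area formula for the triangle A1A2A3
    a = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    # Second-order central moments and the classical orientation formula
    dx, dy = pts[:, 0] - xg, pts[:, 1] - yg
    mu20, mu02, mu11 = (dx * dx).sum(), (dy * dy).sum(), (dx * dy).sum()
    alpha = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return xg, yg, a, alpha

xg, yg, a, alpha = point_target_moments([(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)])
```

For the right triangle above, the centroid is (2/3, 2/3) and the area is 2.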
In the next section, we deal with the computation of some performance indices of the visual servoing controller.

Controller Performance
Concerning the positioning accuracy requirements of the robot design, two types of controller performance are defined and considered: • the presence of (or even the proximity to) controller singularities: the singularities of the interaction matrices impact both the positioning accuracy and the controller stability [3], • the positioning error, which comes from the camera observation error through the interaction model of the corresponding visual servoing controller.
Then, in this section, singularities of the corresponding controllers and the positioning error models are described.

Controller singularities
It was shown in [27] that a rank deficiency of the interaction matrix L leads to a visual servoing controller singularity. In this section, based on the study of the controllers defined in Section 3, we derive the conditions of rank deficiency of the corresponding interaction matrices.

Leg-based visual servoing singularities
The singularity of the mapping between the space of the observed image features and the Cartesian space has a great influence on the accuracy of visual servoing.
Figure 5 Example of a Type 2 singularity for a 3-UPS robot: the platform gets an uncontrollable rotation around P1P2 [31]
Thanks to the work of [28], a tool named the "hidden robot" was developed in order to simplify the study of the controller singularities when visual servoing is applied to the control of parallel robots. It reduces the study of the complex singularities of the interaction matrix to the study of the singularities of the virtual parallel robot hidden in the controller.
In [31], the LegBVS controller singularities for the control of the Gough-Stewart platform were studied in detail. The Gough-Stewart platform consists of six UPS legs, and the corresponding hidden robot leg is a UPS leg with an actuated U joint. Since such legs provide two degrees of actuation, observing only three legs is enough to fully control the Gough-Stewart platform when using leg direction observation [28].
The singular configurations of 3-UPS-like robots have been deeply studied in [34] and [35]. Type 2 singularities appear when the planes P1, P2, P3 (whose normal directions are defined by the vectors u1, u2, u3) and the plane P4 (passing through the points P1, P2, P3 in Fig. 5) intersect in one point (possibly at infinity) (Fig. 5).
The singularities of LineBVS applied to the control of the Gough-Stewart platform have never been studied before. The concept of the hidden robot is to find what kind of virtual actuators correspond to the features observed in visual servoing. For LineBVS, we take the Plücker coordinates of a line Li as the image feature to be observed; a 3D line is uniquely defined by a 3D point and a 3D orientation [36]. Therefore, we should find the virtual actuators corresponding to the 3D line Li.
As shown in Fig. 6, Bi is the 3D point and ui the unit vector, and Li (i = 1, 2, …, 6) is the 3D line they define. An active U joint in space is the virtual actuator that moves the vector ui. In general, an actuated PPP chain should be added before the leg links so that the point Bi can move in space. Therefore, for a UPS leg, the corresponding hidden robot leg when using line-based visual servoing is a PPPUPS leg (Fig. 6). However, in the case of the Gough-Stewart platform, all the U joints are fixed on the base, which means that the points Bi are fixed in space. The actuated PPP chain is then no longer needed, and the 3D lines Li passing through the robot links can be defined only by the vectors ui.
Figure 6 Corresponding hidden robot leg when the line Li in space is observed (active U joint, active PPP chain)
Therefore, the hidden robot of the Gough-Stewart platform when applying line-based visual servoing is the same as the hidden robot when applying leg-direction-based visual servoing, i.e. the 3-UPS robot, which means that these two visual servoing controllers share the same conditions of controller singularity. We thus suppose that, in terms of controller performance, LegBVS and LineBVS are equivalent (which will be confirmed in the following sections).

Image moment visual servoing singularities
For IMVS, a controller singularity appears when the matrix Lm is rank deficient. The expression of Lm is rather complex and it is difficult to find the rank deficiency conditions analytically. We therefore define a criterion of proximity to controller singularities. A list of indices suitable for the analysis of robot singularities was presented in [37]. In this case, we take the inverse condition number of the interaction matrix Lm as the index of proximity to a controller singularity, since it estimates the numerical stability of Lm.
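The inverse condition number is cheap to evaluate numerically. A minimal sketch (our own helper): it is the ratio of the smallest to the largest singular value, lies in [0, 1], and reaches 0 exactly when the interaction matrix is rank deficient.

```python
import numpy as np

def inverse_conditioning(L):
    """Inverse 2-norm condition number of an interaction matrix:
    sigma_min / sigma_max in [0, 1]; 0 means the controller is singular."""
    s = np.linalg.svd(np.asarray(L, float), compute_uv=False)
    return s[-1] / s[0]
```

During the design optimization, such an index would be evaluated over the whole RDW and constrained to stay above a threshold, keeping every configuration safely away from controller singularities.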

Positioning accuracy model
We propose to use the model (6) to describe the controller repeatability, as it was done in [38].

Observation errors in the leg-based visual servoing
The positioning error models obtained when observing the robot links in the leg-based visual servoing approaches have been presented in detail in [15,17]. The positioning error comes from the camera observation error of the image features (for LegBVS, the features are the leg directions; for LineBVS, the leg Plücker coordinates). When the camera observes the robot links, the link edges are projected into the image plane as lines lᵏi (Fig. 3), which are then pixellized (Fig. 7). We suppose that the error of estimation of the lines lᵏi is due to a random shift of ±0.5 pixel in the pixels corresponding to the intersections of the lines with the image borders. Then we get the error model

Δs = Jn Δp   (14)

where Δs stands for the small variations of the image features s and Δp contains all the pixel errors. In this case, the camera observation noise is set to ±0.5 pixel, which is a typical value for cameras; thus every component of the vector Δp can take the value +0.5 or −0.5. With the help of equations (6) and (14), the observation error model for LegBVS and LineBVS can be written under the generic form:

Δx = LP Δp   (15)

where

LP = L⁺ Jn   (16)

Observation errors in the image moment visual servoing
For IMVS, let Q = [x1p x2p x3p y1p y2p y3p]ᵀ be the pixel coordinates of the three target points projected in the image plane, and let S be the matrix which transforms the time derivatives of the set of image moments m into the time derivatives of Q. Thus, equation (11) can be written in the form

Q̇ = S Lm τ   (17)

We estimate the error of estimation of each component of Q to be ±0.5 pixel (see Fig. 8) for the location of each point projected in the image plane. Then the error model of the image moment visual servoing controller can be written in the form:

Δx = (S Lm)⁺ ΔQ   (18)

Positioning accuracy
For the positioning accuracy of the Gough-Stewart platform, we have Δx = [Δtx Δty Δtz Δwx Δwy Δwz]ᵀ, with [Δtx Δty Δtz] the translation errors along the three axes and [Δwx Δwy Δwz] the rotation errors around the three axes. The positioning error is then defined as

Et = sqrt(Δtx² + Δty² + Δtz²)   (19)

and the orientation error as

Ew = sqrt(Δwx² + Δwy² + Δwz²)   (20)
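The two error norms above are a one-liner each. A minimal sketch (our own helper names) splitting a 6D pose error into the positioning error Et and the orientation error Ew:

```python
import numpy as np

def pose_errors(delta_x):
    """Split the 6D pose error [dtx dty dtz dwx dwy dwz] into the
    positioning error Et (translation norm) and orientation error Ew
    (rotation norm)."""
    delta_x = np.asarray(delta_x, float)
    Et = np.linalg.norm(delta_x[:3])   # translation part
    Ew = np.linalg.norm(delta_x[3:])   # rotation part
    return Et, Ew

Et, Ew = pose_errors([3.0, 4.0, 0.0, 0.0, 0.0, 0.01])
```

These are the quantities that the design constraints bound by 1 mm and 0.01 rad respectively.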

Optimal design Procedure
In this section, the design procedure developed in order to obtain the optimal parameters of the Gough-Stewart platform together with the parameters of the controllers is described.

Design variables
Robot design parameters: As presented in Section 2, the Gough-Stewart platform is defined by the following geometric parameters: ra, rb, α0, α1, α2 (Fig. 1(c)). All these parameters affect the size of the robot as well as the controller performance. In addition, when LegBVS and LineBVS are applied, the radius of the cylindrical distal links of the Gough-Stewart platform also influences the positioning accuracy [15]; thus the radius of the cylindrical distal links PiBi (i = 1, 2, …, 6), denoted as R′ (see Fig. 3), is a decision variable of the optimization process. When image moment visual servoing is applied, the coordinates [x1 y1 x2 y2 x3 y3] of the three-point discrete model (expressed in the moving platform frame x′O′y′) defining the configuration of the target (Fig. 4) affect the controller interaction model. They must therefore be optimized when dealing with image moment visual servoing.

Controller design parameters:
The configuration of the camera is normally parameterized by six independent parameters, and it affects the controller interaction model. In order to observe the robot (both the robot legs and the end-effector) in a symmetrical way: • the camera frame orientation is set parallel to the robot fixed frame, • the camera origin is imposed to stay on a vertical line passing through O (the coordinates (xc, yc) of the camera frame origin are set to (0, 0)), so that only its height zc remains to be chosen.
Additionally, some other variables used in the optimal design process need to be defined: L is the length of the prismatic actuator BiPi (L = ‖BiPi‖, i = 1, 2, …, 6) and l0 is the side length of the cubic RDW (see Tab. 1). Design variables: Based on the explanations above, two different sets of design variables (grouped in a vector y), depending on the type of controller, are defined:

y = [ra, rb, α0, α1, α2, R′, zc]ᵀ for LegBVS and LineBVS,
y = [ra, rb, α0, α1, α2, x1, y1, x2, y2, x3, y3, zc]ᵀ for IMVS.

In the next section, the optimal design problem for the Gough-Stewart platform will be formulated.

Objective function
As mentioned in Section 2, the robot should be as compact as possible. The footprint of the Gough-Stewart platform is evaluated by the radius rb of its base. Therefore, the optimization problem is formulated so as to minimize the value of rb.

Constraints
The constraints introduced in Section 2 are reviewed here. Throughout the RDW, the following geometric and kinematic constraints must be satisfied: • The RDW should be free of singularities (both of the robot and of the controller). The controller singularities are presented in detail in Section 4.1. In this case, we use the inverse condition number of the interaction matrix L, denoted as κ⁻¹(L); in the RDW, κ⁻¹(L) must stay above a prescribed threshold (constraint (21)). The "mechanical" singularities of the Gough-Stewart platform are a different matter; this complex problem has been studied for decades [1,[39][40][41][42]. In [43] and [44], a kinetostatic approach taking the force transmission into account was proposed to determine the singularity-free zones of a parallel robot: when a pressure angle is close to 90 deg, the parallel robot is close to a singular configuration. Therefore, we calculate the pressure angles β = [β1, …, β6]ᵀ for all six legs of the Gough-Stewart platform [43,44]; in the RDW, they must stay below a prescribed maximal value (constraint (22)). • The robot positioning error ought to be lower than 1 mm and the orientation error lower than 0.01 rad. The positioning error model is defined in Section 4.2. The models (6) and (15) being linear in terms of the observation error, the maximal positioning error Etmax and the maximal orientation error Ewmax of the robot are found at one of the corners of the hyperpolyhedron defining the observation errors [45]. The accuracy constraints can thus be formulated as Etmax ≤ 1 mm and Ewmax ≤ 0.01 rad (constraints (23) and (24)). The RDW throughout which all the constraints (21) to (24) must be satisfied should cover a cube of side length l0 = 100 mm with the range of T&T angles ϕ ∈ (−π; π], θ ∈ [0; π/12] and σ ∈ [0; π/12]. The algorithm computing the size of the Largest Regular Dexterous Workspace (LRDW) is presented in detail in [46] and is adapted here to obtain the cubic LRDW within the RDW of the manipulator for a given decision variable vector y.
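Because the error models are linear in the observation error, the worst case over the ±0.5-pixel noise box is attained at one of its corners, so it can be found by exhaustive corner enumeration. A minimal sketch (our own helper; for large feature vectors one would exploit linearity per column instead of enumerating 2ⁿ corners):

```python
import itertools
import numpy as np

def worst_case_errors(LP, noise=0.5):
    """Worst-case positioning/orientation errors of the linear model
    delta_x = LP @ delta_p, with each component of delta_p in [-noise, +noise].
    By linearity the maximum is at a corner of the hypercube, so we
    enumerate all 2^n corners and keep the largest Et and Ew."""
    n = LP.shape[1]
    Et_max = Ew_max = 0.0
    for corner in itertools.product((-noise, noise), repeat=n):
        dx = LP @ np.array(corner)
        Et_max = max(Et_max, np.linalg.norm(dx[:3]))   # translation part
        Ew_max = max(Ew_max, np.linalg.norm(dx[3:]))   # rotation part
    return Et_max, Ew_max

# Illustrative error-propagation matrix (identity, 6 features)
Et_max, Ew_max = worst_case_errors(np.eye(6), noise=0.5)
```

The design constraints (23) and (24) then simply require Et_max ≤ 1 mm and Ew_max ≤ 0.01 rad at every pose sampled in the RDW.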

Problem formulation and optimization results
For designing a compact Gough-Stewart platform with the detailed specifications given in Tab. 1, the following optimization problem is formulated: minimize r_b over the decision variables y, subject to l_LRDW ≥ 100 mm (26). As introduced in Sec. 3, observing three legs is enough to fully control the Gough-Stewart platform when leg-based visual servoing controllers are applied. In this case, as a matter of comparison, we optimize the geometric parameters of the robot for two leg-observation cases (Case 1 and Case 2). The optimization problem was solved with the algorithm implemented in the MATLAB fmincon function. A multistart strategy, combined with random initial points seeded by a Genetic Algorithm, was also used in order to increase the chances of reaching the global minimum. The optimal design results are given in Tab. 2.
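The multistart strategy described above can be sketched as follows, transposed from MATLAB's fmincon to SciPy. The objective and constraint are toy stand-ins for the footprint r_b(y) and the LRDW constraint (26); the actual robot functions are not reproduced here.

```python
# Sketch of a multistart constrained minimization, assuming a toy convex
# objective in place of r_b(y) and a linear inequality in place of the
# LRDW constraint l_LRDW >= 100 mm.
import numpy as np
from scipy.optimize import minimize

def footprint(x):                      # stand-in for r_b(y)
    return x[0] ** 2 + x[1] ** 2

# stand-in for the workspace constraint, written as g(x) >= 0
cons = ({"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0},)

rng = np.random.default_rng(0)
best = None
for _ in range(20):                        # random restarts to escape local minima
    x0 = rng.uniform(-2.0, 2.0, size=2)    # random start (GA-seeded in the paper)
    res = minimize(footprint, x0, method="SLSQP", constraints=cons)
    if res.success and (best is None or res.fun < best.fun):
        best = res
```

On the toy problem every restart reaches the same minimum; on the real, non-convex robot design problem the restarts are what justify some confidence that the reported design is globally optimal.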
As seen from the optimization results, in terms of the footprint of the robot, the Gough-Stewart platforms designed for the LegBVS, LineBVS and IMVS are close to each other and the differences are almost negligible. In particular, for the robots designed for the leg-based visual servoing controllers, the geometric parameters are the same under the same observation condition (Case 1 and Case 2). This result confirms the supposition proposed in Section 4.1.1: from equation (8), the coordinates of the points B_i are constant, so the time derivatives of h_i and u_i are linearly dependent, which means that the LegBVS and LineBVS share the same controller performance.
In the next section, we will perform co-simulations with ADAMS and Simulink to test the robot accuracy performance.

Figure 11 Gough-Stewart platform optimized using image moment visual servoing (the base with errors on the joints B_i is highlighted)

Simulation method
In order to validate the optimization results and test the robot accuracy performance, co-simulations are performed within a connected ADAMS-Simulink environment (Fig. 13). Five Gough-Stewart platform models with the optimal geometric parameters obtained from the optimal design process (one model per optimized design) are created in the software ADAMS.
Real-time data (block "Data acquisition") of the ADAMS simulator are extracted:
• For LegBVS and LineBVS, the coordinates of the points P_i and B_i (Fig. 1(b)) are extracted;
• For IMVS, the coordinates of the three points A_1, A_2 and A_3 (Fig. 4) are extracted.
Figure 12 Test points in the LRDW
These real-time data are used to rebuild the image seen by the pinhole camera (block "Simulation camera"). Random noise corresponding to the observation errors presented in Section 4 is then added. Finally, we extract the image features s, depending on the controller type, and use them to control the robot with the controller defined in (5).
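The "simulated camera" step can be sketched as a pinhole projection followed by bounded pixel noise. The intrinsic parameters below (focal lengths, principal point) are assumed for illustration, not the paper's calibrated values.

```python
# Minimal sketch of the simulated-camera block: project 3-D feature points
# through a pinhole model, then corrupt the pixel coordinates with bounded
# observation noise. The intrinsics K are hypothetical.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx  (assumed)
              [  0.0, 800.0, 240.0],   # fy, cy        (assumed)
              [  0.0,   0.0,   1.0]])

def project(points_cam, noise_px=0.0, rng=None):
    """points_cam: (N,3) coordinates in the camera frame (Z > 0).
    Returns (N,2) pixel coordinates, optionally with uniform noise."""
    uvw = points_cam @ K.T                 # homogeneous pixel coordinates
    px = uvw[:, :2] / uvw[:, 2:3]          # perspective division
    if noise_px and rng is not None:
        px = px + rng.uniform(-noise_px, noise_px, size=px.shape)
    return px

pts = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.0]])
clean = project(pts)
noisy = project(pts, noise_px=0.5, rng=np.random.default_rng(1))
```

Uniform noise on a bounded interval matches the bounded-observation-error assumption used in the accuracy constraints, where the worst case lies at a corner of the error box.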
Then, each robot is driven from its home pose to the desired poses with its dedicated controller. The positioning and orientation accuracies are recorded during the co-simulation.
Additionally, in order to test the robustness of the accuracy with respect to geometric errors, the same co-simulations were run with errors added to the models. The models with joint errors are defined as follows: we add a random error on the location of each joint B_i on the base of the robot, the distance between the accurate joint B_i and the erroneous joint B'_i, denoted l_{B_iB'_i}, being fixed to l_{B_iB'_i} = 0.1 r_b (see the red parts of Fig. 11). In the next step, the designed robot prototypes were controlled with a controller different from the one they were designed for, in order to verify the original purpose of performing control-based design. In what follows, for brevity, only the result of the LineBVS applied to the robot designed for IMVS is given.
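The joint-error injection described above can be sketched as displacing each base joint B_i by a randomly oriented vector of fixed length 0.1 r_b. The hexagonal base layout below is illustrative, not the optimized geometry.

```python
# Sketch of the robustness test's error injection: each base joint B_i is
# moved along a random unit direction by exactly 0.1 * r_b. The base
# coordinates are illustrative placeholders.
import numpy as np

def perturb_joints(B, r_b, rng, magnitude=0.1):
    """B: (6,3) base-joint coordinates. Returns B' with every joint moved
    so that ||B'_i - B_i|| = magnitude * r_b."""
    d = rng.normal(size=B.shape)
    d /= np.linalg.norm(d, axis=1, keepdims=True)  # random unit directions
    return B + magnitude * r_b * d

r_b = 0.3                                          # assumed base radius [m]
angles = np.deg2rad(np.arange(0, 360, 60))
B = np.stack([r_b * np.cos(angles), r_b * np.sin(angles), np.zeros(6)], axis=1)
B_err = perturb_joints(B, r_b, np.random.default_rng(2))
```

Fixing the error magnitude (rather than only bounding it) makes the robustness test a worst-case check at the 0.1 r_b level instead of an average over smaller perturbations.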
Results are shown and analyzed in the next subsection.

Simulation results
In this section, the simulation cases are denoted Case A to Case D. Since we have proved that the LegBVS and LineBVS have the same control performance for the Gough-Stewart platform, and that the geometric parameters of the robots designed for these two controllers are the same under the same observation condition, we only perform the co-simulations for the robot controlled with the LineBVS. Each simulation was played for five seconds and the positioning error was recorded. For the robot of Case A going to point T_4, Pose 2, the results show that the robot converges within a fraction of a second. Studying the results, we see that the robot Model 5 in Case C leads to the minimal positioning and orientation errors. For the robots in Case A and Case B, the mean error is very close to the requested value of 1 mm; however, for some points in the workspace the error is slightly above this limit (maximal error of 1.24 mm in both cases). Indeed, the positioning accuracy model applied during the optimal design process (Section 5) to estimate the controller performance was deliberately simple, and was thus a source of inaccuracy in the positioning-error estimation. Even so, the maximal robot positioning error (1.24 mm) is only slightly above the 1 mm threshold, while the mean values stay close to 1 mm. Additionally, the measured orientation errors obtained in all cases are far below the requested 0.01 rad. The results obtained from the models with geometric errors are similar to those obtained from the accurate models, which proves the robustness of the accuracy of the models when the visual servoing controllers are applied.
Then we study the result of Case D, which is the most important one. For the Gough-Stewart platform optimized for IMVS but controlled with the LineBVS, the mean error is far larger than the requested value of 1 mm, and the maximal error even grows to 1.56 mm. This result confirms that it is necessary to optimize a robot for its dedicated controller: in other words, control-based design of the pair {robot+controller} helps ensure the vision-based control accuracy performance.
Another interesting observation is that the three discrete points obtained from the optimal design when using IMVS form a triangle (Triangle 1) which is not regular. Therefore, in order to study why such a configuration arises, we created a three-point target arranged as a regular triangle (Triangle 2); the coordinates of both configurations are shown in Fig. 14 and Fig. 15. We then performed the same co-simulations as for Model 5 in Case C, but with the target changed to the new three-point model (Triangle 2) in IMVS. The simulation results show that the maximal positioning error rises to 1.6 mm and the maximal orientation error to 6.0e-4 rad, whereas the corresponding results are 0.63 mm and 4.3e-4 rad when observing Triangle 1.
These results prove that the configuration of the discrete-point target has an influence on the observation of the image moments and affects the controller accuracy performance. As a result, it is necessary to perform topology optimization of the configuration of the target shape observed during the design process.
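The dependence of the moment-based features on the point layout can be illustrated directly: the centroid and centered second-order moments differ between a regular and an irregular three-point target. The coordinates below are illustrative, not the paper's Triangle 1 and Triangle 2.

```python
# Sketch: discrete image moments (centroid and centered second-order
# moments) of two three-point targets. An equilateral layout yields
# isotropic second moments (mu20 == mu02, mu11 == 0); an irregular one
# does not. Point coordinates are hypothetical.
import numpy as np

def moments(pts):
    """Centroid and centered second-order moments [mu20, mu11, mu02]."""
    c = pts.mean(axis=0)                          # centroid (m10/m00, m01/m00)
    d = pts - c
    mu20 = (d[:, 0] ** 2).sum()
    mu02 = (d[:, 1] ** 2).sum()
    mu11 = (d[:, 0] * d[:, 1]).sum()
    return c, np.array([mu20, mu11, mu02])

equilateral = np.array([[0.0, 1.0],
                        [-np.sqrt(3) / 2, -0.5],
                        [ np.sqrt(3) / 2, -0.5]])
irregular = np.array([[0.0, 1.2], [-0.7, -0.4], [0.9, -0.6]])

c1, mu1 = moments(equilateral)
c2, mu2 = moments(irregular)
```

Since the IMVS interaction matrix is built from such moments, two targets with different second-order moments give the controller different sensitivity to the same camera noise, which is consistent with the accuracy gap observed between Triangle 1 and Triangle 2.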

Conclusion
In the work presented above, a novel advanced optimal design methodology, "control-based design", is applied in order to design a Gough-Stewart robot with the best accuracy performance of the pair {robot+controller}. We have proven that the controller performance (accuracy, singularity) is affected by the robot's geometric design parameters. Thus, during the robot design process, it is necessary to find the geometric parameters that allow the best performance of the pair {robot+controller}. Three classical types of visual servoing controllers, LegBVS, LineBVS and IMVS, were applied to the Gough-Stewart platform. Positioning error models considering the camera observation error were developed for these three controllers and, in order to avoid instability issues, the singularities of the controllers were analyzed. Next, the design optimization problem providing the optimal geometric parameters and the camera placement was formulated for each type of controller. Then, co-simulations between ADAMS and Simulink were performed for the Gough-Stewart platforms optimized for the three controllers. The results showed that the robots designed for the three visual servoing controllers have similar sizes (the robots designed for LegBVS and LineBVS share the same size), and that the robot designed for IMVS has a better positioning accuracy than the two robots optimized for LegBVS and LineBVS. Most importantly, the co-simulation results show that when a controller is applied to a robot designed for another one, the positioning accuracy is no longer guaranteed, confirming the importance of the control-based design approach.

Acknowledgements
The authors sincerely thank Professor Sébastien Briot of École Centrale de Nantes for his critical discussion and reading during manuscript preparation.

Funding
Not applicable

Availability of data and materials
The datasets supporting the conclusions of this article are included within the article.

Authors' contributions
The authors' contributions are as follows: Dawei Gong was in charge of the whole trial; Minglei Zhu wrote the manuscript; Shijie Song assisted with sampling and laboratory analyses.