Obstacle Avoidance and Terrain Identification for a Hexapod Robot

* Abstract: Legged robots offer greater mobility potential than wheeled or tracked vehicles in harsh environments. In real applications, both environmental perception and terrain identification play an important role in ensuring the safety of the robot. However, few studies address obstacle avoidance and terrain identification methods for legged robots. This paper proposes such a method. Firstly, a novel dynamic obstacle avoidance motion planning method is proposed which takes full account of the motion ability of the robot. Secondly, a terrain identification method is proposed with which the robot can distinguish typical terrains such as flat floor, step-on, step-down and ditch. Besides, ground properties such as stiffness are identified to improve the performance of the robot. Finally, the proposed method is integrated on a hexapod robot and validated by real-world experiments. Our methods and experimental results can serve as a valuable reference for other legged robots.

Legged robots can select foot positions autonomously, which gives them advantages in both structured and field environments [1,2]. Marco Hutter et al. [3][4][5][6] introduced ANYmal, a quadrupedal robot that can be used in factory environments. Gerardo Bledt et al. [7,8] introduced the MIT Cheetah 3, a quadrupedal robot that demonstrates a low cost of transport (COT). Marko Bjelonic et al. [9] introduced a six-legged robot that performs autonomous navigation in unstructured terrain. Feng Gao et al. [10][11][12] introduced a six-parallel-legged robot that can walk on irregular terrain while keeping the body balanced and can complete tasks such as opening a door using haptic information. Although breakthroughs have been made in the study of legged robots, environmental adaptability remains a major issue requiring further research.
With respect to obstacle avoidance and navigation methods for robots, much work has been done. Anthony Stentz [13] presented the Focussed D* algorithm for real-time path replanning. Maxim Likhachev et al. [14] presented a graph-based algorithm that produces bounded suboptimal solutions in an anytime fashion. Lin Lei et al. [15] presented an improved genetic-algorithm-based path planning method for mobile robots in dynamic unknown environments. Aksel Andreas Transeth et al. [16] used the terrain itself for locomotion in a snake robot. David Wooden et al. [17] introduced autonomous navigation for BigDog, a rough-terrain quadruped robot. Annett Stelzer [18] presented stereo-vision-based navigation in rough terrain for the six-legged robot DLR Crawler. Andrey V. Savkin et al. [19] presented collision-free navigation for a non-holonomic robot in unknown complex dynamic environments. Chun-Hsu Ko et al. [20] developed an effective scheme for moving-obstacle avoidance. Muhannad Mujahed et al. proposed the Admissible Gap (AG), a new approach for reactive collision avoidance. Although obstacle avoidance methods have been studied for many years, most of them do not take full account of the robot's motion ability. Therefore, this paper proposes an obstacle avoidance method that considers the robot's mobility.
With respect to terrain identification and classification, much work has also been done. Karl Iagnemma et al. [21] proposed an on-line estimation method to identify key terrain parameters using on-board rover sensors. Fernando L. Garcia Bermudez et al. [22] used vibration data to classify terrain. Paul Filitchkin and Katie Byl [23] proposed a feature-based terrain classification method for Boston Dynamics' LittleDog. Graeme Best et al. [24] used the actual position and goal position to build a support vector machine (SVM) classifier to distinguish different terrains. Joshua Christie et al. [25] offered an acoustics-based method to perceive terrain-robot interactions during locomotion. Camilo Ordonez et al. [26] trained a probabilistic neural network (PNN) on recorded current data to identify terrain. Will Bosworth et al. [27] measured ground stiffness during standing and differentiated ground types by impedance during jumping. Most of these methods rely on machine learning algorithms trained on many samples. In contrast, our method identifies terrain from contact force data in a more direct way.
The rest of this paper is organized as follows: Section 2 gives a system overview of the hexapod robot. Section 3 presents the obstacle avoidance and terrain identification methods. Section 4 offers experiments that validate the effectiveness of the methods. Finally, Section 5 concludes the paper.

System Overview
As shown in Figure 1, the six-legged walking robot Qingzhui is used in this paper. The robot is highly adaptable to its environment: it can walk on many surfaces, including concrete, grass, asphalt, carpet and wood, and it can climb gradients of up to 25 degrees. As shown in Figure 2, each leg has three degrees of freedom: hip abduction-adduction, thigh rotation and shank rotation. The robot hosts two RGBD cameras and a LIDAR as perceptive sensors, and an inertial measurement unit (IMU), eighteen encoders and eighteen torque sensors as proprioceptive sensors.
The tip can be regarded as a light sphere, and the origin of the tip coordinate system is the center of the sphere. The tip position can be calculated as

p_tip = f(θa, θt, θs)  (1)

where θa, θt and θs are the angles of the hip abduction-adduction joint, the thigh rotation joint and the shank rotation joint, and lAB and lBE are the lengths of the thigh and shank appearing in the forward kinematics f. The contact force follows from the static relation

F_tip = (J^T)^(-1) τ  (2)

where F_tip is the contact force, J is the Jacobian matrix and τ is the vector of joint torques captured by the torque sensors.
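As an illustration of the static force relation τ = J^T F_tip, the contact force can be recovered from the measured joint torques by a linear solve. The sketch below assumes a 3-DOF leg with a square Jacobian; the numerical values of the Jacobian and torques are illustrative only, not taken from the robot.

```python
import numpy as np

def contact_force(J, tau):
    """Estimate the foot-tip contact force from joint torques.

    Uses the static relation tau = J^T @ F_tip, hence
    F_tip = (J^T)^{-1} tau. J is the 3x3 leg Jacobian,
    tau the measured joint torque vector.
    """
    return np.linalg.solve(J.T, tau)

# Hypothetical Jacobian and torque readings (illustrative values)
J = np.array([[0.30, 0.00, 0.00],
              [0.00, 0.25, 0.12],
              [0.05, 0.10, 0.20]])
tau = np.array([1.5, 2.0, 0.8])   # N*m from the torque sensors
F = contact_force(J, tau)          # contact force in the leg frame
```

Solving the linear system directly avoids forming the explicit inverse of J^T, which is both cheaper and numerically safer.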
Besides, a point detected in the camera view can be transformed into the global coordinate system as

Gp = GBT · BVT · Vp  (3)

where Gp is the point in the global coordinate system, Vp is the point in the vision coordinate system, GBT is the transformation matrix from the body coordinate system to the global coordinate system and BVT is the transformation matrix from the vision coordinate system to the body coordinate system.
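The chained frame transformation above can be sketched with 4x4 homogeneous transformation matrices. The offsets used below (camera mounted 0.2 m ahead of the body origin, body translated 1.0 m along the global x-axis, no rotation) are illustrative assumptions, not the robot's calibration values.

```python
import numpy as np

def to_global(T_GB, T_BV, p_V):
    """Map a point from the vision frame to the global frame.

    T_GB: 4x4 homogeneous transform, body -> global.
    T_BV: 4x4 homogeneous transform, vision -> body.
    p_V:  3-vector in the vision coordinate system.
    """
    p_h = np.append(p_V, 1.0)            # homogeneous coordinates
    return (T_GB @ T_BV @ p_h)[:3]

# Illustrative transforms: pure translations, no rotation
T_BV = np.eye(4); T_BV[0, 3] = 0.2       # camera offset in body frame
T_GB = np.eye(4); T_GB[0, 3] = 1.0       # body pose in global frame
p_G = to_global(T_GB, T_BV, np.array([0.5, 0.0, 0.3]))
# p_G = [1.7, 0.0, 0.3]
```

In practice T_GB comes from the robot's state estimator (IMU plus leg odometry) and T_BV from the extrinsic calibration of the camera.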
The control framework is shown in Figure 3. The human gives the control command by human machine interface (HMI) to change the motion direction and velocity of the robot. The vision sensors detect obstacles and the obstacle avoidance method affects the robot trajectory. Besides, the force sensors are used to classify the terrain to improve the performance of the robot.

Methodology
Both obstacle avoidance and terrain identification play an important role in ensuring the safety of the robot. In detection and rescue missions, both static and dynamic obstacles can damage the robot. Therefore, we introduce a dynamic obstacle avoidance method in Section 3.1. This method takes full account of the motion ability of the hexapod robot and formulates the task as an optimization problem. Besides, the terrain geometric type can serve as a feedforward module in the control system to improve the dynamic response of the robot; we address this in Section 3.2. Finally, the terrain-leg impedance model is introduced in Section 3.3.

Obstacle Avoidance and Motion Planning
The main idea of this method is as follows. The robot's desired velocity, from an upper-level controller or a human command, may lead to collisions. The robot then needs to replan a new collision-free movement that accomplishes the desired movement as closely as possible. However, the ability to move in different directions depends on the mechanical layout of the robot, which means the maximum speed varies with the movement direction. Therefore, the motion planning can be regarded as an optimization problem.
The robot uses a LIDAR to obtain a point cloud of the surrounding environment. However, the point cloud from the LIDAR is sparse. To obtain more information in the region of interest, one RGBD camera performs three-dimensional object recognition and target tracking, and the other obtains depth images of the ground in front of the robot. Based on calibration results from our earlier work, the information captured by each sensor is fused into the robot coordinate system.

Figure 4: Obstacle avoidance method
After fusing the data from the RGBD cameras and the LIDAR, the surrounding obstacles can be represented as cylinders with four parameters: the center point of the circle projected onto the ground plane (x, z), the radius r and the height h of the cylinder. As shown in Figure 4, the robot is represented by six orange cylinders. The Euclidean distance dpq between the pth robot cylinder and the qth obstacle cylinder is calculated. The hexapod robot has different motion abilities along different directions. To ensure both safety and speed, the motion planning is represented as an optimization problem:

max_i  ui · ut    subject to  dpq ≥ dsafe for all p, q  (4)

where ut is the current forward velocity, ui is the velocity vector of candidate direction i, and dsafe is the minimum safe distance.
Based on this method, the robot chooses the fastest motion that avoids collisions. The calculation takes about 100 ms, which is less than one gait cycle.
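A minimal sketch of this direction-sampling optimization is given below. It assumes planar projections of the cylinders and a hypothetical direction-dependent speed-limit function `v_max_fn` standing in for the robot's anisotropic mobility; the sampling resolution, lookahead time and safety distance are illustrative, not the robot's actual values.

```python
import numpy as np

def plan_velocity(u_des, robot_pts, obstacles, v_max_fn,
                  d_safe=0.3, dt=0.5, n_dirs=36):
    """Pick the collision-free velocity closest to the desired one.

    u_des:     desired planar velocity (2-vector) from the operator.
    robot_pts: centers of the cylinders approximating the robot, (N, 2).
    obstacles: list of (center_xz, radius) cylinders from the sensors.
    v_max_fn:  direction-dependent speed limit (assumed interface).
    """
    best, best_score = np.zeros(2), -np.inf
    for ang in np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False):
        d = np.array([np.cos(ang), np.sin(ang)])
        u = d * min(np.linalg.norm(u_des), v_max_fn(d))
        moved = robot_pts + u * dt          # one-step lookahead
        clear = all(np.linalg.norm(moved - c, axis=1).min() - r >= d_safe
                    for c, r in obstacles)
        score = u @ u_des                   # reward progress toward u_des
        if clear and score > best_score:
            best, best_score = u, score
    return best   # zero vector if every direction would collide

# Usage: one obstacle 2 m ahead, uniform 0.4 m/s speed limit
u = plan_velocity(np.array([0.4, 0.0]),
                  np.array([[0.0, 0.0]]),
                  [(np.array([2.0, 0.0]), 0.3)],
                  lambda d: 0.4)
# the desired forward direction is clear, so u stays [0.4, 0.0]
```

Sampling a fixed number of directions keeps the worst-case runtime bounded, which matches the roughly 100 ms budget per gait cycle mentioned above.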

Terrain Geometrical Type
As shown in Figure 5, we mainly focus on six terrain geometry types in this paper: flat floor, upslope, downslope, ditch, step-on and step-down. The robot can walk more smoothly if it knows which type of terrain lies in front of it. As shown in Figure 6, the terrain is discretized into many small cylinders, and each cylinder is simplified to a point in the side view. A cubic spline interpolation connects these points, and the geometrical type is decided from the resulting curve.
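A simplified version of this geometry classifier can be sketched as follows. It replaces the cubic spline with a dense linear resample and uses hypothetical slope and step thresholds; the paper does not specify its decision rules, so this is only an approximation of the idea.

```python
import numpy as np

def classify_terrain(x, h, slope_th=0.15, step_th=0.08):
    """Heuristic terrain-type decision from a discretized height profile.

    x, h:     sample positions and heights of the terrain ahead (meters).
    slope_th: average gradient above which the terrain counts as a slope.
    step_th:  height change above which an abrupt change counts as a step.
    Thresholds are illustrative assumptions, not taken from the paper.
    """
    xs = np.linspace(x[0], x[-1], 200)       # dense resample of the curve
    hs = np.interp(xs, x, h)
    dh = np.diff(hs) / np.diff(xs)           # local slope along the curve
    rise = hs[-1] - hs[0]                    # net height change
    dip = hs.min() - max(hs[0], hs[-1])      # depth below both endpoints
    if dip < -step_th and abs(rise) < step_th:
        return "ditch"                       # drops down, then back up
    if np.abs(dh).max() > 5 * slope_th:      # near-vertical height change
        return "step-on" if rise > 0 else "step-down"
    if rise > slope_th * (xs[-1] - xs[0]):
        return "upslope"
    if rise < -slope_th * (xs[-1] - xs[0]):
        return "downslope"
    return "flat floor"

# Usage: an abrupt 10 cm rise reads as a step-on
label = classify_terrain(np.array([0.0, 0.49, 0.51, 1.0]),
                         np.array([0.0, 0.00, 0.10, 0.1]))
```

The ditch test is checked first because a ditch also contains steep local slopes that would otherwise trigger the step branches.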

Ground properties
To describe the ground material in detail, many features (stiffness, damping, cohesion, friction, surface irregularity and so on) need to be considered together. It is difficult to measure all these parameters using only the sensors installed on the robot. However, knowing the main features is sufficient to identify the ground and to improve the performance of the robot accordingly. To extract the main features of the ground and reduce the amount of computation, we regard the ground as a spring-damper and identify the ground type from its stiffness. Meanwhile, leg impedance control is used. The leg-terrain impedance model and the stance phase are shown in Figure 7; the whole model can be regarded as two spring-damper systems in series, where subscript l represents the leg and subscript f represents the foot. The foot mass is ignored because it is much lighter than the leg. The hip position change is denoted Δy and the tip position change Δl_l, so that

k_l Δl_l = k_g Δl_g,   Δy = Δl_l + Δl_g  (7)

where Δl_g is the ground deformation. Since the main purpose of this method is to classify terrain types, the damping is ignored and the stiffness can be calculated as follows.
Since the leg and ground springs act in series,

1/k = 1/k_l + 1/k_g

where k is the whole stiffness, k_l is the leg stiffness and k_g is the ground stiffness. We used a laser tracker to record the hip position and calculated the tip position from the encoder readings. We tested the robot walking on different terrains with two kinds of gaits, and the results are listed in Table 1. Based on these results, concrete and foam can be clearly distinguished with both gaits. Therefore, we take the data from the touch-down phase and calculate the terrain-leg stiffness to identify the terrain. The result is shown in Table 2. Three kinds of terrain (concrete, foam and wood) can be identified clearly from the stiffness measured by the on-board sensors. After classifying the terrain type, the parameters of the leg impedance control are adjusted autonomously, and the robot's vibration is reduced with the modified control parameters.
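Under the series-spring assumption, the ground stiffness can be recovered from the measured whole stiffness once the leg stiffness is known. The numbers below are illustrative values, not measurements from the robot.

```python
def ground_stiffness(k_total, k_leg):
    """Recover the ground stiffness from the measured whole stiffness.

    The leg and ground act as two springs in series (damping ignored):
        1/k_total = 1/k_leg + 1/k_ground
    so  k_ground = k_total * k_leg / (k_leg - k_total).
    """
    return k_total * k_leg / (k_leg - k_total)

# Illustrative values (N/m): a soft ground lowers the whole stiffness
k_leg = 8000.0
k_total = 2000.0        # measured from hip displacement vs. contact force
k_g = ground_stiffness(k_total, k_leg)
```

Note that k_total is always smaller than both individual stiffnesses, so the softer element (foam versus concrete) dominates the measurement, which is why the terrains in Table 2 separate cleanly.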

Experiments
In this section, we present three experiments to validate our method. The first experiment validates the obstacle avoidance method. The robot started walking at the entrance of a maze and autonomously walked out of it. The robot trajectory is shown in Figure 8 and the process in Figure 9. The blue lines represent the walls of the maze, which is about five meters long and three meters wide. The robot center is marked as a red point, and the green and yellow circles represent the safety range of the robot. The results show that the robot can walk through narrow passages without collisions, demonstrating the effectiveness of the avoidance method.
The second experiment is shown in Figure 10. The robot first traveled through obstacles and then walked over a step. After detecting the step, the robot adjusted its position in front of the step and switched to another gait to walk over it. This experiment validates both the obstacle avoidance and the terrain geometry identification. In the third experiment, the robot walked on the three kinds of ground material mentioned in Section 3.3, with the velocity set to zero. To compare the robot's performance with and without the parameter adjustment based on terrain identification, the result of walking on foam is shown here. At first, the robot walked with its default parameters while we collected acceleration data. Then the terrain identification was enabled, and the leg stiffness was changed based on the identification result. The y-axis acceleration data is shown in Figure 11(a); to view the vibration more clearly, a boxplot is shown in Figure 11(b). The results show that the vibration was reduced after terrain identification started: because the terrain type was classified correctly, the control parameters were changed and the performance of the robot improved.

Conclusions
This paper proposed an obstacle avoidance and terrain identification method for hexapod robots. The obstacle avoidance method takes full account of the robot's mobility. Terrain identification is separated into two parts: terrain geometry identification and terrain property classification. The proposed method was integrated on a real six-legged robot and validated in several experiments, which demonstrated its effectiveness. Our method and experimental results can serve as a valuable reference for other legged robots, especially those used in outdoor environments. In future work, we will improve the dynamic performance and environmental adaptability of the robot, and further optimize our method to make the robot smarter.