Research on a Body Positioning Measurement Method for a Bolting Robot Based on Vision Theory

In view of the demand for intelligent roadway support underground and the precise positioning required by unmanned fully mechanized faces, a body positioning measurement method for a bolting robot based on the principle of monocular vision is proposed. A vehicle body positioning model based on image data is established: the data are acquired by a camera, and the transformation between image coordinates and world coordinates is completed through coordinate system transformation. A monocular vision positioning system for the bolting robot is designed, a simulation experimental model is built, and the effective positioning distance of the monocular vision positioning system is measured under the simulation experimental conditions. An experimental platform for the bolting robot is then designed, real-time body positioning data are measured, the experimental errors are analyzed, and the reliability of the method is demonstrated. With this method, the bolting robot achieves real-time localization in the underground mine with improved accuracy and efficiency, laying a foundation for positioning control at the mining face and for the automation and unmanned operation of the bolting robot.


Introduction
Coal is a strategic energy source in China, and 51.34% of total coal resources are buried at depths greater than 1000 m (Lan et al. 2016). The risk coefficient of deep coal seams is very high, which poses a great threat to the life safety of miners. Casualty accidents occur frequently during mining, and about 70% of them occur in the mining operation area (Wei et al. 2019). It is therefore of great importance to study unmanned technology for the fully mechanized face (He 2007).
By the end of 2020, 550 intelligent fully mechanized mining faces had been built nationwide, yet there is still no completely successful example of an intelligent fully mechanized mining face, and the development of mining intelligence remains unbalanced. The key breakthrough needed is intelligent tunneling: supporting and anchoring account for a large proportion of driving-and-anchoring time, so the research and development of drilling-and-anchoring robots urgently needs to be addressed.
The bolting robot is one of the core pieces of equipment at the driving face. Its support time is directly related to the life safety of the staff and to the efficiency of tunnel driving (Zhang 2010). The research must therefore strive to improve the level of intelligence and automation of the bolting vehicle (Fu et al. 2015). The key technology for realizing an unmanned comprehensive driving face is autonomous positioning and cruising of the bolting robot. To realize cruise operation, the bolting robot must first be positioned; that is, the relationship between the body coordinate system and the tunnel coordinate system must be determined. This is body positioning.
In recent years, institutions of higher learning and related scientific research institutions have carried out research on body positioning measurement algorithms for mobile equipment in mine tunnels and obtained some achievements. According to the positioning method, current positioning and measuring systems for underground mobile equipment include UWB-based systems, iGPS-based systems, machine-vision-based systems and multi-sensor combined methods. The UWB-based positioning measurement system (Wu et al. 2015) realizes self-positioning and orientation of the tunnel boring machine through two-way time-of-flight ranging, but its four base stations occupy a large space during measurement and easily interfere with the supporting equipment of the tunnel boring machine in a confined, closed tunnel, making it difficult to realize under actual conditions. The iGPS-based positioning measurement system (Du et al. 2016; Huang et al. 2017) obtains the position and attitude parameters of the tunnel boring machine through spatial intersection measurement; this method has high autonomy and strong resistance to obstruction. The machine-vision-based positioning measurement system (Ma et al. 2008; Cao et al. 2017; Tian et al. 2010) uses an explosion-proof camera to collect images of a cross laser projected onto a target on the fuselage; however, the light path of the cross laser is long and the dust concentration during tunneling is high, so the light is blocked by dust and image collection is affected. The multi-sensor combined positioning measurement method (Zhao 2013; Cao et al. 2017; Shang et al. 2013) involves the fusion of several kinds of data, which increases the complexity of the positioning measurement system.
The above methods are mainly applied to the positioning of tunnel boring machines, shearers and rock drilling robots; there is little research (Wang 2015; Zhi et al. 2018; Li 2012) on the positioning of bolting robots. With the rapid development of computer-vision image enhancement technology (Miao et al. 2017), applying vision positioning technology under special underground working conditions has become feasible (Du et al. 2016), especially at fully mechanized faces with poor light, severe dust, water mist and vibration hazards. At the same time, the bolts in the roadway roof and side plates can serve as the basis for positioning and map navigation for the robot.
To meet the needs of unmanned anchoring at the fully mechanized mining face, and to address the problem of fast and accurate positioning of the bolting robot, this paper proposes an autonomous body positioning system for the bolting robot based on monocular vision. The system uses a CCD camera installed on the vehicle to collect the distribution of roof bolts in the roadway. Through the extraction of image feature points and line fitting, combined with a visual positioning estimation mathematical model, the system completes self-positioning of the robot body in an ideal tunnel environment. A monocular vision positioning system test platform is built to verify that the positioning system meets the requirements of autonomous, rapid and accurate positioning.
2 Bolting robot vision positioning system

Overall measurement principle
At the driving face, the roadway is supported by anchor rods and anchor nets. At present, drilling and anchoring are mainly carried out manually, with a high risk factor; intelligent drilling-and-anchoring support is the ultimate goal of intelligent tunneling. Based on the inherent structure of the anchor net and anchor rods in the roadway, anchor rod features are recognized and the anchor rods are tracked and located, which provides the visual positioning basis for the bolting robot. The specific tunneling face environment is shown in Figure 1.

Fig. 1 Tunneling workface
The vision positioning system of the bolting robot consists of a CCD camera, a laser rangefinder, the bolting robot, anchor rods, a computer system, a supplementary light source, etc. The bolting robot carries out support operations while moving, drilling anchors into the roadway roof and leaving the anchor ends exposed. The CCD camera is arranged at the rear of the bolting robot to photograph the environment at the top of the tunnel, while the laser rangefinder synchronously measures the distance from the camera to the roof. The computer system receives the image data from the CCD camera and the depth data from the laser rangefinder, and calculates the displacement of the current position relative to the previous data acquisition position. By accumulating these relative displacements, positioning of the machine during the driving process is realized.
The computer system realizes the conversion from image data to location measurement data. The principle of ranging conversion is shown in Fig. 2. At time i, the camera captures the i-th image, which is compared with the image taken at position i-1. The pixel coordinates of the same feature point differ between the i-1th and i-th images, so each point has a group of image coordinate data in each of the two images. By combining the image coordinates of multiple feature points, the change in image-plane angle and displacement is determined and relative localization is realized, as shown in Fig. 3. The line AB in the figure represents the change of heading angle, and the plane formed by A, B and C represents the coordinate-system offset and rotation between image i and image i-1.
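The frame-to-frame estimation described above can be sketched in a few lines. The function below recovers the in-plane rotation and pixel translation from two matched feature points, assuming pure planar motion; the two-point simplification and the names are illustrative, not the paper's exact algorithm.

```python
import math

def relative_motion(p1_prev, p2_prev, p1_curr, p2_curr):
    """Estimate the in-plane rotation and pixel translation between two
    images from the pixel coordinates of the same two feature points.
    All arguments are (u, v) pixel coordinates; a sketch assuming the
    motion is a planar rigid transform."""
    # Deflection angle of the line through the two feature points in each image
    phi_prev = math.atan2(p2_prev[1] - p1_prev[1], p2_prev[0] - p1_prev[0])
    phi_curr = math.atan2(p2_curr[1] - p1_curr[1], p2_curr[0] - p1_curr[0])
    d_phi = phi_curr - phi_prev  # change of heading angle between frames

    # Rotate the previous point by d_phi; the residual offset is the
    # translation of the pixel coordinate system
    c, s = math.cos(d_phi), math.sin(d_phi)
    rx = c * p1_prev[0] - s * p1_prev[1]
    ry = s * p1_prev[0] + c * p1_prev[1]
    du, dv = p1_curr[0] - rx, p1_curr[1] - ry
    return d_phi, (du, dv)
```

In practice many feature points would be matched and the motion estimated by a least-squares fit over all of them, which averages out per-point extraction noise.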

Bolting robot positioning coordinate system
Positioning the bolting robot mainly means determining its position and attitude relative to the tunnel, that is, determining the position of the origin of the fuselage coordinate system in the tunnel coordinate system and the rotation angle of the fuselage coordinate system relative to the world coordinate system. In visual positioning, the relationships among the world coordinate system, the fuselage coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system must be determined. The coordinate systems are shown in Fig. 4 (Ma et al. 2008). They are established as follows. The tunnel coordinate system O_w X_w Y_w Z_w is established on the roof, with its origin O_w at the projection point on the roof of the laser pointer origin of the heading machine. The tunnel coordinate system depends on the specific driving route: the O_w Z_w axis points in the direction of the laser pointer, i.e. the driving direction; the O_w X_w axis is perpendicular to the driving direction, horizontally to the right; and the O_w Y_w axis is perpendicular to the O_w X_w Z_w plane. The camera coordinate system O_c X_c Y_c Z_c has its origin O_c at the camera optical center; the O_c Z_c axis points right along the image plane, the O_c Y_c axis points upward along the optical axis center line with the camera-to-scene direction positive, and the O_c X_c axis points down along the image plane. The body coordinate system O_b X_b Y_b Z_b coincides with the camera coordinate system. The image coordinate system XOZ has its origin O at the image center, with OZ pointing right along the image plane and OX pointing down the image plane. The pixel coordinate system O_f UV has its origin O_f at the upper left corner of the image, with U pointing right along the image plane and V pointing down the image plane.
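As a concrete illustration of the frame chain, the sketch below rotates a camera/body-frame point about the vertical axis and translates it into the tunnel frame. The single-axis rotation reflects the ideal-roadway assumption that the camera stays level; the names are illustrative.

```python
import math

def camera_to_tunnel(p_cam, heading, t):
    """Transform a point from the camera/body frame into the tunnel
    frame: rotate about the vertical Y axis by the heading angle
    (radians), then add the frame-origin offset t.  A sketch of the
    rigid transform between the two coordinate systems."""
    xc, yc, zc = p_cam
    c, s = math.cos(heading), math.sin(heading)
    # Rotation about Y, then translation
    xw = c * xc + s * zc + t[0]
    yw = yc + t[1]
    zw = -s * xc + c * zc + t[2]
    return xw, yw, zw
```

With a zero heading the transform reduces to a pure translation, which is a quick sanity check on the sign conventions.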

Visual positioning estimation model of bolting robot
As shown in Fig. 3, let the pixel coordinates of feature point 1 in adjacent images be (u_i1, v_i1) and (u_(i+1)1, v_(i+1)1), and those of feature point 2 be (u_i2, v_i2) and (u_(i+1)2, v_(i+1)2). Let φ_(i-1) and φ_i be the deflection angles of the line formed by feature points 1 and 2 in the two images, and θ_(i-1) and θ_i the heading angles of the bolting robot when the adjacent images are collected, so that the changes between adjacent images are Δφ = φ_i − φ_(i-1) and Δθ = θ_i − θ_(i-1). Then:

(1) The relationship between the i-1th acquisition pixel coordinate system and the i-th acquisition pixel coordinate system is a rotation through Δφ:

[u_i]   [cos Δφ  −sin Δφ] [u_(i-1)]   [Δu]
[v_i] = [sin Δφ   cos Δφ] [v_(i-1)] + [Δv]

(2) The translation vector between the two sampling pixel coordinate systems follows by substituting the matched feature-point coordinates:

[Δu]   [u_i]   [cos Δφ  −sin Δφ] [u_(i-1)]
[Δv] = [v_i] − [sin Δφ   cos Δφ] [v_(i-1)]

Combined with the camera model, the pixel translation is scaled into the relative displacement under the tunnel coordinate system, and the final position relative to the roadway is obtained by accumulating the relative displacements.

Bolting robot positioning parameter
For positioning relative to the tunnel coordinate system, the bolting robot is simplified as a rigid body. ΔZ represents the forward-direction error between the geodetic coordinate system and the body coordinate system, ΔX represents the horizontal-direction error between the two, and θ represents the heading angle error; these are the three positioning parameters in the ideal tunnel. The target body pose parameters are listed in Table 1. According to the 2018 technical code for bolt support of coal mine roadways, the row spacing error between bolt holes shall not exceed 100 mm, and the bolt strike error shall not exceed 5°. For a roadway with a width of 5.5 m and a height of 3.6 m, the anchor bolts on both sides of the roof shall be no more than 250 mm from the roadway edge, and the row spacing of anchor bolts shall be 800 mm. The national standard gives no limit on the deviation angle of a whole row; calculated from the roadway section size, the angle error of a whole row of anchor bolts shall not exceed 2.29°, and the analysis process is shown in Fig. 6, in which the circles represent the bolt ends. Under actual working conditions, the bolt end spacing is about 700 mm, so the average forward-direction error between adjacent bolts is 33 mm and the average horizontal-direction error is 13 mm.
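The 2.29° bound above can be checked with a few lines of arithmetic. This is a hedged reconstruction: it assumes the bound comes from opposite 100 mm end errors across the 5000 mm span between the edge bolts (5.5 m width minus 250 mm on each side).

```python
import math

# Span between the two edge bolts of one row on the roof
span_mm = 5500 - 2 * 250            # = 5000 mm

# Opposite-direction 100 mm position errors at the two ends of the row
# tilt the whole row by atan(200 / span)
max_tilt_deg = math.degrees(math.atan(2 * 100 / span_mm))
print(round(max_tilt_deg, 2))       # ≈ 2.29
```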

Camera model
The camera model realizes the transformation from the image coordinate system to the geodetic coordinate system, comprising translation and rotation transformations.
Let the coordinates of a space point P be (X_w, Y_w, Z_w) in the tunnel coordinate system, (X_c, Y_c, Z_c) in the camera coordinate system, (X, Z) in the image coordinate system and (u, v) in the pixel coordinate system.
The relationship between the pixel coordinate system and the image coordinate system is

u − u_0 = k_x X,  v − v_0 = k_z Z

where (u_0, v_0) is the principal point and k_x, k_z are the pixel densities along the two image axes. In the relationship between the image coordinate system and the camera coordinate system, f is the effective focal length of the camera:

X = f X_c / Y_c,  Z = f Z_c / Y_c

The relationship between the camera coordinate system and the tunnel coordinate system is the rigid transformation

[X_c, Y_c, Z_c]^T = R [X_w, Y_w, Z_w]^T + T

where R is the rotation matrix and T the translation vector between the two frames. Combining the above formulas gives the relationship between the pixel coordinate system and the roadway coordinate system. In the ideal tunnel model, the floor is parallel to the roof and the roof height is the same at every point, so Y_c is constant; letting k = k_x f = k_z f, ρ = Y_c / k, u_d = u − u_0 and v_d = v − v_0, the tunnel coordinates of a roof point P can be expressed as P(X_w, Z_w, 1) with

X_w = ρ u_d,  Z_w = ρ v_d

which is the relationship between the pixel coordinate system and the tunnel coordinate system.
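Under the ideal-roadway assumption the pixel-to-tunnel mapping thus reduces to a scaling by ρ = Y_c / k. A minimal sketch (the parameter values in the usage are illustrative):

```python
def pixel_to_tunnel(u, v, u0, v0, k, y_c):
    """Map a roof-point pixel coordinate to tunnel-plane coordinates
    under the ideal-roadway assumption (flat roof at constant camera
    height y_c).  k plays the role of k_x*f = k_z*f from the text, so
    rho = y_c / k is the metric size of one pixel on the roof plane."""
    rho = y_c / k
    u_d, v_d = u - u0, v - v0   # pixel offsets from the principal point
    X_w = rho * u_d             # lateral tunnel coordinate
    Z_w = rho * v_d             # forward tunnel coordinate
    return X_w, Z_w
```

For example, with k = 1000 and a camera height of 2636 mm, one pixel corresponds to about 2.6 mm on the roof, so a 100-pixel offset maps to roughly 264 mm.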
3 Image processing and feature point recognition

Image preprocessing
Because the bolt end on the roadway roof appears as a black circle, the black edge of the bolt end must be extracted in order to obtain the feature points; the circular feature of the bolt end is taken as the detection target and its center is located. The interior of the roadway is dark, the dust concentration is high, and there is considerable water vapor. In addition, the vehicle body vibrates at low frequency while the bolting robot works, producing image blur that can easily cause the final feature extraction to fail. For these reasons, image preprocessing is the first step in improving the accuracy of feature extraction and the positioning accuracy of the vision system (Wang et al. 2011; Comer et al. 1963). In this paper, to enhance the contrast of the bolt-end feature points against the roadway surface (Apedo et al. 2009) and make them clearer, the histogram equalization algorithm is used to enhance the image (Wang et al. 2012; Thomas et al. 2013; Davids et al. 2007), and the image is filtered with the least-mean-square-error filter, i.e. the Wiener filter (Fay et al. 2000), so as to suppress the effects of low-frequency vibration and motion blur of the bolting robot.
In view of the variability of camera lens fields of view, and to ensure that different wide-angle cameras give consistent results, the RANSAC algorithm is used to correct image distortion (Lukasiewicz et al. 1984). The purpose of distortion correction is to compensate for the edge distortion caused by the wide-angle lens: by calculating and compensating the edge pixels, the distortion caused by the camera's internal parameters can be removed. The distortion correction effect is shown in Fig. 8; Fig. 8(a) shows the wide-angle image before correction, and Fig. 8(b) the image after correction. Because the color of the coal surface is similar to that of the bolt end, the contrast between the two is low, which is unfavorable for feature extraction of the bolt end. Therefore the histogram equalization algorithm is used to enhance the image so that the bolt end shows a higher-brightness gray value, in preparation for feature extraction.
Histogram equalization redistributes the gray levels of the image through a nonlinear stretching of the pixel values, so that the gray levels are roughly evenly distributed over the same wide range, and the uniformity of the distribution can be adjusted as needed (Lukasiewicz et al. 1984). Fig. 8 shows the histogram of the image gray distribution, and the image enhancement effect is shown in Fig. 9(a)(b). To address the image blur caused by low-frequency vibration and motion of the bolting robot, the image is filtered with the least-mean-square-error filter, i.e. the Wiener filter (Le et al. 2018). The Wiener filtering algorithm estimates the blurred image and minimizes the root-mean-square error between the estimated image and the real image. The algorithm has achieved good results in mine image restoration, with fast processing speed and good robustness. The effect is shown in Fig. 9(c).
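The Wiener idea, pulling each sample toward its local mean in proportion to the estimated noise share of the local variance, can be sketched in 1-D for brevity (the 2-D case applies the same local statistics over a pixel neighborhood). This is a simplified local-statistics variant, not the restoration filter tuned for the mine images.

```python
def wiener_filter_1d(signal, window=3, noise_var=None):
    """Local-statistics Wiener filter in 1-D.  Each output sample is the
    local mean plus the input's deviation from it, attenuated by the
    fraction of local variance attributed to the true signal."""
    n = len(signal)
    half = window // 2
    means, varis = [], []
    for i in range(n):
        win = signal[max(0, i - half):min(n, i + half + 1)]
        m = sum(win) / len(win)
        means.append(m)
        varis.append(sum((x - m) ** 2 for x in win) / len(win))
    if noise_var is None:
        # Estimate the noise power as the mean local variance
        noise_var = sum(varis) / n
    out = []
    for x, m, v in zip(signal, means, varis):
        gain = max(v - noise_var, 0.0) / v if v > 0 else 0.0
        out.append(m + gain * (x - m))
    return out
```

A constant signal passes through unchanged (gain 0 everywhere), while isolated spikes are pulled toward the local mean, which is the smoothing behavior wanted against vibration noise.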

Identification of feature points
The preprocessed image is processed for feature recognition. Combined with the circular characteristics of the bolt end, the bolt end is detected based on the Hough transform, and the circular boundary and center coordinates are extracted. Based on the least squares method and the RANSAC algorithm, the anchor-head feature points extracted from the image are fitted with a straight line that serves as the relative positioning reference line (Li et al. 2015). As shown in Fig. 10(a), the white circles are the identified feature points, the red circles at the circle edges represent the fitted feature circles, and the red line is the line fitted through the centers of the fitted circles. Fig. 10(b) shows an enlarged view of part of the green box.

4 Simulation experiment

The experiment environment and mobile-car experiment system are set up. A mobile car is used to simulate the bolting robot and its vision system, a corridor is used to simulate the tunnel environment, and the bolt heads are marked with yellow features. The layout of the simulation experimental scene and the experimental process are shown in Fig. 13.
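The RANSAC fit of the reference line through the bolt-center points can be sketched as a basic sample-and-score loop (an illustrative sketch assuming a non-vertical reference line, not the paper's exact implementation):

```python
import random

def ransac_line(points, n_iters=200, inlier_tol=2.0, seed=0):
    """Fit a line through noisy bolt-center points with a basic RANSAC
    loop: repeatedly sample two points, count how many points fall
    within inlier_tol of the implied line, and keep the best model.
    Returns (slope, intercept) of the best-supported line."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:            # skip vertical candidate lines
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # Perpendicular distance of each point from y = m*x + b
        inliers = sum(1 for (x, y) in points
                      if abs(m * x - y + b) / (m * m + 1) ** 0.5 <= inlier_tol)
        if inliers > best_inliers:
            best, best_inliers = (m, b), inliers
    return best
```

In practice a final least-squares refit over the inlier set would follow; the RANSAC stage exists to keep a single mis-detected bolt center from dragging the reference line.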
The corridor simulates the tunnel, black paper simulates the dark background at the top of the tunnel, and yellow reflective circles simulate the bolt ends; the experiment is carried out in a dark environment. The mobile car simulates the robot's movement, and the CCD camera is mounted on the car to collect data on the roadway roof while the laser rangefinder synchronously measures the depth for each image. An experimental light source is arranged to simulate the actual light source in the dark environment, and the camera and computer are connected by data cables to acquire the image data. The computer processes the data and obtains the feature-point measurement results, the fitted heading-angle line, the step displacement and other data. The computer software implements the data processing described above and recovers the position change through image processing, data extraction, coordinate system conversion and so on.
During the experiment, control data were collected. The comparison data include the spatial positions of the roof bolt ends and the spatial position of the bolting robot at each image and depth acquisition. In addition, the starting position in the forward direction of the bolting robot is set as the origin of the world coordinate system, and the spatial position conversions among the world, body and camera coordinate systems are processed simultaneously. At each data acquisition position, the horizontal position, forward position and heading angle of the bolting robot are measured. To ensure that the optical axis of the camera is vertical, a horizontal collimator is used to level the camera lens, as shown in Fig. 13.
By comparing the experimental data with the measurement data of the bolting robot experimental platform, the errors in the horizontal direction, forward direction and heading angle of the bolting robot are obtained, and error analysis and feasibility analysis of the experimental results are carried out.

Experimental data processing
The position of the experimental car is recorded during its movement, and image data are collected by the vision system. Using the positioning method, the traveled distance and angle of the mobile car are calculated on the computer, and the car's track is fitted by the accumulation positioning method. During a single run, 5 images are taken at each fixed point, the image data are averaged, and the positioning position and measured position of each point are calculated; the sorted data are given in the table below. MATLAB is used to analyze the data, and the heading angle error and the X- and Z-direction errors are calculated. The heading angle positioning error increases with the positioning distance, but within a range of 17250 mm the maximum heading angle positioning error is 1.845°, so the positioning accuracy meets the positioning requirements. Under the influence of the X- and Z-direction positioning errors, the measured track of the vision positioning system deviates toward the negative X-axis.
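The accumulation positioning used for the track fitting can be sketched as planar dead reckoning: each step's relative measurement (heading change plus a body-frame displacement) is rotated into the world frame and summed. This is an illustrative sketch, not the exact MATLAB processing.

```python
import math

def accumulate_track(increments, x0=0.0, z0=0.0, heading0=0.0):
    """Chain per-step relative measurements into world-frame positions.
    Each increment is (d_phi, dx, dz): heading change in radians, and
    lateral/forward displacement in the body frame.  Returns the list
    of (x, z, heading) poses including the start pose."""
    x, z, heading = x0, z0, heading0
    track = [(x, z, heading)]
    for d_phi, dx, dz in increments:
        heading += d_phi
        # Rotate the body-frame step into the world frame before summing
        x += dx * math.cos(heading) - dz * math.sin(heading)
        z += dx * math.sin(heading) + dz * math.cos(heading)
        track.append((x, z, heading))
    return track
```

Because each pose builds on the previous one, per-step errors accumulate along the run, which is why the measured track drifts with distance as reported above.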
Within 17250 mm, the Z-direction positioning error fluctuates within 100 mm, while the X-direction positioning error is less stable and shows a linearly increasing trend; within 6750 mm, the X-direction positioning error stays within 100 mm. The test shows that the Z-direction positioning accuracy is high while the X-direction positioning accuracy is lower; within a range of 6750 mm, the accuracy of the visual positioning system meets the actual positioning requirements.

Error analysis and simulation experiments
Owing to the accuracy limits of the experimental platform, optical-axis non-vertical error and depth error arise during the experiment. Error analysis and simulation experiments are carried out for these two error sources.

Optical axis non-vertical error
The line perpendicular to the focal plane of the camera and passing through the focal point is called the optical axis. A vertical optical axis means the optical axis is perpendicular to the imaged roof plane, which is the ideal experimental situation. There are many causes of a non-vertical optical axis, such as the roof not being parallel to the ground, insufficient leveling accuracy and low measurement accuracy.
Generally, the optical axis error is within 1° and its direction is random. The error range is first obtained at the ideal height.
From formula (11), where s is the torsion coefficient, the error range is obtained.
The rotation coefficient of the optical axis is obtained by combining the rotation angles of the optical axis about the X-axis and the Z-axis, where θ is the angle about the X-axis, ω is the angle about the Z-axis, and α is 1°. First, 1000 simulation experiments on single-point error were carried out, as shown in Figure 4. According to the simulation, the positioning point error has a circular distribution: the probability of the X-direction positioning error being within 8 mm is 100%, and within 4 mm is 91.4%; the probability of the Z-direction positioning error being within 4 mm is 99.8%, and within 2 mm is 72.7%. The heading angle error is very small and can be ignored. Therefore the point feature extraction error meets engineering application requirements. To explore the effect of the optical-axis non-vertical error, 1000 simulation experiments were carried out for the accumulated positioning error within 50 m, and the probability distribution of the error at 50 m was calculated. The simulation results are shown in Figure 5. Within the 50 m range, the positioning error of the monocular vision estimation increases with positioning distance; in general the error is distributed symmetrically about the zero-error position. As Figure 5a shows, the Z-direction positioning error stays within 80 mm; as Figure 5b shows, the X-direction positioning error stays within 60 mm; and as Figure 5c shows, the heading angle positioning error stays within 2°. The simulation results show that the X-direction, Z-direction and heading angle positioning errors of the monocular vision positioning system meet the positioning accuracy requirements within a range of 50 m.
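A Monte Carlo study like the one above can be sketched as follows. The geometry is deliberately simplified (the roof-plane offset of the principal point is approximated as depth · tan(tilt) in a random direction), so it reproduces the method rather than the paper's exact error magnitudes; all parameter names are illustrative.

```python
import math
import random

def tilt_error_trials(n_trials=1000, depth_mm=2636.0, max_tilt_deg=1.0, seed=1):
    """Monte Carlo sketch of the optical-axis non-vertical error: draw a
    random tilt up to max_tilt_deg about a random direction and record
    the induced (x, z) displacement of the principal point on the roof
    plane, using the small-angle offset depth * tan(tilt)."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_trials):
        tilt = math.radians(rng.uniform(0.0, max_tilt_deg))
        direction = rng.uniform(0.0, 2.0 * math.pi)
        r = depth_mm * math.tan(tilt)   # radial position error on the roof
        errors.append((r * math.cos(direction), r * math.sin(direction)))
    return errors

errors = tilt_error_trials()
max_radial = max(math.hypot(ex, ez) for ex, ez in errors)
```

The random direction is what produces the circular error distribution reported above; histogramming the x and z components of `errors` gives the kind of probability plots shown in Figures 4 and 5.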

Depth error
Depth here refers to the distance from the camera lens to the imaged roof plane. Ideally the measured depth is constant. The depth was measured 100 times during the experiment, and the results are shown in Table 3. In the ideal case the distance from the lens to the roof plane is 2636 mm; the measured depths of the 20 points range from 2604 mm to 2655 mm, a depth error of 51 mm. Based on the characteristics of monocular vision and considering the influence of depth on the measurement data, 1000 simulation tests were carried out for the single positioning error. According to the simulation results, the positioning point error varies linearly with the depth, and the error is less than 2 mm for depth errors within 51 mm.
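The linear dependence follows directly from the camera model: the per-pixel ground size ρ = Y_c / k is linear in depth, so a depth error scales the recovered coordinates proportionally. A hedged sketch with illustrative values for k and the pixel offsets:

```python
def depth_scale_error(u_d, v_d, k, depth_true, depth_measured):
    """Positioning error induced by a depth measurement error: the
    per-pixel ground size rho = depth / k changes linearly with depth,
    shifting a feature at pixel offset (u_d, v_d) proportionally."""
    rho_true = depth_true / k
    rho_meas = depth_measured / k
    d_rho = abs(rho_meas - rho_true)
    return d_rho * u_d, d_rho * v_d   # (x error, z error) in mm
```

For example, with k = 1000, the worst observed depth error of 51 mm shifts a feature 30 pixels from the principal point by only about 1.5 mm, consistent with the sub-2 mm figure quoted above.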
Taking 0.6 mm as the variation amplitude, the simulation experiment is carried out for the X- and Z-direction errors within 50 m: the Z-direction positioning error is controlled within 2 mm, the X-direction positioning error within 1.5 mm, and the heading angle error is negligibly small. Therefore the depth error has little impact on the single X- and Z-direction positioning errors and does not affect heading angle positioning; the overall positioning error is within the allowable range of engineering application requirements.

Conclusions
Through the bolting robot test platform, the experimental data were measured and error analysis was carried out. For the monocular-vision body positioning system of the bolting robot, the following conclusions can be drawn:
1. The monocular vision positioning system can realize the body positioning function: 1) image data are obtained through the camera; 2) image quality is improved through image distortion correction, image enhancement, image noise reduction and other image processing methods; 3) positioning detection of the bolting robot is realized through the feature extraction algorithm, line fitting algorithm and positioning calculation algorithm.
2. The influence of the optical-axis non-vertical error and the depth error on the experimental results meets the requirements: 1) the integrated single-point error is within 8 mm, the combined X- and Z-direction error is within 100 mm, and the accumulated heading angle error is within 1.6°, which meets the engineering requirements; 2) the comprehensive error in the horizontal and forward directions is normally distributed, and the probability of the comprehensive error being within 50 mm is more than 98%; 3) the heading angle error is affected only by the optical axis not being vertical; 4) the main error source is the non-vertical optical axis, accounting for more than 95% of the error.
The monocular vision body positioning and detection method for the bolting robot can realize the positioning function, and the error is within a reasonable range, so the method is feasible.

References
Fu, S., Li, Y., Yang, J. et al. (2015), "Research on autonomous positioning and orientation method of tunnel boring machine based on ultra-wideband technology", Journal of China Coal Society, Vol. 40 No. 11, pp. 2603-2610. doi:10.13225/j.cnki.jccs.2015.7064.
Wu, M., Jia, W., Hua, W. et al. (2015), "Autonomous measurement method for positioning of a cantilever tunnel boring machine based on spatial intersection measurement technology", Chinese Journal of Coal.