A novel method for measuring the length of hot large forgings based on machine vision system

In order to measure the length of hot large forgings, a novel method based on a machine vision system is proposed. Firstly, the light strips in the images acquired by the machine vision system are detected according to the characteristics that they are continuous and have gray-value peaks. Secondly, after the sub-pixel edge points of each light strip are calculated using an improved sub-pixel edge detection algorithm, they are sorted in ascending order of their abscissa. The three-dimensional (3D) points of each edge are then obtained separately by matching and 3D reconstruction. Finally, the 3D points of each edge are projected onto their fitting plane, and the projected points are fitted with a quintic polynomial curve. The edge point of the forging is detected according to the curvature of the fitted curve, and the distance from the starting point to the edge point is the length of the forging at the position of the light strip edge. A length measurement experiment shows that the method can be used to measure the length of hot large forgings: the relative error of the length measurement system is 0.547%, and measuring the length of a hot forging takes 6.8 s.


Introduction
Large components are critical to the manufacture of large equipment and are widely applied in the nuclear power, petrochemical, shipbuilding, and aerospace industries. Open die forging is usually employed in the manufacturing process of large components. At present, the dimensions of hot forgings in most factories are still measured by operators with hand-held calipers and mechanical gauges. The temperature of a hot forging can reach 1000 °C, which is dangerous for the operators. In order to avoid this potential harm, a great deal of research on non-contact dimension measurement of hot forgings has been carried out.
The non-contact dimension measurement methods for hot forgings are mainly divided into laser scanning methods and machine vision methods. Much research has been done on measuring the diameter of hot forgings with laser scanners. The LaCam-Forge measurement system for hot large forgings was developed by Rech et al. [1]; a laser scanner obtains point cloud data by scanning the surface of the forging several times. China First Heavy Industry Corporation (CFHI) has applied a laser scanner to obtain the 3D information of a cylindrical forging [2], whose dimensions are obtained by calculating cross sections of the laser scan data. Du and Du [3] proposed a measurement method for hot large forgings based on two-dimensional (2D) laser radar: the 2D laser radar is driven by a servo motor to scan the surface of the measured forging and obtain point cloud data, and after 3D reconstruction of the point cloud into a 3D model of the forging, its size can be measured. A method of processing laser scanning data is proposed by Fu et al. [4][5][6], in which the radially uppermost section of a ring forging is scanned horizontally by a laser scanner and the laser detector sends the information to a computer; due to the limitation of the laser scanning angle and the rotary rolling process of ring forgings, multiple scans of the forgings are required. Bokhabrine et al. [7] developed a system based on two Leica ScanStation2 time-of-flight (TOF) laser scanners for measuring the diameter of hot cylindrical metallic shell forgings during the forging process. The Leica ScanStation2 is a TOF 3D scanner designed for large scenes, and measurement is performed with a pulsed green laser. The TOF laser scanners are placed more than 15 m from the hot shell to acquire point clouds accurate enough to reconstruct a 3D view of the shell and make diameter measurements.
A spherical coordinate-based 3D laser scanning system that can be used to measure the diameter and length of hot forgings has been developed by Yu et al. [8] and Tian et al. [9]. The system consists of a TOF laser radar, a two-degree-of-freedom (2-dof) spherical parallel mechanism (SPM) scanning device, and two motors with controllers. The spherical parallel mechanism can drive the TOF laser radar as a theodolite for 3D scanning. By processing the 3D data of the surface of the forging obtained by multiple scans, the diameter and length of the hot large forging can be measured.
Machine vision systems have also been employed to measure the diameter of hot forgings. Methods for acquiring images of hot forgings were researched by Dworkin and Nye [10], who evaluated three different methods by experiment. The experimental results show that the edge of a hot forging can be extracted from a near-infrared image taken after installing an infrared-pass filter in front of a monochromatic CCD camera lens. A hot large forging size measurement system consisting of a CCD camera and a moving platform has been proposed [11,12]: the CCD camera is fixed on the mobile platform and moves with it, the edge of the forging is detected from images captured by the camera, and the dimension of the hot forging is the distance that the mobile platform translates. Ma [13] used a dimensional measurement system for stepped shaft forgings consisting of two sets of equipment, each including a CCD camera and a line laser, with one set placed on each side of the forging; after synthesizing the three-dimensional information from both sides, the size of the stepped shaft forging can be obtained. Jia et al. [14] and Wang et al. [15] proposed a spectral selection method, which can be used to obtain images of hot large forgings by eliminating the light radiated by the hot forging; by processing the light strips in the forging image, the diameter of a cylindrical forging can be calculated. A measurement system composed of two high-resolution single-lens reflex cameras and a software application is presented by Zatočilová et al. [16]. Four boundary curves are obtained by detecting the edges of the forging in the two images captured by the cameras, and the length and diameter of the shaft are calculated from the 3D model constructed from these four boundary curves.
Although much research has been done on diameter measurement of cylindrical forgings, little has been done on length measurement of hot large forgings. A non-contact length measurement method for hot large forgings based on a machine vision system is presented in this paper. The paper is organized as follows: Sect. 2 describes the composition of the machine vision system. The light strip detection method is discussed in Sect. 3. In Sect. 4, an improved sub-pixel edge detection method is employed to acquire the sub-pixel edges of the light strips. In Sect. 5, the 3D reconstruction of the sub-pixel edges is introduced. Section 6 describes the length measurement of the hot forging. Section 7 presents the conclusions.

Machine vision system composition
The machine vision system used to measure the dimensions of hot large forgings is shown in Fig. 1. It consists of two monochrome CCD cameras (Redlake ES4020), a projector (3M PD80X), two short-wavelength pass filters (cut-off wavelength 450 nm), and a data processing computer (Lenovo M425). The resolution of each monochrome CCD camera is 2048 × 2048. The projector is used in the machine vision system because the shape of the projected light strips can easily be changed, which meets the machine vision requirements of hot parts with different shapes. The short-wavelength pass filters are employed to filter out the infrared light radiated by the hot forging. The temperature of the hot forging shown in Fig. 2 is higher than 1000 °C. As shown in Fig. 2, the light strips projected on the surface of the hot forging can hardly be identified because the light radiated by the forging is too strong. The images of the hot forging acquired by the machine vision system are shown in Fig. 3; Fig. 3a, b are the images taken by the left and right cameras, respectively. In Fig. 3, the light strips projected by the projector on the surface of the hot forging are not affected by the light radiated by the hot part. The projected light strips are represented by n(w), where w = 1, 2, ⋯, 5; thus, the light strips in Fig. 3a, b are represented by n_L(w) and n_R(w), respectively.

Light strip recognition
Since the gray value of the pixels on the equipment in the workshop in Fig. 3 can even exceed the gray value of the pixels on the light strips, the light strips must be recognized. A coordinate system is established in Fig. 3a: the horizontal direction is the x-axis, the vertical direction is the y-axis, and the unit of both axes is the pixel. The coordinate of the pixel in the upper left corner of Fig. 3a is (1, 1). l_A and l_B are the two vertical lines in Fig. 3a; the distance between them is 100 pixels. The x-axis coordinates of l_A and l_B are represented by X_lA and X_lB, respectively, and their values need to be set according to the size of the hot forging and its position in the image. The gray values of the pixels within the range of l_A and l_B are shown in Fig. 4. As can be seen from Fig. 4, the gray value of the pixels on each light strip is usually larger than that of the background pixels and is continuous in the selected area. Figure 5a, b show the gray values of the pixels on l_A and l_B in Fig. 3a, respectively; the abscissa is the pixel row, and the ordinate is the gray value. In order to determine the peak points of the light strips on l_A and l_B, the following inequality is employed:

g(X, Y − e + 1) > g(X, Y − e) and g(X, Y + e − 1) > g(X, Y + e)   (1)

where g(X, Y) is the gray value of pixel (X, Y), e = 1, 2, 3, 4, and X and Y are positive integers.
The steps to find the peak points of the light strips on l_A are as follows:
Step 1: The maximum gray value of the pixels in Fig. 5a is represented by G.
Step 2: Set the gray value of the pixels in Fig. 5a whose gray value is less than G to zero. If g(X_lA, Y) is not equal to zero, formula (1) is used to determine whether the gray value of the pixels on both sides of the pixel (X_lA, Y) decreases in turn. If it does, (X_lA, Y) is considered a peak point. The number of peak points in Fig. 5a is represented by n_lA, where Y ∈ [10, 2038].
Step 3: If n_s ≤ n_lA < 2·n_s, where n_s is the number of projected light strips, then all the peak points (X_lB, Y) on l_B are detected using the same method as in step 2, and the number of peak points is represented by n_lB. Else if n_lA ≥ 2·n_s, set G = G + 5 and repeat step 2. Else if n_lA < n_s, set G = G − 5 and repeat step 2.
Step 4: If n_s ≤ n_lB < 2·n_s, the calculation ends. Else if n_lB ≥ 2·n_s, set G = G + 5 and repeat steps 2 and 3. Else if n_lB < n_s, set G = G − 5 and repeat steps 2 and 3.
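Steps 1–4 above amount to an adaptive-threshold peak search. A minimal Python sketch is given below; it is illustrative only, and the function names, the monotonicity test of formula (1) with e = 1, …, 4, and the treatment of n_s as the expected number of strips are our assumptions, not the authors' implementation:

```python
import numpy as np

def find_peaks_on_column(col, G, e_max=4):
    """Find peak rows on one vertical scan line: pixels at or above the
    threshold G whose gray values decrease in turn on both sides
    (the condition of formula (1), e = 1..e_max)."""
    v = np.where(col < G, 0, col).astype(int)
    peaks = []
    for y in range(e_max, len(v) - e_max):
        if v[y] == 0:
            continue
        if all(v[y - e + 1] > v[y - e] and v[y + e - 1] > v[y + e]
               for e in range(1, e_max + 1)):
            peaks.append(y)
    return peaks

def adaptive_peak_count(col, n_s, G0, step=5):
    """Adjust the threshold G in steps of 5 until the number of detected
    peaks lies in [n_s, 2*n_s), as in steps 2-4 (assumes the threshold
    sweep eventually succeeds)."""
    G = G0
    while True:
        peaks = find_peaks_on_column(col, G)
        if n_s <= len(peaks) < 2 * n_s:
            return G, peaks
        G = G + step if len(peaks) >= 2 * n_s else G - step
```

On the real images the search would run on columns X_lA and X_lB of Fig. 3a, with n_s the number of projected strips.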
The gray values of the peak points on l_A and l_B are shown in Fig. 5. The coordinates of the peak points on l_A and l_B are expressed as (X_lA, Y_lA(t_lA)) and (X_lB, Y_lB(t_lB)), respectively. Due to the influence of interference light, the number of peak points is usually greater than the number of light strips. As can be seen from Fig. 4, each light strip is continuous between l_A and l_B. Thus, the light strips can be recognized by using this continuity.
The coordinate of one of the peak points on l_A is represented by (X_p, Y_p). The steps of recognizing the light strips are as follows:
Step 1: Take the pixels in rows [Y_p − n_r, Y_p + n_r] of column X_p + 1, and find the coordinate (X_p + 1, Y_m) of the pixel with the largest gray value, where n_r = 6.
Step 2: According to formula (1), if the gray value of the pixels on both sides of (X_p + 1, Y_m) decreases in turn, (X_p + 1, Y_m) is considered a peak point. Otherwise, (X_p, Y_p) is considered to be interference light, and the calculation ends.
Step 3: If X_p + 1 < X_lB, let X_p = X_p + 1 and Y_p = Y_m, and repeat steps 1 and 2. Else if X_p + 1 = X_lB, (X_lA, Y_lA(t_lA)) is considered to be a peak point on a light strip.
If the number of peak points obtained according to the above steps is not equal to the number of light strips, translate the straight lines l_A and l_B to the right together by N_p pixels, where N_p = 120, and repeat the above steps. The peak point of the light strip n_L(w) on l_A is represented by p_nL(w) = (X_nL(w), Y_nL(w))^T. According to the position of the hot forging in Fig. 3a, the initial value of X_lA is set to 1330.
The peak points p_nR(w) = (X_nR(w), Y_nR(w))^T of the light strips in Fig. 3b can be recognized in the same way.
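The continuity-based tracking in steps 1–3 can be sketched as follows. This is a simplified illustration: the strict four-neighbour peak test of formula (1) is reduced here to a single-neighbour check, and all names are hypothetical:

```python
import numpy as np

def trace_strip(img, x_start, y_start, x_end, n_r=6):
    """Follow a light strip from a peak point (x_start, y_start) column by
    column up to x_end, keeping the brightest pixel within +/- n_r rows of
    the previous peak.  Returns the traced peak rows, or None if the trail
    fails the peak test (interference light)."""
    ys = [y_start]
    y = y_start
    for x in range(x_start + 1, x_end + 1):
        lo, hi = max(1, y - n_r), min(img.shape[0] - 1, y + n_r + 1)
        y_m = lo + int(np.argmax(img[lo:hi, x]))
        # a genuine strip peak must outshine the neighbouring rows
        if not (img[y_m, x] >= img[y_m - 1, x] and img[y_m, x] >= img[y_m + 1, x]):
            return None  # (X_p, Y_p) was interference light
        ys.append(y_m)
        y = y_m
    return ys
```

A peak that cannot be followed all the way to l_B is thus rejected, exactly as in step 2.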

Pixel edge detection
The Canny edge detection operator is employed to calculate the pixel edges of the light strips in Fig. 3a, b. In Fig. 3a, the upper and lower pixel edge points of the light strip n_L(w) on column X_lA are represented by eu_nL(w),lA = (Xu_lA, Yu_lA)^T and ed_nL(w),lA = (Xd_lA, Yd_lA)^T, respectively. The pixel edge points on the upper side of the light strip n_L(w) are obtained by searching the pixel edges calculated by the Canny operator on both sides of eu_nL(w),lA; they are represented by eu_nL(w),ku_nL(w) = (X_ku_nL(w), Y_ku_nL(w))^T, where ku_nL(w) = 1, 2, ⋯, num_ku_nL(w) and num_ku_nL(w) is the number of pixel edge points on the upper side of light strip n_L(w). The pixel edge points on the lower side of light strip n_L(w) are represented by ed_nL(w),kd_nL(w) = (X_kd_nL(w), Y_kd_nL(w))^T, where kd_nL(w) = 1, 2, ⋯, num_kd_nL(w) and num_kd_nL(w) is the number of pixel edge points on the lower side of light strip n_L(w).
The pixel edge points eu_nR(w),ku_nR(w) and ed_nR(w),kd_nR(w) on the upper and lower sides of the light strip n_R(w) in the right image can be detected using the same method.
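The search for the nearest edge rows above and below a strip peak can be illustrated with the sketch below. For self-containment it substitutes a simple vertical-gradient threshold for the Canny edge map used in the paper, so only the outward search from the peak row is faithful:

```python
import numpy as np

def strip_pixel_edges(gray, x0, y_peak, grad_thresh=40):
    """Find the upper and lower pixel-edge rows of one light strip on
    column x0 by searching outwards from the peak row y_peak.  A plain
    vertical-gradient threshold stands in for the Canny edge map here."""
    col = gray[:, x0].astype(float)
    grad = np.abs(np.diff(col))                 # |g(y+1, x0) - g(y, x0)|
    edge_rows = np.flatnonzero(grad > grad_thresh)
    ups = edge_rows[edge_rows <= y_peak]        # candidates above the peak
    downs = edge_rows[edge_rows > y_peak]       # candidates below the peak
    yu = int(ups[-1]) if ups.size else None     # nearest edge above
    yd = int(downs[0]) if downs.size else None  # nearest edge below
    return yu, yd
```

With a real Canny edge map, `grad` would simply be replaced by the binary edge image column.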

Sub-pixel edge detection
An improved sub-pixel edge detection algorithm is applied to calculate the sub-pixel edges of the light strips [17]. In order to calculate the sub-pixel edges of the light strips in Fig. 3a, the pixels in the 7 × 7 range centered on each pixel edge point are taken; Fig. 6 shows such a 7 × 7 neighborhood. A coordinate system oxy is established in Fig. 6, with its origin at the center of the pixel edge point, the horizontal direction as the x-axis, and the vertical direction as the y-axis. In order to calculate the sub-pixel edge, a line y = ax + b is employed to divide the pixels in Fig. 6 into two regions, where a and b are the coefficients to be solved. The gray value of a pixel in Fig. 6 is represented by G(x, y).
The barycenter of the gray values can be calculated from the coordinates and gray values of the pixels. The line connecting the coordinate origin o and the barycenter of the gray values is the normal of the line y = ax + b, represented by N. Because the line y = ax + b passing through the sub-pixel edge is perpendicular to the normal N, the slope a of the line can be obtained.
It is assumed that the gray values in the upper and lower regions of the line y = ax + b are uniform, and they are represented by W_1 and W_2, respectively. W_1 and W_2 are calculated from the gray values of the pixels in the two regions; the form of the calculation formulas depends on whether the barycenter of the gray values lies in the second and fourth quadrants or in the first and third quadrants. The sum of the gray values of the pixels in column i of Fig. 6 is represented by S_i, where i = 1, 2, ⋯, 7. In addition, S_i can also be expressed in terms of W_1 and W_2. Therefore, the intercept b of the linear equation y = ax + b can be calculated from the two expressions for S_i. The intersection of the line y = ax + b and the normal N passing through the origin o is the sub-pixel edge corresponding to the pixel edge. This method calculates the sub-pixel edge corresponding to one pixel edge point at a time. The sub-pixel edges on the upper and lower sides of light strip n_L(w) are represented by seu_nL(w),ku_nL(w) and sed_nL(w),kd_nL(w), respectively, where seu_nL(w),ku_nL(w) = (SX_ku_nL(w), SY_ku_nL(w))^T and sed_nL(w),kd_nL(w) = (SX_kd_nL(w), SY_kd_nL(w))^T.
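The barycenter-and-normal step can be sketched as follows for one 7 × 7 window. The local axis orientation (y increasing upwards) and the slope computation a = −xc/yc (undefined when yc = 0, i.e. a vertical normal) are assumptions of this sketch:

```python
import numpy as np

def subpixel_line_slope(window):
    """For a 7x7 gray window centred on a pixel edge point, compute the
    gray-value barycenter (xc, yc) in the local oxy frame (origin at the
    window centre, y upwards) and the slope a of the dividing line
    y = a*x + b, which is perpendicular to the normal N through the
    barycenter (the degenerate case yc = 0 is not handled)."""
    assert window.shape == (7, 7)
    ys, xs = np.mgrid[3:-4:-1, -3:4]   # row coords 3..-3, column coords -3..3
    g = window.astype(float)
    total = g.sum()
    xc = (xs * g).sum() / total
    yc = (ys * g).sum() / total
    a = -xc / yc                       # slope of N is yc/xc; a = -1/(yc/xc)
    return xc, yc, a
```

For a horizontal edge (bright upper half), the barycenter lies on the y-axis and the dividing line comes out horizontal (a = 0), as expected.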
The sub-pixel edge points seu_nL(w),ku_nL(w) on the upper side of the light strip n_L(w) are sorted in ascending order of their x-axis coordinates; the sorted points are represented by pseu_nL(w),ku_nL(w). The sub-pixel edge points on the lower side of the light strip n_L(w) are sorted likewise and represented by psed_nL(w),kd_nL(w).
Using the same method, the sub-pixel edge points on the upper and lower sides of the light strip n_R(w) in Fig. 3b are detected, sorted in ascending order of their abscissa, and represented by pseu_nR(w),ku_nR(w) and psed_nR(w),kd_nR(w), respectively.
The detected sub-pixel edges of light stripes are shown in Fig. 7. The magnified view in the rectangular frame in Fig. 7a is shown in Fig. 8.

Matching and 3D reconstruction
After obtaining the sub-pixel edges of the light strips, the corresponding points in the left and right images are matched using the epipolar constraint. If pseu_nL(w),ku_nL(w) is the corresponding point of pseu_nR(w),ku_nR(w), then

pseu_nR(w),ku_nR(w)^T · F · pseu_nL(w),ku_nL(w) = 0   (4)

where F is the fundamental matrix, pseu_nL(w),ku_nL(w) = (SX_ku_nL(w), SY_ku_nL(w), 1)^T, and pseu_nR(w),ku_nR(w) = (SX_ku_nR(w), SY_ku_nR(w), 1)^T. The value of F is obtained through calibration. According to the epipolar constraint in Eq. (4), the matched points pseu_nL(w),tu_nL(w) and pseu_nR(w),tu_nR(w) in the left and right images can be determined. The matched points can then be reconstructed in 3D using the internal and external parameters of the machine vision system: the internal parameter matrices of the left and right cameras, their rotation matrices, and their translation vectors, all obtained through calibration. The space points EU_tu_n(w) = (EX_tu_n(w), EY_tu_n(w), EZ_tu_n(w))^T can be computed using the camera matrices of the left and right cameras, where tu_n(w) = 1, 2, ⋯, num_tu_n(w) and num_tu_n(w) is the number of 3D points calculated using the upper sub-pixel edge points of light strip n(w).
The 3D points obtained after 3D reconstruction of the sub-pixel edge points on the lower side of the light strip n(w) are represented by Ed_td_n(w) = (EX_td_n(w), EY_td_n(w), EZ_td_n(w))^T, where td_n(w) = 1, 2, ⋯, num_td_n(w) and num_td_n(w) is the number of 3D points calculated using the lower sub-pixel edge points of light strip n(w).
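Epipolar matching and triangulation can be sketched with standard multi-view geometry. The convention x_R^T F x_L = 0, the candidate-selection rule (nearest point to the epipolar line), and the linear DLT triangulation are textbook stand-ins for the authors' implementation, and all names are hypothetical:

```python
import numpy as np

def epipolar_match(pL, cands_R, F):
    """Return the index of the right-image candidate closest to the
    epipolar line F @ xL of the left point pL (convention x_R^T F x_L = 0)."""
    xL = np.array([pL[0], pL[1], 1.0])
    l = F @ xL                          # epipolar line in the right image
    d = [abs(l @ np.array([c[0], c[1], 1.0])) / np.hypot(l[0], l[1])
         for c in cands_R]
    return int(np.argmin(d))

def triangulate(pL, pR, PL, PR):
    """Linear (DLT) triangulation of one matched pair from the 3x4
    camera matrices PL and PR."""
    A = np.vstack([
        pL[0] * PL[2] - PL[0],
        pL[1] * PL[2] - PL[1],
        pR[0] * PR[2] - PR[0],
        pR[1] * PR[2] - PR[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                 # inhomogeneous 3D point
```

The camera matrices PL and PR are assembled from the calibrated internal parameters, rotation matrices, and translation vectors mentioned above.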
The 3D points are shown in Fig. 9. The 3D coordinate system in Fig. 9 is O_3D X_3D Y_3D Z_3D, and the units of the axes are millimeters.

Length calculation
The length of the hot forging can be calculated using the 3D points EU_tu_n(w) or Ed_td_n(w). When calculating the length of the forging at the location of EU_tu_n(w), the 3D points EU_tu_n(w) are fitted with a plane, represented by Plane_u_n(w). The 2D points are acquired by projecting the spatial points EU_tu_n(w) onto the fitting plane Plane_u_n(w); they are represented by qU_tu_n(w) = (qx_tu_n(w), qy_tu_n(w))^T and are stored in the same order as the 3D points EU_tu_n(w).
The 2D points qU_tu_n(3) on the fitting plane Plane_u_n(3) are shown in Fig. 10. The coordinate system in Fig. 10 is O_2D X_2D Y_2D, and the units of the axes are millimeters. The first point in qU_tu_n(w) is considered the starting point. In order to calculate the length of the hot forging where the upper edge of light strip n(3) is located, it is necessary to determine the point in Fig. 10 where the edge of the forging is located. A quintic polynomial is employed to fit the 2D points in Fig. 10, and the edge of the forging is determined from the curvature of the fitted curve. The quintic polynomial is

qy = a_1·qx^5 + a_2·qx^4 + a_3·qx^3 + a_4·qx^2 + a_5·qx + a_6   (13)

where a_1, a_2, a_3, a_4, a_5, and a_6 are the coefficients of the quintic polynomial. The curvature at each point is compared with a threshold of 0.0022. If the curvature is greater than the threshold, the point used to calculate the curvature is considered to be the edge point, and the calculation stops. The edge point of the forging determined by calculating the curvature of the fitted curve of the 2D points qU_tu_n(3) is shown in Fig. 10. The distance between the starting point and the edge point in Fig. 10 is the length of the forging at the position of the upper edge of light strip n(3).
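The curvature-based edge search can be illustrated as below, using a least-squares quintic fit and the plane-curve curvature κ = |qy''| / (1 + qy'^2)^(3/2); the function name and the first-exceedance stopping rule are assumptions consistent with the description above:

```python
import numpy as np

def edge_by_curvature(qx, qy, kappa_thresh=0.0022, deg=5):
    """Fit the projected 2D profile (qx, qy) with a quintic polynomial and
    return the index of the first point whose curvature exceeds
    kappa_thresh (taken as the forging edge), or None if no point does."""
    c = np.polyfit(qx, qy, deg)
    d1, d2 = np.polyder(c), np.polyder(c, 2)
    for i, x in enumerate(qx):
        yp = np.polyval(d1, x)
        ypp = np.polyval(d2, x)
        kappa = abs(ypp) / (1.0 + yp ** 2) ** 1.5
        if kappa > kappa_thresh:
            return i
    return None
```

The length at this strip edge is then the Euclidean distance from the starting point to the returned edge point.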
The length of the hot forging obtained by calculating the edges of the light strips in Fig. 3 is shown in Table 1. The time required to calculate the length of the hot forging is 6.8 s.
An experiment measuring the length of a workpiece with the dimension measurement system for hot large forgings was carried out at room temperature. The experimental results show that the relative error of the length measurement is 0.547%.

Conclusions
A novel method for measuring the length of hot forgings using machine vision system is proposed in this paper.
Firstly, the light strips in the image of the hot forging are detected according to the characteristics that the light strips are continuous and have gray-value peaks in the vertical direction. Secondly, an improved sub-pixel edge calculation method is employed to calculate the sub-pixel edge points of the light strips, which are sorted in ascending order of their abscissa. After the edge points of corresponding light strips in the left and right images are matched using the epipolar constraint, 3D reconstruction is performed. Finally, 2D points are obtained by projecting the 3D points of each light strip edge onto its fitted plane. A quintic polynomial curve is employed to fit the 2D points, and the edge point of the forging is determined from the curvature of the curve. The distance from the starting point of the 2D points of each light strip edge to the edge point of the forging is the length of the hot forging. Experiments show that this method can be used to measure the length of hot large forgings: measuring the length of a hot forging takes 6.8 s, and the room-temperature experiment shows that the relative error of the measuring system is 0.547%.