Volume monitoring of the milling tool tip wear and breakage based on multi-focus image three-dimensional reconstruction

In precision machining, the geometry of the milling tool has a great influence on the quality of the milled surface. Research on milling tool state monitoring has mainly been based on one-dimensional signals and two-dimensional images, which can indirectly obtain the tool state and wear area but cannot provide the volume of the wear and breakage region, making quantitative analysis of tool wear difficult. This paper proposes a three-dimensional (3D) reconstruction method for the milling tool tip that builds a 3D model of the tip, from which the volume of the wear and breakage region is extracted. Firstly, the focusing degree of the pixels in an image sequence is calculated based on the non-subsampled discrete shearlet transform (NSST) and the Laplace algorithm, and the 3D reconstruction of the milling tool tip is completed according to the shape-from-focus (SFF) principle; secondly, the depth values are optimized by fitting the focusing-degree curve of each pixel in the image sequence with a Gaussian function; finally, the volume of the 3D point cloud of the milling tool tip is calculated by Simpson's double numerical integration method, and the material loss in the damaged region is obtained. In 3D reconstruction experiments on the milling tool tip, compared with other focusing degree evaluation operators for SFF, the proposed method produces the least noise and the best performance in the root-mean-square error, correlation, and smoothness indexes.


Introduction
The wear and breakage of milling tools have a direct impact on the surface integrity of the milled workpiece, and monitoring the milling tool wear and breakage state is an important research topic in the field of precision machining. In recent years, scholars have mainly relied on acoustic emission signals, vibration signals, cutting force signals, and two-dimensional tool images to monitor the milling tool state. Fernández-Robles et al. [1] proposed a new method based on digital image processing, which obtained the tool wear area by image capture and subsequent analysis of the micro tool. Dai et al. [2] designed a machine vision system for online tool state monitoring, which improved workpiece quality and extended micro-tool life. Huang et al. [3] proposed a tool wear state monitoring method for milling operations using vibration signals based on the short-time Fourier transform (STFT) and a deep convolutional neural network (DCNN). In this method, time-frequency maps of the acquired vibration signals were obtained by the STFT, and the DCNN model was designed to establish the relationship between the time-frequency maps and the tool wear state. These methods monitor milling tool wear and breakage from one-dimensional signals and two-dimensional images and cannot provide 3D information about the milling tool, yet 3D information on the milling tool tip can provide an important quantitative basis for studying changes in wear morphology and the wear mechanism during machining.
Yeping Peng, Shucong Qin, Tao Wang, Yixi Hu and Shiping Nie contributed equally to this work.
In summary, many scholars believe that using only signals and the flank wear (VB) value as tool wear evaluation indexes has limitations, so they carried out three-dimensional reconstruction of the wear area. Guo et al. [4] used an indirect method to obtain a relative three-dimensional model of the cutting edge: first, the cutting edge of the tool was copied onto the surface of a soft metal so that the shape of the tool was reflected by the indentation; then the three-dimensional morphology of the indentation surface was reconstructed by a white-light interferometer. Tian et al. [5] restored the 3D topography of the blade wear area by adding depth information to the wear area and quantified the wear volume, providing clues for early warning of tool deterioration caused by wear. The application of 3D morphology restoration technology to tool wear detection is still in its infancy. However, the wear state can be analyzed more accurately from three-dimensional morphology, including the wear depth, wear area, and wear volume. Therefore, it is important to study the 3D shape and volume of the milling tool tip during the milling process for better evaluation of the tool wear state.
Since milling tools with diameters between 0.5 mm and 25 mm are normally used in high-precision machining, 3D reconstruction of the milling tool tip belongs to the field of 3D reconstruction of microscopic objects. Currently, 3D reconstruction methods for microscopic objects mainly include structured-light 3D reconstruction [6], laser scanning confocal 3D reconstruction [7], and SFF [8]. The principle of structured-light 3D reconstruction is to calculate depth information by observing the deformation of the projected fringes caused by the uneven surface of the object. The method has high efficiency and a large field of view, but the fringes projected by the projector are hard to project at the millimeter scale, so the method is difficult to apply on machine tools. The laser scanning confocal method obtains depth information from tomographic images of the sample; it has high reconstruction accuracy, but it is difficult to use widely because the equipment is expensive. SFF is a widely used method for recovering the 3D shape of an object from an image sequence using various focus measure operators; it has the advantages of simple structural design and low cost. Aiming at monitoring the 3D morphology of the milling tool tip on the machine tool, this paper adopts the SFF method to complete the 3D reconstruction of the milling tool tip.
In recent years, many scholars have conducted research on the SFF method. A novel SFF method [8] based on a multiscale fusion perspective was proposed, which achieved more accurate depth map estimation and better surface consistency of the reconstruction results. A method was developed to extract 3D point clouds from multi-focus images of fiber networks [9]; combined with a convolutional neural network and a depth recognition module, it reconstructed the 3D structures of nonwovens from microscopic multi-focus images. Furthermore, the SFF and shape-from-shading (SFS) methods were combined to reconstruct the 3D wear area of the grinding wheel [10]; compared with SFF or SFS alone, the combined method achieved higher reconstruction accuracy of the 3D wear area. A 3D microstructure reconstruction method of nonwovens based on SFF was proposed [11]: the Sobel focusing degree measurement operator was used to determine the initial focus position of each frame, and a Gaussian interpolation algorithm was then used to accurately estimate the focus position, reconstructing the real 3D microstructure of nonwovens well. An improved depth estimation method for SFF achieved high-precision 3D shape reconstruction of teeth by iteratively refining the optimal focus position [12]. A focusing degree measurement method based on an adaptive window was developed to improve the 3D reconstruction accuracy of SFF [13]. A focusing degree measure based on 3D structure tensor analysis provided accurate focusing degree values and better noise immunity [14]. Lee et al. [15] used the semi-variational function to determine the window size and shape pixel by pixel, improving the quality of focusing degree measurement. Martišek et al. [16] proposed a fast SFF method for 3D object reconstruction, in which images at different scales were registered for image sharpening and 3D construction of objects.
A 3D shape recovery method that measures the focusing degree with a one-dimensional discrete cosine transform was developed [17], which suppressed noise. A multi-direction focusing degree measurement method based on the tangential plane was proposed [18] for accurate 3D reconstruction with SFF. A spatially consistent prior model was built from multi-focus image sequences to deal with the problem of low SNR in 3D reconstruction [19]; in this method, the prior model in the maximum a posteriori (MAP) framework obtains a spatially more consistent depth map and prevents edge artifacts. Kim et al. [20] proposed a new focusing degree measurement method to accurately reconstruct surface topography with discontinuous depth values. The above SFF methods give satisfactory 3D reconstruction results for richly textured objects; however, they cannot achieve accurate 3D reconstruction when objects have weak texture. Since the texture details of the milling tool tip surface are insufficient, it is difficult to accurately evaluate the pixel focusing degree of milling tool tip images with conventional focusing degree measurement methods. Therefore, it is necessary to explore a pixel focusing degree evaluation method for weakly textured images.
Aiming at measuring the wear and breakage volume of the milling tool tip, it is necessary to develop a volume calculation method for the 3D point cloud of the milling tool tip. In recent years, many scholars have studied volume calculation methods for 3D point clouds; the main methods for complex objects include the slicing method, convex hull method, projection method, and Monte Carlo method. Chang et al. [21] proposed a slice-based 3D point cloud volume estimation method, in which the least squares method was used to fit the contour curves of the point cloud slices, and the accuracy of the volume calculation was improved by adjusting the number of slices. A 3D point cloud volume calculation approach was developed based on an adaptive concave slicing method [22]: the adaptive slicing method determined the slice layers and slice thickness, and a K-nearest-neighbor search algorithm generated the correct slice boundary polygons, eliminating the influence of gaps and holes on the volume calculation. A convex hull algorithm was improved to establish a 3D convex hull model of a tree crown and calculate its volume [23]; the method acquires the crown volume of a single tree effectively and quickly. Zhi et al. [24] proposed a slicing-based 3D point cloud volume calculation method for point clouds with irregular contours. However, the above methods must be performed on divided point clouds, and the volume of each block is accumulated to obtain the total volume.
Aiming at improving the efficiency and accuracy of point cloud volume calculation for milling tool tips, a new 3D point cloud volume calculation method was proposed to obtain the volume of the milling tool tip. The innovations of this work are as follows: (1) A 3D reconstruction method for the milling tool tip based on a multi-focus image sequence was proposed. The pixel focusing degree of the milling tool tip image was evaluated based on the NSST and the Laplace algorithm, and the 3D model of the milling tool tip was then reconstructed by the SFF method, realizing 3D reconstruction of a millimeter-scale tool tip with weak surface texture. (2) A 3D point cloud volume calculation method for the milling tool tip based on double numerical integration was proposed. The 3D point cloud volume of the milling tool tip can be obtained efficiently and accurately, which has potential use in online monitoring of milling tool wear and breakage volume.

Image sequence acquisition device
Aiming at 3D reconstruction of the milling tool tip, an image sequence acquisition device was designed; the setup is shown in Fig. 1(b). The acquisition system was mainly composed of the microscope, light source, milling tool, angle sensor, bracket, and tilt angle control knob. Firstly, the microscope was tilted on the workbench by the bracket, and the tilt angle control knob was adjusted so that the microscope could photograph the milling tool tip; then, the spatial position of the microscope's central axis was determined by the angle sensor, with the calibration position of the angle sensor shown in Fig. 1(a) and the measurement position in Fig. 1(b); finally, the milling tool tip was moved uniformly along the central axis of the microscope under machine tool control, and the tip was photographed at equal intervals to obtain images with different focusing positions.

Ranging principle based on multi-focus image sequence
3D reconstruction based on a multi-focus image sequence, also known as the SFF method, is a technique for estimating depth information from a set of images with different focusing positions taken from the same viewpoint. Fig. 2 shows the basic principle of the SFF method. The basic idea of the SFF method [25] is as follows: firstly, a multi-focus image sequence is obtained by moving the optical microscope along the optical axis; then, the best-focused frame for each pixel is found by calculating the focusing degree of the pixel at the same position throughout the image sequence. The moving distance corresponding to the best-focused frame is the depth value of the current pixel, where the moving distance refers to the distance from the reference surface to the imaging surface in the microscopic imaging system; finally, the depth information of the whole target object is obtained by repeating the above steps for all pixels.
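The SFF loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: it assumes the focus stack is stored as a NumPy array of equally spaced grayscale frames and uses a simple windowed squared-Laplacian focus measure in place of the NSST-based operator developed later; the synthetic demo stack and all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack, step):
    """Estimate a depth map from a focus stack.

    stack : (K, M, N) array of grayscale frames taken at positions
            spaced `step` apart along the optical axis.
    Returns the per-pixel depth in the same units as `step`.
    """
    # Focus measure per frame: squared Laplacian response, averaged
    # over a small window for robustness to noise.
    focus = np.stack([uniform_filter(laplace(f) ** 2, size=3) for f in stack])
    best = np.argmax(focus, axis=0)   # index of the best-focused frame
    return best * step                # frame index -> physical depth

# Synthetic demo: rows 4m .. 4m+3 are least attenuated (sharpest) in frame m.
K, M, N = 8, 32, 32
rng = np.random.default_rng(0)
texture = rng.standard_normal((M, N))
rows = np.arange(M)[:, None]
stack = np.stack([texture / (np.abs(rows // (M // K) - k) + 0.1)
                  for k in range(K)])
depth = depth_from_focus(stack, step=0.04)
```

With a real image sequence, `stack` would hold the captured frames and `step` the stage increment between exposures.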

The proposed approach
Aiming at reconstructing the 3D model of the milling tool tip and calculating its volume, a new 3D reconstruction method and a 3D point cloud volume calculation method are proposed in this paper; the schematic diagram is shown in Fig. 3. The approach mainly includes four steps: in the first step, the low-frequency and high-frequency images of the image sequence are obtained by the NSST; in the second step, combined with the SFF ranging principle introduced in Section 2.2, the high-frequency images of the sequence are processed, the mapping relationship between the high-frequency images and the depth map of the object is established, and the initial 3D point cloud of the milling tool tip is obtained by 3D reconstruction; in the third step, the depth values are optimized by interpolation, yielding the final 3D point cloud of the milling tool tip with a more continuous surface; in the fourth step, the volume of the 3D point cloud of the milling tool tip is calculated by Simpson's double numerical integration method.

The NSST of the image sequence
The NSST was used to obtain the low-frequency and high-frequency images of the image sequence. To solve the problem that the wavelet transform cannot well approximate curves in different directions in a two-dimensional image, Labate et al. [26] proposed the shearlet theory. The shearlet transform of an image $f$ is

$$SH_\psi f(a, s, t) = \langle f, \psi_{a,s,t} \rangle,$$

where the symbol $\langle \cdot, \cdot \rangle$ represents the inner product, $a$ the scale factor, $s$ the shear direction factor, $t$ the translation parameter, $f$ the image or multi-dimensional signal, and $\psi_{a,s,t}$ the shearlet basis function. The NSST is a non-subsampled version of the discrete shearlet transform [26-28], which combines the non-subsampled pyramid transform with different shear filters to provide multi-scale and multi-direction decomposition. The NSST can be divided into two steps: multi-scale decomposition and direction localization. Fig. 4 shows the decomposition process of the NSST for two-level images.
Based on the shearlet, Easley et al. [28] proposed a sparse directional image representation method based on the discrete shearlet transform and gave specific implementation steps. The implementation of the NSST for an image sequence includes the following four steps.

First, let the image sequence be $f_k(x, y)$, $1 \le k \le K$, consisting of $K$ images of size $M \times N$, where $K$ is the total number of images, $M$ is the image width, and $N$ is the image height. Multi-scale decomposition is performed by the two-dimensional non-subsampled Laplacian pyramid transform:

$$f_k^{high,j}(x, y) = h_1 * f_k^{low,j-1}(x, y), \qquad f_k^{low,j}(x, y) = h_0 * f_k^{low,j-1}(x, y),$$

where the symbol $*$ represents the convolution operator, $j$ the number of decomposition layers, $h_0$ the low-pass filter of the two-dimensional non-subsampled Laplacian pyramid transform, and $h_1$ the high-pass filter. In addition, when $j = 1$, the input of the decomposition is the source image, $f_k^{low,0}(x, y) = f_k(x, y)$.

Second, the two-dimensional discrete Fourier transform of the high-frequency image is calculated:

$$\hat{F}_k^{\,j}(u, v) = \mathrm{DFT}\big[f_k^{high,j}(x, y)\big].$$

Third, the shear filter $\hat{w}_s(u, v)$ is constructed in the frequency domain [29]. Supposing $\widetilde{W}$ is a window function in the frequency domain, the window $\widetilde{W}$ in this paper was obtained from the Meyer wavelet function and satisfies $\sum_{s=1}^{S} \widetilde{W}_s(u, v) = 1$, where $S$ represents the total number of shear directions of the discrete shearlet transform. The shear filter is obtained by mapping the window back from the pseudo-polar grid:

$$\hat{w}_s(u, v) = \Phi_p^{-1}\big[\widetilde{W}_s(u, v)\big],$$

where $\Phi_p^{-1}$ represents the mapping function from the pseudo-polar grid coordinates to the Cartesian grid coordinates, and the subscript $p$ denotes the discrete Fourier representation of the impulse response function on the pseudo-polar coordinate system.
Fourth, the non-subsampled shearlet coefficients are obtained by

$$SH_{j,s} f_k(x, y) = F^{-1}\big[\hat{F}_k^{\,j}(u, v) \cdot \hat{w}_s(u, v)\big],$$

where $F^{-1}$ denotes the inverse Fourier transform and the symbol $\cdot$ represents the pointwise product.
A low-frequency image and a series of high-frequency images with different decomposition levels and directions are obtained from each image by the NSST:

$$\mathrm{NSST}\big[f_k(x, y)\big] = \big\{f_k^{low,J}(x, y),\ SH_{j,s} f_k(x, y)\big\}, \quad 1 \le j \le J,\ 1 \le s \le S,$$

where $J$ represents the decomposition scale of the NSST. An example of the NSST applied to an image is shown in Fig. 5, where Fig. 5(a) is the image of the milling tool tip. A series of high-frequency images with different decomposition scales and shear directions were obtained by the NSST of the milling tool tip image; Fig. 5(b-i) shows the set of high-frequency images obtained at decomposition scale j = 1.
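A faithful NSST implementation involves pseudo-polar Fourier transforms and Meyer-window shear filters; as a rough, simplified stand-in, the sketch below decomposes an image with a non-subsampled Laplacian-style pyramid (differences of Gaussian smoothings) and splits each detail band into frequency-domain angular wedges in place of true shear filters. It is illustrative only; the function name and parameters are not from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nsst_like_decompose(img, levels=2, directions=8):
    """Simplified NSST-style decomposition: a non-subsampled
    Laplacian-pyramid stage followed by frequency-domain angular
    wedges standing in for the Meyer-window shear filters.  All
    outputs keep the input size, as in a non-subsampled transform."""
    low = img.astype(float)
    highs = []
    for j in range(levels):
        smoothed = gaussian_filter(low, sigma=2.0 * (j + 1))
        high = low - smoothed              # detail band at scale j
        low = smoothed
        F = np.fft.fftshift(np.fft.fft2(high))
        M, N = high.shape
        v, u = np.meshgrid(np.arange(N) - N / 2, np.arange(M) - M / 2)
        theta = np.mod(np.arctan2(u, v), np.pi)   # orientation in [0, pi)
        bands = []
        for s in range(directions):
            lo, hi = s * np.pi / directions, (s + 1) * np.pi / directions
            mask = (theta >= lo) & (theta < hi)   # one angular wedge
            bands.append(np.real(np.fft.ifft2(np.fft.ifftshift(F * mask))))
        highs.append(bands)
    return low, highs

rng = np.random.default_rng(1)
img = rng.standard_normal((32, 32))
low, highs = nsst_like_decompose(img)
```

Because the wedges partition the frequency plane, the low-frequency image plus all directional bands reconstructs the input exactly, mirroring the shift-invariance of a non-subsampled transform.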
Since the NSST can approximate the curves of an image from different directions, it can extract more high-frequency information in all directions. High-frequency information is key to judging the focusing degree of a pixel in the image sequence, so the high-frequency images obtained by the NSST allow the focusing degree of each pixel to be judged accurately in this paper.

The mapping relationship between high-frequency images and depth map
After the NSST was applied to the image sequence, each image produced a low-frequency image and a set of high-frequency images. The low-frequency image expresses the basic information of the source image, and the high-frequency images express its details.
Results indicated that the change of the microscope focus position mainly affects the high-frequency information of the image; therefore it is reasonable to establish the relationship between the high-frequency images and the depth map by a mapping function. The depth map of the milling tool tip is obtained by the following steps.

First, the gradient value of each pixel in the high-frequency images is calculated. Based on the high-frequency images obtained by formula (3.7), the Laplacian algorithm is applied, and the gradient value of a pixel is taken as the average Laplacian response over its neighborhood $\Omega(x, y)$:

$$G_k^{j,l}(x, y) = \mathrm{average}\big(\big\{\,\big|\nabla^2 SH_{j,l} f_k(p, q)\big| : (p, q) \in \Omega(x, y)\,\big\}\big),$$

where "{}" represents the set and "average( )" represents the average value function.
Second, calculate the focusing degree value of the pixels in the image sequence. The average method and weighting method are used to fuse the gradient values $G_k^{j,l}(x, y)$ of pixels in high-frequency images with different decomposition scales $j$ and shear directions $l$ into the focusing degree value $C_k(x, y)$ of the pixels in the image sequence:

$$C_k(x, y) = \sum_{j=1}^{J} r_j \cdot \frac{1}{S} \sum_{l=1}^{S} G_k^{j,l}(x, y),$$

where $r_1 + r_2 + r_3 + \dots + r_J = 1$ and $r_1 > r_2 > r_3 > \dots > r_J$.
Third, obtain the depth map. According to the 3D reconstruction principle based on multi-focus image sequences introduced in Section 2.2, the focusing degree values of the pixels at the same position $(x, y)$ in the image sequence are compared, and the serial number $k$ of the frame with the largest focusing degree value is taken as the pixel value $z(x, y)$ of that coordinate in the depth map:

$$z(x, y) = \arg\max_{1 \le k \le K} C_k(x, y).$$

After the depth map is obtained, the pixel coordinates $(x, y)$ and the pixel value $z(x, y)$ are combined to obtain the relative coordinates $(x, y, z)$ of the 3D point cloud of the milling tool tip.
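The first three steps can be sketched as follows, under stated assumptions: the high-frequency bands are stored as a (K, J, L, M, N) array, the "gradient value" is approximated by the Laplacian magnitude of each band, and direction fusion uses a plain mean; the weights follow the constraints on r_j above. All names are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import laplace

def fuse_focus(band_stack, weights):
    """band_stack : (K, J, L, M, N) high-frequency bands for K frames,
    J decomposition scales and L shear directions.
    weights : length-J sequence r_j, decreasing and summing to 1.
    Returns the focus values C_k(x, y) and the integer depth map."""
    K, J, L, M, N = band_stack.shape
    G = np.empty_like(band_stack, dtype=float)
    for k in range(K):
        for j in range(J):
            for l in range(L):
                # gradient value of one band: Laplacian magnitude
                G[k, j, l] = np.abs(laplace(band_stack[k, j, l]))
    # average over shear directions, then weight the scales
    C = (G.mean(axis=2) * np.asarray(weights)[None, :, None, None]).sum(axis=1)
    depth = np.argmax(C, axis=0)   # best-focused frame index per pixel
    return C, depth

# Demo: every band of frame 3 carries the strongest detail
rng = np.random.default_rng(0)
base = rng.standard_normal((2, 4, 16, 16))        # shared (J, L, M, N) pattern
amp = np.array([0.2, 0.5, 0.8, 1.5, 0.7])         # per-frame amplitude, peak at k = 3
band_stack = amp[:, None, None, None, None] * base[None]
C, depth = fuse_focus(band_stack, weights=[0.6, 0.4])
```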
Fourth, obtain the physical 3D point cloud coordinates. Assuming that the physical distance between adjacent pixels in the X-axis direction of the image in the real scene is $\Delta x$, the physical distance in the Y-axis direction is $\Delta y$, and the physical distance between two adjacent images in the image sequence is $\Delta z$, the real 3D point cloud coordinates of the milling tool tip are

$$(x_1, y_1, z_1) = \big(x \cdot \Delta x,\ y \cdot \Delta y,\ z(x, y) \cdot \Delta z\big).$$
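The conversion to physical coordinates is a per-pixel scaling; a minimal sketch (function name illustrative, calibration values taken from Section 4 of the paper):

```python
import numpy as np

def depth_map_to_cloud(z, dx, dy, dz):
    """Scale a depth map z(x, y) of frame indices to physical
    point-cloud coordinates (x1, y1, z1) = (x*dx, y*dy, z*dz)."""
    M, N = z.shape
    y, x = np.meshgrid(np.arange(N) * dy, np.arange(M) * dx)
    return np.column_stack([x.ravel(), y.ravel(), (z * dz).ravel()])

# Calibration values from Section 4 (mm): dx = dy = 0.0048, dz = 0.0382
pts = depth_map_to_cloud(np.array([[1, 2], [3, 4]]),
                         dx=0.0048, dy=0.0048, dz=0.0382)
```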

Depth map optimization based on interpolation
The initial 3D point cloud of the milling tool tip was obtained by the analysis in Section 3.2. According to the principle of 3D reconstruction based on a multi-focus image sequence, the initial depth value is coarse: it may not correspond to the true maximum of the pixel's focus curve, whereas the ideal depth value corresponds to the largest response of the focusing degree. To obtain a more accurate depth value, the focusing degree values are fitted with a curve, and the depth is interpolated between image sequence numbers according to the fitted curve. In this paper, a more accurate depth value was obtained by Gaussian fitting interpolation of the focusing degree values of pixels in the image sequence. The Gaussian function is

$$g(x) = a \exp\!\left(-\frac{(x - b)^2}{2c^2}\right),$$

where $a$ represents the height of the peak of the Gaussian curve, $b$ the abscissa of the peak center, and $c$ the standard deviation. The Gaussian function was used to fit the focusing degree of the pixels at the same position in the image sequence to optimize the depth value. Assuming that the initial depth value of a point is the image sequence number $z$, the focusing degree values at the same position within the sequence-number range $[z - n_1, z + n_1]$ are fitted, where $n_1$ determines the neighborhood range of the fit. An example of depth value fitting optimization is given below, with the fitting result shown in Fig. 6.
The "♦" point in Fig. 6 represents the focusing degree value of the pixel, and the orange curve is obtained by fitting the focusing degree value of the pixel using the Gaussian function. z is the depth value before fitting and the abscissa value corresponds to the largest focusing degree value of pixels at the same position in the image sequence, h is the depth value after fitting and the abscissa value corresponds to the largest value of the fitted curve. After obtaining the Gaussian fitting curve, the depth value h corresponding to the largest response value of the curve is used to replace the original depth value z.
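The Gaussian refinement can be sketched with a standard least-squares fit. This is a minimal illustration assuming the focus curve is sampled at integer frame numbers; `scipy.optimize.curve_fit` is used as the solver, which the paper does not specify, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, b, c):
    return a * np.exp(-(x - b) ** 2 / (2 * c ** 2))

def refine_depth(frames, focus_vals, z0, n1=3):
    """Fit a Gaussian to the focus curve on [z0-n1, z0+n1] and
    return the peak abscissa b as the refined depth value."""
    sel = (frames >= z0 - n1) & (frames <= z0 + n1)
    p0 = [focus_vals[sel].max(), z0, n1]        # initial guess (a, b, c)
    (a, b, c), _ = curve_fit(gaussian, frames[sel], focus_vals[sel], p0=p0)
    return b

# Demo: a synthetic focus curve peaking between frames 7 and 8
frames = np.arange(20, dtype=float)
curve = gaussian(frames, 5.0, 7.4, 2.0)
z0 = int(np.argmax(curve))      # coarse depth from the argmax step
h = refine_depth(frames, curve, z0)
```

Here the coarse depth z0 lands on frame 7, while the fitted peak h recovers the sub-frame position, as in Fig. 6 where h replaces z.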

Calculation of 3D point cloud volume of the milling tool tip
The wear and breakage volume of the milling tool is an important parameter for evaluating its wear and breakage state. Aiming at obtaining the 3D point cloud volume of the milling tool tip more efficiently and generically, Simpson's double numerical integration method was used. According to the characteristics of the depth map of the milling tool tip obtained in Section 3.3, the pixel value outside the milling tool tip area is zero in the depth map. By adapting the composite Simpson's double integral formula, the relative 3D point cloud volume of the milling tool tip can be calculated as

$$V = \iint z(x, y)\,dx\,dy \approx \frac{1}{9} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} c_i\, c_j\, z(x_i, y_j), \qquad c = (1, 4, 2, 4, \dots, 2, 4, 1),$$

where $z(x, y)$ represents the pixel value of the depth map and $c_i$, $c_j$ are the composite Simpson weights (with unit pixel spacing). Finally, the real 3D point cloud volume $V_1$ of the milling tool tip was obtained by Equation (3.16).
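Since the depth map is zero outside the tool tip, the double integral can be evaluated by applying Simpson's rule along each axis in turn. The sketch below uses `scipy.integrate.simpson` as a stand-in; it does not reproduce the paper's exact "improved" formula, and the function name is illustrative.

```python
import numpy as np
from scipy.integrate import simpson

def cloud_volume(z, dx, dy, dz):
    """Volume under the depth map z(x, y).  z holds frame indices
    (zero outside the tool tip), dz converts an index to a physical
    height, and dx, dy are the pixel spacings in the scene."""
    heights = z * dz                            # index -> physical height
    per_row = simpson(heights, dx=dy, axis=1)   # integrate along y
    return simpson(per_row, dx=dx)              # then along x

# Demo: a flat plateau of height 2 over a 4 x 4 domain has volume 32
vol = cloud_volume(np.full((5, 5), 2.0), dx=1.0, dy=1.0, dz=1.0)
```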

Sample data
Aiming at verifying the performance of the 3D reconstruction and volume calculation methods proposed in this paper, the milling tool tip was moved uniformly along the optical axis of the microscope under machine tool control, and image sequences with an image size of 768×512 were captured at equal intervals. Fig. 7 shows two sample image sequences with different focus positions: one taken when the milling tool tip had no wear or breakage, and one taken when it had wear or breakage.

Parameter Settings
Four parameters have an important influence on the 3D reconstruction results, namely the decomposition scale J, the shear direction s, the shear filter size, and the neighborhood Q of a pixel. The first three parameters determine the quality of the extracted high-frequency information, but data redundancy should be avoided when setting them. The setting of parameter Q requires a balance between noise and precision. In the experiments in this paper, the decomposition scale of the NSST was J = 4; each image in the sequence was decomposed into one low-frequency image and four high-frequency sub-bands. The number of shear directions of each of the four high-frequency sub-bands was 8, 8, 8, and 8, respectively; the different directions represent the different shear directions of the shearlet support base and resolve details of the image in different orientations. In addition, the size of the orientation filter at each decomposition scale was 16 × 16. After the image sequence was transformed by the NSST, the Laplacian algorithm was used to calculate the gradient value of each pixel in the high-frequency images, and the mapping relationship between the high-frequency images and the depth map was obtained according to the method in Section 3.2. To improve noise immunity, the gradient value of a pixel was computed from the gradient values in its neighborhood (see equation (3.9)). Compared with a square neighborhood, a circular neighborhood is symmetric in every direction; hence the gradients within a circular neighborhood were used to calculate the gradient of each pixel. In this experiment, the radius of the circular neighborhood was set to 2 pixels.
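The circular-neighborhood averaging described above can be sketched as a normalized convolution with a disk mask (illustrative implementation, radius 2 pixels as in the experiment):

```python
import numpy as np
from scipy.ndimage import convolve

def circular_mean_filter(img, radius=2):
    """Average each pixel over a circular neighborhood, as used to
    stabilise gradient values in weakly textured regions."""
    r = radius
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    mask = (x ** 2 + y ** 2 <= r ** 2).astype(float)
    mask /= mask.sum()                     # normalise to a mean filter
    return convolve(img.astype(float), mask, mode="nearest")

# A linear ramp is preserved at interior pixels by the symmetric mask
smooth = circular_mean_filter(np.tile(np.arange(8.0), (8, 1)), radius=2)
```

The disk mask is symmetric in every direction, so the filter introduces no orientation bias, which is the motivation for preferring it over a square window.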

Comparison with SFF methods
The proposed 3D reconstruction method was applied to real milling tool tips, and its performance was compared with several SFF methods, including the Laplacian-based operator (F_lp) [28], the Tenengrad-based operator (F_ten) [30], the gradient-based operator (F_mean) [31], the Fourier-based operator (F_fft) [32], the wavelet-based operator (F_dwt) [33], and the NSST-based operator (F_nsstmdml) [34]. These are among the most widely used focus measure operators for estimating image depth. In addition, to better justify the performance of the proposed method in extracting depth information from multi-focus image sequences, the Halcon software was used to recover the 3D morphology from the same multi-focus image sequences for comparison. Two groups of image sequences were used to test the above methods. One image sequence of 70 frames was taken of a 16 mm milling tool without wear or breakage; Fig. 8 shows the pseudo-color depth maps obtained by the seven methods. Similarly, another image sequence of 70 frames was taken of a 16 mm milling tool with wear or breakage; Fig. 9 shows the corresponding pseudo-color depth maps. In Fig. 8 and Fig. 9, the pixel values in the depth maps represent image serial numbers. By calibrating the pixel size, ∆x = 0.0048 mm and ∆y = 0.0048 mm in Section 3.2 were obtained. The pseudo-color depth map in Fig. 8 corresponds to a physical distance ∆z = 0.0382 mm between two adjacent frames in the image sequence; it was transformed into a 3D point cloud according to formula (3.13), and the real 3D point cloud of the milling tool tip is shown in Fig. 10. Similarly, the pseudo-color depth map in Fig. 9 corresponds to a physical distance ∆z = 0.0378 mm between two adjacent frames; it was transformed into a 3D point cloud according to formula (3.13), and Fig. 11 shows the real 3D point cloud of the milling tool tip.
Observing the pseudo-color 3D point clouds of the two groups of milling tool tips in Fig. 10 and Fig. 11, it can be seen that the reconstruction results of F_Halcon, F_lp, F_ten, F_mean, F_fft, F_dwt, and F_nsstmdml contain a lot of noise. The proposed algorithm is more successful in selecting the focused pixels from the image sequence, and its 3D reconstruction results have less noise and a smoother surface. To quantitatively compare the 3D reconstruction effects of the different focusing degree measurement methods, the root-mean-square error (RMSE) between the fusion image and the all-in-focus image [35], the correlation between the fusion image and the all-in-focus image [36], and the smoothness of the depth map (SSM) [37] were used as evaluation indicators. The fusion images were obtained from the depth maps, and the all-in-focus image was obtained by the camera's built-in depth-of-field synthesis function. The RMSE value reflects the quality of the fusion image, which directly determines the quality of the 3D reconstruction: the smaller the RMSE, the less noise in the 3D point cloud. The correlation value reflects the similarity between the fusion image and the all-in-focus image: the closer the correlation is to 1, the less noise in the 3D point cloud. The SSM value reflects the smoothness of the depth map: the smaller the SSM, the less noise in the 3D point cloud. The performance indexes of the 3D reconstructions based on the different focusing degree evaluation methods are listed in Table 1, and Fig. 12 shows a bar chart of the data in Table 1 normalized to the interval [0.5, 1]. In Table 1 and Fig. 12, #Data1 corresponds to the data in Fig. 10, and #Data2 corresponds to the data in Fig. 11.
Table 1 and Fig. 12 show that, for the 3D reconstruction results of the proposed method, the RMSE value is the smallest, the correlation value is the largest, and the SSM value is the smallest, indicating that, compared with the other six methods, the proposed method performs best in the 3D reconstruction of the milling tool tip.
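The RMSE and correlation indicators are standard image-comparison quantities; a minimal sketch of how they could be computed between a fusion image and the all-in-focus reference follows (the SSM smoothness index is omitted, since the exact definition of reference [37] is not reproduced here; names are illustrative):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def correlation(a, b):
    """Pearson correlation between two images; values near 1 mean
    the fusion image closely matches the all-in-focus reference."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return float(np.corrcoef(a, b)[0, 1])

ref = np.arange(16.0).reshape(4, 4)   # toy stand-in for the reference image
```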
In addition, to verify the robustness of the proposed 3D reconstruction method, it was used to reconstruct the 3D shape of milling tool tips with different degrees of wear or breakage. Fig. 13(a)-(f) shows images of the milling tool tip at different degrees of wear or breakage, and Fig. 14(a)-(f) shows the corresponding 3D point clouds obtained by the proposed 3D reconstruction method. The results in Fig. 14 show that the proposed method performs well in reconstructing the 3D shape of the milling tool tip over different degrees of wear or breakage.

Comparative analysis of the 3D reconstruction results based on depth map optimization
In this section, the 3D reconstruction results of the milling tool tip before and after depth value optimization are compared. The proposed Gaussian interpolation was applied to the 3D reconstruction results of the milling tool tips at the different wear or breakage stages in Fig. 14, and Fig. 15 shows the optimized results. Comparing the 3D point clouds before and after optimization, the optimized point clouds are smoother and more continuous, showing that a better 3D model can be obtained by optimizing the depth values with Gaussian interpolation.

Calculation of wear and breakage volume of the milling tool tip
In this section, the volume of the milling tool tip wear or breakage region was calculated from the optimized 3D point cloud. To verify the effectiveness of the proposed volume calculation method, it was compared with the commercial software Geometric; the calculation results are shown in Table 2, where Fig. 15(a)~(f) correspond to #Data3~#Data8. Table 2 shows that the proposed volume calculation method is effective: the volume obtained by the proposed method is close to that obtained by Geometric, with percentage errors within 1%, indicating that the point cloud volume calculation method in this paper can accurately calculate the 3D point cloud volume of the milling tool tip.

Fig. 9 The pseudo-color depth map obtained by 3D reconstruction of the milling tool tip with wear or breakage

Finally, the wear and breakage volume of the milling tool tip was obtained from the 3D point cloud volume. Since the milling tool tip was moved to the same position by the machine tool each time the image sequence was captured, the reconstructed tool tip region was consistent across measurements, so the wear and breakage volume was obtained by subtracting the volume of the damaged tool tip from the volume of the intact tool tip. Table 2 lists the calculated wear and breakage volumes; the volume increased as the wear or breakage stage progressed, which is consistent with the actual situation.
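The Simpson double numerical integration on a gridded depth map can be sketched as below (an illustrative formulation, not the paper's exact code; the names `depth_intact` and `depth_damaged` in the usage comment are hypothetical):

```python
import numpy as np
from scipy.integrate import simpson

def point_cloud_volume(depth, x, y):
    """Volume under a gridded depth map z = depth[i, j] at (y[i], x[j]),
    by Simpson double numerical integration: first integrate each row
    along x, then integrate the row integrals along y."""
    inner = simpson(depth, x=x, axis=1)   # 1D Simpson over x for every row
    return float(simpson(inner, x=y))     # 1D Simpson over y

# Hypothetical usage: wear/breakage volume as the material loss between the
# intact and the damaged tool tip reconstructed over the same region.
# wear_volume = point_cloud_volume(depth_intact, x, y) \
#             - point_cloud_volume(depth_damaged, x, y)
```

Simpson's rule is exact for polynomials up to degree three along each axis, so on a sufficiently dense reconstruction grid the discretization error is small compared with the reconstruction error itself.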

Discussion
The 3D reconstruction method and volume calculation method proposed in this paper are effective, but the following issues deserve further investigation. (1) The depth of field of the microscope is one of the important factors affecting the 3D reconstruction accuracy: other conditions being equal, the smaller the depth of field, the higher the reconstruction accuracy.
For an optical microscope, the depth of field is inversely proportional to the magnification: the larger the magnification, the smaller the depth of field, but also the smaller the field of view, so the accuracy of the proposed 3D reconstruction method is limited by this trade-off. How to improve the accuracy of the 3D point clouds remains to be studied; in the future, the 3D reconstruction accuracy could be improved by reducing the depth of field at the same magnification. (2) The proposed method performs poorly when reconstructing regions with high reflection or weak texture. During machining, debris usually adheres to the surface of the milling tool tip, and under the light source the debris region becomes highlighted; Fig. 16 shows an example of such a highlighted reflective region. How to avoid the influence of highlighted or weakly textured regions on 3D reconstruction accuracy remains to be studied; in the future, point cloud filtering could be used to correct the erroneous reconstruction of highlighted regions. (3) The proposed 3D reconstruction method takes a long time to obtain the 3D point cloud, which makes online, real-time monitoring of the milling tool wear and breakage state difficult. In the future, the algorithm and its implementation will be optimized to improve the running speed.
(4) It is difficult to ensure that the reconstructed region of the milling tool tip is identical in every 3D reconstruction. For example, the angle at which the image sequence is captured and the starting point of the capture can lead to inconsistencies in the reconstructed region, resulting in errors in the assessed wear volume. In this paper, the same milling tool tip region was reconstructed each time by controlling the machine tool; however, machine tool vibration during operation slightly shifts the reconstructed region, and the consistency of the reconstructed region can be further ensured by image processing methods in the future. (5) The accuracy and robustness of the reconstruction need to be improved so that it can handle the complex scenes of a real machining environment. In the future, the stability of the 3D reconstruction can be improved by preprocessing the image sequence, for example by image denoising and image enhancement.
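The point cloud filtering mentioned in issue (2) could, for example, be a statistical outlier removal; the following is an illustrative sketch (an assumption, not the paper's method) that discards points whose mean distance to their k nearest neighbours is abnormally large, which removes isolated spikes such as wrongly reconstructed highlight points:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal for an (n, 3) point cloud.

    A point is kept when its mean distance to its k nearest neighbours is
    within `std_ratio` standard deviations of the cloud-wide mean distance."""
    tree = cKDTree(points)
    # query k+1 neighbours because the nearest neighbour is the point itself
    dist, _ = tree.query(points, k=k + 1)
    mean_d = dist[:, 1:].mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```

Such filtering only discards isolated spikes; it cannot recover the true surface under a highlight, so it complements rather than replaces better illumination or image preprocessing.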

Conclusion
Aiming at monitoring the wear and breakage volume of the milling tool tip, a new 3D reconstruction method of the milling tool tip and a 3D point cloud volume calculation method are proposed in this paper. The experimental results showed that the proposed 3D reconstruction method produced less noise and a smoother surface, and performed best in root mean square error (RMSE), correlation, and smoothness (SSM). In addition, the proposed 3D point cloud volume calculation method is general and can accurately calculate the 3D point cloud volume of the milling tool tip: compared with the volume obtained by the Geometric software, the percentage error was less than 1%. Finally, the wear and breakage volume of the milling tool tip was extracted by calculating the 3D point cloud volume of the milling tool tip, which provides an important basis for accurately evaluating the wear and breakage state of the milling tool.

Fig. 16 Highlight reflective region

The International Journal of Advanced Manufacturing Technology (2023) 126:3383-3400