Fast stitching method for multi-view images of cupping spots

The camera of an automatic cupping machine has a small field of view, yet complete, high-definition cupping-spot features must be monitored in real time so that the machine can control the cupping time accurately, avoid harm to the human body, and extract the complete back contour for automatic acupuncture point positioning. To this end, a fast stitching method for multi-view images of cupping spots is proposed. The method first preprocesses the images with a grayscale linear transformation and Gaussian smoothing to enhance image details, improve contrast, and reduce the influence of cup occlusion. It then combines the BRISK algorithm with a proposed match factor to match the images and estimates the overlapping area by building a binary tree model, which improves the efficiency of feature point detection and matching. Experimental results show that, compared with AutoStitch, the stitched image is high definition and free of obvious seams, ghosting, and distortion; the proposed algorithm is also more efficient, and the back contour it produces is more complete. The good real-time performance and the clarity of the cupping spots favor subsequent real-time detection of the spots, while the complete back contour favors subsequent automatic acupuncture point positioning.


Introduction
Cupping is a traditional therapy with a long history that is commonly used in the clinic of traditional Chinese medicine. It is a physical treatment that applies negative-pressure adsorption to the local skin of the human body [1]. Using a cup as the tool, fire, pumping, or other means expel the air inside the cup to create negative pressure, so that the cup adheres to the body surface at an acupuncture point or the area to be treated, causing local congestion and bruising of the skin and thereby preventing and treating disease.
Because the cupping machine produces large disturbances during operation, cameras are distributed around the machine to reduce the impact of noise. An overly long cupping arm both hinders the machine's work and raises manufacturing costs, so the cameras sit close to the human body and cannot capture complete pictures directly. Taking these factors into account, the automatic cupping machine is equipped with four cameras to monitor the condition of the human body, but each has a small field of view, as shown in Fig. 1.
(Ying-Bin Liu, Jian-Hua Qin, Meng-Yan Zhu, and Ting-Ting Huang contributed equally. Corresponding author: Jian-Hua Qin, qinjh2@sina.com, School of Mechanical and Control Engineering, Guilin University of Technology, Guilin 541004, China.)
Image stitching has long attracted attention, has broad research prospects, and is widely applied in medical image analysis, three-dimensional reconstruction, remote sensing, and disaster prevention [2][3][4]. Automated panoramic image stitching is more complex still, with an extensive literature and a variety of commercial applications in photogrammetry, computer vision, image processing, and computer graphics [5,6]. Image stitching methods mainly include (1) methods based on template matching; (2) methods based on edge matching; and (3) methods based on feature matching. Among them, feature matching offers advantages in handling scale and perspective invariance and has better robustness and stability, so it is currently the most influential stitching approach. Among feature-based algorithms, the Harris corner detector [7] handles image rotation very well. On this basis, Lowe proposed the SIFT algorithm [8], which is robust to transformations such as rotation and scaling but is time-consuming and has poor real-time performance; in recent years, Lv and Fan et al. [9,10] improved the accuracy and efficiency of SIFT feature extraction. To address the poor real-time performance of SIFT, Bay proposed the SURF algorithm [11], which is derived from Haar responses on an integral image and is less time-consuming than SIFT [12]; the FAST and ORB algorithms [13] are highly real-time but suffer from poor stability and low registration accuracy in feature point extraction.
The features of SIFT, SURF, and ORB are invariant to scaling and affine transformations, so they are widely used in image stitching [14]. The BRISK operator, proposed by Leutenegger in 2011, is another feature description algorithm with good real-time performance. Compared to other feature algorithms, BRISK has advantages in stitching blurry images [15], but it is inferior to SIFT in stability and accuracy. Consequently, Yang Shuang proposed a fast image stitching method based on BRISK and SIFT [16], and Qu proposed an unordered image stitching method [17]. Because the four cameras are far apart and the parallax between views is large, directly applying the above algorithms produces cracks and black lines in the overlapping parts of the images; therefore, a fast stitching method for multi-view images of cupping spots is proposed.
Our main contributions are as follows: (1) We propose a method dedicated to stitching multi-view images of cupping spots and verify through experiments that it extracts effective feature points well, is computationally efficient, and produces natural splicing transitions. (2) We propose a matching factor for accurately arranging the image transformation order. (3) Combined with a binary tree model, the overlapping area of the images is estimated using the BRISK algorithm, and SIFT feature extraction is then performed only on that overlapping area. This avoids spending substantial feature detection time on non-overlapping areas and improves the efficiency of image registration.

Image stitching process
The algorithm flow of this article is shown in Fig. 2. First, the grayscale linear transformation method [18] is used for preprocessing to enhance image details, improve contrast, and reduce the influence of cup occlusion. Second, an ROI is set for each camera image according to its viewing angle; the BRISK algorithm and the matching factor are combined to match the images and estimate the overlapping area through a binary tree model. Specifically, the BRISK algorithm detects feature points in each ROI, and a brute-force (BF) matcher matches the feature points between images. Third, SIFT feature points are detected in the overlapping regions, and coarse matching of the feature point pairs is realized with the KNN algorithm [19]; robust homography estimation is then performed with the RANSAC algorithm [20]. Finally, the hat function-weighted fusion method [21] fuses the images.

Image preprocessing
Image preprocessing is carried out by grayscale linear transformation and Gaussian smoothing. Let the input image be I with width W and height H. The linear transformation of the image is

O(x, y) = a · I(x, y) + b

When a = 1 and b = 0, O is a copy of I; if a > 1, the contrast of the output image O is increased; if 0 < a < 1, the contrast of O is lower than that of I. The value of b affects the brightness of the output image: when b > 0, brightness increases; when b < 0, brightness decreases. We conducted many experiments on a and b; the empirical values obtained are a = 1.2 and b = 5, which give the best effect.
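As a minimal sketch, the linear transformation and a Gaussian smoothing kernel could look like the following (a NumPy-only illustration; the function names and the separate kernel helper are ours, not the paper's code):

```python
import numpy as np

def linear_transform(img, a=1.2, b=5):
    """Grayscale linear transformation O = a*I + b, clipped to [0, 255]."""
    out = a * img.astype(np.float32) + b
    return np.clip(out, 0, 255).astype(np.uint8)

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel for smoothing (to be
    convolved with the image, e.g. via scipy or cv2 in practice)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()
```

With the paper's empirical a = 1.2, b = 5, a pixel of value 100 maps to 125, and values above ~208 saturate at 255.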
Preprocessing the input image with the grayscale linear transformation enhances contrast, reduces the effect of cup occlusion, and highlights image details; Gaussian smoothing then reduces image noise, which effectively reduces the number of BRISK feature points detected and improves the efficiency of feature point detection and matching. The results of preprocessing Fig. 1a-d are shown in Fig. 3, and the BRISK feature point detection results are listed in Table 1.
As Table 1 shows, after preprocessing, the number of BRISK feature points detected decreased by 31.8%, 17.8%, 27.9%, and 20.8%, respectively, relative to the unpreprocessed images, and the BRISK detection time decreased by 26.8%, 17.9%, 17.7%, and 17.2%, respectively. BRISK feature point detection after preprocessing therefore effectively reduces both the number of feature points and the detection time.

Determination of image transformation order
This paper proposes a method to determine the order of image transformation. Because each camera's viewing angle is fixed and certain areas never belong to the overlap, an ROI is set for each image according to its viewing angle, and the BRISK algorithm detects feature points within the ROI. The images could then be sorted by the number of BRISK feature match pairs alone; however, an individual image with unusually many feature points accumulates many match pairs with every other image, so relying only on the raw match count is not comprehensive enough and can leave the remaining images with low matching rates. To avoid this problem, this article proposes a matching factor for ordering the image transformations.
The match factor is calculated as

M_f = a_M / Σ_{j=1}^{J} a_j

where a is the reference image, b is the image to be registered, Σ_{j=1}^{J} a_j is the sum of the numbers of match pairs between the reference image and all other images, and a_M is the number of match pairs between a and b.
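The formula above can be sketched directly in code (a hypothetical helper of ours; `match_counts` is assumed to be a symmetric table where entry [i][j] holds the number of BRISK match pairs between images i and j):

```python
def match_factor(match_counts, a, b):
    """Match factor of the pair (a, b): the number of match pairs between
    a and b divided by the total match pairs of a with all other images."""
    total = sum(match_counts[a][j] for j in range(len(match_counts)) if j != a)
    return match_counts[a][b] / total if total else 0.0
```

For example, if image 2 has 30, 5, and 25 match pairs with images 0, 1, and 3, then the match factor of pair (2, 3) is 25 / 60 ≈ 0.417.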
The transformation order for the four images in Fig. 1a-d is determined as follows: 1. The BRISK algorithm detects the feature points of each ROI, as shown in Fig. 3a-d. 2. The BF algorithm matches the feature points, and mismatches are culled by a Euclidean distance ratio test with a threshold of 0.8, yielding the number of feature match pairs between each pair of images. 3. The matching factors are calculated, as shown in Table 2.
Figure 4c, d has the largest matching factor, 0.595, so Fig. 4c, d is matched first; the matching factors involving Fig. 4c or d are then removed, and the maximum of the remaining matching factors, 0.555, belongs to Fig. 4a, b, so Fig. 4b is subsequently matched with Fig. 4a.
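This greedy pairing, which yields the leaves of the binary tree model, can be sketched as follows (our illustrative helper; `mf` is assumed to map image pairs to their matching factors):

```python
def pair_order(mf):
    """Greedy pairing: repeatedly take the pair with the largest matching
    factor among images not yet used, removing both images each time."""
    order, used = [], set()
    for (i, j), _ in sorted(mf.items(), key=lambda kv: -kv[1]):
        if i not in used and j not in used:
            order.append((i, j))
            used |= {i, j}
    return order
```

With the Table 2 values, the pair (c, d) at 0.595 is selected first, every factor involving c or d is skipped, and (a, b) at 0.555 is selected next.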

Image overlapping area feature point detection
When the image transformation sequence has been determined, the BRISK feature points have already been extracted and accurately matched. On this basis, the transformation matrix between the images is calculated and the overlapping area of the images is estimated; SIFT feature point detection is then performed only on the overlapping area. The SIFT feature points in the overlapping area are shown in Fig. 5.
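Given the homography estimated from the BRISK matches, the overlapping area can be approximated by projecting the corners of the image to be matched into the fixed image's frame and intersecting with it (a minimal sketch under that assumption; a bounding-box intersection rather than an exact polygon clip):

```python
import numpy as np

def estimate_overlap(H, w1, h1, w2, h2):
    """Project the corners of the image to be matched (w2 x h2) through
    homography H and intersect the projection's bounding box with the
    fixed image frame (w1 x h1). Returns (x0, y0, x1, y1) or None."""
    corners = np.array([[0, 0, 1], [w2, 0, 1],
                        [w2, h2, 1], [0, h2, 1]], float).T
    proj = H @ corners
    proj = proj[:2] / proj[2]            # homogeneous -> Cartesian
    x0 = max(0.0, proj[0].min()); y0 = max(0.0, proj[1].min())
    x1 = min(float(w1), proj[0].max()); y1 = min(float(h1), proj[1].max())
    if x0 >= x1 or y0 >= y1:
        return None
    return (int(x0), int(y0), int(x1), int(y1))
```

SIFT detection is then restricted to this ROI, which is what saves the feature detection time in non-overlapping areas.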
To compare the overlapping regions estimated with the ORB, SURF, and BRISK algorithms, the following experiments were performed, with results shown in Table 3. The first method performs SIFT feature point detection directly on the ROI of each picture. Two observations follow: 1. The plain SIFT method extracts the most feature points, but some of them lie outside the cupping-spot area; these increase the running time of subsequent steps and reduce the accuracy of the projection transformation, which harms the stitching result. 2. Among SURF-SIFT, BRISK-SIFT, and ORB-SIFT, BRISK-SIFT generally extracts more feature points, and on the lower-right image the three methods extract almost the same number, indicating that their estimated overlap regions tend to be consistent. By average running time of feature extraction, BRISK-SIFT is the fastest of the four methods.
In summary, BRISK-SIFT has a strong ability to extract effective feature points, which helps improve the accuracy of feature matching, and it reduces feature extraction time, which improves computational efficiency. The BRISK-SIFT algorithm therefore performs best.

SIFT feature point matching and image transformation
After SIFT feature point extraction and description of the overlapping area are completed, the FLANN matcher matches the feature points of the fixed image and the image to be matched. The KNN algorithm roughly judges the similarity between the feature descriptors of the two images to obtain a series of feature point pairs; the RANSAC algorithm then purifies these pairs, eliminating wrong matches, raising the ratio of effective feature point pairs, and yielding the optimal perspective transformation matrix. In the KNN algorithm, the Hamming distance is used as the distance measure:

D(l, r) = Σ_{i=1}^{M} (m_i ⊕ n_i)

where D(l, r) is the Hamming distance; l and r are the feature descriptor sets of the fixed image and the image to be matched, respectively; M is the total number of feature descriptors; and m_i, n_i are individual feature descriptors of the fixed image and the image to be matched, respectively. Because the image overlap area is usually the cupping-spot area, accurate transformation of the overlap is particularly important. The algorithm in this paper estimates the overlapping area and calculates a locally optimal transformation matrix for it, thereby reducing ghosting and deformation of the cupping-spot area during splicing.
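The coarse KNN step with Lowe's ratio test can be sketched as follows (our illustration with a Euclidean norm, as is usual for float SIFT descriptors; the paper specifies Hamming distance, for which the norm would be replaced by an XOR/popcount over binary descriptors):

```python
import numpy as np

def knn_ratio_match(desc1, desc2, ratio=0.8):
    """For each descriptor in desc1, find its two nearest neighbours in
    desc2 and keep the pair only if the nearest distance is below
    ratio * second-nearest (Lowe's ratio test). Returns index pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches
```

The surviving pairs would then be passed to RANSAC (e.g. a homography estimator) to reject the remaining outliers.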
The optimal perspective transformation matrix is a 3 × 3 matrix with 8 unknowns. Let the optimal perspective transformation matrix be K, the pixel coordinates in the fixed image be (x_1, y_1), and the pixel coordinates in the image to be matched be (x_2, y_2). The pixel coordinate transformation is

[x_1, y_1, 1]^T ∝ K [x_2, y_2, 1]^T,  K = [[k_0, k_1, k_2], [k_3, k_4, k_5], [k_6, k_7, 1]]

where k_0, k_1, k_3, k_4 jointly control the scaling and rotation of the image; k_2, k_5 control the translation distance; and k_6, k_7 control the perspective distortion.

Image fusion
After the pixel coordinates of the image to be matched have been transformed, the overlapping area is fused by the hat function-weighted fusion algorithm: the pixel at the center of the overlapping part of the fixed image and the image to be matched receives the maximum weight, and the weight decreases gradually from the center toward the sides, achieving progressive weighted fusion between the images. The weight of an overlapping pixel j is

ω(x_j, y_j) = (1 − |x_j / W − 1/2|) (1 − |y_j / H − 1/2|)

where (x_j, y_j) is the coordinate of pixel j and W, H are the image width and height, respectively. To unify the scale of the calculation, the ω(x_j, y_j) values are normalized, giving the weighted fusion

I(x, y) = [ω_1 I_1(x, y) + ω_2 I_2(x, y)] / (ω_1 + ω_2)    (7)

where I_1, I_2 are the fixed image and the image to be matched, respectively; (x, y) are pixel coordinates; and ω_1, ω_2 are the weights of the fixed image and the image to be matched in the overlapping part. Stitching with formula (7) eliminates cracks and black lines, making the transition at the seam more natural and the stitched image clearer.
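A minimal NumPy sketch of the hat-function weight map and the normalized fusion of formula (7) (our illustration, assuming the separable hat form given above):

```python
import numpy as np

def hat_weights(W, H):
    """Hat-function weight map: maximum at the image center, decreasing
    linearly toward the borders, separable in x and y."""
    wx = 1.0 - np.abs(np.arange(W) / W - 0.5)
    wy = 1.0 - np.abs(np.arange(H) / H - 0.5)
    return wy[:, None] * wx[None, :]          # shape (H, W)

def blend(I1, I2, w1, w2):
    """Normalized weighted fusion of the overlap, formula (7)."""
    return (w1 * I1 + w2 * I2) / (w1 + w2)
```

Because the weights are normalized pixel by pixel, the fused intensity always stays between the two source intensities, which is what suppresses visible seams.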

Experiment
In this section, we conduct experiments on images of multiple scenes. All experiments were implemented in PyCharm 2021.3.1 on a computer running 64-bit Windows 10 with 16 GB of memory and a 3.20 GHz processor. We also replaced some NumPy functions with CuPy, which provides CUDA acceleration. First, we conduct experiments and analysis on Fig. 1a-d. Then, we experiment on two further groups of images taken from the AANAP paper [21]; their original pictures are shown in Figs. 9 and 11, and the comparison results are shown in Figs. 10 and 12.
One of the most important existing panoramic stitching methods is AutoStitch [23], an excellent way to build panoramas from an image collection; its standard pipeline includes camera calibration, feature tracking, motion estimation, image warping, bundle adjustment, and image blending to suppress visual artifacts. In this paper, the experimental results are compared with those of AutoStitch, and the stitched results are examined with Canny edge detection [24]. The experimental flow chart is shown in Fig. 6, the comparison of stitching results in Fig. 7, and the comparison of Canny results in Fig. 8.
Over multiple runs, the average time of AutoStitch for stitching the multi-view images of cupping spots was 6.56 s, while the average time of the proposed algorithm was 3.34 s. Moreover, while maintaining accuracy, the proposed method is significantly better than AutoStitch in both splicing efficiency and splicing quality.
According to the Canny detection results, the back contour of the image stitched by AutoStitch is incomplete, while the back contour of the image stitched by the proposed algorithm is complete, which benefits subsequent automatic acupuncture point positioning.
After BRISK feature point matching, the matching factor between Fig. 9b and c is 0.871, the largest; the matching factors involving Fig. 9b or c are then removed, and among the remaining factors the largest, 0.734, is between Fig. 9a and d. Therefore, we first stitch Fig. 9b with c and Fig. 9a with d, and then stitch the two intermediate results together (Fig. 10).
After BRISK feature point matching, the matching factor between Fig. 11b and d is 0.792, the largest; the matching factors involving Fig. 11b or d are then removed, and among the remaining factors the largest, 0.364, is between Fig. 11a and c. Therefore, we first stitch Fig. 11b with d and Fig. 11a with c, and then stitch the two intermediate results together.
The above results show that, compared with AANAP, the algorithm in this paper retains its advantages of natural transitions and inconspicuous seams in other multi-view stitching scenes (Fig. 12).

Conclusion
1. For the multi-view cupping-spot scenario during cupping, this paper combines a binary tree structure, the BRISK and SIFT algorithms, the grayscale linear transformation method, the KNN and RANSAC algorithms, and the hat function-weighted fusion algorithm to propose a fast stitching method for multi-view images of cupping spots. 2. Experimental results show that the method extracts effective feature points well, is computationally efficient, and produces natural splicing transitions without cracks or black lines; the cupping-spot features and back contour are complete with no obvious deformation; and, compared with AutoStitch, the proposed algorithm takes less time and produces no ghosting.