Welding has had a tremendous impact on manufacturing industries worldwide; metalwork is indispensable to the automotive, aviation, and other process industries. Because the physical demands of welding are very high, robots are increasingly deployed in fabrication industries to meet demands for fast and accurate welding while reducing human intervention. Different manufacturing sectors use robotic welding for its precision and accuracy; it also reduces labor costs, keeps welders out of hazardous environments, and increases productivity and weld quality. Most robotic welding, however, has relied (and still relies) on operator instructions, and such methods do not account for the uncertainties associated with welding. On teach-and-play robots, programming is done well in advance of welding: the weld paths must be predefined, and the robot must be reprogrammed to weld under different conditions. This consumes considerable time, increases cost, reduces process efficiency, constrains the shapes that can be welded, and makes the process too inflexible to meet industrial needs. Such systems cannot handle real-world problems involving unpredictable environmental conditions. To address these challenges, robotic systems have been made more intelligent by incorporating computer vision.
The concept of weld seam tracking has evolved over the years, and researchers have used several techniques to achieve accurate tracking. Chronologically, weld seam tracking can be divided into three generations [1]. The first generation used a two-pass weld movement: the first pass obtained the seam geometry and the second pass performed the welding. Contact-type sensors predominated during this period; touch sensors and tactile sensors are examples of sensors used for seam tracking. Tactile signals obtained from such sensors are very useful for acquiring location features [2] at different portions of the area of interest. A knowledge base of possible weld-line features was created to separate weld-line features from those of other locations, and the collected features were matched against this knowledge frame for accurate seam tracking. These sensors have been used for both two- and three-dimensional edge tracking [3, 4]. Contact-type sensors are useful in the electronics industry, where minute welds are performed on circuits, but they are not suitable for medium or large welds because they require searching a large area for feature acquisition, which is time-consuming.
The second generation combined real-time seam tracking with welding. A variety of non-contact sensors came into use, such as arc sensors, ultrasonic sensors, and electromagnetic sensors. Among these, the arc sensor [1, 5] has been used most widely: the measured arc voltage changes as the distance between the torch tip and the workpiece changes, and stable tracking has been achieved based on this principle [6]. Arc sensors are used for large-diameter pipe welding in the shipbuilding industry, but they are effective only for plates up to 0.5 mm thick [7]. For thicker workpieces, such as 10–12 mm, ultrasonic sensors are used [8–11]; these measure the distance to any sensed obstacle. The major disadvantage of ultrasonic sensors is that the speed of sound depends on temperature, which limits their application in real-time sensing. In addition, an ultrasonic sensor can also detect unwanted obstacles, which can lead to wrong estimates of the weld-line position [1].
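The temperature sensitivity noted above is easy to quantify: in air, the speed of sound rises by roughly 0.6 m/s per degree Celsius, so an ultrasonic range reading calibrated at room temperature drifts near a hot weld zone. A minimal sketch, using the standard linear speed-of-sound approximation with illustrative temperatures and timing (not values from the cited works):

```python
def sound_speed(temp_c):
    """Approximate speed of sound in air (m/s) at temperature temp_c (deg C)."""
    return 331.3 + 0.606 * temp_c

def echo_distance(time_of_flight_s, temp_c):
    """Distance (m) to a target from a round-trip ultrasonic echo time."""
    return sound_speed(temp_c) * time_of_flight_s / 2.0

# A 1 ms round trip interpreted at an assumed 20 deg C ambient ...
d_assumed = echo_distance(1e-3, 20.0)
# ... but the air near the weld zone may actually be at 60 deg C:
d_actual = echo_distance(1e-3, 60.0)
error_mm = (d_actual - d_assumed) * 1000.0   # ranging error of about 12 mm
```

Even this modest 40-degree difference shifts the measured distance by over a centimeter, which is far larger than typical seam-tracking tolerances.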
The limitations mentioned above led to the third generation of weld seam tracking, which relies on vision-based technology. Initially, highly expensive infrared and laser-assisted sensors were used for flawless seam tracking [12, 13]. Infrared sensors capture radiation from the weld zone, yielding its thermal distribution as image data, which image processing algorithms then use to track weld seams in real time. An additional benefit of this kind of imaging is that it can detect both the weld gap and the depth of penetration. However, the method is not widely used owing to its high cost, and a very small gap in the weld line can produce an insignificant difference between the thermal distributions at the edges. To deal with these challenges, high-quality vision sensors such as CCD and CMOS cameras began to be used [14–16], but environmental noise made accurate weld-edge detection difficult to achieve. Laser-assisted vision sensing was therefore introduced: a laser vision system combines a CCD sensor with a laser source whose line is projected onto the weld seam [16–19], and the CCD sensor scans the projected laser for concentrated seam tracking. Although this setup enables successful seam tracking, it is not economical, which has led to the use of ordinary optical cameras together with improved tracking algorithms.
Machine vision has revolutionized the industry by making welding faster, cheaper, and simpler. In robotic welding, the robotic arm has six degrees of freedom and moves in three-dimensional space, with the welding torch mounted at the end of the arm. In one reported work, the workpiece was assumed to be at the center of the image to simplify the processing [20]: a search window segmented the foreground from the background by thresholding the intensity values in RGB space, and stereo matching detected the start and end points of a straight path, although the template points had to be known beforehand. A vision sensor has also been used for seam tracking in the GMAW process [15]: two focused windows covering the weld pool and the seam were introduced, and an improved Canny detection algorithm predicted the threshold values automatically and used edge fitting to detect the weld-line edges. However, these algorithms are limited to detecting straight weld seams. As an alternative, the boundary-subtraction method [21] compares the intensity on either side of each line detected by the Hough transform; if the pixel-intensity difference exceeds a threshold, the line is considered a boundary line and eliminated from further processing. The algorithm works to a certain extent but fails on low-contrast images. Another attempt tracked the weld path within an enclosed region of interest [22], focusing on finding the most efficient seam path for parametric shapes such as straight lines, zig-zag lines, and semi-circles, but the shape of the weld line had to be known before welding.
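The boundary-subtraction idea in [21] can be sketched as follows for a vertical candidate line. This is a NumPy-only toy with made-up intensities and a hypothetical threshold; the cited work applies the test to lines returned by the Hough transform at arbitrary angles:

```python
import numpy as np

def is_boundary_line(img, col, threshold, margin=3):
    """Boundary-subtraction test for a vertical candidate line at a column:
    compare mean intensities in narrow strips on either side of the line.
    A large difference marks a part/background boundary, not the seam."""
    left = img[:, max(col - margin, 0):col].mean()
    right = img[:, col + 1:col + 1 + margin].mean()
    return abs(left - right) > threshold

# Synthetic frame: dark plate on the left, bright region on the right,
# plus a faint (low-contrast) seam line inside the dark plate.
img = np.zeros((20, 40), dtype=float)
img[:, 20:] = 200.0   # plate/background boundary at column 20
img[:, 10] = 40.0     # seam candidate at column 10

# Keep only candidate lines that are NOT boundaries:
keep = [c for c in (10, 20) if not is_boundary_line(img, c, threshold=50)]
```

Here the high-contrast line at column 20 is rejected as a boundary while the faint seam at column 10 survives; the failure mode noted in the text appears when the seam itself has contrast comparable to the noise, so no threshold separates the two cases.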
For automatic welding of lap joints, a method was described for finding the initial and end points of the joint, and an algorithm was developed to link broken weld lines and recognize T-junctions in the weld seam through image processing [23]. However, lap joints are in high contrast with the background, which makes edge detection easier but does not represent all weld-assembly setups. In another study, as an alternative to traditional image processing, image segmentation, a CNN, and feature-point extraction were used to process noisy V-groove weld images acquired with laser sensors and to extract the center point of the groove [24]. Because image processing techniques depend on their surroundings, fuzzy decision-making theory has been used to determine the edge lines, with linear regression applied to obtain the coordinates of weld-seam contours for straight, kinked, and curved lines [25]. Although this method works well under different lighting conditions and performs seam tracking, it provides no information on weld parameters such as the gap and length from the captured image. Coordinating process parameters such as current and voltage with the weld gap is essential for producing a sound weld. The studies above assume a constant weld gap; in practice, depending on the seam shape, the gap may change at every seam point, so controlling weld penetration requires taking the gap variation into account. Among reported edge-detection techniques [26], an edge-fitting technique using the vision toolbox in LabVIEW was applied to obtain the edge lines of the weld seam, and the midpoint and seam gap of corresponding edge points were measured with the caliper tool of the Vision Assistant toolbox. This technique, however, used an expensive CCD camera with a frame grabber for image acquisition and the LabVIEW software platform for processing.
Moreover, no details were given on how corresponding points for gap measurement are found on curved weld profiles or at corners. In another work, the weld seam and weld pool were extracted from images using a series of image processing steps, and the edges of the weld seam were fitted using the least-squares method (LSM) [27]; the difference between the upper and lower fitted lines yields the weld gap. However, the setup used to capture images during welding is expensive and not industrially economical.
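The gap computation in [27] reduces to fitting a line through each detected edge and differencing the two fits. A minimal least-squares sketch with hypothetical pixel coordinates (the cited work first extracts the edge pixels from processed weld images):

```python
import numpy as np

def fit_edge(points):
    """Least-squares line fit y = m*x + c through detected edge pixels."""
    xs, ys = np.array(points, dtype=float).T
    m, c = np.polyfit(xs, ys, 1)
    return m, c

def seam_gap(upper_pts, lower_pts, x):
    """Vertical gap between the fitted upper and lower edge lines at column x."""
    mu, cu = fit_edge(upper_pts)
    ml, cl = fit_edge(lower_pts)
    return (ml * x + cl) - (mu * x + cu)

# Illustrative edge pixels: two near-parallel edges about 4 px apart.
upper = [(0, 10.0), (10, 11.0), (20, 12.0)]
lower = [(0, 14.0), (10, 15.0), (20, 16.0)]
gap_px = seam_gap(upper, lower, x=10)   # gap in pixels; scale by mm-per-pixel
```

Evaluating the fitted lines at the same column gives a per-point gap, so the variation along the seam can be tracked rather than assumed constant; the pixel gap still needs a camera-calibration factor to become a physical distance.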
In most research, a predefined region is introduced to lower the computational cost, the weldment is assumed to be at the center (which is not feasible in all cases), or the algorithm is not flexible enough to detect an arbitrary seam shape. Although weld gap and length affect weld quality, scant research exists on determining them automatically. The significance of this paper lies in the developed algorithm, which automatically detects the weld assembly (RoI) without any prior knowledge of the weld seam and discards irrelevant information; no predefined region is required to separate the weldment from the background. The paper describes performing seam tracking on any given weld shape and finding the gap and length of the weld seam by applying image processing methods. Moreover, the camera used in the present work for image acquisition is a low-cost webcam, and both image acquisition and processing are completed within the minimal wait time (2 s) of the robot arm at its ‘Home’ position, so the proposed weld-path detection and measurement do not add process downtime. In contrast, real-time vision-based seam-tracking systems need expensive CCD or CMOS cameras with various optical filters and dimmer glasses to capture images near the weld pool, and a given filter cannot accommodate the different lighting conditions produced by varying arc intensity [28]. In an alternative approach, the seam information was approximated from the center of the weld pool captured by a CCD camera [29]; however, because of disturbances from arc light and radiation from the weld pool, falsely predicted points had to be eliminated using a threshold set from user experience, which makes the method unsuitable for very curvy paths or sharp corners. For example, on a weaving path, the dynamic torch movement affects the seam tracking.
Moreover, tracking gradual changes in the weld gap is not possible in arc welding if the torch moves fast or along a highly non-linear path. The present solution also overcomes this issue by providing prior knowledge of the weld gap. The algorithm presented in this paper can meet the requirements of industrial production with good reliability; the developed model is cost-effective, robust, and industrially deployable.