Development of structured light 3D-scanner with high spatial resolution and its applications for additive manufacturing quality assurance

Digital three-dimensional (3D) scanning is a cutting-edge metrology method that can digitally reconstruct surface topography with high precision and accuracy. Such metrology can help traditional manufacturing processes evolve into a smart manufacturing paradigm, which ensures product quality through automated sensing and control. However, due to limitations in spatial resolution, scanning speed, and the size of the focusing area, commercially available systems cannot be directly used for in-process monitoring in smart manufacturing. For example, a metal 3D printer requires a scanner with seconds-level sensing, micron-level spatial resolution, and a centimeter-scale scanning region. Among the 3D scanning technologies, structured light 3D scanning can meet the scanning speed criterion but not the spatial resolution and scanning region criteria. This work addresses these challenges by reducing the field of view of a structured light scanner system while increasing the image sensor pixel resolution. Improvements to spatial resolution and accuracy are achieved by establishing hardware selection criteria, integrating the proper hardware, designing a scale-appropriate calibration target, and developing noise reduction procedures during calibration. An additively manufactured Ti-6Al-4V part was used to validate the effectiveness of the proposed 3D scanner. The scanning results show that both the melt pool geometry and the overall shape can be clearly captured. Finally, the scanning accuracies of the proposed scanner and a professional-grade commercial scanner are validated against a nanometer-level-accuracy white light interferometer using high-density point cloud data. Compared to the commercial scanner, the proposed scanner improves the spatial resolution from 48 to 5 μm and the accuracy from 108.5 to 0.5 μm. Compared to the white light interferometer, the proposed scanner reduces the scanning and processing time from 2 h to 20 s.


Background
In advanced manufacturing, automatic quality control is a key enabler for defect detection and mitigation based on sensor technologies. Additive manufacturing (AM) provides a novel way to fabricate parts by layer-wise addition of material [1]. However, its sustainability is constrained by the inherent limitations of layer-by-layer fabrication, leading to numerous defects such as balling, porosity, and distortion [2]. Therefore, online layer-wise monitoring is an important research area since defects that occur during printing may severely deteriorate product quality [2]. The 3D surface topological information for each layer usually includes critical quality information, such as melt pool size, surface roughness, pores, other defects, or unexpected process alterations. For example, Parry et al. [3] indicated that the melt pool size directly correlates with penetration depth, residual stress, and overall geometric precision. The 3D surface topological information can be obtained by 3D scanning, a group of sensor techniques that supersede traditional point-to-point measurement by providing 3D point cloud data to evaluate the geometrical and dimensional quality of manufactured parts. These techniques have already been applied in industries such as construction, entertainment, and medical instruments [4][5][6]. However, their use for online process monitoring and control in advanced manufacturing is very limited, primarily due to the insufficient spatial resolution and slow scan speed of current 3D scanning technologies [7,8]. Figure 1 shows a high-resolution image of a metal AM part surface printed in Ti-6Al-4V titanium alloy by an EBM printer (GE Arcam Q10 Plus), with a microscopic view of the solidified melt pool. Figure 1b shows that the solidified melt pool can be identified by the surrounding wrinkles, which are around 20 μm wide.
To accurately locate and describe these wrinkles, 3D scan data with a spatial resolution of 5 μm or higher is needed, which is difficult to achieve under the stringent scanning speed requirement of online AM printing.
Given that AM involves many layers of printing, the scanning speed should be on the scale of several seconds in order to make the 3D scanner feasible for online process monitoring. For example, the Zygo NewView 8200 white light interferometer has a very high spatial resolution (< 2 μm), but it is extremely time-consuming (> 2 h) for a 15 × 15 mm² area scan. This makes it impossible to use for online monitoring of surface topography in metal AM.
Among all types of 3D scanning techniques, the structured light 3D scanner (SLS) is chosen in our work because it has great potential to achieve the goals mentioned above due to its adjustable field of view (FOV), fast scanning, and relatively simple structure. Despite this potential, neither the commercialized SLS systems nor the current state-of-the-art research can reach the 5-μm spatial resolution requirement. For example, the HP 3D SLS Pro [9] (the benchmark scanner in this work) and the EinScan-Pro+ [10] are professional-grade commercial scanners but can only achieve 50-μm spatial resolution. Rao et al. [11] achieved 15-μm spatial resolution, and Liu et al. [12] achieved 10 μm, but these are still not precise enough for EBM melt pool monitoring. Challenges exist in the 3D scanner system design and the calibration procedure, which motivated the research reported in this paper.

Contribution
This paper aims to design and implement a new SLS that can meet the in-process monitoring needs of metal AM. More specifically, the scanner should have 5-μm spatial resolution, seconds-level scanning speed, low cost, and compact size. This level of spatial resolution, to the best of our knowledge, has not been achieved by any SLS on the market or in the literature (currently, the best performance in the literature is 10 μm [12]). The developed scanner has a very fast scanning speed at the seconds level, making it feasible for online monitoring of AM processes. Note that instead of scanning the entire print bed, the SLS developed in this work is intended to scan critical local regions of the part that have stringent quality requirements and thus need very high spatial resolution scan data to analyze the surface topological features (such as the wrinkles in Figure 1b). This scanner fills the gap in micron-level resolution scanning and could be implemented in areas such as biomedical scanning (for example, bone tissue scanning), in-process quality control in precision instrument manufacturing (for example, dental devices, watches, and gas turbine blades), and online process monitoring of additive manufacturing as addressed in this paper. This paper is organized as follows. Section 2 introduces the current 3D scanning technologies regarding their applicability to metal AM in-situ monitoring, along with the challenges related to high spatial resolution SLS design and calibration. Section 3 proposes the system design procedure and the hardware selection of the SLS. Section 4 devises a new calibration procedure and noise reduction technique to achieve a high spatial resolution SLS. In Section 5, the details of the system integration and fixture design are discussed. The accuracy of the system is validated qualitatively and quantitatively in Sec. 6.

Fig. 1 a A high-resolution optical camera image of an EBM-printed Ti-6Al-4V part surface with a letter R on it (the surface has been treated with mild sandblasting for reflection removal). b A microscopic view of a melt pool at the cropped area

Review of related technologies and research
Different types of scanners have their own suitable applications according to their sizes, prices, working distances, spatial resolutions, accuracies, and scanning areas. In Sec. 2.1, the performance of common 3D scanning techniques is reviewed. Section 2.2 introduces the working and calibration principles of SLS and reviews current design and calibration methods for high spatial resolution SLS. Section 2.3 introduces the existing SLS accuracy validation methods and identifies the research gaps that are addressed in this work.

Review of 3D scanning techniques
Starting from the contact probe method [13] in the 1980s, 3D scanners have evolved into many forms, such as photogrammetry [14], time-of-flight [15], laser triangulation [16], and interferometry [8]. This section discusses the potential of these techniques to be improved for in-process monitoring applications in metal AM.

Photogrammetry
Photogrammetry uses computer vision and computational geometry algorithms to reconstruct 3D geometries from 2D pictures. Reconstruction algorithms rebuild object geometries by matching surface markers found in pictures taken from different angles and locations. This method is cost-effective since only one camera is needed. However, it suffers from low accuracy (mm level) and slow reconstruction time (hour level), so it is primarily used for building, landscape, or hobby imaging. It is not realistic to implement this method in the additive manufacturing process since over a hundred images from different angles are required for a high-quality scan.

Time-of-flight (TOF)
The time-of-flight scanner is a laser range finder. It calculates the distance between the measuring surface and the TOF machine by counting the round-trip time for the laser to travel. This method is only ideal for large-scale object measurement because the accuracy is low (mm level). The measurement time directly scales with the area of interest since only one data point can be obtained at a time.

Interferometry
Interferometry is a type of high-accuracy (nm-level) surface metrology method. It measures height by using the interference pattern of two coincident light beams. The interference pattern characteristic depends on the distance between the light source and the object. Interferometry has extremely high accuracy, but the system has a large physical size. Furthermore, the scanning speed is very slow since it covers only a very limited area in each scan. Even using a low-magnification lens (3×), an area of 15 × 15 mm² takes 2 h to scan.

Laser triangulation
Triangulation uses three points to determine the surface geometry: the light source, a light dot on the part surface, and the light sensor. Different distances between the measuring surface and the scanner will cause the light dot to appear at different sensor locations. When the surface height varies, the location of the laser dot on the sensor shifts correspondingly in-plane. Laser triangulation can operate in either line or dot scanning mode. The former generates a line of data for each scan, and the latter generates only one data point. The accuracy of these two modes is limited by the line width and the laser spot size (sub-mm level), respectively. The laser triangulation system is also relatively expensive since the laser itself is a high-end product. The common spatial resolution for laser triangulation is over 50 μm, and the scanning frequency is 100 Hz. Even though line scanning is more efficient than dot scanning, it is still not fast enough for use in AM since only one line of data is obtained at a time.

Structured light scanner (SLS)
SLS is another triangulation method. Instead of shooting a dot or line as in laser triangulation, SLS uses a projector to project fringe patterns onto the measuring surface. For a single scan (several seconds), it can capture the entire projected area. This enables rapid data collection compared with the laser triangulation method. The covered area can be adjusted by refocusing the camera and projector to the desired FOV. However, due to the hardware size and shape limitation, the FOV is limited to tens of centimeters, and the resulting spatial resolution is limited to the sub-mm level. A smaller FOV will yield a higher spatial resolution scan but creates new challenges in system design and calibration.
In summary, photogrammetry has relatively large errors, and it is not realistic to move a camera around during the printing process; TOF captures only one point at a time, and its low accuracy prevents accurate measurement of melt pool geometry; interferometry has high accuracy, but its scanning speed is very slow, and it is too large to be installed inside a metal AM machine; laser triangulation has relatively high accuracy, but it also has a high cost and supports only line scanning, which is not sufficient for rapid area scanning. The most suitable solution for AM in-situ monitoring is SLS because it has the advantages of low cost, compact size, relatively high accuracy, and fast scanning speed. However, it still needs a higher spatial resolution to capture very small features such as the wrinkles in Figure 1b. This work aims to address these limitations by proper hardware integration and improvements to the calibration process.

Introduction of SLS working principle and review of the calibration methods
A dual-camera SLS typically consists of a projector and two cameras [17,18]. As illustrated in Figure 2a, the projector creates sinusoidal black and white fringe patterns on the target surface. The fringe pattern is distorted by variations in the surface height of the sample, which can be precisely captured by the cameras. SLS utilizes the triangulation principle to calculate the relative position of the measuring points and the center of the scanning system [19]. A triangle (the bold red triangle shown in Figure 2a) is formed by the point of interest on the object and the two cameras in the system. The triangle geometry can be completely determined by (1) the distance L between the cameras, and (2) the angles α₁ and α₂ formed between the line connecting the two cameras and the lines connecting each camera to the measurement point. This angle and distance information is acquired during the calibration process. Typically, the spatial relationship between the two cameras is calculated from 20 to 30 pairs of images of the calibration target taken at different angles and positions.
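The triangle relation described above can be turned into coordinates directly. The sketch below (an illustrative Python helper; the function name and the baseline-aligned coordinate convention are our assumptions, not part of the authors' system) locates the measured point from L, α₁, and α₂ via the law of sines:

```python
import math

def triangulate(L, alpha1, alpha2):
    """Locate a surface point from the camera baseline length L and the
    angles alpha1, alpha2 (radians) between the baseline and each camera's
    line of sight, using the law of sines on the triangle of Fig. 2a."""
    gamma = math.pi - alpha1 - alpha2            # angle at the measured point
    d1 = L * math.sin(alpha2) / math.sin(gamma)  # camera 1 to point distance
    # Coordinates in the baseline frame: x along the baseline, z toward the part
    return d1 * math.cos(alpha1), d1 * math.sin(alpha1)
```

For example, with L = 100 mm and α₁ = α₂ = 60°, the point sits 50 mm along the baseline and about 86.6 mm away from it.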
A calibration target is a flat board that contains a black and white checkerboard pattern (see Figure 2b). The positions where the black squares intersect are called reference points.
By comparing the locations of these reference points in the images taken by the different cameras, the translational and rotational relationships between their coordinate systems can be obtained. Due to lens imperfections, the images may have different levels of distortion across the measuring space. By analyzing the reference points within each image, the lens distortion can be calibrated and compensated in the measurement. Therefore, the quality of the calibration target image is a crucial factor that determines the accuracy of the calibration. The image quality is influenced by both the calibration target and the image capturing procedure.
The key challenges in improving the spatial resolution are the system design and calibration for a small FOV (below 50 × 50 mm²). The system design requires balancing the specifications of the hardware components (camera, lens, and projector), for example, the tradeoff between coverage area and spatial resolution. The calibration needs to focus on the quality and size of the patterns on the calibration target, in addition to noisy image-taking environments that may result from non-ideal lighting. Some optimization techniques and mathematical methods to precisely calibrate the scanner have been reported in the literature. For example, Li and Zhang [21] used telecentric lenses with Levenberg-Marquardt optimization to calibrate a microscopic 3D scanner over a 10 × 8 × 5 mm³ volume. However, the spatial resolution of this system is 10 μm, and the accuracy is 1.8 μm. This is not precise enough to capture the geometry of a melt pool. Rao et al. [11] refined an existing calibration algorithm with an image deblurring function and were able to calibrate the scanner over a 23.7 × 17.78 mm² area with 15-μm spatial resolution and an improved accuracy of 5 μm. Liu et al. [12] used a combination of affine camera factorization and a bundle adjustment algorithm to calibrate the scanner over a larger volume (34.6 × 29 × 6 mm³). This method does not require a high-precision calibration target. However, the spatial resolution of their system is 14 μm, and the accuracy is 10 μm.

Review of SLS accuracy validation methods
Generally, the accuracy of an SLS is determined by the root mean square error (RMSE) and the standard deviation (σ) between the measurement of a flat surface and a plane fitted to that measurement [7,22,23]. This method has two limitations. First, the color and finish of the standard target might differ from the surface in the application. Second, the errors are assessed by comparison with the fitted plane, which differs from the ground truth surface. Li and Zhang [21] used a high-accuracy linear stage to move an object and used the 3D scanner to track a marker point on the object, comparing the measurement with the actual distance and direction moved. This method covers a larger volume and has a ground truth reference (the linear stage), but it verifies the accuracy through one-dimensional (z) measurement, and only a limited number of points is tracked. In summary, among all the different types of 3D scanning technologies, SLS has the greatest potential to be adapted for AM online monitoring. However, no existing SLS fulfills the 5-μm spatial resolution requirement for solidified melt pool surface topology, which also demands a new calibration method and a new accuracy validation method. These challenges are addressed in this paper.
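The plane-fit accuracy metric described first can be computed in a few lines (an illustrative sketch; the least-squares plane uses vertical residuals, which is a common simplification):

```python
import numpy as np

def flatness_accuracy(points):
    """Fit a least-squares plane z = a*x + b*y + c to an N x 3 flat-surface
    point cloud and return the RMSE and standard deviation (sigma) of the
    out-of-plane residuals, as used for SLS accuracy reporting."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2))), float(np.std(residuals))
```

Because the reference is the fitted plane rather than the ground truth, a systematic bow in the scan lowers the apparent error, which is exactly the second limitation noted above.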

SLS system design and hardware selection
In this section, the system design for the new SLS is presented in Sec. 3.1, and the selection criteria for the camera, lens, and projector are discussed in Sec. 3.2.

SLS system design
This work uses the dual-camera form of SLS, which has improved measuring accuracy compared with the single-camera form. The design of the system follows the conventional dual-camera system layout (Figure 2a), which consists of two cameras, two lenses, and a projector. The spatial resolution requirement (5 μm) is used as the initial constraint for the hardware selection, and it is determined by the spatial resolutions of both the cameras and the projector as follows:

SR_SLS = max(SR_Camera, SR_Projector)    (1)

where SR represents spatial resolution.

Camera spatial resolution
As Figure 3a shows, a camera can capture images because its lens passes the light reflected from the target object onto its internal image sensor. The image sensor consists of an array of small photosensors, each of which produces a pixel in the resulting image. The total number of photosensors is termed the pixel resolution, and the physical dimension of each photosensor is called the pixel size. These are the two major specifications of an image sensor, and they directly determine the sensor size as follows:

Sensor size = Pixel size × Pixel resolution    (2)

The camera spatial resolution is the physical distance between two adjacent pixels in the image. The smaller the distance, the higher the camera spatial resolution. It is determined by both the internal image sensor and the camera lens as follows [24]:

SR_Camera = FOV_Camera / Pixel resolution    (3)

FOV_Camera = Sensor size × u / f    (4)

where SR_Camera stands for the spatial resolution of a camera; FOV_Camera is the field of view of the camera (the area the camera can cover at the working distance); u is the working distance of the camera (the distance between the lens and the object); and f is the focal length of the lens (the distance between the lens and the sensor). Illustrations of these terms are shown in Figure 3a. Based on Eqs. (1)-(4), the camera spatial resolution reduces to SR_Camera = Pixel size × u / f: it is proportional to the pixel size and inversely proportional to the focal length, while the pixel resolution sets the coverage area (FOV) achievable at a given spatial resolution. The following basic methods can be used to improve the camera spatial resolution, as illustrated in Figure 3b. All of these methods are integrated into the proposed work and are presented in detail in Sec. 3.2.

- If the focal length of the camera lens is increased, the FOV becomes smaller and, consequently, the spatial resolution is improved (see Figure 3(b2)).
- If the sensor pixel size is reduced, the FOV becomes smaller and, consequently, the spatial resolution is improved (see Figure 3(b3)).
- If the sensor pixel resolution is increased, the camera spatial resolution can be improved directly (see Figure 3(b4)).

Projector spatial resolution
The projector shares a similar principle with the camera in terms of spatial resolution. The two limiting features are the lens and the microdisplay. Here the microdisplay is analogous to the sensor in the camera, but it is used to project the image onto the object. In general, the resolution of a projector microdisplay (1280 × 720 pixels) is much lower than that of a camera sensor (3000 × 4000 pixels). Therefore, the projector is generally considered the bottleneck for improving the SLS spatial resolution. However, this issue is addressed by software and implementation techniques so that the resolution of the projector does not affect that of the SLS system. Specifically, two techniques from the literature are adopted in our design to eliminate the impact of the projector resolution on that of the 3D scanner, namely, the phase-shifting method and the defocusing technique [25,26]. Instead of a single image projection, the phase-shifting method [25] projects multiple patterns (in this research, six patterns were used) with equally divided 2π/6 phase shifts and uses the combination of these six grayscale readings to distinguish adjacent points. Secondly, the defocusing technique [26] is utilized, which helps remove grayscale discontinuity (see the comparison between Figure 4a and b). Thus, by adopting these two methods, the spatial resolution of our SLS is not affected by the projector but is determined by the camera only, as long as the projector can focus on a similar FOV as the cameras. Correspondingly, Eq. (1) can be simplified as Eq. (5):

SR_SLS = SR_Camera    (5)
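The six-step phase-shifting computation can be sketched as follows (an illustrative snippet; the fringe pitch, projector width, and 8-bit intensity scaling are our assumptions):

```python
import numpy as np

N = 6                            # six patterns with 2*pi/6 phase shifts
WIDTH, PITCH = 1280, 16          # assumed projector width and fringe pitch (px)

def make_patterns():
    """Generate one row of each of the six phase-shifted sinusoidal fringe
    patterns (replicated vertically when projected)."""
    base = 2 * np.pi * np.arange(WIDTH) / PITCH
    shifts = 2 * np.pi * np.arange(N) / N
    return np.array([127.5 + 127.5 * np.cos(base + s) for s in shifts])

def wrapped_phase(intensities):
    """Recover the wrapped phase per pixel from the six grayscale readings
    I_k = A + B*cos(phi + 2*pi*k/N) via the standard N-step formula."""
    shifts = 2 * np.pi * np.arange(N) / N
    S = np.tensordot(np.sin(shifts), intensities, axes=1)
    C = np.tensordot(np.cos(shifts), intensities, axes=1)
    return np.arctan2(-S, C)
```

The recovered phase varies continuously across each fringe period, which is why adjacent camera pixels can be distinguished even when the projector's own pixel grid is coarser.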
As exemplified in Figure 1, in AM process monitoring, small areas such as 15 × 15 mm² should be covered by the FOV of the SLS. According to Eq. (3), the pixel resolution of the camera needs to be at least 3000 × 3000 to satisfy the SLS spatial resolution requirement (5 μm), which is the starting point of camera and lens selection.
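These sizing relations can be checked numerically (a sketch assuming the simple pinhole relations, spatial resolution = FOV / pixel resolution and FOV = sensor size × u / f, with the hardware values quoted in this section):

```python
# Assumed hardware values from Sec. 3 of this paper
pixel_size_um = 3.45            # photosensor pitch
pixel_res = (3000, 4000)        # sensor pixels (rows, columns)
fov_mm = (15.0, 20.0)           # desired field of view
u_mm = 80.0                     # working distance
target_sr_um = 5.0              # required SLS spatial resolution

# Sensor size = pixel size * pixel resolution
sensor_mm = tuple(pixel_size_um * n / 1000.0 for n in pixel_res)  # ~10.35 x 13.8

# Camera spatial resolution = FOV / pixel resolution
sr_um = fov_mm[0] / pixel_res[0] * 1000.0        # 15 mm / 3000 px = 5 um

# Minimum pixel count to reach 5 um over a 15-mm FOV
min_pixels = fov_mm[0] * 1000.0 / target_sr_um   # 3000

# Focal length needed to shrink the FOV to 15 mm at u = 80 mm
focal_mm = sensor_mm[0] * u_mm / fov_mm[0]       # ~55 mm
```

The ~55-mm result is consistent with the "at least 54 mm" focal length derived in the lens selection discussion below.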

SLS components selection
A dual-camera SLS consists of two cameras, two lenses, and a projector. The selection criteria for these components are discussed as follows.

Camera selection
A good SLS camera for use in metal AM in-situ monitoring should have a compact size, high frame rate, high pixel resolution, low noise, and small pixel size. The challenge here is to balance these requirements. Since the pixel resolution is set by the spatial resolution requirement as discussed in Sec. 3.1, the selection begins with the sensor pixel size. A smaller pixel size will improve the spatial resolution, given that all other criteria are fixed. However, if the pixel size is too small, the noise level will be high. A sensor with a 3.45-μm pixel size was chosen for our system because it can capture melt pool details during online monitoring without sacrificing imaging quality. To fit the cameras in the small-FOV configuration required for metal AM, a machine vision camera (FLIR GS3-U3-123S6M-C) was selected due to its compact size and high pixel resolution (3000 × 4000), which ensures a large coverage area without sacrificing spatial resolution. Its frame rate is 30 Hz, which satisfies the scanning speed requirement mentioned in Sec. 1.1 (< 5 s for 45 images). The resulting FOV is 15 × 20 mm² due to the aspect ratio of the sensor. This FOV is called "the desired FOV" in this paper and results in a camera (and thus SLS) spatial resolution of 5 μm.

Fig. 3 a Illustration of the relationship between the FOV, the working distance (u), the focal length (f), the pixel size, the sensor size, and the spatial resolution. b Methods for improving spatial resolution: (b1) the original setting before any adjustment; (b2) reducing the FOV improves the spatial resolution but sacrifices coverage area; (b3) reducing the pixel size improves the spatial resolution but sacrifices coverage area; (b4) reducing the pixel size while increasing the pixel resolution improves the spatial resolution without sacrificing coverage area

Lens selection
To avoid damage caused by the heat from the metal AM part surface, a relatively long working distance (u > 80 mm) needs to be maintained. According to Eqs. (3) and (4), given the pixel size (3.45 μm), the pixel resolution (3000 × 4000), the working distance (80 mm), and the FOV (15 × 20 mm²), the focal length f needs to be at least 54 mm. Moreover, the lens should have an appropriate resolving power, which is the minimal distance between two lines or points that can be distinguished by the lens. It is determined by the optical polishing quality of the lens. In this work, the resolving powers of eight different lenses with focal lengths over 55 mm are determined by using the 1951 USAF Resolving Power Test Target [27], as shown in Figure 5a. The conversion between the group and element number and the resolving power can be obtained by using Table 3 in the appendix. The lenses are all set to 65-mm focal lengths. Figure 5b and c show an example of the comparison between high and low resolving power lenses. A 55-75-mm zoom semi-telecentric lens is selected since it has the highest resolving power. It meets the Group 7, Element 2 standard and has a resolving power of 3.48 μm (Table 3). This level of resolving power is very close to the chosen pixel size of 3.45 μm. Therefore, it does not have a substantial influence on the spatial resolution of the whole system.
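The group/element reading can also be converted to a resolving power without the lookup table (an illustrative helper using the standard 1951 USAF target formula):

```python
def usaf_line_width_um(group, element):
    """Convert a 1951 USAF target reading to the resolvable line width (um).
    The target resolution is 2**(group + (element - 1) / 6) line pairs per mm;
    the resolvable line width is half of one line-pair period."""
    lp_per_mm = 2.0 ** (group + (element - 1) / 6.0)
    return 1000.0 / (2.0 * lp_per_mm)
```

Group 7, Element 2 gives about 3.48 μm, matching the value quoted above for the selected lens.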

Projector selection
As for the projector, the one with the smallest microdisplay (AAXA P2) was selected due to its compact size. The selected projector has a 1280 × 720 resolution, and the lens was modified with an additional condenser lens to shift the projection area from 15 × 20 cm² to 18 × 24 mm², which is similar to the desired FOV. Even though the projector has a lower spatial resolution than the selected cameras, it does not affect the spatial resolution of the 3D scanner since the phase-shifting method and the defocusing technique are adopted in our work, as discussed in Sec. 3.1.

Design, fabrication, and testing of the calibration target
Commonly used calibration targets are slightly smaller than the expected FOV (300 × 400 mm² to 600 × 800 mm²). However, the desired FOV in this work is quite small (15 × 20 mm²). Therefore, no commercial calibration target is available to fulfill this need. Additionally, the surface quality of regular substrate materials such as paper or plastic is inadequate for high-precision calibration targets. The checkerboard pattern on the target (see Figure 2b) is typically made by either an inkjet or a laser printer, whose print quality is normally up to 1200 dots per inch (DPI), equivalent to 21 μm between two adjacent dots [28]. This level of spatial resolution is significantly lower than the requirement of our proposed SLS (5 μm). As shown in Figure 6a, the printed shape consists of many small ink drops (represented by the gray circles in the top left). When the DPI is low, the ink drops are large. This makes the printed region (marked in black) larger than the designed shape (marked by the white dashed line), and consequently, the reference points will be very difficult or even impossible to identify. As for the substrate, fine-surface paper or matte-surface plastics are commonly used materials. Neither is smooth enough to prevent undesired reflections. The relatively rough surface will cause misalignment of the reference points, and the reflection nonuniformity will cause image noise. These two issues adversely affect the accuracy of the calibration.
In this work, a new calibration target is designed with a pattern that can fit in the small FOV (15 × 20 mm²) needed for the online monitoring of EBM (Figure 7a). The same target pattern as the commercial scanner is used in this work, and different approaches to improve the target pattern printing and substrate quality have been implemented. The microscopic views of the red-circled region in Figure 6a are shown in Figure 6b-d, which correspond to different substrate materials and pattern printing methods. The pattern shown in Figure 6d is printed by physical vapor deposition (PVD) of chrome metal on a smooth ceramic substrate. By comparison with the patterns in Figure 6b and c, it is clear that the improved target shown in Figure 6d has the most uniform surface and the sharpest reference point intersections at this measuring scale. The ceramic surface is chemically treated to give a matte finish with 0.4-0.7-μm surface roughness (Ra) (compared with paper's 3-10-μm Ra [29]). The final calibration target used in our proposed 3D scanner is shown in Figure 7b.

Noise reduction in the calibration image taking process
From Figure 6d, it is clear that the new calibration target has a smoother surface, but this alone cannot ensure a successful calibration for the desired FOV. An ideal calibration image should have high contrast between the black and white regions and be free of noise, as Figure 8a illustrates. However, the current calibration image has a very high noise level, as shown in Figure 8b, which will result in lower accuracy or even failure of the calibration. There are two types of imperfections in the calibration image of Figure 8b. The first is the numerous bright spots in both the white and black regions, and the second is the brightness variation at the black-white boundaries. Therefore, two noise reduction techniques for calibration image processing are discussed in the following sections to address these two types of problems.

Noise reduction by using an external parallel light source
The first type of imperfection is caused by an imperfect lighting source during the calibration. To help with the image-taking process, the calibration target needs to be illuminated, which is typically done by the projector. However, this approach cannot be applied to the small-area (15 × 20 mm²) calibration because the tiny spherical areas on a matte calibration target surface (Figure 8c) will reflect the illumination light and create bright spots. To avoid this, the light beams from the light source need to be parallel to each other, and the direction of these beams needs to be as parallel to the target surface as possible. Since the projector is a point light source, a separate parallel light source is used. Unlike the projector, which points perpendicular to the calibration target, the parallel light source points as parallel as possible to the calibration target. Additionally, a polarizer is added to the light source to enhance its beam parallelism. Compared to Figure 8b, Figure 9a shows the reduced intensity of the bright spots after changing the pointing direction of the light source from perpendicular to near-parallel, and Figure 9b illustrates the reduced number of bright spots after enhancing the beam parallelism by adjusting the polarizer.

Fig. 6 a Illustration of the effect of insufficient printing resolution. Microscopic partial views of the red-circled region of calibration targets made by: b fine paper and a 1200-dpi printer, c 4000-grit polished plastic and a 1200-dpi printer, and d chemically treated ceramic and PVD (adopted in this work)

Noise reduction by overexposure
The second type of imperfection in the calibration image is caused by axial chromatic aberration [30], which means that the lens is not able to focus different colors onto the same plane (see Figure 9c). Axial chromatic aberration is a typical problem for long-focal-length (f > 50 mm) lenses, which are required to focus on a small FOV (such as 15 × 20 mm²). Since SLS requires only monochrome images, the color information is not important, and the aberration effect can be reduced by overexposing the white regions. Traditionally, all existing calibration methods require exposing both the black and white regions within the camera's dynamic range. In general, both overexposure and underexposure are to be avoided. However, with properly controlled overexposure, the axial chromatic aberration can be significantly reduced, resulting in very sharp-contrast calibration images. Overexposure is achieved by properly adjusting the exposure time, which is set to the point at which all the white regions are overexposed while the black regions are not.
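The exposure-time rule above can be expressed as a simple selection criterion (an illustrative sketch; the capture interface, region masks, and the 128 black-level threshold are our assumptions):

```python
import numpy as np

def pick_overexposure(images_by_exposure, white_mask, black_mask,
                      sat=255, black_limit=128):
    """Return the shortest exposure time at which every pixel in the known
    white checker regions is saturated while no pixel in the black regions
    exceeds black_limit. images_by_exposure is a list of
    (exposure_time, 8-bit grayscale image); the masks are boolean arrays."""
    for t, img in sorted(images_by_exposure, key=lambda pair: pair[0]):
        if (img[white_mask] >= sat).all() and (img[black_mask] < black_limit).all():
            return t
    return None
```

Taking the shortest qualifying exposure keeps the black regions as dark as possible while still washing out the chromatic fringing at the boundaries.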
An example of the resulting calibration image after applying these two noise reduction methods is shown in Figure 10b, in which the two types of calibration imperfections are significantly reduced compared with Figure 10a. The scanning results of a flat surface before and after applying the calibration noise reduction are shown in Figure 10c, d, respectively. Comparing these two images shows that a significant scanning accuracy improvement has been achieved.

Calibration procedure
In this section, a special calibration procedure for the small-area (15 × 20 mm²) calibration is implemented, which includes all the noise reduction methods presented in Sec. 4.2. The procedure consists of three steps, as shown in Figure 11, and detailed explanations are provided as follows.

Fig. 7 a Calibration target design (unit in mm). b A real 6 × 4.5-mm calibration target sample

Fig. 8 a A section of a theoretically perfect calibration image. b A section of an experimental calibration image with imperfections. c An illustration of spherical-area reflection from the projector to the camera

Step 1: Camera position adjustment and lens focusing
In step 1.1, the two cameras are positioned at the desired working distance (calculated by Eq. 4; u = 80 mm in this work) from the calibration target. In step 1.2, an angle of at least 10° (the minimum angle required by the SLS triangulation calculation) is set up between the two cameras. A much larger angle might cause problems in calibration because the left and right sides of the image from each camera could be out of focus. In step 1.3, the camera aperture is set to the maximum under ambient room lighting. This creates the shallowest depth of field and assists in focusing the cameras. Next, the focus of each camera is adjusted until the middle of the calibration target is sharp and the left and right sides are equally blurred. Steps 1.2 and 1.3 are repeated until both cameras point at the center of the calibration target and are properly focused.

Step 2: Over-exposure calibration
In Step 2.1, the aperture of each lens is set to f/16 to improve the depth of field for calibration and measurement; this is the smallest aperture that avoids intense chromatic aberrations. Then, the lighting source is set up, and the polarizer is adjusted as discussed in Sec. 4.2 until both cameras receive the dimmest light input. In Step 2.2, the overexposure technique discussed in Sec. 4.2 is used to set the proper exposure time. These two methods significantly reduce the noise in the calibration images. Once the camera and lens adjustments are finished, in step 2.3 a number of calibration images are taken with the calibration target at different locations and angles. Then, the calibration program is executed.

Step 3: Projector positioning and focusing
In Step 3.1, the projector is placed 65 cm (the shortest focusing distance of the projector after the lens modification presented in Sec. 3.2) away from the sample surface, and the projector lens is finely adjusted to focus on the measuring plane. In Step 3.2, once the projector is focused, the projector is moved 0.5 mm towards the camera. This introduces a slight defocus (explained in Sec. 3.1) and improves the fringe pattern smoothness.

System integration and fixture design
In this section, a dual-camera SLS [18] utilizing all the techniques discussed in Secs. 3 and 4 is developed (see Figure 12). The system consists of multiple XYZ and RZ stages for fine-tuning the position and direction of each hardware component. The two cameras are mounted on a slider rail, which gives each camera an additional two degrees of freedom in the X and RZ directions. The rail and the stages for the cameras are connected by a ball joint to ensure both cameras are properly leveled. Each stage has a 10-mm travel range in each XYZ direction with 10-μm accuracy; in the RZ direction, the accuracy is 0.01°. The scanner is calibrated to a 15 × 20-mm² FOV, and the resulting spatial resolution is 5 μm, which is determined by Eqs. (3) and (5), given that the camera pixel resolution is 3000 × 4000.

Fig. 9 a After using the parallel lighting source, the intensity of the bright spots is reduced. b After adjusting the polarizer, the number of bright spots on the surface is largely reduced. c An illustration of axial chromatic aberration: the lens is not able to focus different colors on the same plane [30]

Fig. 10 a An experimental calibration image before applying any noise reduction methods. b An experimental calibration image showing a significant reduction of color impurity after applying the noise reduction methods. c The scanning result of a flat surface based on high-noise calibration images like a. d The scanning result of the same surface using the improved calibration images like b
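The 5-μm figure can be sanity-checked by dividing the FOV span by the pixel count along each axis. Eqs. (3) and (5) themselves are not reproduced in this section, so this is only the back-of-envelope version of the relation they express:

```python
# Spatial resolution as FOV span divided by pixel count along each axis.
fov_mm = (15.0, 20.0)   # calibrated field of view, mm
pixels = (3000, 4000)   # camera pixel resolution

res_um = [1000.0 * f / p for f, p in zip(fov_mm, pixels)]
print(res_um)  # both axes give 5.0 um per pixel
```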

Qualitative and quantitative accuracy validation
In this section, a Ti-6Al-4V part manufactured by electron beam melting (EBM) was used as the standard object (see Figure 1a) for measurement. A white light interferometer (Zygo NewView 8200) was used to reconstruct the surface and is treated as the ground truth, since the interferometer is known to have superior (nm-level) spatial resolution and accuracy. The scanned result of the proposed scanner was then compared with this ground truth for accuracy testing. In addition, the accuracy of a professional-level commercial SLS scanner (HP 3D SLS Pro S3) was verified by the same method and used as a benchmark to justify the high accuracy of the proposed system. The performance of both SLS scanners has been compared both qualitatively and quantitatively; the results are shown in Secs. 6.1 and 6.2, respectively.

Point cloud visualization
The standard object (the Ti-6Al-4V part) has a 15 × 10-mm² area and a letter "R" on its surface to denote the use of a random hatch pattern for fabrication. Such a printing strategy leaves many solidified melt pools (see Figure 1b). The details of the scanning process of each SLS, as well as the interferometry, are discussed as follows.
The result measured by the Zygo NewView 8200 white light interferometer is considered the ground truth and shows the highest level of surface detail. The scanning process takes 2 h and yields a result with 1.63-μm spatial resolution. The scan was made with a 10× objective lens and 3× CSI mode. The Z-direction search distance is 300 μm, and the entire area is divided into 400 sub-regions with 4% overlap.
The point cloud data measured by our proposed SLS has a 3.5-μm spatial resolution, and the scanning time is 4 s, with 15 s to process. The processing includes the triangulation calculation, which transforms the raw image data into point clouds, and the mesh translation, which converts the discrete point clouds into 3D polygon meshes to build a closed surface. The raw mesh data contains 16,020,264 faces and 8,266,016 vertices. A smoothing filter was applied to the data set, and the mesh density was reduced to 10% for easier data analytics. Even though the number of meshes is reduced, the melt pool shape is still well preserved.
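The smoothing-plus-decimation step can be illustrated on a synthetic 1D profile (a stand-in only; the actual pipeline operates on 3D meshes): a moving-average filter suppresses measurement noise, and keeping every 10th sample reduces the density to 10% while the low-frequency, melt-pool-scale relief survives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic height profile: smooth melt-pool-like relief plus sensor noise.
x = np.linspace(0, 4 * np.pi, 400)
surface = 5.0 * np.sin(x)                     # "melt pool" relief, um
noisy = surface + rng.normal(0, 1.0, x.size)  # measurement noise

# Simple moving-average smoothing (an illustrative stand-in for the
# smoothing filter used in this work).
kernel = np.ones(9) / 9.0
smoothed = np.convolve(noisy, kernel, mode="same")

# Keep every 10th sample, i.e. reduce the density to 10%.
decimated = smoothed[::10]

# The low-frequency relief survives decimation: the peak-to-valley
# range stays close to the original 10 um.
print(np.ptp(decimated))
```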
The benchmark 3D scanner takes a similar scanning and mesh generation time compared with the proposed scanner, but it generates about 200 times fewer data points. It is calibrated to the highest manufacturer standard with a 60-mm calibration target and yields a 47-μm spatial resolution. This spatial resolution is too low to obtain an accurate mapping, as illustrated by the poor reconstruction of the letter "R".

Fig. 11 System calibration procedure flowchart

Fig. 12 a Proposed 3D scanner fixture design. b Detailed view of the modified tiny-area projector; the focal length of the original lens was increased by adding a spacer
The scanning results from the interferometer, the proposed 3D scanner, and the benchmark 3D scanner are visualized in Figure 13a-c, respectively. Based on the detailed views of all three results, the proposed 3D scanner successfully captures the melt pool geometry and the surface letter "R" with only minor loss of detail compared to the interferometer. Even though the benchmark 3D scanner is calibrated to its smallest FOV, it still cannot clearly show the melt pool geometry, and there is almost no trace of the letter "R" on the surface due to insufficient spatial resolution.

Mean curvature visualization
To further visualize the performance of the different scanning methods, a curvature analysis is applied to the three sets of point cloud data from Sec. 6.1.1. This method is based on mean curvature information given a user-defined kernel radius, as described in the recent work of Law et al. on powder bed point cloud segmentation [31]. The resulting curvature maps are shown in Figure 14. They show that the proposed 3D scanner can clearly capture the circular melt pool geometry, while the benchmark only shows random patterns. Additionally, the color distribution that indicates the magnitude of the curvature from the proposed scanner matches well with the interferometry result. This type of surface topography characterization can help establish a relationship between part quality and the processing parameters [32, 33].
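Law et al.'s estimator operates directly on point clouds with a kernel radius [31]; as an illustration only, the same quantity can be approximated on a height map by finite differences, using the standard mean-curvature formula for a surface z = h(x, y):

```python
import numpy as np

def mean_curvature(h, spacing=1.0):
    """Mean curvature of a height map z = h(x, y) via finite differences.
    A grid-based stand-in for the kernel-radius point-cloud estimator of
    Law et al. [31]; it assumes the surface is representable as a height map."""
    hy, hx = np.gradient(h, spacing)        # first derivatives (axis 0 = y)
    hxy, hxx = np.gradient(hx, spacing)     # second derivatives of hx
    hyy, _ = np.gradient(hy, spacing)
    num = (1 + hy**2) * hxx - 2 * hx * hy * hxy + (1 + hx**2) * hyy
    return num / (2 * (1 + hx**2 + hy**2) ** 1.5)

# Sanity check on a spherical cap of radius R: |H| should be 1/R.
R = 100.0
x = np.arange(-20, 21, 1.0)
X, Y = np.meshgrid(x, x)
cap = np.sqrt(R**2 - X**2 - Y**2)
H = mean_curvature(cap, spacing=1.0)
print(abs(H[20, 20]))  # center value, near 1/R = 0.01
```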
The microstructure distribution of the material also has a strong correlation with the processing conditions [34-36]. This creates the potential to correlate the surface topography with the microstructure distribution, and the characterization method introduced in this section can contribute to that process. 3D-scanning-based curvature analysis can also easily isolate the region of each melt pool regardless of how the local height deviates.
Quantitative analysis and comparison

Accuracy test by the multiscale model-to-model cloud comparison (M3C2) method

To determine the accuracy, the point cloud data scanned by the proposed scanner and the benchmark scanner are compared to that of the white light interferometer (used as the ground truth due to its ultra-high accuracy). The difference in the comparison represents the measurement error, which is referred to as accuracy. Prior to the comparison, the point cloud data of the proposed scanner and the benchmark scanner are registered into the 3D space of the point cloud data measured by the white light interferometer using the iterative closest point (ICP) technique [37]. In Figure 15, four 5 × 5-mm² square regions are picked for comparison.
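The ICP registration step can be sketched as follows. This is a minimal point-to-point ICP in the spirit of [37], using an SVD (Kabsch) solve for the rigid transform at each iteration; it uses SciPy's cKDTree for the nearest-neighbor search and assumes the two clouds are already roughly pre-aligned, as they are here since both scans cover the same region:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Minimal point-to-point ICP: repeatedly match each source point to its
    nearest target point, then solve the best rigid transform by SVD (Kabsch).
    Returns the accumulated rotation, translation, and the aligned source."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)            # nearest-neighbor correspondences
        matched = target[idx]
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)   # cross-covariance of the pairing
        U, _, Vt = np.linalg.svd(H)
        # Reflection guard: force det(R) = +1.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cm - R @ cs
        src = src @ R.T + t                 # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

Production registration (as done here) would add outlier rejection and convergence checks, but the alternating match-and-solve structure is the same.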
Fig. 16 The difference is computed by the M3C2 distance comparison method; green indicates small errors, while blue and red indicate large errors

In the literature, several methods are used to compare the difference between two point clouds. Most of them are sensitive to variation factors like roughness, outliers, and point density, as demonstrated in the work of Lague et al. [13]. The point cloud comparison approach used in this paper is the multiscale model-to-model cloud comparison (M3C2), a signed distance computation method with the ability to provide confidence intervals on point cloud measurement and registration error [13, 37]. In summary, M3C2 computes the distance between two point clouds along the local surface normal direction. For more details on M3C2, please refer to Lague et al. [13].
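The core M3C2 computation can be sketched as below. This is a deliberately simplified version of Lague et al.'s method [13]: the normal comes from a plain PCA over a spherical neighborhood, the projection neighborhood is spherical rather than cylindrical, the confidence-interval machinery is omitted, and the normal sign is left arbitrary:

```python
import numpy as np
from scipy.spatial import cKDTree

def m3c2_distance(core, cloud1, cloud2, normal_scale=1.0, proj_scale=1.0):
    """Simplified M3C2 (after Lague et al. [13]): signed cloud-to-cloud
    distance along the local surface normal at each core point."""
    t1, t2 = cKDTree(cloud1), cKDTree(cloud2)
    dists = np.full(len(core), np.nan)
    for i, p in enumerate(core):
        # Normal from PCA of cloud1 neighbors around the core point.
        nb = cloud1[t1.query_ball_point(p, normal_scale)]
        if len(nb) < 3:
            continue
        _, _, Vt = np.linalg.svd(nb - nb.mean(axis=0))
        n = Vt[-1]  # smallest-variance direction; sign is arbitrary here
        # Mean offset of each cloud along the normal, within proj_scale of p.
        i1 = t1.query_ball_point(p, proj_scale)
        i2 = t2.query_ball_point(p, proj_scale)
        if not i1 or not i2:
            continue
        d1 = (cloud1[i1] - p) @ n
        d2 = (cloud2[i2] - p) @ n
        dists[i] = d2.mean() - d1.mean()
    return dists
```

For two flat clouds separated by a constant offset, every core point recovers that offset (up to the sign ambiguity), which is the behavior the error maps in Figure 16 rely on.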
The first row of Figure 16 shows the difference between the point cloud data sets measured by the proposed scanner and the white light interferometer at each selected region marked in Figure 15, while the bottom row shows the difference between the point cloud data sets measured by the benchmark scanner and the white light interferometer; both are computed by the M3C2 method. As Figure 16 indicates, the results in the top row are primarily green, meaning the differences, which correspond to the errors, are small or close to zero. The results in the bottom row, in contrast, have large portions of red and blue areas, indicating higher surface measurement errors from the benchmark scanner. The smaller color variation in the proposed scanner's results also demonstrates the consistency of its accuracy on surfaces at different angles compared with the benchmark scanner. The larger surface measurement error of the benchmark system is caused by its poor spatial resolution as well as systematic noise that dominates the measurements. The errors of all the points in each region follow a Gaussian distribution, and their absolute mean and standard deviation are shown in Table 1. These two statistics can be directly used for accuracy assessment. The four-region averaged absolute mean error of the proposed scanner is 0.0549 μm, which outperforms the benchmark scanner's 10.9-μm error by roughly two orders of magnitude. As for the four-region averaged standard deviation, the proposed scanner has 4.97 μm, which is significantly smaller than the benchmark scanner's 23.5 μm.

Local surface correlation analysis
In addition to the global point cloud comparison, local surface roughness is another good indicator of scanner accuracy. For calculating surface roughness, Townsend et al. recommended areal parameters over profile parameters for surface characterization because surface metrology is three-dimensional in nature, so the analysis of two-dimensional profiles provides an incomplete description of the surface [33]. Therefore, the areal parameter Sa, defined as the arithmetic mean of the absolute value of the height within a sampling area [33], is used to estimate local surface roughness. For the local surface roughness analysis, ten 0.5 × 0.5-mm² sub-regions were picked randomly within each of the four previously selected 5 × 5-mm² square regions. The sub-regions and square regions are shown in Figure 17. The Sa is calculated at each sub-region from the proposed scanner's result and compared with the result calculated from the interferometry data; the same analysis is also performed for the benchmark scanner. The root mean squared error (RMSE), relative errors, and standard deviations are calculated and shown in Table 2. To further illustrate that the proposed 3D scanner has better performance, a correlation analysis was performed on the Sa in each square region, considering the Sa calculated from the interferometry data as the ground truth. The correlation plots are shown in Figure 18.
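The Sa definition above is a one-liner in practice. The sketch below takes the deviation from the mean height of the sampling area (a least-squares mean plane would be used on tilted data; that step is omitted here):

```python
import numpy as np

def sa(heights):
    """Areal roughness Sa: arithmetic mean of the absolute height deviation
    over the sampling area, here taken relative to the mean height."""
    z = np.asarray(heights, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Example: a surface alternating between -1 um and +1 um has Sa = 1 um.
print(sa([[1, -1], [-1, 1]]))  # 1.0
```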
In terms of surface area roughness analysis, the proposed scanner clearly outperforms the benchmark scanner in RMSE, relative error, standard deviation, and correlation. The proposed scanner has an RMSE of 3.00 μm with a 2.61-μm standard deviation, compared with a 14.24-μm RMSE and a 10.28-μm standard deviation for the benchmark scanner. In terms of relative error, the proposed scanner achieves 14.23% with a 9.82% standard deviation, compared with 91.25% and a 66.95% standard deviation for the benchmark. From Figure 18, it is clear that there is a significant improvement in correlation. In all of these plots, positive linear correlations are observed; however, the correlation of the proposed scanner with the ground truth is significantly higher (closer to 100%) than that of the benchmark scanner. The correlation lines and data of the proposed scanner almost match the 45° ground truth line in each region, while the lines and data from the benchmark scanner match less well. The average Sa correlation score between the measurements from the interferometer and the proposed scanner is 95.5%, compared to only 40.8% for the benchmark scanner (due to its low spatial resolution and accuracy). From a quality control perspective, surface roughness can be a good indicator of AM part quality. For example, studies [32, 38, 39] show that surface roughness has a strong correlation with fatigue performance because micro-notches associated with partially melted powders act as stress concentrators, resulting in earlier crack initiation. The analysis in this section shows that only the proposed scanner can yield a reliable surface roughness result, while the benchmark scanner cannot.

Fig. 17 The locations of the four sets of randomly selected sub-regions for Sa calculation

Fig. 18 The results of the Sa correlation with the interferometer measurement for both the proposed and benchmark scanners at four different square regions. The black 45° dashed line represents the ground truth; blue circles and red triangles represent the proposed scanner and the benchmark scanner, respectively
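The statistics behind Table 2 and Figure 18 can be reproduced from paired Sa values as follows; the function is illustrative and no paper data is embedded here:

```python
import numpy as np

def sa_error_stats(sa_ref, sa_meas):
    """RMSE (same units as Sa), mean relative error (%), and Pearson
    correlation (%) between reference (interferometer) and scanner Sa values,
    the quantities compared in Table 2 and Figure 18."""
    r = np.asarray(sa_ref, dtype=float)
    m = np.asarray(sa_meas, dtype=float)
    err = m - r
    rmse = np.sqrt(np.mean(err**2))
    rel = np.mean(np.abs(err) / r) * 100.0
    corr = np.corrcoef(r, m)[0, 1] * 100.0
    return rmse, rel, corr
```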

Conclusions and discussions
In this study, a high-spatial-resolution 3D SLS is successfully built and calibrated. This scanner meets the speed (second-level) and resolution (micron-level) requirements necessary for in-situ monitoring in metal AM applications. The scanner can measure a surface with 5-μm spatial resolution, allowing it to resolve key surface features, and it gives high-accuracy results with a 0.0549-μm average error. Additionally, a calibration method and a set of calibration targets are developed for the small FOV required in this work. Furthermore, the scanner takes as little as 4 s for measurement, which is sufficient for layer-by-layer characterization, and uses 15 s of computational time to provide over 16 million data points.
In the case study section, the efficacy of the proposed scanner is demonstrated by characterizing the surface roughness, curvature, and melt pool topography of an EBM-manufactured part. The performance of the proposed 3D scanner is validated by comparison with the benchmark scanner, a high-end commercial SLS. The validation concludes that the proposed 3D scanner has superior performance and can resolve more detailed melt pool features. In addition to the spatial resolution improvement (5 μm compared with 50 μm), the accuracy of the proposed scanner is 0.0549 μm, compared to 10.6 μm for the benchmark scanner. The surface roughness result of the proposed scanner has a 95.42% average correlation with the ground truth, compared to 46.9% for the benchmark scanner.
The efficiency of this scanner can meet the L-PBF online sensing requirement. During scanning, only the 4-s image capturing time delays the printing, which is insignificant compared to the printing time of each layer, typically over 20 s. The 15-s computational time can be spent while the next layer is being printed. Utilizing the printing time of the next layer for the calculation results in a one-layer delay in defect mitigation; however, this delay will not significantly influence the effectiveness of defect mitigation, since the thickness of one layer (typically only 50 to 100 μm) is negligible.
In the future, several techniques can be developed to further improve the scanner's efficiency. First, efficiency can be improved by performing partial scans of each layer. For example, scans may be performed only in critical regions, such as overhang structures and complex geometries, where the processing conditions are highly non-equilibrium. A partial scan needs a reduced FOV and thus reduces the computational time (currently 15 s) accordingly. Additionally, the scan could sample several deposited layers at a time. Due to the thin-layer-printing nature of L-PBF, it may not be necessary to scan every single layer unless an abnormality has been detected or a critical region is being printed. Properly defining the sampling strategy will further improve the scanner's efficiency.