Human perception is extremely sensitive to variations in the details and edges of an image. Blur is one of the most frequent degradations in digital images; it reduces the visual quality of an image and makes important details difficult to perceive.1
Image sharpening is an enhancement technique that increases edge acutance and significantly improves the overall sharpness of processed images.2 Sharpening denotes any enhancement technique that improves the visual quality of edges and details in an image. Radiographic images are widely processed with some degree of sharpening.3
Digital images in medical applications help doctors provide faster and more efficient diagnoses to patients, and the quality of medical images directly influences diagnostic outcomes. Nevertheless, during acquisition, medical images sometimes suffer degradations such as poor detail or low contrast.4 To obtain better visual quality and/or extract more medical information from images, sharpening is applied to highlight the intensity transitions in an image.2
Zohair recently developed a new filter that improves the sharpness of mammogram, CT, and MR medical images with good performance.1 Milán et al. evaluated sharpness and quality in coronary CT angiography (CCTA) datasets and claimed that image sharpness is an important index in CCTA for the assessment of coronary artery disease (CAD).5 In dental radiology, including the maxillofacial region, image sharpening is of enormous importance in diagnosis.6 Applied to different types of dental radiographs, image sharpening can significantly improve diagnosis and the appropriateness of treatment. However, image sharpening also increases overshoot artifacts that can lead to radiographic misdiagnosis and, as a consequence, improper treatment.3, 6
Many techniques have been proposed for image de-blurring or sharpening to enhance image quality. Algorithms such as unsharp masking (UM)7 and Laplace filtering, whose details can be found in the literature,8 operate in the spatial domain. The same effect can be achieved by adding the high-frequency components to the original input signal.7 In the frequency domain, techniques such as Fourier series and wavelets have gained popularity owing to their better results.9, 10 Edge-detail improvement is a process of extracting the high-frequency information from an image and then adding these details back to the blurred image.11
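The high-frequency extract-and-add-back idea behind UM can be expressed in a few lines. The sketch below is a minimal illustration, assuming a Gaussian low-pass filter; the `sigma` and `amount` parameters are illustrative choices, not values taken from the cited works.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, sigma: float = 1.0, amount: float = 1.5) -> np.ndarray:
    """Sharpen by adding scaled high-frequency detail back to the image."""
    img = image.astype(np.float64)
    blurred = gaussian_filter(img, sigma=sigma)   # low-pass copy
    detail = img - blurred                        # high-frequency components
    return img + amount * detail                  # boosted edges; may overshoot near strong transitions
```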
UM applied to different types of radiological images can significantly improve diagnosis and the appropriateness of treatment.6 This manipulation may, however, degrade image quality by introducing artifacts such as overshoot that can lead to misdiagnosis. Clark et al. suggested that all digital dental images should be processed with some degree of UM.3 Although UM sometimes improves radiographic quality, the overshoot artifact it produces can adversely influence the detection of dental caries; the fit between a finish line and a restoration margin, for example, is sometimes misjudged. Increased awareness of diagnostic errors from overshoot artifacts will improve diagnostic acuity and result in better patient care and oral health outcomes.3 Nevertheless, image sharpening presents various challenges, including noise amplification and over-sharpening effects.2, 8, 12
To decrease overshoot and image noise, adaptive UM and nonlinear modules have been proposed.2, 7, 9, 10 These algorithms sharpen images effectively without concurrently amplifying noise; hence, the development of such adaptive nonlinear modules for image enhancement may be a better choice.2 Lin and Chen2 recently suggested a novel adaptive image sharpening scheme. In their work, a norm map is produced by measuring the first-order gradient of an image. Sorting and accumulating these gradient norms yields a cumulative distribution function (CDF). Taking the second derivative of this CDF and finding the point where it equals zero locates the inflection point, which is used as a threshold T to distinguish edge from non-edge pixels. A simple sharpening filter, named the nimble filter, is then applied at locations where the gradient norm exceeds T, so the image is sharpened adaptively.1, 2
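The threshold-derivation step can be sketched as follows. This is a simplified rendering of the idea, not Lin and Chen's exact procedure: the histogram resolution and the inflection-point search (a sign change in the second difference) are assumptions made for illustration.

```python
import numpy as np

def gradient_norm_threshold(image: np.ndarray, bins: int = 256) -> float:
    """Estimate an edge threshold T from the CDF of gradient norms."""
    gy, gx = np.gradient(image.astype(np.float64))
    norms = np.hypot(gx, gy).ravel()

    # Empirical CDF of the gradient norms over a fixed number of bins.
    hist, edges = np.histogram(norms, bins=bins)
    cdf = np.cumsum(hist) / norms.size

    # The second difference approximates the second derivative of the CDF;
    # its first sign change marks the inflection point used as threshold T.
    d2 = np.diff(cdf, n=2)
    crossings = np.where(np.diff(np.sign(d2)) != 0)[0]
    idx = crossings[0] + 1 if crossings.size else int(np.argmax(hist))
    return float(edges[idx])
```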
A residual image (RESI) extracts the uncorrelated part of an image for de-noising. The grey level of each pixel is assumed to be a combination of a signal part and a noise part, so the subtraction of two correlated pixels should yield only the uncorrelated part, which is considered noise. A residual image is therefore defined as the difference between an original image and its smoothed version: the subtraction effectively removes the signal part and leaves the noise part.13
Many RESI-based state-of-the-art de-noising algorithms have been proposed in past decades. Baloch et al. developed a RESI correlation-regularization de-noising scheme that minimizes the correlation between neighboring RESI patches.14 Roychowdhury et al. used RESI to estimate the noise in chest CT data of varying image quality.15 A survey of six methods for estimating white Gaussian noise in images found that estimation based on the standard deviation of the RESI was the most reliable.16
Ideally, the RESI should possess the statistical properties of the contaminating noise. Nevertheless, the residual patch very likely contains remnants of the clean image patch.17 As a result, the RESI usually contains structures from the clean image and therefore does not consist of contaminating noise alone. Brunet et al. applied a statistical test to the RESI and confirmed that it contained structures rather than pure noise.17 They first de-noised the RESI with an adaptive Wiener filter and then added parts of the cleaned RESI back into the de-noised original image. Their iterative scheme produced gains in both the PSNR and SSIM image quality indices.
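A rough sketch of this add-back idea is given below. SciPy's Wiener filter stands in for the adaptive Wiener filter of ref. 17, and the blending weight `alpha` is an illustrative assumption rather than a value from that work.

```python
import numpy as np
from scipy.signal import wiener

def add_back(original: np.ndarray, denoised: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Recover structure leaked into the residual and return it to the result."""
    resi = original.astype(np.float64) - denoised   # residual: noise plus leaked structure
    cleaned_resi = wiener(resi, mysize=5)           # suppress the noise part of the residual
    return denoised + alpha * cleaned_resi          # restore part of the leaked structure
```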
A RESI is thus defined as RESI = ORGI − SMI, the difference between an original image (ORGI) and its smoothed version (SMI).
A nonlinear module, the Moran statistics, was shown to correspond well with variations in the spatial properties of images.18, 19 Several works applied these statistics to medical images to distinguish noise from edges and then processed the images adaptively. They showed that a RESI obtained with a 5×5 window-average filter carries more structural information than one obtained with a 3×3 window, indicating that a larger filter size leaves more structure in the RESI (equivalently, produces a more blurred SMI). These works also showed that the RESI of a smaller window carries more noise information than that of a larger window, which is consistent with general signal-processing knowledge.20, 21
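This window-size effect is easy to reproduce numerically. The short experiment below is illustrative only: the synthetic test image, its single edge, and the noise level are assumptions, with `uniform_filter` implementing the window-average smoothing.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
orgi = np.full((128, 128), 100.0)
orgi[:, 64:] = 160.0                          # a single vertical edge
orgi += rng.normal(0.0, 5.0, orgi.shape)      # additive Gaussian noise

resi3 = orgi - uniform_filter(orgi, size=3)   # RESI from a 3x3 average
resi5 = orgi - uniform_filter(orgi, size=5)   # RESI from a 5x5 average

# Structure leaks into the residual near the edge; compare residual magnitude
# at the edge column against a smooth region for each window size.
print(np.abs(resi3[:, 63]).mean(), np.abs(resi3[:, 10]).mean())
print(np.abs(resi5[:, 63]).mean(), np.abs(resi5[:, 10]).mean())
```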
The scale of the detection filter is also an important issue in edge detection. Small-scale filters are sensitive to edge signals but also prone to noise, whereas large-scale filters are robust to noise but may filter out fine details.12 A sensitive filter that manipulates only five pixels per position has been applied to images1 and proved to be a good sharpening filter in a work on adaptive image processing.2
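The exact coefficients of the nimble filter are not reproduced here; as a plausible stand-in matching the five-pixel footprint described above (the centre pixel plus its four nearest neighbours), the sketch below uses the standard five-point sharpening kernel.

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed five-point sharpening kernel; not necessarily the published nimble filter.
NIMBLE_KERNEL = np.array([[ 0.0, -1.0,  0.0],
                          [-1.0,  5.0, -1.0],
                          [ 0.0, -1.0,  0.0]])

def nimble_sharpen(image: np.ndarray) -> np.ndarray:
    """Sharpen using only a pixel and its four nearest neighbours."""
    return convolve(image.astype(np.float64), NIMBLE_KERNEL, mode='nearest')
```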
In first-derivative-based edge detection, the gradient image should be thresholded to eliminate false edges produced by noise. With a single threshold Th, false edges may appear if Th is too small and true edges may be missed if Th is too large.14 The proper choice of threshold value is therefore important and not easy.
We propose an adaptive image sharpening scheme inspired by the work of Zohair,1 Lin and Chen,2 and others7–10 on image sharpening. Since the RESI contains not only noise but also edges, and a larger filter size leaves more edge structure in the RESI, there is a relationship between two RESIs computed with different filter sizes. Simple linear regression on these two RESIs produces a regression equation and a deviation (±σ). We assume that a point in the image contains only noise if the difference between the two residual images at that position lies within ±σ of the regression prediction. In addition, the effects of different threshold values are tested.
In this work, images are first corrupted with Gaussian noise and then smoothed using two filter sizes. The RESIs are obtained by subtracting the smoothed images from the original image: RESIn is obtained by averaging the four nearest neighboring pixels, and RESI3 by an average filter with a 3×3 window, so RESI3 can be regarded as an expansion of the four-nearest-neighbor comparison. The variation between RESIn and RESI3 at each position should be small if the position lies in a smooth region or contains only noise. To model the relationship between RESIn and RESI3, simple linear regression is applied so that RESIn can be used to predict RESI3. A location is classified as an edge if RESI3 deviates from the prediction by more than a set value (e.g., one standard deviation, ±σ); such a pixel is processed with the nimble sharpening filter, whereas positions classified as noise (deviation within ±σ) are skipped, as sketched below. Measurements show that the adaptive sharpening achieves better similarity than the global scheme and previous works when compared using Pratt's Figure of Merit (P).22, 23, 24
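A minimal end-to-end sketch of this scheme follows. The filter definitions and the one-σ rule follow the text; the regression via `np.polyfit` and the five-point kernel standing in for the nimble filter are assumptions, and no concrete parameter values from the study are implied.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def adaptive_sharpen(noisy: np.ndarray) -> np.ndarray:
    """Sharpen only the pixels whose residual behaviour marks them as edges."""
    img = noisy.astype(np.float64)

    # RESIn: residual against the average of the four nearest neighbours.
    four_nn = np.array([[0.0, 0.25, 0.0],
                        [0.25, 0.0, 0.25],
                        [0.0, 0.25, 0.0]])
    resi_n = img - convolve(img, four_nn, mode='nearest')

    # RESI3: residual against a 3x3 window average.
    resi_3 = img - uniform_filter(img, size=3, mode='nearest')

    # Simple linear regression predicting RESI3 from RESIn, with the
    # standard deviation sigma of the regression residuals.
    a, b = np.polyfit(resi_n.ravel(), resi_3.ravel(), deg=1)
    predicted = a * resi_n + b
    sigma = float(np.std(resi_3 - predicted))

    # Pixels deviating by more than one sigma are treated as edges and
    # sharpened; pixels within one sigma are treated as noise and skipped.
    edge_mask = np.abs(resi_3 - predicted) > sigma
    nimble = np.array([[0.0, -1.0, 0.0],
                       [-1.0, 5.0, -1.0],
                       [0.0, -1.0, 0.0]])   # assumed five-point sharpening kernel
    sharpened = convolve(img, nimble, mode='nearest')
    return np.where(edge_mask, sharpened, img)
```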
The proposed method will be further developed and applied to improve image acuity, noise reduction, and overall image quality. In the future, it can be used to sharpen images with good noise suppression and to de-noise images while preserving edges.