Sparse Representation Based Inverse Halftoning with Boosted Dictionary

Abstract: Halftone images are widely used in printing and scanning equipment, so their preservation and processing are of great significance. However, because display devices differ in resolution, the processing and display of halftone images face challenges such as Moiré patterns and image blurring. Inverse halftoning is therefore required to remove the halftone screen. In this paper, we propose a sparse representation based inverse halftoning algorithm that learns a clean dictionary in two steps: deconvolution, followed by sparse optimization in the transform domain to remove the noise. The main contributions of this paper are threefold. First, we analyze the denoising effect of different training sets and of the dictionary redundancy. Second, we propose an improved sparse representation based denoising algorithm that learns the dictionary adaptively, iteratively removing the noise from the training set and upgrading the quality of the dictionary; on this basis, an inverse halftoning algorithm for error diffusion halftone images is proposed. Finally, we verify that the noise level in the error diffusion linear model is fixed and depends only on the diffusion operator. Experimental results show that the proposed algorithm achieves better PSNR and visual quality than state-of-the-art methods.


Introduction
Halftone technology converts a continuous-tone image into a binary version. Image halftoning can be seen as a map from [0,1]^N to {0,1}^N, where N is the number of image pixels. Halftoning algorithms are widely used in the scanning and printing of newspapers, books, magazines and faxes. Ordered dithering and error diffusion are the main halftoning methods [1][2]. Fig. 1(a) shows the halftone image generated from a continuous-tone image of size 256×256 by ordered dithering, while Fig. 1(b) and (c) are the halftone versions produced by error diffusion with different halftoning filters. In ordered dithering, the halftone image is generated by comparing each pixel of the original image with the corresponding entry of a filter matrix. Error diffusion compares each pixel value with a fixed threshold and diffuses the quantization error to the neighboring pixels according to a set of weights. Floyd and Jarvis proposed error diffusion algorithms with different filters [3][4], which are shown in the right part of Fig. 1(b) and (c) respectively. Kite et al. proposed a linear model of error diffusion and pointed out that an error diffusion halftone image can be regarded as a convolved continuous-tone image plus noise. The halftone version loses image detail, suffers from visual defects, and is not easy to use or process, so it must be inverse halftoned back to a continuous-tone image; for example, halftone images should be inverse halftoned during scanning. In this paper, we propose an effective inverse halftoning algorithm that reconstructs the continuous-tone image from its halftone version.
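The error diffusion procedure described above (threshold each pixel, then push the quantization error onto the not-yet-processed neighbors with fixed weights) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the [0, 1] intensity range are illustrative choices, and the weights are the Floyd-Steinberg filter of Fig. 1(b):

```python
import numpy as np

# Floyd-Steinberg error-diffusion weights (right, down-left, down, down-right),
# divided by 16 so that they sum to 1.
FLOYD_WEIGHTS = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]

def error_diffusion(image, weights=FLOYD_WEIGHTS, threshold=0.5):
    """Binarize a grayscale image in [0, 1] by thresholding each pixel and
    diffusing the quantization error to the unprocessed neighbors."""
    img = image.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = 1.0 if img[i, j] >= threshold else 0.0
            err = img[i, j] - out[i, j]   # quantization error of this pixel
            for di, dj, wt in weights:
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    img[ni, nj] += err * wt
    return out
```

Because the error is carried forward rather than discarded, the local average gray level of the binary output tracks that of the input, which is why the pattern appears as a plausible halftone.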
Gaussian low-pass filtering [5] is a simple and fast inverse halftoning method, but it cannot retain edge and texture information effectively. Wavelet-based methods reconstruct the inverse halftoned image by transforming and processing it in the wavelet domain [6][7]; the method of [7] divides inverse halftoning into two steps: first, a noisy continuous-tone image is obtained by deconvolving the halftone version, and then the image is denoised with a wavelet transform. The method of [8] is a nonlocal regularization approach with two regularization procedures: total variation (TV) regularization based on the BM3D denoising algorithm, followed by non-local regularization as post-processing. The method of [9] de-screens the image with a trained de-screening model containing two nonlinear operators: resolution-synthesis-based denoising (RSD) and SUSAN filtering. The method of [10] reconstructs the continuous-tone image with two trained dictionaries: one trained on continuous-tone images and the other on their halftone versions, sharing the same sparse coefficients. The coefficients obtained from the halftone image are multiplied by the dictionary trained on continuous-tone images, which directly outputs the inverse halftoned image. With the success of deep learning in image processing, neural-network based methods have also been proposed for inverse halftoning [11][12]; they make full use of the prior information of natural images. These methods directly train a map from halftone images to continuous-tone images and then apply the map to a given halftone image. Machine learning and deep learning approaches can inverse halftone an image without knowing the halftoning filter.
Although these methods perform well, their training procedures are usually time consuming.
To address these problems, we first look for a more effective algorithm to remove the halftoning noise. Kite proposed a linear model of error diffusion and pointed out that error diffusion halftone images can be seen as the continuous-tone image plus additive noise [1][13]. The continuous-tone image is then obtained by denoising. The three contributions of this paper are as follows.
First, we propose an adaptive sparse representation denoising algorithm based on dictionary training. By removing the noise from the training image patches, a cleaner dictionary is produced; after several rounds of training, both a cleaner dictionary and a cleaner image are obtained.
Second, we propose an inverse halftoning algorithm based on sparse representation. As in [7], we deconvolve the halftone image to obtain a noisy continuous-tone image, and then denoise it with the sparse representation algorithm, updating the dictionaries iteratively.
Third, the gain in the linear error diffusion model proposed by Kite is a constant that depends only on the operator of the linear model. We point out and verify experimentally that the noise level of the halftone image is also a constant determined only by the operator.
A possible explanation for the better performance of our method is that clean image patches yield a clean dictionary with better sparse representation ability than wavelet transforms [14][15], and the output continuous-tone image is a combination of the dictionary atoms; obviously, a cleaner dictionary results in a cleaner image. Here we use the patches of the denoised image to train the dictionary adaptively, where the patch size is 8×8 (the atom dimension is 64). We improve the denoising performance by removing the noise from the dictionary in each iteration.

Error diffusion linear model and deconvolution
In this paper we mainly discuss the inverse halftoning of error diffusion halftone images. We first introduce the error diffusion linear model and the deconvolution model.

Error diffusion model
The error diffusion halftone method quantizes a grayscale image into a halftone image with only two gray levels.
The error diffusion model proposed by Floyd [3] is shown in Fig. 2; the filters of Floyd [3] and Jarvis [4] are typical error filters h(i,j). In Kite's linear model [13], the quantizer is replaced by a gain K plus an additive noise n(i,j). From equation (3) we obtain, in the frequency domain, equation (4):

Y(u,v) = H_s(u,v)·X(u,v) + H_n(u,v)·N(u,v),   (4)

where the signal and noise transfer functions are

H_s(u,v) = K / (1 + (K−1)·H(u,v)),   H_n(u,v) = (1 − H(u,v)) / (1 + (K−1)·H(u,v)),

and X(u,v), N(u,v) and H(u,v) are the frequency responses of the input image x(i,j), the noise n(i,j) and the error filter h(i,j) respectively. Inverse Fourier transforming Eq. (4) gives the spatial-domain model of Eq. (5). For a given error diffusion method, i.e. a given filter h(i,j), the gain K is constant across images [13]: K = 2.03 for the Floyd method and K = 4.45 for the Jarvis method. We will verify that the noise n(i,j) and the noise level are likewise constant for a given error diffusion method.
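As an illustration of how the two transfer functions of Kite's linearized model behave, the following sketch builds the Floyd-Steinberg error filter and evaluates H_s = K/(1+(K−1)H) and H_n = (1−H)/(1+(K−1)H) on a frequency grid; the function names are illustrative choices, not the paper's code. Because the filter weights sum to one, the signal response at DC is exactly 1 and the noise response is 0, i.e. the model passes the mean gray level unchanged while the noise is purely high-frequency:

```python
import numpy as np

def floyd_filter(size=64):
    """Floyd-Steinberg error filter h embedded in a size x size grid,
    with the current pixel at the origin (wrap-around indexing)."""
    h = np.zeros((size, size))
    h[0, 1] = 7 / 16          # right neighbor
    h[1, size - 1] = 3 / 16   # down-left (wraps to column -1)
    h[1, 0] = 5 / 16          # down
    h[1, 1] = 1 / 16          # down-right
    return h

def transfer_functions(h, K):
    """Signal and noise transfer functions of Kite's linearized model:
    H_s = K / (1 + (K-1) H),  H_n = (1 - H) / (1 + (K-1) H)."""
    H = np.fft.fft2(h)                 # frequency response of the error filter
    Hs = K / (1 + (K - 1) * H)
    Hn = (1 - H) / (1 + (K - 1) * H)
    return Hs, Hn
```

For example, `transfer_functions(floyd_filter(), 2.03)` gives H_s(0,0) = 1 and H_n(0,0) = 0, since H(0,0) equals the sum of the filter weights, which is 1.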
Inverse halftoning can be regarded as the deconvolution of the halftone image. The deconvolution algorithm includes the following two steps.
Step 1: Deconvolution. According to Eqs. (1) and (3), a noisy estimate of the input image can be obtained.
Step 2: Denoising in the transform domain. The energy of discontinuous image edges is spread over many Fourier coefficients, so deconvolution based on the Fourier transform produces ringing and blurring artifacts. This is a consequence of the uneconomical Fourier representation, whose small coefficients are easily confused with noise. The wavelet transform provides a compact representation of sharp image edges, and Neelamani et al. [7] therefore used it for denoising. Sparse representation over a compact dictionary is even more economical than the wavelet representation [14][15]. We inverse halftone the image using a sparse representation method whose dictionary is learned with the K-SVD algorithm. The inverse halftoning model is shown in Fig. 3.
Fig. 3 The inverse halftoning model
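Step 1 amounts to dividing the halftone spectrum by the signal transfer function of the linear model. A minimal sketch under that assumption is shown below (the gain K and error filter h are the model's parameters; the small `eps` guard and the clipping to [0, 1] are implementation choices, not part of the model):

```python
import numpy as np

def deconvolve_halftone(y, h, K, eps=1e-8):
    """Invert the signal path of the linear error diffusion model:
    given y whose spectrum is approximately Hs * fft(x) plus noise,
    divide by Hs to recover a noisy estimate of the continuous-tone x."""
    H = np.fft.fft2(h, s=y.shape)      # error-filter frequency response
    Hs = K / (1 + (K - 1) * H)         # signal transfer function
    x_noisy = np.fft.ifft2(np.fft.fft2(y) / (Hs + eps)).real
    return np.clip(x_noisy, 0.0, 1.0)
```

The output is the noisy continuous-tone estimate that Step 2 then cleans up in the sparse domain.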

Image denoising based on sparse representation
Elad et al. [15] proposed an image denoising approach based on sparse and redundant representations. In this section, we first review this method, which forms the basis of ours; the improved denoising method, a part of our inverse halftoning approach, is proposed in the next section.

Sparse and redundant representation of images
In recent years, the theory and application of sparse and redundant representation have developed continuously [16][17]. Sparse representation has achieved good results in image restoration [18], denoising [15], super-resolution [19], face recognition [20] and other tasks. If, after transformation under a set of basis functions, a signal can be expressed with a few non-zero coefficients while most coefficients are zero or close to zero, the signal is sparse or sparsely represented. Common bases include the Fourier transform (FFT), the discrete cosine transform (DCT) and various wavelet transforms. The method of optimal directions (MOD) [21], which trains a dictionary from signal vectors, has drawn renewed attention. The K-SVD algorithm [14] adaptively trains an overcomplete dictionary with good sparse representation capability. Methods such as the sparse coding of H. Lee [22] and the online dictionary learning of Julien [23] improve the computation speed while maintaining good sparsity.

Image block denoising based on sparse and redundant representation
The literature [15] formulates the sparse coding of an image patch x over a dictionary D as

min over α of ‖α‖₀  subject to  ‖Dα − x‖₂ ≤ ε.

This optimization problem is NP-hard. Matching Pursuit and Basis Pursuit algorithms [21] can obtain an approximate solution efficiently. Since the Orthogonal Matching Pursuit (OMP) algorithm [24] is simple and effective, this paper uses OMP to solve it.
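A minimal, self-contained version of OMP for a single signal might look like the following; it is a sketch of the standard greedy algorithm rather than the exact implementation of [24]. At each step the atom most correlated with the residual is selected, then all selected coefficients are re-fitted by least squares:

```python
import numpy as np

def omp(D, x, sparsity, tol=1e-6):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom most
    correlated with the current residual, then re-fit all selected
    coefficients by least squares. D has unit-norm columns (atoms)."""
    residual = x.copy()
    support, coef = [], None
    for _ in range(sparsity):
        if np.linalg.norm(residual) <= tol:
            break
        k = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef          # orthogonal residual
    alpha = np.zeros(D.shape[1])
    if coef is not None:
        alpha[support] = coef
    return alpha
```

The re-fitting step is what distinguishes OMP from plain Matching Pursuit: the residual stays orthogonal to every atom already selected, so no atom is picked twice.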

Image denoising based on sparse and redundant representation
Based on the model in the previous section, an image of size √N × √N (N ≫ n, with n the patch dimension) can be denoised by optimizing the following functional with three penalty terms [15]:

{X̂, α̂_ij} = argmin over X, α_ij of  λ‖X − Y‖²₂ + Σ_ij μ_ij‖α_ij‖₀ + Σ_ij ‖Dα_ij − R_ij X‖²₂,   (10)

where the first term requires the denoised image X to be close to the observed image Y; the second term limits the number of non-zero coefficients, with μ_ij a pre-defined weight for each image patch; and the third term requires each reconstructed patch to be represented by the dictionary D and the coefficients α_ij, where the matrix R_ij extracts the patch at position (i,j) of the image. The denoised image is obtained by optimizing Eq. (10).
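Once the patch coefficients are fixed, the first and third terms of the objective above are quadratic in the image, and the minimizer is a pixel-wise weighted average of the noisy image and all overlapping denoised patches covering each pixel. A sketch of that aggregation step follows (the function name and arguments are illustrative; `lam` plays the role of the fidelity weight λ):

```python
import numpy as np

def aggregate_patches(y, patches, patch_size, lam):
    """Closed-form minimizer of the quadratic terms of the denoising
    objective: x = (lam*y + sum_ij R_ij^T D a_ij) / (lam + sum_ij R_ij^T R_ij),
    i.e. a pixel-wise weighted average of the noisy image y and all
    overlapping denoised patches (given here as flattened vectors)."""
    num = lam * y.copy()
    den = lam * np.ones_like(y)
    h, w = y.shape
    p = patch_size
    idx = 0
    for i in range(h - p + 1):          # patches in raster order, stride 1
        for j in range(w - p + 1):
            num[i:i+p, j:j+p] += patches[idx].reshape(p, p)
            den[i:i+p, j:j+p] += 1.0
            idx += 1
    return num / den
```

Each pixel is covered by up to p² patches, so the averaging also suppresses artifacts at patch boundaries.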
The dictionary is important for the efficiency of the algorithm. The literature [15] proposed a denoising algorithm based on patch-wise sparse representation, described as follows.

Image denoising and inverse halftoning
This section discusses the three main contributions of this paper. We first introduce the proposed improved denoising algorithm, then present the proposed inverse halftoning method, and finally verify that the noise level in the error diffusion model depends only on the filter operator.

Image denoising based on adaptive sparse and redundant representation
Since the choice of dictionary is very important to the performance of the algorithm, we improve the denoising performance by improving the "quality" of the dictionary. Based on the algorithm of the previous section, an improved sparse and redundant denoising algorithm is proposed. Adaptive dictionary learning uses patches of the noisy image as the training set [15]; the learned dictionary therefore contains noise, because the atoms of the initial dictionary come directly from the training set.
We remove the noise from the training samples in a feedback loop, constantly improving the quality of the dictionary so as to obtain a good denoising effect. The quality of the dictionary gradually approaches that of the ground truth trained dictionary (GTD). For the Lena image with noise level 50, the dictionaries trained on several types of training set are shown in Fig. 7.
The sizes (or the redundancy) of the dictionary and of the training set directly affect the denoising performance.
For a noise-free training set, the denoising performance tends to improve as the dictionary redundancy increases; see the two performance curves for the GTD and global dictionary (GD) methods in Fig. 6. For a noisy training set, the larger the dictionary, the more noise it contains: once the redundancy exceeds a certain balance point, the denoising performance starts to decrease, as the curve for the adaptive dictionary (AD) method in Fig. 6 shows.
In our improved denoising algorithm, the noise in the training set is removed iteratively so that the training set approaches the ground truth (GT) images. At the same time, we increase the dictionary redundancy to improve the noise removal. To train the dictionary from the noisy image, Aharon et al. proposed the K-SVD algorithm [14], which sparsely encodes the training samples and updates the atoms of the dictionary. The trained dictionary is generated by the following formula:

min over D, A of ‖X − DA‖²_F  subject to  ‖α_i‖₀ ≤ T₀ for every training patch,

where X collects the training patches as columns, A collects the coefficient vectors α_i, and T₀ is the sparsity constraint.
The training process is divided into two stages: a sparse coding stage and a dictionary atom update stage. The proposed algorithm uses the noisy image as the training set to train the dictionary and then denoises the image with that dictionary; the denoised image serves as the training image for the next dictionary, and so on.
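The train-then-denoise feedback described above can be sketched as a short loop. Here `train_dictionary` and `denoise_with` stand for the K-SVD training stage and the sparse-coding denoiser; both names are illustrative, as is the choice to denoise the original noisy observation with each improved dictionary:

```python
import numpy as np

def boosted_dictionary_denoise(noisy, train_dictionary, denoise_with, n_iters=3):
    """Iterative boosting loop: train a dictionary on the current
    (progressively cleaner) image, denoise with it, and feed the result
    back as the next training image. The two stages are caller-supplied,
    e.g. K-SVD training and OMP-based patch denoising."""
    image = noisy
    for _ in range(n_iters):
        D = train_dictionary(image)       # stage 1: learn from the current image
        image = denoise_with(noisy, D)    # stage 2: denoise with the cleaner dictionary
    return image, D
```

Because each round trains on a cleaner image, the atoms contain progressively less noise, which is the "boosting" effect the section describes.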
The image and the dictionary are indexed by the round of training and denoising. The flow chart of the algorithm is shown in Fig. 4. Experimental results show that the quality of the dictionary is boosted and gradually stabilizes over the first iterations; the image denoising performance is likewise boosted with the updated dictionary. The algorithm is summarized below.

Image inverse halftoning based on adaptive dictionary training
The algorithm proposed in this paper is divided into two stages. The diagram of the algorithm is shown in Fig. 3. Here the noisy image is generated by the deconvolution of the halftone image.
The residual term Π(Θ(n)) is additive white Gaussian noise, because n is zero-mean white Gaussian noise and Π and Θ are LTI operators. Hence the proposed method can be used to inverse halftone the image; the algorithm is given below.
Task: given an image Y with additive white Gaussian noise of known standard deviation.

Parameters:
the image patch size, the number of iterations, the iteration index, the Lagrange multiplier, the noise gain, and the initial redundant dictionary.
1. Initialization: a fixed number of samples is randomly selected from the normalized sample set to form the initial dictionary.
2. for each iteration do
2.1 Divide the current image into patches as the training set, form the dictionary by randomly selecting and normalizing samples, and train it using K-SVD.
A. Sparse coding: compute the sparse coefficients of each image patch in the training set by minimizing ‖α‖₀ subject to the reconstruction error bound.
B. Update the dictionary: with the sparse coefficients fixed, update the atoms of the dictionary one at a time, together with the corresponding coefficients.
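The single-atom update of the dictionary update stage can be sketched as in the K-SVD algorithm [14]: for atom k, form the residual of the signals that use it with that atom's contribution removed, and replace the atom and its coefficient row by the best rank-1 approximation of that residual (via SVD). The function signature below is illustrative:

```python
import numpy as np

def ksvd_atom_update(D, A, X, k):
    """K-SVD update of atom k: restrict to the signals that use atom k,
    form the residual without its contribution, and replace the atom and
    its coefficient row with the rank-1 SVD of that residual."""
    using = np.nonzero(A[k, :])[0]          # signals whose code uses atom k
    if len(using) == 0:
        return D, A                          # unused atom: leave unchanged
    # residual with atom k's contribution added back in
    E = X[:, using] - D @ A[:, using] + np.outer(D[:, k], A[k, using])
    U, S, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                        # new unit-norm atom
    A[k, using] = S[0] * Vt[0, :]            # new coefficients for that atom
    return D, A
```

Since the rank-1 SVD minimizes the Frobenius error of the restricted residual, the overall representation error never increases after an atom update, which is why cycling over the atoms converges.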

Experimental results and analysis
In this section we first demonstrate that the proposed algorithm removes noise more effectively. For a fair comparison with the adaptive dictionary learning denoising algorithm of [15], zero-mean additive white Gaussian noise is used, as in [15]. Then the proposed algorithm is applied to image inverse halftoning and compared with other benchmarks.
Task: reconstruct the continuous-tone image from the given error diffusion halftone image.
Parameters: the error diffusion filter h(i,j) and the gain K.
Step 1: deconvolution. The noisy estimate of the inverse halftone image is obtained from the halftone image by the Fourier transform and its inverse, following Eq. (6); the residual term Π(Θ(n)) is additive white Gaussian noise, and the ground truth image is the continuous-tone original.
Step 2: denoising based on sparse representation. Apply the proposed sparse representation denoising algorithm, training the dictionaries adaptively and iteratively.
Output: the inverse halftoned image.

Experimental results of the proposed method
The literature [15] uses natural images without noise and the noisy image itself to train the dictionaries, respectively; the former is called the globally trained dictionary (GTD) and the latter the adaptively trained dictionary (AD). As shown in Fig. 6, we select three different noise-free training images to train the global dictionary. We name the dictionary trained on the ground truth image the ground truth dictionary (GT), while the dictionary trained by our proposed method is called the improved dictionary (ID). The dictionaries are shown in Fig. 7.
For convenience, the redundancy of all dictionaries is set to 9 and the noise level to 50. From Fig. 7 we can see that the quality of ID is clearly better than that of GTD and AD. Two possible reasons are as follows. On the one hand, the algorithm iteratively removes the noise from the image training set, while the dictionary is a combination of the image patches selected in the initial training; hence the dictionary quality is boosted as the noise is removed iteratively, and the denoising performance gradually improves. On the other hand, the redundancy of the dictionary influences the denoising performance, so we increase the redundancy in each iteration. Fig. 8 shows the denoising performance for several images with noise level 50: the proposed algorithm improves the denoising performance over the first several iterations and then tends to be stable for the Lena, Peppers and Boat images. For the House image, the performance of ID is better than the other three methods, although it does not improve as the iteration number increases. Compared with GTD, the proposed algorithm improves the PSNR of the denoised image.
Fig. 8 The performance of denoising

Experimental comparisons with benchmarks
Using the representative Lena and Peppers images of size 512×512, we compare the proposed algorithm with the benchmarks. We first halftone the images with the Floyd [3] and Jarvis [4] methods as test halftone images, setting the gain to 2.03 and 4.45 respectively. The deconvolution step is followed by the sparse representation based denoising, and the PSNR results of inverse halftoning are shown in Fig. 9, where the noise level is set to 40 and 18 respectively. The subjective results are shown in Fig. 10, where the proposed method performs better than the benchmark. The objective results are listed in Tab. 1, where the larger PSNR values are printed in bold. Fig. 9 The performance of proposed method.
(a) WIHD [7] (b) WIHD [7] (c) proposed method (d) proposed method For color images, we also compare our method with the sparse representation method with double-dictionary learning [10], where the image size is 512×512. Son et al. [10] used Floyd error diffusion halftone images. For a fair comparison with [10], the redundancy of our dictionary is set to 8 and the noise level to 40. As shown in Tab. 2 and Fig. 11, the proposed method performs better than the benchmark of Son et al.

Conclusion
In this paper, we first proposed an effective image denoising method that boosts a learned dictionary iteratively. We then proposed an inverse halftoning approach for error diffusion halftone images that combines deconvolution with this denoising algorithm. Finally, we discussed the noise level of error diffusion halftone images. Experimental results show that our algorithm is superior to the benchmarks.

Compliance with Ethical Standards statements
Ethical approval: This article does not contain any studies with human participants/animals performed by any of the authors.

Conflict of interests:
The authors declare that they have no conflict of interest regarding the publication of this article.

Jun Yang et al.