An Efficient Compression of Gray Scale Images Using Wavelet Transform

In the field of digital image distribution and reproduction, the standardization of digital photography and recent advances in technology have led to explosive growth. This impetus has increased the need for scientifically well-designed image compression methods that help to handle, store and transmit huge volumes of image data. Today, digital image compression plays an important role in the field of multimedia. The objective of the current study is a newly designed algorithm that combines the DWT with thresholding and quadtree decomposition. In terms of quality, the proposed design performs on a par with the EZW image compression technique at the same bit rate, yet it does not rely on any other conventional standard image compression method.


Introduction
The recent trend toward the widespread use of computers, Internet teleconferencing and satellite communication has inspired modern researchers to focus their attention on digital image compression while maintaining a standard of quality in multimedia applications. This has created a demand for developing and promoting resourceful and sophisticated techniques that achieve an optimal level of compression and fulfill the requirements of users. In general, compressing data [1] saves storage capacity, speeds up file transfer, and reduces the cost of storage hardware and network bandwidth.
Data redundancy among image pixels is one of the fundamental concepts of digital image compression. There are three basic types: coding redundancy, inter-pixel redundancy and psycho-visual redundancy. In coding redundancy, information is represented in the form of codes; this type of redundancy can be removed by introducing an appropriate encoding method. Inter-pixel redundancy, on the other hand, is the failure to identify and exploit relationships in the data. It includes inter-pixel spatial redundancy and inter-pixel temporal redundancy. Inter-pixel spatial redundancy, which depends on the resolution of the image, arises from the similarity among adjacent picture elements in an image. Inter-pixel temporal redundancy is the statistical correlation between pixels in consecutive frames of a video sequence. Psycho-visual redundancy, conversely, arises because human perception of the information in an image does not rely on a quantitative analysis of every pixel or brightness level.
Image compression is attained when one or more of these redundancies are reduced or removed.
It comprises lossless compression and lossy compression. In the former, the reconstructed image is an exact copy of the original, with no information lost during the compression process. In the latter, the reconstructed image is only an approximation of the original, as some information is permanently discarded.
Image transforms are popularly adopted for decorrelating the pixels in image compression; to decrease the dependency among pixels, standard transformation tools are employed. These include the Karhunen-Loève transform (KLT) [2], the Discrete Cosine transform (DCT) [3,4], the Discrete Wavelet transform (DWT) [5,6] and the like.
Of all transformation tools available so far, the DCT is most prominently applied in the widely recognized JPEG [7] image compression standard, whereas JPEG2000 [8] is based on the DWT. Studies reveal that the DWT has a few advantages over the DCT. The DCT is applied to image blocks and causes blocking artifacts, implying loss of information. The DWT, on the other hand, provides a much better compression ratio and barely loses information, because it does not operate on image blocks and its coefficients are localized.
In recent years, the use of wavelets as a method for decomposing signals has become increasingly popular. With the DWT, the image is passed through a chain of analysis filter banks; each analysis filter bank comprises a low-pass and a high-pass filter that extract the coarse and the detailed information, respectively. Once the processing is complete, the image coefficients are divided into approximation and detail sub-bands. Entropy coding is then applied to the DWT coefficients to compress the image and enable efficient storage. To obtain the reconstructed image from these sub-bands, the DWT uses a synthesis filter bank.
Against this backdrop, we adopt a novel scheme that combines the DWT with thresholding and quadtree decomposition, with the aim of reducing the number of coefficient symbols for efficient compression of images. The resulting parameters of the proposed algorithm have been compared with the Improved Embedded Zerotree Wavelet coder (IEZW) [9] in terms of peak signal-to-noise ratio (PSNR) at different low bit-rates. The proposed model, in other words, attempts to establish an alternative to EZW [10] without incorporating any other conventional standard image compression method.
This article is organized in six sections as follows: Section 1 is the introductory portion of the study. Section 2 surveys previous literature related to the present study. Section 3 explains the preliminaries. The proposed algorithms are described in Sect. 4. Section 5 presents the results obtained, and Sect. 6 concludes the study.

Related Work
Wallace was the first to propose an image compression standard, known as the JPEG algorithm [7,11]. This tool is widely used for compressing gray-scale images. It uses a lossy form of compression based on the DCT, applied to 8 × 8 rectangular blocks of data; the signal energy is packed into a few DCT coefficients, which captures the spatial redundancy. The DCT has a major disadvantage in that it is subject to blocking artifacts, and for this reason it differs significantly from the DWT. The statistical qualities of the wavelet transform have also been widely explored. Today, wavelet-based image coding techniques are considered the most sophisticated and useful development in the field of image compression. Studies reveal that pyramid, or dyadic, wavelet decomposition [12,13] offers high energy compaction with relatively high-quality reconstructed images and is especially useful in image compression [13,14]. For these reasons, the DWT has recently become effectively operational and better suited for compression of digital images [14][15][16] than the DCT.
Studies confirm that the embedded zerotree wavelet (EZW) coder [17] is the most popular among image compression methods based on the DWT. This algorithm, first proposed by J. M. Shapiro in 1993, rests on four key notions: a discrete wavelet transform; prediction of the absence of significant information across scales, exploiting the self-similarity inherent in images; entropy-coded successive-approximation quantization (SAQ); and universal lossless data compression through adaptive arithmetic coding. Nevertheless, it has some limitations. Redundancies are observed in a few high-frequency sub-bands, and for every SAQ iteration the EZW coder examines all wavelet coefficients in each sub-band against a given threshold. To minimize this problem, Kang and others [18] offered an alternative solution by proposing a modified technique known as the improved embedded zerotree wavelet (IEZW) coder, which scans only the coefficients of significant sub-bands, reducing the bit repetition considerably. This noticeable defect in EZW also motivated Zhong [19] to design and implement another technique, based on quantized coefficient partitioning using morphological operations. In this mechanism, the method of encoding the coefficients in every sub-band line by line was abandoned; instead, the regions containing the most significant quantized coefficients were extracted by morphological dilation and encoded first. The remaining space, consisting mostly of zeros, was then encoded using zerotrees. Findings confirmed that the suggested algorithm was far superior to EZW, and the obtained results support that it can compete favourably with the most efficient wavelet-based image compression algorithms proposed so far. In [20], Quafi et al.
proposed "A Modified Embedded Zerotree Wavelet (MEZW) Algorithm for Image Compression", in which the authors modified Shapiro's EZW algorithm. In their approach, six symbols are used in place of four (as in EZW) to distribute the entropy, and the coding is optimized by grouping elements in pairs prior to coding. The results showed remarkable improvement over the PSNR and compression ratio obtained by Shapiro, without any impact on computation time. Brahimi et al. [21] proposed a technique for reducing the scanning and symbol redundancy of the existing EZW, likewise based on the use of six symbols instead of four. The main motive of this methodology was to avoid encoding the children of each significant coefficient by assuming that it has no significant descendant; this improved the quality of the reconstructed image after decoding. In a newly formulated technique [22], Chen et al. made use of Compressed Sensing (CS) theory together with EZW. Much later, extensive research in these areas prompted Brahimi et al. [21] to propose a still more effective model, which reduces the number of zerotrees as well as the scanning and symbol redundancy of the existing EZW, based on a new representation of the significance map.
In this paper, to exploit the properties of the DWT, quadtree decomposition and thresholding techniques are combined to develop a new image compression algorithm that is expected to yield much better performance in terms of PSNR at the same bitrate, without using the zerotree concept of EZW. No single standard image compression method such as EBCOT [23], SPIHT [12] or JPEG 2000 [8] has been employed. Moreover, the method has been experimentally verified on different images and has shown consistent, promising results.

Preliminaries
In order to understand the proposed study better, it is essential to discuss the concepts behind the operational implementation of the Discrete Wavelet Transform (DWT) and quadtree decomposition in digital image compression. The former is used to decompose the image and reduce redundancy, whereas the latter is used to achieve high compression ratios while preserving edge integrity.

Discrete Wavelet Transform (DWT)
The DWT, best known for its application in wavelet decomposition, is used here as a lossy coding method. Its high energy compaction is expected to provide a well-designed coding technique that reduces redundancy in an image and enables optimum compression. It works recursively, dividing the image into low-pass and high-pass components.
The two-dimensional DWT functions through a two-channel wavelet filter bank. The image is initially scanned in the horizontal direction and passed through a filter to create low-pass and high-pass frequency information. This output data is then scanned vertically to create the different sub-bands. The low-frequency LL sub-band carries the most significant information of the original image and is commonly called the approximate image, while the LH, HL and HH sub-bands denote the details of the image. Every sub-band is reduced to one quarter the size of the original image [1][24][25][26].
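The row-then-column sub-band splitting described above can be sketched in a few lines. The paper decomposes with the bior4.4 wavelet; for brevity this illustrative sketch assumes the simpler Haar filters, which produce the same LL/LH/HL/HH structure:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns the LL (approximation)
    and LH, HL, HH (detail) sub-bands, each half the size per axis."""
    img = img.astype(float)
    # Horizontal pass: low-pass = pairwise average, high-pass = pairwise difference.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Vertical pass on each intermediate result yields the four sub-bands.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar_dwt2(img)
print(LL.shape)  # each sub-band is 4x4, i.e. one quarter the original area
```

Applying the same split again to LL gives the 2-level decomposition used later in the paper.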
The basic principle behind this mechanism is best understood through the wavelet series expansion of a one-dimensional function f(x):

f(x) = (1/√M) Σ_k c_j0(k) φ_j0,k(x) + (1/√M) Σ_{j ≥ j0} Σ_k d_j(k) ψ_j,k(x),

where c_j0(k) and d_j(k) are the scaling and detail coefficients, respectively [27].
If the function f(x) being expanded is a discrete sequence of numbers, the resulting coefficients constitute the discrete wavelet transform. Notably, the DWT works excellently even when images are processed at multiple resolutions.
The working of the DWT is illustrated in Fig. 1. The outputs shown are the DWT coefficients, where h_φ(−n) and h_ψ(−m) are the scaling and wavelet vectors, acting as the low-pass and high-pass decomposition filters, respectively. In Fig. 1 [28], four lower-scale components are produced after decomposition of the input: W_φ is the approximation coefficient output resulting from these two filters, while W_ψ^i for i = H, V, D denotes the detail coefficients. The main application of the DWT is in image compression, where it decomposes images into lower and higher sub-bands; the decomposition process is pictured in Figs. 2 and 3. Although images can theoretically be decomposed to an arbitrary depth, most researchers limit the decomposition to 2-level sub-bands [29,30].

Quadtrees
Quadtrees are tree data structures in which each internal node has four children, as shown in Fig. 4. They are the two-dimensional analog of octrees and are generally used to partition a two-dimensional space by recursively subdividing it into four sections; the subdivided sections may be square, rectangular or arbitrary in shape. Quadtree decomposition (e.g. the qtdecomp function) works by first splitting a square image into four equal-sized square blocks and then examining each block to see whether it satisfies a homogeneity criterion. When a block meets the criterion, it is not subdivided further; if it fails, it is again subdivided into four blocks, and the test criterion is applied to those blocks. This procedure is repeated iteratively until every block meets the criterion. The result may contain blocks of many different sizes [31].
Quadtrees are commonly used in image representation and image compression.
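The split-until-homogeneous rule above can be sketched as follows. The max-minus-min range test used as the homogeneity criterion and the (x, y, size) leaf records are illustrative assumptions, chosen to mirror the kind of threshold test qtdecomp applies:

```python
import numpy as np

def qt_decompose(block, threshold, x=0, y=0, leaves=None):
    """Recursively split a square block into four quadrants until every
    block's max-min range is within `threshold` (the homogeneity test)."""
    if leaves is None:
        leaves = []
    n = block.shape[0]
    if n == 1 or block.max() - block.min() <= threshold:
        leaves.append((x, y, n))          # homogeneous: record position and size
        return leaves
    h = n // 2                            # otherwise split into four sub-blocks
    qt_decompose(block[:h, :h], threshold, x,     y,     leaves)
    qt_decompose(block[:h, h:], threshold, x + h, y,     leaves)
    qt_decompose(block[h:, :h], threshold, x,     y + h, leaves)
    qt_decompose(block[h:, h:], threshold, x + h, y + h, leaves)
    return leaves

img = np.zeros((8, 8)); img[4:, 4:] = 255   # one bright quadrant
blocks = qt_decompose(img, threshold=10)
print(len(blocks))  # → 4: one split yields four homogeneous quadrants
```

Flat regions collapse into a few large leaves while detailed regions are split finely, which is what reduces the symbol count in the compression stage.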

Basic Approach
The main purpose of the present approach is to use the features of the DWT. First, the DWT decomposes the input image. Then, Huffman encoding is employed to improve the compression performance. We used a 2-level wavelet (bior4.4) transform for decomposing 8-bit images of size 512 × 512 pixels.

Compression of DWT Coefficients
DWT detail coefficients have zero mean and small variance. With Huffman coding, only the most significant DWT coefficients are taken into consideration; the rest are ignored. The proposed algorithm is given below.
Algorithmic Steps:
Step 1: Apply the DWT to the gray-scale image I(M × N) so that it is decomposed into lower and higher sub-bands.
Step 2: Compute the median of the approximate coefficients, use that median as the base of the logarithm, and calculate the logarithmic coefficients accordingly. The purpose of this step is to map higher values to lower values, enhancing the compression ratio. Preprocess the detail coefficients and round them to the nearest integer.
Step 3: Apply entropy-based smoothing to the higher sub-bands according to their textural features, keeping the significant coefficients and discarding the insignificant ones.
Step 4: Apply quadtree decomposition to reduce the number of symbols in the approximate and detail coefficients. Quadtree decomposition of the approximate image is optional; it is used for higher bit-rates and lower PSNR values.
Step 5: Encode the approximate and detail coefficients using Huffman coding.
Step 6: Derive the combined compressed bit stream.
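The entropy-coding stage of Step 5 can be sketched as a standard Huffman code built over the coefficient symbols (the toy coefficient stream below is hypothetical):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code book for a symbol sequence (e.g. the
    quantized DWT coefficients): frequent symbols get shorter codes."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # merge the two rarest nodes,
        f2, _, c2 = heapq.heappop(heap)     # prefixing 0/1 to their codes
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

coeffs = [0, 0, 0, 0, 1, 1, -1, 2]          # toy quantized-coefficient stream
book = huffman_code(coeffs)
bits = "".join(book[s] for s in coeffs)
print(len(book[0]), len(bits))              # → 1 14: 0 is most frequent, shortest code
```

Smoothing and quantization in Steps 3 and 4 concentrate the histogram on a few symbols, which is exactly what makes this variable-length code effective.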
Steps involved in the Decoding Algorithm.
Input: combined compressed bit stream. Output: reconstructed image.
Algorithmic steps:
Step 1: Extract the compressed lower and higher sub-band coefficient bit streams from the combined compressed bit stream.
Step 2: Recover the lower and higher sub-band coefficients from the compressed approximate and detail coefficient bit streams using inverse Huffman coding.
Step 3: The remaining processing is simply the reverse of the encoding process.
Step 4: Obtain the reconstructed image using the inverse DWT.

Huffman Coding Based Compression
In the proposed technique, a 2-level bior4.4 wavelet transform decomposes the gray-scale image. The initial step involves preprocessing the approximate and detail images and deriving their coefficients. Both sets of coefficients are then encoded with a Huffman code to obtain a sequence of binary data. The reconstructed image is formed by decoding this binary data; for this, the encoding process is simply reversed. The concept behind the designed method is best illustrated by the algorithm and flowchart given below:

Mechanism of Proposed Method
First, the input image (a gray-scale image of size 512 × 512) is taken and the DWT is applied to it up to 2 levels using the bi-orthogonal wavelet (bior4.4) to decompose the image. Four sub-bands result: the approximate image (LL) and the detail images (LH, HL, HH). Figure 5 illustrates how the proposed method functions. The approximation coefficients are a compressed representation of the original image and the most sensitive data in the DWT transform, so they must be handled with utmost care; any mishandling greatly affects the reconstruction and causes a significant reduction in the PSNR value. The values of the approximate image range roughly between 50 and 600, and the obtained data is well distributed. Therefore, a log transform is used to map the high-range input data to low-range output data. The logarithm is taken with the median of the approximate image as its base, which ensures that the integer part of every log-transformed value is one. The integer part is then discarded, while the fractional value, kept to the tenths or hundredths place, is retained for Huffman encoding. The inverse log recreates the approximate image with only very slight changes, and these slight changes are uniform in both visual and mathematical terms.
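A minimal sketch of this median-base log transform and its inverse follows. The two-decimal rounding precision, and the choice to keep the full rounded value rather than only its fractional digits, are our assumptions:

```python
import numpy as np

def log_compress(approx, digits=2):
    """Median-base log transform of the approximation coefficients:
    a wide range of values is mapped into a narrow one, and only
    `digits` decimal places are kept for entropy coding."""
    base = np.median(approx)
    logged = np.log(approx) / np.log(base)   # log with base = median
    return np.round(logged, digits), base

def log_expand(frac, base):
    """Inverse transform: base ** value recovers the coefficients
    up to the small rounding error introduced above."""
    return base ** frac

approx = np.array([50.0, 120.0, 300.0, 450.0, 600.0])
frac, base = log_compress(approx)
recon = log_expand(frac, base)
print(np.max(np.abs(recon - approx) / approx))  # small relative error (< 5%)
```

Because every transformed value lies close to 1, the symbol alphabet seen by the Huffman coder shrinks drastically compared with the raw 50-600 range.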
To minimize the number of symbols, entropy-based smoothing and adaptive quantization are applied to the detail images. In entropy-based smoothing, the detail coefficients are smoothened according to their textural features: when the entropy of the current block is low, the block is replaced by its median, while if it is high, the block is left intact. This reduces the number of insignificant symbols. In adaptive quantization, the range of coefficients is represented by a considerably smaller range of values, which again reduces the number of symbols. After decoding, dequantization recreates the symbols in a reversible manner.
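The entropy-based smoothing step can be sketched as follows; the 4 × 4 block size and the entropy threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy of the symbol histogram of one block."""
    _, counts = np.unique(block, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def entropy_smooth(detail, bsize=4, ent_thresh=1.0):
    """Replace low-entropy (flat) blocks of a detail sub-band by their
    median, cutting the number of distinct symbols the coder must handle."""
    out = detail.copy()
    for r in range(0, detail.shape[0], bsize):
        for c in range(0, detail.shape[1], bsize):
            blk = detail[r:r + bsize, c:c + bsize]
            if block_entropy(blk) < ent_thresh:
                out[r:r + bsize, c:c + bsize] = np.median(blk)
    return out

detail = np.zeros((8, 8)); detail[0, 0] = 1   # one stray coefficient
smoothed = entropy_smooth(detail)
print(len(np.unique(smoothed)))               # → 1: the stray symbol is removed
```

Highly textured blocks (high entropy) pass through untouched, so the visually important detail survives while near-flat noise is flattened.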
In addition, unlike the EZW standard image compression method, our proposed scheme does not adopt a zerotree data structure. This is because coding the coefficients with a zerotree data structure is highly complex and computationally expensive. Our intention is to offer a simpler, better scheme that avoids unnecessary computational complexity; EZW, a multipass algorithm, is therefore not used, which adds novelty to the proposed method. Finally, our scheme provides comparable PSNR values at the same bit rate. We expect that all these factors make the scheme superior to the existing EZW method.

Performance Evaluation
The performance of lossy compression approaches can be evaluated with the help of the indicators described below.
a. Peak signal-to-noise ratio (PSNR): PSNR is the usual measure of compressed image quality. For the common case of 8 bits per pixel in the original image, the PSNR can be expressed as [32,33]

PSNR = 10 log10 (255^2 / MSE),    (1)

where the value 255 is the maximum possible value of an 8-bit image signal. The MSE in (1) denotes the mean squared error of the image, defined mathematically as

MSE = (1/n) Σ_(i,j) [F(i, j) − f(i, j)]^2,

where n is the total number of pixels, F(i, j) is the pixel value in the compressed image and f(i, j) is the corresponding pixel value in the original image.
b. Compression ratio (CR): The compression ratio [34] is a measure of compression efficiency, determined mathematically as

CR = S_original / S_compressed,

where S_original signifies the size (number of bits) of the original image data and S_compressed the size (number of bits) of the compressed image data. The bits-per-pixel (BPP) for a gray-scale image is then BPP = 8/CR.
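These indicators are straightforward to compute; a short sketch following the definitions above (the toy 4 × 4 image and bit counts are hypothetical):

```python
import numpy as np

def psnr(original, compressed):
    """Peak signal-to-noise ratio in dB for 8-bit images (peak = 255)."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def compression_ratio(original_bits, compressed_bits):
    """CR = original size / compressed size; BPP for grayscale = 8 / CR."""
    cr = original_bits / compressed_bits
    return cr, 8.0 / cr

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy(); b[0, 0] = 110                 # one pixel off by 10
print(round(psnr(a, b), 2))                 # → 40.17 dB

cr, bpp = compression_ratio(512 * 512 * 8, 131072)
print(cr, bpp)                              # → 16.0 0.5 (CR = 16, BPP = 0.5)
```

Higher PSNR at the same BPP is the comparison criterion used against IEZW in Table 1.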
The image compression algorithm proposed in this study has been applied to different gray-scale images.

Performance Evaluation of Proposed Image Compression Procedure
The performance of the suggested image compression technique has been analyzed using the parameters PSNR (peak signal-to-noise ratio) and BPP (bits per pixel). In order to maintain consistency, the same test images used in [9] were employed. Table 1 confirms that the obtained results are very close to those obtained by IEZW [9], without using any standard image compression algorithm such as EZW, SPIHT, EBCOT or JPEG2000. The bior4.4 wavelet [35] was preferred for decomposing the images with the DWT.
From Fig. 8, we notice a constructive improvement in the images and their quality, as shown in Fig. 7a-f. The program was implemented in Matlab. The results presented in Table 1 confirm that, for all test images, the outcomes are in accordance with our expectations and reflect a more promising model than IEZW at almost every bit-rate within the given range (Fig. 8). Figure 9 shows the images at different stages of the proposed method.

Conclusion
The proposed algorithm is a newly developed approach that decomposes the image data using the DWT with a bi-orthogonal wavelet. To understand the basic principle underlying the proposed mechanism, the image was divided into two parts. The first is the approximate image, which plays a key role since the approximation coefficients constitute the most sensitive data in the DWT transform. The approximate image, a compressed representation of the original, was handled with care to avoid any adverse impact on the reconstructed image that would otherwise greatly reduce the PSNR value. The second part comprises the detail images, whose coefficients were smoothened based on their textural features and then quantized; this reduced the number of insignificant symbols. The quadtree decomposition further reduced the data size by working on smaller blocks. The outcome of the proposed method was compared with one of the latest image compression methods, and the quantitative and visual results demonstrated its competitiveness with existing modern methods. Thus, we may finally conclude that the newly introduced algorithm can be highly favorable for the storage and transmission of image data.

Availability of Data and Material Not applicable.
Code Availability Not applicable.

Conflict of interest
The authors declare that they have no conflict of interest.

Dr. Prabhat Kumar is an Associate Professor in the Computer Science and Engineering Department at National Institute of Technology Patna, India. He is also the Professor-In-charge of the IT Services and Chairman of the Computer and IT Purchase Committee, NIT Patna. He is the former Head of the CSE Department, NIT Patna, as well as former Bihar State Student Coordinator of the Computer Society of India. He has over 100 publications in various reputed international journals and conferences. He is a member of NWG-13 (National Working Group 13) corresponding to ITU-T Study Group 13 "Future Networks, with focus on IMT-2020, cloud computing and trusted network infrastructures". His research areas include Wireless Sensor Networks, Internet of Things, Social Networks, Operating Systems, Software Engineering, E-governance, Image Compression, etc.