The quantum realization of image linear gray enhancement

Linear gray enhancement is a spatial-domain image enhancement technique commonly used on classical computers, mainly comprising image negative, image contrast stretching, and piecewise linear gray transformation. To realize these three linear gray enhancement techniques on quantum computers, this paper proposes three types of linear gray transformation schemes for quantum images based on the generalized model of novel enhanced quantum image representation (GNEQR), and constructs the quantum circuits that realize these three transformation methods according to the schemes. The proposed circuits take advantage of efficient quantum arithmetic operations and parallel controlled-NOT modules to decompose the classical transformations into basic unitary operators such as controlled-NOT gates and Toffoli gates. The feasibility of the schemes is verified on the IBMQ platform using a small quantum image. The complexity analysis shows that the linear gray enhancement algorithm for quantum images outperforms the classical algorithm in both space complexity and time complexity. Furthermore, the effectiveness of the proposed schemes is also demonstrated by a clarity evaluation method.


Introduction
The quantum computing model has become a hot topic in recent years; it was first proposed by the American physicist Feynman in 1982 (Feynman 1982). The famous Moore's law states that computer performance doubles every 2-3 years (Nagy and Akl 2006). However, Moore's law cannot hold forever, because electronic components cannot shrink indefinitely. The emergence of quantum computation offers a way around this limit: properties such as superposition and entanglement can make computation faster and more efficient (Kanamori et al. 2006). Researchers have also proposed quantum algorithms of practical value, such as the famous Shor's integer factoring algorithm (Shor 1994) and Grover's database searching algorithm (Grover 1996), which further support the claim that quantum computation has stronger computing power than classical computation. With the rapid development of quantum computation and quantum information, research on quantum computation has gradually extended into image processing (Beach et al. 2003). Quantum image processing (QIP) has become an important branch of quantum computation research. QIP focuses on performing classical image processing tasks on quantum computers to improve processing efficiency, especially in large-scale image processing tasks (Iliyasu 2013). At present, the two major directions of QIP are quantum image representation and quantum image processing algorithms.
In recent years, many models of quantum image representation have been proposed. In 2003, the first quantum image representation model, called the qubit lattice representation, was proposed by Venegas-Andraca and Bose (2003). Later, Le et al. proposed a flexible representation of quantum images (FRQI) that uses a quantum superposition state to store the colors and the corresponding positions of an image. In 2013, Zhang et al. proposed a novel enhanced quantum representation model (NEQR) based on FRQI, and the generalized model of NEQR (GNEQR) was proposed some years later. Compared with NEQR, GNEQR not only has higher storage efficiency in color image representation but can also represent rectangular images. In 2015, Mastriani proposed a new technique for internal image representation in a QPU, called QBIP (Mastriani 2015). These quantum image representations have been reviewed and discussed in the literature (Mastriani 2017, 2020; Ruan et al. 2021; Li et al. 2020). At the same time, many quantum image processing algorithms based on these representation models have been proposed, such as quantum image geometric transformation (Fan et al. 2016; Wang et al. 2015), quantum image scaling, quantum image compression, quantum image watermarking (Mogos 2009; Iliyasu et al. 2012), quantum image scrambling (Jiang et al. 2014; Zhou et al. 2015), quantum image encryption (Zhou et al. 2013), quantum image edge detection (Fu et al. 2009), and quantum image enhancement (Yuan et al. 2017).
Image enhancement improves the interpretability or perception of information in images for human viewers and provides "better" input for other automated image processing techniques. The principal objective of image enhancement is to modify attributes of an image to make it more suitable for a given task and a specific observer. Many techniques can enhance a digital image without spoiling it. Enhancement methods can broadly be divided into two categories: spatial-domain methods and frequency-domain methods (Maini and Aggarwal 2010).
In spatial-domain techniques, we deal directly with the image pixels: the pixel values are manipulated to achieve the desired enhancement. Frequency-domain image enhancement, on the other hand, analyzes mathematical functions or signals with respect to frequency and operates directly on the transform coefficients of the image, such as those of the Fourier transform, the discrete wavelet transform (DWT), and the discrete cosine transform (DCT). Spatial-domain methods can again be classified into two broad categories: point processing operations and neighborhood enhancement operations. However, compared with classical image processing, quantum image processing technology is still in its infancy, and there is not much research on spatial-domain methods in the quantum field. Several quantum image edge detection schemes based on the Sobel operator (Fan et al. 2019a), the Prewitt operator, the Kirsch operator (Xu et al. 2020), and the Laplacian operator (Fan et al. 2019b) have been proposed in recent years, but these algorithms belong to the category of quantum image neighborhood enhancement. Point processing is the simplest spatial-domain operation because it acts on single pixels only: each pixel value of the processed image depends only on the corresponding pixel value of the original image. Point processing approaches can be classified into two broad categories: linear gray transformation and nonlinear gray transformation (Gonzalez et al. 2007).
Linear gray transformation for quantum images is the focus of our research. This paper proposes three types of linear gray transformation schemes for quantum images based on point processing operations in quantum computers. We provide quantum circuits that realize image negative, image contrast stretching, and piecewise linear gray transformation by using high-efficiency quantum arithmetic operations. To the best of our knowledge, this problem has not been studied before.
The rest of the paper is organized as follows. Section 2 gives a brief background on the GNEQR representation, the three types of linear gray transformation, quantum gates, and quantum arithmetic operations. Section 3 discusses the quantum circuit architecture of these linear gray transformations. Section 4 presents the theoretical analysis of network complexity and the simulation on the IBMQ platform. Finally, Section 5 concludes the paper.

The generalized model of NEQR (GNEQR)
GNEQR is the generalized model of NEQR. For a gray image of size 2^n × 2^m with gray range 2^q, GNEQR encodes the position information into (n + m) qubits and the gray intensity into q qubits. According to GNEQR, a quantum image can be written in the following form:

|I⟩ = (1/√(2^{n+m})) Σ_{Y=0}^{2^n−1} Σ_{X=0}^{2^m−1} |f(Y, X)⟩ |Y⟩ |X⟩,    (1)
where the binary sequence f(Y, X) = C_{YX}^{q−1} … C_{YX}^{1} C_{YX}^{0} is the gray value of the pixel at coordinate (Y, X). Figure 1 shows an example of a 2^1 × 2^2 gray image and its GNEQR representation.
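As an illustration, the GNEQR state of a small image can be built directly as a classical statevector. The following NumPy sketch is our own (the function name and the qubit ordering, with the gray qubits most significant, are assumptions for illustration), not part of the paper's circuits:

```python
import numpy as np

def gneqr_state(image, q):
    """Build the GNEQR statevector of a 2^n x 2^m gray image:
    (1/sqrt(2^(n+m))) * sum_{Y,X} |f(Y,X)>|Y>|X>, gray qubits on top."""
    rows, cols = image.shape
    n, m = int(np.log2(rows)), int(np.log2(cols))
    state = np.zeros(2 ** (q + n + m))
    for Y in range(rows):
        for X in range(cols):
            pos = (Y << m) | X                         # position index |Y>|X>
            idx = (int(image[Y, X]) << (n + m)) | pos  # prepend gray bits
            state[idx] = 1.0
    return state / np.sqrt(rows * cols)

img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
psi = gneqr_state(img, q=8)
print(np.count_nonzero(psi))  # 4: one equal-amplitude basis state per pixel
```

Each pixel contributes exactly one basis state of amplitude 1/2, so the four pixels of the 2 × 2 example appear as four equal peaks in the statevector.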

Linear gray transformation
Linear gray transformation adjusts the gray levels of the target image by establishing a gray mapping, linearly expanding or compressing the gray range of the image (Gonzalez et al. 2007). The gray mapping relationship can be described as

O(x, y) = T[I(x, y)],    (4)

where T is a transformation that maps the original image into the transformed image, (x, y) is the pixel coordinate, I(x, y) is the original image, and O(x, y) is the transformed image. Figure 2 shows the schematic diagram of mapping the original image into the transformed image; it can be seen from Fig. 2 that the pixel values change after the transformation. In classical point processing, there are three types of linear gray-level transformation: image negative, image contrast stretching, and piecewise linear gray transformation. Each method suits a different scenario, so for different images it is necessary to use the appropriate transformation to achieve a better enhancement effect. Figure 3 shows the principle diagrams of the three types of gray transformation, where the horizontal axis represents the pixel value of the original image, the vertical axis represents the pixel value of the transformed image, and the color depth of the image is q bits.
The most basic and simple operation in digital image processing is to compute the negative of an image, which inverts the pixel gray values. If I(x, y) is the gray value of the original image, the gray value of the negative image can be computed as

O(x, y) = 2^q − 1 − I(x, y).    (5)
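A minimal classical reference for the negative, Eq. (5) (the function name is ours, chosen for illustration):

```python
def negative(value, q=8):
    """Image negative of Eq. (5): O = 2^q - 1 - I, for one pixel value."""
    return (1 << q) - 1 - value

print(negative(0), negative(200))  # 255 55
```

Dark pixels become bright and vice versa; applying the map twice returns the original value.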
Figure 3a shows the principle diagram of image negative. Image contrast stretching expands the gray values of all pixels in the same proportion, increasing the difference between adjacent pixels and thereby enhancing contrast (Singh and Mittal 2014). Suppose the pixel gray range of the input image I(x, y) is [a, b] and the gray range of the transformed image O(x, y) is [c, d]. Image contrast stretching can be described as

O(x, y) = k (I(x, y) − a) + c,    (6)
k = (d − c) / (b − a),    (7)

where k is the gray value stretching coefficient, representing the slope of the oblique line in Fig. 3b.
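A classical sketch of Eqs. (6)-(7) may help fix the notation; the rounding to the nearest integer is our own assumption, since pixel values are integers:

```python
def stretch(value, a, b, c, d):
    """Contrast stretching of Eqs. (6)-(7): map gray range [a, b] onto [c, d]
    with stretching coefficient k = (d - c) / (b - a)."""
    k = (d - c) / (b - a)
    return round(k * (value - a) + c)

# Example with the breakpoints used later in the paper: [30, 127] -> [30, 224]
print(stretch(30, 30, 127, 30, 224), stretch(127, 30, 127, 30, 224))  # 30 224
```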
If the gray range of the original image is divided into two or more segments that are transformed linearly, we call it piecewise linear gray transformation. Piecewise linear gray transformation can stretch the gray details of regions of interest in the target image while relatively suppressing uninteresting regions, according to the actual situation (Singh and Mittal 2014). Figure 3c shows its schematic diagram. Suppose the segmentation points are (a, c) and (b, d); the piecewise linear gray transformation can be described as

O(x, y) = k1 · I(x, y),                  0 ≤ I(x, y) < a,
O(x, y) = k2 · (I(x, y) − a) + c,        a ≤ I(x, y) < b,    (8)
O(x, y) = k3 · (I(x, y) − b) + d,        b ≤ I(x, y) ≤ 2^q − 1,

where k1, k2, and k3 are the slopes of the three line segments in Fig. 3c. To show the different effects of the three types of linear gray transformation, we select the pollen image as an example and obtain the transformed images by applying the three transformations respectively, as shown in Fig. 4.
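The three segments above can be sketched classically as follows; the slope formulas k1 = c/a, k2 = (d − c)/(b − a), k3 = (2^q − 1 − d)/(2^q − 1 − b) are our reading of Fig. 3c, and the integer rounding is our own assumption:

```python
def piecewise(value, a, b, c, d, q=8):
    """Piecewise linear gray transformation of Eq. (8), one pixel value."""
    top = (1 << q) - 1
    if value < a:                     # segment [0, a): slope c / a
        return round(value * c / a)
    if value < b:                     # segment [a, b): slope (d - c) / (b - a)
        return round((value - a) * (d - c) / (b - a) + c)
    return round((value - b) * (top - d) / (top - b) + d)  # segment [b, top]
```

With the paper's later example points (30, 30) and (127, 224), the middle segment is stretched (slope 2) while the outer segments change little.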

Quantum gates
The computational basis states |0⟩, |1⟩ and their dual states ⟨0|, ⟨1| can be expressed as column and row vectors:

|0⟩ = [1, 0]^T, |1⟩ = [0, 1]^T, ⟨0| = [1, 0], ⟨1| = [0, 1].

The basic controlled gates used in this paper are the controlled-NOT (CNOT) gate, which flips the target qubit when its single control qubit is |1⟩, and its two-control extension, the Toffoli gate. The corresponding controlled gates are presented in Fig. 6.
The parallel controlled-NOT module consists of n CNOT gates, as illustrated in Fig. 7. This module copies the information of an n-qubit basis-state sequence into n auxiliary qubits initialized to |0⟩.
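On a computational basis state, the module's action can be modeled bitwise (this classical sketch is ours; note that on superpositions the CNOTs entangle rather than clone, so no-cloning is not violated):

```python
def cnot_fanout(bits):
    """Model of the parallel controlled-NOT module on a basis state:
    each CNOT maps |b_i>|0> to |b_i>|b_i XOR 0> = |b_i>|b_i>."""
    anc = [0] * len(bits)
    for i, b in enumerate(bits):
        anc[i] ^= b        # CNOT: ancilla target flips iff control bit is 1
    return bits, anc

print(cnot_fanout([1, 0, 1, 1]))  # ([1, 0, 1, 1], [1, 0, 1, 1])
```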
The quantum gates used in this paper also include the Peres gate (PG) and the TR gate (Amy et al. 2013). Figure 8 shows the circuits and logical relationships of the PG gate and the TR gate. To further promote the development of quantum image processing, Li et al. in 2020 designed fault-tolerant implementations of the TR gate, the Peres gate, and their variants, with better performance than implementations built from the Toffoli gate. Based on these fault-tolerant TR and Peres gates, they implemented fault-tolerant quantum arithmetic operation circuits for quantum image processing using the TR gate, the PG gate, and their variants. For a more comprehensive survey of TR1, TR2, PG1, PG2, and other variants of the TR and PG gates, readers can refer to Li et al. (2020).

Quantum arithmetic operations
From Eqs. 5, 6, and 8, we can see that linear gray transformation mainly uses addition, subtraction, multiplication, and comparison operations. Hence, the relevant quantum arithmetic operations are introduced in this section.

Quantum adder and quantum modular subtracter
Li et al. used the PG1 gate and the TR2 gate to design an efficient quantum adder with one auxiliary qubit, whose quantum cost is only (13n − 10) (Li et al. 2020). Figure 9 shows the implementation circuit for the quantum adder. Suppose a = a_{n−1} … a_1 a_0 and b = b_{n−1} … b_1 b_0; the quantum adder implements the operation

|a⟩|b⟩ → |a⟩|a + b⟩.    (11)

Since the quantum subtractor is not yet mature, the quantum modular subtractor is used instead of the quantum subtractor in the circuit designs of this paper; this is valid because subtraction is equivalent to modular subtraction when no borrow occurs. Li et al. (2020) note that the quantum modular adder is obtained by modifying the circuit of the quantum adder, and the circuit of the quantum modular subtractor is then obtained by substituting TR1 and PG2 for PG1 and TR2 in the circuit of the quantum modular adder. The quantum modular subtractor implements the operation

|a⟩|b⟩ → |a⟩|(b − a) mod 2^n⟩.    (12)

The simplified circuit diagrams of the quantum adder and the quantum modular subtractor are shown in Fig. 11a and b.
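A truth-table model of the modular subtractor (the operand ordering in Eq. 12 is our reading of the garbled source) makes its reversibility explicit, which any unitary circuit must satisfy:

```python
def mod_sub(a, b, n):
    """Model of the quantum modular subtractor: |a>|b> -> |a>|(b - a) mod 2^n>."""
    return a, (b - a) % (1 << n)

# The map must be a bijection on basis states, since quantum gates are unitary
n = 3
images = {mod_sub(a, b, n) for a in range(1 << n) for b in range(1 << n)}
print(len(images))  # 64 distinct outputs from the 64 possible inputs
```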

Quantum comparator
A comparator compares two positive integers and occupies an important position in quantum image processing. Li et al. (2020) implement the quantum comparator circuit using the TR1 and PG1 gates, as shown in Fig. 10; its symbol is given in Fig. 11c. The output qubit c denotes the comparison result: c = 0 when a ≥ b, and c = 1 when a < b.
This paper uses the comparators to divide the pixel value of an image into multiple intervals.
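The interval division can be modeled with two comparator outputs; the encoding c = 1 iff the first operand is smaller is our assumption, consistent with the piecewise network described later:

```python
def interval(value, a, b):
    """Place a pixel value into one of three intervals using two comparator
    outputs: c1 = (value < a), c2 = (value < b), with a < b."""
    c1 = int(value < a)
    c2 = int(value < b)
    if c1 == 1:
        return "low"      # value in [0, a)
    if (c1, c2) == (0, 1):
        return "mid"      # value in [a, b)
    return "high"         # value in [b, 2^q)

print(interval(10, 30, 127), interval(60, 30, 127), interval(200, 30, 127))
```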

Quantum multiplier
Li et al. designed a controlled adder based on the quantum adder; a quantum multiplier can then be realized by cascading multiple controlled quantum adders. The circuit of the quantum multiplier is shown in Fig. 12. The multiplier implements

|a⟩|b⟩|0⟩ → |a⟩|b⟩|a · b⟩,

where a = a_{n−1} … a_1 a_0 and b = b_{n−1} … b_1 b_0.
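The cascade of controlled adders realizes the familiar shift-and-add scheme, which can be sketched classically (function name ours):

```python
def mult_model(a, b, n):
    """Shift-and-add model of the quantum multiplier: bit b_i of the
    multiplier controls adding (a << i) into the product register."""
    prod = 0
    for i in range(n):
        if (b >> i) & 1:   # controlled quantum adder with control qubit b_i
            prod += a << i
    return prod

print(mult_model(5, 6, 3))  # 30
```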

Image linear gray transformation's GNEQR representation
In this paper, we use GNEQR to represent quantum images. Because gray transformation only manipulates the gray value of every pixel, we only need to change the gray value in Eq. 1. We define the gray negative operation, the contrast stretching operation, and the piecewise linear transformation operation as U_1, U_2, and U_3, respectively. Assume |I⟩ is the original image of size 2^n × 2^m with gray range 2^q, and |O⟩ is the transformed image.
Equations 15, 16, and 18 give the quantum representations of the three types of linear gray transformation, respectively. The circuits given in the following are based on them.

Quantum circuit architecture of linear gray transformation
Because the operations U_1, U_2, and U_3 are independent of each other, as shown in Eqs. 15-18, we give separate circuits to realize each of them.

Image negative network
The image negative network realizes U_1. We use the quantum modular subtractor in place of a subtractor to construct the network structure. Comparing Eq. 15 with Eq. 12, we can replace b and a in Eq. 12 with 2^q − 1 and f(Y, X), respectively. The quantum image gray negative network is shown in Fig. 13. The final output of the network is the gray information of the negative image.

Quantum image contrast stretching network
According to Eq. 16, we use the quantum modular subtractor, the quantum multiplier, and the quantum adder to design the contrast stretching network. The first step corresponds to the quantum subtraction modulo 2^q, the second to the quantum multiplier, and the third to the quantum adder. We cascade the three arithmetic operations to realize U_2, as shown in Fig. 14. The output of the whole network, (k(f(Y, X) − a) + c) mod 2^q, is the gray information of the transformed image.
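The three-stage cascade can be modeled classically as follows (our sketch; each stage reduces its result mod 2^q as the quantum registers do):

```python
def stretch_network(value, a, c, k, q=8):
    """Model of the three-stage contrast-stretching cascade of Fig. 14."""
    M = 1 << q
    t = (value - a) % M    # stage 1: quantum modular subtractor
    t = (t * k) % M        # stage 2: quantum multiplier
    return (t + c) % M     # stage 3: quantum adder
```

For pixel values inside the stretched range, the cascade agrees with the direct formula k(I − a) + c of Eq. 6, since no modular wrap-around occurs.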

Piecewise linear gray transformation network
Unlike the image negative network and the image contrast stretching network, the piecewise linear gray transformation network is more complex and requires more arithmetic operations. To show the details of this network clearly, this subsection discusses the whole workflow of our scheme as well as its quantum circuit realization. The flow chart is shown in Fig. 15 and is divided into four stages. According to the flow chart, Fig. 16 gives the piecewise linear gray transformation network. The following four steps describe the network's details: preparation of the quantum state, copying the pixel information, dividing the pixel values into three intervals, and performing the operation U_3.
Step 1. Preparation. Input the gray image into a quantum computer; we use GNEQR to store the image as the quantum state of Eq. 1.

Step 2. Auxiliary qubits store the pixel information. We first prepare auxiliary qubits |0⟩ to store the pixel value information of the original image by using the parallel controlled-NOT modules.

Step 3. Divide pixel values into three intervals. According to the principle of piecewise linear gray transformation in Fig. 3c, this step uses two quantum comparators to divide the pixel values into three intervals. For the convenience of the following description, we mark the left comparator as COMP1 and the right one as COMP2. COMP1 takes f(Y, X) and a as input and outputs c1; COMP2 takes f(Y, X) and b as input and outputs c2. According to Eq. 13, c1 = 1 when f(Y, X) < a and c1 = 0 otherwise; similarly for c2 with threshold b. Finally, we divide the pixel values into three intervals by judging the outputs of the two comparators. These outputs play an important role in the network: throughout the network, they determine the inputs of the quantum arithmetic operations. For example, if (c1, c2) = (0, 1), the inputs for the quantum modular subtractor are f(Y, X) and a, and if c2 = 0, its inputs are f(Y, X) and b. The same rule applies to the quantum adder and multiplier in the network, so we do not repeat it. Note that x is the number of extra auxiliary qubits |0⟩, and its value depends on the stretching coefficient k.
Step 4. Perform the operation U_3. If c1 = 1, the current pixel value lies in the interval [0, a), and the inputs for the quantum multiplier are f(Y, X) and k1; the multiplier expands the gray value of this interval by k1 times, giving k1 · f(Y, X). If (c1, c2) = (0, 1), the current pixel value lies in [a, b); the inputs for the quantum modular subtractor are f(Y, X) and a, the input for the quantum multiplier is k2, and the input for the quantum adder is c. These three arithmetic operations realize the following operation on the gray value of this interval:

(k2 · ((f(Y, X) − a) mod 2^q) + c) mod 2^q = (k2 (f(Y, X) − a) + c) mod 2^q = k2 (f(Y, X) − a) + c,

where the last equality holds because the result lies in [0, 2^q).
The operation process is the same as that of Eq. 21. If c2 = 0, the current pixel value lies in the interval [b, 2^q − 1]; the inputs for the quantum modular subtractor are f(Y, X) and b, the input for the quantum multiplier is k3, and the input for the quantum adder is d; the operation process is again the same as that of Eq. 21. Finally, ignoring the garbage outputs, we obtain the resulting quantum image.
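The four steps above can be summarized in a classical per-pixel model of the network (our sketch; the comparator encoding and operand choices follow the description above):

```python
def piecewise_network(value, a, b, c, d, k1, k2, k3, q=8):
    """Model of the piecewise network of Fig. 16: two comparator outputs
    (c1, c2) select the operands, then subtract/multiply/add act mod 2^q."""
    M = 1 << q
    c1, c2 = int(value < a), int(value < b)
    if c1 == 1:                                # interval [0, a): O = k1 * I
        return (k1 * value) % M
    if (c1, c2) == (0, 1):                     # interval [a, b): O = k2*(I-a)+c
        return (k2 * ((value - a) % M) + c) % M
    return (k3 * ((value - b) % M) + d) % M    # interval [b, 2^q): O = k3*(I-b)+d
```

With the example parameters of the next subsection (a = 30, b = 127, c = 30, d = 224, k1 = 1, k2 = 2, k3 = 1), values below 30 pass through unchanged and the middle interval is stretched by a factor of 2.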

A simple example
After quantum preparation of an image, let us consider the simple 2^2 × 2^2 GNEQR image with gray range 2^8 shown in Fig. 17b as an example. We first assign an initial gray value to each pixel and number the pixels 1-16. Here, we regard the gray negative circuit and the image contrast stretching circuit as parts of the piecewise linear gray transformation circuit; therefore, we only apply the piecewise linear gray transformation to the sample image in this section. The change of the gray information of the sample image through the network is shown in Table 1. In step 2, we prepare auxiliary qubits |0⟩ to store the pixel information of the original image, as shown in Fig. 17c. In step 3, we divide the gray values into three intervals, and each interval is then expanded or compressed by a different proportion. Owing to the current limitations of quantum floating-point arithmetic, the gray expansion or compression coefficient k must be set to an integer. We set the two segmentation points to (30, 30) and (127, 224), that is, a = 30, c = 30, b = 127, d = 224; the 16 pixels are thereby divided into three parts: (1-4), (5-12), and (13-16). According to Eq. 21, the slopes of the three line segments in Fig. 3c are k1 = 1, k2 = 2, and k3 = 1. In step 4, we perform the linear gray transformation on the pixels of the three intervals according to Eqs. 27-29, and this process is completed by judging the outputs of the two comparators. According to the gray information after the transformation in Table 1, Fig. 17e shows the final resulting image. Compared with the original image, the gray contrast of pixels (5-12) of the resulting image is stretched, while the gray values of pixels (1-4) and (13-16) remain unchanged.

Simulation analysis
To test the proposed linear gray transformation schemes, the IBMQ platform was used through its Qiskit framework. In this experiment, we demonstrate that the circuits of the aforementioned schemes can be implemented on the IBMQ platform despite its qubit constraints. The experiments ran the circuits of the three transformations on the IBMQ simulator named "ibmq_qasm_simulator". As in Section 3.3, we again choose the piecewise linear gray transformation network as the object of our experiment, because it is the most representative of the three networks; the relevant reasons were given in Section 3.3. Owing to the limited number of qubits, we have to choose a small image as the sample image; this is nevertheless sufficient for the simulation and verification of the proposed network circuit.
We choose Fig. 17b as the original image; the gray value of each pixel of this image is given in Table 1. We then construct a specific simulation circuit for Fig. 17b on the IBMQ platform according to the piecewise linear gray transformation network. To make full use of the precious qubits, we reuse some qubits when constructing the simulation circuit; the simulation results are not affected. Figure 18 shows the simulation circuit constructed on the IBMQ platform, and Fig. 19 shows the details of the original image and the resulting image. Since visual comparison alone relies on the human visual system, Fig. 19 also shows the histograms of the two images so that the change of gray information can be observed. Comparing the two histograms, it is not difficult to see that the contrast of the sample image is stretched, which is consistent with the numerical analysis in Section 3.3. To further support the claim that image clarity has indeed been improved, we use the Brenner function (Li et al. 2002) for quantitative analysis.
The Brenner function is a common indicator of image clarity. Given an image I, it is defined as

F(I) = Σ_x Σ_y (I(x + 2, y) − I(x, y))²,    (32)

where I(x, y) is the gray value at point (x, y). The algorithm computes the squared difference of gray values between pixels two steps apart in the vertical direction; it requires little calculation and is fast. According to Eq. 32, we calculated the clarity of the original image and of the output image on MATLAB, obtaining F = 180534 and F = 233441, respectively. We can therefore conclude that the clarity of the sample image is improved by the proposed network.
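The measure is straightforward to implement; this NumPy sketch follows Eq. (32) with the two-pixel step taken along the rows (the axis choice matches the vertical direction described above, but is an assumption about the paper's exact convention):

```python
import numpy as np

def brenner(image):
    """Brenner clarity measure of Eq. (32): sum of squared gray differences
    between pixels two rows apart."""
    img = image.astype(np.int64)        # widen to avoid uint8 overflow
    diff = img[2:, :] - img[:-2, :]
    return int((diff ** 2).sum())

flat = np.full((4, 4), 128, dtype=np.uint8)               # uniform gray
sharp = np.tile(np.array([[0], [0], [255], [255]], dtype=np.uint8), (1, 4))
print(brenner(flat), brenner(sharp))  # 0 for flat; large for a sharp edge
```

A higher Brenner score indicates stronger local gray differences, i.e., a sharper image.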

Circuit complexity
The quantum image linear gray enhancement scheme relies on the parallelism of quantum computing, which makes it better than the classical method not only in storage space but also in computing efficiency. In this section, we analyze the complexity of the three types of linear gray enhancement schemes designed in this paper in terms of space and time.

Spatial complexity
Space complexity refers to the memory cells an image needs when it is stored in a computer. In a classical computer, a 2^n × 2^m image with gray range 2^q needs q · 2^{n+m} bits, because it has 2^{n+m} pixels and each pixel needs q bits. In a quantum computer, according to GNEQR, the same image needs only (n + m + q) qubits. Table 2 gives three common image sizes as examples. The comparison shows that the space complexity of storing images in a quantum computer is significantly lower than that in a classical computer.
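The gap between the two storage counts can be computed directly (a small illustrative sketch of the formulas above):

```python
def storage(n, m, q):
    """Classical bits vs GNEQR qubits for a 2^n x 2^m image, gray range 2^q."""
    classical_bits = q * 2 ** (n + m)
    qubits = n + m + q
    return classical_bits, qubits

print(storage(9, 9, 8))  # a 512 x 512 8-bit image: (2097152, 26)
```

The classical cost grows exponentially with n + m, while the qubit count grows only linearly.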

Time complexity
Time complexity refers to the time an algorithm consumes during execution. In a classical computer, it depends on the number of operation steps of the algorithm, while in a quantum computer, the complexity of a quantum circuit network is determined by the number of elementary quantum gates, called the quantum cost. Assuming the original image is a 2^n × 2^m image with gray range 2^q, we discuss the time complexity on a classical computer and on a quantum computer, respectively.
In classical computers, linear gray transformations must operate on every pixel to change its gray value. For each pixel, image negative requires one modular subtraction; image contrast stretching requires one modular subtraction, one modular multiplication, and one modular addition; and piecewise linear gray enhancement requires four gray-information copying operations, two comparisons, one modular subtraction, one multiplication, and one addition. Therefore, in classical computers, the complexities of the three linear gray transformations are 2^{n+m}, 3 · 2^{n+m}, and 9 · 2^{n+m} operations, respectively. In a quantum computer, the complexity of a basic quantum gate is 1, including the X gate, the CNOT gate, and any 2 × 2 unitary operator (Nielson and Chuang 2000). Here, we take the CNOT gate as the basic unit for evaluating circuit complexity. The complexity of the circuits designed in this paper depends mainly on the quantum arithmetic operations; Li et al. (2020) give the quantum cost of each operator, which we list in Table 3. From the circuits of Figs. 13, 14, and 16, we can count the number of operators in each circuit. The quantum circuit for image gray negative contains one quantum modular subtractor, so its complexity is 13q + 22. The quantum circuit for image contrast stretching contains one quantum adder, one quantum modular subtractor, and one quantum multiplier, so its complexity is (13q − 10) + (13q + 22) plus the multiplier cost, which is O(q²) and dominates. The quantum circuit for piecewise linear gray transformation additionally includes eight parallel CNOT modules and two comparators, so its complexity is also O(q²). Crucially, these quantum costs depend only on the gray depth q and not on the image size, because the circuits act on all 2^{n+m} pixels in superposition simultaneously. Following this method of calculating complexity, Table 4 gives three examples to further show the superiority of the processing speed of the quantum image gray enhancement algorithm.

Conclusion
Based on the GNEQR representation, this paper proposes three types of linear gray transformation schemes for quantum images and gives the corresponding quantum circuits. Through numerical analysis and simulation on the IBMQ platform, the effectiveness and feasibility of the proposed schemes are verified. The complexity analysis shows that the quantum linear gray enhancement algorithm outperforms the classical algorithm in both space complexity and time complexity; the efficiency advantage is especially obvious in large-scale image processing tasks. The realization of quantum image linear gray transformation can also serve as a basis for other QIP tasks, such as quantum image segmentation and quantum image feature extraction.
Owing to the limitation of quantum transcendental function operations, the design of quantum image nonlinear transformations has not yet been studied. In future work, we will study the design of quantum transcendental function operators, which will also lay the foundation for realizing nonlinear gray enhancement of quantum images.
Funding This work is supported by the Science and Technology Project of Guangxi under grant no. 2020GXNSFDA238023.
Code availability Simulation data and source code is available upon reasonable request.

Conflict of interest
The authors declare no competing interests.