The experiment is performed on a laptop with an AMD64 CPU (4 processors), 4 GB RAM, Windows 10, and Java 15. The dataset is a set of ten original 180x250 images, and three 3x3 convolution filters are tested: the blur filter 1/9 {{1, 1, 1}, {1, 1, 1}, {1, 1, 1}}, the sharpening filter {{0, −1, 0}, {−1, 5, −1}, {0, −1, 0}}, and the edge-detection filter {{−1, −1, −1}, {−1, 8, −1}, {−1, −1, −1}}. After these convolutional filters are applied, the images cannot be recovered well, except for the blur filter, because the filtered images are seriously modified and three times smaller. Therefore, the filtered images are zoomed three times before being compared with the deconvoluted images produced by the reverse image deconvolution technique in this research. Exactly, let MAE0 be the mean absolute error between a filtered image and its original image, and let MAE be the mean absolute error between a deconvoluted image and its original image:
$$\begin{array}{l}\text{MAE0}=\frac{1}{N}\sum_{i}\frac{1}{n_{i}}\sum_{j}\left|\text{imageFiltered}\left[i\right]\left[j\right]-\text{image}\left[i\right]\left[j\right]\right|\\ \text{MAE}=\frac{1}{N}\sum_{i}\frac{1}{n_{i}}\sum_{j}\left|\text{imageDecov}\left[i\right]\left[j\right]-\text{image}\left[i\right]\left[j\right]\right|\end{array} \tag{4}$$
where the notation |.| denotes absolute value, N = 10 is the number of images, and ni is the number of pixels of the ith image. Obviously, image[i][j] denotes the jth pixel of the ith image, with the note that image, imageFiltered, and imageDecov are the original image, the filtered image, and the deconvoluted image, respectively. For each filter, the so-called loss ratio r between MAE and MAE0 is computed. The smaller the loss ratio r is, the better the deconvolutional task is.
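As a concrete illustration of Eq. (4), the following is a minimal Java sketch of the dataset-level mean absolute error, assuming every image (original, zoomed filtered image, or deconvoluted image) is stored as a flat array of pixel values; the class and variable names are illustrative and not taken from the implementation used in the experiment.

```java
/** Minimal sketch of Eq. (4): mean absolute error averaged over N images.
 *  Assumes restored[i] and originals[i] are pixel arrays of equal length,
 *  where "restored" is either the zoomed filtered image (for MAE0)
 *  or the deconvoluted image (for MAE). */
public final class MaeUtil {

    /** Inner term of Eq. (4): (1/n_i) * sum_j |restored[j] - original[j]|. */
    static double maeSingle(double[] restored, double[] original) {
        double sum = 0.0;
        for (int j = 0; j < original.length; j++) {
            sum += Math.abs(restored[j] - original[j]);
        }
        return sum / original.length;
    }

    /** Outer average of Eq. (4) over the N images of the dataset (N = 10 here). */
    static double maeDataset(double[][] restored, double[][] originals) {
        double sum = 0.0;
        for (int i = 0; i < originals.length; i++) {
            sum += maeSingle(restored[i], originals[i]);
        }
        return sum / originals.length;
    }
}
```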
$$r=\frac{\left|\text{MAE}-\text{MAE0}\right|}{\text{MAE0}} \tag{5}$$
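The loss ratio of Eq. (5) then follows directly; in the sketch below, the example value in main comes from the first row of Table 1.

```java
public final class LossRatio {

    /** Loss ratio r of Eq. (5): relative gap between MAE and MAE0. */
    static double lossRatio(double mae, double mae0) {
        return Math.abs(mae - mae0) / mae0;
    }

    public static void main(String[] args) {
        // Blur filter at gamma = 1 (Table 1): prints ~2.963, i.e. about 296%.
        // The small gap to the reported 296.1781% comes from the four-decimal
        // rounding of the MAE and MAE0 values in the table.
        System.out.println(lossRatio(0.2881, 0.0727));
    }
}
```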
The test is done with 19 learning rates γ = 1, 0.9, …, 0.1, 0.09, …, 0.01 because the stochastic gradient descent (SGD) algorithm is affected by the learning rate. Table 1 shows the MAE, MAE0, and loss ratios of the three filters with regard to the ten learning rates from 1 down to 0.1.
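To illustrate how the learning rate γ enters an SGD update, the sketch below estimates a 3x3 filter from an (original, filtered) image pair by minimizing the squared reconstruction error. It is a generic illustration only, assuming a stride-1 convolution with zero padding; it is not a reproduction of the reverse image deconvolution technique evaluated in this research.

```java
/** Generic SGD sketch: estimate a 3x3 kernel w such that conv(original, w) ~ filtered.
 *  Illustrative only; stride-1 convolution and zero padding are assumptions of this
 *  sketch, not a description of the reverse deconvolution technique under test. */
public final class KernelSgd {

    static double[][] estimateKernel(double[][] original, double[][] filtered,
                                     double gamma, int epochs) {
        double[][] w = new double[3][3];                  // start from a zero kernel
        int rows = filtered.length, cols = filtered[0].length;
        for (int e = 0; e < epochs; e++) {
            for (int y = 0; y < rows; y++) {
                for (int x = 0; x < cols; x++) {
                    // Predicted pixel: 3x3 neighborhood of the original weighted by w.
                    double pred = 0.0;
                    for (int u = 0; u < 3; u++)
                        for (int v = 0; v < 3; v++)
                            pred += w[u][v] * pixel(original, y + u - 1, x + v - 1);
                    double err = pred - filtered[y][x];
                    // SGD step on the squared error of this pixel
                    // (the constant factor 2 is folded into gamma).
                    for (int u = 0; u < 3; u++)
                        for (int v = 0; v < 3; v++)
                            w[u][v] -= gamma * err * pixel(original, y + u - 1, x + v - 1);
                }
            }
        }
        return w;
    }

    /** Zero padding outside the image border (an assumption of this sketch). */
    static double pixel(double[][] img, int y, int x) {
        return (y < 0 || x < 0 || y >= img.length || x >= img[0].length) ? 0.0 : img[y][x];
    }
}
```

In such a plain update, a large γ makes individual steps overshoot, which is one common reason a sweep over learning rates, as in Table 1, is performed.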
Table 1. Loss ratios of filters regarding learning rates from 1 down to 0.1

| γ | Filter | MAE | MAE0 | Loss ratio |
|---|---|---|---|---|
| 1 | Blur | 0.2881 | 0.0727 | 296.1781% |
| | Sharpening | 0.2200 | 0.1758 | 25.1110% |
| | Edge | 0.5563 | 0.5482 | 1.4685% |
| 0.9 | Blur | 0.1988 | 0.0727 | 173.3775% |
| | Sharpening | 0.2232 | 0.1758 | 26.9450% |
| | Edge | 0.5538 | 0.5482 | 1.0093% |
| 0.8 | Blur | 0.3629 | 0.0727 | 398.9925% |
| | Sharpening | 0.2579 | 0.1758 | 46.6521% |
| | Edge | 0.5541 | 0.5482 | 1.0713% |
| 0.7 | Blur | 0.1246 | 0.0727 | 71.2817% |
| | Sharpening | 0.2201 | 0.1758 | 25.1523% |
| | Edge | 0.5558 | 0.5482 | 1.3837% |
| 0.6 | Blur | 0.0950 | 0.0727 | 30.5891% |
| | Sharpening | 0.2329 | 0.1758 | 32.4306% |
| | Edge | 0.5555 | 0.5482 | 1.3313% |
| 0.5 | Blur | 0.1300 | 0.0727 | 78.7666% |
| | Sharpening | 0.2020 | 0.1758 | 14.8955% |
| | Edge | 0.5570 | 0.5482 | 1.6023% |
| 0.4 | Blur | 0.1024 | 0.0727 | 40.7506% |
| | Sharpening | 0.1976 | 0.1758 | 12.3684% |
| | Edge | 0.5588 | 0.5482 | 1.9245% |
| 0.3 | Blur | 0.0707 | 0.0727 | 2.8118% |
| | Sharpening | 0.1782 | 0.1758 | 1.3286% |
| | Edge | 0.5592 | 0.5482 | 1.9991% |
| 0.2 | Blur | 0.0707 | 0.0727 | 2.8118% |
| | Sharpening | 0.1927 | 0.1758 | 9.5685% |
| | Edge | 0.5595 | 0.5482 | 2.0560% |
| 0.1 | Blur | 0.0707 | 0.0727 | 2.8118% |
| | Sharpening | 0.1757 | 0.1758 | 0.0734% |
| | Edge | 0.5595 | 0.5482 | 2.0584% |

By summarizing Table 1, the average loss ratios are listed in Table 2.
Table 2. Average loss ratios of filters

| Filter | MAE | MAE0 | Loss ratio |
|---|---|---|---|
| Blur | 0.1514 | 0.0727 | 109.8372% |
| Sharpening | 0.2100 | 0.1758 | 19.4525% |
| Edge | 0.5570 | 0.5482 | 1.5904% |
From Table 2, it is easy to recognize that the sharpening filter and the edge-detection filter obtain good results with small average loss ratios (19.4525% and 1.5904%, respectively), where edge detection is the best (1.5904%). This implies that the proposed reverse method is suitable for convolutional filters that focus on discovering pixel differences inside an image. However, this improvement is insignificant because sharpening filters keep only the most important features, which increases information loss. For instance, whereas the perfect edge-detection filter is {{−1, −1, −1}, {−1, 8, −1}, {−1, −1, −1}}, the best estimated edge-detection filters, whose average loss ratio is 1.5904%, obtained with learning rate γ = 0.9 for the red, green, and blue color channels are:
$$\text{Red}=\left[\begin{array}{ccc}-2.9976 & -3.6497 & -2.4164\\ -2.6011 & 22.1964 & -2.292\\ -2.974 & -2.6711 & -3.2228\end{array}\right]$$
$$\text{Green}=\left[\begin{array}{ccc}-2.7992 & -3.236 & -2.8844\\ -3.4417 & 24.4267 & -3.4824\\ -2.9609 & -2.4793 & -2.6858\end{array}\right]$$
$$\text{Blue}=\left[\begin{array}{ccc}-4.8567 & -5.7483 & -4.8785\\ -4.5762 & 33.0547 & -4.888\\ -4.3127 & -5.1539 & -4.0629\end{array}\right]$$
It is easy to recognize that the estimated filters relatively keep the proportions between the weights; for instance, the −1 : 8 ratio between the surrounding weights and the center weight is approximately preserved. However, the magnitude of the estimated filters is approximately three times larger than that of the perfect edge-detection filter {{−1, −1, −1}, {−1, 8, −1}, {−1, −1, −1}}.
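This observation can be checked numerically from the red-channel estimate above; the snippet below is a small worked check using only the values reported in this section, and it prints roughly 7.8 for the center-to-surround proportion and roughly 2.8 for the scale against the ideal center weight of 8.

```java
public final class RatioCheck {
    public static void main(String[] args) {
        // Estimated red-channel edge-detection filter reported above (row-major order).
        double[] red = {-2.9976, -3.6497, -2.4164,
                        -2.6011, 22.1964, -2.2920,
                        -2.9740, -2.6711, -3.2228};
        double center = red[4];
        double surroundSum = 0.0;
        for (int k = 0; k < red.length; k++) {
            if (k != 4) surroundSum += red[k];            // eight border weights
        }
        double meanSurround = surroundSum / 8.0;          // ~ -2.85
        // Proportion: the ideal edge filter has center : surrounding = 8 : -1.
        System.out.printf("center / |mean surround| = %.2f%n",
                          center / Math.abs(meanSurround));   // ~ 7.78
        // Magnitude: the ideal center weight is 8; the estimate is ~2.8 times larger.
        System.out.printf("center / 8 = %.2f%n", center / 8.0); // ~ 2.77
    }
}
```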