An inertial-type method for solving image restoration problems

We first establish weak convergence results regarding an inertial Krasnosel’skiĭ-Mann iterative method for approximating common fixed points of countable families of nonexpansive mappings in real Hilbert spaces with no extra assumptions on the considered countable families of nonexpansive mappings. The method of proof and the imposed conditions on the iterative parameters are different from those already available in the literature. We then present some applications to the Douglas–Rachford splitting method and image restoration problems, and compare the performance of our method with that of other popular inertial Krasnosel’skiĭ-Mann methods which can be found in the literature.


Introduction
Many results regarding the approximation of fixed points of nonexpansive mappings have been established in the literature (see, for example, Bauschke and Combettes 2011; Bauschke et al. 2011; Berinde 2007; Cegielski 2012; Krasnoselskii 1955; Mann 1953; Chang et al. 2002 and the references therein). Recently, several iterative methods for constructing common fixed points of families of nonexpansive mappings have been proposed and studied. For instance, the results in Bauschke (1996), Kimura et al. (2005), O'Hara et al. (2003), Shimizu and Takahashi (1997), Shioji and Takahashi (1997) concern approximating common fixed points of finite families of nonexpansive mappings, while those in Aoyama et al. (2007), Cho et al. (2008), Klin-eam and Suantai (2010), Nilsrakoo and Saejung (2009a), Nilsrakoo and Saejung (2009b), Sahu et al. (2013) aim at finding common fixed points of countable families of such mappings. Inertial Krasnosel'skiĭ-Mann iterations have been proposed in order to improve the convergence speed of the already available methods for finding fixed points of nonexpansive mappings. Some of these results can be found, for example, in Bot et al. (2015), Dong (2020), Dong et al. (2018), Dong et al. (2019), Dong et al. (2019), Maingé (2008), Vong and Liu (2018). Numerical examples concerning image reconstruction and signal processing have shown that, through an optimal choice of the inertial and iterative parameters, inertial Krasnosel'skiĭ-Mann iterative methods outperform the classical Krasnosel'skiĭ-Mann iterative methods (see, for instance, Bot et al. 2015; Dong 2020). One of the major contributions of these results is that they indicate how to weaken and relax the conditions imposed on the inertial term. More recent results in this direction can be found in Dong (2020).

One of the limitations of the inertial Krasnosel'skiĭ-Mann iterative methods in Bot et al. (2015), Dong (2020), Dong et al. (2018), Dong et al. (2019), Dong et al. (2019), Maingé (2008), Vong and Liu (2018) is that some strong assumptions are imposed on the inertial factor of the proposed methods. In addition, these inertial Krasnosel'skiĭ-Mann methods are only considered for a single nonexpansive mapping. As far as we know, no inertial-type Krasnosel'skiĭ-Mann iterative method has been proposed and studied for countable families of nonexpansive mappings in such a way that results pertaining to it imply the known results concerning inertial Krasnosel'skiĭ-Mann methods as corollaries. This leads us to ask the following question:

Question: Can we propose a new and useful inertial-type Krasnosel'skiĭ-Mann iterative method for countable families of nonexpansive mappings in Hilbert spaces?

Our aim in this paper is to give an affirmative answer to this question. We now present a brief summary of our answer:

• We propose an inertial-type Krasnosel'skiĭ-Mann iterative method for approximating common fixed points of countable families of nonexpansive mappings in Hilbert spaces with new conditions on the iterative parameters.
• We establish a weak convergence theorem for our proposed method under weak conditions on the iterative parameters.
• We give some applications of our results to the inertial Douglas–Rachford splitting method.
• We present numerical illustrations of our proposed method and compare it with some important and recent inertial Krasnosel'skiĭ-Mann iterations which are available in the literature. Our numerical experiments show that our method has a faster convergence speed than these recent inertial Krasnosel'skiĭ-Mann iterations.
The organization of the paper is as follows. In Sect. 2, we present the relationship of our results with existing ones in the literature and highlight the contribution of this paper. In Sect. 3, we present several results which will be needed in our convergence analysis. The convergence analysis of our inertial-type Krasnosel'skiĭ-Mann iterative method for countable families of nonexpansive mappings is presented in Sect. 4. We provide some applications of our results to solving monotone inclusion problems and to the Douglas–Rachford operator splitting method in Sect. 5. Section 6 contains numerical implementations of our theoretical analysis. Concluding remarks are given in Sect. 7.
Recently, convergence results for a combination of the Krasnosel'skiĭ-Mann iterative method (2.1) with an inertial extrapolation step (that is, inertial Krasnosel'skiĭ-Mann iterations) have been considered in the literature. It has been shown that the addition of an inertial extrapolation step to existing optimization methods does increase their convergence speed (see, for instance, Alvarez and Attouch 2001; Attouch et al. 2000, 2014; Attouch and Czarnecki 2002; Attouch and Peypouquet 2016; Beck and Teboulle 2009; Bot and Csetnek 2016a, 2016b; Chen et al. 2015; Lorenz and Pock 2015; Maingé 2008; Ochs et al. 2015; Polyak 1964). In particular, the inertial Krasnosel'skiĭ-Mann iteration is of the form
$$y_n = x_n + \theta_n (x_n - x_{n-1}), \qquad x_{n+1} = (1 - \alpha_n) y_n + \alpha_n T y_n, \tag{2.2}$$
where θ_n ∈ [0, 1) is the inertial factor and α_n ∈ (0, 1). Weak convergence results for (2.2) have been established in Bot et al. (2015), Dong (2020), Dong et al. (2018), Dong et al. (2019), Dong et al. (2019), Maingé (2008), Vong and Liu (2018). Several conditions have been imposed on {θ_n} in order to obtain weak convergence of {x_n} to a fixed point of T. For instance, following the style of (Alvarez and Attouch 2001, Theorem 2.1), several papers imposed on {θ_n} the summability condition
$$\sum_{n=1}^{\infty} \theta_n \|x_n - x_{n-1}\|^2 < \infty \tag{2.3}$$
in order to obtain the convergence of (2.2). Although the summability condition (2.3) can be enforced in practice by using a suitable on-line rule, it involves the iterates {x_n} and {x_{n−1}}, which are a priori unknown. Thus, it is desirable to give alternative conditions in which knowledge of the iterates is not required.
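To make the update concrete, the inertial step (2.2) can be sketched in a few lines of Python. The mapping and the constant parameter choices θ_n ≡ 0.3 and α_n ≡ 0.5 below are illustrative assumptions for this sketch; they are not the parameter conditions analyzed in this paper.

```python
import numpy as np

def inertial_km(T, x0, x1, theta=0.3, alpha=0.5, n_iter=200):
    """Inertial Krasnosel'skii-Mann iteration (2.2):
    y_n = x_n + theta*(x_n - x_{n-1});  x_{n+1} = (1 - alpha)*y_n + alpha*T(y_n)."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for _ in range(n_iter):
        y = x + theta * (x - x_prev)                    # inertial extrapolation
        x_prev, x = x, (1 - alpha) * y + alpha * T(y)   # relaxed KM step
    return x

# Example: T is the (nonexpansive) metric projection onto the closed unit ball.
T = lambda z: z / max(1.0, np.linalg.norm(z))
x_star = inertial_km(T, np.array([5.0, 0.0]), np.array([4.0, 0.0]))
# x_star lies in the unit ball, hence is a fixed point of T.
```

The fixed-point set of this projection is the whole unit ball, so any limit inside the ball is acceptable; the point of the sketch is only the shape of the two-step update.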
Summing up, we highlight our contributions in this paper as follows.
• Clearly, conditions (2.4) and (2.5) are very complicated and restrictive. In particular, it is difficult to determine the upper bound θ of the inertial sequence {θ_n}. To circumvent these difficulties, we provide new conditions on the inertial sequence {θ_n} and the coefficients {α_n} in (4.1).
In contrast to the above conditions, our proposed conditions in (4.1) are much more relaxed. For instance, in Propositions 4.13–4.15, we provide several simple ways of choosing the inertial sequence {θ_n} and the coefficients {α_n} so that our proposed conditions in (4.1) are satisfied.
• Our proposed algorithm does not involve the on-line rule (2.3). Instead, we propose another way of choosing the inertial parameters which is different from the one used in (Alvarez and Attouch 2001, Theorem 2.1).
• Furthermore, in the case where θ_n = 0 for all n ≥ 1, the authors of Bot and Meier (2021) imposed some strong assumptions on the countable families of nonexpansive mappings, which are difficult to satisfy in practice (see condition (iii) of (Bot and Meier 2021, Theorem 2.1)). Our method (even with θ_n not necessarily equal to 0) does not require such strong and restrictive conditions on the countable families of nonexpansive mappings. Moreover, to the best of our knowledge, no inertial-type Krasnosel'skiĭ-Mann iterative method has so far been proposed and studied for countable families of nonexpansive mappings. Thus, the techniques employed in this paper are different from the ones which have up to now been used in the literature.
• Our results are applied to solving image restoration problems.

Preliminaries
In this section, we recall and present several results which will be needed in our convergence analysis.
Lemma 3.1 (Opial 1967). Let C be a nonempty subset of H and let {w_n} be a sequence in H that satisfies the following two conditions: (i) lim_{n→∞} ‖w_n − y‖ exists for every y ∈ C; (ii) every weak subsequential limit point of {w_n} belongs to C.
Then, {w n } converges weakly to a point in C.
Definition 3.2 A mapping T is said to be demiclosed if whenever a sequence {w n } weakly converges to y and the sequence {T w n } strongly converges to z, it follows that T (y) = z.
The next result is well-known (Browder 1968).
Lemma 3.3 Let C ⊂ H be nonempty, closed and convex, and let T : C → C be a nonexpansive mapping. Then I − T is demiclosed at 0.
We now specialize a result in Bruck (1973). Then we have, for all n ≥ 1 (Attouch and Cabot 2020), an estimate which implies that the sequence {x_n} converges if and only if condition (3.1) holds. Therefore, throughout this paper, we assume (3.1). Next, we define the sequence {t_i} in ℝ by (3.2), with the usual convention for empty products.

Remark 3.5 (See also Attouch and Cabot 2020.) In view of assumption (3.1), the sequence {t_i} given by (3.2) is well-defined. In the following proposition, we provide a criterion for ensuring assumption (3.1).

The corresponding finite sum expression {t_{i,n}} of {t_i} is defined in (3.4). Similarly to Remark 3.5, we see that {t_{i,n}} is well-defined.

Lemma 3.8 (Attouch and Cabot 2020, Lemma B.1). Let {a_n}, {θ_n} and {b_n} be sequences of real numbers satisfying a_{n+1} ≤ θ_n a_n + b_n for every n ≥ 1.
(a) For every n ≥ 1, we have the stated estimate, where {t_{i,n}} is as defined in (3.4). (b) Under (3.1), assume the stated condition on the sequence {t_i}, where [t]_+ := max{t, 0} for any t ∈ ℝ.

Lemma 3.9 (Attouch and Cabot 2020, Lemma 2.1). Let {x_n} be a sequence in H and let {θ_n} be a sequence of real numbers. Given a point z ∈ H, define the sequence as stated there.

Main results
We begin by listing the assumptions that we use in our convergence analysis.
We now introduce our new inertial-type method.
Algorithm 4.4 Choose sequences {θ_n} and {α_n} such that the conditions from Assumption 4.1 hold. For arbitrary x_0, x_1 ∈ H, let the sequence {x_n} be generated by the corresponding iteration.

We begin our analysis with the following lemmas.
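The displayed update of Algorithm 4.4 is not reproduced in this excerpt. Purely as an illustration, one plausible realization (our assumption, not necessarily the paper's exact scheme) applies the inertial Krasnosel'skiĭ-Mann step to convex combinations T_n = Σ_k β_n^k S_k of the family {S_k}, in the spirit of the weights of Assumption 4.1. A minimal Python sketch with illustrative constant parameters:

```python
import numpy as np

def inertial_km_family(S, betas, thetas, alphas, x0, x1):
    """Hypothetical sketch: at step n, apply the convex combination
    T_n = sum_k betas[n][k] * S_k of the family inside an inertial KM update.
    The weights and parameter choices here are illustrative assumptions."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for theta, alpha, beta in zip(thetas, alphas, betas):
        y = x + theta * (x - x_prev)                      # inertial step
        Ty = sum(b * Sk(y) for b, Sk in zip(beta, S))     # T_n applied to y
        x_prev, x = x, (1 - alpha) * y + alpha * Ty       # relaxed KM step
    return x

# Two nonexpansive maps on the real line whose only common fixed point is 0.
S = [lambda z: 0.5 * z, lambda z: np.clip(z, -1.0, 1.0)]
x_hat = inertial_km_family(S, betas=[[0.5, 0.5]] * 100,
                           thetas=[0.2] * 100, alphas=[0.5] * 100,
                           x0=np.array([5.0]), x1=np.array([4.0]))
```

Here the iterates approach the common fixed point 0 of the two maps; the example only illustrates the shape of a family-based update.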
Lemma 4.5 Let Assumption 4.1 hold and let {x_n} be a sequence generated by Algorithm 4.4. Then the following inequality holds. Then, from (4.3) and Lemma 3.10, we obtain the stated estimate; using (4.4) in Lemma 3.9 and noting the above, the claim follows.

Lemma 4.6 Let assumption (3.1) and Assumption 4.1 hold, and let {x_n} be a sequence generated by Algorithm 4.4. Then the following inequality holds, where t_i is as defined in (3.2).
Lemma 4.7 Let assumption (3.1) and Assumption 4.1 hold, and let {x_n} be a sequence generated by Algorithm 4.4. Then the following limit holds.

Proof We may assume, without any loss of generality, that inequality (4.1) holds for all n ≥ 1; that is, (4.9) holds, where s_n = (1 − α_n)/α_n. This implies (4.10). Also, using (4.9), we obtain (4.11). Combining (4.10) and (4.11), then replacing n with i and applying Lemma 4.6, we obtain (4.12). Taking the limit as n → ∞ in (4.12) yields the desired conclusion.
Lemma 4.8 Let assumption (3.1) and Assumption 4.1 hold, and let {x_n} be a sequence generated by Algorithm 4.4. Then we have the following assertions:

Proof (a) From Lemma 4.5, it follows that {x_n} is bounded, as asserted. (b) From (4.3) and Lemma 4.7, we obtain the assertion.

(c) Applying Lemma 3.8(a) to (4.13), we obtain an estimate which, by Lemma 4.7, implies the desired limit. Since t_{i+1,n} = 0 for i ≥ n, letting n tend to ∞ and applying the monotone convergence theorem, we obtain the required bound. Using condition (a)(ii), we see that for each k ≥ 1 there exists n_0 ≥ 1 such that β^k_{n_0} > 0. Hence F(T_{n_0}) ⊂ F(S_k) for all k ≥ 1 by Lemma 3.4. Using condition (a)(iii), and considering l, m ≥ 1 with l > m, we pass to the limit and deduce that Σ_{k=1}^∞ β_k = 1. Next, we define a mapping T and pass to the limit. As a matter of fact, it follows directly from (4.17) that the sequence {T_n x} is Cauchy for each x ∈ H: indeed, for k, l ≥ 1 with k > l, the corresponding estimate shows that {T_n x} is Cauchy and hence converges strongly to a point T x in H. Furthermore, using Lemma 4.8(b) and (c), we conclude that {x_n} converges weakly to a fixed point of S.
We give some comments about our contributions in this paper as follows.
Remark 4.11 (a) In condition (iii) of (Bot and Meier 2021, Theorem 2.1), some strong assumptions, which might be difficult to check in applications, are imposed on the countable families of nonexpansive mappings in order to obtain convergence. In our convergence analysis of Theorem 4.9, we do not impose any such extra conditions on the countable families of nonexpansive mappings in order to obtain weak convergence. (b) Our methods of proof in Lemmas 4.5–4.8 and Theorem 4.9, and the conditions imposed on the iterative parameters in Assumption 4.1, are different from the methods of proof and the conditions imposed in Bot et al. (2015), Dong (2020), Dong et al. (2018), Dong et al. (2019), Dong et al. (2019), Maingé (2008), Vong and Liu (2018).
Remark 4.12 It is worth noting that many practical choices of the inertial sequence {θ_n} and the coefficients {α_n} satisfy (4.1). In fact, in the following propositions, we present several ways of choosing {θ_n} and {α_n} so that (4.1) is satisfied.
The proofs of Propositions 4.13–4.15 can be derived by slightly modifying the proofs in Attouch and Cabot (2020).

Applications
In this section, we use Theorem 4.9 in order to solve monotone inclusion problems.
Recall that an operator A : H → 2^H is said to be monotone if ⟨u − v, x − y⟩ ≥ 0 whenever u ∈ A(x) and v ∈ A(y), and maximal monotone if it is monotone and its graph is not properly contained in the graph of any other monotone operator.
The resolvent operator J_{γA} associated with A and γ > 0 is the mapping J_{γA} : H → 2^H defined by
$$J_{\gamma A} := (I + \gamma A)^{-1}. \tag{5.1}$$
The resolvent operator J_{γA} is single-valued, nonexpansive and everywhere defined on H whenever A is maximal monotone (see, for example, Bauschke and Combettes 2011). Moreover, F(J_{γA}) = A^{−1}(0). Let us consider the following monotone inclusion problem: find x ∈ H such that 0 ∈ A(x), where A is a maximal monotone operator defined on H.
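As a concrete illustration (our example, not one taken from the paper): when A = ∂f is the subdifferential of f = ‖·‖₁, the resolvent (5.1) is the proximity operator of γ‖·‖₁, i.e. componentwise soft-thresholding:

```python
import numpy as np

def resolvent_l1(x, gamma):
    """Resolvent J_{gamma A} = (I + gamma A)^{-1} for A = subdifferential of
    ||.||_1: componentwise soft-thresholding (shrinkage) with threshold gamma."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

# Entries exceeding gamma in magnitude shrink toward 0; the rest vanish:
z = resolvent_l1(np.array([3.0, -0.5, 1.0]), gamma=1.0)  # 3 -> 2, -0.5 -> 0, 1 -> 0
```

The map is nonexpansive, as the theory above guarantees for resolvents of maximal monotone operators.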
Corollary 5.1 Let A : H → 2^H be a maximal monotone operator such that A^{−1}(0) ≠ ∅. Suppose that {x_n} is generated by the corresponding iteration with x_0, x_1 ∈ H. Assume that there exists ε ∈ (0, 1) such that, for n large enough, condition (5.4) holds. Then {x_n} converges weakly to a point in A^{−1}(0).

Proof Following similar arguments to those used above (from Lemma 4.5 to Theorem 4.9) and noting that A^{−1}(0) = F(J_{γA}), we obtain the desired conclusion.
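For illustration, Corollary 5.1 can be read as an inertial proximal point method: each iterate applies the resolvent J_{γA} inside an inertial Krasnosel'skiĭ-Mann step. The sketch below uses A = ∂‖·‖₁ (so A^{−1}(0) = {0} and the resolvent is soft-thresholding) with constant illustrative parameters; the exact update and parameter conditions of the corollary are not reproduced here.

```python
import numpy as np

def soft(x, g):   # resolvent of gamma * subdifferential of ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - g, 0.0)

def inertial_prox_point(x0, x1, gamma=1.0, theta=0.2, alpha=0.5, n_iter=50):
    """Illustrative inertial proximal point iteration for A = subdiff ||.||_1,
    whose zero set A^{-1}(0) is {0}.  The constant theta and alpha are
    assumptions of this sketch, not the conditions of Corollary 5.1."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for _ in range(n_iter):
        y = x + theta * (x - x_prev)                          # inertial step
        x_prev, x = x, (1 - alpha) * y + alpha * soft(y, gamma)  # resolvent step
    return x

z = inertial_prox_point(np.array([4.0, -3.0]), np.array([3.5, -2.5]))
# z approaches 0, the unique zero of the operator.
```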
Next, we apply our results to the Douglas–Rachford splitting method for solving the following monotone inclusion problem involving two operators: find x ∈ H such that 0 ∈ Ax + Bx, where A, B : H → 2^H are maximal monotone operators on a Hilbert space H (see Douglas and Rachford 1956; Lions and Mercier 1979). Define R_{γA} := 2J_{γA} − I and R_{γB} := 2J_{γB} − I for the corresponding reflections (also called Cayley operators), and observe that these reflections are nonexpansive mappings.
One can show that 0 ∈ Ax + Bx if and only if x = J_{γB}(y), where y is a fixed point of the nonexpansive mapping R_{γA}R_{γB}. The Douglas–Rachford splitting method for solving problem (5.5) is given by (5.6). Modifying (5.6), we obtain the following weak convergence result.
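The classical Douglas–Rachford step y_{n+1} = ½(y_n + R_{γA}(R_{γB}(y_n))), followed by x = J_{γB}(y), can be sketched on a toy problem of our choosing (an assumption of this sketch, not the paper's example): A = ∂‖·‖₁ and B(x) = x − b, whose resolvents are soft-thresholding and an averaging map, and whose sum has the unique zero x* = soft(b, 1).

```python
import numpy as np

# Toy instance of 0 in Ax + Bx:  A = subdiff ||.||_1,  B(x) = x - b,
# so the unique zero is x* = soft(b, 1), the l1-prox of b.
b = np.array([3.0, 0.5])
gamma = 1.0

J_A = lambda z: np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)  # soft-threshold
J_B = lambda z: (z + gamma * b) / (1.0 + gamma)                  # (I + gamma*B)^{-1}
R_A = lambda z: 2.0 * J_A(z) - z    # reflection (Cayley operator) of A
R_B = lambda z: 2.0 * J_B(z) - z    # reflection (Cayley operator) of B

y = np.zeros(2)
for _ in range(200):
    y = 0.5 * (y + R_A(R_B(y)))     # classical Douglas-Rachford step
x = J_B(y)                          # recover the solution x = J_{gamma B}(y)
```

One can check by hand that y* with J_B(y*) = x* is a fixed point of ½(I + R_A R_B), in line with the equivalence stated above.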
Suppose that the sequence {x_n} is generated by (5.7) with x_0, x_1 ∈ H. Then {x_n} converges weakly to some point y ∈ H such that J_{γB} y ∈ (A + B)^{−1}(0); that is, x := J_{γB} y is a solution to the monotone inclusion problem (5.5).

Proof Define
Then T is nonexpansive. Applying Theorem 4.9, we reach the desired conclusion.

Numerical experiments
In this section, we focus on the numerical implementation of our proposed algorithm in some of the applications presented in Sect. 5. We also compare its performance with that of some algorithms from the literature.
The codes are written in MATLAB 2016(b) and run on a personal computer with an Intel(R) Core(TM) i5-2600 CPU at 2.30 GHz and 8.00 GB RAM.
Example 6.1 In this example, we consider Problem (5.5) and use Algorithm (5.7) for the numerical experiments.
As for γ_n, we adopt the method of Dong and Fischer (2010) and Dong (2020) for updating γ_n in the following manner. We assume, in addition, that A is continuous. At the n-th iterative step, with a known γ_n, we first calculate the indicated quantity and then update γ_n accordingly, with c = 100, where I is the m × m identity matrix and M_{i,j} = 0 otherwise. Now, let A = A_1 + A_2 and consider the following complementarity problem, which is also discussed in Dong (2020, Section 5), where ∂δ_Q is the subdifferential of the indicator function of Q. Since e_1 is the first column of the identity matrix of order N corresponding to the matrix A, we see that x* = e_1 is the unique solution to the complementarity problem, and we set the parameters accordingly. The results of the numerical experiments are presented in Figs. 1 and 2 and in Tables 1 and 2.
Example 6.2 (Image restoration problem). Consider the following problem, where γ > 0 is a regularization parameter, x ∈ ℝ^N is the original image to be recovered, b ∈ ℝ^M is the observed image, and D : ℝ^N → ℝ^M is the blurring operator. In our numerical computations, we choose the regularization parameter γ = 5 × 10^{−2}. Furthermore, we consider the 256 × 256 Cameraman image and the 128 × 128 magnetic resonance imaging (MRI) image from the MATLAB Image Processing Toolbox. Moreover, we use a Gaussian blur of size 9 × 9 with standard deviation σ = 4 to create the blurred and noisy (observed) image. We measure the quality of the restored image by the signal-to-noise ratio (SNR), where x is the original image and x* is the restored image; the larger the SNR, the better the quality of the restored image. We also choose the initial values accordingly. Our results are reported in Figs. 3 and 4 and in Tables 3 and 4: Figures 3 and 4 show the original, blurred and restored images, while Tables 3 and 4 show the CPU time and the SNR values for each algorithm.
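The SNR formula itself does not survive in this excerpt; a common definition, which we assume here, is SNR = 20 log₁₀(‖x‖₂ / ‖x − x*‖₂) in decibels. A small Python helper with a toy sanity check:

```python
import numpy as np

def snr_db(x, x_restored):
    """Signal-to-noise ratio in dB.  The formula assumed here,
    20*log10(||x|| / ||x - x*||), is a common convention; the paper's
    exact display is not reproduced in this excerpt."""
    return 20.0 * np.log10(np.linalg.norm(x) /
                           np.linalg.norm(x - x_restored))

x = np.ones(100)       # toy stand-in for the original image
restored = x + 0.01    # restoration with uniform error 0.01
# ratio of norms is 10 / 0.1 = 100, hence SNR = 40 dB
```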

Final remarks
In this paper, we have presented a modification of the inertial Krasnosel'skiĭ-Mann iterative method for approximating a common fixed point of a countable family of nonexpansive mappings under new conditions on the iterative parameters. Using this method, we have obtained weak convergence results in Hilbert spaces. We have applied our results to monotone inclusion problems and to the Douglas–Rachford splitting method for finding a zero of the sum of two maximal monotone operators. We have also presented several numerical implementations which illustrate our theoretical analysis.

Assumption 4.1
(a) {β^k_n} is a family of non-negative numbers with indices n, k ∈ ℕ, where k ≤ n, and the following conditions are satisfied: