Iterative algorithms for finding the zeroes of sums of operators
© Shi et al.; licensee Springer. 2014
Received: 25 February 2014
Accepted: 5 August 2014
Published: 8 September 2014
Let , be real Hilbert spaces, be a nonempty closed convex set, and . Let , be two bounded linear operators. We consider the problem to find such that (). Recently, Eckstein and Svaiter presented some splitting methods for finding a zero of the sum of monotone operator A and B. However, the algorithms are largely dependent on the maximal monotonicity of A and B. In this paper, we describe some algorithms for finding a zero of the sum of A and B which ignore the conditions of the maximal monotonicity of A and B.
1 Introduction and preliminaries
For convenience, we denote the problem by .
- (i)The Douglas/Peaceman-Rachford family, whose iteration is given by
- (ii)The double backward splitting method, with iteration given by
- (iii)The forward-backward splitting method, with iteration given by
where a sequence of regularization parameters.
Convergence results for the scheme (i), in the case in which is contained in a compact subset of , can be found in ; the convergence analysis of the double backward scheme given by (ii), which can be found in  and ; the standard convergence analysis for (iii) one can see . However, the convergence results are largely dependent on the maximal monotonicity of A and B. It is therefore the aim of this paper to construct new algorithms for problem which ignore the conditions of the maximal monotonicity of A and B.
The paper is organized as follows. In Section 2, we define the concept of the minimal norm solution of the problem (1.1). Using Tychonov regularization, we obtain a net of solutions for some minimization problem approximating such minimal norm solution (see Theorem 2.4). In Section 3, we introduce an algorithm and prove the strong convergence of the algorithm, more importantly, its limit is the minimum-norm solution of the problem (1.1) (see Theorem 3.2). In Section 4, we introduce KM-CQ-like iterative algorithm which converge strongly to a solution of the problem (1.1) (see Theorem 4.3).
Throughout the rest of this paper, I denotes the identity operator on Hilbert space H, the set of the fixed points of an operator T and ∇f the gradient of the functional . An operator T on a Hilbert space H is nonexpansive if, for each x and y in H, . T is said to be averaged, if there exist and a nonexpansive operator N such that .
We now collect some elementary facts which will be used in the proofs of our main results.
Let X be a Banach space, C a closed convex subset of X, and a nonexpansive mapping with . If is a sequence in C weakly converging to x and if converges strongly to y, then .
Lemma 1.2 
Lemma 1.3 
Let , be bounded sequences in a Banach space and let be a sequence in which satisfies the following condition: . Suppose that and , then .
Lemma 1.4 
Moreover, if f is, in addition, strictly convex and coercive, then the minimization problem has a unique solution.
Lemma 1.5 
Let A and B be averaged operators and suppose that is nonempty. Then .
2 The minimum-norm solution of the problem
In this section, we propose the concept of the minimal norm solution of (1.1). Then, using Tychonov regularization, we obtain the minimal norm solution by a net of solution for some minimization problem.
and assume consistency of . Hence Γ is closed, convex, and nonempty.
that is to say, , .
Define by , . Then G has the matrix form , and , .
where is the regularization parameter. Denote by the unique solution of (2.2).
Definition 2.2 An element is said to be the minimal norm solution of SEP (1.1) if .
The following proposition collects some useful properties of the unique solution of (2.2).
is decreasing for .
defines a continuous curve from to .
It follows that . Thus is decreasing for .
Hence, is a continuous curve from to . □
Theorem 2.4 Let be the unique solution of (2.2). Then converges strongly to the minimum-norm solution of (1.1) with .
It follows that for all . Thus is a bounded net in .
All we need to prove is that for any sequence such that , contains a subsequence converging strongly to . For convenience, we set .
Moreover, note that converges weakly to a point , thus converges weakly to . It follows that , i.e. .
Finally, we prove that and this finishes the proof.
This shows that is also a point in Γ with minimum-norm. By the uniqueness of minimum-norm element, we get . □
Finally, we will introduce another method to get the minimum-norm solution of the problem .
(i.e. T is nonexpansive) and averaged;
if and only if x is a solution of the variational inequality , .
If , it is obviously that . Conversely, assume that , we have , hence then , we get . We have .
Theorem 2.7 Let be given as (2.4). Then converges strongly to the minimum-norm solution of the problem (1.1) when .
Hence is bounded.
and applying Lemma 1.1, we obtain .
Hence, if converges weakly to , then converges strongly to . That is to say is relatively norm compact as .
It turns out that . Consequently, each cluster point of is equals . Thus the minimum-norm solution of the problem . □
3 Iterative algorithm for the minimum-norm solution of the problem
In this section, we introduce the following algorithm and prove the strong convergence of the algorithm, more importantly, its limit is the minimum-norm solution of the problem .
Now, we prove the strong convergence of the iterative algorithm.
Theorem 3.2 The sequence generated by algorithm (3.1) converges strongly to the minimum-norm solution of the problem (1.1).
where , by Lemma 2.5 , it is easy to see that is a contraction with contractive constant . Algorithm (3.1) can be written as .
It follows that . So is bounded.
Next we prove that .
By the demiclosedness principle ensures that each weak limit point of is a fixed point of the nonexpansive mapping , that is, a point of the solution set Γ of SEP (1.1).
Finally, we will prove that .
Note that , it follows that .
Take a subsequence of such that .
Since, , , we know that , by Lemma 1.2, we conclude that . This completes the proof. □
4 KM-CQ-like iterative algorithm for the problem
In this section, we establish a KM-CQ-like algorithm converges strongly to a solution of the problem .
Lemma 4.2 If , then for any x we have , where β and V are the same in Lemma 2.5(1).
Theorem 4.3 The sequence generated by algorithm (4.1) converges strongly to a solution of the problem .
Since is bounded, we see that , , and are also bounded.
Furthermore, is bounded, there exists a subsequence of which converges weakly to a point . Without loss of generality, we may assume that converges weakly to . Since , using the demiclosedness principle we know that .
Similar to the proof of Theorem 4.3, we can get the result that the following iterative algorithm converges strongly to a solution of the problem also. Since the proof is similar to Theorem 4.3, we omit it.
This research was supported by NSFC Grants No:11071279; No:11226125; No:11301379.
- Eckstein J, Svaiter BF: A family of projective splitting methods for the sum of two maximal monotone operators. Math. Program. 2008, 111: 173-1199.MathSciNetView ArticleMATHGoogle Scholar
- Eckstein J, Bertsekas D: On the Douglas-Ratford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55: 293-318. 10.1007/BF01581204MathSciNetView ArticleMATHGoogle Scholar
- Lions PL: Une méthode itérative de résolution d’une inéquation variationelle. Isr. J. Math. 1978, 31: 204-208. 10.1007/BF02760552View ArticleMATHGoogle Scholar
- Passty GB: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72: 383-390. 10.1016/0022-247X(79)90234-8MathSciNetView ArticleMATHGoogle Scholar
- Tseng P: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38: 431-446. 10.1137/S0363012998338806MathSciNetView ArticleMATHGoogle Scholar
- Geobel K, Kirk WA Cambridge Studies in Advanced Mathematics 28. In Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.View ArticleGoogle Scholar
- Geobel K, Reich S: Uniform Convexity, Nonexpansive Mappings, and Hyperbolic Geometry. Dekker, New York; 1984.Google Scholar
- Aoyama K, Kimura Y, Takahashi W, Toyoda M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal., Theory Methods Appl. 2007,67(8):2350-2360. 10.1016/j.na.2006.08.032MathSciNetView ArticleMATHGoogle Scholar
- Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach space. Fixed Point Theory Appl. 2005, 1: 103-123.MATHGoogle Scholar
- Engl HW, Hanke M, Neubauer A Mathematics and Its Applications 375. In Regularization of Inverse Problems. Kluwer Academic, Dordrecht; 1996.View ArticleGoogle Scholar
- Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004,20(1):103-120. 10.1088/0266-5611/20/1/006MathSciNetView ArticleMATHGoogle Scholar
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.