Weak convergence of a hybrid type method with errors for a maximal monotone mapping in Banach spaces
- Ying Liu^{1}
https://doi.org/10.1186/s13660-015-0772-7
© Liu 2015
- Received: 28 February 2015
- Accepted: 27 July 2015
- Published: 27 August 2015
Abstract
In this paper, we propose a hybrid type method which consists of a resolvent operator technique and a generalized projection onto a moving half-space for approximating a zero of a maximal monotone mapping in Banach spaces. The weak convergence of the iterative sequence generated by the algorithm is also proved. Our results extend and improve the recent ones announced by Zhang (Oper. Res. Lett. 40:564-567, 2012) and Wei and Zhou (Nonlinear Anal. 71:341-346, 2009).
Keywords
- maximal monotone mapping
- generalized projection
- resolvent technique
- normalized duality mapping
- weakly sequentially continuous
MSC
- 47H09
- 47H05
- 47H06
- 47J25
- 47J05
1 Introduction
Problem (1.1) plays an important role in optimization, since it is closely related to convex minimization problems and variational inequality problems. Many authors have constructed iterative algorithms to approximate a solution of Problem (1.1) in several settings (see [1–11] and the references therein).
Recently, in [9], Wei and Zhou proposed the following iterative algorithm.
Algorithm 1.1
We note that, in Algorithm 1.1, the next iterate is obtained by computing the generalized projection of \(x_{0}\) onto \(H_{n}\cap W_{n}\). If the set \(H_{n}\cap W_{n}\) were a specific half-space, this generalized projection would be easy to execute. But \(H_{n}\cap W_{n}\) need not be a half-space, even though both \(H_{n}\) and \(W_{n}\) are half-spaces. If \(H_{n}\cap W_{n}\) is a general closed convex set, then computing the projection, which minimizes a generalized distance, is not easy. This may seriously affect the efficiency of Algorithm 1.1.
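To see concretely why a single half-space is the easy case, recall that in a Hilbert space the metric projection onto a half-space has a closed form, whereas projecting onto a general intersection such as \(H_{n}\cap W_{n}\) does not. The following Python sketch is a Euclidean illustration only (the generalized projection used in Banach spaces in this paper is a different operator); the function name and data are chosen for illustration:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Metric projection of x onto the half-space {z : <a, z> <= b}.

    In R^n (a Hilbert space) this has the closed form
    P(x) = x - max(0, <a, x> - b) / ||a||^2 * a,
    which is why projecting onto a single half-space is cheap, while
    projecting onto an intersection of half-spaces generally is not.
    """
    excess = np.dot(a, x) - b
    if excess <= 0:          # x already lies in the half-space
        return x
    return x - (excess / np.dot(a, a)) * a

x = np.array([3.0, 4.0])
a = np.array([1.0, 0.0])     # half-space {z : z_1 <= 1}
print(project_halfspace(x, a, 1.0))   # [1. 4.]
```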
To remedy this defect, we construct a new iterative algorithm in this paper, drawing on the idea of [5].
In [5], Zhang proposed a modified proximal point algorithm with errors for approximating a solution of Problem (1.1) in Hilbert spaces. More precisely, he proposed Algorithm 1.2 and proved Theorems 1.1 and 1.2 as follows.
Algorithm 1.2
(i.e. Algorithm 2.1 of [5])
Step 0. Select an initial \(x_{0}\in\mathscr{H}\) (a Hilbert space) and set \(k=0\).
Theorem 1.1
(i.e. Theorem 2.1 of [5])
- (i)
\(\|e_{k}\|\leq\eta_{k}\|x_{k}-y_{k}\|\) for \(\eta_{k}\geq0\) with \(\sum_{k=0}^{\infty}\eta_{k}^{2}<+\infty\),
- (ii)
\(\{\beta_{k}\}_{k=0}^{+\infty}\subset[c,d]\) for some \(c,d\in(0,1)\),
- (iii)
\(0<\inf_{k\geq0}\rho_{k}\leq\sup_{k\geq0}\rho_{k}<2\),
Theorem 1.2
(i.e. Theorem 2.3 of [5])
We note that the set K is a half-space, and hence Algorithm 1.2 is easier to execute than Algorithm 1.1. However, since the metric projection depends essentially on the inner-product structure of Hilbert spaces, Algorithm 1.2 cannot be applied to Problem (1.1) in Banach spaces.
However, many important practical problems are naturally posed in Banach spaces. For example, the maximal monotone operator associated with an elliptic boundary value problem has a Sobolev space \(W^{m,p}(\Omega)\) as its natural domain of definition [12]. It is therefore meaningful to consider Problem (1.1) in Banach spaces. Motivated and inspired by Algorithms 1.1 and 1.2, the purpose of this paper is to construct a new iterative algorithm for approximating a solution of Problem (1.1) in Banach spaces. In this algorithm, following the idea of Algorithm 1.2, we replace the generalized projection onto \(H_{n}\cap W_{n}\) used in Algorithm 1.1 by a generalized projection onto a specific, explicitly constructible half-space. This remedies the defect of Algorithm 1.1 mentioned above.
2 Preliminaries
In the sequel, we use \(x_{n}\rightarrow x\) and \(x_{n}\rightharpoonup x\) to denote the strong convergence and weak convergence of the sequence \(\{x_{n}\}\) in E to x, respectively.
The duality mapping J from a smooth Banach space E into \(E^{*}\) is said to be weakly sequentially continuous if \(x_{n}\rightharpoonup x\) implies \(Jx_{n}\rightharpoonup Jx\); see [6] and the references therein.
Definition 2.1
([13])
- (i)
monotone if \(\langle x_{1}-x_{2},u_{1}-u_{2}\rangle\geq0\) for each \(x_{i}\in D(M)\) and \(u_{i}\in M(x_{i})\), \(i = 1,2\);
- (ii)
maximal monotone if M is monotone and its graph \(G(M)=\{(x,u):u\in Mx\}\) is not properly contained in the graph of any other monotone operator. It is well known that a monotone mapping M is maximal if and only if for \((x,u)\in E\times E^{*}\), \(\langle x-y,u-v\rangle\geq0\) for every \((y,v)\in G(M)\) implies \(u\in Mx\).
- (A1)
\((\|x\|-\|y\|)^{2}\leq\phi(y,x)\leq(\|x\|+\|y\|)^{2}\),
- (A2)
\(\phi(x,y)=\phi(x,z)+\phi(z,y)+2\langle x-z,Jz-Jy\rangle\),
- (A3)
\(\phi(x,y)=\langle x,Jx-Jy\rangle+\langle y-x,Jy\rangle\leq\|x\|\|Jx-Jy\|+\|y-x\|\|y\|\).
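In a Hilbert space, J is the identity and the Lyapunov functional reduces to \(\phi(x,y)=\|x-y\|^{2}\), so properties (A1) and (A2) can be checked numerically. The following sketch is illustrative only (all names are ours) and verifies both properties on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, y):
    # In a Hilbert space J is the identity, so the Lyapunov functional
    # phi(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2 reduces to ||x - y||^2.
    return np.dot(x - y, x - y)

for _ in range(100):
    x, y, z = rng.standard_normal((3, 5))
    lhs = phi(x, y)
    # property (A2): phi(x,y) = phi(x,z) + phi(z,y) + 2<x - z, Jz - Jy>
    rhs = phi(x, z) + phi(z, y) + 2 * np.dot(x - z, z - y)
    assert abs(lhs - rhs) < 1e-9
    # property (A1): (||x|| - ||y||)^2 <= phi(x, y) <= (||x|| + ||y||)^2
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    assert (nx - ny) ** 2 - 1e-9 <= lhs <= (nx + ny) ** 2 + 1e-9
print("(A1) and (A2) verified on random samples")
```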
Lemma 2.1
([7])
Lemma 2.2
([16])
Lemma 2.3
([16])
Let E be a uniformly convex and smooth Banach space. Let \(\{y_{n}\}\), \(\{z_{n}\}\) be two sequences of E. If \(\phi(y_{n},z_{n})\rightarrow0\) and either \(\{y_{n}\}\) or \(\{z_{n}\}\) is bounded, then \(y_{n}-z_{n}\rightarrow0\).
Lemma 2.4
([13])
Let E be a reflexive Banach space and λ be a positive number. If \(T:E\rightarrow2^{E^{*}}\) is a maximal monotone mapping, then \(R(J+\lambda T)=E^{*}\) and \((J+\lambda T)^{-1}:E^{*}\rightarrow E\) is a demi-continuous single-valued maximal monotone mapping.
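As a concrete instance of Lemma 2.4 in the Hilbert space \(E=\mathbb{R}\) (where J is the identity), take \(T=\partial|\cdot|\), a maximal monotone operator: the resolvent \((J+\lambda T)^{-1}\) is the well-known soft-thresholding map, single-valued and defined on all of \(\mathbb{R}\), as the lemma guarantees. A minimal sketch (the function name is ours):

```python
import numpy as np

def resolvent_abs(xstar, lam):
    """Resolvent (J + lam*T)^{-1} applied to xstar in the Hilbert space R,
    where J is the identity and T is the subdifferential of |x|.

    Solving x + lam * d|x| containing xstar gives the soft-thresholding
    formula x = sign(xstar) * max(|xstar| - lam, 0), a single-valued map
    defined on all of R.
    """
    return np.sign(xstar) * np.maximum(np.abs(xstar) - lam, 0.0)

print(resolvent_abs(np.array([-3.0, 0.5, 2.0]), 1.0))   # [-2.  0.  1.]
```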
Lemma 2.5
([9])
Lemma 2.6
([17])
Lemma 2.7
Proof
3 Main results
In this section, we construct a new iterative algorithm and prove two convergence theorems for two different iterative sequences generated by the new iterative algorithm for solving Problem (1.1) in Banach spaces.
Algorithm 3.1
Step 0. (Initiation) Arbitrarily select initial \(x_{0}\in E\) and set \(k=0\), where E is a reflexive, strictly convex, and smooth Banach space.
Step 3. Let \(k=k+1\) and return to Step 1.
Now we show the convergence of the iterative sequence generated by Algorithm 3.1 in the Banach space E.
Theorem 3.1
Proof
We split the proof into four steps.
Step 1. Show that \(\{x_{k}\}\) is bounded.
Step 2. Show that \(\{x_{k}\}\) and \(\{y_{k}\}\) have the same weak accumulation points.
Step 3. Show that each weak accumulation point of the sequence \(\{x_{k}\}\) is a solution of Problem (1.1).
Since \(\{x_{k}\}\) is bounded, let us suppose x̂ is a weak accumulation point of \(\{x_{k}\}\). Hence, we can extract a subsequence that weakly converges to x̂. Without loss of generality, let us suppose that \(x_{k}\rightharpoonup\hat{x}\) as \(k\rightarrow\infty\). Then from (3.13), we have \(y_{k}\rightharpoonup\hat{x}\) as \(k\rightarrow\infty\).
Step 4. Show that \(x_{k}\rightharpoonup\hat{x}\) as \(k\rightarrow\infty\) and \(\hat{x}=\lim_{k\rightarrow\infty}\Pi_{VI(E,M)}(x_{k})\).
Next, we show the convergence of the iterative sequence when \(\rho_{k}=0\).
Theorem 3.2
Let E be a uniformly convex, uniformly smooth Banach space whose duality mapping J is weakly sequentially continuous and \(M:E\rightarrow2^{E^{*}}\) be a maximal monotone mapping such that \(VI(E,M)\neq\emptyset\). Let \(\{x_{k}\}\) be the sequence generated by Algorithm 3.1 for \(\rho_{k}=0\).
If \(\{\lambda_{k}\}\), \(\{\beta_{k}\}\), and \(\{e_{k}\}\) satisfy \(\inf_{k\geq0}\lambda_{k}=\alpha_{1}>0\), \(0<\beta_{k}\leq 1\), \(\liminf_{k\rightarrow\infty}\beta_{k}>0\), and \(\lim_{k\rightarrow\infty}\|e_{k}\|=0\), then the iterative sequence \(\{x_{k}\}\) converges weakly to an element \(\hat{x}\in VI(E,M)\). Further, \(\hat{x}=\lim_{k\rightarrow\infty}\Pi_{VI(E,M)}(x_{k})\).
Proof
Remark 3.1
- (i)
When \(E=\mathscr{H}\) (a Hilbert space), Theorem 3.2 reduces to Theorem 2.3 of Zhang [5] (i.e. Theorem 1.2 of this paper). That is to say, Theorem 3.2 extends Theorem 2.3 of Zhang [5] from Hilbert spaces to more general Banach spaces. Furthermore, we see that the convergence point of \(\{x_{k}\}\) is \(\lim_{k\rightarrow\infty}\Pi_{VI(E,M)}(x_{k})\), which is more concrete than that of Theorem 2.3 of Zhang [5].
- (ii)
In Algorithm 3.1, the set \(C_{k}\) is a half-space, and hence it is easier to compute the generalized projection of the current iterate onto it than that onto a general closed convex set \(H_{n}\cap W_{n}\) or \(H_{n}\) constructed in [9, 18] to obtain the next iterate. Hence, Algorithm 3.1 improves Algorithm 1.1 and those algorithms in [18] from a numerical point of view.
In the following, we give a simple example to compare Algorithm 1.1 constructed in [9] with Algorithm 3.1 for \(\rho_{k}=0\).
Example 3.1
Let \(E=\mathbb{R}\) and let \(M:\mathbb{R}\rightarrow\mathbb{R}\) be defined by \(M(x)=x\). It is obvious that M is maximal monotone and \(VI(E,M)=\{0\}\neq\emptyset\).
Table 1 The numerical experiment result of Algorithm 1.1

| k | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | ⋯ |
|---|---|---|---|---|---|---|---|---|---|
| \(x_{k}\) | \(-\frac{1}{3}\) | \(-\frac{7}{30}\) | \(-\frac{8}{45}\) | \(-\frac{9}{135}\) | \(-\frac{89}{1{,}650}\) | \(-\frac{1{,}424}{32{,}175}\) | \(-\frac{127{,}448}{3{,}378{,}375}\) | \(-\frac{3{,}616{,}337}{114{,}864{,}750}\) | ⋯ |
Table 2 The numerical experiment result of Algorithm 3.1 for \(\rho_{k}=0\)

| k | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | ⋯ |
|---|---|---|---|---|---|---|---|---|---|
| \(x_{k}\) | \(-\frac{1}{3}\) | \(-\frac{2}{15}\) | \(-\frac{6}{105}\) | \(-\frac{8}{315}\) | \(-\frac{8}{693}\) | \(-\frac{16}{3{,}003}\) | \(-\frac{16}{6{,}435}\) | \(-\frac{128}{109{,}395}\) | ⋯ |
Remark 3.2
Comparing Table 1 with Table 2, we can intuitively see that the convergence speed of Algorithm 3.1 for \(\rho_{k}=0\) is faster than that of Algorithm 1.1 constructed in [9].
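For intuition, the convergence to \(0=VI(E,M)\) in Example 3.1 can be mimicked by a plain resolvent iteration for \(M(x)=x\), whose resolvent in \(\mathbb{R}\) (with J the identity) is \(x\mapsto x/(1+\lambda)\). The following sketch is illustrative only; it is not the exact update of Algorithm 1.1 or Algorithm 3.1, whose steps are not reproduced in this excerpt:

```python
def resolvent_iteration(x0, lam, steps):
    """Plain (unrelaxed, error-free) resolvent iteration for M(x) = x on R.

    The resolvent is (I + lam*M)^{-1} x = x / (1 + lam), so each step
    shrinks the iterate toward the unique zero of M at 0.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] / (1.0 + lam))
    return xs

xs = resolvent_iteration(-1.0 / 3.0, 1.0, 7)
print(xs[-1])   # (-1/3) * (1/2)^7 = -1/384
```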
Remark 3.3
In [19, 20], the authors proposed several different iterative algorithms for approximating zeros of m-accretive operators in Banach spaces. The nonexpansiveness of the resolvent operator of an m-accretive operator is employed in these algorithms. Since the resolvent operator of a maximal monotone operator is not nonexpansive in Banach spaces, these algorithms cannot be applied to Problem (1.1).
Remark 3.4
In [21], the authors established viscosity iterative algorithms for approximating a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality for an inverse-strongly monotone mapping in Hilbert spaces by using the nonexpansiveness of the metric projection operator. However, the metric projection operator is not nonexpansive in Banach spaces. Therefore, the algorithms of [21] cannot be applied to Problem (1.1) of this paper in Banach spaces.
Declarations
Acknowledgements
This research is financially supported by the National Natural Science Foundation of China (11401157). The author is grateful to the referees for their valuable comments and suggestions, which improved the contents of the article.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
- Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
- Eckstein, J, Bertsekas, DP: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293-318 (1992)
- He, BS, Liao, L, Yang, Z: A new approximate proximal point algorithm for maximal monotone operator. Sci. China Ser. A 46, 200-206 (2003)
- Yang, Z, He, BS: A relaxed approximate proximal point algorithm. Ann. Oper. Res. 133, 119-125 (2005)
- Zhang, QB: A modified proximal point algorithm with errors for approximating solution of the general variational inclusion. Oper. Res. Lett. 40, 564-567 (2012)
- Iiduka, H, Takahashi, W: Strong convergence studied by a hybrid type method for monotone operators in a Banach space. Nonlinear Anal. 68, 3679-3688 (2008)
- Alber, YI, Reich, S: An iterative method for solving a class of nonlinear operator equations in Banach spaces. Panam. Math. J. 4, 39-54 (1994)
- Kamimura, S, Takahashi, W: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13, 938-945 (2002)
- Wei, L, Zhou, HY: Strong convergence of projection scheme for zeros of maximal monotone operators. Nonlinear Anal. 71, 341-346 (2009)
- Solodov, MV, Svaiter, BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 87, 189-202 (2000)
- Yanes, CM, Xu, HK: Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. 64, 2400-2411 (2006)
- Mosco, U: Perturbation of variational inequalities. In: Nonlinear Functional Analysis. Proc. Sympos. Pure Math., vol. XVIII, Part 1, Chicago, IL, 1968, pp. 182-194. Am. Math. Soc., Providence (1970)
- Pascali, D: Nonlinear Mappings of Monotone Type. Sijthoff & Noordhoff, Alphen aan den Rijn (1978)
- Alber, YI: Metric and generalized projection operators in Banach spaces: properties and applications. In: Kartsatos, A (ed.) Theory and Applications of Nonlinear Operators of Monotonic and Accretive Type, pp. 15-50. Dekker, New York (1996)
- Alber, YI, Guerre-Delabriere, S: On the projection methods for fixed point problems. Analysis 21, 17-39 (2001)
- Matsushita, S, Takahashi, W: A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J. Approx. Theory 134, 257-266 (2005)
- Tan, KK, Xu, HK: Approximating fixed points of nonexpansive mapping by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301-308 (1993)
- Ceng, LC, Ansari, QH, Yao, JC: Hybrid proximal-type and hybrid shrinking projection algorithms for equilibrium problems, maximal monotone operators, and relatively nonexpansive mappings. Numer. Funct. Anal. Optim. 31, 763-797 (2010)
- Ceng, LC, Khan, AR, Ansari, QH, Yao, JC: Strong convergence of composite iterative schemes for zeros of m-accretive operators in Banach spaces. Nonlinear Anal. 70, 1830-1840 (2009)
- Ceng, LC, Ansari, QH, Schaible, S, Yao, JC: Hybrid viscosity approximation method for zeros of m-accretive operators in Banach spaces. Numer. Funct. Anal. Optim. 33, 142-165 (2012)
- Ceng, LC, Khan, AR, Ansari, QH, Yao, JC: Viscosity approximation methods for strongly positive and monotone operators. Fixed Point Theory 10, 35-71 (2009)