Simple form of a projection set in hybrid iterative schemes for non-linear mappings, application of inequalities and computational experiments
Journal of Inequalities and Applications volume 2018, Article number: 179 (2018)
Abstract
Some relaxed hybrid iterative schemes are presented for approximating a common element of the sets of zeros of infinitely many maximal monotone operators and the sets of fixed points of infinitely many weakly relatively non-expansive mappings in a real Banach space. Under mild assumptions, some strong convergence theorems are proved. Compared to recent work, two new projection sets are constructed, which avoids computing infinitely many projection sets at each iterative step. Several inequalities are employed to establish the convergence of the iterative sequences. A specific example is given to test the effectiveness of the new iterative schemes, and computational experiments are conducted. From the example, we can see that although the iterative sequences can be chosen in infinitely many ways from an interval, different choices correspond to different rates of convergence.
1 Introduction
Throughout this paper, let X be a real Banach space with norm \(\|\cdot \|\) and \(X^{*}\) be the dual space of X. Let K be a non-empty closed and convex subset of X. Let \(\langle x, f\rangle\) be the value of \(f \in X^{*}\) at \(x \in X\). We write \(x_{n} \rightarrow x\) to denote that \(\{ x_{n}\}\) converges strongly to x and \(x_{n} \rightharpoonup x\) to denote that \(\{x_{n}\}\) converges weakly to x.
Suppose that A is a multi-valued operator from X into \(X^{*}\). A is said to be monotone [1] if \(\langle u_{1} - u_{2}, v_{1} - v_{2}\rangle\geq0\) for all \(u_{i} \in D(A)\) and \(v_{i} \in Au_{i}\), \(i = 1,2\). The monotone operator A is called maximal monotone if \(R(J+k A) = X^{*}\) for every \(k > 0\), where \(J: X \rightarrow2^{X^{*}}\) is the normalized duality mapping defined by
\[Jx = \bigl\{ f \in X^{*} : \langle x, f\rangle = \|x\|^{2} = \|f\|^{2}\bigr\}, \quad x \in X.\]
A point \(x \in D(A)\) is called a zero of A if \(0 \in Ax\). The set of zeros of A is denoted by \(A^{-1}0\).
Suppose that the Lyapunov functional \(\phi: X \times X \rightarrow [0,+\infty)\) is defined as follows:
\[\phi(x,y) = \|x\|^{2} - 2\langle x, Jy\rangle + \|y\|^{2}, \quad x, y \in X.\]
Let T be a single-valued mapping of K into itself.
- (1) If \(Tp = p\), then p is called a fixed point of T, and \(\operatorname{Fix}(T)\) denotes the set of fixed points of T;
- (2) If there exists a sequence \(\{x_{n}\}\subset K\) which converges weakly to \(p\in K\) such that \(x_{n} - Tx_{n} \rightarrow0\), as \(n \rightarrow\infty\), then p is called an asymptotic fixed point of T [2], and \(\widehat{\operatorname{Fix}}(T)\) denotes the set of asymptotic fixed points of T;
- (3) If there exists a sequence \(\{x_{n}\}\subset K\) which converges strongly to \(p\in K\) such that \(x_{n} - Tx_{n} \rightarrow0\), as \(n \rightarrow\infty\), then p is called a strong asymptotic fixed point of T [2], and \(\widetilde{\operatorname{Fix}}(T)\) denotes the set of strong asymptotic fixed points of T;
- (4) T is called strongly relatively non-expansive [2] if \(\widehat{\operatorname{Fix}}(T) = \operatorname{Fix}(T)\neq\emptyset\) and \(\phi(p, Tx)\leq\phi (p,x)\) for \(x \in K\) and \(p \in \operatorname{Fix}(T)\);
- (5) T is called weakly relatively non-expansive [2] if \(\widetilde{\operatorname{Fix}}(T) = \operatorname{Fix}(T)\neq\emptyset\) and \(\phi(p, Tx)\leq\phi (p,x)\) for \(x \in K\) and \(p \in \operatorname{Fix}(T)\).
If X is a real reflexive and strictly convex Banach space and K is a non-empty closed and convex subset of X, then for each \(x \in X\) there exists a unique point \(x_{0} \in K\) such that \(\|x - x_{0}\| = \inf \{ \|x - y\|: y \in K\}\). In this case, the metric projection mapping \(P_{K}: X \rightarrow K\) is defined by \(P_{K}x = x_{0}\) for every \(x \in X\) [3].
If X is a real reflexive, strictly convex, and smooth Banach space and K is a non-empty closed and convex subset of X, then for each \(x \in X\) there exists a unique point \(x_{0} \in K\) such that \(\phi(x_{0}, x) = \inf \{\phi(y,x) : y \in K\}\). In this case, the generalized projection mapping \(\Pi_{K}: X \rightarrow K\) is defined by \(\Pi_{K} x = x_{0}\) for every \(x \in X\) [3].
Note that if X is a Hilbert space H, then \(P_{K}\) and \(\Pi_{K}\) coincide.
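To see why the two projections coincide, recall that in a Hilbert space the normalized duality mapping J is the identity, so the Lyapunov functional reduces to \(\phi(x,y) = \|x-y\|^{2}\); minimizing \(\phi(\cdot,x)\) over K and minimizing \(\|\cdot - x\|\) over K then select the same point. A minimal numerical sketch on \(\mathbb{R}^{n}\) and an interval \(K\) (the helper names are ours, not from the paper):

```python
def norm2(v):
    return sum(c * c for c in v)

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def phi(x, y):
    # In a Hilbert space J is the identity, so the Lyapunov functional
    # phi(x, y) = ||x||^2 - 2<x, y> + ||y||^2 equals ||x - y||^2.
    return norm2(x) - 2.0 * inner(x, y) + norm2(y)

def project_interval(x, lo, hi):
    # On K = [lo, hi] both the metric projection (minimizing |y - x|) and the
    # generalized projection (minimizing phi(y, x) = (y - x)^2) clamp x to K,
    # so P_K = Pi_K in this setting.
    return min(max(x, lo), hi)

assert abs(phi([3.0, 4.0], [0.0, 0.0]) - 25.0) < 1e-12  # ||(3,4) - 0||^2 = 25
assert project_interval(2.5, 0.0, 1.0) == 1.0
```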
Since maximal monotone operators and weakly (or strongly) relatively non-expansive mappings are closely connected with practical problems, there is good reason to study them. In recent years, much work has been done on designing iterative schemes to approximate a common element of the set of zeros of maximal monotone operators and the set of fixed points of weakly (or strongly) relatively non-expansive mappings. Among them, projection iterative schemes are regarded as effective since they almost always generate strongly convergent iterative sequences (see [4–8] and the references therein). Next, we list some closely related recent work.
Klin-eam et al. [5] presented the following projection iterative scheme for maximal monotone operator A and two strongly relatively non-expansive mappings B and C in a real uniformly convex and uniformly smooth Banach space X.
Then \(\{x_{n}\}\) generated by (1.1) converges strongly to \(\Pi_{A^{-1}0 \cap \operatorname{Fix}(B)\cap \operatorname{Fix}(C)}(x_{1})\).
Compared to (1.1), the following so-called monotone projection iterative scheme for a maximal monotone operator A and a strongly relatively non-expansive mapping B in a real uniformly convex and uniformly smooth Banach space X was presented in [4].
Then \(\{x_{n}\}\) generated by (1.2) converges strongly to \(\Pi_{A^{-1}0 \cap \operatorname{Fix}(B)}(x_{1})\).
In recent work, Wei et al. [8] extended the corresponding topic to the case for infinite maximal monotone operators \(A_{i}\) and infinite weakly relatively non-expansive mappings \(B_{i}\).
Then \(\{x_{n}\}\) generated by (1.3) converges strongly to
Compared to traditional (monotone) projection iterative schemes (e.g., (1.1) and (1.2)), some different ideas appear in (1.3). (1) The metric projection mapping \(P_{W_{n+1}}\), instead of the generalized projection mapping Π, is involved in (1.3). (2) The iterative item \(x_{n+1}\) can be chosen arbitrarily in the set \(U_{n+1}\), while \(x_{n+1}\) in (1.1), (1.2), and some other schemes must be the unique value of the generalized projection mapping Π. (3) \(\{x_{n}\}\) in (1.3) converges strongly to the unique value of the metric projection mapping P, while \(\{ x_{n}\}\) in both (1.1) and (1.2) converges strongly to the unique value of the generalized projection mapping Π.
A special case of (1.3) is presented as Corollary 2.13 in [8]. Now, we rewrite it as follows:
Based on iterative scheme (1.4), an iterative sequence is defined as follows after taking \(H = (-\infty,+\infty)\), \(Ax = 2x\), \(Bx = x\) for \(x \in(-\infty,+\infty)\), \(e_{n} = \alpha_{n} = \lambda_{n} = \frac{1}{n}\), and \(r_{n} = 2^{n-1}\):
A computational experiment based on (1.5) is conducted in [8], from which we can see the effectiveness of iterative scheme (1.4).
Inspired by the work of [8], three questions come to our mind. (1) In iterative scheme (1.3), at each iterative step n, countably many sets \(V_{n+1,i}\) and \(W_{n+1,i}\) need to be evaluated, which is formidable. Can we avoid this? (2) Since \(x_{n+1}\) in either (1.3) or (1.4) can be chosen arbitrarily in a set, can different choices of \(x_{n+1}\) in \(V_{n+1}\) lead to different rates of convergence? (3) Which is better, our new schemes or those in [8]? In this paper, we shall answer these questions, construct new simple projection sets in a theoretical sense, and conduct computational experiments for some special cases.
2 Preliminaries
In this section, we list some definitions and results needed later. The modulus of convexity of X, \(\delta_{X}: [0,2] \rightarrow[0,1]\), is defined as follows [9]:
\[\delta_{X}(\epsilon) = \inf \biggl\{ 1 - \frac{\|x+y\|}{2} : \|x\| \leq 1, \|y\| \leq 1, \|x - y\| \geq \epsilon \biggr\}\]
for \(\epsilon\in[0,2]\). A Banach space X is called uniformly convex [9] if \(\delta_{X}(\epsilon)> 0\) for every \(\epsilon\in(0,2]\). A Banach space X is called uniformly smooth [9] if the limit \(\lim_{t \rightarrow0}\frac{\|x+ty\|-\|x\|}{t}\) is attained uniformly for \((x,y)\in X\times X\) with \(\|x\|= \|y\| = 1\).
X is said to have Property (H) if, for every sequence \(\{x_{n}\} \subset X\) converging weakly to \(x \in X\) with \(\|x_{n}\| \rightarrow\|x\|\), one has \(x_{n} \rightarrow x\), as \(n \rightarrow\infty\). Every uniformly convex and uniformly smooth Banach space X has Property (H).
It is well known that if X is a real uniformly convex and uniformly smooth Banach space, then the normalized duality mapping J is single-valued, surjective and \(J(kx) = kJ(x)\) for \(x \in X\) and \(k \in (-\infty,+\infty)\). Moreover, \(J^{-1}\) is also the normalized duality mapping from \(X^{*}\) into X, and both J and \(J^{-1}\) are uniformly continuous on each bounded subset of X or \(X^{*}\), respectively [9].
Lemma 2.1
([2])
Suppose that X is a uniformly convex and uniformly smooth Banach space and K is a non-empty closed and convex subset of X. If \(B: K \rightarrow K\) is weakly relatively non-expansive, then \(\operatorname{Fix}(B)\) is a closed and convex subset of X.
Lemma 2.2
([1])
Let \(A : X \rightarrow2^{X^{*}}\) be a maximal monotone operator. Then:
- (1) \(A^{-1}0\) is a closed and convex subset of X;
- (2) if \(x_{n} \rightarrow x\) and \(y_{n} \in Ax_{n}\) with \(y_{n} \rightharpoonup y\), or \(x_{n} \rightharpoonup x\) and \(y_{n} \in Ax_{n}\) with \(y_{n} \rightarrow y\), then \(x \in D(A)\) and \(y \in Ax\).
Lemma 2.3
([8])
Let K be a non-empty closed and convex subset of a uniformly smooth Banach space X. Let \(x \in X\) and \(x_{0} \in K\). Then \(\phi(x_{0}, x) = \inf_{y \in K} \phi(y,x)\) if and only if \(\langle x_{0} - z, Jx - Jx_{0}\rangle\geq0\) for all \(z \in K\).
Lemma 2.4
([10])
Let X be a real uniformly smooth and uniformly convex Banach space, and let \(\{x_{n}\}\) and \(\{y_{n}\}\) be two sequences of X. If either \(\{x_{n}\}\) or \(\{y_{n}\}\) is bounded and \(\phi (x_{n},y_{n}) \rightarrow0\) as \(n \rightarrow\infty\), then \(x_{n} - y_{n} \rightarrow0\) as \(n \rightarrow\infty\).
Lemma 2.5
([11])
Let X be a real uniformly smooth and uniformly convex Banach space and \(A: X \rightarrow2^{X^{*}}\) be a maximal monotone operator with \(A^{-1}0 \neq\emptyset\). Then, for \(\forall x \in X\), \(\forall y \in A^{-1}0\), and \(r > 0\), one has \(\phi(y, (J+rA)^{-1}Jx)+ \phi((J+rA)^{-1}Jx, x) \leq\phi(y,x)\).
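On the real line with J the identity and \(Ax = 2x\) (maximal monotone, with \(A^{-1}0 = \{0\}\)), the resolvent is \((J+rA)^{-1}Jx = x/(1+2r)\), and the inequality of Lemma 2.5 can be checked numerically. A small sketch under these illustrative choices (function names are ours):

```python
def resolvent(x, r):
    # Scalar sketch: X = R, J = identity, A x = 2x (maximal monotone).
    # Then u = (J + r A)^{-1} J x solves u + 2 r u = x, i.e. u = x / (1 + 2 r).
    return x / (1.0 + 2.0 * r)

def phi(a, b):
    # Lyapunov functional on the real line: phi(a, b) = (a - b)^2.
    return (a - b) ** 2

# Lemma 2.5 with y = 0 in A^{-1}0: phi(0, u) + phi(u, x) <= phi(0, x).
for x in (-3.0, 0.5, 2.0):
    for r in (0.1, 1.0, 10.0):
        u = resolvent(x, r)
        assert phi(0.0, u) + phi(u, x) <= phi(0.0, x) + 1e-12
```

With \(t = 1/(1+2r) \in (0,1)\), the inequality reduces to \(t^{2} + (1-t)^{2} \leq 1\), which holds for every \(t \in [0,1]\).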
Let \(\{K_{n}\}\) be a sequence of non-empty closed and convex subsets of X. The strong lower limit of \(\{K_{n}\}\), \(s\mbox{-}\liminf K_{n}\), is the set of all \(x \in X\) for which there exist \(x_{n} \in K_{n}\) for almost all n with \(x_{n} \rightarrow x\) in norm as \(n \rightarrow\infty\). The weak upper limit of \(\{K_{n}\}\), \(w\mbox{-}\limsup K_{n}\), is the set of all \(x \in X\) for which there exist a subsequence \(\{K_{n_{m}}\}\) of \(\{K_{n}\}\) and \(x_{n_{m}} \in K_{n_{m}}\) for every \(n_{m}\) with \(x_{n_{m}} \rightharpoonup x\) as \(n_{m} \rightarrow\infty\). When \(s\mbox{-}\liminf K_{n} = w\mbox{-}\limsup K_{n}\), the common value is called the limit of \(\{K_{n}\}\), denoted by \(\lim K_{n}\) [12].
Lemma 2.6
([12])
Let \(\{K_{n}\}\) be a decreasing sequence of closed and convex subsets of X, i.e., \(K_{n} \subset K_{m}\) if \(n \geq m\). Then \(\{K_{n}\}\) converges in X and \(\lim K_{n} = \bigcap_{n = 1}^{\infty} K_{n}\).
Lemma 2.7
([13])
Suppose that X is a real uniformly convex Banach space. If \(\lim K_{n}\) exists and is not empty, then \(\{P_{K_{n}}x\}\) converges weakly to \(P_{\lim K_{n}}x\) for every \(x \in X\). Moreover, if X has Property (H), the convergence is in norm.
Lemma 2.8
([14])
Let X be a real uniformly convex Banach space and \(r \in(0,+\infty)\). Then there exists a continuous, strictly increasing, and convex function \(\eta: [0, 2r] \rightarrow[0, +\infty )\) with \(\eta(0) = 0\) such that \(\|kx+(1-k)y\|^{2} \leq k\|x\|^{2} +(1-k)\| y\|^{2} - k(1-k)\eta(\|x - y\|)\) for \(k \in[0,1], x,y \in X\) with \(\|x\| \leq r\) and \(\|y\| \leq r\).
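In a Hilbert space, the inequality of Lemma 2.8 holds with equality for \(\eta(t) = t^{2}\): expanding both sides gives the identity \(\|kx+(1-k)y\|^{2} = k\|x\|^{2}+(1-k)\|y\|^{2}-k(1-k)\|x-y\|^{2}\). A quick numerical check of this identity in \(\mathbb{R}^{2}\) (only an illustration, not the general Banach-space statement):

```python
def norm2(v):
    return sum(c * c for c in v)

def combo(x, y, k):
    return [k * a + (1.0 - k) * b for a, b in zip(x, y)]

def gap(x, y, k):
    # k||x||^2 + (1-k)||y||^2 - ||k x + (1-k) y||^2; in a Hilbert space this
    # equals k(1-k)||x - y||^2, i.e. Lemma 2.8 with eta(t) = t^2 (equality).
    return k * norm2(x) + (1.0 - k) * norm2(y) - norm2(combo(x, y, k))

x, y, k = [1.0, 2.0], [-3.0, 0.5], 0.3
diff = [a - b for a, b in zip(x, y)]
assert abs(gap(x, y, k) - k * (1.0 - k) * norm2(diff)) < 1e-9
```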
Lemma 2.9
([15])
Let X be the same as that in Lemma 2.8. Then there exists a continuous, strictly increasing, and convex function \(\eta: [0, 2r] \rightarrow[0, +\infty)\) with \(\eta(0) = 0\) such that \(\|\sum_{i = 1}^{\infty} k_{i}x_{i}\|^{2} \leq\sum_{i = 1}^{\infty} k_{i}\|x_{i}\|^{2} - k_{1}k_{m}\eta(\|x_{1} - x_{m}\|)\) for all \(\{x_{n}\}_{n = 1}^{\infty}\subset\{x \in X: \|x\| \leq r\}\), \(\{k_{n}\}_{n = 1}^{\infty}\subset(0,1)\) with \(\sum_{n = 1}^{\infty} k_{n} = 1\) and \(m \in N\).
3 Main results
In this section, our discussion is based on the following conditions:
- (\(I_{1}\)): X is a real uniformly convex and uniformly smooth Banach space and \(J: X \rightarrow X^{*}\) is the normalized duality mapping;
- (\(I_{2}\)): \(A_{i}: X \rightarrow X^{*}\) is maximal monotone and \(B_{i} : X \rightarrow X\) is weakly relatively non-expansive for each \(i\in N\), and \((\bigcap_{i = 1}^{\infty}A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i})) \neq\emptyset\);
- (\(I_{3}\)): \(\{e_{n}\} \subset X\) is the error sequence with \(e_{n} \rightarrow0\), as \(n \rightarrow\infty\);
- (\(I_{4}\)): \(\{r_{n,i}\}\) and \(\{\lambda_{n}\}\) are real number sequences in \((0,+\infty)\) with \(\inf_{n}r_{n,i} > 0\) for \(i \in N\) and \(\lambda_{n} \rightarrow0\), as \(n \rightarrow\infty\);
- (\(I_{5}\)): \(\{a_{n,i}\}\) and \(\{b_{i}\}\) are real number sequences in \((0,1)\) with \(\sum_{i = 1}^{\infty} a_{n,i} = 1 = \sum_{i = 1}^{\infty} b_{i}\) for \(n \in N\);
- (\(I_{6}\)): \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are real number sequences in \([0,1)\).
Theorem 3.1
Let \(\{x_{n}\}\) be generated by the following iterative scheme:
If \(0 \leq \sup_{n}\alpha_{n} < 1\) and \(0 \leq \sup_{n}\beta_{n} < 1\), then \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1}) \in(\bigcap_{i = 1}^{\infty}A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }\operatorname{Fix}(B_{i}))\), as \(n \rightarrow\infty\).
Proof
We split the proof into seven steps.
Step 1. \(U_{n}\) is a non-empty closed and convex subset of X for each \(n \in N\).
Noticing the definition of Lyapunov functional, we have
and
Thus \(U_{n}\) is closed and convex for each \(n \in N\).
Next, we shall prove that \((\bigcap_{i = 1}^{\infty}A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\subset U_{n}\), which implies that \(U_{n} \neq\emptyset\).
To this end, we use induction. Let \(q \in(\bigcap_{i = 1}^{\infty}A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty }\operatorname{Fix}(B_{i}))\) be arbitrary.
If \(n=1\), then \(q \in U_{1} = X\) is obviously true. In view of the convexity of \(\|\cdot\|^{2}\) and Lemma 2.5, we have
Moreover, from the definition of weakly relatively non-expansive mapping, we have
Thus \(q \in U_{2}\).
Suppose the result is true for \(n = k+1\). Then, if \(n = k+2\), we have
Moreover,
Then \(q \in U_{k+2}\). Therefore, by induction, \((\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\subset U_{n}\) for \(n \in N\).
Step 2. \(P_{U_{n+1}}(x_{1}) \rightarrow P_{\bigcap_{m = 1}^{\infty }U_{m}}(x_{1})\), as \(n \rightarrow\infty\).
It follows from Lemma 2.6 that \(\lim U_{n}\) exists and \(\lim U_{n} = \bigcap_{n = 1}^{\infty} U_{n} \neq\emptyset\). Since X has Property (H), then Lemma 2.7 implies that \(P_{U_{n+1}}(x_{1}) \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), as \(n \rightarrow\infty\).
Step 3. \(V_{n+1} \neq\emptyset\) for \(n \in N \cup\{0\}\), which ensures that \(\{x_{n}\}\) is well defined.
Since \(\|P_{U_{n+1}}(x_{1}) - x_{1}\| = \inf_{y \in U_{n+1}}\|y - x_{1}\|\), then for \(\lambda_{n+1}\), there exists \(\delta_{n+1} \in U_{n+1}\) such that \(\|x_{1} - \delta_{n+1}\|^{2} \leq(\inf_{y \in U_{n+1}} \|x_{1} - y\|)^{2} + \lambda_{n+1} = \|P_{U_{n+1}}(x_{1})-x_{1}\|^{2}+ \lambda_{n+1}\). This ensures that \(V_{n+1}\neq\emptyset\) for \(n \in N \cup\{0\}\).
Step 4. Both \(\{x_{n}\}\) and \(\{P_{U_{n+1}}(x_{1})\}\) are bounded.
Since \(\lambda_{n} \rightarrow0\), there exists \(M_{1} > 0\) such that \(\lambda_{n} < M_{1}\) for \(n \in N\). Step 2 implies that \(\{ P_{U_{n+1}}(x_{1})\}\) is bounded, so there exists \(M_{2} > 0\) such that \(\|P_{U_{n+1}}(x_{1})\| \leq M_{2}\) for \(n \in N\). Set \(M = (M_{2}+\| x_{1}\|)^{2}+M_{1}\). Since \(x_{n+1} \in V_{n+1}\), we have \(\|x_{1} - x_{n+1}\|^{2} \leq\|P_{U_{n+1}}(x_{1})-x_{1}\|^{2}+ \lambda _{n+1}\leq M\) for all \(n \in N\). Thus \(\{x_{n}\}\) is bounded.
Step 5. \(x_{n+1} - P_{U_{n+1}}(x_{1})\rightarrow0\), as \(n \rightarrow \infty\).
Since \(x_{n+1} \in V_{n+1}\subset U_{n+1}\) and \(U_{n+1}\) is a convex subset of X, for every \(k \in(0,1)\) we have \(kP_{U_{n+1}}(x_{1})+(1-k)x_{n+1}\in U_{n+1}\). Thus
Since \(\{x_{n}\}\) is bounded, it follows from (3.2) and Lemma 2.8 that
Therefore, \(k\eta(\|P_{U_{n+1}}(x_{1})-x_{n+1}\|)\leq\|x_{n+1}-x_{1}\|^{2}-\| P_{U_{n+1}}(x_{1})-x_{1}\|^{2} \leq\lambda_{n+1}\). Letting \(k \rightarrow1\) first and then \(n \rightarrow\infty\), we know that \(P_{U_{n+1}}(x_{1})-x_{n+1} \rightarrow0\), as \(n \rightarrow\infty\).
Step 6. \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), \(y_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\) and \(z_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), as \(n \rightarrow \infty\).
From Step 2 and Step 5, we know that \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), as \(n \rightarrow\infty\). And then \(x_{n+1} - x_{n} \rightarrow0\), as \(n \rightarrow\infty\). Since \(x_{n+1} \in V_{n+1}\subset U_{n+1}\) and \(e_{n} \rightarrow0\), then
Then Lemma 2.4 implies that \(x_{n+1} - y_{n}\rightarrow0\) and then \(y_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), as \(n \rightarrow \infty\).
Since \(x_{n+1} \in V_{n+1}\subset U_{n+1}\) and J is uniformly continuous on each bounded subset of X, then
Using Lemma 2.4 again, we have \(x_{n+1} - z_{n}\rightarrow0\) and then \(z_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), as \(n \rightarrow\infty\).
Step 7. \(P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1}) \in(\bigcap_{i = 1}^{\infty}A_{i}^{-1}0) \cap(\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\).
First, we shall show that \(P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1}) \in \bigcap_{i = 1}^{\infty}A_{i}^{-1}0\).
From (3.1) and Lemma 2.5, for \(\forall q \in(\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\), we have
Then
Since \(0 \leq \sup_{n}\alpha_{n} < 1\), we get \(\sum_{i = 1}^{\infty} a_{n,i}\phi((J+r_{n,i}A_{i})^{-1}J(x_{n}+e_{n}),x_{n}+e_{n}) \rightarrow0\), which, by Lemma 2.4, implies that \((J+r_{n,i}A_{i})^{-1}J(x_{n}+e_{n})- (x_{n}+e_{n}) \rightarrow0\), as \(n \rightarrow\infty\). Thus, from Step 6, \((J+r_{n,i}A_{i})^{-1}J(x_{n}+e_{n}) \rightarrow P_{\bigcap_{m = 1}^{\infty }U_{m}}(x_{1})\), as \(n \rightarrow\infty\).
Denote \(u_{n,i} = (J+r_{n,i}A_{i})^{-1}J(x_{n}+e_{n})\), then \(Ju_{n,i} + r_{n,i}A_{i}u_{n,i} = J(x_{n}+e_{n})\). Since \(u_{n,i} \rightarrow P_{\bigcap _{m = 1}^{\infty}U_{m}}(x_{1})\), \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), \(e_{n} \rightarrow0\), \(\inf_{n}r_{n,i} > 0\) and J is uniformly continuous on each bounded subset of X, then \(A_{i}u_{n,i} \rightarrow0\) for \(i \in N\), as \(n \rightarrow\infty\). Using Lemma 2.2, \(P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1}) \in\bigcap_{i = 1}^{\infty}A_{i}^{-1}0\).
Next, we shall show that \(P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\in \bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i})\).
Since \(z_{n} = J^{-1}[ \beta_{n}Jx_{n} + (1-\beta_{n})\sum_{i = 1}^{\infty }b_{i}JB_{i}y_{n}]\), then \(Jz_{n} - Jx_{n} = (1-\beta_{n})(\sum_{i = 1}^{\infty }b_{i}JB_{i}y_{n} - Jx_{n})\). Since both J and \(J^{-1}\) are uniformly continuous on each bounded subset of X, \(z_{n} \rightarrow P_{\bigcap _{m = 1}^{\infty}U_{m}}(x_{1})\), \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty }U_{m}}(x_{1})\) and \(0 \leq \sup_{n} \beta_{n} < 1\), then \(\sum_{i = 1}^{\infty }b_{i}JB_{i}y_{n} - Jx_{n} \rightarrow0\), which implies that \(J^{-1}(\sum_{i = 1}^{\infty}b_{i}JB_{i}y_{n}) \rightarrow P_{\bigcap_{m = 1}^{\infty }U_{m}}(x_{1})\), as \(n \rightarrow\infty\).
Employing Lemma 2.9, for \(\forall q \in(\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\), we have
Since \(Jy_{n} \rightarrow JP_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\) and \(\sum_{i = 1}^{\infty}b_{i}JB_{i}y_{n} \rightarrow JP_{\bigcap_{m = 1}^{\infty }U_{m}}(x_{1})\), then from the definition of weakly relatively non-expansive mapping and (3.3), we have
as \(n \rightarrow\infty\). This ensures that \(JB_{1}y_{n} - JB_{k}y_{n} \rightarrow0\) for \(k \neq1\), as \(n \rightarrow\infty\).
Since \(y_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), the sequence \(\{y_{n}\}\) is bounded. Since \((\|q\| - \|B_{i}y_{n}\|)^{2} \leq\phi(q, B_{i}y_{n}) \leq\phi(q, y_{n})\leq(\|q\|+\|y_{n}\|)^{2}\), we have \(\|B_{i}y_{n}\| \leq2\|q\|+\|y_{n}\|\) for \(i \in N\). Set \(K = \sup\{\|y_{n}\|: n \in N\}+2\|q\|\); then \(K < +\infty\) and \(\|B_{i}y_{n}\| \leq K\) for all \(i, n \in N\).
Since \(\sum_{i = 1}^{\infty}b_{i} = 1\), for every \(\varepsilon> 0\) there exists \(m_{0} \in N\) such that \(\sum_{i = m_{0}+1}^{\infty}b_{i} < \frac{\varepsilon}{4K}\).
Since \(JB_{1}y_{n} - JB_{k}y_{n} \rightarrow0\), as \(n \rightarrow\infty\), for each \(k \in\{2,\ldots, m_{0}\}\), we can choose \(n_{0} \in N\) such that \(\|JB_{1}y_{n} - JB_{k}y_{n}\|< \frac{\varepsilon}{2}\) for all \(n \geq n_{0}\) and \(k \in\{2,\ldots, m_{0}\}\). Then, if \(n \geq n_{0}\),
This implies that \(JB_{1}y_{n} - \sum_{i = 1}^{\infty}b_{i}JB_{i}y_{n} \rightarrow0\), and then \(JB_{1}y_{n} \rightarrow JP_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), as \(n \rightarrow\infty\). Thus \(B_{1}y_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), as \(n \rightarrow \infty\). Lemma 2.1 implies that \(P_{\bigcap_{m = 1}^{\infty }U_{m}}(x_{1})\in \operatorname{Fix}(B_{1})\).
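The choice of \(m_{0}\) above only uses \(\sum_{i=1}^{\infty} b_{i} = 1\). For instance, taking \(b_{i} = 2^{-i}\) (an illustrative choice of ours, not fixed by the paper), the tail satisfies \(\sum_{i > m_{0}} b_{i} = 2^{-m_{0}}\) and the required index can be computed explicitly:

```python
import math

def tail_index(eps, K):
    # For b_i = 2^{-i} (so sum_{i>=1} b_i = 1), the tail obeys
    # sum_{i > m0} b_i = 2^{-m0}; pick the smallest m0 with 2^{-m0} < eps/(4K).
    return max(1, math.floor(math.log2(4.0 * K / eps)) + 1)

eps, K = 0.1, 5.0
m0 = tail_index(eps, K)
assert 2.0 ** (-m0) < eps / (4.0 * K)   # tail bound required in the proof
```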
Repeating the above process for showing \(P_{\bigcap_{m = 1}^{\infty }U_{m}}(x_{1})\in \operatorname{Fix}(B_{1})\), we can also prove that \(P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\in \operatorname{Fix}(B_{k})\), \(\forall k \in N\). Therefore, \(P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\in\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i})\).
This completes the proof. □
Theorem 3.2
Let \(\{x_{n}\}\) be generated by the following iterative scheme:
If \(\alpha_{n} \rightarrow0\), \(\beta_{n} \rightarrow0\), then \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1}) \in(\bigcap_{i = 1}^{\infty}A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\), as \(n \rightarrow\infty\).
Proof
Steps 2, 3, 4, and 5 are the same as in the proof of Theorem 3.1; Steps 1, 6, and 7 need the following small changes.
Step 1. \(U_{n}\) is a non-empty closed and convex subset of X.
We notice that
and
Thus \(U_{n}\) is closed and convex for \(n \in N\).
Next, we shall prove that \((\bigcap_{i = 1}^{\infty}A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\subset U_{n}\), which ensures that \(U_{n} \neq\emptyset\).
To this end, we use induction. Let \(q \in(\bigcap_{i = 1}^{\infty}A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty }\operatorname{Fix}(B_{i}))\) be arbitrary.
If \(n=1\), \(q \in U_{1} = X \) is obviously true. In view of the convexity of \(\|\cdot\|^{2}\) and Lemma 2.5, we have
Moreover, from the definition of weakly relatively non-expansive mapping, we have
Thus \(q \in U_{2}\).
Suppose the result is true for \(n = k+1\). Then, if \(n = k+2\), we have
Moreover,
Then \(q \in U_{k+2}\). Therefore, by induction, \(\emptyset\neq(\bigcap_{i = 1}^{\infty}A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty }\operatorname{Fix}(B_{i}))\subset U_{n}\), for \(n \in N\).
Step 6. \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), \(y_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), and \(z_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), as \(n \rightarrow \infty\).
Following from the results of Step 2 and Step 5, \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), as \(n \rightarrow\infty\). And then \(x_{n+1} - x_{n} \rightarrow0\), as \(n \rightarrow\infty\).
Since \(x_{n+1} \in V_{n+1}\subset U_{n+1}\), \(\alpha_{n} \rightarrow0\), and \(e_{n} \rightarrow0\), then
as \(n \rightarrow\infty\). Lemma 2.4 implies that \(x_{n+1} - y_{n}\rightarrow0\) and then \(y_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty }U_{m}}(x_{1})\) as \(n \rightarrow\infty\).
Since \(x_{n+1} \in V_{n+1}\subset U_{n+1}\) and \(\beta_{n} \rightarrow0\), then
Lemma 2.4 implies that \(x_{n+1} - z_{n}\rightarrow0\) and then \(z_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\) as \(n \rightarrow \infty\).
Step 7. \(P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1}) \in(\bigcap_{i = 1}^{\infty}A_{i}^{-1}0) \cap(\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\).
First, we shall show that \(P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1}) \in \bigcap_{i = 1}^{\infty}A_{i}^{-1}0\).
From (3.4) and Lemma 2.5, for \(\forall q \in(\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\), we have
Thus
Since \(\alpha_{n} \rightarrow0\), we get \(\sum_{i = 1}^{\infty} a_{n,i}\phi((J+r_{n,i}A_{i})^{-1}J(x_{n}+e_{n}),x_{n}+e_{n}) \rightarrow0\), which, by Lemma 2.4, implies that \((J+r_{n,i}A_{i})^{-1}J(x_{n}+e_{n})- (x_{n}+e_{n}) \rightarrow0\), as \(n \rightarrow\infty\). Thus \((J+r_{n,i}A_{i})^{-1}J(x_{n}+e_{n}) \rightarrow P_{\bigcap_{m = 1}^{\infty }U_{m}}(x_{1})\), as \(n \rightarrow\infty\).
Let \(u_{n,i} = (J+r_{n,i}A_{i})^{-1}J(x_{n}+e_{n})\), then \(Ju_{n,i} + r_{n,i}A_{i}u_{n,i} = J(x_{n}+e_{n})\). Since \(u_{n,i} \rightarrow P_{\bigcap _{m = 1}^{\infty}U_{m}}(x_{1})\), \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), \(e_{n} \rightarrow0\), and \(\inf_{n}r_{n,i} > 0\), then \(A_{i}u_{n,i} \rightarrow0\) for \(i \in N\), as \(n \rightarrow\infty \). Using Lemma 2.2, \(P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1}) \in\bigcap_{i = 1}^{\infty}A_{i}^{-1}0\).
Next, we shall show that \(P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\in \bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i})\).
Since \(z_{n} = J^{-1}[ \beta_{n}Jx_{1} + (1-\beta_{n})\sum_{i = 1}^{\infty }b_{i}JB_{i}y_{n}]\), then \(Jz_{n} - Jx_{n} = \beta_{n}(Jx_{1}-Jx_{n})+(1-\beta _{n})(\sum_{i = 1}^{\infty}b_{i}JB_{i}y_{n} - Jx_{n})\). Since both J and \(J^{-1}\) are uniformly continuous on each bounded subset of X, \(z_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), and \(\beta_{n} \rightarrow0\), then \(\sum_{i = 1}^{\infty}b_{i}JB_{i}y_{n} - Jx_{n} \rightarrow0\), which implies that \(J^{-1}(\sum_{i = 1}^{\infty}b_{i}JB_{i}y_{n}) \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1})\), as \(n \rightarrow\infty\).
The following proof is the same as the corresponding part in Step 7 of Theorem 3.1.
This completes the proof. □
Theorem 3.3
Suppose that \(\{x_{n}\}\) is generated by the following iterative scheme:
If \(0 \leq \sup_{n}\beta_{n} < 1\) and \(\alpha_{n} \rightarrow0\), then \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1}) \in(\bigcap_{i = 1}^{\infty}A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\), as \(n \rightarrow\infty\).
Theorem 3.4
Suppose that \(\{x_{n}\}\) is generated by the following iterative scheme:
If \(0 \leq \sup_{n}\alpha_{n} < 1\) and \(\beta_{n} \rightarrow0\), then \(x_{n} \rightarrow P_{\bigcap_{m = 1}^{\infty}U_{m}}(x_{1}) \in(\bigcap_{i = 1}^{\infty}A_{i}^{-1}0)\cap(\bigcap_{i = 1}^{\infty}\operatorname{Fix}(B_{i}))\), as \(n \rightarrow\infty\).
Remark 3.5
The main difference between our results and those of [8] is the following: in [8], countably many sets \(V_{n+1,i}\) and \(W_{n+1,i}\) must be evaluated at each step n, while in our paper two sets \(U_{n+1}\) and \(V_{n+1}\) suffice at each step n. This difference requires some different techniques for proving the main results.
Corollary 3.6
If X reduces to a Hilbert space H, then (3.1) becomes the following:
Similarly, we can get the special forms of (3.4), (3.5), and (3.6) in the frame of Hilbert space H.
Corollary 3.7
If, further, \(r_{n,i} \equiv r_{n}\), \(A_{i} \equiv A\), and \(B_{i} \equiv B\), then we can get a special case for (3.7):
where A is maximal monotone, B is weakly relatively non-expansive, and \(\{r_{n}\}\subset(0, +\infty)\) satisfies \(\inf_{n}r_{n} > 0\).
Corollary 3.8
If, in Corollary 3.7, \(\alpha_{n} \equiv0\), then (3.8) can be further simplified as follows:
Remark 3.9
Comparing (3.9) and (1.4), we may find that they are different due to different construction of \(U_{n+1}\). This indicates again that (3.1) is different from (1.3).
Remark 3.10
Choose \(H = (-\infty,+\infty)\), \(Ax = 2x\), and \(Bx = x\) for \(x \in(-\infty,+\infty)\). Let \(e_{n} = \beta_{n} = \lambda_{n} = \frac {1}{n}\) and \(r_{n} = 2^{n-1}\) for \(n \in N\). Then A is maximal monotone and B is weakly relatively non-expansive. Moreover, \(A^{-1}0 \cap \operatorname{Fix}(B) = \{0\}\).
Corollary 3.11
Take the example in Remark 3.10. Among infinitely many possible choices, we select the following three iterative sequences \(\{x_{n}\}\) generated by iterative scheme (3.9).
and
Then \(\{x_{n}\}\) generated by (3.10), (3.11), and (3.12) converges strongly to \(0 \in A^{-1}0\cap \operatorname{Fix}(B)\), as \(n\rightarrow\infty\).
Proof
We can easily see from iterative scheme (3.9) that
and
From (3.14), we can see that \((v - z_{n})^{2} \leq\beta_{n} (v - x_{n})^{2} + (1-\beta_{n})(v-y_{n})^{2}\) is always true for \(v \in(-\infty, +\infty)\). Then we can simplify \(U_{n+1}\) and \(V_{n+1}\) as follows:
and
Next, we split the proof into three parts.
Part 1. We shall show that both \(\{x_{n}\}\) and \(\{y_{n}\}\) generated by (3.10) converge strongly to \(0 \in A^{-1}0\cap \operatorname{Fix}(B)\), as \(n\rightarrow\infty\).
Using induction, we first show that the following holds:
In fact, if \(n = 1\), \(y_{1} = \frac{x_{1}+e_{1}}{1+2r_{1}} = \frac{2}{3}\). Since \((x_{1}+e_{1})-y_{1} = 2r_{1}y_{1} = 2y_{1} = \frac{4}{3} > 0\), then from (3.15), \(U_{2} = (-\infty, +\infty)\cap(-\infty,(1+r_{1})y_{1}]= (-\infty ,\frac{4}{3}]\). Thus \(P_{U_{2}}(x_{1}) = x_{1}\). From (3.16), \(V_{2} = U_{2} \cap[1-\frac {\sqrt{2}}{2}, 1+\frac{\sqrt{2}}{2}] = [1-\frac{\sqrt{2}}{2}, \frac{4}{3}]\). So, we may choose \(x_{2} = 1 - \frac{\sqrt{2}}{2}\).
If \(n = 2\), \(y_{2} = \frac{x_{2}+e_{2}}{1+2r_{2}} = \frac{3}{10}-\frac{\sqrt {2}}{10}\) and \(w_{2} = \min\{(1+r_{1})y_{1}, (1+r_{2})y_{2}\} = \frac{9-3\sqrt {2}}{10} = (1+r_{2})y_{2}\). It is easy to see that \(0 < (1+r_{2})y_{2} < 1\), and then \(x_{2} + e_{2} - y_{2} = 2r_{2}y_{2} > 0\). From (3.15), \(U_{3} = U_{2} \cap(-\infty, 3y_{2}] = (-\infty, \frac{4}{3}] \cap(-\infty, \frac {9-3\sqrt{2}}{10}] = (-\infty, w_{2}]\), and then \(P_{U_{3}}(x_{1}) = w_{2} = \frac{9-3\sqrt{2}}{10}\). From (3.16), \(V_{3} = U_{3} \cap[x_{1}-\sqrt {(x_{1}-w_{2})^{2} + \lambda_{3}}, x_{1}+\sqrt{(x_{1}-w_{2})^{2} + \lambda_{3}}] = [1-\sqrt{(\frac{1+3\sqrt{2}}{10})^{2}+\frac{1}{3}},\frac{9-3\sqrt {2}}{10}] = [x_{1}-\sqrt{(x_{1}-w_{2})^{2} + \lambda_{3}},w_{2}]\). Then we may choose \(x_{3} = x_{1}-\sqrt{(x_{1}-w_{2})^{2} + \lambda_{3}}\).
Suppose that (3.17) is true for \(n = k\). We now begin the discussion for \(n = k+1\).
Since \(0 < (1+r_{k+1})y_{k+1} < 1\), then \(x_{k+1}+e_{k+1}-y_{k+1} = 2r_{k+1}y_{k+1} > 0\). From (3.15) and (3.13), \(U_{k+2} = U_{k+1} \cap(-\infty, (1+r_{k+1})y_{k+1}] = (-\infty, w_{k+1}]\), and then \(P_{U_{k+2}}(x_{1}) = w_{k+1}\).
Note that \(w_{k+1} < 1 = x_{1} < x_{1} + \sqrt{(x_{1}-w_{k+1})^{2}+\lambda _{k+2}}\) and \(\sqrt{(x_{1}-w_{k+1})^{2}+\lambda_{k+2}}> x_{1}-w_{k+1}> 0\), then from (3.16) we know that
Then we may choose
Since \(y_{k+2} = \frac{x_{k+2}+e_{k+2}}{1+2r_{k+2}} = \frac {x_{k+2}}{1+2^{k+2}}+\frac{1}{(k+2)(1+2^{k+2})}\), then \((1+r_{k+2})y_{k+2} = \frac{1+r_{k+2}}{1+2r_{k+2}}(x_{k+2}+e_{k+2})\). Note that
This is obviously true. Then \((1+r_{k+2})y_{k+2} > 0\). Since
then \((1+r_{k+2})y_{k+2}= \frac{1+r_{k+2}}{1+2r_{k+2}} (x_{k+2}+e_{k+2}) < 1\).
By now, we have proved that (3.17) is true.
It remains to prove that \(x_{n} \rightarrow0\) and \(y_{n} \rightarrow0\), as \(n \rightarrow\infty\).
From (3.17), \(\{(1+r_{n})y_{n}\}\) is bounded, which implies that \(\{w_{n}\}\) is bounded, and hence \(\{x_{n}\}\) is bounded. Let \(\{x_{n_{i}}\}\) be any convergent subsequence of \(\{x_{n}\}\), say \(\lim_{i \rightarrow\infty}x_{n_{i}} = a\). Then \(w_{n_{i}} \rightarrow a\) and \(y_{n_{i}}\rightarrow0\) as \(i \rightarrow\infty\). Since \(0 < w_{n_{i}} \leq(1+r_{n_{i}})y_{n_{i}} < 1\), we have \(0 \leq a \leq \lim_{i \rightarrow\infty}(1+r_{n_{i}})y_{n_{i}}\leq 1\). On the other hand, since \(2r_{n}y_{n} = x_{n} + e_{n} - y_{n}\), \(e_{n} \rightarrow0\), and \(y_{n_{i}} \rightarrow0\), we get \(2r_{n_{i}}y_{n_{i}} \rightarrow a\), and hence \(\lim_{i \rightarrow\infty}(1+r_{n_{i}})y_{n_{i}} = \lim_{i \rightarrow\infty}(y_{n_{i}}+r_{n_{i}}y_{n_{i}}) = \frac{a}{2}\). Therefore \(0 \leq a \leq\frac{a}{2} \leq1\), which forces \(a = 0\). This means that every convergent subsequence of \(\{x_{n}\}\) converges strongly to 0. Thus \(x_{n} \rightarrow0 \in A^{-1}0 \cap \operatorname{Fix}(B)\), as \(n \rightarrow\infty\), and then \(y_{n} \rightarrow0\), \(w_{n} \rightarrow0\), as \(n \rightarrow\infty\).
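The convergence just proved can be observed numerically. The sketch below (not part of the proof) iterates scheme (3.10) with the left-endpoint choice \(x_{n+1} = x_{1} - \sqrt{(x_{1}-P_{U_{n+1}}(x_{1}))^{2}+\lambda_{n+1}}\), under the same assumed data \(r_{n} = 2^{n-1}\), \(e_{n} = \frac{1}{n}\), \(\lambda_{n} = \frac{1}{n}\), \(x_{1} = 1\).

```python
from math import sqrt

# Iterate scheme (3.10), Part 1 choice: x_{n+1} is the left endpoint of V_{n+1}.
# Assumed data (read off from the computations above): r_n = 2^(n-1), e_n = 1/n,
# lambda_n = 1/n, x_1 = 1.
x1 = 1.0
x, w = x1, float("inf")
for n in range(1, 101):
    r = 2.0 ** (n - 1)
    y = (x + 1.0 / n) / (1 + 2 * r)          # y_n = (x_n + e_n)/(1 + 2 r_n)
    w = min(w, (1 + r) * y)                  # w_n = min_{i <= n} (1 + r_i) y_i
    p = min(x1, w)                           # P_{U_{n+1}}(x_1): projection onto (-inf, w_n]
    x = x1 - sqrt((x1 - p) ** 2 + 1.0 / (n + 1))

# x_n, y_n and w_n all tend to 0, matching the conclusion of Part 1.
```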
Part 2. We shall show that both \(\{x_{n}\}\) and \(\{y_{n}\}\) generated by (3.11) converge strongly to \(0 \in A^{-1}0\cap \operatorname{Fix}(B)\), as \(n\rightarrow\infty\).
First, we shall use induction to show that the following is true:
In fact, if \(n = 1\), \(y_{1} = \frac{x_{1}+e_{1}}{1+2r_{1}} = \frac{2}{3}\). Since \((x_{1}+e_{1})-y_{1} = 2r_{1}y_{1} = 2y_{1} = \frac{4}{3} > 0\), then from (3.15), \(U_{2} = (-\infty, +\infty)\cap(-\infty,(1+r_{1})y_{1}]= (-\infty ,\frac{4}{3}]\). Thus \(P_{U_{2}}(x_{1}) = x_{1}\). From (3.16), \(V_{2} = U_{2} \cap[1-\frac {\sqrt{2}}{2}, 1+\frac{\sqrt{2}}{2}] = [1-\frac{\sqrt{2}}{2}, \frac{4}{3}]\). Then we may choose \(x_{2} = (1+r_{1})y_{1} = \frac{4}{3}\).
If \(n = 2\), \(y_{2} = \frac{x_{2}+e_{2}}{1+2r_{2}} = \frac{11}{30}\). It is easy to see that \(0 < (1+r_{2})y_{2} = \frac{11}{10} < (1+r_{1})y_{1} = \frac {4}{3}\). From (3.15), \(U_{3} = U_{2} \cap(-\infty, 3y_{2}] = (-\infty, \frac{11}{10}] = (-\infty, (1+r_{2})y_{2}]\), and then \(P_{U_{3}}(x_{1}) = x_{1}\). From (3.16), \(V_{3} = U_{3} \cap[1-\frac{\sqrt{3}}{3}, 1+\frac {\sqrt{3}}{3}]= [1-\frac{\sqrt{3}}{3}, \frac{11}{10}]\). Then we may choose \(x_{3} = (1+r_{2})y_{2} = \frac{11}{10}\). Thus \(y_{3} = \frac {x_{3}+e_{3}}{1+2r_{3}} = \frac{43}{270}\). It is easy to check that \(0 < (1+r_{3})y_{3} = \frac{43}{54}< \frac{11}{10} = (1+r_{2})y_{2}\) and \(\frac {1+2^{3}}{(2+2)2^{3}} < (1+r_{3})y_{3} < 1\).
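The rational values above can be verified with exact arithmetic. The following check is not part of the proof and assumes \(r_{n} = 2^{n-1}\) and \(e_{n} = \frac{1}{n}\), as read off from the text.

```python
from fractions import Fraction as F

# Exact check of the n = 1, 2, 3 computations for scheme (3.11), where
# x_{n+1} = (1 + r_n) y_n. Assumed data: r_n = 2^(n-1), e_n = 1/n, x_1 = 1.
def step(x, n):
    return (x + F(1, n)) / (1 + 2 * F(2) ** (n - 1))   # y_n = (x_n + e_n)/(1 + 2 r_n)

x1 = F(1)
y1 = step(x1, 1);  assert y1 == F(2, 3)
x2 = (1 + 2 ** 0) * y1;  assert x2 == F(4, 3)          # x_2 = (1 + r_1) y_1
y2 = step(x2, 2);  assert y2 == F(11, 30)
x3 = (1 + 2 ** 1) * y2;  assert x3 == F(11, 10)        # x_3 = (1 + r_2) y_2
y3 = step(x3, 3);  assert y3 == F(43, 270)
assert (1 + 2 ** 2) * y3 == F(43, 54)                  # (1 + r_3) y_3 < 1
```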
Suppose that (3.18) is true for \(n = k\). Next, we show the result is true for \(n = k+1\).
Since \(0 < (1+r_{k+1})y_{k+1} < (1+ r_{k})y_{k} < 1\), then (3.15) implies that \(U_{k+2} = U_{k+1} \cap(-\infty, (1+r_{k+1})y_{k+1}] = (-\infty,(1+r_{k+1})y_{k+1}]\) and \(P_{U_{k+2}}(x_{1}) = (1+r_{k+1})y_{k+1}\).
Note that \((1+r_{k+1})y_{k+1} < 1 = x_{1} < x_{1} + \sqrt {[x_{1}-(1+r_{k+1})y_{k+1}]^{2}+\lambda_{k+2}}\) and \(x_{1} - \sqrt{[x_{1}-(1+r_{k+1})y_{k+1}]^{2}+\lambda _{k+2}}<(1+r_{k+1})y_{k+1}\). Then, from (3.16), we know that
Thus we may choose
And then, \(y_{k+2} = \frac{x_{k+2}+e_{k+2}}{1+2r_{k+2}} = \frac {x_{k+2}}{1+2^{k+2}}+\frac{1}{(k+2)(1+2^{k+2})}\). So \((1+r_{k+2})y_{k+2} = \frac{1+r_{k+2}}{1+2r_{k+2}}(x_{k+2}+e_{k+2})\). Note that
which is obviously true from the assumption. Thus \((1+r_{k+2})y_{k+2} > 0\).
Since \((1+r_{k+1})y_{k+1}< 1\), then \(\frac {1+2^{k+1}}{1+2^{k+2}}[(1+r_{k+1})y_{k+1}+\frac{1}{k+2}]<\frac {1+2^{k+1}}{1+2^{k+2}} \frac{k+3}{k+2}< 1\). Thus
Note that
which is true from the assumption.
Compute the following:
By now, we have proved that (3.18) is true.
It remains to prove that \(x_{n} \rightarrow0\) and \(y_{n} \rightarrow0\), as \(n \rightarrow\infty\).
Since \(\{(1+r_{n})y_{n}\}\) is decreasing and bounded in \((0,1)\), the limit \(\lim_{n \rightarrow\infty}(1+r_{n})y_{n} = \lim_{n \rightarrow\infty}x_{n} = a\) exists. Returning to (3.13), we know that \(r_{n}y_{n} \rightarrow0\), as \(n \rightarrow\infty\). Then \(y_{n} \rightarrow0\), so \(a = \lim_{n \rightarrow\infty}(y_{n}+r_{n}y_{n}) = 0\); that is, \(x_{n} \rightarrow0\), as \(n \rightarrow\infty\).
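A short simulation (not part of the proof) illustrates this monotone decrease. It assumes, as read off from the computations above, \(r_{n} = 2^{n-1}\), \(e_{n} = \frac{1}{n}\), and \(x_{1} = 1\).

```python
# Iterate scheme (3.11): x_{n+1} = (1 + r_n) y_n, with the assumed data
# r_n = 2^(n-1), e_n = 1/n, x_1 = 1 (read off from the text above).
x = 1.0
seq = []                                 # seq[k] holds x_{k+2} = (1 + r_{k+1}) y_{k+1}
for n in range(1, 201):
    r = 2.0 ** (n - 1)
    y = (x + 1.0 / n) / (1 + 2 * r)      # y_n = (x_n + e_n)/(1 + 2 r_n)
    x = (1 + r) * y                      # x_{n+1} = (1 + r_n) y_n
    seq.append(x)

# {(1 + r_n) y_n} decreases monotonically toward 0, as Part 2 asserts.
```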
Part 3. We shall show that both \(\{x_{n}\}\) and \(\{y_{n}\}\) generated by (3.12) converge strongly to \(0 \in A^{-1}0\cap \operatorname{Fix}(B)\), as \(n\rightarrow\infty\).
First, we shall use induction to show that the following is true:
In fact, if \(n = 1\), \(y_{1} = \frac{x_{1}+e_{1}}{1+2r_{1}} = \frac{2}{3}\). Since \((x_{1}+e_{1})-y_{1} = 2r_{1}y_{1} = 2y_{1} = \frac{4}{3} > 0\), then from (3.15), \(U_{2} = (-\infty, +\infty)\cap(-\infty,(1+r_{1})y_{1}]= (-\infty ,\frac{4}{3}]\). Then \(P_{U_{2}}(x_{1}) = x_{1}\). From (3.16), \(V_{2} = U_{2} \cap[1-\frac {\sqrt{2}}{2}, 1+\frac{\sqrt{2}}{2}] = [1-\frac{\sqrt{2}}{2}, \frac{4}{3}]\). Thus we may choose \(x_{2} = \frac{1-\frac{\sqrt{2}}{2}+\frac{4}{3}}{2} = \frac{7}{6} - \frac{\sqrt{2}}{4}\).
If \(n = 2\), \(y_{2} = \frac{x_{2}+e_{2}}{1+2r_{2}} = \frac{1}{3}-\frac{\sqrt {2}}{20}\) and \(w_{2} = \min\{(1+r_{1})y_{1}, (1+r_{2})y_{2}\} = 1-\frac {3\sqrt{2}}{20}\). It is easy to see that \(0 < (1+r_{2})y_{2} = 1 -\frac {3\sqrt{2}}{20}< 1\). Thus from (3.15), \(U_{3} = U_{2} \cap(-\infty, 3y_{2}] = (-\infty, \frac{4}{3}] \cap(-\infty, 1-\frac{3\sqrt {2}}{20}] = (-\infty, w_{2}]\), and then \(P_{U_{3}}(x_{1}) = 1-\frac{3\sqrt {2}}{20} = w_{2}\).
From (3.16), \(V_{3} = U_{3} \cap[x_{1}-\sqrt{(x_{1}-w_{2})^{2} + \lambda_{3}}, x_{1}+\sqrt{(x_{1}-w_{2})^{2} + \lambda_{3}}] = [1-\sqrt{\frac{18}{400}+\frac {1}{3}}, 1- \frac{3\sqrt{2}}{20}] = [x_{1}-\sqrt{(x_{1}-w_{2})^{2} + \lambda _{3}},w_{2}]\). Then we may choose \(x_{3} = \frac{x_{1}-\sqrt{(x_{1}-w_{2})^{2} + \lambda_{3}}+w_{2}}{2}= 1- \frac{3\sqrt{2}}{40} - \frac{\sqrt{1362}}{120}\). We can easily check that \(0 < (1+r_{3})y_{3} = 5y_{3} = \frac{20}{27}-\frac {9\sqrt{2}+\sqrt{1362}}{216} < 1\).
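The irrational values above can be verified in floating point. The following check is not part of the proof and assumes \(r_{n} = 2^{n-1}\), \(e_{n} = \frac{1}{n}\), \(\lambda_{n} = \frac{1}{n}\), \(x_{1} = 1\), as read off from the text.

```python
from math import isclose, sqrt

# Check of the n = 1, 2 computations for scheme (3.12), the midpoint choice.
# Assumed data: r_n = 2^(n-1), e_n = 1/n, lambda_n = 1/n, x_1 = 1.
x1 = 1.0
y1 = (x1 + 1.0) / 3
x2 = ((1 - sqrt(2) / 2) + 4 / 3) / 2                    # midpoint of V_2
assert isclose(x2, 7 / 6 - sqrt(2) / 4)
y2 = (x2 + 0.5) / 5
assert isclose(y2, 1 / 3 - sqrt(2) / 20)
w2 = min(2 * y1, 3 * y2)                                # min{(1+r_1)y_1, (1+r_2)y_2}
assert isclose(w2, 1 - 3 * sqrt(2) / 20)
lo3 = x1 - sqrt((x1 - w2) ** 2 + 1 / 3)                 # left endpoint of V_3
x3 = (lo3 + w2) / 2                                     # midpoint of V_3
assert isclose(x3, 1 - 3 * sqrt(2) / 40 - sqrt(1362) / 120)
y3 = (x3 + 1 / 3) / 9
assert isclose(5 * y3, 20 / 27 - (9 * sqrt(2) + sqrt(1362)) / 216)
assert 0 < 5 * y3 < 1
```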
Suppose that (3.19) is true for \(n = k\). Next, we shall show that (3.19) is true for \(n = k+1\).
Since \(0 <(1+r_{k+1})y_{k+1}< 1\), then \(x_{k+1}+e_{k+1}-y_{k+1} = 2r_{k+1}y_{k+1} > 0\). From (3.15), \(U_{k+2} = U_{k+1} \cap(-\infty, (1+r_{k+1})y_{k+1}] = (-\infty, w_{k+1}]\), and \(P_{U_{k+2}}(x_{1}) = w_{k+1}\). From (3.16), \(V_{k+2} = U_{k+2} \cap[x_{1} - \sqrt {(x_{1}-w_{k+1})^{2}+\lambda_{k+2}},x_{1} + \sqrt{(x_{1}-w_{k+1})^{2}+\lambda_{k+2}}]\).
Note that \(w_{k+1} < 1 = x_{1} < x_{1} + \sqrt{(x_{1}-w_{k+1})^{2}+\lambda _{k+2}}\) and \(\sqrt{(x_{1}-w_{k+1})^{2}+\lambda_{k+2}}> x_{1}- w_{k+1}> 0\). Then \(V_{k+2} = [x_{1} - \sqrt{(x_{1}-w_{k+1})^{2}+\lambda_{k+2}}, w_{k+1}]\). Thus we may choose
Note that
which is obviously true since \((\frac{k+4}{k+2})^{2} > 1+\frac{1}{k+2}\). Then \((1+r_{k+2})y_{k+2} > 0\).
Moreover,
which is true since \(\frac{1+3\cdot2^{k+1}}{1+2^{k+1}}-\frac{2}{k+2} > 1\). Then \((1+r_{k+2})y_{k+2}= \frac{1+r_{k+2}}{1+2r_{k+2}} (x_{k+2}+e_{k+2}) < 1\).
By now, we have proved that (3.19) is true.
It remains to prove that \(x_{n} \rightarrow0\) and \(y_{n} \rightarrow0\), as \(n \rightarrow\infty\).
From (3.19), \(\{(1+r_{n})y_{n}\}\) is bounded, which implies that \(\{w_{n}\}\) is bounded, and hence \(\{x_{n}\}\) is bounded. Let \(\{x_{n_{i}}\}\) be any convergent subsequence of \(\{x_{n}\}\), say \(\lim_{i \rightarrow\infty}x_{n_{i}} = a\). Then \(w_{n_{i}} \rightarrow a\) and \(y_{n_{i}}\rightarrow0\) as \(i \rightarrow\infty\). Since \(2r_{n}y_{n} = x_{n} + e_{n} - y_{n}\), \(e_{n} \rightarrow0\), and \(y_{n_{i}} \rightarrow0\), we get \(\lim_{i \rightarrow\infty}(1+r_{n_{i}})y_{n_{i}} = \frac {a}{2}\). Since \(0 < w_{n_{i}} \leq(1+r_{n_{i}})y_{n_{i}} < 1\), it follows that \(0 \leq a \leq\frac{a}{2}\leq1\), and thus \(a = 0\). This means that every convergent subsequence of \(\{x_{n}\}\) converges strongly to 0. Thus \(x_{n} \rightarrow0\), as \(n \rightarrow\infty\), and then \(y_{n} \rightarrow 0\), \(w_{n} \rightarrow0\), as \(n \rightarrow\infty\).
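Part 3 can also be illustrated numerically. The sketch below (not part of the proof) iterates scheme (3.12) with \(x_{n+1}\) the midpoint of \(V_{n+1}\), under the assumed data \(r_{n} = 2^{n-1}\), \(e_{n} = \frac{1}{n}\), \(\lambda_{n} = \frac{1}{n}\), \(x_{1} = 1\).

```python
from math import sqrt

# Iterate scheme (3.12): x_{n+1} is the midpoint of V_{n+1}. Assumed data
# (read off from the computations above): r_n = 2^(n-1), e_n = 1/n,
# lambda_n = 1/n, x_1 = 1.
x1 = 1.0
x, w = x1, float("inf")
for n in range(1, 101):
    r = 2.0 ** (n - 1)
    y = (x + 1.0 / n) / (1 + 2 * r)                   # y_n = (x_n + e_n)/(1 + 2 r_n)
    w = min(w, (1 + r) * y)                           # w_n = min_{i <= n} (1 + r_i) y_i
    p = min(x1, w)                                    # P_{U_{n+1}}(x_1)
    lam = 1.0 / (n + 1)
    lo = x1 - sqrt((x1 - p) ** 2 + lam)               # left endpoint of V_{n+1}
    hi = min(x1 + sqrt((x1 - p) ** 2 + lam), w)       # right endpoint of V_{n+1}
    x = (lo + hi) / 2                                 # midpoint choice of (3.12)

# x_n, y_n and w_n all tend to 0, matching Part 3.
```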
This completes the proof. □
Remark 3.12
We conduct computational experiments on schemes (3.10), (3.11), and (3.12) in Corollary 3.11. Using programs written in Visual Basic 6, we obtain Tables 1–3 and Figs. 1–3.
Convergence of \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{w_{n}\}\) corresponding to Table 1
Convergence of \(\{x_{n}\}\) and \(\{y_{n}\}\) corresponding to Table 2
Convergence of \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{w_{n}\}\) corresponding to Table 3
Remark 3.13
From Tables 1–3 and Figs. 1–3, we can see that for the initial value \(x_{1} = 1\), different choices of \(x_{n+1}\) in \(V_{n+1}\) lead to different rates of convergence. Naturally, the larger \(x_{n+1}\) is chosen, the slower the convergence. Although the sequence generated by (3.11) converges most slowly among the three, it is still worth considering because of its “nice and simple” expression compared to the other two.
Remark 3.14
Although \(x_{n+1}\) in both (3.12) and (1.5) is chosen as the mid-point of \(V_{n+1}\), the two schemes have different rates of convergence. From Table 1 in [8], we find that the iterative sequence in (1.5) converges more rapidly than that in (3.12). From this point of view, it is not easy to conclude which of (1.3) and (3.1) is better.
References
Pascali, D., Sburlan, S.: Nonlinear Mappings and Monotone Type. Sijthoff & Noordhoff, Alphen aan den Rijn (1978)
Zhang, J.L., Su, Y.F., Cheng, Q.Q.: Simple projection algorithm for a countable family of weak relatively nonexpansive mappings and applications. Fixed Point Theory Appl. 2012, Article ID 205 (2012)
Alber, Y.I.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Kartsatos, A.G. (ed.) Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lecture Notes in Pure and Applied Mathematics, vol. 178, pp. 15–50. Dekker, New York (1996)
Wei, L., Su, Y.F., Zhou, H.Y.: Iterative convergence theorems for maximal monotone operators and relatively nonexpansive mappings. Appl. Math. J. Chin. Univ. Ser. B 23(3), 319–325 (2008)
Klin-eam, C., Suantai, S., Takahashi, W.: Strong convergence of generalized projection algorithms for nonlinear operators. Abstr. Appl. Anal. 2009, Article ID 649831 (2009)
Wei, L., Su, Y.G., Zhou, H.Y.: New iterative schemes for strongly relatively nonexpansive mappings and maximal monotone operators. Appl. Math. J. Chin. Univ. Ser. B 25(2), 199–208 (2010)
Inoue, G., Takahashi, W., Zembayashi, K.: Strong convergence theorems by hybrid methods for maximal monotone operator and relatively nonexpansive mappings in Banach spaces. J. Convex Anal. 16, 791–806 (2009)
Wei, L., Agarwal, R.P.: New construction and proof techniques of projection algorithm for countable maximal monotone mappings and weakly relatively non-expansive mappings in a Banach space. J. Inequal. Appl. 2018, Article ID 64 (2018)
Agarwal, R.P., O’Regan, D., Sahu, D.R.: Fixed Point Theory for Lipschitz-Type Mappings with Applications. Springer, Berlin (2008)
Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938–945 (2002)
Wei, L., Tan, R.L.: Iterative schemes for finite families of maximal monotone operators based on resolvents. Abstr. Appl. Anal. 2014, Article ID 451279 (2014)
Mosco, U.: Convergence of convex sets and of solutions of variational inequalities. Adv. Math. 3(4), 510–585 (1969)
Tsukada, M.: Convergence of best approximations in a smooth Banach space. J. Approx. Theory 40, 301–309 (1984)
Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal. 16(12), 1127–1138 (1991)
Nilsrakoo, W., Saejung, S.: On the fixed-point set of a family of relatively nonexpansive and generalized nonexpansive mappings. Fixed Point Theory Appl. 2010, Article ID 414232 (2010)
Funding
Supported by the National Natural Science Foundation of China (11071053), Natural Science Foundation of Hebei Province (A2014207010), Key Project of Science and Research of Hebei Educational Department (ZD2016024), Key Project of Science and Research of Hebei University of Economics and Business (2016KYZ07), Youth Project of Science and Research of Hebei University of Economics and Business (2017KYQ09), and Youth Project of Science and Research of Hebei Educational Department (QN2017328).
Contributions
All authors contributed equally to the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Wei, L., Agarwal, R.P. Simple form of a projection set in hybrid iterative schemes for non-linear mappings, application of inequalities and computational experiments. J Inequal Appl 2018, 179 (2018). https://doi.org/10.1186/s13660-018-1774-z
Keywords
- Iterative scheme
- Lyapunov functional
- Metric projection
- Maximal monotone operator
- Weakly relatively non-expansive mapping