 Research
 Open access
Strong convergence of projection methods for a countable family of nonexpansive mappings and applications to constrained convex minimization problems
Journal of Inequalities and Applications volume 2013, Article number: 546 (2013)
Abstract
In this paper, we introduce a general algorithm to approximate common fixed points for a countable family of nonexpansive mappings in a real Hilbert space, which solves a corresponding variational inequality. Furthermore, we propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem. Our results improve and generalize some known results in the current literature.
MSC: 47H10, 37C25.
1 Introduction
A viscosity approximation method for finding fixed points of nonexpansive mappings was first proposed by Moudafi in 2000 [1]. He proved the convergence of the sequence generated by the proposed method. In 2004, Xu [2] proved the strong convergence of the sequence generated by the viscosity approximation method to a unique solution of a certain variational inequality problem defined on the set of fixed points of a nonexpansive map.
It is well known that the iterative methods for finding fixed points of nonexpansive mappings can also be used to solve a convex minimization problem; see, for example, [3–5] and the references therein. In 2003, Xu [4] introduced an iterative method for computing the approximate solutions of a quadratic minimization problem over the set of fixed points of a nonexpansive mapping defined on a real Hilbert space. He proved that the sequence generated by the proposed method converges strongly to the unique solution of the quadratic minimization problem. By combining the iterative schemes proposed by Moudafi [1] and Xu [4], Marino and Xu [6] considered a general iterative method and proved that the sequence generated by the method converges strongly to a unique solution of a certain variational inequality problem, which is the optimality condition for a particular minimization problem. Liu [7] and Qin et al. [8] also studied some applications of the iterative method considered in [6]. Yamada [5] introduced the so-called hybrid steepest-descent method for solving the variational inequality problem and also studied the convergence of the sequence generated by the proposed method. Very recently, Tian [9] combined the iterative methods of [5, 6] in order to propose implicit and explicit schemes for constructing a fixed point of a nonexpansive mapping T defined on a real Hilbert space. He also proved the strong convergence of these two schemes to a fixed point of T under appropriate conditions. Related iterative methods for solving fixed point problems, variational inequalities and optimization problems can be found in [10–15] and the references therein.
On the other hand, the gradient-projection method for finding the approximate solutions of the constrained convex minimization problem is well known; see, for example, [16] and the references therein. The convergence of the sequence generated by this method depends on the behavior of the gradient of the objective function. If the gradient fails to be strongly monotone, then the strong convergence of the sequence generated by the gradient-projection method may fail. Very recently, Xu [17] gave an operator-oriented approach as an alternative to the gradient-projection method and to the relaxed gradient-projection algorithm, namely, an averaged mapping approach. Moreover, he constructed a counterexample which shows that the sequence generated by the gradient-projection method does not converge strongly in the setting of an infinite-dimensional space. He also presented two modifications of gradient-projection algorithms which are shown to have strong convergence. Further, he regularized the minimization problem to derive an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of the constrained convex minimization problem in the consistent case. The related methods and results can be found in [18–26] and the references therein. By virtue of projections, the authors in [27] extended the implicit and explicit iterative schemes proposed in [9].
The purpose of this paper is to introduce a general algorithm to approximate common fixed points for a countable family of nonexpansive mappings in a real Hilbert space. We prove the strong convergence theorems for the sequences produced by the methods to a common fixed point of a countable family of nonexpansive mappings which is the unique solution of a corresponding variational inequality. We also propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem. Our results improve and generalize some known results in the current literature; see, for example, [27, 28].
2 Preliminaries
Throughout this paper, we denote the set of real numbers and the set of positive integers by ℝ and ℕ, respectively. Let H be a real Hilbert space, and let C be a nonempty subset of H. Let T:C\to C be a mapping. We denote by F(T) the set of fixed points of T, i.e., F(T)=\{x\in C:Tx=x\}.
Definition 2.1 (i) A mapping T:C\to C is said to be nonexpansive if \parallel Tx-Ty\parallel \le \parallel x-y\parallel for all x, y in C. T is said to be quasi-nonexpansive if F(T)\ne \mathrm{\varnothing} and \parallel Tx-y\parallel \le \parallel x-y\parallel for all x in C and y in F(T).

(ii)
A mapping T:H\to H is said to be an averaged mapping [29] if it can be written as the average of the identity I and a nonexpansive mapping; that is,
T=(1-\alpha )I+\alpha S,(2.1)
where α is a number in (0,1), and S:H\to H is nonexpansive. More precisely, when (2.1) holds, we say that T is α-averaged.

(iii)
A mapping B:C\to H is said to be L-Lipschitzian if \parallel Bx-By\parallel \le L\parallel x-y\parallel, \mathrm{\forall}x,y\in C, where L\ge 0 is a constant. In particular, if L\in [0,1), then B is called a contraction on C; if L=1, then B is nonexpansive.

(iv)
A mapping V:D(V)\subset H\to H is called firmly nonexpansive [30] if
\langle x-y,Vx-Vy\rangle \ge {\parallel Vx-Vy\parallel}^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall}x,y\in D(V),
where D(V) is the domain of V.
Clearly, a firmly nonexpansive mapping is a \frac{1}{2}-averaged map.
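These properties can be checked numerically. The sketch below is our own illustration, not part of the paper: it takes the metric projection onto a box in R^3 (a standard firmly nonexpansive map) and verifies both the firm nonexpansiveness inequality and the fact that a firmly nonexpansive T is 1/2-averaged, i.e., S = 2T - I is nonexpansive.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_box(x, lo=-1.0, hi=1.0):
    """Metric projection onto the box [lo, hi]^n (firmly nonexpansive)."""
    return np.clip(x, lo, hi)

# Check <x - y, Px - Py> >= ||Px - Py||^2  (firm nonexpansiveness)
ok_firm = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    Px, Py = proj_box(x), proj_box(y)
    if np.dot(x - y, Px - Py) < np.dot(Px - Py, Px - Py) - 1e-12:
        ok_firm = False

# A firmly nonexpansive T is 1/2-averaged, so S = 2T - I must be nonexpansive.
ok_avg = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    Sx, Sy = 2 * proj_box(x) - x, 2 * proj_box(y) - y
    if np.linalg.norm(Sx - Sy) > np.linalg.norm(x - y) + 1e-12:
        ok_avg = False

print(ok_firm, ok_avg)
```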
Let H be a real Hilbert space, and let S,T,B:H\to H be mappings.

(a)
If T=(1-\alpha )S+\alpha B for some α in (0,1), and if S is averaged, and B is nonexpansive, then T is averaged.

(b)
T is firmly nonexpansive if and only if the complement I-T is firmly nonexpansive.

(c)
If T=(1-\alpha )S+\alpha B for some α in (0,1), and if S is firmly nonexpansive, and B is nonexpansive, then T is averaged.
Recall that the metric (or nearest point) projection from H onto C is the mapping {P}_{C}:H\to C, which assigns to each point x in H the unique point {P}_{C}x in C satisfying the property
\parallel x-{P}_{C}x\parallel =\underset{y\in C}{min}\parallel x-y\parallel .
Lemma 2.1 [30]
Let H be a real Hilbert space. For given x in H:

(a)
z={P}_{C}x if and only if
\langle z-x,z-y\rangle \le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall}y\in C.
(b)
z={P}_{C}x if and only if
{\parallel x-z\parallel}^{2}\le {\parallel x-y\parallel}^{2}-{\parallel y-z\parallel}^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall}y\in C.
(c)
\langle {P}_{C}x-{P}_{C}y,x-y\rangle \ge {\parallel {P}_{C}x-{P}_{C}y\parallel}^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall}x,y\in H.
Consequently, {P}_{C} is nonexpansive and monotone.
In general, a projection mapping is firmly nonexpansive and, thus, a \frac{1}{2}-averaged map.
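The variational characterization in Lemma 2.1(a) can also be tested numerically. The following sketch (ours, with the closed unit ball as an arbitrary choice of C) checks that z = P_C x satisfies ⟨z - x, z - y⟩ ≤ 0 for points y of C.

```python
import numpy as np

rng = np.random.default_rng(1)

def proj_ball(x):
    """Metric projection onto the closed unit ball in R^n."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

ok = True
for _ in range(1000):
    x = rng.normal(size=4) * 2.0
    z = proj_ball(x)                          # z = P_C x
    y = proj_ball(rng.normal(size=4) * 2.0)   # an arbitrary point of C
    # Lemma 2.1(a): <z - x, z - y> <= 0 for every y in C
    if np.dot(z - x, z - y) > 1e-12:
        ok = False

print(ok)
```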
Lemma 2.2 The following inequality holds in an inner product space X:
{\parallel x+y\parallel}^{2}\le {\parallel x\parallel}^{2}+2\langle y,x+y\rangle ,\phantom{\rule{1em}{0ex}}\mathrm{\forall}x,y\in X.
Lemma 2.3 [33]
In a Hilbert space H, we have
Lemma 2.4 (Demiclosedness principle [33])
Let T:C\to C be a nonexpansive mapping with F(T)\ne \mathrm{\varnothing}. If \{{x}_{n}\} is a sequence in C that converges weakly to x, and if \{(I-T){x}_{n}\} converges strongly to y, then (I-T)x=y; in particular, if y=0, then x\in F(T).
Definition 2.2 Let H be a real Hilbert space. A nonlinear operator T with domain D(T)\subset H and range R(T)\subset H is said to be:

(a)
monotone if
\langle x-y,Tx-Ty\rangle \ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall}x,y\in D(T),
(b)
η-strongly monotone if there exists \eta >0 such that
\langle x-y,Tx-Ty\rangle \ge \eta {\parallel x-y\parallel}^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall}x,y\in D(T),
(c)
α-inverse strongly monotone (for short, α-ism) if there exists \alpha >0 such that
\langle x-y,Tx-Ty\rangle \ge \alpha {\parallel Tx-Ty\parallel}^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall}x,y\in D(T).
It can easily be seen that (i) if T is nonexpansive, then I-T is monotone; (ii) the projection map {P}_{C} is 1-ism. Inverse strongly monotone (also referred to as co-coercive) operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems; see, for example, [34, 35] and the references therein.
Proposition 2.2 [31]
Let T:H\to H be an operator.

(a)
T is nonexpansive if and only if the complement I-T is \frac{1}{2}-ism.

(b)
If T is v-ism, then γT is \frac{v}{\gamma}-ism for \gamma >0.

(c)
T is averaged if and only if the complement I-T is v-ism for some v>\frac{1}{2}. Indeed, for α in (0,1), T is α-averaged if and only if I-T is \frac{1}{2\alpha}-ism.
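Part (a) of this proposition can be illustrated numerically. In the sketch below (ours, not from the paper) we take the componentwise sine map, which is nonexpansive since |sin a - sin b| ≤ |a - b|, and verify that its complement I - T satisfies the (1/2)-ism inequality.

```python
import numpy as np

rng = np.random.default_rng(2)

T = np.sin        # componentwise sine: 1-Lipschitz, hence nonexpansive
ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    u = (x - T(x)) - (y - T(y))      # (I - T)x - (I - T)y
    # Proposition 2.2(a): <x - y, u> >= (1/2)*||u||^2
    if np.dot(x - y, u) < 0.5 * np.dot(u, u) - 1e-12:
        ok = False

print(ok)
```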
Lemma 2.5 [27]
Let B:C\to H be an L-Lipschitzian mapping with constant L\ge 0, and let A:C\to H be a κ-Lipschitzian and η-strongly monotone operator with constants \kappa ,\eta >0. Then for 0\le \gamma L<\mu \eta, we have
\langle (\mu A-\gamma B)x-(\mu A-\gamma B)y,x-y\rangle \ge (\mu \eta -\gamma L){\parallel x-y\parallel}^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall}x,y\in C.
That is, \mu A-\gamma B is strongly monotone with coefficient \mu \eta -\gamma L.
The following lemma plays a key role in proving strong convergence of our iterative schemes.
Lemma 2.6 [[5], Lemma 3.1]
Suppose that \lambda \in (0,1) and \mu ,\kappa ,\eta >0. Let A:C\to H be a κ-Lipschitzian and η-strongly monotone operator on C. In association with a nonexpansive mapping T:C\to C, define the mapping {T}^{\lambda}:C\to H by
{T}^{\lambda}x:=Tx-\lambda \mu A(Tx),\phantom{\rule{1em}{0ex}}x\in C.
Then {T}^{\lambda} is a contraction provided \mu <\frac{2\eta}{{\kappa}^{2}}, that is,
\parallel {T}^{\lambda}x-{T}^{\lambda}y\parallel \le (1-\lambda \tau )\parallel x-y\parallel ,\phantom{\rule{1em}{0ex}}\mathrm{\forall}x,y\in C,
where \tau =1-\sqrt{1-\mu (2\eta -\mu {\kappa}^{2})}\in (0,1].
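Lemma 2.6 can be sanity-checked on a toy instance (our own choice, not from the paper): a diagonal, strongly monotone A on R^3, with T taken as the identity. The observed contraction factor should not exceed 1 - λτ.

```python
import numpy as np

# Diagonal A: kappa = Lipschitz constant (max entry), eta = strong
# monotonicity modulus (min entry); T = I is nonexpansive.
a = np.array([1.0, 2.0, 4.0])
kappa, eta = a.max(), a.min()
mu = 1.0 / kappa**2                      # any mu in (0, 2*eta/kappa**2) = (0, 0.125)
tau = 1 - np.sqrt(1 - mu * (2 * eta - mu * kappa**2))

lam = 0.7                                # lambda in (0, 1)
rng = np.random.default_rng(3)
worst = 0.0
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    # T^lambda x = Tx - lam*mu*A(Tx), here with T = I
    dx = (x - lam * mu * a * x) - (y - lam * mu * a * y)
    worst = max(worst, np.linalg.norm(dx) / np.linalg.norm(x - y))

print(worst <= 1 - lam * tau)
```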
Let A:C\to H be a κ-Lipschitzian and η-strongly monotone operator with constants \kappa ,\eta >0. Let B:C\to H be an L-Lipschitzian mapping with constant L\ge 0. Assume that T:C\to C is a nonexpansive mapping with F(T)\ne \mathrm{\varnothing}. Let 0<\mu <\frac{2\eta}{{\kappa}^{2}} and 0\le \gamma L<\tau, where \tau =1-\sqrt{1-\mu (2\eta -\mu {\kappa}^{2})}. Let the net {\{{x}_{t}\}}_{t\in (0,1)} be generated by the following implicit scheme:
{x}_{t}={P}_{C}[t\gamma B{x}_{t}+(I-t\mu A)T{x}_{t}].(2.2)
Then {\{{x}_{t}\}}_{t\in (0,1)} converges strongly to a fixed point \tilde{x} of T, which solves the variational inequality
\langle (\mu A-\gamma B)\tilde{x},\tilde{x}-z\rangle \le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall}z\in F(T).(2.3)
Let the sequence {\{{x}_{n}\}}_{n\in \mathbb{N}} be generated by the following explicit scheme:
Then {\{{x}_{n}\}}_{n\in \mathbb{N}} converges strongly to a fixed point \tilde{x} of T, which is also a solution of the variational inequality (2.3); see [27] for more details.
Consider a self-mapping {S}_{t} on C defined by
{S}_{t}x:={P}_{C}[t\gamma Bx+(I-t\mu A)Tx],\phantom{\rule{1em}{0ex}}x\in C.
Then {S}_{t} is a contraction and has a unique fixed point in C, which uniquely solves the fixed point equation (2.2); see [27] for more details.
Proposition 2.3 [[27], Proposition 3.1]
Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let A:C\to H be a κ-Lipschitzian and η-strongly accretive operator with constants \kappa ,\eta >0. Let B:C\to H be an L-Lipschitzian mapping with constant L\ge 0. Assume that T:C\to C is a nonexpansive mapping with F(T)\ne \mathrm{\varnothing}. Let 0<\mu <\frac{2\eta}{{\kappa}^{2}} and 0\le \gamma L<\tau, where \tau =1-\sqrt{1-\mu (2\eta -\mu {\kappa}^{2})}. For each t in (0,1), let {x}_{t} denote a unique solution of the fixed point equation (2.2). Then, the following properties hold for the net {\{{x}_{t}\}}_{t\in (0,1)}:

(1)
{\{{x}_{t}\}}_{t\in (0,1)} is bounded;

(2)
{lim}_{t\to 0}\parallel {x}_{t}-T{x}_{t}\parallel =0;

(3)
t\mapsto {x}_{t} defines a continuous curve from (0,1) into C.
Theorem 2.1 [[27], Theorem 3.1]
Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let A:C\to H be a κ-Lipschitzian and η-strongly accretive operator with constants \kappa ,\eta >0. Let B:C\to H be an L-Lipschitzian mapping with constant L\ge 0. Assume T:C\to C is a nonexpansive mapping with F(T)\ne \mathrm{\varnothing}. Let 0<\mu <\frac{2\eta}{{\kappa}^{2}} and 0\le \gamma L<\tau, where \tau =1-\sqrt{1-\mu (2\eta -\mu {\kappa}^{2})}. For each t in (0,1), let \{{x}_{t}\} denote the unique solution of the fixed point equation (2.2). Then the net \{{x}_{t}\} converges strongly, as t\to 0, to a fixed point \tilde{x} of T, which solves the variational inequality (2.3), or equivalently, {P}_{F(T)}(I-\mu A+\gamma B)\tilde{x}=\tilde{x}.
Lemma 2.7 [36]
Let \{{s}_{n}\}, \{{\gamma}_{n}\} be sequences of nonnegative real numbers, and let \{{\delta}_{n}\} be a sequence of real numbers, satisfying the inequality:
{s}_{n+1}\le (1-{\gamma}_{n}){s}_{n}+{\gamma}_{n}{\delta}_{n},\phantom{\rule{1em}{0ex}}n\ge 0.
Suppose that \{{\gamma}_{n}\} and \{{\delta}_{n}\} satisfy the conditions:

(i)
\{{\gamma}_{n}\}\subset [0,1] and {\sum}_{n=0}^{\mathrm{\infty}}{\gamma}_{n}=\mathrm{\infty}, or equivalently, {\prod}_{n=0}^{\mathrm{\infty}}(1-{\gamma}_{n})=0;

(ii)
{lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\delta}_{n}\le 0, or
(ii)′ {\sum}_{n=0}^{\mathrm{\infty}}|{\gamma}_{n}{\delta}_{n}|<\mathrm{\infty}.
Then {lim}_{n\to \mathrm{\infty}}{s}_{n}=0.
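A quick numerical illustration of Lemma 2.7 (ours, not part of the paper): iterating the recursion with γ_n = 1/(n+2), so that Σγ_n diverges, and δ_n → 0 drives s_n to 0 regardless of the starting value.

```python
import math

# Simulate s_{n+1} = (1 - g_n) s_n + g_n d_n with g_n = 1/(n+2) and d_n -> 0.
s = 5.0
for n in range(200000):
    g = 1.0 / (n + 2)            # gamma_n in (0,1), sum diverges
    d = 1.0 / math.sqrt(n + 1)   # delta_n -> 0, so limsup delta_n <= 0
    s = (1 - g) * s + g * d

print(s)   # tends to 0; well below 10^-2 after 2*10^5 steps
```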
Lemma 2.8 [37]
Let \{{\beta}_{n}\} be a sequence of real numbers with
0<{lim\hspace{0.17em}inf}_{n\to \mathrm{\infty}}{\beta}_{n}\le {lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\beta}_{n}<1.
Let \{{x}_{n}\} and \{{z}_{n}\} be two sequences in a Banach space E such that
{x}_{n+1}=(1-{\beta}_{n}){x}_{n}+{\beta}_{n}{z}_{n},\phantom{\rule{1em}{0ex}}\mathrm{\forall}n\in \mathbb{N}.
If
{lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}(\parallel {z}_{n+1}-{z}_{n}\parallel -\parallel {x}_{n+1}-{x}_{n}\parallel )\le 0,
then {lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-{z}_{n}\parallel =0.
Let C be a subset of a real Banach space E, and let {\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}} be a family of mappings of C such that {\bigcap}_{n=1}^{\mathrm{\infty}}F({T}_{n})\ne \mathrm{\varnothing}. Then {\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}} is said to satisfy the AKTT-condition [38] if for each bounded subset D of C, we have
\sum _{n=1}^{\mathrm{\infty}}sup\{\parallel {T}_{n+1}z-{T}_{n}z\parallel :z\in D\}<\mathrm{\infty}.
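As a toy illustration of the AKTT-condition (our own example, not from the paper), consider the family T_n x = (1 - 2^{-n})x. Each T_n is nonexpansive with F(T_n) = {0}, and on a bounded set D contained in a ball of radius r one has sup_{z∈D} ||T_{n+1}z - T_n z|| = 2^{-(n+1)} r, so the required series converges (its sum is bounded by r/2):

```python
# Partial sums of sup-gaps for T_n x = (1 - 2**-n) x on a ball of radius r.
# sup_{||z|| <= r} ||T_{n+1} z - T_n z|| = |2**-n - 2**-(n+1)| * r = 2**-(n+1) * r
r = 10.0
gaps = [2.0 ** -(n + 1) * r for n in range(1, 60)]
total = sum(gaps)

print(total < r / 2 + 1e-9)   # the series is summable, bounded by r/2
```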
Lemma 2.9 [38]
Let C be a nonempty subset of a real Banach space E, and let {\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}} be a family of mappings of C into itself, which satisfies the AKTT-condition. Then for each x in C, we have that {\{{T}_{n}x\}}_{n=1}^{\mathrm{\infty}} converges strongly to a point in C. Moreover, let the mapping T be defined by
Tx=\underset{n\to \mathrm{\infty}}{lim}{T}_{n}x,\phantom{\rule{1em}{0ex}}x\in C.
Then for each bounded subset D of C, we have
\underset{n\to \mathrm{\infty}}{lim}sup\{\parallel Tz-{T}_{n}z\parallel :z\in D\}=0.
In the sequel, we will write that ({\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}},T) satisfies the AKTT-condition if {\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}} satisfies the AKTT-condition, and T is defined by Lemma 2.9.
We end this section with the following simple examples of mappings satisfying the AKTT-condition (see also Lemma 5.2).
Example 2.1 (i) Let E be any Banach space. For any n\in \mathbb{N}, let a mapping {T}_{n}:E\to E be defined by
Then, {T}_{n} is a nonexpansive mapping for each n\in \mathbb{N}. It can easily be seen that ({\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}},T) satisfies the AKTT-condition, where T(x)=0 for all x\in E.

(ii)
Let E={l}^{2}, where
\begin{array}{c}{l}^{2}=\{\sigma =({\sigma}_{1},{\sigma}_{2},\dots ,{\sigma}_{n},\dots ):\sum _{n=1}^{\mathrm{\infty}}{|{\sigma}_{n}|}^{2}<\mathrm{\infty}\},\phantom{\rule{1em}{0ex}}\parallel \sigma \parallel ={\left(\sum _{n=1}^{\mathrm{\infty}}{|{\sigma}_{n}|}^{2}\right)}^{\frac{1}{2}},\phantom{\rule{1em}{0ex}}\mathrm{\forall}\sigma \in {l}^{2},\hfill \\ \langle \sigma ,\eta \rangle =\sum _{n=1}^{\mathrm{\infty}}{\sigma}_{n}{\eta}_{n},\phantom{\rule{1em}{0ex}}\mathrm{\forall}\sigma =({\sigma}_{1},{\sigma}_{2},\dots ,{\sigma}_{n},\dots ),\eta =({\eta}_{1},{\eta}_{2},\dots ,{\eta}_{n},\dots )\in {l}^{2}.\hfill \end{array}
Let {\{{x}_{n}\}}_{n\in \mathbb{N}\cup \{0\}}\subset E be a sequence defined by
where
for all n\in \mathbb{N}. It is clear that the sequence {\{{x}_{n}\}}_{n\in \mathbb{N}} converges weakly to {x}_{0}. Indeed, for any \mathrm{\Lambda}=({\lambda}_{1},{\lambda}_{2},\dots ,{\lambda}_{n},\dots )\in {l}^{2}={({l}^{2})}^{\ast}, we have
as n\to \mathrm{\infty}. It is also obvious that \parallel {x}_{n}-{x}_{m}\parallel =\sqrt{2} for any n\ne m with n, m sufficiently large. Thus, {\{{x}_{n}\}}_{n\in \mathbb{N}} is not a Cauchy sequence. We define a countable family of mappings {T}_{j}:E\to E by
for all j\ge 1 and n\ge 0. It is clear that F({T}_{j})=\{0\} for all j\ge 1. It is obvious that {T}_{j} is a quasi-nonexpansive mapping for each j\in \mathbb{N}. Thus, {\{{T}_{j}\}}_{j\in \mathbb{N}} is a countable family of quasi-nonexpansive mappings.
Let Tx={lim}_{j\to \mathrm{\infty}}{T}_{j}x for all x\in E. It is easy to see that
Then, we obtain that T is a quasinonexpansive mapping with F(T)=\{0\}=\tilde{F}(T). Let D be a bounded subset of E. Then there exists r>0 such that D\subset {B}_{r}=\{z\in E:\parallel z\parallel <r\}. On the other hand, for any j\in \mathbb{N}, we have
Furthermore, we have
Therefore, ({\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}},T) satisfies the AKTT-condition.
3 Fixed point and convergence theorems
Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let {P}_{C} be the metric (or nearest point) projection from H onto C.
Theorem 3.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume {\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}} is a sequence of nonexpansive mappings from C into itself such that {\bigcap}_{n=1}^{\mathrm{\infty}}F({T}_{n})\ne \mathrm{\varnothing}. Suppose, in addition, that T:C\to C is a nonexpansive mapping such that ({\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}},T) satisfies the AKTT-condition, and S:C\to C is a nonexpansive mapping with F:={\bigcap}_{n=1}^{\mathrm{\infty}}F({T}_{n})\cap F(S)\ne \mathrm{\varnothing}. Let A:C\to H be a κ-Lipschitzian and η-strongly monotone operator with constants \kappa ,\eta >0. Let B:C\to H be an L-Lipschitzian mapping with constant L\ge 0. Let 0<\mu <\frac{2\eta}{{\kappa}^{2}} and 0\le \gamma L<\tau, where \tau =1-\sqrt{1-\mu (2\eta -\mu {\kappa}^{2})}.
For arbitrarily given {x}_{1} in C, let the sequence \{{x}_{n}\} be generated iteratively by
where \{{\alpha}_{n}\}, \{{\beta}_{n}\} are two real sequences in (0,1) satisfying the following control conditions:
Then the sequence \{{x}_{n}\} converges strongly to {x}^{\ast}\in F(TS), which solves the variational inequality
\langle (\mu A-\gamma B){x}^{\ast},{x}^{\ast}-z\rangle \le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall}z\in F(TS).(3.3)
Proof We divide the proof into several steps.
Step I. We claim that the sequence \{{x}_{n}\} is bounded. Let p in F be fixed. In view of Lemma 2.6 we conclude that
This together with (3.1)-(3.3) implies that
Since {T}_{n} is nonexpansive, for all n in ℕ, it follows from (3.1) and (3.4) that
By induction, we have that \{{x}_{n}\} is bounded. This implies that the sequences \{A{x}_{n}\}, \{B{x}_{n}\}, \{{y}_{n}\}, \{S{y}_{n}\} and \{{T}_{n}S{y}_{n}\} are bounded too. Let
and set
Then we have that D is a bounded subset of H and \{{x}_{n}\},\{A{x}_{n}\},\{B{x}_{n}\},\{{y}_{n}\},\{{T}_{n}S{y}_{n}\}\subset D.
Step II. We claim that {lim}_{n\to \mathrm{\infty}}\parallel {y}_{n}-TS{y}_{n}\parallel =0. In view of (3.1), we obtain
Since {lim}_{n\to \mathrm{\infty}}{\alpha}_{n}=0, it follows from (3.6) that
In view of Lemma 2.6, we conclude that
This implies that
Next, we show that {lim}_{n\to \mathrm{\infty}}\parallel {x}_{n+1}-{x}_{n}\parallel =0. For this purpose, we denote a sequence \{{z}_{n}\} by {z}_{n}={T}_{n}S{y}_{n}. It follows from (3.8) that
This implies that
Since {lim}_{n\to \mathrm{\infty}}{\alpha}_{n}=0, in view of the AKTT-condition and (3.2)(a), we conclude that
Utilizing Lemma 2.8, we deduce that
It follows from (3.1) and (3.2)(b) that
On the other hand, we have
This implies that
In view of (3.7), (3.11) and (3.12), we obtain
By the triangle inequality, we obtain
In view of the AKTT-condition, (3.13) and (3.14), we deduce that
Step III. We prove that there exists {x}^{\ast} in F(TS) such that
For each t in (0,1), we define the mapping {S}_{t}:C\to C by
Since S, T and I-t\mu A are nonexpansive mappings for each t in (0,1), in view of (2.5), we conclude that {S}_{t} is a contraction for each t in (0,1), and hence by the Banach contraction principle, there exists a unique fixed point {x}_{t} in C such that {S}_{t}({x}_{t})={x}_{t}. Thus, we have
Next, we show that {lim}_{t\to 0}{x}_{t}:={x}^{\ast} exists. We first show that \{{x}_{t}\} is bounded. To this end, let p in F be fixed. In view of Lemma 2.6, we obtain
This implies that
Thus, we have that {\{{x}_{t}\}}_{t\in (0,1)} is bounded and so are {\{ATS{x}_{t}\}}_{t\in (0,1)} and {\{(\gamma B\mu A){x}_{t}\}}_{t\in (0,1)}. In view of (3.15), we obtain
This implies that
Using the techniques in the proof of Theorem 2.1, we see that the variational inequality (3.3) has a unique solution \tilde{x}\in F(TS). We show that {x}_{t}\to \tilde{x} as t\to 0. To this end, set
Then we have {x}_{t}={P}_{C}{y}_{t}, and for any given z in F(TS),
Since {P}_{C} is the metric projection from H onto C, for each z in F(TS), we have
Exploiting Lemma 2.1 and (3.17), we obtain
It follows from (3.18) that
This implies that
Let \{{t}_{n}\}\subset (0,1) be such that {t}_{n}\to {0}^{+} as n\to \mathrm{\infty}. Let {x}_{n}^{\ast}:={x}_{{t}_{n}}. It follows from (3.16) that {lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}^{\ast}-TS{x}_{n}^{\ast}\parallel =0. The boundedness of \{{x}_{t}\} implies that there exists {x}^{\ast} in C such that {x}_{n}^{\ast}\rightharpoonup {x}^{\ast} (i.e., {x}_{n}^{\ast} converges weakly to {x}^{\ast}) as n\to \mathrm{\infty}. In view of Lemma 2.4, we deduce that {x}^{\ast}\in F(TS). Since {x}_{n}^{\ast}\rightharpoonup {x}^{\ast} as n\to \mathrm{\infty}, it follows from (3.19) that {lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}^{\ast}-{x}^{\ast}\parallel =0. Thus, {lim}_{t\to {0}^{+}}{x}_{t}={x}^{\ast} is well defined. Next, we show that {x}^{\ast} solves the variational inequality (3.3). Observe that
Thus, we have
Since TS is nonexpansive, we conclude that ITS is monotone. The property of metric projection implies that
Replacing t by {t}_{n} in (3.20), letting n\to \mathrm{\infty}, and noticing that {\{{x}_{t}z\}}_{t\in (0,1)} is bounded for z in F(TS), with (3.16), we have
Thus, we have that {x}^{\ast} in F(TS) is a solution of the variational inequality (3.3). Consequently, {x}^{\ast}=\tilde{x} by uniqueness. Therefore, {x}_{t}\to \tilde{x} as t\to 0. The variational inequality (3.3) can be written as
So, in terms of Lemma 2.1, it is equivalent to the following fixed point equation:
Since \{{y}_{n}\} is bounded, for any subsequence of \{{y}_{n}\}, there exists a further subsequence \{{y}_{{n}_{i}}\} such that {y}_{{n}_{i}}\rightharpoonup u in C. In view of Lemma 2.4 and Step II, we conclude that u\in F(TS). This together with (3.21) implies that
Step IV. We claim that {lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-{x}^{\ast}\parallel =0.
For each n in \mathbb{N}\cup \{0\}, we set
and observe that {y}_{n}={P}_{C}{v}_{n}. Then, by Lemmas 2.1 and 2.5, we obtain
This implies that
where
In view of (3.22) and (3.23), we conclude that
where {\gamma}_{n}={\beta}_{n}{\alpha}_{n}(\tau -\gamma L). It is easy to show that {lim}_{n\to \mathrm{\infty}}{\gamma}_{n}=0, {\sum}_{n=0}^{\mathrm{\infty}}{\gamma}_{n}=\mathrm{\infty} and {lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\xi}_{n}\le 0. Hence, in view of Lemma 2.7 and (3.24), we conclude that the sequence \{{x}_{n}\} converges strongly to {x}^{\ast} in F(TS). □
Remark 3.1 Theorem 3.1 improves and extends [[28], Theorems 3.1 and 3.2] in the following aspects.

(i)
The identity mapping I is replaced by a (possibly nonself) operator A:C\to H, where A is a κ-Lipschitzian and η-strongly monotone mapping.

(ii)
In order to find a common fixed point of a countable family of nonexpansive self-mappings {T}_{n}:C\to C, the Mann-type iterations in [[28], Theorem 3.2] are extended to develop the new Mann-type iteration (3.1).

(iii)
A new argument technique is applied in deriving Theorem 3.1. For instance, the characteristic properties (Lemma 2.1) of the metric projection {P}_{C} play an important role in proving the strong convergence of the net {\{{x}_{t}\}}_{t\in (0,1)} in Theorem 3.1.

(iv)
Whenever C=H, B=0, A=I (the identity mapping on C) and \mu =1, Theorem 3.1 reduces to [[28], Theorem 3.2]. Thus, Theorem 3.1 covers [[28], Theorems 3.1 and 3.2] as special cases.
Remark 3.2 In Theorem 3.1, it is shown that any sequence generated by the iterative step (3.1) converges strongly to the unique solution of the variational inequality problem (3.3). This variational inequality problem is more general than many variational inequality problems in the literature (see, for example, [27]) because S is an arbitrary nonexpansive mapping. Moreover, by the well-known relations between fixed points of nonexpansive mappings and variational inequalities, the solution set of (3.3) can be viewed as the fixed point set of some nonexpansive mapping V, and this mapping could then be added to the countable family of nonexpansive mappings {T}_{n}. In addition, the feasible set of the variational inequality problem (3.3) is Fix(TS), where T and S are nonexpansive mappings. For several subclasses of nonexpansive mappings, Fix(TS)=Fix(T)\cap Fix(S) holds (see, e.g., [31, 32] for averaged mappings).
4 Constrained convex minimization problems
Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Consider the following constrained convex minimization problem:
where f:C\to \mathbb{R} is a realvalued convex function. If f is Fréchet differentiable, then the gradientprojection method (for short, GPM) generates a sequence \{{x}_{n}\} using the following recursive formula:
or more generally,
where in both (4.2) and (4.3), the initial guess {x}_{0} is taken from C arbitrarily, and the parameters, λ or {\lambda}_{n}, are positive real numbers. The convergence of algorithms (4.2) and (4.3) depends on the behavior of the gradient ∇f. As a matter of fact, it is known that if ∇f is α-strongly monotone and L-Lipschitzian with constants \alpha ,L>0, then for \lambda \in (0,\frac{2\alpha}{{L}^{2}}), the operator
T:={P}_{C}(I-\lambda \mathrm{\nabla}f)(4.4)
is a contraction; hence, the sequence \{{x}_{n}\} defined by algorithm (4.2) converges in norm to the unique solution of the minimization problem (4.1). More generally, if the sequence \{{\lambda}_{n}\} is chosen to satisfy the property
0<{lim\hspace{0.17em}inf}_{n\to \mathrm{\infty}}{\lambda}_{n}\le {lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\lambda}_{n}<\frac{2\alpha}{{L}^{2}},
then the sequence \{{x}_{n}\} defined by algorithm (4.3) converges in norm to the unique minimizer of (4.1).
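As a concrete illustration (our own toy instance, not from the paper), the sketch below runs the gradient-projection iteration x_{n+1} = P_C(x_n - λ∇f(x_n)) for f(x) = ½||x - b||² over the closed unit ball; here ∇f(x) = x - b is 1-Lipschitzian and 1-strongly monotone, so any λ in (0, 2) yields norm convergence.

```python
import numpy as np

def proj_ball(x):
    """Metric projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

b = np.array([3.0, 4.0])
grad = lambda x: x - b      # gradient of f(x) = 0.5*||x - b||^2
lam = 1.0                   # step size in (0, 2*alpha/L^2) = (0, 2)

x = np.zeros(2)
for _ in range(100):
    x = proj_ball(x - lam * grad(x))   # x_{n+1} = P_C(x_n - lam*grad f(x_n))

print(x)   # converges to P_C(b) = b/||b|| = [0.6, 0.8]
```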
However, if the gradient ∇f fails to be strongly monotone, the operator T defined by (4.4) would fail to be contractive; consequently, the sequence \{{x}_{n}\} generated by algorithm (4.2) may fail to converge strongly (see [[17], Section 4]). If ∇f is Lipschitzian, then algorithms (4.2) and (4.3) can still converge in the weak topology under certain conditions.
Very recently, Xu [17] gave an alternative operator-oriented approach to algorithm (4.3); namely, an averaged mapping approach. He gave his averaged mapping approach to the gradient-projection algorithm (4.3) and the relaxed gradient-projection algorithm. Moreover, he constructed a counterexample, which shows that algorithm (4.2) does not converge in norm in an infinite-dimensional space, and also presented two modifications of gradient-projection algorithms, which are shown to have strong convergence. Further, he regularized the minimization problem (4.1) to devise an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of (4.1) in the consistent case.
Let A:C\to H be a κ-Lipschitzian and η-strongly monotone operator with constants \kappa ,\eta >0, and let B:C\to H be an l-Lipschitzian mapping with constant l\ge 0. Suppose that 0<\mu <\frac{2\eta}{{\kappa}^{2}} and 0\le \gamma l<\tau, where \tau =1-\sqrt{1-\mu (2\eta -\mu {\kappa}^{2})}. Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L>0. Motivated by the work of Xu [17], the authors of [27] introduced the following scheme that generates a net {\{{x}_{\lambda}\}}_{\lambda \in (0,\frac{2}{L})} in an implicit way:
where {T}_{\lambda} and s satisfy the following conditions:

(i)
s:=s(\lambda )=\frac{2-\lambda L}{4} for each \lambda \in (0,\frac{2}{L});

(ii)
{P}_{C}(I-\lambda \mathrm{\nabla}f)=sI+(1-s){T}_{\lambda} for each \lambda \in (0,\frac{2}{L}).
They proved that {\{{x}_{\lambda}\}}_{\lambda \in (0,\frac{2}{L})} converges strongly to a minimizer {x}^{\ast} in Ω of (4.1), which solves the variational inequality (2.3).
For a given arbitrary initial guess {x}_{0} in C and a sequence \{{\lambda}_{n}\}\subset (0,\frac{2}{L}) with {\lambda}_{n}\to \frac{2}{L}, they also proposed the following explicit scheme that generates a sequence \{{x}_{n}\} in an explicit way:
where {s}_{n}=\frac{2-{\lambda}_{n}L}{4} and {P}_{C}(I-{\lambda}_{n}\mathrm{\nabla}f)={s}_{n}I+(1-{s}_{n}){T}_{n} for each n\ge 0. It is proven in [27] that the sequence \{{x}_{n}\} converges strongly to a minimizer {x}^{\ast} in Ω of (4.1).
On the other hand, we know that {x}^{\ast} in C solves the minimization problem (4.1) if and only if {x}^{\ast} solves the fixed point equation
{x}^{\ast}={P}_{C}(I-\lambda \mathrm{\nabla}f){x}^{\ast},
where \lambda >0 is any fixed positive number. Note that ∇f being L-Lipschitzian implies that the gradient ∇f is \frac{1}{L}-ism [39], which then implies that \lambda \mathrm{\nabla}f is \frac{1}{\lambda L}-ism. So by Proposition 2.2(c), I-\lambda \mathrm{\nabla}f is \frac{\lambda L}{2}-averaged. Now since the projection {P}_{C} is \frac{1}{2}-averaged, we know that the composition {P}_{C}(I-\lambda \mathrm{\nabla}f) is \frac{2+\lambda L}{4}-averaged for each \lambda \in (0,\frac{2}{L}). Hence, we can write
{P}_{C}(I-\lambda \mathrm{\nabla}f)=sI+(1-s){T}_{\lambda},
where {T}_{\lambda} is nonexpansive, and s:=s(\lambda )=\frac{2-\lambda L}{4}\in (0,\frac{1}{2}) for each \lambda \in (0,\frac{2}{L}). It is easy to see that F({T}_{\lambda})=F({P}_{C}(I-\lambda \mathrm{\nabla}f))=\mathrm{\Omega}.
For each fixed \lambda \in (0,\frac{2}{L}), we now consider the self-mapping
{Q}_{\lambda}x:={P}_{C}[s\gamma Bx+(I-s\mu A){T}_{\lambda}x],\phantom{\rule{1em}{0ex}}x\in C.
It is easy to see that {Q}_{\lambda} is a contraction; see [27] for more details. Thus, there exists a unique fixed point {x}_{\lambda} in C, which uniquely solves the fixed point equation (4.6).
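The averaged decomposition P_C(I - λ∇f) = sI + (1 - s)T_λ with s = (2 - λL)/4 can itself be checked numerically. The sketch below (our own toy data: a box constraint and a quadratic objective) recovers T_λ from the decomposition and verifies that it is nonexpansive.

```python
import numpy as np

rng = np.random.default_rng(4)

def proj_box(x):
    return np.clip(x, -1.0, 1.0)        # C = [-1, 1]^d

b = np.array([2.0, -3.0, 0.5])
grad = lambda x: x - b                  # gradient of 0.5*||x - b||^2, L = 1
L, lam = 1.0, 1.5                       # lam in (0, 2/L)
s = (2 - lam * L) / 4                   # s = (2 - lambda*L)/4 in (0, 1/2)

def T_lam(x):
    # Recover T_lambda from P_C(I - lam*grad f) = s*I + (1 - s)*T_lambda.
    return (proj_box(x - lam * grad(x)) - s * x) / (1 - s)

ok = True
for _ in range(1000):
    x, y = rng.normal(size=3) * 3, rng.normal(size=3) * 3
    if np.linalg.norm(T_lam(x) - T_lam(y)) > np.linalg.norm(x - y) + 1e-10:
        ok = False

print(ok)
```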
The following two results, which summarize the properties of the net {\{{x}_{\lambda}\}}_{\lambda \in (0,\frac{2}{L})}, have been proved in [27].
Proposition 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let A:C\to H be a κ-Lipschitzian and η-strongly monotone operator with constants \kappa ,\eta >0, and let B:C\to H be an l-Lipschitzian mapping with constant l\ge 0. Suppose that 0<\mu <\frac{2\eta}{{\kappa}^{2}} and 0\le \gamma l<\tau, where \tau =1-\sqrt{1-\mu (2\eta -\mu {\kappa}^{2})}. Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L>0. For each λ in (0,\frac{2}{L}), let {x}_{\lambda} denote a unique solution of the fixed point equation (4.6), where {T}_{\lambda} and s satisfy the following conditions:

(i)
s:=s(\lambda )=\frac{2-\lambda L}{4} for each λ in (0,\frac{2}{L});

(ii)
{P}_{C}(I-\lambda \mathrm{\nabla}f)=sI+(1-s){T}_{\lambda} for each λ in (0,\frac{2}{L}).
Then, the following properties for the net {\{{x}_{\lambda}\}}_{\lambda \in (0,\frac{2}{L})} hold:

(a)
{\{{x}_{\lambda}\}}_{\lambda \in (0,\frac{2}{L})} is bounded;

(b)
{lim}_{\lambda \to \frac{2}{L}}\parallel {x}_{\lambda}{T}_{\lambda}{x}_{\lambda}\parallel =0;

(c)
\lambda \mapsto {x}_{\lambda} defines a continuous curve from (0,\frac{2}{L}) into C.
Theorem 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let A:C\to H be a κ-Lipschitzian and η-strongly monotone operator with constants \kappa ,\eta >0, and let B:C\to H be an l-Lipschitzian mapping with constant l\ge 0. Suppose that 0<\mu <\frac{2\eta}{{\kappa}^{2}} and 0\le \gamma l<\tau, where \tau =1-\sqrt{1-\mu (2\eta -\mu {\kappa}^{2})}. Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L>0. For each λ in (0,\frac{2}{L}), let {x}_{\lambda} denote a unique solution of the fixed point equation (4.6), where {T}_{\lambda} and s satisfy the following conditions:

(i)
s:=s(\lambda )=\frac{2-\lambda L}{4} for each λ in (0,\frac{2}{L});

(ii)
{P}_{C}(I-\lambda \mathrm{\nabla}f)=sI+(1-s){T}_{\lambda} for each λ in (0,\frac{2}{L}).
Then the net {\{{x}_{\lambda}\}}_{\lambda \in (0,\frac{2}{L})} converges strongly, as \lambda \to \frac{2}{L}, to a minimizer {x}^{\ast} of (4.1), which solves the variational inequality (2.3); equivalently, we have {P}_{C}(I-\mu A+\gamma B){x}^{\ast}={x}^{\ast}.
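To make the fixed-point characterization behind Theorem 4.1 concrete, the following Python sketch (an illustration only, not part of the paper) minimizes a simple quadratic f over a box C by iterating the projection-gradient map {P}_{C}(I-\lambda \mathrm{\nabla}f); the data b, L, lam and the helper names proj_C, grad_f are assumptions chosen for the example.

```python
import numpy as np

# Illustration only (not from the paper): minimize f(x) = 0.5*||x - b||^2 over the
# box C = [0, 1]^2.  Here grad f(x) = x - b is L-Lipschitz with L = 1, so any step
# size lam in (0, 2/L) is admissible.
b = np.array([1.5, -0.5])   # chosen so the unconstrained minimizer lies outside C
L = 1.0
lam = 1.0                   # lam in (0, 2/L)

def proj_C(x):
    """Metric projection P_C onto the box [0, 1]^2."""
    return np.clip(x, 0.0, 1.0)

def grad_f(x):
    return x - b

x = np.zeros(2)
for _ in range(100):
    # Fixed-point iteration of the projection-gradient map P_C(I - lam*grad f)
    x = proj_C(x - lam * grad_f(x))

print(x)  # the fixed point is the constrained minimizer, clip(b, 0, 1)
```

The iteration stabilizes at the projection of b onto C, consistent with the equivalence between minimizers of (4.1) and fixed points of {P}_{C}(I-\lambda \mathrm{\nabla}f).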
Now, we are ready to propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem.
Theorem 4.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume that {\{{S}_{n}\}}_{n=1}^{\mathrm{\infty}} is a sequence of nonexpansive mappings from C into itself such that {\bigcap}_{n=1}^{\mathrm{\infty}}F({S}_{n})\ne \mathrm{\varnothing}. Suppose, in addition, that S:C\to C is a nonexpansive mapping such that ({\{{S}_{n}\}}_{n=1}^{\mathrm{\infty}},S) satisfies the AKTT-condition. Let A:C\to H be a κ-Lipschitzian and η-strongly monotone operator with constants \kappa ,\eta >0, and let B:C\to H be an l-Lipschitzian mapping with constant l\ge 0. Suppose that 0<\mu <\frac{2\eta}{{\kappa}^{2}} and 0\le \gamma l<\tau, where \tau =1-\sqrt{1-\mu (2\eta -\mu {\kappa}^{2})}. Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L>0. Let \{{\lambda}_{n}\} be a sequence in the interval (0,\frac{2}{L}) such that \{{T}_{n}\} and \{{s}_{n}\} satisfy the following conditions:

(i)
{s}_{n}=\frac{2-{\lambda}_{n}L}{4} for each n\ge 0;

(ii)
{P}_{C}(I-{\lambda}_{n}\mathrm{\nabla}f)={s}_{n}I+(1-{s}_{n}){T}_{n} for each n\ge 0;

(iii)
{s}_{n}\to 0;

(iv)
{\sum}_{n=0}^{\mathrm{\infty}}{s}_{n}=\mathrm{\infty};

(v)
{\sum}_{n=1}^{\mathrm{\infty}}|{\lambda}_{n+1}-{\lambda}_{n}|<\mathrm{\infty}.
Suppose that \{{\alpha}_{n}\}, \{{\beta}_{n}\} are two sequences of real numbers in (0,1) satisfying the following control conditions:
For given {x}_{1} in C arbitrarily, let the sequence \{{x}_{n}\} be generated by
If {lim}_{n\to \mathrm{\infty}}\parallel {y}_{n}-S{y}_{n}\parallel =0 and {\bigcap}_{n=1}^{\mathrm{\infty}}F({T}_{n})\cap F(S)\ne \mathrm{\varnothing}, then there exists a nonexpansive mapping T:C\to C such that ({\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}},T) satisfies the AKTT-condition, and \{{x}_{n}\} converges strongly to a common element {x}^{\ast} in F(TS)\cap \mathrm{\Omega}, which solves the variational inequality
Proof We divide the proof into several steps.
First, we note that

(1)
\tilde{x} in C solves the minimization problem (4.1) if and only if for each fixed \lambda >0, \tilde{x} solves the fixed point equation
\tilde{x}={P}_{C}(I-\lambda \mathrm{\nabla}f)\tilde{x}; 
(2)
{P}_{C}(I-\lambda \mathrm{\nabla}f) is \frac{2+\lambda L}{4}-averaged for each λ in (0,\frac{2}{L}); in particular, the following relation holds:
{P}_{C}(I-{\lambda}_{n}\mathrm{\nabla}f)=\frac{2-{\lambda}_{n}L}{4}I+\frac{2+{\lambda}_{n}L}{4}{T}_{n}={s}_{n}I+(1-{s}_{n}){T}_{n},\phantom{\rule{1em}{0ex}}\mathrm{\forall}n\ge 0.
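Relation (2) can be checked numerically: solving the averaged decomposition {P}_{C}(I-\lambda \mathrm{\nabla}f)=sI+(1-s){T}_{\lambda} with s=\frac{2-\lambda L}{4} for {T}_{\lambda} must yield a nonexpansive map. A minimal sketch, assuming a toy quadratic objective on a box (the names PG, T and the data b, lam are illustrative, not from the paper):

```python
import numpy as np

# Illustration only: recover T_lam from the averaged decomposition
# P_C(I - lam*grad f) = s*I + (1 - s)*T_lam with s = (2 - lam*L)/4, and check
# numerically that T_lam is nonexpansive.  Assumed setup: f(x) = 0.5*||x - b||^2
# on C = [0, 1]^2, so grad f is 1-Lipschitz (L = 1).
rng = np.random.default_rng(0)
b = np.array([0.3, 0.8])
L, lam = 1.0, 1.2
s = (2.0 - lam * L) / 4.0   # here s = 0.2

def proj_C(x):
    return np.clip(x, 0.0, 1.0)

def PG(x):
    """The projection-gradient map P_C(I - lam*grad f)."""
    return proj_C(x - lam * (x - b))

def T(x):
    """T_lam solved from PG = s*I + (1 - s)*T_lam."""
    return (PG(x) - s * x) / (1.0 - s)

# Nonexpansiveness: ||T x - T y|| <= ||x - y|| on random pairs.
ok = all(
    np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
    for x, y in (rng.standard_normal((2, 2)) for _ in range(1000))
)
print(ok)
```

Every sampled pair satisfies the nonexpansiveness inequality, as the averagedness of {P}_{C}(I-\lambda \mathrm{\nabla}f) predicts.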
Step I. We claim that the sequence \{{T}_{n}\} satisfies the AKTT-condition.
From the proof of Theorem 3.1, \{{x}_{n}\} is bounded, and so are \{B{x}_{n}\} and \{{T}_{n}{x}_{n}\}. Let D be a bounded subset of C such that \{B{x}_{n},{T}_{n}{x}_{n}:n\in \mathbb{N}\}\subset D. Since ∇f is \frac{1}{L}-ism, {P}_{C}(I-{\lambda}_{n}\mathrm{\nabla}f) is nonexpansive. It follows that for any given z in D and v in Ω,
This implies that
On the other hand, we have for any z in D and u in Ω that
Therefore,
This shows that \{A{T}_{n}z:n\in \mathbb{N},z\in D\} is bounded. We also obtain, for any z in D, that
for some appropriate constant M>0 such that
Thus, we get
Now, define a mapping T:C\to C by
Then T is a nonexpansive mapping. Since the minimization problem (4.1) is consistent, we conclude that {\bigcap}_{n=1}^{\mathrm{\infty}}F({T}_{n})\ne \mathrm{\varnothing}. Consequently, the pair ({\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}},T) satisfies the AKTT-condition.
Step II. We claim that {lim}_{n\to \mathrm{\infty}}\parallel {y}_{n}-TS{y}_{n}\parallel =0. In view of (4.8), we obtain
Since {lim}_{n\to \mathrm{\infty}}{\alpha}_{n}=0, it follows from (4.11) that
In view of Lemma 2.6, we conclude that
This implies that
Next, we show that {lim}_{n\to \mathrm{\infty}}\parallel {x}_{n+1}-{x}_{n}\parallel =0. To this end, define a sequence \{{z}_{n}\} by {z}_{n}={T}_{n}{S}_{n}{y}_{n}. It follows from (4.14) that
This implies that
Since {lim}_{n\to \mathrm{\infty}}{\alpha}_{n}=0, in view of Lemma 2.8 and (4.14), we conclude that
Using Lemma 2.8, we deduce that
Thus, we have
On the other hand, we have
It follows from (4.16) that
In view of (4.11) and (4.15), we obtain
By the triangle inequality, we obtain
In view of Lemma 2.9, (4.18) and (4.19), we deduce that
By the triangle inequality, we obtain
In view of the AKTT-condition and (4.20) and (4.21), we deduce that
Step III. We prove that
where {x}^{\ast}\in F(TS) is the same as in Theorem 3.1 and satisfies
Let \{{y}_{{n}_{k}}\} be such that
In the same manner as in Step II of the proof of Theorem 3.1, we can find u\in F(TS) such that {y}_{{n}_{k}}\rightharpoonup u as k\to \mathrm{\infty}. In view of (ii), we have that
where {s}_{n}=\frac{2-{\lambda}_{n}L}{4} for each n\ge 0. In view of (4.24) and taking into account that \parallel {y}_{n}-S{y}_{n}\parallel \to 0, we conclude that
Hence we have
Thus, from the boundedness of \{{x}_{n}\}, {s}_{n}\to 0 (equivalently, {\lambda}_{n}\to \frac{2}{L}) and \parallel {T}_{n}{y}_{n}-{y}_{n}\parallel \to 0, we conclude that
Note that the gradient ∇f is \frac{1}{L}-ism. Hence, it is known that {P}_{C}(I-\frac{2}{L}\mathrm{\nabla}f) is a nonexpansive self-mapping on C. As a matter of fact, for each x, y in C we have (see the proof of Theorem 4.1)
Since {y}_{{n}_{k}}\rightharpoonup u, by Lemma 2.4, we obtain
This shows that u\in \mathrm{\Omega}. Consequently, from (3.22) and (4.23), it follows that
As in the last part of the proof of Theorem 3.1, we obtain that {x}_{n}\to {x}^{\ast}, which completes the proof. □
Corollary 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let A:C\to H be a κ-Lipschitzian and η-strongly monotone operator with constants \kappa ,\eta >0, and let B:C\to H be an l-Lipschitzian mapping with constant l\ge 0. Suppose that 0<\mu <\frac{2\eta}{{\kappa}^{2}} and 0\le \gamma l<\tau, where \tau =1-\sqrt{1-\mu (2\eta -\mu {\kappa}^{2})}. Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L>0. Let \{{\lambda}_{n}\} be a sequence in the interval (0,\frac{2}{L}) such that \{{T}_{n}\} and \{{s}_{n}\} satisfy the following conditions:

(i)
{s}_{n}=\frac{2-{\lambda}_{n}L}{4} for each n\ge 0;

(ii)
{P}_{C}(I-{\lambda}_{n}\mathrm{\nabla}f)={s}_{n}I+(1-{s}_{n}){T}_{n} for each n\ge 0;

(iii)
{s}_{n}\to 0;

(iv)
{\sum}_{n=0}^{\mathrm{\infty}}{s}_{n}=\mathrm{\infty};

(v)
{\sum}_{n=1}^{\mathrm{\infty}}|{\lambda}_{n+1}-{\lambda}_{n}|<\mathrm{\infty}.
Suppose that \{{\alpha}_{n}\}, \{{\beta}_{n}\} are two real sequences in (0,1) satisfying the following control conditions:
For given {x}_{1} in C arbitrarily, let the sequence \{{x}_{n}\} be generated by
Then there exists a nonexpansive mapping T:C\to C such that ({\{{T}_{n}\}}_{n=1}^{\mathrm{\infty}},T) satisfies the AKTT-condition, and \{{x}_{n}\} converges strongly to a common element {x}^{\ast}\in F(T)\cap \mathrm{\Omega}, which solves the variational inequality
We end this section by considering simple examples of sequences that fulfill the desired conditions of our results.
Example 4.1 Let {\{{\alpha}_{n}\}}_{n=1}^{\mathrm{\infty}} be a sequence defined by
Let L>0 be any arbitrary real number, and let {n}_{0}\in \mathbb{N} be such that {n}_{0}>\frac{L}{2}. We define the sequence {\{{\lambda}_{n}\}}_{n=1}^{\mathrm{\infty}} as follows:
Then the sequences {\{{\alpha}_{n}\}}_{n=1}^{\mathrm{\infty}} and {\{{\lambda}_{n}\}}_{n=1}^{\mathrm{\infty}} satisfy all the hypotheses of our results.
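As a numerical companion to Example 4.1, the sketch below checks conditions (iii)-(v) for one illustrative choice of step sizes. The formula {\lambda}_{n}=\frac{2}{L}-\frac{1}{n+{n}_{0}} is our assumption for the illustration, in the spirit of the example but not necessarily the sequence it defines:

```python
import numpy as np

# Hypothetical sequences (illustrative assumptions, not the paper's definitions):
#   lambda_n = 2/L - 1/(n + n0), which lies in (0, 2/L) once n0 > L/2.
L = 3.0
n0 = int(np.ceil(L / 2)) + 1         # n0 > L/2 guarantees lambda_n > 0
n = np.arange(1, 10**6)
lam = 2.0 / L - 1.0 / (n + n0)

s = (2.0 - lam * L) / 4.0            # s_n = (2 - lambda_n*L)/4 = L / (4*(n + n0))

# (iii) s_n -> 0;  (iv) sum s_n = infinity (harmonic-type divergence);
# (v)  sum |lambda_{n+1} - lambda_n| < infinity (telescoping total variation).
print(s[-1])                          # tends to 0
print(s.sum())                        # partial sums grow without bound
print(np.abs(np.diff(lam)).sum())     # bounded above by 1/(1 + n0)
```

The divergence in (iv) and the summability in (v) follow from the harmonic series and the telescoping bound {\sum}_{n}|{\lambda}_{n+1}-{\lambda}_{n}|\le \frac{1}{1+{n}_{0}}, respectively.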
5 Applications
Let H be a real Hilbert space, and let Q:H\to {2}^{H} be a mapping. The effective domain of Q is denoted by dom(Q), that is, dom(Q)=\{x\in H:Qx\ne \mathrm{\varnothing}\}. The range of Q is denoted by R(Q). A multivalued mapping Q is said to be monotone if for all x,y\in H, f\in Qx and g\in Qy,
A monotone mapping Q:H\to {2}^{H} is said to be maximal if its graph G(Q):=\{(x,f):f\in Q(x)\} is not properly contained in the graph of any other monotone mapping. It is well known that a monotone mapping Q:H\to {2}^{H} is maximal if and only if, for (x,f)\in H\times H, ⟨x-y,f-g⟩\ge 0 for every (y,g)\in G(Q) implies that f\in Q(x). For a maximal monotone operator Q on H and r>0, we may define a single-valued operator {J}_{r}={(I+rQ)}^{-1}:H\to dom(Q), which is called the resolvent of Q for r>0. Denote {Q}^{-1}0=\{x\in H:0\in Qx\}. It is known that {Q}^{-1}0=F({J}_{r}) for all r>0, and the resolvent {J}_{r} is firmly nonexpansive, i.e.,
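As a concrete instance of the resolvent, for Q equal to the subdifferential of the absolute value on the real line, {J}_{r}={(I+rQ)}^{-1} is the soft-thresholding operator. The sketch below (illustrative, not from the paper) checks firm nonexpansiveness pointwise and that {Q}^{-1}0=F({J}_{r})=\{0\}:

```python
import numpy as np

# Illustration: Q = subdifferential of |.| on R is maximal monotone, and its
# resolvent J_r = (I + r*Q)^{-1} has the closed form of soft-thresholding.
def J(r, x):
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

r = 0.5
xs = np.linspace(-3, 3, 201)

# Firm nonexpansiveness in 1D: |J x - J y|^2 <= (J x - J y)*(x - y).
x, y = 2.0, -1.3
lhs = (J(r, x) - J(r, y)) ** 2
rhs = (J(r, x) - J(r, y)) * (x - y)
print(lhs <= rhs)

# Q^{-1}0 = F(J_r): the only fixed point of soft-thresholding (r > 0) is 0,
# matching 0 being the unique zero of the subdifferential of |.|.
fixed = xs[np.isclose(J(r, xs), xs)]
print(fixed)
```

Both checks agree with the stated identities: the resolvent is firmly nonexpansive, and its fixed point set coincides with {Q}^{-1}0.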
The following lemma has been proved in [40].
Lemma 5.1 Let H be a real Hilbert space, and let Q be a maximal monotone operator on H. For r>0, let {J}_{r} be the resolvent operator associated with Q and r. Then
for all \rho ,\sigma >0 and x\in H.
We also know the following lemma from [38].
Lemma 5.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let Q be a maximal monotone operator on H such that {Q}^{-1}0\ne \mathrm{\varnothing} and cl(dom(Q))\subset C\subset {\bigcap}_{r>0}R(I+rQ), where cl(dom(Q)) stands for the closure of dom(Q). Suppose that \{{r}_{n}\} is a sequence in (0,\mathrm{\infty}) such that inf\{{r}_{n}:n\in \mathbb{N}\}>0 and {\sum}_{n=1}^{\mathrm{\infty}}|{r}_{n+1}-{r}_{n}|<\mathrm{\infty}. Then

(i)
{\sum}_{n=1}^{\mathrm{\infty}}sup\{\parallel {J}_{{r}_{n+1}}z{J}_{{r}_{n}}z\parallel :z\in D\}<\mathrm{\infty} for any bounded subset D of C.

(ii)
{lim}_{n\to \mathrm{\infty}}{J}_{{r}_{n}}z={J}_{r}z for all z in C and F({J}_{r})={\bigcap}_{n=1}^{\mathrm{\infty}}F({J}_{{r}_{n}}), where {r}_{n}\to r as n\to \mathrm{\infty}.
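Lemma 5.2(i) can be illustrated with the same soft-thresholding resolvent: the map r\mapsto {J}_{r}z is 1-Lipschitz in the parameter, so summable increments of \{{r}_{n}\} give a summable sup-distance over any bounded set D. A minimal sketch under these assumptions (the choice {r}_{n}=1+\frac{1}{{n}^{2}} is illustrative):

```python
import numpy as np

# Illustration: for Q = subdifferential of |.|, the resolvent is soft-thresholding,
# and |J_rho(z) - J_sigma(z)| <= |rho - sigma| for every z, so summable parameter
# increments give a summable sup-distance over a bounded set, as in Lemma 5.2(i).
def J(r, z):
    return np.sign(z) * np.maximum(np.abs(z) - r, 0.0)

rn = 1.0 + 1.0 / np.arange(1, 10**4) ** 2   # inf r_n = 1 > 0, sum |r_{n+1}-r_n| < inf
z = np.linspace(-5, 5, 101)                  # a bounded "subset D"

sups = np.array([
    np.max(np.abs(J(rn[k + 1], z) - J(rn[k], z)))
    for k in range(len(rn) - 1)
])
print(sups.sum())   # finite, bounded by the total variation of {r_n}
```

The accumulated sup-distances stay below the total variation {\sum}_{n}|{r}_{n+1}-{r}_{n}|, which is finite by construction.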
From Theorem 3.1 and Lemma 5.2, we obtain the following result.
Theorem 5.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let Q be a maximal monotone operator on H such that {Q}^{-1}0\ne \mathrm{\varnothing}. Given real sequences \{{\alpha}_{n}\}, \{{\beta}_{n}\} in (0,1) and \{{r}_{n}\} in (0,\mathrm{\infty}), assume that \{{\alpha}_{n}\}, \{{\beta}_{n}\} satisfy the following control conditions: