Strong convergence of projection methods for a countable family of nonexpansive mappings and applications to constrained convex minimization problems
Journal of Inequalities and Applications volume 2013, Article number: 546 (2013)
Abstract
In this paper, we introduce a general algorithm to approximate common fixed points for a countable family of nonexpansive mappings in a real Hilbert space, which solves a corresponding variational inequality. Furthermore, we propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem. Our results improve and generalize some known results in the current literature.
MSC: 47H10, 37C25.
1 Introduction
A viscosity approximation method for finding fixed points of nonexpansive mappings was first proposed by Moudafi in 2000 [1]. He proved the convergence of the sequence generated by the proposed method. In 2004, Xu [2] proved the strong convergence of the sequence generated by the viscosity approximation method to a unique solution of a certain variational inequality problem defined on the set of fixed points of a nonexpansive map.
It is well known that the iterative methods for finding fixed points of nonexpansive mappings can also be used to solve a convex minimization problem; see, for example, [3–5] and the references therein. In 2003, Xu [4] introduced an iterative method for computing the approximate solutions of a quadratic minimization problem over the set of fixed points of a nonexpansive mapping defined on a real Hilbert space. He proved that the sequence generated by the proposed method converges strongly to the unique solution of the quadratic minimization problem. By combining the iterative schemes proposed by Moudafi [1] and Xu [4], Marino and Xu [6] considered a general iterative method and proved that the sequence generated by the method converges strongly to a unique solution of a certain variational inequality problem, which is the optimality condition for a particular minimization problem. Liu [7] and Qin et al. [8] also studied some applications of the iterative method considered in [6]. Yamada [5] introduced the so-called hybrid steepest-descent method for solving the variational inequality problem and also studied the convergence of the sequence generated by the proposed method. Very recently, Tian [9] combined the iterative methods of [5, 6] in order to propose implicit and explicit schemes for constructing a fixed point of a nonexpansive mapping T defined on a real Hilbert space. He also proved the strong convergence of these two schemes to a fixed point of T under appropriate conditions. Related iterative methods for solving fixed point problems, variational inequalities and optimization problems can be found in [10–15] and the references therein.
On the other hand, the gradient-projection method for finding the approximate solutions of the constrained convex minimization problem is well known; see, for example, [16] and the references therein. The convergence of the sequence generated by this method depends on the behavior of the gradient of the objective function. If the gradient fails to be strongly monotone, then the strong convergence of the sequence generated by the gradient-projection method may fail. Very recently, Xu [17] gave an operator-oriented approach as an alternative to the gradient-projection method and to the relaxed gradient-projection algorithm, namely, an averaged mapping approach. Moreover, he constructed a counterexample which shows that the sequence generated by the gradient-projection method does not converge strongly in the setting of an infinite-dimensional space. He also presented two modifications of gradient-projection algorithms which are shown to have strong convergence. Further, he regularized the minimization problem to derive an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of the constrained convex minimization problem in the consistent case. The related methods and results can be found in [18–26] and the references therein. By virtue of projections, the authors in [27] extended the implicit and explicit iterative schemes proposed in [9].
The purpose of this paper is to introduce a general algorithm to approximate common fixed points of a countable family of nonexpansive mappings in a real Hilbert space. We prove strong convergence theorems for the sequences produced by the proposed methods to a common fixed point of a countable family of nonexpansive mappings, which is the unique solution of a corresponding variational inequality. We also propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem. Our results improve and generalize some known results in the current literature; see, for example, [27, 28].
2 Preliminaries
Throughout this paper, we denote the set of real numbers and the set of positive integers by ℝ and ℕ, respectively. Let H be a real Hilbert space, and let C be a nonempty subset of H. Let T : C → C be a mapping. We denote by Fix(T) the set of fixed points of T, i.e., Fix(T) = {x ∈ C : Tx = x}.
Definition 2.1 (i) A mapping T : C → C is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y in C. T is said to be quasi-nonexpansive if Fix(T) ≠ ∅ and ‖Tx − y‖ ≤ ‖x − y‖ for all x in C and y in Fix(T).
(ii) A mapping T : H → H is said to be an averaged mapping [29] if it can be written as the average of the identity I and a nonexpansive mapping; that is,

T = (1 − α)I + αS, (2.1)

where α is a number in (0, 1), and S : H → H is nonexpansive. More precisely, when (2.1) holds, we say that T is α-averaged.
(iii) A mapping B : C → H is said to be L-Lipschitzian if ‖Bx − By‖ ≤ L‖x − y‖ for all x, y in C, where L ≥ 0 is a constant. In particular, if L ∈ [0, 1), then B is called a contraction on C; if L = 1, then B is nonexpansive.
(iv) A mapping V is called firmly nonexpansive [30] if

⟨Vx − Vy, x − y⟩ ≥ ‖Vx − Vy‖² for all x, y in dom(V),

where dom(V) is the domain of V.
Clearly, a firmly nonexpansive mapping is a (1/2)-averaged map.
Let H be a real Hilbert space, and let S, T, B : H → H be mappings.
(a) If T = (1 − α)S + αB for some α in (0, 1), and if S is averaged and B is nonexpansive, then T is averaged.
(b) T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.
(c) If T = (1 − α)S + αB for some α in (0, 1), and if S is firmly nonexpansive and B is nonexpansive, then T is averaged.
Recall that the metric (or nearest point) projection from H onto C is the mapping P_C : H → C, which assigns to each point x in H the unique point P_C x in C satisfying the property

‖x − P_C x‖ = min{‖x − y‖ : y ∈ C}.
Lemma 2.1 [30]
Let H be a real Hilbert space. For given x in H and z in C:
(a) z = P_C x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y in C;
(b) z = P_C x if and only if ‖x − z‖² ≤ ‖x − y‖² − ‖y − z‖² for all y in C;
(c) ⟨P_C x − P_C y, x − y⟩ ≥ ‖P_C x − P_C y‖² for all x, y in H.
Consequently, P_C is nonexpansive and monotone.
In general, a projection mapping is firmly nonexpansive, and thus a (1/2)-averaged map.
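As a concrete illustration, both properties of the projection can be checked numerically for the closed unit ball in ℝ²; the function names and the choice of C below are our own illustrative assumptions, not part of the paper.

```python
import math
import random

def proj_unit_ball(x):
    # Metric projection P_C onto the closed unit ball C = {y in R^2 : ||y|| <= 1}:
    # inside C the point is unchanged, outside it is radially scaled onto the sphere.
    n = math.hypot(x[0], x[1])
    return tuple(x) if n <= 1.0 else (x[0] / n, x[1] / n)

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

random.seed(0)
x = (random.uniform(-3, 3), random.uniform(-3, 3))
y = (random.uniform(-3, 3), random.uniform(-3, 3))
px, py = proj_unit_ball(x), proj_unit_ball(y)

# nonexpansiveness: ||P_C x - P_C y|| <= ||x - y||
nonexpansive = dist(px, py) <= dist(x, y) + 1e-12

# firm nonexpansiveness: ||P_C x - P_C y||^2 <= <x - y, P_C x - P_C y>
inner = (x[0] - y[0]) * (px[0] - py[0]) + (x[1] - y[1]) * (px[1] - py[1])
firm = dist(px, py) ** 2 <= inner + 1e-12
```

Both flags come out true for any pair of sample points, as the theory predicts.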
Lemma 2.2 The following inequality holds in an inner product space X:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩ for all x, y in X.
Lemma 2.3 [33]
In a Hilbert space H, we have
Lemma 2.4 (Demiclosedness principle [33])
Let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. If {x_n} is a sequence in C that converges weakly to x, and if {(I − T)x_n} converges strongly to y, then (I − T)x = y; in particular, if y = 0, then x ∈ Fix(T).
Definition 2.2 Let H be a real Hilbert space. A nonlinear operator T, whose domain D(T) ⊆ H and range R(T) ⊆ H, is said to be:
(a) monotone if ⟨x − y, Tx − Ty⟩ ≥ 0 for all x, y in D(T);
(b) η-strongly monotone if there exists η > 0 such that ⟨x − y, Tx − Ty⟩ ≥ η‖x − y‖² for all x, y in D(T);
(c) α-inverse strongly monotone (for short, α-ism) if there exists α > 0 such that ⟨x − y, Tx − Ty⟩ ≥ α‖Tx − Ty‖² for all x, y in D(T).
It can easily be seen that (i) if T is nonexpansive, then I − T is monotone; (ii) the projection map P_C is a 1-ism. Inverse strongly monotone (also referred to as co-coercive) operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems; see, for example, [34, 35] and the references therein.
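Observation (i) admits a quick numerical sanity check; the plane rotation used as the nonexpansive mapping below is our own illustrative choice.

```python
import math
import random

def T(p):
    # A nonexpansive (indeed isometric) map on R^2: rotation by 60 degrees.
    c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

random.seed(2)
monotone = True
for _ in range(100):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    u = (x[0] - T(x)[0], x[1] - T(x)[1])  # (I - T)x
    v = (y[0] - T(y)[0], y[1] - T(y)[1])  # (I - T)y
    # monotonicity of I - T: <(I - T)x - (I - T)y, x - y> >= 0
    if (u[0] - v[0]) * (x[0] - y[0]) + (u[1] - v[1]) * (x[1] - y[1]) < -1e-12:
        monotone = False
```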
Proposition 2.2 [31]
Let T : H → H be an operator.
(a) T is nonexpansive if and only if the complement I − T is (1/2)-ism.
(b) If T is v-ism, then γT is (v/γ)-ism for γ > 0.
(c) T is averaged if and only if the complement I − T is v-ism for some v > 1/2. Indeed, for α in (0, 1), T is α-averaged if and only if I − T is (1/(2α))-ism.
Lemma 2.5 [27]
Let V : C → H be an L-Lipschitzian mapping with constant L ≥ 0, and let F : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Then for 0 ≤ γL < μη, we have

⟨x − y, (μF − γV)x − (μF − γV)y⟩ ≥ (μη − γL)‖x − y‖², x, y ∈ C.

That is, μF − γV is strongly monotone with coefficient μη − γL.
The following lemma plays a key role in proving strong convergence of our iterative schemes.
Lemma 2.6 [[5], Lemma 3.1]
Suppose that λ ∈ (0, 1) and μ ∈ (0, 2η/L²). Let F : C → H be an L-Lipschitzian and η-strongly monotone operator on C. In association with a nonexpansive mapping T : C → C, define the mapping T^λ : C → H by

T^λ x := Tx − λμF(Tx), x ∈ C.

Then T^λ is a contraction provided λ < 1, that is,

‖T^λ x − T^λ y‖ ≤ (1 − λτ)‖x − y‖, x, y ∈ C,

where τ = 1 − √(1 − μ(2η − μL²)) ∈ (0, 1].
Let F : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Let V : C → H be an L-Lipschitzian mapping with constant L ≥ 0. Assume that T : C → C is a nonexpansive mapping with Fix(T) ≠ ∅. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Let the net {x_t} be generated by the following implicit scheme:

x_t = P_C[tγV x_t + (I − tμF)T x_t], t ∈ (0, 1). (2.2)

Then {x_t} converges strongly to a fixed point x̃ of T, which solves the variational inequality

⟨(μF − γV)x̃, x̃ − z⟩ ≤ 0, z ∈ Fix(T). (2.3)

Let the sequence {x_n} be generated by the following explicit scheme:

x_{n+1} = P_C[α_n γV x_n + (I − α_n μF)T x_n], n ≥ 0. (2.4)

Then {x_n} converges strongly to a fixed point of T, which is also a solution of the variational inequality (2.3); see [27] for more details.
Consider the self-mapping Q_t on C defined by

Q_t x := P_C[tγV x + (I − tμF)T x], x ∈ C. (2.5)

Then Q_t is a contraction, and it has a unique fixed point in C, which uniquely solves the fixed point equation (2.2); see [27] for more details.
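The Banach contraction principle invoked here can be illustrated by plain Picard iteration; the one-dimensional map below is a hypothetical stand-in for the contraction in question, chosen by us for illustration.

```python
def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    # Picard iteration x_{n+1} = T(x_n); converges geometrically when T is a contraction.
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# T(x) = 0.5*x + 1 is a (1/2)-contraction on R whose unique fixed point is x* = 2.
xstar = fixed_point(lambda x: 0.5 * x + 1.0, x0=0.0)
```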
Proposition 2.3 [[27], Proposition 3.1]
Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let be a κ-Lipschitzian and η-strongly accretive operator with constants . Let be an L-Lipschitzian mapping with constant . Assume that is a nonexpansive mapping with . Let and , where . For each t in , let denote a unique solution of the fixed point equation (2.2). Then, the following properties hold for the net :
(1) {x_t} is bounded;
(2) lim_{t→0} ‖x_t − T x_t‖ = 0;
(3) x_t defines a continuous curve from (0, 1) into C.
Theorem 2.1 [[27], Theorem 3.1]
Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let F be a κ-Lipschitzian and η-strongly accretive operator with constants κ, η > 0. Let V be an L-Lipschitzian mapping with constant L ≥ 0. Assume that T is a nonexpansive mapping with Fix(T) ≠ ∅. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)). For each t in (0, 1), let x_t denote the unique solution of the fixed point equation (2.2). Then the net {x_t} converges strongly, as t → 0, to a fixed point of T, which solves the variational inequality (2.3), or equivalently, x̃ = P_{Fix(T)}(I − μF + γV)x̃.
Lemma 2.7 [36]
Let {a_n}, {δ_n} be sequences of nonnegative real numbers satisfying the inequality:

a_{n+1} ≤ (1 − γ_n)a_n + δ_n, n ≥ 0,

where {γ_n} is a sequence in (0, 1). Suppose that {γ_n} and {δ_n} satisfy the conditions:
(i) Σ_{n=0}^∞ γ_n = ∞, or equivalently, Π_{n=0}^∞ (1 − γ_n) = 0;
(ii) limsup_{n→∞} δ_n/γ_n ≤ 0, or
(ii)′ Σ_{n=0}^∞ δ_n < ∞.
Then lim_{n→∞} a_n = 0.
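The lemma can be illustrated numerically: with γ_n = 1/(n+1) (so Σγ_n = ∞) and δ_n = γ_n·σ_n with σ_n → 0, the recursion drives a_n to 0. The specific parameter sequences are our own illustrative choice.

```python
# Recursion a_{n+1} = (1 - g_n) a_n + g_n * s_n with g_n = 1/(n+1)
# (so the g_n sum to infinity) and s_n = 1/(n+1) -> 0, so the hypotheses
# of the lemma hold and a_n should tend to 0.
a = 1.0
for n in range(1, 200_000):
    g = 1.0 / (n + 1)
    s = 1.0 / (n + 1)
    a = (1 - g) * a + g * s
```

After 200,000 steps the iterate is on the order of log(n)/n, far below its starting value of 1.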
Lemma 2.8 [37]
Let {β_n} be a sequence of real numbers with

0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1.

Let {x_n} and {y_n} be two sequences in a Banach space E such that

x_{n+1} = (1 − β_n)x_n + β_n y_n, n ≥ 1.

If

limsup_{n→∞} (‖y_{n+1} − y_n‖ − ‖x_{n+1} − x_n‖) ≤ 0,

then lim_{n→∞} ‖y_n − x_n‖ = 0.
Let C be a subset of a real Banach space E, and let {T_n} be a family of mappings of C into itself such that ∩_{n=1}^∞ Fix(T_n) ≠ ∅. Then {T_n} is said to satisfy the AKTT-condition [38] if for each bounded subset D of C, we have

Σ_{n=1}^∞ sup{‖T_{n+1}z − T_n z‖ : z ∈ D} < ∞.
Lemma 2.9 [38]
Let C be a nonempty subset of a real Banach space E, and let {T_n} be a family of mappings of C into itself, which satisfies the AKTT-condition. Then for each x in C, {T_n x} converges strongly to a point in C. Moreover, let the mapping T : C → C be defined by

Tx = lim_{n→∞} T_n x, x ∈ C.

Then for each bounded subset D of C, we have

lim_{n→∞} sup{‖Tz − T_n z‖ : z ∈ D} = 0.
In the sequel, we will write that ({T_n}, T) satisfies the AKTT-condition if {T_n} satisfies the AKTT-condition, and T is defined by Lemma 2.9.
We end this section with the following simple examples of mappings satisfying the AKTT-condition (see also Lemma 5.2).
Example 2.1 (i) Let E be any Banach space. For any , let a mapping be defined by
Then, is a nonexpansive mapping for each . It can easily be seen that satisfies the AKTT-condition, where for all .
(ii) Let , where
Let be a sequence defined by
where
for all . It is clear that the sequence converges weakly to . Indeed, for any , we have
as . It is also obvious that for any with n, m sufficiently large. Thus, is not a Cauchy sequence. We define a countable family of mappings by
for all and . It is clear that for all . It is obvious that is a quasi-nonexpansive mapping for each . Thus, is a countable family of quasi-nonexpansive mappings.
Let for all . It is easy to see that
Then, we obtain that T is a quasi-nonexpansive mapping with . Let D be a bounded subset of E. Then there exists such that . On the other hand, for any , we have
Furthermore, we have
Therefore, satisfies the AKTT-condition.
3 Fixed point and convergence theorems
Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let be the metric (or nearest point) projection from H onto C.
Theorem 3.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume is a sequence of nonexpansive mappings from C into itself such that . Suppose, in addition, that is a nonexpansive mapping such that satisfies the AKTT-condition, and is a nonexpansive mapping with . Let be a κ-Lipschitzian and η-strongly monotone operator with constants . Let be an L-Lipschitzian mapping with constant . Let and , where .
For arbitrarily given in C, let the sequence be generated iteratively by
where , are two real sequences in satisfying the following control conditions:
Then the sequence converges strongly to , which solves the variational inequality
Proof We divide the proof into several steps.
Step I. We claim that the sequence is bounded. Let p in F be fixed. In view of Lemma 2.6 we conclude that
This together with (3.1)-(3.3) implies that
Since is nonexpansive, for all n in ℕ, it follows from (3.1) and (3.4) that
By induction, we have that is bounded. This implies that the sequences , , , and are bounded too. Let
and set
Then we have that D is a bounded subset of C and .
Step II. We claim that . In view of (3.1), we obtain
Since , it follows from (3.6) that
In view of Lemma 2.6, we conclude that
This implies that
Next, we show that . For this purpose, we denote a sequence by . It follows from (3.8) that
This implies that
Since , in view of the AKTT-condition and (3.2)(a), we conclude that
Utilizing Lemma 2.8, we deduce that
It follows from (3.1) and (3.2)(b) that
On the other hand, we have
This implies that
In view of (3.7) and (3.11)-(3.12), we obtain
By the triangle inequality, we obtain
In view of the AKTT-condition and (3.13)-(3.14), we deduce that
Step III. We prove that there exists in such that
For each t in , we define the mapping by
Since and are nonexpansive mappings for each t in , in view of (2.5), we conclude that is a contraction for each t in , and hence by the Banach contraction principle, there exists a unique fixed point in C such that . Thus, we have
Next, we show that exists. We first show that is bounded. To this end, let p in F be fixed. In view of Lemma 2.7, we obtain
This implies that
Thus, we have that is bounded and so are and . In view of (3.15), we obtain
This implies that
Using the techniques in the proof of Theorem 2.1, we see that the variational inequality (3.3) has a unique solution . We show that as . To this end, set
Then we have , and for any given z in ,
Since is the metric projection from E onto C, for each z in , we have
Exploiting Lemma 2.1 and (3.17), we obtain
It follows from (3.18) that
This implies that
Let be such that as . Let . It follows from (3.16) that . The boundedness of implies that there exists in C such that , i.e., converges weakly, as . In view of Lemma 2.4, we deduce that . Since as , it follows from (3.19) that . Thus, we have that is well defined. Next, we show that solves the variational inequality (3.3). Observe that
Thus, we have
Since TS is nonexpansive, we conclude that I − TS is monotone. The property of metric projection implies that
Replacing t by in (3.20), letting , and noticing that is bounded for z in , with (3.16), we have
Thus, we have that in is a solution of the variational inequality (3.3). Consequently, by uniqueness. Therefore, as . The variational inequality (3.3) can be written as
So, in terms of Lemma 2.1, it is equivalent to the following fixed point equation:
Since is bounded, for any subsequence of , there exists a further subsequence such that in C. In view of Lemma 2.4 and Step II, we conclude that . This together with (3.21) implies that
Step IV. We claim that .
For each n in , we set
and observe that . Then, by Lemmas 2.1 and 2.5, we obtain
This implies that
where
In view of (3.22) and (3.23), we conclude that
where . It is easy to show that , and . Hence, in view of Lemma 2.7 and (3.24), we conclude that the sequence converges strongly to in . □
Remark 3.1 Theorem 3.1 improves and extends [[28], Theorems 3.1 and 3.2] in the following aspects.
(i) The identity mapping I is extended to the case of F, where F is a κ-Lipschitzian and η-strongly monotone (possibly nonself-) mapping.
(ii) In order to find a common fixed point of a countable family of nonexpansive self-mappings, the Mann-type iterations in [[28], Theorem 3.2] are extended to develop the new Mann-type iteration (3.1).
(iii) A new technique of argument is applied in deriving Theorem 3.1. For instance, the characteristic properties (Lemma 2.1) of the metric projection play an important role in proving the strong convergence of the net in Theorem 3.1.
(iv) Whenever we have , , , the identity mapping on C and , then Theorem 3.1 reduces to [[28], Theorem 3.2]. Thus, Theorem 3.1 covers [[28], Theorems 3.1 and 3.2] as special cases.
Remark 3.2 In Theorem 3.1, it is shown that any sequence generated by the iterative step (3.1) converges strongly to the unique solution of the variational inequality problem (3.3). This variational inequality problem is more general than many variational inequality problems (see, for example, [27]), since S is an arbitrary nonexpansive mapping. Moreover, by the well-known relations between fixed points of nonexpansive mappings and variational inequalities, the solution set of (3.3) can be viewed as the fixed point set of some nonexpansive mapping V, and this mapping could then be added to the countable family of nonexpansive mappings. In addition, the feasible set of the variational inequality problem (3.3) is the common fixed point set of T and S, with T and S being nonexpansive mappings. For several subclasses of nonexpansive mappings, Fix(TS) = Fix(T) ∩ Fix(S) holds (see, e.g., [31, 32] for averaged mappings).
4 Constrained convex minimization problems
Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Consider the following constrained convex minimization problem:
where f : C → ℝ is a real-valued convex function. If f is Fréchet differentiable, then the gradient-projection method (for short, GPM) generates a sequence {x_n} using the following recursive formula:

x_{n+1} = P_C(x_n − λ∇f(x_n)), n ≥ 0, (4.2)

or, more generally,

x_{n+1} = P_C(x_n − λ_n∇f(x_n)), n ≥ 0, (4.3)

where in both (4.2) and (4.3), the initial guess x_0 is taken from C arbitrarily, and the parameters, λ or λ_n, are positive real numbers. The convergence of algorithms (4.2) and (4.3) depends on the behavior of the gradient ∇f. As a matter of fact, it is known that if ∇f is α-strongly monotone and L-Lipschitzian with constants α, L > 0, then, for 0 < λ < 2α/L², the operator

T := P_C(I − λ∇f) (4.4)

is a contraction; hence, the sequence {x_n} defined by algorithm (4.2) converges in norm to the unique solution of the minimization problem (4.1). More generally, if the sequence {λ_n} is chosen to satisfy the property

0 < liminf_{n→∞} λ_n ≤ limsup_{n→∞} λ_n < 2α/L²,

then the sequence {x_n} defined by algorithm (4.3) converges in norm to the unique minimizer of (4.1).
However, if the gradient ∇f fails to be strongly monotone, the operator T defined by (4.4) would fail to be contractive; consequently, the sequence generated by algorithm (4.2) may fail to converge strongly (see [[17], Section 4]). If ∇f is Lipschitzian, then algorithms (4.2) and (4.3) can still converge in the weak topology under certain conditions.
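For intuition, the gradient-projection iteration can be sketched on a toy problem: minimizing f(x) = ½‖x − b‖² over the box C = [0, 1]², whose solution is the projection of b onto C. The problem data and step size below are illustrative assumptions, not taken from the paper.

```python
def clamp(v, lo=0.0, hi=1.0):
    return max(lo, min(hi, v))

def gpm(b, x0, lam=0.5, iters=500):
    # Gradient projection: x_{n+1} = P_C(x_n - lam * grad f(x_n)),
    # with f(x) = 0.5*||x - b||^2, grad f(x) = x - b, and C = [0, 1]^2.
    # Here grad f is 1-Lipschitz and 1-strongly monotone, so lam in (0, 2) works.
    x = list(x0)
    for _ in range(iters):
        x = [clamp(xi - lam * (xi - bi)) for xi, bi in zip(x, b)]
    return x

# The minimizer over C is the projection of b = (2, -0.5) onto the box, i.e. (1, 0).
x = gpm(b=(2.0, -0.5), x0=(0.5, 0.5))
```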
Very recently, Xu [17] gave an alternative operator-oriented approach to algorithm (4.3); namely, an averaged mapping approach. He gave his averaged mapping approach to the gradient-projection algorithm (4.3) and the relaxed gradient-projection algorithm. Moreover, he constructed a counterexample, which shows that algorithm (4.2) does not converge in norm in an infinite-dimensional space, and also presented two modifications of gradient-projection algorithms, which are shown to have strong convergence. Further, he regularized the minimization problem (4.1) to devise an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of (4.1) in the consistent case.
Let be a κ-Lipschitzian and η-strongly monotone operator with constants , and let be an l-Lipschitzian mapping with constant . Suppose that and , where . Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant . Motivated by the work of Xu [17], the authors of [27] introduced the following implicit scheme that generates a net in an implicit way:
where and s satisfy the following conditions:
(i) for each ;
(ii) for each .
They proved that converges strongly to a minimizer in Ω of (4.1), which solves the variational inequality (2.3).
For a given arbitrary initial guess in C and a sequence with , they also proposed the following explicit scheme that generates a sequence in an explicit way:
where and for each . It is proved in [27] that the sequence converges strongly to a minimizer in Ω of (4.1).
On the other hand, we know that x* in C solves the minimization problem (4.1) if and only if x* solves the fixed point equation

x* = P_C(I − λ∇f)x*,

where λ > 0 is any fixed positive number. Note that ∇f being L-Lipschitzian implies that the gradient ∇f is (1/L)-ism [39], which then implies that λ∇f is (1/(λL))-ism. So by Proposition 2.2(c), I − λ∇f is (λL/2)-averaged. Now since the projection P_C is (1/2)-averaged, we know from Proposition 2.2(c) that P_C(I − λ∇f) is ((2 + λL)/4)-averaged for each 0 < λ < 2/L. Hence, we can write

P_C(I − λ∇f) = ((2 − λL)/4)I + ((2 + λL)/4)T_λ,

where T_λ is nonexpansive for each 0 < λ < 2/L. It is easy to see that

Ω = Fix(P_C(I − λ∇f)) = Fix(T_λ).
For each fixed , we now consider the self-mapping
It is easy to see that is a contraction, see [27] for more details. Thus, there exists a unique fixed point in C, which uniquely solves the fixed point equation (4.6).
The following two results, which summarize the properties of the net, have been proved in [27].
Proposition 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let be a κ-Lipschitzian and η-strongly monotone operator with constants , and let be an l-Lipschitzian mapping with constant . Suppose that and , where . Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant . For each λ in , let denote a unique solution of the fixed point equation (4.6), where and s satisfy the following conditions:
(i) for each λ in ;
(ii) for each λ in .
Then, the following properties for the net hold:
(a) {x_λ} is bounded;
(b) lim_{λ→0} ‖x_λ − P_C(I − λ∇f)x_λ‖ = 0;
(c) x_λ defines a continuous curve from (0, 2/L) into C.
Theorem 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let be a κ-Lipschitzian and η-strongly monotone operator with constants , and let be an l-Lipschitzian mapping with constant . Suppose that and , where . Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant . For each λ in , let denote the unique solution of the fixed point equation (4.6), where and s satisfy the following conditions:
(i) for each λ in ;
(ii) for each λ in .
Then the net converges strongly, as , to a minimizer of (4.1), which solves the variational inequality (2.3); equivalently, we have .
Now, we are ready to propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem.
Theorem 4.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume that is a sequence of nonexpansive mappings from C into itself such that . Suppose, in addition, that is a nonexpansive mapping such that satisfies the AKTT-condition. Let be a κ-Lipschitzian and η-strongly monotone operator with constants , and let be an l-Lipschitzian mapping with constant . Suppose that and , where . Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant . Let be a sequence in the interval such that and satisfy the following conditions:
(i) for each ;
(ii) for each ;
(iii) ;
(iv) ;
(v) .
Suppose that , are two sequences of real numbers in satisfying the following control conditions:
For given in C arbitrarily, let the sequence be generated by
If and , then there exists a nonexpansive mapping such that satisfies the AKTT-condition, and converges strongly to a common element in , which solves the variational inequality
Proof We divide the proof into several steps.
First, we note that
(1) x* in C solves the minimization problem (4.1) if and only if, for each fixed λ > 0, x* solves the fixed point equation

x* = P_C(I − λ∇f)x*;

(2) P_C(I − λ∇f) is ((2 + λL)/4)-averaged for each λ in (0, 2/L); in particular, the following relation holds:
Step I. We claim that the sequence satisfies the AKTT-condition.
From the proof of Theorem 3.1, is bounded and so are and . Let D be a bounded subset of C such that . Since ∇f is -ism, is nonexpansive. It follows that for any given z in D and v in Ω,
This implies that
On the other hand, we have for any z in D and u in Ω that
Therefore,
This shows that is bounded. We also obtain, for any z in D, that
for some appropriate constant such that
Thus, we get
Now, define a mapping by
Then T is a nonexpansive mapping. Since the minimization problem (4.1) is consistent, we conclude that Fix(T) ≠ ∅. Consequently, the sequence satisfies the AKTT-condition.
Step II. We claim that . In view of (4.8), we obtain
Since , it follows from (4.11) that
In view of Lemma 2.6, we conclude that
This implies that
Next, we show that . To this end, denote a sequence by . It follows from (4.14) that
This implies that
Since , in view of the AKTT-condition and (4.14), we conclude that
Using Lemma 2.8, we deduce that
Thus, we have
On the other hand, we have
It follows from (4.16) that
In view of (4.11) and (4.15), we obtain
By the triangle inequality, we obtain
In view of Lemma 2.9, (4.18) and (4.19), we deduce that
By the triangle inequality, we obtain
In view of the AKTT-condition and (4.20)-(4.21), we deduce that
Step III. We prove that
where is the same as in Theorem 3.1 and satisfies
Let be such that
By the same manner as in the proof of Theorem 3.1 Step II, we can find such that as . In view of (ii), we have that
where for each . In view of (4.24) and taking into account , we conclude that
Hence we have
Thus, from the boundedness of , () and , we conclude that
Note that the gradient ∇f is (1/L)-ism. Hence, it is known that P_C(I − λ∇f) is a nonexpansive self-mapping on C. As a matter of fact, we have for each x, y in C (see the proof of Theorem 4.1)
Since , by Lemma 2.4, we obtain
This shows that . Consequently, from (3.22) and (4.23), it follows that
As in the last part of the proof of Theorem 3.1, we obtain that , which completes the proof. □
Corollary 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let be a κ-Lipschitzian and η-strongly monotone operator with constants , and let be an l-Lipschitzian mapping with constant . Suppose that and , where . Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant . Let be a sequence in the interval such that and satisfy the following conditions:
(i) for each ;
(ii) for each ;
(iii) ;
(iv) ;
(v) .
Suppose that , are two real sequences in satisfying the following control conditions:
For given in C arbitrarily, let the sequence be generated by
Then, there exists a nonexpansive mapping such that satisfies the AKTT-condition, and converges strongly to a common element , which solves the variational inequality
We end this section by considering simple examples of sequences that fulfill the desired conditions of our results.
Example 4.1 Let be a sequence defined by
Let be any arbitrary real number, and let be such that . We define the sequence as follows:
Then the sequences and satisfy all the aspects of the hypotheses of our results.
5 Applications
Let H be a real Hilbert space, and let Q : H → 2^H be a multi-valued mapping. The effective domain of Q is denoted by D(Q), that is, D(Q) = {x ∈ H : Qx ≠ ∅}. The range of Q is denoted by R(Q). A multi-valued mapping Q is said to be monotone if, for all x, y in D(Q), f in Qx, and g in Qy,

⟨x − y, f − g⟩ ≥ 0.

A monotone mapping Q is said to be maximal if its graph is not properly contained in the graph of any other monotone mapping. It is well known that a monotone mapping Q is maximal if and only if, for (x, f) in H × H, ⟨x − y, f − g⟩ ≥ 0 for every (y, g) in the graph of Q implies that f ∈ Qx. For a maximal monotone operator Q on H and r > 0, we may define the single-valued operator J_r = (I + rQ)^{-1} : H → D(Q), which is called the resolvent of Q for r. Assume that Q^{-1}0 ≠ ∅, where Q^{-1}0 = {x ∈ H : 0 ∈ Qx}. It is known that Fix(J_r) = Q^{-1}0 for all r > 0, and the resolvent J_r is firmly nonexpansive, i.e.,

‖J_r x − J_r y‖² ≤ ⟨x − y, J_r x − J_r y⟩, x, y ∈ H.
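As an example of these facts, for Q = ∂φ with φ(x) = |x| on ℝ, the resolvent J_r is the soft-thresholding map, and both its firm nonexpansiveness and Fix(J_r) = Q^{-1}0 can be verified numerically; the example is our own, not from the paper.

```python
import math
import random

def resolvent_abs(x, r):
    # Resolvent J_r = (I + r * dphi)^{-1} for phi(x) = |x| on R:
    # the soft-thresholding map sign(x) * max(|x| - r, 0).
    return math.copysign(max(abs(x) - r, 0.0), x)

random.seed(1)
firmly_nonexpansive = True
for _ in range(100):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    r = random.uniform(0.1, 2.0)
    jx, jy = resolvent_abs(x, r), resolvent_abs(y, r)
    # firm nonexpansiveness: |J_r x - J_r y|^2 <= (x - y)(J_r x - J_r y)
    if (jx - jy) ** 2 > (x - y) * (jx - jy) + 1e-12:
        firmly_nonexpansive = False

# Fix(J_r) = Q^{-1} 0: the only zero of dphi is x = 0, and J_r fixes it.
zero_fixed = resolvent_abs(0.0, 1.0) == 0.0
```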
The following lemma has been proved in [40].
Lemma 5.1 Let H be a real Hilbert space, and let Q be a maximal monotone operator on H. For r > 0, let J_r be the resolvent operator associated with Q and r. Then

‖J_r x − J_s x‖ ≤ (|r − s|/r)‖x − J_r x‖

for all r, s > 0 and x in H.
We also know the following lemma from [38].
Lemma 5.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let Q be a maximal monotone operator on H such that and , where stands for the closure of . Suppose that is a sequence of such that and . Then
(i) for any bounded subset D of C;
(ii) for all z in C and , where as .
From Theorem 3.1 and Lemma 5.2, we obtain the following result.
Theorem 5.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let Q be a maximal monotone operator on H such that . Given real sequences , in and in , assume that , satisfy the following control conditions:
Suppose, in addition, that is a nonexpansive mapping with . Let be a κ-Lipschitzian and η-strongly monotone operator with constants , let be an L-Lipschitzian mapping with constant . Let and , where . For given in C arbitrarily, let the sequence be generated iteratively by
Then the sequence defined by (5.1) converges strongly to in , which solves the variational inequality
The following result is yet another easy consequence of Theorem 3.1 and Lemma 5.2.
Theorem 5.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let Q be a maximal monotone operator on H such that . Given real sequences , in and in , assume that , satisfy the following control conditions:
Let be a κ-Lipschitzian and η-strongly accretive operator with constants , be an L-Lipschitzian mapping with constant . Let and , where . For given in C arbitrarily, let the sequence be generated iteratively by
If , then the sequence converges strongly to in , which solves the variational inequality
The following results are easy consequences of Theorem 4.2 and Lemma 5.2.
Theorem 5.3 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume that is a sequence of nonexpansive mappings from C into itself such that . Suppose, in addition, that is a nonexpansive mapping such that satisfies the AKTT-condition. Let be a κ-Lipschitzian and η-strongly monotone operator with constants , and let be an l-Lipschitzian mapping with constant . Suppose that and , where . Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant . Let be a sequence in the interval such that and satisfy the following conditions:
(i) for each ;
(ii) for each ;
(iii) ;
(iv) ;
(v) .
Suppose that , are two real sequences in satisfying the following control conditions:
For given in C arbitrarily, let the sequence be generated by
If , then the sequence converges strongly to a common element in , which solves the variational inequality
Theorem 5.4 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume that is a sequence of nonexpansive mappings from C into itself such that . Let be a κ-Lipschitzian and η-strongly monotone operator with constants , and let be an l-Lipschitzian mapping with constant . Suppose that and , where . Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant . Let be a sequence in the interval such that and satisfy the following conditions:
(i) for each ;
(ii) for each ;
(iii) ;
(iv) ;
(v) .
Suppose that , are two real sequences in satisfying the following control conditions:
For an arbitrarily chosen initial point in C, let the sequence be generated by
Then the sequence converges strongly to a common element in , which solves the variational inequality
Remark 5.1 In Theorem 5.1, it is shown that any sequence generated by the iterative step (5.1) converges strongly to the unique solution of the variational inequality problem (5.2). This variational inequality problem is more general than many variational inequality problems studied in the literature (see, for example, [27]) because S is an arbitrary nonexpansive mapping. Indeed, in the particular case when S is the identity mapping on H, the corresponding results in the current literature become special cases of our result (Theorem 5.1).
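To make the structure of the schemes discussed above concrete, here is a minimal numerical sketch of a Tian-type iteration x_{n+1} = α_n γ V(x_n) + (I − α_n μ F)(T x_n). Every concrete choice below (T as the metric projection onto the unit ball, F the identity, V a constant map, the parameter values) is our own illustration under the standing parameter conditions, not data from the theorems.

```python
import numpy as np

# Hedged illustration of a Tian-type scheme
#   x_{n+1} = alpha_n * gamma * V(x_n) + (I - alpha_n * mu * F)(T x_n).
# All operator and parameter choices below are ours.

def T(x):
    """Nonexpansive map: metric projection onto the closed unit ball."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

F = lambda x: x                       # identity: 1-Lipschitzian, 1-strongly monotone
V = lambda x: np.array([2.0, 0.0])    # constant map: 0-Lipschitzian
mu, gamma = 1.0, 0.5                  # sample values satisfying the usual constraints

x = np.array([3.0, 4.0])
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)             # alpha_n -> 0 and sum alpha_n = infinity
    Tx = T(x)
    x = alpha * gamma * V(x) + (Tx - alpha * mu * F(Tx))

# The limit solves <(mu*F - gamma*V)x*, y - x*> >= 0 for all y in Fix(T);
# with these choices that solution is x* = (1, 0).
```

With F the identity and V constant, the iteration reduces to a Halpern-type scheme, and the limit is the projection of γV onto Fix(T), consistent with the variational inequality characterization.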
References
Moudafi A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615
Xu H-K: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059
Deutsch F, Yamada I: Minimizing certain convex functions over the intersection of the fixed point sets of nonexpansive mappings. Numer. Funct. Anal. Optim. 1998, 19: 33–56.
Xu H-K: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116: 659–678. 10.1023/A:1023073621589
Yamada I: The hybrid steepest descent method for the variational inequality problems over the intersection of fixed point sets of nonexpansive mappings. Stud. Comput. Math. 8. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. North-Holland, Amsterdam; 2001:473–504.
Marino G, Xu H-K: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. 10.1016/j.jmaa.2005.05.028
Liu Y: A general iterative method for equilibrium problems and strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2009, 71: 4852–4861. 10.1016/j.na.2009.03.060
Qin X, Shang M, Kang SM: Strong convergence theorems of modified Mann iterative process for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2009, 70: 1257–1264. 10.1016/j.na.2008.02.009
Tian M: A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. 2010, 73: 689–694. 10.1016/j.na.2010.03.058
Ceng LC, Huang S: Modified extragradient methods for strict pseudo-contractions and monotone mappings. Taiwan. J. Math. 2009, 13(4):1197–1211.
Ceng LC, Huang S, Liou YC: Hybrid proximal point algorithms for solving constrained minimization problems in Banach spaces. Taiwan. J. Math. 2009, 13(2B):805–820.
Ceng LC, Huang S, Petrusel A: Weak convergence theorem by a modified extragradient method for nonexpansive mappings and monotone mappings. Taiwan. J. Math. 2009, 13(1):225–238.
Ceng LC, Wong NC: Viscosity approximation methods for equilibrium problems and fixed point problems of nonlinear semigroups. Taiwan. J. Math. 2009, 13(5):1497–1513.
Ceng LC, Yao JC: Relaxed viscosity approximation methods for fixed point problems and variational inequality problems. Nonlinear Anal. 2008, 69: 3299–3309. 10.1016/j.na.2007.09.019
Zeng LC, Ansari QH, Shyu DS, Yao JC: Strong and weak convergence theorems for common solutions of generalized equilibrium problems and zeros of maximal monotone operators. Fixed Point Theory Appl. 2010., 2010: Article ID 590278
Su M, Xu H-K: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1(1):35–43.
Xu H-K: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150(2):360–378. 10.1007/s10957-011-9837-z
Chen R, Su Y, Xu H-K: Regularization and iteration methods for a class of monotone variational inequalities. Taiwan. J. Math. 2009, 13(2B):739–752.
Cianciaruso F, Colao V, Muglia L, Xu H-K: On an implicit hierarchical fixed point approach to variational inequalities. Bull. Aust. Math. Soc. 2009, 80(1):117–124. 10.1017/S0004972709000082
He S, Xu H-K: Variational inequalities governed by boundedly Lipschitzian and strongly monotone operators. Fixed Point Theory 2009, 10(2):245–258.
Marino G, Xu H-K: Explicit hierarchical fixed point approach to variational inequalities. J. Optim. Theory Appl. 2011, 149: 61–78. 10.1007/s10957-010-9775-1
Xu H-K: Viscosity method for hierarchical fixed point approach to variational inequalities. Taiwan. J. Math. 2010, 14(2):463–478.
Yao Y, Chen R, Xu H-K: Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 2010, 72: 3447–3456. 10.1016/j.na.2009.12.029
Ceng LC, Guu SY, Hu HY, Yao JC: Hybrid shrinking projection method for a generalized equilibrium problem, a maximal monotone operator and a countable family of relatively nonexpansive mappings. Comput. Math. Appl. 2011, 61(9):2468–2479. 10.1016/j.camwa.2011.02.028
Ceng LC, Ansari QH, Yao JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 2011, 1(3):341–359.
Kimura Y, Takahashi W, Yao JC: Strong convergence of an iterative scheme by a new type of projection method for a family of quasinonexpansive mappings. J. Optim. Theory Appl. 2011, 149: 239–253. 10.1007/s10957-010-9788-9
Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005
Yao Y, Liou YC, Marino G: Strong convergence of two iterative algorithms for nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2009., 2009: Article ID 179058
Baillon JB, Bruck RE, Reich S: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 1978, 4: 1–9.
Goebel K, Reich S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York; 1984.
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53(5–6):475–504. 10.1080/02331930412331327157
Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.
Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965
Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5
Xu H-K: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332
Suzuki T: Strong convergence of Krasnoselskii and Mann type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 2005, 305: 227–239. 10.1016/j.jmaa.2004.11.017
Aoyama K, Kimura Y, Takahashi W, Toyoda M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. TMA 2007, 67: 2350–2360. 10.1016/j.na.2006.08.032
Baillon JB, Haddad G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. 10.1007/BF03007664
Takahashi W: Nonlinear Functional Analysis, Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.
Acknowledgements
The author would like to thank Professor Simeon Reich and two anonymous referees for sincere evaluation and constructive comments which improved the paper considerably.
Competing interests
The author declares that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Naraghirad, E. Strong convergence of projection methods for a countable family of nonexpansive mappings and applications to constrained convex minimization problems. J Inequal Appl 2013, 546 (2013). https://doi.org/10.1186/1029-242X-2013-546