- Open Access
Regularized gradient-projection methods for equilibrium and constrained convex minimization problems
© Tian and Huang; licensee Springer. 2013
- Received: 30 January 2013
- Accepted: 30 April 2013
- Published: 14 May 2013
In this article, based on Marino and Xu’s method, an iterative method which combines the regularized gradient-projection algorithm (RGPA) and the averaged mappings approach is proposed for finding a common solution of equilibrium and constrained convex minimization problems. Under suitable conditions, it is proved that the sequences generated by implicit and explicit schemes converge strongly. The results of this paper extend and improve some existing results.
MSC: 58E35, 47H09, 65J15.
- iterative method
- constrained convex minimization
- fixed point
- variational inequality
Let H be a real Hilbert space with the inner product ⟨·,·⟩ and the induced norm ∥·∥. Let C be a nonempty, closed and convex subset of H. We need some nonlinear operators, which are introduced below.
Let T : C → C and A, B, F : H → H be nonlinear operators.
T is nonexpansive if ∥Tx − Ty∥ ≤ ∥x − y∥ for all x, y ∈ C.
T is Lipschitz continuous if there exists a constant L > 0 such that ∥Tx − Ty∥ ≤ L∥x − y∥ for all x, y ∈ C.
A is a strongly positive bounded linear operator if there exists a constant γ̄ > 0 such that ⟨Ax, x⟩ ≥ γ̄∥x∥² for all x ∈ H.
B is monotone if ⟨Bx − By, x − y⟩ ≥ 0 for all x, y ∈ H.
Given a number η > 0, F is η-strongly monotone if ⟨Fx − Fy, x − y⟩ ≥ η∥x − y∥² for all x, y ∈ H.
Given a number υ > 0, B is υ-inverse strongly monotone (υ-ism) if ⟨Bx − By, x − y⟩ ≥ υ∥Bx − By∥² for all x, y ∈ H.
Inverse strongly monotone operators have been studied widely (see [1–3]) and applied to solve practical problems in various fields, for instance, traffic assignment problems (see [4, 5]).
T is firmly nonexpansive if and only if 2T − I is nonexpansive or, equivalently, ⟨Tx − Ty, x − y⟩ ≥ ∥Tx − Ty∥² for all x, y ∈ H.
T : H → H is said to be an averaged mapping if T = (1 − α)I + αS, where α is a number in (0, 1) and S : H → H is nonexpansive. In particular, projections are (1/2)-averaged mappings.
The set of solutions of (1.1) is denoted by EP(ϕ). Given a mapping A : C → H, let ϕ(x, y) = ⟨Ax, y − x⟩ for all x, y ∈ C. Then z ∈ EP(ϕ) if and only if ⟨Az, y − z⟩ ≥ 0 for all y ∈ C, i.e., z is a solution of the variational inequality. Numerous problems in physics, optimization and economics reduce to finding a solution of (1.1). Some methods have been proposed to solve the equilibrium problem; see, for instance, [11–15].
We use Fix(T) to denote the set of fixed points of the mapping T; that is, Fix(T) = {x ∈ H : Tx = x}.
x_{n+1} = α_n γ f(x_n) + (I − α_n A)T x_n for all n ≥ 0, where the sequence {α_n} ⊂ (0, 1) and the parameter γ satisfy some appropriate conditions. Further, they proved that the implicit and explicit schemes converge strongly to x̃ ∈ Fix(T), where x̃ is the unique solution of the variational inequality ⟨(A − γf)x̃, x − x̃⟩ ≥ 0 for all x ∈ Fix(T).
x_{n+1} = Proj_C(x_n − γ_n ∇f(x_n)), n ≥ 0, where the parameters γ_n are real positive numbers and Proj_C is the metric projection from H onto C. It is known that the convergence of the sequence {x_n} depends on the behavior of the gradient ∇f. If ∇f is only assumed to be inverse strongly monotone, then {x_n} converges only weakly to a minimizer of (1.5). If ∇f is Lipschitz continuous and strongly monotone, then {x_n} converges strongly to a minimizer of (1.5) provided the parameters satisfy appropriate conditions.
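For intuition, the plain gradient-projection iteration can be sketched numerically. The following is a minimal illustration, not the paper's algorithm: it assumes a simple quadratic objective f(x) = ½∥x − b∥², whose gradient is 1-Lipschitz, and a box constraint whose projection is cheap to compute.

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    # Metric projection onto the box C = [lo, hi]^n (an illustrative closed
    # convex set; any C with a computable projection would do).
    return np.clip(x, lo, hi)

def gradient_projection(grad, proj, x0, gamma, n_iter=500):
    # Gradient-projection iteration: x_{n+1} = Proj_C(x_n - gamma * grad f(x_n)).
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = proj(x - gamma * grad(x))
    return x

# Minimize f(x) = 0.5 * ||x - b||^2 over C = [0, 1]^2; grad f(x) = x - b has
# Lipschitz constant L = 1, so any fixed step 0 < gamma < 2/L converges.
b = np.array([2.0, -0.5])
x_star = gradient_projection(lambda x: x - b, proj_box, np.zeros(2), gamma=1.0)
print(x_star)  # the minimizer is Proj_C(b) = [1.0, 0.0]
```

In finite dimensions this converges even with a constant step; the infinite-dimensional failure of strong convergence discussed next is exactly what motivates the modified schemes.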
It is well known that Xu [9] gave an averaged mapping approach to the gradient-projection method and constructed a counterexample showing that the sequence generated by the gradient-projection method need not converge strongly in an infinite-dimensional space. Moreover, he presented two modifications of the gradient-projection method which are shown to have strong convergence.
where V : H → H is an l-Lipschitzian mapping with a constant l > 0, and F : H → H is a k-Lipschitzian and η-strongly monotone operator with constants k, η > 0. Let 0 < μ < 2η/k² and 0 ≤ γl < τ, where τ = μ(η − μk²/2). Under suitable conditions on the parameter sequences, it is proved that the sequence {x_n} generated by (1.6) converges strongly to a minimizer of (1.5).
It was the first time that the equilibrium problem and the constrained convex minimization problem were solved jointly by a single iterative scheme.
where the parameters α_n are positive, γ is a suitably chosen positive constant, and Proj_C is the metric projection from H onto C. It is known that the sequence {x_n} generated by algorithm (1.8) converges weakly to a minimizer of (1.5) in the setting of infinite-dimensional spaces (see [9]).
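The regularized scheme can likewise be sketched. A minimal illustration, assuming the standard Tikhonov regularization f_{α_n}(x) = f(x) + (α_n/2)∥x∥², so that ∇f_{α_n} = ∇f + α_n I, with an illustrative ball constraint and the hypothetical choice α_n = 1/(n + 1) (neither taken from the paper):

```python
import numpy as np

def proj_ball(x, radius=1.0):
    # Projection onto the closed Euclidean ball of the given radius.
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def regularized_gpa(grad, proj, x0, gamma, n_iter=2000):
    # Regularized gradient-projection iteration:
    # x_{n+1} = Proj_C(x_n - gamma * (grad f(x_n) + alpha_n * x_n)),
    # with Tikhonov weight alpha_n -> 0, here alpha_n = 1/(n + 1).
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        alpha_n = 1.0 / (n + 1)
        x = proj(x - gamma * (grad(x) + alpha_n * x))
    return x

# Minimize f(x) = 0.5 * ||x - b||^2 over the unit ball; since ||b|| = 5 > 1,
# the constrained minimizer is b/||b|| = [0.6, 0.8].
b = np.array([3.0, 4.0])
x_star = regularized_gpa(lambda x: x - b, proj_ball, np.zeros(2), gamma=0.5)
print(x_star)  # converges to [0.6, 0.8]
```

The vanishing weight α_n is what singles out a particular (minimum-norm-type) minimizer in the analysis, while the unregularized iteration only guarantees weak convergence in infinite dimensions.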
In this section we introduce some useful properties and lemmas which will be used in the proofs for the main results in the next section.
Some properties of averaged mappings are gathered in the proposition below.
If T = (1 − α)S + αV for some α ∈ (0, 1) and if S is averaged and V is nonexpansive, then T is averaged.
The composition of finitely many averaged mappings is averaged. That is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1 ⋯ T_N. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1, α_2 ∈ (0, 1), then the composite T_1 T_2 is α-averaged, where α = α_1 + α_2 − α_1 α_2.
- (iii) If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then Fix(T_1) ∩ ⋯ ∩ Fix(T_N) = Fix(T_1 ⋯ T_N).
Here the notation Fix(T) denotes the set of fixed points of the mapping T; that is, Fix(T) = {x ∈ H : Tx = x}.
The following proposition gathers some results on the relationship between averaged mappings and inverse strongly monotone operators.
T is nonexpansive if and only if the complement I − T is (1/2)-ism;
If T is υ-ism, then, for γ > 0, γT is (υ/γ)-ism;
T is averaged if and only if the complement I − T is υ-ism for some υ > 1/2; indeed, for α ∈ (0, 1), T is α-averaged if and only if I − T is (1/(2α))-ism.
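The complement relations above are easy to test numerically. A small sanity check, assuming C = [0, 1]^n for illustration: Proj_C is then a (1/2)-averaged mapping, so its complement I − Proj_C should be 1-ism.

```python
import numpy as np

rng = np.random.default_rng(0)
proj = lambda x: np.clip(x, 0.0, 1.0)  # Proj onto [0,1]^n, a (1/2)-averaged map

# For a (1/2)-averaged T, the complement I - T is (1/(2*(1/2)))-ism = 1-ism:
# <(I-T)x - (I-T)y, x - y> >= ||(I-T)x - (I-T)y||^2 for all x, y.
ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    u = (x - proj(x)) - (y - proj(y))
    if np.dot(u, x - y) < np.dot(u, u) - 1e-12:
        ok = False
print(ok)  # True: the 1-ism inequality holds on all sampled pairs
```

This is of course only a spot check on random points, not a proof; the proposition gives the exact equivalence.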
Lemma 2.1 Assume that {a_n} is a sequence of nonnegative real numbers such that a_{n+1} ≤ (1 − γ_n)a_n + γ_n δ_n, n ≥ 0, where {γ_n} is a sequence in (0, 1) and {δ_n} is a sequence in ℝ such that
(i) Σ_{n=1}^∞ γ_n = ∞;
(ii) either lim sup_{n→∞} δ_n ≤ 0 or Σ_{n=1}^∞ |γ_n δ_n| < ∞.
Then lim_{n→∞} a_n = 0.
The so-called demiclosed principle for nonexpansive mappings will often be used.
Lemma 2.2 (Demiclosed principle)
Let C be a closed and convex subset of a Hilbert space H and let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. If {x_n} is a sequence in C weakly converging to x and if {(I − T)x_n} converges strongly to y, then (I − T)x = y. In particular, if y = 0, then x ∈ Fix(T).
Lemma 2.3 Let F : H → H be a k-Lipschitzian and η-strongly monotone operator and let V : H → H be an l-Lipschitzian mapping. Then, for 0 ≤ γl < μη, ⟨(μF − γV)x − (μF − γV)y, x − y⟩ ≥ (μη − γl)∥x − y∥² for all x, y ∈ H.
That is, μF − γV is strongly monotone with a coefficient μη − γl.
The metric projection Proj_C is characterized as follows: given x ∈ H, z = Proj_C x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C.
Lemma 2.5 
Assume that A is a strongly positive bounded linear operator on a Hilbert space H with a coefficient γ̄ > 0 and 0 < ρ ≤ ∥A∥⁻¹. Then ∥I − ρA∥ ≤ 1 − ργ̄.
For solving the equilibrium problem for a bifunction ϕ : C × C → ℝ, let us assume that ϕ satisfies the following conditions:
(A1) ϕ(x, x) = 0 for all x ∈ C;
(A2) ϕ is monotone, i.e., ϕ(x, y) + ϕ(y, x) ≤ 0 for all x, y ∈ C;
(A3) for each x, y, z ∈ C, lim sup_{t↓0} ϕ(tz + (1 − t)x, y) ≤ ϕ(x, y);
(A4) for each x ∈ C, y ↦ ϕ(x, y) is convex and lower semicontinuous.
Lemma 2.6 Let C be a nonempty closed convex subset of H and let ϕ be a bifunction of C × C into ℝ satisfying (A1)-(A4). Let r > 0 and x ∈ H. Then there exists z ∈ C such that ϕ(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C.
Lemma 2.7 Assume that ϕ : C × C → ℝ satisfies (A1)-(A4). For r > 0 and x ∈ H, define the mapping T_r : H → C by T_r(x) = {z ∈ C : ϕ(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C}. Then the following hold:
T_r is single-valued;
T_r is firmly nonexpansive, i.e., ∥T_r x − T_r y∥² ≤ ⟨T_r x − T_r y, x − y⟩ for any x, y ∈ H;
Fix(T_r) = EP(ϕ);
EP(ϕ) is closed and convex.
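For concreteness, here is a worked one-dimensional resolvent, using a hypothetical bifunction chosen for illustration, not taken from the paper: for ϕ(z, y) = a·z·(y − z) with a ≥ 0 on C = ℝ, conditions (A1)-(A4) hold (e.g. ϕ(x, y) + ϕ(y, x) = −a(x − y)² ≤ 0), and the defining inequality a·z·(y − z) + (1/r)(y − z)(z − x) ≥ 0 for all y forces a·z + (z − x)/r = 0, i.e. T_r(x) = x/(1 + ra).

```python
# Hypothetical 1-D example: phi(z, y) = a*z*(y - z), a >= 0, so that the
# resolvent has the closed form T_r(x) = x / (1 + r*a).
a, r = 2.0, 0.5
T = lambda x: x / (1.0 + r * a)

# Spot-check firm nonexpansiveness: |T x - T y|^2 <= (T x - T y)*(x - y).
pairs = [(-3.0, 1.5), (0.2, 7.0), (-1.0, -4.0)]
firm = all((T(x) - T(y)) ** 2 <= (T(x) - T(y)) * (x - y) + 1e-12
           for x, y in pairs)
print(firm)  # True

# Fix(T_r) = {0}, and z = 0 is indeed the unique solution of the equilibrium
# problem: a*z*(y - z) >= 0 for all y forces z = 0, so Fix(T_r) = EP(phi).
```

The example makes Lemma 2.7 tangible: T_r here is a contraction with factor 1/(1 + ra) ≤ 1, and its fixed point set coincides with the equilibrium solution set.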
We adopt the following notation:
x_n → x means that {x_n} converges to x strongly;
x_n ⇀ x means that {x_n} converges to x weakly.
Recall that throughout this paper we denote by U the solution set of the constrained convex minimization problem (1.5) and by EP(ϕ) the solution set of the equilibrium problem (1.1).
Note that the net {x_t} indeed depends on V as well, but we will suppress this dependence on V for simplicity of notation throughout the rest of this paper.
The following theorem summarizes the properties of the sequence .
Equivalently, we have .
It is clear that , i.e., .
where and .
Hence is bounded and we also obtain that is bounded.
Next, we show that .
Since both and are bounded and , , it follows that .
we have and .
Since is bounded, without loss of generality, we can assume that . Next, we show that .
So, by Lemma 2.2, we get .
Since and , it follows from (A4) that for any .
Let , , , then we have and hence .
and hence . From (A3), we have for any , hence . Therefore, .
Since and , it follows from (3.7) that as .
Next, we show that z solves the variational inequality (3.2). Observe that .
It follows that is a solution of the variational inequality (3.2). Further, by the uniqueness of the solution of the variational inequality (3.2), we conclude that as .
This completes the proof. □
where , , and . Let , and satisfy the following conditions:
(C1) , , ;
(C2) , , , ;
(C3) , , .
Then the sequence converges strongly to a point , which solves the variational inequality (3.2).
- (a) x* solves the minimization problem (1.5) if and only if, for each fixed λ > 0, x* solves the fixed-point equation x* = Proj_C(I − λ∇f)x*.
The gradient ∇f is (1/L)-ism, where L is the Lipschitz constant of ∇f.
- (c) Proj_C(I − λ∇f) is a ((2 + λL)/4)-averaged mapping for 0 < λ < 2/L; in particular, the following relation holds: Proj_C(I − λ∇f) = ((2 − λL)/4)I + ((2 + λL)/4)S for some nonexpansive mapping S.
Hence is bounded. From (3.9), we also derive that is bounded.
Next, we show that .
It follows that .
where is a unique solution of the variational inequality (3.2).
By (3.17) and , we derive that .
Hence, by , we get .
In terms of Lemma 2.2, we get .
Then, by the same argument as in the proof of Theorem 3.1, we have .
Finally, we show that .
By (3.20) and , we get . Now applying Lemma 2.1 to (3.21) concludes that as . □
In this section, we give an application of Theorem 3.2 to the split feasibility problem (SFP for short), which was introduced by Censor and Elfving [25]. Since its inception in 1994, the SFP has received much attention (see [21, 26, 27]) due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy.
where C and Q are nonempty closed convex subsets of Hilbert spaces H_1 and H_2, respectively, and A : H_1 → H_2 is a bounded linear operator.
where the parameter γ satisfies 0 < γ < 2/∥A∥². He proved that the sequence generated by (4.3) converges weakly to a solution of the SFP.
for all n,
where V is Lipschitzian with a constant l > 0 and A is a strongly positive bounded linear operator with a constant γ̄ > 0. Suppose that the SFP (4.1) is consistent. We can show that the sequence generated by (4.4) converges strongly to a solution of the SFP (4.1) provided the parameter sequences satisfy appropriate conditions.
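For comparison with the strongly convergent scheme (4.4), the classical CQ iteration, the weak-convergence baseline for the SFP, can be sketched as follows. This is a minimal illustration on a toy consistent SFP, not the paper's algorithm:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=3000):
    # Classical CQ iteration for the SFP: find x in C with A x in Q.
    # x_{n+1} = Proj_C(x_n - gamma * A^T (I - Proj_Q) A x_n),
    # with step 0 < gamma < 2/||A||^2 (here gamma = 1/||A||^2).
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = proj_C(x - gamma * A.T @ (A @ x - proj_Q(A @ x)))
    return x

# Toy consistent SFP: C = [0,1]^2, Q = [2,3] x [0,1], A = 2*I; the solution
# set is {1} x [0, 0.5], since x1 must satisfy 2*x1 in [2,3] with x1 <= 1.
A = 2.0 * np.eye(2)
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, [2.0, 0.0], [3.0, 1.0])
x = cq_algorithm(A, proj_C, proj_Q, np.array([0.0, 0.0]))
print(x)  # a point of C with A x in Q
```

In infinite dimensions this iteration is only weakly convergent, which is precisely the gap that the regularized scheme (4.4) is designed to close.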
Applying Theorem 3.2, we obtain the following result.
Theorem 4.1 Assume that the split feasibility problem (4.1) is consistent. Let the sequence be generated by (4.4), where the parameter sequences satisfy conditions (C1)-(C3). Then the sequence converges strongly to a solution of the split feasibility problem (4.1).
for all n.
By Theorem 3.2, we obtain the conclusion immediately. □
Methods for solving the equilibrium problem (EP) and the constrained convex minimization problem have each been studied extensively in Hilbert spaces. In 2012, Tian and Liu proposed an iterative method for finding a common solution of an EP and a constrained convex minimization problem. In this paper, for the first time, we combine the regularized gradient-projection algorithm with the averaged mappings approach to propose implicit and explicit algorithms for finding the common solution of an EP and a constrained convex minimization problem; this common solution also solves a certain variational inequality.
The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (No. ZXH2012K001).
- Brezis H: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam; 1973.
- Jaiboon C, Kumam P: A hybrid extragradient viscosity approximation method for solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Fixed Point Theory Appl. 2009. doi:10.1155/2009/374815
- Jitpeera T, Katchang P, Kumam P: A viscosity of Cesàro mean approximation methods for a mixed equilibrium, variational inequalities, and fixed point problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/945051
- Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. doi:10.1007/BFb0120965
- Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for cocoercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. doi:10.1016/S0377-2217(03)00423-5
- Bauschke H, Borwein J: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. doi:10.1137/S0036144593251710
- Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. doi:10.1088/0266-5611/20/1/006
- Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53: 475–504. doi:10.1080/02331930412331327157
- Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. doi:10.1007/s10957-011-9837-z
- Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, New York; 2001:473–504.
- Takahashi S, Takahashi W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331: 506–515. doi:10.1016/j.jmaa.2006.08.036
- Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.
- Flam SD, Antipin AS: Equilibrium programming using proximal-like algorithms. Math. Program. 1997, 78: 29–41.
- He HM, Liu SY, Cho YJ: An explicit method for systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings. J. Comput. Appl. Math. 2011, 235: 4128–4139. doi:10.1016/j.cam.2011.03.003
- Qin XL, Cho YJ, Kang SM: Convergence analysis on hybrid projection algorithms for equilibrium problems and variational inequality problems. Math. Model. Anal. 2009, 14: 335–351. doi:10.3846/1392-6292.2009.14.335-351
- Moudafi A: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. doi:10.1006/jmaa.1999.6615
- Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. doi:10.1016/j.jmaa.2004.04.059
- Marino G, Xu HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. doi:10.1016/j.jmaa.2005.05.028
- Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. doi:10.1016/j.na.2011.05.005
- Tian M, Liu L: General iterative methods for equilibrium and constrained convex minimization problem. Optimization 2012. doi:10.1080/02331934.2012.713361
- Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018
- Martinez-Yanes C, Xu HK: Strong convergence of the CQ method for fixed-point iteration processes. Nonlinear Anal. 2006, 64: 2400–2411. doi:10.1016/j.na.2005.08.018
- Hundal H: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35–61. doi:10.1016/j.na.2003.11.004
- Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
- Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. doi:10.1007/BF02142692
- López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004
- Zhao JL, Zhang YJ, Yang QZ: Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 2012, 219: 1644–1653. doi:10.1016/j.amc.2012.08.005
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.