General iterative scheme based on the regularization for solving a constrained convex minimization problem
Journal of Inequalities and Applications volume 2013, Article number: 550 (2013)
Abstract
It is well known that the regularization method plays an important role in solving a constrained convex minimization problem. In this article, we introduce implicit and explicit iterative schemes based on the regularization for solving a constrained convex minimization problem. We establish results on the strong convergence of the sequences generated by the proposed schemes to a solution of the minimization problem. Such a point is also a solution of a variational inequality. We also apply the algorithm to solve a split feasibility problem.
MSC: 47H09, 47H05, 47H06, 47J25, 47J05.
1 Introduction
The gradient-projection algorithm is a classical and powerful method for solving constrained convex optimization problems and has been studied by many authors (see [1–14] and the references therein). The method has recently been applied to solve split feasibility problems, which arise in image reconstruction and intensity-modulated radiation therapy (see [15–22]).
Consider the problem of minimizing a functional $f$ over the constraint set $C$ (assuming that $C$ is a nonempty closed and convex subset of a real Hilbert space $H$). If $f: C \to \mathbb{R}$ is a convex and continuously Fréchet differentiable functional, the gradient-projection algorithm generates a sequence $\{x_n\}$ determined by the gradient $\nabla f$ and the metric projection $P_C$ onto $C$. Under the condition that $f$ has a Lipschitz continuous and strongly monotone gradient, the sequence $\{x_n\}$ converges strongly to a minimizer of $f$ in $C$. If the gradient of $f$ is only assumed to be inverse strongly monotone, then $\{x_n\}$ can only be weakly convergent when $H$ is infinite-dimensional.
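To fix ideas, here is a minimal numerical sketch of the gradient-projection iteration $x_{n+1} = P_C(x_n - \gamma\nabla f(x_n))$ on a toy problem; the objective, the constraint set and all names (`grad_f`, `project_C`) are illustrative choices, not data from the works cited above.

```python
import numpy as np

# Toy instance: minimize f(x) = 0.5*||x - b||^2 over the closed unit ball C.
# grad_f is 1-Lipschitz (L = 1), so any step size gamma in (0, 2/L) is admissible.
b = np.array([3.0, 4.0])

def grad_f(x):
    return x - b                               # gradient of 0.5*||x - b||^2

def project_C(x):
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm      # metric projection onto the unit ball

x = np.zeros(2)                                # arbitrary initial guess in C
gamma = 1.0
for _ in range(100):
    x = project_C(x - gamma * grad_f(x))       # x_{n+1} = P_C(x_n - gamma * grad_f(x_n))

print(x)                                       # approaches b/||b|| = (0.6, 0.8)
```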
Recently, Xu [23] gave an operator-oriented approach as an alternative to the gradient-projection method and to the relaxed gradient-projection algorithm, namely, an averaged mapping approach. He also presented two modifications of gradient-projection algorithms which are shown to have strong convergence.
On the other hand, regularization, in particular the traditional Tikhonov regularization, is usually used to solve ill-posed optimization problems [24, 25]. Under suitable conditions, the regularization method is only weakly convergent.
The purpose of this paper is to present a general iterative method that combines the regularization method with the averaged mapping approach. We first propose implicit and explicit iterative schemes for solving a constrained convex minimization problem and prove that both schemes converge strongly to a solution of the minimization problem, which is also a solution of a variational inequality. Furthermore, we apply the method to solve a split feasibility problem.
2 Preliminaries
Throughout the paper, we assume that $H$ is a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively, and that $C$ is a nonempty closed convex subset of $H$. The set of fixed points of a mapping $T$ is denoted by $\mathrm{Fix}(T)$, that is, $\mathrm{Fix}(T) := \{x \in H : Tx = x\}$. We write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$; the fact that $\{x_n\}$ converges strongly to $x$ is denoted by $x_n \to x$. The following definitions and results are needed in the subsequent sections.
Recall that a mapping $T: H \to H$ is said to be $L$-Lipschitzian if
$$\|Tx - Ty\| \le L\|x - y\|, \quad x, y \in H, \quad (1)$$
where $L \ge 0$ is a constant. In particular, if $L \in [0,1)$, then $T$ is called a contraction on $H$; if $L = 1$, then $T$ is called a nonexpansive mapping on $H$. $T$ is called firmly nonexpansive if $2T - I$ is nonexpansive, or equivalently,
$$\langle x - y, Tx - Ty\rangle \ge \|Tx - Ty\|^2, \quad x, y \in H.$$
Alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as $T = \frac{1}{2}(I + S)$, where $S: H \to H$ is nonexpansive.
Definition 2.1 A mapping $T: H \to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping; that is,
$$T = (1 - \alpha)I + \alpha S, \quad (2)$$
where $\alpha$ is a number in $(0,1)$ and $S: H \to H$ is nonexpansive. More precisely, when (2) holds, we say that $T$ is $\alpha$-averaged. Clearly, a firmly nonexpansive mapping (in particular, a projection) is a $\frac{1}{2}$-averaged map.
Proposition 2.1 For given operators $W, T, V: H \to H$:

(i) If $T = (1 - \alpha)W + \alpha V$ for some $\alpha \in (0,1)$ and if $W$ is averaged and $V$ is nonexpansive, then $T$ is averaged.

(ii) $T$ is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.

(iii) If $T = (1 - \alpha)W + \alpha V$ for some $\alpha \in (0,1)$ and if $W$ is firmly nonexpansive and $V$ is nonexpansive, then $T$ is averaged.

(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0,1)$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.
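As a quick worked instance of item (iv), composing two firmly nonexpansive (hence $\frac{1}{2}$-averaged) mappings gives an averaged map with

```latex
\[
\alpha \;=\; \alpha_1 + \alpha_2 - \alpha_1\alpha_2
       \;=\; \tfrac12 + \tfrac12 - \tfrac12\cdot\tfrac12
       \;=\; \tfrac34 ,
\]
```

so the composite is $\frac{3}{4}$-averaged; the same formula is used in the proof of Proposition 3.1 below.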
Recall that the metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C: H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property
$$\|x - P_C x\| = \min_{y \in C}\|x - y\|. \quad (3)$$
Lemma 2.1 For given $x \in H$ and $z \in C$:

(i) $z = P_C x$ if and only if $\langle x - z, y - z\rangle \le 0$ for all $y \in C$;

(ii) $z = P_C x$ if and only if $\|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2$ for all $y \in C$;

(iii) $\langle P_C x - P_C y, x - y\rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H$.

Consequently, $P_C$ is nonexpansive.
Lemma 2.2 The following inequality holds in a Hilbert space $X$:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle, \quad x, y \in X.$$
Lemma 2.3 [27] In a Hilbert space $H$, we have
$$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$$
for all $x, y \in H$ and $\lambda \in [0,1]$.
Lemma 2.4 (Demiclosedness principle [27])
Let $C$ be a closed and convex subset of a Hilbert space $H$, and let $T: C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$. In particular, if $y = 0$, then $x \in \mathrm{Fix}(T)$.
Definition 2.2 A nonlinear operator $G$ with domain $D(G) \subseteq H$ and range $R(G) \subseteq H$ is said to be:

(i) monotone if $\langle x - y, Gx - Gy\rangle \ge 0$ for all $x, y \in D(G)$;

(ii) $\beta$-strongly monotone if there exists $\beta > 0$ such that $\langle x - y, Gx - Gy\rangle \ge \beta\|x - y\|^2$ for all $x, y \in D(G)$;

(iii) $\nu$-inverse strongly monotone (for short, $\nu$-ism) if there exists $\nu > 0$ such that $\langle x - y, Gx - Gy\rangle \ge \nu\|Gx - Gy\|^2$ for all $x, y \in D(G)$.
Proposition 2.2 [16] Let $T: H \to H$ be an operator from $H$ to itself.

(i) $T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism.

(ii) If $T$ is $\nu$-ism, then, for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism.

(iii) $T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > \frac{1}{2}$. Indeed, for $\alpha \in (0,1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.
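The "indeed" part of item (iii) follows from a one-line computation. A sketch, writing $T = (1 - \alpha)I + \alpha N$ with $N$ nonexpansive:

```latex
\[
I - T \;=\; \alpha\,(I - N),
\]
```

and since $I - N$ is $\frac{1}{2}$-ism by (i), scaling by $\alpha$ makes $I - T$ a $\frac{1}{2\alpha}$-ism by (ii); reversing the two steps gives the converse.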
Lemma 2.5 [6] Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that

(i) $\sum_{n=0}^\infty \gamma_n = \infty$;

(ii) $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^\infty|\gamma_n\delta_n| < \infty$.

Then $\lim_{n\to\infty}a_n = 0$.
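The following small numerical check illustrates Lemma 2.5; the schedules $\gamma_n = 1/(n+2)$ and $\delta_n = 1/(n+1)$ are illustrative choices satisfying condition (i) and the first alternative in condition (ii).

```python
# Illustration of Lemma 2.5: a_{n+1} = (1 - gamma_n) * a_n + gamma_n * delta_n
# with sum(gamma_n) = infinity and delta_n -> 0; the lemma predicts a_n -> 0.
a = 1.0
for n in range(100_000):
    gamma = 1.0 / (n + 2)
    delta = 1.0 / (n + 1)
    a = (1.0 - gamma) * a + gamma * delta
print(a)   # close to 0 after many iterations
```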
3 Main results
We now look at the constrained convex minimization problem:
$$\min_{x \in C} f(x), \quad (4)$$
where $C$ is a closed and convex subset of a Hilbert space $H$ and $f: C \to \mathbb{R}$ is a real-valued convex function. Assume that problem (4) is consistent, and let $S$ denote its solution set. If $f$ is Fréchet differentiable, then the gradient-projection algorithm (GPA) generates a sequence $\{x_n\}$ according to the recursive formula
$$x_{n+1} = P_C\bigl(x_n - \gamma\nabla f(x_n)\bigr), \quad n \ge 0, \quad (5)$$
or, more generally,
$$x_{n+1} = P_C\bigl(x_n - \gamma_n\nabla f(x_n)\bigr), \quad n \ge 0, \quad (6)$$
where, in both (5) and (6), the initial guess $x_0$ is taken from $C$ arbitrarily and the parameters $\gamma$ or $\gamma_n$ are positive real numbers.
As a matter of fact, it is known that if $\nabla f$ fails to be strongly monotone and is only $\frac{1}{L}$-ism, namely, there is a constant $L > 0$ such that
$$\langle \nabla f(x) - \nabla f(y), x - y\rangle \ge \frac{1}{L}\|\nabla f(x) - \nabla f(y)\|^2, \quad x, y \in C,$$
then, under suitable assumptions on $\gamma$ or $\{\gamma_n\}$ (for instance, $0 < \gamma < 2/L$), algorithms (5) and (6) can still converge, but only in the weak topology.
Now consider the regularized minimization problem
$$\min_{x \in C} f_\alpha(x) := f(x) + \frac{\alpha}{2}\|x\|^2,$$
where $\alpha > 0$ is the regularization parameter, and again $f$ is convex with a $\frac{1}{L}$-ism gradient $\nabla f$.
It is known that the regularization method is defined as follows:
$$x_{n+1} = P_C\bigl(x_n - \gamma\nabla f_{\alpha_n}(x_n)\bigr) = P_C\bigl(x_n - \gamma(\nabla f(x_n) + \alpha_n x_n)\bigr), \quad n \ge 0,$$
where the regularization parameters $\alpha_n > 0$ satisfy $\alpha_n \to 0$. We also know that, under suitable conditions, $x_n \rightharpoonup \tilde{x}$, where $\tilde{x} \in S$ is a solution of constrained convex minimization problem (4).
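A minimal sketch of this regularized iteration on the toy problem from Section 1; the schedule $\alpha_n = 1/(n+2)$ is an illustrative choice, and the step size respects $\gamma < 2/(L + \alpha_n)$.

```python
import numpy as np

# Regularized gradient-projection step:
#   x_{n+1} = P_C( x_n - gamma_n * (grad_f(x_n) + alpha_n * x_n) )
b = np.array([3.0, 4.0])
grad_f = lambda x: x - b                       # L = 1
project_C = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)

x = np.zeros(2)
for n in range(200):
    alpha_n = 1.0 / (n + 2)                    # regularization parameter, -> 0
    gamma_n = 1.0 / (1.0 + alpha_n)            # gamma_n < 2 / (L + alpha_n)
    x = project_C(x - gamma_n * (grad_f(x) + alpha_n * x))

print(x)                                       # again approaches (0.6, 0.8)
```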
Let $h: C \to C$ be a contraction with a constant $\rho \in [0,1)$. In this section, we introduce the following iterative scheme generating a net $\{x_s\}$ in an implicit way:
$$x_s = P_C\bigl[s h(x_s) + (1 - s)T_{\alpha_s}x_s\bigr], \quad s \in (0,1), \quad (7)$$
where $T_{\alpha_s} := P_C(I - \lambda\nabla f_{\alpha_s})$ and $\nabla f_{\alpha_s} := \nabla f + \alpha_s I$. Let $\lambda$ and $\alpha_s$ satisfy the following conditions:

(i) $0 < \lambda < \frac{2}{L + \alpha_s}$ for all $s \in (0,1)$;

(ii) $\alpha_s \in (0,1)$ is continuous in $s$ and $\alpha_s/s \to 0$ as $s \to 0$.
Consider a mapping
$$Q_s x := P_C\bigl[s h(x) + (1 - s)T_{\alpha_s}x\bigr], \quad x \in C. \quad (8)$$
It is easy to see that $Q_s$ is a contraction. Indeed, we have
$$\|Q_s x - Q_s y\| \le s\|h(x) - h(y)\| + (1 - s)\|T_{\alpha_s}x - T_{\alpha_s}y\| \le \bigl[1 - s(1 - \rho)\bigr]\|x - y\|.$$
Hence, $Q_s$ has a unique fixed point in $C$, denoted by $x_s$, which uniquely solves fixed point equation (7).
We prove, in Section 3.1, the strong convergence of $\{x_s\}$, as $s \to 0$, to a solution of minimization problem (4) which is also a solution of a variational inequality.
For a given arbitrary guess $x_0 \in C$ and sequences $\{\theta_n\} \subset (0,1)$, $\{\alpha_n\} \subset (0,1)$, we also propose the following explicit scheme that generates a sequence $\{x_n\}$ in an explicit way:
$$x_{n+1} = \theta_n h(x_n) + (1 - \theta_n)P_C\bigl(x_n - \lambda_n\nabla f_{\alpha_n}(x_n)\bigr), \quad n \ge 0, \quad (9)$$
where $\nabla f_{\alpha_n} := \nabla f + \alpha_n I$ and $0 < \lambda_n < \frac{2}{L + \alpha_n}$ for each $n$. It is proved in Section 3.2 that this sequence converges strongly to a minimizer of (4).
3.1 Convergence of the implicit scheme
Proposition 3.1 If $0 \le s < 1$, $0 < \lambda < \frac{2}{L + s}$ and $\nabla f$ is $\frac{1}{L}$-ism, then $T_s := P_C(I - \lambda\nabla f_s)$ is $\frac{2 + \lambda(L + s)}{4}$-averaged, where $\nabla f_s := \nabla f + sI$; in particular, $T_s$ is nonexpansive. In addition, for $s, t \in [0,1)$,
$$\|T_s x - T_t x\| \le \lambda|s - t|\,\|x\|, \quad x \in C. \quad (10)$$

Proof Since $\nabla f$ is $\frac{1}{L}$-ism, $\nabla f_s = \nabla f + sI$ is $\frac{1}{L+s}$-ism, so $\lambda\nabla f_s$ is $\frac{1}{\lambda(L+s)}$-ism and, by Proposition 2.2, $I - \lambda\nabla f_s$ is $\frac{\lambda(L+s)}{2}$-averaged; because $P_C$ is $\frac{1}{2}$-averaged, by Proposition 2.1, $T_s = P_C(I - \lambda\nabla f_s)$ is $\frac{2 + \lambda(L+s)}{4}$-averaged, i.e.,
$$T_s = \Bigl(1 - \frac{2 + \lambda(L+s)}{4}\Bigr)I + \frac{2 + \lambda(L+s)}{4}V_s,$$
where $V_s$ is nonexpansive. The same holds for $T_t$.
Hence, by the nonexpansivity of $P_C$,
$$\|T_s x - T_t x\| \le \bigl\|(I - \lambda\nabla f_s)x - (I - \lambda\nabla f_t)x\bigr\| = \lambda|s - t|\,\|x\|,$$
which proves (10). □
Proposition 3.2 Let $h: C \to C$ be a contraction with $\rho \in [0,1)$, and let $\alpha_s \in (0,1)$ be continuous with respect to $s$, $\alpha_s/s \to 0$ as $s \to 0$. Suppose that problem (4) is consistent, let $S$ denote its solution set and, for each $s \in (0,1)$, let $x_s$ denote the unique solution of fixed point equation (7). Then the following properties hold for the net $\{x_s\}$:

(i) $\{x_s\}$ is bounded;

(ii) $\lim_{s \to 0}\|x_s - Tx_s\| = 0$, where $T := P_C(I - \lambda\nabla f)$;

(iii) $s \mapsto x_s$ defines a continuous curve from $(0,1)$ into $C$.
Proof (i) Take any $\tilde{x} \in S$; then $\tilde{x} = P_C(I - \lambda\nabla f)\tilde{x}$, so, by (10), $\|T_{\alpha_s}\tilde{x} - \tilde{x}\| \le \lambda\alpha_s\|\tilde{x}\|$. Therefore,
$$\|x_s - \tilde{x}\| \le s\|h(x_s) - \tilde{x}\| + (1 - s)\|T_{\alpha_s}x_s - \tilde{x}\| \le \bigl[1 - s(1 - \rho)\bigr]\|x_s - \tilde{x}\| + s\|h(\tilde{x}) - \tilde{x}\| + \lambda\alpha_s\|\tilde{x}\|;$$
hence,
$$\|x_s - \tilde{x}\| \le \frac{1}{1 - \rho}\Bigl(\|h(\tilde{x}) - \tilde{x}\| + \lambda\frac{\alpha_s}{s}\|\tilde{x}\|\Bigr). \quad (11)$$
Since $\alpha_s/s$ is bounded on $(0,1)$ by condition (ii) and continuity, $\{x_s\}$ is bounded.
(ii) Since $T_{\alpha_s}x_s \in C$ and $x_s = P_C[s h(x_s) + (1 - s)T_{\alpha_s}x_s]$, the nonexpansivity of $P_C$ and (10) give
$$\|x_s - Tx_s\| \le \|x_s - T_{\alpha_s}x_s\| + \|T_{\alpha_s}x_s - Tx_s\| \le s\|h(x_s) - T_{\alpha_s}x_s\| + \lambda\alpha_s\|x_s\| \to 0 \quad \text{as } s \to 0.$$
(iii) Take $s, t \in (0,1)$ and calculate, using (10),
$$\|T_{\alpha_s}x_t - T_{\alpha_t}x_t\| \le \lambda|\alpha_s - \alpha_t|\,\|x_t\| \quad (12)$$
and
$$\|x_s - x_t\| \le |s - t|\,\|h(x_t) - T_{\alpha_t}x_t\| + s\rho\|x_s - x_t\| + (1 - s)\bigl[\|x_s - x_t\| + \|T_{\alpha_s}x_t - T_{\alpha_t}x_t\|\bigr]. \quad (13)$$
So, by (12) and (13),
$$\|x_s - x_t\| \le \frac{1}{s(1 - \rho)}\bigl[|s - t|\,\|h(x_t) - T_{\alpha_t}x_t\| + \lambda|\alpha_s - \alpha_t|\,\|x_t\|\bigr],$$
which, together with the continuity of $\alpha_s$, shows that $x_s$ defines a continuous curve from $(0,1)$ into $C$. □
Theorem 3.1 Assume that minimization problem (4) is consistent, and let $S$ denote its solution set. Assume that the gradient $\nabla f$ is $\frac{1}{L}$-ism. Let $h: C \to C$ be a $\rho$-contraction with $\rho \in [0,1)$, and let the net $\{x_s\}$ be generated by fixed point equation (7),
$$x_s = P_C\bigl[s h(x_s) + (1 - s)T_{\alpha_s}x_s\bigr],$$
where $T_{\alpha_s} = P_C(I - \lambda\nabla f_{\alpha_s})$, $\nabla f_{\alpha_s} = \nabla f + \alpha_s I$, $s \in (0,1)$. Let $\lambda$ and $\alpha_s$ satisfy the following conditions:

(i) $0 < \lambda < \frac{2}{L + \alpha_s}$;

(ii) $\alpha_s \in (0,1)$ is continuous in $s$ and $\lim_{s \to 0}\alpha_s/s = 0$.

Then the net $\{x_s\}$ converges strongly, as $s \to 0$, to a minimizer $x^*$ of problem (4), which is also the unique solution of the variational inequality
$$\langle (I - h)x^*, x - x^*\rangle \ge 0, \quad \forall x \in S. \quad (14)$$
Proof Set $y_s := s h(x_s) + (1 - s)T_{\alpha_s}x_s$, so that $x_s = P_C y_s$, and let $T := P_C(I - \lambda\nabla f)$ be as in Proposition 3.2, so that $\mathrm{Fix}(T) = S$.
We first observe that variational inequality (14) has exactly one solution. Indeed, by Lemma 2.1(i), $x^*$ solves (14) if and only if $x^* = P_S(h(x^*))$, and $P_S h$ is a $\rho$-contraction, so it has a unique fixed point $x^*$.
For any given $\tilde{x} \in S$, since $\tilde{x} \in C$ and $x_s = P_C y_s$, Lemma 2.1(i) gives $\langle y_s - x_s, \tilde{x} - x_s\rangle \le 0$. Hence, arguing as in the proof of Proposition 3.2(i), we obtain
$$\|x_s - \tilde{x}\|^2 \le \langle y_s - \tilde{x}, x_s - \tilde{x}\rangle \le \bigl[1 - s(1 - \rho)\bigr]\|x_s - \tilde{x}\|^2 + s\langle h(\tilde{x}) - \tilde{x}, x_s - \tilde{x}\rangle + \lambda\alpha_s\|\tilde{x}\|\,\|x_s - \tilde{x}\|.$$
So,
$$\|x_s - \tilde{x}\|^2 \le \frac{1}{1 - \rho}\Bigl[\langle h(\tilde{x}) - \tilde{x}, x_s - \tilde{x}\rangle + \lambda\frac{\alpha_s}{s}\|\tilde{x}\|\,\|x_s - \tilde{x}\|\Bigr]. \quad (15)$$
Next, we prove that $x_s \to x^*$ as $s \to 0$. Since $\{x_s\}$ is bounded, take any sequence $\{s_n\} \subset (0,1)$ with $s_n \to 0$ such that $x_{s_n} \rightharpoonup \hat{x}$. By Lemma 2.4 and Proposition 3.2(ii), it follows that $\hat{x} \in \mathrm{Fix}(T) = S$. Note that $\alpha_{s_n}/s_n \to 0$ by condition (ii); taking $\tilde{x} = \hat{x}$ in (15) and letting $n \to \infty$, we get $x_{s_n} \to \hat{x}$ strongly. Passing to the limit in (15) for an arbitrary $\tilde{x} \in S$ then gives
$$(1 - \rho)\|\hat{x} - \tilde{x}\|^2 \le \langle h(\tilde{x}) - \tilde{x}, \hat{x} - \tilde{x}\rangle, \quad \text{i.e.,} \quad \langle (I - h)\tilde{x}, \tilde{x} - \hat{x}\rangle \ge 0, \quad \forall \tilde{x} \in S.$$
Since $S$ is closed and convex and $I - h$ is Lipschitz continuous, a standard Minty-type argument shows that $\langle (I - h)\hat{x}, \tilde{x} - \hat{x}\rangle \ge 0$ for all $\tilde{x} \in S$, i.e., $\hat{x}$ solves (14); by uniqueness, $\hat{x} = x^*$. Thus every weak cluster point of $\{x_s\}$, as $s \to 0$, equals $x^*$ and the convergence is strong. So, $x_s \to x^*$, which is also the unique solution of variational inequality (14). □
3.2 Convergence of the explicit scheme
Theorem 3.2 Assume that minimization problem (4) is consistent, and let $S$ denote its solution set. Assume that the gradient $\nabla f$ is $\frac{1}{L}$-ism. Let $h: C \to C$ be a $\rho$-contraction with $\rho \in [0,1)$. Let the sequence $\{x_n\}$ be generated by the following hybrid gradient-projection algorithm:
$$x_{n+1} = \theta_n h(x_n) + (1 - \theta_n)T_{\alpha_n}x_n, \quad n \ge 0, \quad (16)$$
where $T_{\alpha_n} := P_C(I - \lambda_n\nabla f_{\alpha_n})$, $\nabla f_{\alpha_n} := \nabla f + \alpha_n I$ and $0 < \lambda_n < \frac{2}{L + \alpha_n}$, and, in addition, assume that the following conditions are satisfied for $\{\theta_n\} \subset (0,1)$ and $\{\alpha_n\} \subset (0,1)$:

(i) $\theta_n \to 0$; $\sum_{n=0}^\infty\theta_n = \infty$;

(ii) $\lim_{n\to\infty}\alpha_n/\theta_n = 0$;

(iii) $\sum_{n=0}^\infty|\theta_{n+1} - \theta_n| < \infty$;

(iv) $\sum_{n=0}^\infty|\alpha_{n+1} - \alpha_n| < \infty$, $\sum_{n=0}^\infty|\lambda_{n+1} - \lambda_n| < \infty$ and $\lambda_n \to \bar{\lambda} \in (0, 2/L)$.

Then the sequence $\{x_n\}$ converges in norm to a minimizer $x^*$ of (4), which is also the unique solution of the variational inequality (VI)
$$\langle (I - h)x^*, x - x^*\rangle \ge 0, \quad \forall x \in S. \quad (17)$$
In other words, $x^*$ is the unique fixed point of the contraction $P_S h$,
$$x^* = P_S\bigl(h(x^*)\bigr).$$
Proof (1∘) We first prove that $\{x_n\}$ is bounded. Take any $\tilde{x} \in S$; then $\tilde{x} = P_C(I - \lambda_n\nabla f)\tilde{x}$ for every $n$, so, as in (10), $\|T_{\alpha_n}\tilde{x} - \tilde{x}\| \le \lambda_n\alpha_n\|\tilde{x}\|$, and condition (ii) provides a constant $M \ge 0$ with $\lambda_n\alpha_n\|\tilde{x}\| \le M\theta_n$ for all $n$. Indeed, we have, for $n \ge 0$,
$$\|x_{n+1} - \tilde{x}\| \le \theta_n\|h(x_n) - \tilde{x}\| + (1 - \theta_n)\bigl(\|x_n - \tilde{x}\| + \lambda_n\alpha_n\|\tilde{x}\|\bigr) \le \bigl[1 - \theta_n(1 - \rho)\bigr]\|x_n - \tilde{x}\| + \theta_n\bigl(\|h(\tilde{x}) - \tilde{x}\| + M\bigr),$$
and induction yields
$$\|x_n - \tilde{x}\| \le \max\Bigl\{\|x_0 - \tilde{x}\|, \frac{\|h(\tilde{x}) - \tilde{x}\| + M}{1 - \rho}\Bigr\}, \quad n \ge 0.$$
So $\{x_n\}$ is bounded, and so are $\{h(x_n)\}$ and $\{T_{\alpha_n}x_n\}$.

(2∘) Next we prove that $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$. Applying (16) at steps $n$ and $n - 1$ and using (10), we obtain
$$\|x_{n+1} - x_n\| \le \bigl[1 - \theta_n(1 - \rho)\bigr]\|x_n - x_{n-1}\| + M_1\bigl(|\theta_n - \theta_{n-1}| + |\lambda_n - \lambda_{n-1}| + |\alpha_n - \alpha_{n-1}|\bigr)$$
for a suitable constant $M_1 > 0$. So, by conditions (i), (iii), (iv) and Lemma 2.5,
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0.$$

(3∘) Next we show that $x_n \to x^*$. Set $T := P_C(I - \bar{\lambda}\nabla f)$, so that $\mathrm{Fix}(T) = S$. Indeed, it follows that
$$\|x_n - Tx_n\| \le \|x_n - x_{n+1}\| + \theta_n\|h(x_n) - T_{\alpha_n}x_n\| + |\lambda_n - \bar{\lambda}|\,\|\nabla f(x_n)\| + \lambda_n\alpha_n\|x_n\|,$$
and every term on the right-hand side tends to zero by (1∘), (2∘) and conditions (i), (ii), (iv); hence $\|x_n - Tx_n\| \to 0$.
Now we show that
$$\limsup_{n\to\infty}\langle h(x^*) - x^*, x_n - x^*\rangle \le 0.$$
Let $\{x_{n_k}\}$ be a subsequence such that
$$\lim_{k\to\infty}\langle h(x^*) - x^*, x_{n_k} - x^*\rangle = \limsup_{n\to\infty}\langle h(x^*) - x^*, x_n - x^*\rangle,$$
and assume, passing to a further subsequence, that $x_{n_k} \rightharpoonup \hat{x}$. By Lemma 2.4 and $\|x_n - Tx_n\| \to 0$, then $\hat{x} \in \mathrm{Fix}(T) = S$. This shows that, by (17),
$$\limsup_{n\to\infty}\langle h(x^*) - x^*, x_n - x^*\rangle = \langle h(x^*) - x^*, \hat{x} - x^*\rangle \le 0.$$
It follows from Lemma 2.2 and the estimates of (1∘) that
$$\|x_{n+1} - x^*\|^2 \le \bigl[1 - \theta_n(1 - \rho)\bigr]\|x_n - x^*\|^2 + \theta_n(1 - \rho)\delta_n,$$
where $\delta_n$ consists of the term $\frac{2}{1-\rho}\langle h(x^*) - x^*, x_{n+1} - x^*\rangle$ together with quantities of order $\theta_n$ and $\alpha_n/\theta_n$ multiplied by bounded factors, so that $\limsup_{n\to\infty}\delta_n \le 0$. Hence, by Lemma 2.5 and conditions (i), (ii), we have $x_n \to x^*$. □
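Before turning to applications, here is a minimal numerical sketch of scheme (16) on the toy problem used earlier; the contraction $h$ and all parameter schedules are illustrative choices consistent with conditions (i)-(iv), not prescriptions from the theorem.

```python
import numpy as np

# Hybrid gradient-projection step (16):
#   x_{n+1} = theta_n * h(x_n)
#             + (1 - theta_n) * P_C( x_n - lam_n * (grad_f(x_n) + alpha_n * x_n) )
b = np.array([3.0, 4.0])
grad_f = lambda x: x - b                       # L = 1
project_C = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
h = lambda x: 0.5 * x                          # a (1/2)-contraction mapping C into C

x = np.zeros(2)
for n in range(1, 2000):
    theta_n = 1.0 / (n + 1)                    # theta_n -> 0, sum theta_n = infinity
    alpha_n = 1.0 / (n + 1) ** 2               # alpha_n / theta_n -> 0
    lam_n = 1.0 / (1.0 + alpha_n)              # lam_n < 2/(L + alpha_n), lam_n -> 1 < 2/L
    y = project_C(x - lam_n * (grad_f(x) + alpha_n * x))
    x = theta_n * h(x) + (1 - theta_n) * y

print(x)                                       # converges to the minimizer (0.6, 0.8)
```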
4 Application of the iterative method
Next, we give an application of Theorem 3.2 to the split feasibility problem (say SFP, for short) which was introduced by Censor and Elfving [15]:
$$\text{find } x \in C \text{ such that } Ax \in Q, \quad (18)$$
where $C$ and $Q$ are nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $A: H_1 \to H_2$ is a bounded linear operator.
It is clear that $x^*$ is a solution of split feasibility problem (18) if and only if $x^* \in C$ and $Ax^* - P_Q Ax^* = 0$.
We define the proximity function $f$ by
$$f(x) := \frac{1}{2}\|Ax - P_Q Ax\|^2,$$
and consider the convex optimization problem
$$\min_{x \in C} f(x) = \min_{x \in C}\frac{1}{2}\|Ax - P_Q Ax\|^2. \quad (19)$$
Then $x^*$ solves split feasibility problem (18) if and only if $x^*$ solves minimization problem (19) with the minimum value equal to $0$. Byrne [16] introduced the so-called CQ algorithm to solve the (SFP):
$$x_{n+1} = P_C\bigl(x_n - \gamma A^*(I - P_Q)Ax_n\bigr), \quad n \ge 0, \quad (20)$$
where $0 < \gamma < 2/\|A\|^2$ and $A^*$ denotes the adjoint of $A$.
He obtained that the sequence $\{x_n\}$ generated by (20) converges weakly to a solution of the (SFP).
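A minimal sketch of CQ iteration (20) on a toy SFP; the sets $C$, $Q$, the matrix $A$ and the projection helpers are illustrative assumptions, and the step size uses the bound $\gamma < 2/\|A\|^2$ with $\|A\|$ computed as the spectral norm.

```python
import numpy as np

# CQ algorithm (20): x_{n+1} = P_C( x_n - gamma * A^T (I - P_Q) A x_n )
# Toy data: C = unit ball in R^2, Q = [0,1]^2, A a fixed 2x2 matrix.
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])

def project_C(x):                              # unit ball in R^2
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def project_Q(y):                              # box [0,1]^2
    return np.clip(y, 0.0, 1.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2        # gamma < 2 / ||A||^2

x = np.array([1.0, -1.0])
for _ in range(500):
    residual = A @ x - project_Q(A @ x)        # (I - P_Q) A x
    x = project_C(x - gamma * (A.T @ residual))

print(x, A @ x)                                # x in C with A x approximately in Q
```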
Now we consider the regularization technique. Let
$$f_\alpha(x) := \frac{1}{2}\|Ax - P_Q Ax\|^2 + \frac{\alpha}{2}\|x\|^2;$$
then we establish the iterative scheme as follows:
$$x_{n+1} = \theta_n h(x_n) + (1 - \theta_n)P_C\bigl(x_n - \lambda_n\bigl(A^*(I - P_Q)Ax_n + \alpha_n x_n\bigr)\bigr), \quad n \ge 0, \quad (21)$$
where $h: C \to C$ is a contraction with $\rho \in [0,1)$, $\{\theta_n\} \subset (0,1)$, $\{\alpha_n\} \subset (0,1)$ and $0 < \lambda_n < \frac{2}{\|A\|^2 + \alpha_n}$.
Applying Theorem 3.2, we obtain the following result.
Theorem 4.1 Assume that split feasibility problem (18) is consistent. Let the sequence $\{x_n\}$ be generated by scheme (21), where
$$\nabla f_{\alpha_n}(x) = A^*(I - P_Q)Ax + \alpha_n x,$$
and the sequences $\{\theta_n\} \subset (0,1)$, $\{\alpha_n\} \subset (0,1)$ and $\{\lambda_n\}$ satisfy the following conditions:

(i) $\theta_n \to 0$; $\sum_{n=0}^\infty\theta_n = \infty$;

(ii) $\lim_{n\to\infty}\alpha_n/\theta_n = 0$;

(iii) $\sum_{n=0}^\infty|\theta_{n+1} - \theta_n| < \infty$;

(iv) $\sum_{n=0}^\infty|\alpha_{n+1} - \alpha_n| < \infty$;

(v) $0 < \lambda_n < \frac{2}{\|A\|^2 + \alpha_n}$, $\sum_{n=0}^\infty|\lambda_{n+1} - \lambda_n| < \infty$ and $\lambda_n \to \bar{\lambda} \in (0, 2/\|A\|^2)$.

Then the sequence $\{x_n\}$ converges strongly to a solution $x^*$ of split feasibility problem (18).
Proof By the definition of the proximity function $f$, we have
$$\nabla f(x) = A^*(I - P_Q)Ax,$$
and $\nabla f$ is $\frac{1}{\|A\|^2}$-ism. Indeed, since $P_Q$ is a $\frac{1}{2}$-averaged mapping, $I - P_Q$ is $1$-ism, so that
$$\langle \nabla f(x) - \nabla f(y), x - y\rangle = \bigl\langle (I - P_Q)Ax - (I - P_Q)Ay, Ax - Ay\bigr\rangle \ge \bigl\|(I - P_Q)Ax - (I - P_Q)Ay\bigr\|^2 \ge \frac{1}{\|A\|^2}\|\nabla f(x) - \nabla f(y)\|^2.$$
Hence, $\nabla f$ is $\frac{1}{L}$-ism with $L = \|A\|^2$.
Set $f_{\alpha_n}(x) := f(x) + \frac{\alpha_n}{2}\|x\|^2$; consequently,
$$\nabla f_{\alpha_n}(x) = \nabla f(x) + \alpha_n x = A^*(I - P_Q)Ax + \alpha_n x.$$
Then iterative scheme (21) is exactly scheme (16) of Theorem 3.2 with this gradient, where $\{\theta_n\}$, $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy conditions (i)-(v).
Due to Theorem 3.2, we obtain the conclusion immediately. □
Author’s contributions
The author read and approved the final manuscript.
References
Levitin ES, Polyak BT: Constrained minimization methods. Zh. Vychisl. Mat. Mat. Fiz. 1966, 6: 787–823.
Calamai PH, Moré JJ: Projected gradient methods for linearly constrained problems. Math. Program. 1987, 39: 93–116. 10.1007/BF02592073
Polyak BT: Introduction to Optimization. Optimization Software, New York; 1987.
Su M, Xu HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1: 35–43.
Moudafi A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615
Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059
Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. TMA 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005
Marino G, Xu HK: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 2004, 3: 791–808.
Marino G, Xu HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. 10.1016/j.jmaa.2005.05.028
Tian M: A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. TMA 2010, 73: 689–694. 10.1016/j.na.2010.03.058
Yao Y, Liou YC, Chen CP: Algorithms construction for nonexpansive mappings and inverse-strongly monotone mappings. Taiwan. J. Math. 2011, 15: 1979–1998.
Yao Y, Chen R, Liou Y-C: A unified implicit algorithm for solving the triple-hierarchical constrained optimization problem. Math. Comput. Model. 2012, 55: 1506–1515. 10.1016/j.mcm.2011.10.041
Yao Y, Liou Y-C, Kang SM: Two-step projection methods for a system of variational inequality problems in Banach spaces. J. Glob. Optim. 2013. 10.1007/s10898-011-9804-0
Wiyada K, Praairat J, Poom K: Generalized systems of variational inequalities and projection methods for inverse-strongly monotone mappings. Discrete Dyn. Nat. Soc. 2011. 10.1155/2011/976505
Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017
Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001
Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007
Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert space. Inverse Probl. 2010., 26: Article ID 105018
Lopez G, Martin V, Xu HK: Perturbation techniques for nonexpansive mapping with applications. Nonlinear Anal., Real World Appl. 2009, 10: 2369–2383. 10.1016/j.nonrwa.2008.04.020
Lopez G, Martin V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physica Publishing, Madison; 2009:243–279.
Xu HK: Averaged mapping and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z
Xu HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36: 115–125. 10.1007/s10898-006-9002-7
Cho YJ, Petrot N: Regularization and iterative method for general variational inequality problem in Hilbert spaces. J. Inequal. Appl. 2011., 2011: Article ID 21
Combettes PL: Solving monotone inclusions via composition of nonexpansive averaged operators. Optimization 2004, 53(5–6):475–504. 10.1080/02331930412331327157
Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.
Acknowledgements
The author thanks the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. 3122013K004).
Additional information
Competing interests
The author declares that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Tian, M. General iterative scheme based on the regularization for solving a constrained convex minimization problem. J Inequal Appl 2013, 550 (2013). https://doi.org/10.1186/1029-242X-2013-550