General viscosity iterative approximation for solving unconstrained convex optimization problems
Journal of Inequalities and Applications volume 2015, Article number: 334 (2015)
Abstract
In this paper, we combine a sequence of contractive mappings \(\{h_{n}\}\) with the proximal operator and propose a generalized viscosity approximation method for solving unconstrained convex optimization problems in a real Hilbert space H. We show that, under reasonable parameter conditions, our algorithm converges strongly to the unique solution of a variational inequality problem. The result presented in this paper improves and extends corresponding results recently reported by many authors.
1 Introduction
Since its inception in 1978, the unconstrained minimization problem (1.1) has received much attention due to its applications in signal processing, image reconstruction and, in particular, compressed sensing. Throughout this paper, let H be a real Hilbert space with inner product \(\langle\cdot,\cdot\rangle\) and induced norm \(\|\cdot\|\), and let \(\Gamma_{0}(H)\) denote the space of proper, lower semicontinuous convex functions on H. We deal with the unconstrained convex optimization problem of the following type:
$$\min_{x\in H} f(x)+g(x), \qquad (1.1) $$
where \(f,g\in\Gamma_{0}(H)\). In general, f is differentiable and g is merely subdifferentiable.
As we know, problem (1.1) was first studied in [1] and provides a natural vehicle for studying various generic optimization models under a common framework. Many methods have already been proposed to solve problem (1.1) (see [2–4]). Many important classes of optimization problems can be cast in this form, and problem (1.1) is a common problem in control theory. See, for instance, [3] for a special case of (1.1) in which the \(l_{1}\) norm promotes sparsity and yields good results for the corresponding problem. We mention in particular the classical works [2] and [4], where many weak convergence results are discussed.
Definition 1.1
The proximal operator of \(\varphi\in\Gamma_{0}(H)\) is defined by
$$\operatorname{prox}_{\varphi}(x)=\arg\min_{v\in H}\biggl\{\varphi(v)+\frac{1}{2}\|v-x\|^{2}\biggr\},\quad x\in H. $$
The proximal operator of φ of order \(\lambda>0\) is defined as the proximal operator of λφ, that is,
$$\operatorname{prox}_{\lambda\varphi}(x)=\arg\min_{v\in H}\biggl\{\varphi(v)+\frac{1}{2\lambda}\|v-x\|^{2}\biggr\},\quad x\in H. $$
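As a concrete illustration (our own addition; the paper uses this fact only in Section 4): for \(\varphi=\|\cdot\|_{1}\) on \(\mathbb{R}^{m}\), the proximal operator of order λ is componentwise soft-thresholding. A minimal Python sketch checking numerically that the closed form minimizes the prox objective:

```python
import numpy as np

def prox_l1(x, lam):
    # prox of lam*||.||_1: componentwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=5)
lam = 0.3
p = prox_l1(x, lam)

def prox_objective(v):
    # the objective whose minimizer defines prox_{lam*||.||_1}(x)
    return lam * np.abs(v).sum() + 0.5 * np.sum((v - x) ** 2)

# the closed form should beat nearby perturbations of itself
for _ in range(100):
    v = p + 0.1 * rng.normal(size=5)
    assert prox_objective(p) <= prox_objective(v) + 1e-12
```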
Lemma 1.2
(see [4])
Let \(f,g\in\Gamma_{0}(H)\), \(x^{*}\in H\) and \(\lambda>0\). Assume that f is finite-valued and differentiable on H. Then \(x^{*}\) is a solution to (1.1) if and only if \(x^{*}\) solves the fixed point equation
$$x^{*}=\operatorname{prox}_{\lambda g}\bigl(x^{*}-\lambda\nabla f\bigl(x^{*}\bigr)\bigr). \qquad (1.2) $$
The fixed point equation (1.2) immediately yields the following fixed point algorithm which is also known as the proximal algorithm [7] for solving (1.1) as follows.
Initialize \(x_{0}\in H\) and iterate
$$x_{n+1}=\operatorname{prox}_{\lambda_{n}g}\bigl(x_{n}-\lambda_{n}\nabla f(x_{n})\bigr),\quad n\geq0, $$
where \(\{\lambda_{n}\}\) is a sequence of positive real numbers. Meanwhile, in [8], Combettes and Wajs proved that this algorithm converges weakly. Recently, Xu [4] introduced the relaxed proximal algorithm.
Initialize \(x_{0}\in H\) and iterate
$$x_{n+1}=(1-\alpha_{n})x_{n}+\alpha_{n}\operatorname{prox}_{\lambda_{n}g}\bigl(x_{n}-\lambda_{n}\nabla f(x_{n})\bigr),\quad n\geq0, $$
where \(\{\alpha_{n}\}\) is a sequence of relaxation parameters and \(\{\lambda_{n}\}\) is a sequence of positive real numbers, and obtained weak convergence.
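The proximal (forward-backward) iteration above is easy to realize numerically. A minimal sketch for an illustrative lasso-type instance \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\), \(g=\|\cdot\|_{1}\) (our own choice, matching the example of Section 4), with a fixed step in \((0,\frac{2}{L})\):

```python
import numpy as np

def prox_l1(x, t):
    # soft-thresholding: the proximal operator of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_algorithm(A, b, n_iter=2000):
    """Iterate x_{n+1} = prox_{lam g}(x_n - lam * grad f(x_n))
    for f(x) = 0.5*||Ax - b||^2 and g = ||.||_1."""
    L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of grad f
    lam = 1.0 / L                    # fixed step size in (0, 2/L)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_l1(x - lam * A.T @ (A @ x - b), lam)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)
x = proximal_algorithm(A, b)
obj = lambda v: 0.5 * np.sum((A @ v - b) ** 2) + np.abs(v).sum()
assert obj(x) <= obj(np.zeros(10))   # the iteration decreased f + g
```

In finite dimensions with this strongly convex f, the iterates settle on a point that satisfies the fixed point equation (1.2) to machine precision.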
However, it is well known that strongly convergent algorithms are very important for solving infinite dimensional problems, and viscosity techniques can effectively turn the weak convergence of certain iterative algorithms into strong convergence under appropriate conditions. Recently, based on an idea introduced in the work of Moudafi and Thakur [9], Yao et al. [10], Shehu [11] and Shehu et al. [12, 13] proposed some iterative algorithms for solving proximal split feasibility problems, which are related to problem (1.1), and obtained strong convergence.
In this paper, motivated by the works [4, 9–14], we combine a sequence of contractive mappings \(\{h_{n}\}\) with the proximal operator and propose a generalized viscosity approximation method for solving problem (1.1). We propose our main iterative scheme and obtain a strong convergence theorem for solving unconstrained convex minimization problems by the general iterative method. Meanwhile, the limit of the iterative method is also the unique solution of the variational inequality problem (3.1). Further, an example is given to demonstrate the effectiveness of our iterative scheme.
2 Preliminaries
We adopt the following notations.
Let \(T: H\rightarrow H\) be a self-mapping of H.
-
(1)
\(x_{n}\rightarrow x\) stands for the strong convergence of \(\{x_{n}\}\) to x; \(x_{n}\rightharpoonup x\) stands for the weak convergence of \(\{x_{n}\} \) to x.
-
(2)
Use \(\operatorname{Fix}(T)\) to denote the set of fixed points of T; that is, \(\operatorname{Fix}(T)=\{x\in H: Tx=x\}\).
-
(3)
\(\omega_{w}(x_{n}):=\{x : \exists x_{n_{j}}\rightharpoonup x\}\) denotes the weak ω-limit set of \(\{x_{n}\}\).
In order to prove our result, we collect some facts and tools in a real Hilbert space H. We shall make full use of the following lemmas, definitions and propositions.
Lemma 2.1
Let H be a real Hilbert space. There holds the following inequality:
$$\|x+y\|^{2}\leq\|x\|^{2}+2\langle y,x+y\rangle,\quad \forall x,y\in H. $$
Recall that, given a closed convex subset C of a real Hilbert space H, for any \(x\in H\) there exists a unique nearest point in C, denoted by \(P_{C}x\), such that
$$\|x-P_{C}x\|\leq\|x-y\|,\quad \forall y\in C. $$
Such \(P_{C}x\) is called the metric (or the nearest point) projection of H onto C.
Lemma 2.2
(see [15])
Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(y\in C\), then \(y=P_{C}x\) if and only if we have the relation
$$\langle x-y,z-y\rangle\leq0,\quad \forall z\in C. $$
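The characterization in Lemma 2.2 is easy to check numerically. A small sketch (our own illustration, with C taken to be the closed unit ball so that \(P_{C}\) has a closed form):

```python
import numpy as np

def proj_ball(x, r=1.0):
    # metric projection of x onto C = closed ball of radius r (closed convex)
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

rng = np.random.default_rng(2)
x = 5.0 * rng.normal(size=3)           # a point (almost surely) outside the ball
y = proj_ball(x)                       # y = P_C x
for _ in range(200):
    z = proj_ball(rng.normal(size=3))  # an arbitrary point of C
    # Lemma 2.2: <x - y, z - y> <= 0 for all z in C
    assert np.dot(x - y, z - y) <= 1e-10
```

Geometrically, the inequality says that C lies entirely in the half-space on the far side of the supporting hyperplane through \(P_{C}x\).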
Definition 2.3
A mapping \(F:H\rightarrow H\) is said to be:
-
(i)
Lipschitzian if there exists a positive constant L such that
$$\|Fx-Fy\|\leq L\|x-y\|,\quad \forall x,y\in H. $$
In particular, if \(L=1\), we say F is nonexpansive, namely
$$\|Fx-Fy\|\leq\|x-y\|,\quad \forall x,y\in H; $$
if \(L\in(0,1)\), we say F is contractive, namely
$$\|Fx-Fy\|\leq L\|x-y\|, \quad \forall x,y\in H. $$
-
(ii)
α-averaged mapping (α-av for short) if
$$F=(1-\alpha)I+\alpha T, $$
where \(\alpha\in(0,1)\) and \(T:H\rightarrow H\) is nonexpansive.
Lemma 2.4
(see [16])
Let \(h:H\rightarrow H\) be a ρ-contraction with \(\rho\in(0,1)\) and \(T:H\rightarrow H\) be a nonexpansive mapping. Then:
-
(i)
\(I-h\) is \((1-\rho)\)-strongly monotone:
$$\bigl\langle (I-h)x-(I-h)y,x-y\bigr\rangle \geq(1-\rho)\|x-y\|^{2},\quad \forall x,y\in H. $$ -
(ii)
\(I-T\) is monotone:
$$\bigl\langle (I-T)x-(I-T)y,x-y\bigr\rangle \geq0,\quad \forall x,y\in H. $$
Lemma 2.5
(see [17], Demiclosedness principle)
Let H be a real Hilbert space, and let \(T:H\rightarrow H \) be a nonexpansive mapping with \(\operatorname{Fix}(T)\neq\emptyset\). If \(\{x_{n}\}\) is a sequence in H weakly converging to x and if \(\{(I-T)x_{n}\}\) converges strongly to y, then \((I-T)x=y\); in particular, if \(y=0\), then \(x\in \operatorname{Fix}(T)\).
Lemma 2.6
Assume that \(\{s_{n}\}\) is a sequence of nonnegative real numbers such that
$$s_{n+1}\leq(1-\gamma_{n})s_{n}+\gamma_{n}\delta_{n},\qquad s_{n+1}\leq s_{n}-\eta_{n}+\varphi_{n},\quad n\geq0, $$
where \(\{\gamma_{n}\}\) is a sequence in \((0,1)\), \((\eta_{n})\) is a sequence of nonnegative real numbers and \((\delta_{n})\) and \((\varphi_{n})\) are two sequences in \(\mathbb{R}\) such that
-
(i)
\(\sum_{n=0}^{\infty}\gamma_{n}=\infty\);
-
(ii)
\(\lim_{n\rightarrow\infty}\varphi_{n}=0\);
-
(iii)
\(\lim_{k\rightarrow\infty}\eta_{n_{k}}=0\) implies \(\limsup_{k\rightarrow\infty}\delta_{n_{k}}\leq0\) for any subsequence \((n_{k})\subset(n)\).
Then \(\lim_{n\rightarrow\infty}s_{n}=0\).
Proposition 2.7
(see [19])
If \(T_{1}, T_{2},\ldots, T_{n}\) are averaged mappings, then the composite \(T_{n}T_{n-1}\cdots T_{1}\) is averaged. In particular, if \(T_{i}\) is \(\alpha_{i}\)-av for \(i=1,2\), where \(\alpha_{i} \in(0,1)\), then \(T_{2}T_{1}\) is \((\alpha_{2}+\alpha_{1}-\alpha_{2}\alpha_{1})\)-av.
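For linear mappings the composition rule of Proposition 2.7 can be verified directly. A small sketch (our own illustration; rotations serve as the nonexpansive parts):

```python
import numpy as np

def rotation(theta):
    # a rotation matrix: orthogonal, hence nonexpansive
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

T1, T2 = rotation(0.7), rotation(-1.2)
a1, a2 = 0.3, 0.6
I = np.eye(2)
F1 = (1 - a1) * I + a1 * T1          # a1-averaged
F2 = (1 - a2) * I + a2 * T2          # a2-averaged
a = a2 + a1 - a2 * a1                # claimed parameter for F2 F1
T = (F2 @ F1 - (1 - a) * I) / a      # recover the nonexpansive part
assert np.linalg.norm(T, 2) <= 1 + 1e-12   # ||T|| <= 1, so F2 F1 is a-av
```

Recovering T from \(F_{2}F_{1}=(1-\alpha)I+\alpha T\) and checking \(\|T\|\leq1\) confirms the claimed averagedness parameter in this linear case.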
Proposition 2.8
(see [20])
If \(f:H\rightarrow\mathbb{R}\) is a differentiable functional, then we denote by ∇f the gradient of f. Assume that ∇f is Lipschitz continuous on H. The operator \(V_{\lambda}=\operatorname{prox}_{\lambda g} (I-\lambda\nabla f)\) is \(\frac{2+\lambda L}{4}\)-av for each \(0<\lambda <\frac{2}{L}\).
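Proposition 2.8 implies in particular that \(V_{\lambda}\) is nonexpansive for \(0<\lambda<\frac{2}{L}\). A quick numerical check for an illustrative lasso-type instance (our own choice of f and g, matching Section 4):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 5))
b = rng.normal(size=8)
L = np.linalg.norm(A.T @ A, 2)       # Lipschitz constant of grad f
lam = 1.0 / L                        # lam in (0, 2/L)

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def V(x):
    # V_lam = prox_{lam g}(I - lam grad f), f = 0.5*||Ax-b||^2, g = ||.||_1
    return prox_l1(x - lam * A.T @ (A @ x - b), lam)

# an averaged mapping is in particular nonexpansive
for _ in range(200):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert np.linalg.norm(V(x) - V(y)) <= np.linalg.norm(x - y) + 1e-12
```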
Lemma 2.9
The proximal identity
$$\operatorname{prox}_{\lambda\varphi}x=\operatorname{prox}_{\mu\varphi}\biggl(\frac{\mu}{\lambda}x+\biggl(1-\frac{\mu}{\lambda}\biggr)\operatorname{prox}_{\lambda\varphi}x\biggr) $$
holds for \(\varphi\in\Gamma_{0}(H)\), \(\lambda>0\) and \(\mu>0\).
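For \(\varphi=\|\cdot\|_{1}\), whose proximal operator is componentwise soft-thresholding (see Section 4), the identity can be checked numerically. A short sketch:

```python
import numpy as np

def prox_l1(x, t):
    # prox of t*|.| applied componentwise (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(4)
for _ in range(100):
    x = rng.normal(size=4)
    lam, mu = rng.uniform(0.1, 2.0, size=2)
    lhs = prox_l1(x, lam)
    # prox_{lam phi} x = prox_{mu phi}((mu/lam) x + (1 - mu/lam) prox_{lam phi} x)
    rhs = prox_l1((mu / lam) * x + (1 - mu / lam) * prox_l1(x, lam), mu)
    assert np.allclose(lhs, rhs)
```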
3 Main results
In this section, we combine a sequence of contractive mappings and apply a more generalized viscosity iterative method for approximating the unique solution of the following variational inequality problem: find \(x^{*}\in S\) such that
$$\bigl\langle (I-h)x^{*},x-x^{*}\bigr\rangle \geq0,\quad \forall x\in S, \qquad (3.1) $$
where \(h: H\rightarrow H\) is ρ-contractive and S is the solution set of (1.1).
Suppose that the contractive sequence \(\{h_{n}(x)\}\) is uniformly convergent for any \(x\in D\), where D is any bounded subset of H. The uniform convergence of \(\{h_{n}(x)\}\) on D is denoted by \(h_{n}(x) \rightrightarrows h(x)\) (\(n \rightarrow\infty\)), \(x\in D\).
Theorem 3.1
Let \(f,g\in\Gamma_{0}(H)\) and assume that (1.1) is consistent. Let \(\{h_{n}\}\) be a sequence of \(\rho_{n}\)-contractive self-maps of H with \(0\leq\rho_{l}=\liminf_{n\rightarrow\infty}\rho_{n}\leq\limsup_{n\rightarrow\infty}\rho_{n}=\rho_{u}<1\), and let \(V_{\lambda_{n}}=\operatorname{prox}_{\lambda _{n}g}(I-\lambda_{n}\nabla f)\), where ∇f is L-Lipschitzian. Assume that \(\{h_{n}(x)\}\) is uniformly convergent for any \(x\in D\), where D is any bounded subset of H. Given \(x_{0}\in H\), define the sequence \(\{x_{n}\}\) by the following iterative algorithm:
$$x_{n+1}=\alpha_{n}h_{n}(x_{n})+(1-\alpha_{n})V_{\lambda_{n}}x_{n},\quad n\geq0, \qquad (3.2) $$
where \(\lambda_{n}\in(0,\frac{2}{L})\) and \(\alpha_{n}\in(0,\frac{2+\lambda_{n} L}{4})\). Suppose that
-
(i)
\(\lim_{n\rightarrow\infty}\alpha_{n}=0\);
-
(ii)
\(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);
-
(iii)
\(0<\liminf_{n\rightarrow\infty}\lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n}<\frac{2}{L}\).
Then \(\{x_{n}\}\) converges strongly to \(x^{*}\), where \(x^{*}\) is a solution of (1.1), which is also the unique solution of the variational inequality problem (3.1).
Proof
Let S be the (nonempty) solution set of (1.1).
Step 1. Show that \(\{x_{n}\}\) is bounded.
For any \(x^{*}\in S\),
From the uniform convergence of \(\{h_{n}\}\) on D, it is easy to get the boundedness of \(\{h_{n}(x^{*})\}\). Thus there exists a positive constant \(M_{1}\) such that \(\|h_{n}(x^{*})-x^{*}\|\leq M_{1}\). By induction, we obtain
which implies that the sequence \(\{x_{n}\}\) is bounded.
Step 2. Show that for any sequence \((n_{k})\subset(n)\),
Firstly, note that from (3.2) we have
Secondly, since \(V_{\lambda_{n}}\) is \(\frac{2+\lambda_{n}L}{4}\)-av, we can rewrite
where \(w_{n}=\frac{2+\lambda_{n}L}{4}\), \(T_{n}\) is nonexpansive and, by condition (iii), we get \(\frac{1}{2}<\liminf_{n\rightarrow\infty}w_{n}\leq\limsup_{n\rightarrow \infty}w_{n}<1\). Thus, we have from (3.2) and (3.5)
Set
since \(\gamma_{n}\rightarrow0\), \(\sum_{n=0}^{\infty}\gamma_{n}=\infty\) (\(\lim_{n\rightarrow\infty}(2-\alpha_{n}(1+2{\rho_{n}}^{2})-2(1-\alpha_{n})\rho _{n})=2(1-\rho_{u})>0\)) and \(\varphi_{n}\rightarrow0\) (\(\alpha_{n}\rightarrow0\)) hold obviously, so in order to complete the proof by using Lemma 2.6, it suffices to verify that \(\eta_{n_{k}}\rightarrow0\) (\(k\rightarrow\infty\)) implies that
for any subsequence \((n_{k})\subset(n)\).
Indeed, \(\eta_{n_{k}}\rightarrow0\) (\(k\rightarrow\infty\)) implies that \(\| T_{n_{k}}x_{n_{k}}-x_{n_{k}}\|\rightarrow0\) (\(k\rightarrow\infty\)) due to condition (iii). So, from (3.5), we have
Step 3. Show that
Here, \(\omega_{w}(x_{n_{k}})\) is the set of all weak cluster points of \(\{x_{n_{k}}\}\). To see (3.8), we proceed as follows. Take \(\tilde{x} \in\omega_{w}(x_{n_{k}})\) and assume that \(\{x_{n_{k_{j}}}\}\) is a subsequence of \(\{x_{n_{k}}\}\) weakly converging to x̃. Without loss of generality, we rewrite \(\{x_{n_{k_{j}}}\}\) as \(\{x_{n_{k}}\}\) and may assume \(\lambda_{n_{k}}\rightarrow\lambda\); then \(0<\lambda<\frac{2}{L}\). Set \(V_{\lambda}=\operatorname{prox}_{\lambda g}(I-\lambda\nabla f)\); then \(V_{\lambda}\) is nonexpansive. Set
Using the proximal identity of Lemma 2.9, we deduce that
Since \(\{x_{n}\}\) is bounded, ∇f is Lipschitz continuous and \(\lambda_{n_{k}}\rightarrow\lambda\), we immediately derive from the last relation that \(\|V_{\lambda_{n_{k}}}x_{n_{k}}-V_{\lambda}x_{n_{k}}\|\rightarrow 0\). As a result, we find
Using Lemma 2.5, we get \(\omega_{w}(x_{n_{k}})\subset S\). Meanwhile, since \(\{h_{n}(x)\}\) is uniformly convergent on D, we have
Also, since \(x^{*}\) is the unique solution of the variational inequality problem (3.1), we get
and hence \(\limsup_{k\rightarrow\infty}\delta_{n_{k}}=0\). □
4 Numerical result
In this section, we consider the following simple numerical example to demonstrate the effectiveness, realization and convergence of Theorem 3.1. Through this example, we can see that the limit point generated by (3.2) is not only a solution of (1.1) but is also very close to a solution of the problem \(Ax=b\).
Example 1
Let \(H=\mathbb{R}^{m}\). Define \(h_{n}(x)=\frac{1}{100}x\). Take \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\); thus \(\nabla f(x)=A^{T}(Ax-b)\) with Lipschitz constant \(L=\|A^{T}A\|\), where \(A^{T}\) denotes the transpose of A. Take \(g(x)=\|x\|_{1}\); then \(\operatorname{prox}_{\lambda_{n}g}x=\arg\min_{v\in H}\{\lambda_{n} g(v)+\frac{1}{2}\|v-x\|^{2}\}=\arg\min_{v\in H}\{\lambda_{n}\|v\|_{1}+\frac{1}{2}\|v-x\|^{2}\}\). From [20], we also know that \(\operatorname{prox}_{\lambda_{n}\|\cdot\|_{1}}x=[\operatorname{prox}_{\lambda_{n}|\cdot |}x_{1}, \operatorname{prox}_{\lambda_{n}|\cdot|}x_{2},\ldots, \operatorname{prox}_{\lambda_{n}|\cdot |}x_{m}]^{T}\), where \(\operatorname{prox}_{\lambda_{n}|\cdot|}x_{i}=\max\{|x_{i}|-\lambda_{n},0\} \operatorname{sign}(x_{i})\) (\(i=1,2,\ldots,m\)). Take \(\alpha_{n}=\frac{1}{1000n}\) for every \(n\geq1\), fix \(\lambda_{n}=\frac{1}{150\sqrt{L}}\), and generate a random matrix
and a random vector
Then, by Theorem 3.1, the sequence \(\{x_{n}\}\) is generated by
As \(n\rightarrow\infty\), we have \(x_{n}\rightarrow x^{*}\). Then, taking distinct initial guesses \(x_{0}\) and using Matlab, we obtain the numerical experiment results in Tables 1 and 2, where
where \(x_{n}\) is the point generated by Theorem 3.1. Thus the limit of \(\{x_{n}\}\) is a solution of problem (1.1). It is generally not easy to obtain an exact solution of the problem \(Ax=b\), and many algorithms have therefore been developed to approximate one. Our analysis shows that \(x_{n}\) comes very close to satisfying \(Ax=b\); in this sense, Theorem 3.1 approximately solves both (1.1) and \(Ax=b\). Moreover, many practical problems in applied sciences, such as signal processing and image reconstruction, are formulated as \(Ax=b\), so our theorem is useful for solving those problems.
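Example 1 can be reproduced in a few lines. A Python sketch under assumed small dimensions (the paper's tables use Matlab with random data, so only the qualitative behavior is comparable):

```python
import numpy as np

def prox_l1(x, t):
    # soft-thresholding: prox of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(5)
m, d = 20, 10                         # assumed small problem sizes
A = rng.normal(size=(m, d))
b = rng.normal(size=m)
L = np.linalg.norm(A.T @ A, 2)        # Lipschitz constant of grad f
lam = 1.0 / (150 * np.sqrt(L))        # fixed lambda_n as in Example 1

x = rng.normal(size=d)                # initial guess x_0
for n in range(1, 10001):
    alpha = 1.0 / (1000 * n)          # alpha_n = 1/(1000 n)
    h = x / 100.0                     # h_n(x) = x/100
    V = prox_l1(x - lam * A.T @ (A @ x - b), lam)   # V_{lambda_n} x_n
    x = alpha * h + (1 - alpha) * V   # scheme (3.2)

# the limit nearly satisfies the fixed point equation (1.2)
residual = np.linalg.norm(x - prox_l1(x - lam * A.T @ (A @ x - b), lam))
```

A small fixed-point residual indicates that the iterates have settled on a solution of (1.1), consistent with Theorem 3.1.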
References
Auslender, A: Minimisation de fonctions localement Lipschitziennes: applications a la programmation mi-convexe, mi-differentiable. In: Mangasarian, OL, Meyer, RR, Robinson, SM (eds.) Nonlinear Programming, vol. 3, pp. 429-460. Academic Press, New York (1978)
Mahammand, AA, Naseer, S, Xu, HK: Properties and iterative methods for the Q-lasso. Abstr. Appl. Anal. 2013, Article ID 250943 (2013)
Taheri, S, Mammadov, M, Seifollahi, S: Globally convergent algorithms for solving unconstrained optimization problems. Optimization (2015). doi:10.1080/02331934.2012.745529
Xu, HK: Properties and iterative methods for the lasso and its variants. Chin. Ann. Math., Ser. B 35(3), 501-518 (2014)
Moreau, JJ: Proprietes des applications ‘prox’. C. R. Acad. Sci. Paris, Sér. A Math. 256, 1069-1071 (1963)
Moreau, JJ: Proximite et dualite dans un espace hilbertien. Bull. Soc. Math. Fr. 93, 279-299 (1965)
He, SN, Yang, CP: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, Article ID 942315 (2013). doi:10.1155/2013/942315
Combettes, PL, Wajs, R: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168-1200 (2005)
Moudafi, A, Thakur, BS: Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 8, 2099-2110 (2014)
Yao, Z, Cho, SY, Kang, SM, Zhu, LJ: A regularized algorithm for the proximal split feasibility problem. Abstr. Appl. Anal. 2014, Article ID 894272 (2014). doi:10.1155/2014/894272
Shehu, Y: Iterative methods for convex proximal split feasibility problems and fixed point problems. Afr. Math. (2015). doi:10.1007/s13370-015-0344-5
Shehu, Y, Cai, G, Iyiola, OS: Iterative approximation of solutions for proximal split feasibility problems. Fixed Point Theory Appl. 2015, 123 (2015). doi:10.1186/s13663-015-0375-5
Shehu, Y, Ogbuisi, FU: Convergence analysis for proximal split feasibility problems and fixed point problems. J. Appl. Math. Comput. 48, 221-239 (2015)
Duan, PC, He, SN: Generalized viscosity approximation methods for nonexpansive mappings. Fixed Point Theory Appl. 2014, 68 (2014). doi:10.1186/1687-1812-2014-68
Marino, G, Xu, HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329, 336-346 (2007)
Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)
Yang, CP, He, SN: General alternative regularization methods for nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2014, 203 (2014)
Combettes, PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53, 475-504 (2004)
Micchelli, CA, Shen, LX, Xu, YS: Proximity algorithms for image models: denoising. Inverse Probl. 27, 045009 (2011)
Acknowledgements
This work was supported by the Fundamental Research Funds for the Central Universities (3122015L007) and the National Nature Science Foundation of China (11501566).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Duan, P., Song, M. General viscosity iterative approximation for solving unconstrained convex optimization problems. J Inequal Appl 2015, 334 (2015). https://doi.org/10.1186/s13660-015-0857-3