A projection descent method for solving variational inequalities
- Abdellah Bnouhachem^{1, 2},
- Qamrul Hasan Ansari^{3, 4} and
- Ching-Feng Wen^{5}
https://doi.org/10.1186/s13660-015-0665-9
© Bnouhachem et al.; licensee Springer. 2015
Received: 14 February 2015
Accepted: 14 April 2015
Published: 23 April 2015
Abstract
In this paper, we propose a descent direction method for solving variational inequalities. A new iterate is obtained by searching the optimal step size along a new descent direction, obtained as a linear combination of two descent directions. Under suitable conditions, the global convergence of the proposed method is studied. Two numerical experiments are presented to illustrate the efficiency of the proposed method.
1 Introduction
The aim of this paper is to develop an algorithm, inspired by Fu [16], for solving variational inequalities. More precisely, in the first method, a new iterate is obtained by searching the optimal step size along the integrated descent direction formed as a linear combination of two descent directions, while in the second method another optimal step length is employed to achieve substantial progress in each iteration. It is proved theoretically that the lower bound of the progress obtained by the second method is greater than that obtained by the first one. Under certain conditions, the global convergence of the proposed methods is proved. Our results can be viewed as significant extensions of the previously known results.
2 Preliminaries
It is worth mentioning that the solution set \(S^{*}\) of \(\operatorname{VI}(T,K)\) is nonempty if T is continuous and K is compact.
The following results will be used in the sequel.
Lemma 1
[23]
- (a)
\(\langle z-P_{K}(z),P_{K}(z)-v\rangle\geq 0\), \(\forall z \in\mathbb{R}^{n}\), \(v \in K\).
- (b)
\({\|P_{K}(z)-v\|}^{2} \le{\|z-v\|}^{2}-{\| z-P_{K}(z)\|}^{2}\), \(\forall z \in\mathbb{R}^{n}\), \(v \in K\).
- (c)
\(\|P_{K}(w)-P_{K}(v)\|^{2}\leq\langle w-v,P_{K}(w)-P_{K}(v)\rangle\), \(\forall v,w \in\mathbb{R}^{n}\).
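Since \(P_{K}\) is explicit for simple sets, these inequalities are easy to sanity-check numerically. The following sketch (an illustration added here, not part of the paper) verifies (a)-(c) at random points for the box \(K=[0,1]^{3}\), whose projection is componentwise clipping:

```python
import numpy as np

# Projection onto the box K = [0, 1]^3: componentwise clipping.
P = lambda z: np.clip(z, 0.0, 1.0)

rng = np.random.default_rng(1)
for _ in range(100):
    z = rng.normal(size=3) * 3.0
    w = rng.normal(size=3) * 3.0
    v = rng.uniform(0.0, 1.0, size=3)      # an arbitrary point of K
    # (a) variational characterization of the projection
    assert (z - P(z)) @ (P(z) - v) >= -1e-12
    # (b) the projection decreases the distance to every v in K
    assert (np.linalg.norm(P(z) - v)**2
            <= np.linalg.norm(z - v)**2 - np.linalg.norm(z - P(z))**2 + 1e-12)
    # (c) firm nonexpansiveness of the projection operator
    assert (np.linalg.norm(P(w) - P(z))**2
            <= (w - z) @ (P(w) - P(z)) + 1e-12)
```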
Lemma 2
[7]
The next lemma shows that \(\|e(u,\rho)\|\) is a non-decreasing function, while \(\frac{\|e(u,\rho)\|}{\rho}\) is a non-increasing one with respect to ρ.
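Taking \(e(u,\rho)\) to be the usual projected residual \(e(u,\rho)=u-P_{K}[u-\rho T(u)]\) (its displayed definition is not reproduced in this excerpt, so this is an assumption), the two monotonicity properties can be observed numerically for an affine map on the nonnegative orthant:

```python
import numpy as np

# e(u, rho) is taken to be the projected residual u - P_K[u - rho*T(u)];
# note that u and T(u) are held fixed while rho varies.
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 3.0])
T = lambda u: M @ u + q
proj = lambda v: np.maximum(v, 0.0)        # K = nonnegative orthant

u = np.array([1.0, 2.0])
rhos = np.linspace(0.1, 5.0, 50)
norms = np.array([np.linalg.norm(u - proj(u - r * T(u))) for r in rhos])

assert np.all(np.diff(norms) >= -1e-12)         # ||e(u, rho)|| non-decreasing
assert np.all(np.diff(norms / rhos) <= 1e-12)   # ||e(u, rho)||/rho non-increasing
```

Since \(T(u)\) is a fixed vector once u is fixed, both properties are consequences of the projection alone and hold for any mapping T.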
3 Algorithm and convergence results
In this section, we suggest and analyze two new methods for solving variational inequalities (1). For given \(u^{k}\in K\) and \(\rho_{k}>0\), each iteration of the first method consists of three steps: the first step produces \(\tilde{u}^{k}\), the second step produces \(\bar{u}^{k}\), and the third step yields the new iterate \(u^{k+1}\).
Algorithm 1
- Step 1.
Given \(u^{0} \in K\), \(\epsilon>0\), \(\rho_{0}=1\), \(\nu>1\), \(\mu\in(0, \sqrt{2})\), \(\sigma\in(0,1)\), \(\zeta\in(0,1)\), \(\eta _{1}\in(0,\zeta)\), \(\eta_{2}\in(\zeta, \nu)\) and let \(k=0\).
- Step 2.
If \(\|e(u^{k},\rho_{k} )\| \leq\epsilon\), then stop. Otherwise, go to Step 3.
- Step 3.
- (1)For a given \(u^{k}\in K\), calculate the two predictors
$$ \tilde{u}^{k} = P_{K}\bigl[u^{k} - \rho_{k} T\bigl(u^{k}\bigr)\bigr], \quad(6a) $$
$$ \bar{u}^{k} = P_{K}\bigl[\tilde{u}^{k} - \rho_{k} T\bigl(\tilde{u}^{k}\bigr)\bigr]. \quad(6b) $$
- (2)
If \(\|e(\tilde{u}^{k},\rho_{k} )\|\leq\epsilon\), then stop. Otherwise, continue.
- (3)If \(\rho_{k}\) satisfies both
$$ r_{1}:=\frac{|\rho_{k}[\langle\tilde{u}^{k}-\bar{u}^{k},T(u^{k})-T(\tilde{u}^{k})\rangle-\langle u^{k}-\bar{u}^{k},T(\tilde{u}^{k})-T(\bar{u}^{k})\rangle]|}{\|\tilde{u}^{k}-\bar{u}^{k}\|^{2}}\leq\mu^{2} \quad(7) $$
and
$$ r_{2}:=\frac{\|\rho_{k}(T(\tilde{u}^{k})-T(\bar{u}^{k}))\|}{\|\tilde{u}^{k}-\bar{u}^{k}\|}\leq\nu, \quad(8) $$
then go to Step 4; otherwise, continue.
- (4)Perform an Armijo-like line search by reducing \(\rho_{k}\),
$$ \rho_{k} := \rho_{k}\cdot\frac{\sigma}{\max(r_{1}, 1)}, \quad(9) $$
and go to Step 3.
- Step 4.Take the new iterate \(u^{k+1}\) by setting
$$ u^{k+1}(\alpha_{k})= P_{K}\bigl[u^{k} - \alpha_{k}d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr], \quad(10) $$
where
$$ d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)=\beta_{1}d_{1}\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)+\beta_{2}\rho_{k}T\bigl(\bar{u}^{k}\bigr) \quad(11) $$
and
$$ d_{1}\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr) := \bigl(\tilde{u}^{k} - \bar{u}^{k}\bigr) - \rho_{k}\bigl(T\bigl(\tilde{u}^{k}\bigr)-T\bigl(\bar{u}^{k}\bigr)\bigr). \quad(12) $$
- Step 5.Adaptively choose a suitable \(\rho_{k+1}\) as the starting prediction step size for the next iteration.
- (1)Prepare a proper \(\rho_{k+1}\):
$$ \rho_{k+1} := \begin{cases} \rho_{k}\,\zeta/r_{2} & \text{if } r_{2} \le \eta_{1} \text{ or } r_{2} \ge \eta_{2}, \\ \rho_{k} & \text{otherwise}. \end{cases} $$
- (2)
Return to Step 2, with k replaced by \(k+1\).
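As an illustration of how Steps 1-5 fit together, the following sketch implements the special case \(\beta_{1}=0\), \(\beta_{2}=1\) (the direction \(\rho_{k}T(\bar{u}^{k})\) of Remark 1(b)). Since the optimal step sizes \(\alpha^{*}_{k_{1}},\alpha^{*}_{k_{2}}\) of Theorem 1 are not reproduced in this excerpt, a unit step \(\alpha_{k}=1\) is used instead; the test problem and all parameter values are likewise illustrative assumptions, not part of the paper.

```python
import numpy as np

def algorithm1_sketch(T, proj, u0, rho=1.0, nu=2.0, mu=1.2, sigma=0.8,
                      zeta=0.5, eta1=0.2, eta2=1.5, eps=1e-8, max_iter=5000):
    """Hedged sketch of Algorithm 1 with beta1 = 0, beta2 = 1 and a
    unit correction step (the optimal alpha_k of Theorem 1 is omitted)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        # Step 2: stop when the projected residual e(u, rho) is small
        if np.linalg.norm(u - proj(u - rho * T(u))) <= eps:
            break
        while True:                                       # Step 3
            u_t = proj(u - rho * T(u))                    # predictor (6a)
            u_b = proj(u_t - rho * T(u_t))                # predictor (6b)
            diff = u_t - u_b
            dn2 = diff @ diff
            if dn2 <= eps**2:
                return u_b
            r1 = abs(rho * (diff @ (T(u) - T(u_t))
                            - (u - u_b) @ (T(u_t) - T(u_b)))) / dn2
            r2 = rho * np.linalg.norm(T(u_t) - T(u_b)) / np.sqrt(dn2)
            if r1 <= mu**2 and r2 <= nu:                  # tests (7), (8)
                break
            rho *= sigma / max(r1, 1.0)                   # reduction (9)
        u = proj(u - rho * T(u_b))                        # correction (10)
        if r2 <= eta1 or r2 >= eta2:                      # Step 5 adaptation
            rho *= zeta / max(r2, 1e-12)
    return u

# Usage: an affine complementarity problem with a known interior solution,
# chosen (hypothetically) so that u_star solves VI(T, R^4_+) since T(u_star) = 0.
M = np.array([[2.0, 0.3, 0.0, 0.0],
              [0.3, 1.5, 0.2, 0.0],
              [0.0, 0.2, 1.8, 0.0],
              [0.0, 0.0, 0.0, 1.2]])
u_star = np.array([1.0, 2.0, 0.5, 1.5])
q = -M @ u_star
T = lambda u: M @ u + q
u = algorithm1_sketch(T, lambda v: np.maximum(v, 0.0), np.zeros(4))
```

On this small, strongly monotone problem the iterates approach u_star; the paper's optimal step sizes would be expected to reduce the iteration count further, as the experiments in Section 4 suggest.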
Remark 1
(a) The proposed method can be viewed as a refinement and improvement of the method of He et al. [27]: it performs an additional projection step at each iteration and employs another optimal step length to achieve substantial progress in each iteration.
(b) If \(\beta_{1}=0\) and \(\beta_{2}=1\), we obtain the method proposed in [16].
(c) If \(\beta_{1}=1\) and \(\beta_{2}=0\), we obtain a descent method in which the new iterate is obtained along the descent direction \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\).
Remark 2
In (9), if \(r_{1}>1\), the current \(\rho_{k}\) is too large, and retaining it would increase the number of Armijo-like line searches. Dividing by \(\max(r_{1}, 1)\) therefore reduces \(\rho_{k}\) more aggressively when \(r_{1}\) is large, while otherwise the new trial value is only modestly smaller (by the factor σ), which avoids expensive implementations in the next iteration.
Theorem 1
Proof
Proposition 1
- (i)
\(\varPhi ^{\prime}(\alpha)=2 \langle u^{k+1}(\alpha)- \bar {u}^{k},d(\tilde{u}^{k},\bar{u}^{k})\rangle\);
- (ii)
\(\varPhi ^{\prime}(\alpha)\) is a non-increasing function with respect to \(\alpha\geq0\), and hence, \(\varPhi (\alpha)\) is concave.
Proof
Based on Theorem 1 and Proposition 1, the following result can be proved easily.
Proposition 2
- (i)
\(\|u^{k}-u^{*}\|^{2}-\| u^{k+1}(\alpha^{*}_{k_{2}})-u^{*}\|^{2}\geq \varPhi (\alpha^{*}_{k_{2}})\);
- (ii)
\(\| u^{k} - u^{*} \|^{2}-\| u^{k+1}(\alpha^{*}_{k_{1}}) - u^{*} \| ^{2}\geq \varPsi (\alpha^{*}_{k_{1}})\);
- (iii)
\(\varPhi (\alpha^{*}_{k_{2}})\geq \varPsi (\alpha^{*}_{k_{1}})\);
- (iv)
if \(\varPhi ^{\prime}(\alpha^{*}_{k_{2}})=0\), then \(\|u^{k}-u^{*}\|^{2}-\| u^{k+1}(\alpha^{*}_{k_{2}}) -u^{*}\|^{2}\geq\|u^{k}- u^{k+1}(\alpha^{*}_{k_{2}}) \|^{2}\).
Remark 3
In the next theorem, we show that \(\alpha^{*}_{k_{1}}\) and \(\varPsi (\alpha^{*}_{k_{1}})\) are bounded away from zero; this is one of the keys to proving the global convergence results.
Theorem 2
Proof
From the computational point of view, a relaxation factor \(\gamma\in (0,2)\) is preferable in the correction step. We are now in a position to prove the contractive property of the iterative sequence.
Lemma 4
Proof
We now present the convergence result of Algorithm 1.
Theorem 3
If \(\rho:=\inf_{k\geq0} \rho_{k} > 0\), then any cluster point of the sequence \(\{\tilde{u}^{k}\}\) generated by Algorithm 1 is a solution of problem (1).
Proof
4 Numerical experiments
Example 1
| Dimension of the problem | The method in [14] | | | Algorithm 1 with \(\boldsymbol{\alpha^{*}_{k_{1}}}\) | | | Algorithm 1 with \(\boldsymbol{\alpha^{*}_{k_{2}}}\) | | |
|---|---|---|---|---|---|---|---|---|---|
| | k | l | CPU (sec.) | k | l | CPU (sec.) | k | l | CPU (sec.) |
| n = 100 | 228 | 465 | 0.024 | 138 | 447 | 0.023 | 73 | 240 | 0.06 |
| n = 300 | 259 | 540 | 0.04 | 185 | 580 | 0.03 | 103 | 332 | 0.09 |
| n = 500 | 531 | 1,109 | 0.15 | 227 | 700 | 0.10 | 128 | 403 | 0.16 |
| n = 600 | 520 | 1,160 | 0.22 | 225 | 701 | 0.13 | 135 | 419 | 0.20 |
| n = 800 | 568 | 1,236 | 0.51 | 244 | 779 | 0.34 | 157 | 520 | 0.40 |
| Dimension of the problem | The method in [14] | | | Algorithm 1 with \(\boldsymbol{\alpha^{*}_{k_{1}}}\) | | | Algorithm 1 with \(\boldsymbol{\alpha^{*}_{k_{2}}}\) | | |
|---|---|---|---|---|---|---|---|---|---|
| | k | l | CPU (sec.) | k | l | CPU (sec.) | k | l | CPU (sec.) |
| n = 100 | 228 | 465 | 0.02 | 142 | 439 | 0.01 | 99 | 309 | 0.06 |
| n = 300 | 259 | 540 | 0.04 | 180 | 553 | 0.03 | 97 | 302 | 0.08 |
| n = 500 | 531 | 1,109 | 0.15 | 226 | 699 | 0.09 | 189 | 579 | 0.23 |
| n = 600 | 520 | 1,160 | 0.23 | 251 | 768 | 0.14 | 129 | 400 | 0.18 |
| n = 800 | 568 | 1,236 | 0.48 | 246 | 754 | 0.31 | 197 | 603 | 0.48 |
Example 2
| Dimension of the problem | The method in [14] | | | Algorithm 1 with \(\boldsymbol{\alpha^{*}_{k_{1}}}\) | | | Algorithm 1 with \(\boldsymbol{\alpha^{*}_{k_{2}}}\) | | |
|---|---|---|---|---|---|---|---|---|---|
| | k | l | CPU (sec.) | k | l | CPU (sec.) | k | l | CPU (sec.) |
| n = 100 | 318 | 676 | 0.03 | 93 | 312 | 0.01 | 68 | 235 | 0.05 |
| n = 300 | 435 | 936 | 0.07 | 127 | 404 | 0.03 | 111 | 356 | 0.09 |
| n = 500 | 489 | 1,035 | 0.15 | 146 | 491 | 0.07 | 129 | 416 | 0.17 |
| n = 600 | 406 | 877 | 0.18 | 117 | 378 | 0.08 | 92 | 299 | 0.15 |
| n = 800 | 386 | 832 | 0.64 | 110 | 359 | 0.29 | 76 | 249 | 0.28 |
| Dimension of the problem | The method in [14] | | | Algorithm 1 with \(\boldsymbol{\alpha^{*}_{k_{1}}}\) | | | Algorithm 1 with \(\boldsymbol{\alpha^{*}_{k_{2}}}\) | | |
|---|---|---|---|---|---|---|---|---|---|
| | k | l | CPU (sec.) | k | l | CPU (sec.) | k | l | CPU (sec.) |
| n = 100 | 318 | 676 | 0.02 | 157 | 793 | 0.02 | 84 | 298 | 0.06 |
| n = 300 | 435 | 995 | 0.06 | 199 | 936 | 0.07 | 164 | 613 | 0.16 |
| n = 500 | 489 | 1,035 | 0.14 | 190 | 769 | 0.12 | 155 | 550 | 0.22 |
| n = 600 | 406 | 877 | 0.20 | 129 | 402 | 0.08 | 89 | 300 | 0.14 |
| n = 800 | 386 | 832 | 0.35 | 169 | 714 | 0.32 | 89 | 309 | 0.26 |
Tables 1-4 show that Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is very efficient even for large-scale classical nonlinear complementarity problems. Moreover, they demonstrate computationally that Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is more effective than both the method presented in [14] and Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{1}}\), in the sense that it needs fewer iterations and fewer evaluations of T, which clearly illustrates its efficiency.
Declarations
Acknowledgements
Part of the research of the second author was carried out during his visit to King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia. The third author was partially supported by grant MOST 103-2115-M-037-001.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Fichera, G: Problemi elettrostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Mem. Cl. Sci. Fis. Mat. Nat., Sez. I 7, 91-140 (1964)
- Stampacchia, G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 258, 4413-4416 (1964)
- Stampacchia, G: Variational inequalities. In: Ghizzetti, A (ed.) Theory and Applications of Monotone Operators. Proc. NATO Adv. Study Institute, Venice, Oderisi, Gubbio (1968)
- Ansari, QH, Lalitha, CS, Mehta, M: Generalized Convexity, Nonsmooth Variational Inequalities and Nonsmooth Optimization. CRC Press, Boca Raton (2014)
- Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2003)
- Glowinski, R, Lions, JL, Trémolières, R: Numerical Analysis of Variational Inequalities. North-Holland, Amsterdam (1981)
- Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
- Konnov, IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
- Khobotov, EN: Modification of the extragradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 27, 120-127 (1987)
- Nagurney, A, Zhang, D: Projected Dynamical Systems and Variational Inequalities with Applications. Kluwer Academic, Dordrecht (1996)
- Korpelevich, GM: The extragradient method for finding saddle points and other problems. Matecon 12, 747-756 (1976)
- Iusem, AN, Svaiter, BF: A variant of Korpelevich’s method for variational inequalities with a new search strategy. Optimization 42, 309-321 (1997)
- Auslender, A, Teboulle, M: Interior projection-like methods for monotone variational inequalities. Math. Program., Ser. A 104, 39-68 (2005)
- Bnouhachem, A, Noor, MA, Rassias, TR: Three-steps iterative algorithms for mixed variational inequalities. Appl. Math. Comput. 183, 436-446 (2006)
- Bnouhachem, A, Fu, X, Xu, MH, Zhaohan, S: Modified extragradient methods for solving variational inequalities. Comput. Math. Appl. 57, 230-239 (2009)
- Fu, X: A two-stage prediction-correction method for solving variational inequalities. J. Comput. Appl. Math. 214, 345-355 (2008)
- He, BS, Liao, LZ: Improvement of some projection methods for monotone variational inequalities. J. Optim. Theory Appl. 112, 111-128 (2002)
- He, BS, Yang, H, Meng, Q, Han, DR: Modified Goldstein-Levitin-Polyak projection method for asymmetric strongly monotone variational inequalities. J. Optim. Theory Appl. 112, 129-143 (2002)
- Wang, YJ, Xiu, NH, Wang, CY: A new version of extragradient method for variational inequality problems. Comput. Math. Appl. 42, 969-979 (2001)
- Xiu, NH, Zhang, JZ: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 152, 559-585 (2003)
- Bnouhachem, A: A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 309, 136-150 (2005)
- He, BS, Yang, ZH, Yuan, XM: An approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 300, 362-374 (2004)
- Zarantonello, EH: Projections on convex sets in Hilbert space and spectral theory. In: Zarantonello, EH (ed.) Contributions to Nonlinear Functional Analysis. Academic Press, New York (1971)
- Calamai, PH, Moré, JJ: Projected gradient methods for linearly constrained problems. Math. Program. 39, 93-116 (1987)
- Gafni, EM, Bertsekas, DP: Two-metric projection methods for constrained optimization. SIAM J. Control Optim. 22, 936-964 (1984)
- Peng, JM, Fukushima, M: A hybrid Newton method for solving the variational inequality problem via the D-gap function. Math. Program. 86, 367-386 (1999)
- He, BS, Yuan, XM, Zhang, JZ: Comparison of two kinds of prediction-correction methods for monotone variational inequalities. Comput. Optim. Appl. 27, 247-267 (2004)
- Harker, PT, Pang, JS: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265-284 (1990)
- Marcotte, P, Dussault, JP: A note on a globally convergent Newton method for solving variational inequalities. Oper. Res. Lett. 6, 35-42 (1987)
- Taji, K, Fukushima, M, Ibaraki, T: A globally convergent Newton method for solving strongly monotone variational inequalities. Math. Program. 58, 369-383 (1993)