A projection descent method for solving variational inequalities
Journal of Inequalities and Applications volume 2015, Article number: 143 (2015)
Abstract
In this paper, we propose a descent direction method for solving variational inequalities. A new iterate is obtained by searching for the optimal step size along a new descent direction obtained as a linear combination of two descent directions. Under suitable conditions, the global convergence of the proposed method is studied. Two numerical experiments are presented to illustrate the efficiency of the proposed method.
1 Introduction
The theory of variational inequalities is a well-established subject in nonlinear analysis and optimization. It originated in the early sixties with the pioneering work of Fichera [1] and Stampacchia [2] (see also [3]). Since then it has been extended and studied in several directions. One of the most important aspects of the theory of variational inequalities is the development of solution methods. Several solution methods have been proposed and analyzed in the literature (see, for example, [4–10] and the references therein). Fixed point theory plays an important role in developing various kinds of algorithms for solving variational inequalities. By using the projection operator technique, one can easily establish the equivalence between variational inequalities and fixed point problems. This alternative equivalent formulation provides an elegant solution method, known as the projection gradient method, for solving variational inequalities. The convergence of this method requires the strong monotonicity and Lipschitz continuity of the underlying operator. In many applications, however, the underlying operator does not satisfy these strong conditions, and the projection gradient method is therefore not applicable to such problems. Korpelevich [11] modified the projection gradient method to overcome these difficulties and introduced the so-called extragradient method. It generates the iterates according to the following recursion:
$$ u^{k+1} = P_{K}\bigl[u^{k} - \rho T\bigl(\bar{u}^{k}\bigr)\bigr], $$
where
$$ \bar{u}^{k} = P_{K}\bigl[u^{k} - \rho T\bigl(u^{k}\bigr)\bigr] $$
and \(\rho>0\) is a fixed parameter. This method overcomes the difficulties arising in the projection gradient method by performing an additional forward step and an extra projection at each iteration, that is, a double projection. It was proved in [11] that the extragradient method is globally convergent if T is monotone and Lipschitz continuous on K and \(0<\rho<1/L\), where L is the Lipschitz constant of T. When the operator T is not Lipschitz continuous, or it is Lipschitz continuous but the constant L is not known, the fixed parameter ρ must be replaced by a step size computed through an Armijo-like line search procedure, with a new projection needed for each trial point (see, e.g., [11, 12]), which can be computationally very expensive. To overcome these difficulties, several modified projection and extragradient-type methods [12–22] have been suggested and developed for solving variational inequalities. It was shown in [14, 16] that the three-step method performs better than the two-step and one-step methods for solving variational inequalities.
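To make the recursion concrete, the following is a minimal Python sketch of the extragradient iteration. The choice \(K=\mathbb{R}^{n}_{+}\) (so that the projection reduces to a componentwise maximum) and the residual-based stopping rule are illustrative assumptions, not part of the original description.

```python
import numpy as np

def project(z):
    """Euclidean projection onto K = R^n_+ (illustrative choice of K)."""
    return np.maximum(z, 0.0)

def extragradient(T, u0, rho=0.1, tol=1e-7, max_iter=10000):
    """Korpelevich's extragradient method: a predictor step followed by a
    corrector step, each requiring one projection onto K."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        u_bar = project(u - rho * T(u))       # predictor: P_K[u^k - rho*T(u^k)]
        if np.linalg.norm(u - u_bar) <= tol:  # stop when the residual is small
            break
        u = project(u - rho * T(u_bar))       # corrector: P_K[u^k - rho*T(u_bar^k)]
    return u
```

For a monotone and Lipschitz continuous operator, \(\rho\) should be chosen in \((0,1/L)\), as recalled above.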
The aim of this paper is to develop an algorithm, inspired by Fu [16], for solving variational inequalities. More precisely, in the first method a new iterate is obtained by searching for the optimal step size along a combined descent direction formed as a linear combination of two descent directions, while in the second method another optimal step length is employed to achieve more substantial progress in each iteration. It is proved theoretically that the lower bound on the progress achieved by the second method is greater than that achieved by the first one. Under certain conditions, the global convergence of the proposed methods is proved. Our results can be viewed as significant extensions of the previously known results.
2 Preliminaries
Let \(K \subset\mathbb{R}^{n}\) be a nonempty closed convex set and \(T : K \to \mathbb{R}^{n}\) be an operator. The classical variational inequality problem, denoted by \(\operatorname{VI}(T,K)\), is to find a vector \(u^{*}\in K\) such that
$$ \bigl\langle T\bigl(u^{*}\bigr), v - u^{*} \bigr\rangle \geq 0, \quad \forall v \in K. $$ (1)
It is worth mentioning that the solution set \(S^{*}\) of \(\operatorname{VI}(T,K)\) is nonempty if T is continuous and K is compact.
It is well known that if K is a closed convex cone, then \(\operatorname{VI}(T,K)\) is equivalent to the nonlinear complementarity problem of finding \(u^{*} \in K\) such that
$$ T\bigl(u^{*}\bigr) \in K^{*} \quad \mbox{and} \quad \bigl\langle u^{*}, T\bigl(u^{*}\bigr) \bigr\rangle = 0, $$
where \(K^{*} := \{ y \in\mathbb{R}^{n}: \langle y,x \rangle\geq0 \mbox{ for all } x \in K \}\). For further details on variational inequalities and complementarity problems, we refer to [4–8] and the references therein.
In what follows, we always assume that the underlying operator T is continuous and pseudomonotone, that is,
$$ \bigl\langle T(u), v-u \bigr\rangle \geq 0 \quad \Longrightarrow \quad \bigl\langle T(v), v-u \bigr\rangle \geq 0, \quad \forall u, v \in K, $$
and the solution set of problem (1), denoted by \(S^{*}\), is nonempty.
The following results will be used in the sequel.
Lemma 1
[23]
Let \(K \subset\mathbb{R}^{n}\) be a nonempty closed convex set and \(P_{K}(\cdot)\) denote the projection on K under the Euclidean norm, that is, \(P_{K}(z)= \arg\min\{ \|z-x\| : x \in K\}\). Then the following statements hold:
-
(a)
\(\langle z-P_{K}(z),P_{K}(z)-v\rangle\geq 0\), \(\forall z \in\mathbb{R}^{n}\), \(v \in K\).
-
(b)
\({\|P_{K}(z)-v\|}^{2} \le{\|z-v\|}^{2}-{\| z-P_{K}(z)\|}^{2}\), \(\forall z \in\mathbb{R}^{n}\), \(v \in K\).
-
(c)
\(\|P_{K}(w)-P_{K}(v)\|^{2}\leq\langle w-v,P_{K}(w)-P_{K}(v)\rangle\), \(\forall v,w \in\mathbb{R}^{n}\).
Lemma 2
[7]
\(u^{*}\) is a solution of problem (1) if and only if
$$ u^{*} = P_{K}\bigl[u^{*} - \rho T\bigl(u^{*}\bigr)\bigr] \quad \mbox{for any } \rho > 0. $$
From Lemma 2, it is clear that u is a solution of (1) if and only if u is a zero point of the function
$$ e(u,\rho) := u - P_{K}\bigl[u - \rho T(u)\bigr]. $$
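As an illustration, when \(K=\mathbb{R}^{n}_{+}\) the projection is componentwise and \(e(u,\rho)\) can be evaluated directly; the choice of K in the following sketch is an assumption made only for this example.

```python
import numpy as np

def residual(T, u, rho):
    """e(u, rho) = u - P_K[u - rho*T(u)] with K = R^n_+ (illustrative choice of K)."""
    return u - np.maximum(u - rho * T(u), 0.0)

# u solves problem (1) exactly when residual(T, u, rho) is the zero vector,
# which motivates the stopping criterion ||e(u^k, rho_k)|| <= eps used in Algorithm 1.
```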
The next lemma shows that \(\|e(u,\rho)\|\) is a non-decreasing function, while \(\frac{\|e(u,\rho)\|}{\rho}\) is a non-increasing one with respect to ρ.
Lemma 3
For all \(u \in\mathbb{R}^{n}\) and \(\rho' > \rho> 0\),
$$ \bigl\Vert e\bigl(u,\rho'\bigr)\bigr\Vert \geq \bigl\Vert e(u,\rho)\bigr\Vert $$ (4)
and
$$ \frac{\Vert e(u,\rho')\Vert }{\rho'} \leq \frac{\Vert e(u,\rho)\Vert }{\rho}. $$ (5)
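A quick numerical sanity check of Lemma 3 (not a proof) can be run on a small affine example; the operator T, the point u, and the grid of ρ values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = A.T @ A + np.eye(n)                 # positive definite matrix
q = rng.standard_normal(n)
T = lambda u: M @ u + q                 # an illustrative affine operator

def residual(u, rho):
    # e(u, rho) for K = R^n_+ (illustrative choice of K)
    return u - np.maximum(u - rho * T(u), 0.0)

u = rng.standard_normal(n)
rhos = np.linspace(0.1, 2.0, 20)
norms = np.array([np.linalg.norm(residual(u, r)) for r in rhos])
assert np.all(np.diff(norms) >= -1e-12)        # ||e(u, rho)|| is non-decreasing in rho
assert np.all(np.diff(norms / rhos) <= 1e-12)  # ||e(u, rho)||/rho is non-increasing in rho
```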
3 Algorithm and convergence results
In this section, we suggest and analyze two new methods for solving variational inequalities (1). For given \(u^{k}\in K\) and \(\rho_{k}>0\), each iteration of the first method consists of three steps: the first step computes \(\tilde{u}^{k}\), the second step computes \(\bar{u}^{k}\), and the third step produces the new iterate \(u^{k+1}\).
Algorithm 1
-
Step 1.
Given \(u^{0} \in K\), \(\epsilon>0\), \(\rho_{0}=1\), \(\nu>1\), \(\mu\in(0, \sqrt{2})\), \(\sigma\in(0,1)\), \(\zeta\in(0,1)\), \(\eta _{1}\in(0,\zeta)\), \(\eta_{2}\in(\zeta, \nu)\) and let \(k=0\).
-
Step 2.
If \(\|e(u^{k},\rho_{k} )\| \leq\epsilon\), then stop. Otherwise, go to Step 3.
-
Step 3.
-
(1)
For a given \(u^{k}\in K\), calculate the two predictors
$$ \tilde{u}^{k} = P_{K}\bigl[u^{k} - \rho_{k} T\bigl(u^{k}\bigr)\bigr], $$ (6a)
$$ \bar{u}^{k} = P_{K}\bigl[\tilde{u}^{k} - \rho_{k} T\bigl(\tilde{u}^{k}\bigr)\bigr]. $$ (6b)
-
(2)
If \(\|e(\tilde{u}^{k},\rho_{k} )\|\leq\epsilon\), then stop. Otherwise, continue.
-
(3)
If \(\rho_{k}\) satisfies both
$$ r_{1}:=\frac{|\rho_{k}[\langle \tilde{u}^{k}-\bar{u}^{k},T(u^{k})-T(\tilde{u}^{k})\rangle-\langle u^{k}-\bar{u}^{k},T(\tilde{u}^{k})-T(\bar{u}^{k})\rangle]|}{\|\tilde{u}^{k}-\bar{u}^{k}\|^{2}}\leq \mu^{2} $$ (7)
and
$$ r_{2}:=\frac{\|\rho_{k}(T(\tilde{u}^{k})-T(\bar{u}^{k}))\|}{\|\tilde{u}^{k} - \bar{u}^{k}\|} \leq\nu, $$ (8)
then go to Step 4; otherwise, continue.
-
(4)
Perform an Armijo-like line search by reducing \(\rho_{k}\),
$$ \rho_{k} := \rho_{k} *\frac{\sigma}{\max(r_{1}, 1)} $$ (9)
and go to Step 3.
-
Step 4.
Obtain the new iterate \(u^{k+1}\) by setting
$$ u^{k+1}(\alpha_{k})= P_{K}\bigl[u^{k} - \alpha_{k}d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr], $$ (10)
where
$$ d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)=\beta_{1}d_{1}\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)+\beta_{2}\rho_{k}T\bigl(\bar{u}^{k}\bigr) $$ (11)
and
$$ d_{1}\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr) :=\bigl(\tilde{u}^{k} - \bar{u}^{k}\bigr) - \rho_{k}\bigl(T\bigl(\tilde{u}^{k}\bigr)-T\bigl(\bar{u}^{k}\bigr)\bigr). $$ (12)
-
Step 5.
Adaptively choose a suitable \(\rho_{k+1}\) as the starting prediction step size for the next iteration.
-
(1)
Prepare a proper \(\rho_{k+1}\),
$$ \rho_{k+1} :=\left \{ \begin{array}{l@{\quad}l} \rho_{k}*\zeta/r_{2} & \mbox{if } r_{2} \le\eta_{1} \mbox{ or } r_{2} \ge\eta_{2}, \\ \rho_{k} & \mbox{otherwise}. \end{array} \right . $$
-
(2)
Return to Step 2, with k replaced by \(k+1\).
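The following Python sketch mirrors the structure of Algorithm 1 for \(K=\mathbb{R}^{n}_{+}\). It is illustrative only: the choice of K, the default value of σ, the function `choose_alpha` (a placeholder for the optimal step size \(\alpha_{k}\) discussed after Theorem 1, whose formulas are not reproduced here), and the use of the relaxation factor γ introduced before Lemma 4 are assumptions of the sketch rather than a reproduction of the authors' code.

```python
import numpy as np

def project(z):                              # P_K for K = R^n_+ (illustrative choice)
    return np.maximum(z, 0.0)

def residual(T, u, rho):                     # e(u, rho) = u - P_K[u - rho*T(u)]
    return u - project(u - rho * T(u))

def algorithm1(T, u0, choose_alpha, eps=1e-7, rho=1.0, nu=1.95, mu=0.95,
               sigma=0.8, zeta=0.7, eta1=0.3, eta2=0.95,
               beta1=0.5, beta2=1.5, gamma=1.9, max_iter=1000):
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(residual(T, u, rho)) <= eps:               # Step 2
            return u
        while True:                                                  # Step 3
            u_t = project(u - rho * T(u))                            # (6a)
            u_b = project(u_t - rho * T(u_t))                        # (6b)
            if np.linalg.norm(residual(T, u_t, rho)) <= eps:
                return u_t
            diff = u_t - u_b
            r1 = abs(rho * (diff @ (T(u) - T(u_t))
                            - (u - u_b) @ (T(u_t) - T(u_b)))) / (diff @ diff)   # (7)
            r2 = rho * np.linalg.norm(T(u_t) - T(u_b)) / np.linalg.norm(diff)   # (8)
            if r1 <= mu**2 and r2 <= nu:
                break
            rho *= sigma / max(r1, 1.0)                              # (9): reduce rho
        d1 = diff - rho * (T(u_t) - T(u_b))                          # (12)
        d = beta1 * d1 + beta2 * rho * T(u_b)                        # (11)
        alpha = choose_alpha(u, u_t, u_b, rho, d)                    # placeholder for alpha_k
        u = project(u - gamma * alpha * d)                           # Step 4, (10) with relaxation
        if r2 <= eta1 or r2 >= eta2:                                 # Step 5: adapt rho
            rho *= zeta / r2
    return u
```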
Remark 1
(a) The proposed method can be viewed as a refinement and improvement of the method of He et al. [27]: it performs an additional projection step at each iteration and employs another optimal step length to achieve substantial progress in each iteration.
(b) If \(\beta_{1}=0\) and \(\beta_{2}=1\), we obtain the method proposed in [16].
(c) If \(\beta_{1}=1\) and \(\beta_{2}=0\), we obtain a descent method in which the new iterate is obtained along the descent direction \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\).
Remark 2
In (9), if σ were not divided by \(\max(r_{1}, 1)\), the update could leave \(\rho_{k}\) too large for the next trial and thus increase the number of Armijo-like line searches. So we choose the trial value for the next line search to be modestly smaller than the current \(\rho_{k}\), which avoids expensive computations in the next iteration.
We now consider the criteria for \(\alpha_{k}\), which ensures that \(u^{k+1}(\alpha_{k})\) is closer to the solution set than \(u^{k}\). For this purpose, we define
Theorem 1
Let \(u^{*}\in S^{*}\). Then we have
where
and
Proof
Since \(u^{*}\in K\), setting \(v=u^{*}\) and \(z=u^{k}-\alpha_{k} d(\tilde {u}^{k},\bar{u}^{k})\) in (b), we have
Using the definition of \(\varTheta (\alpha_{k})\), we get
For any solution \(u^{*}\in S^{*}\) of problem (1), we have
By the pseudomonotonicity of T, we obtain
Substituting \(z=\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})\) and \(v=u^{*}\) into (a), we get
Adding (17) and (18), and using the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\), we have
Multiplying (19) by \(2\alpha_{k}\beta_{1}\) and (17) by \(2\alpha_{k}\beta_{2}\), and then adding the results to (16), we obtain
Note that \(\bar{u}^{k}= P_{K} [\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})]\). We can apply (a) with \(v=u^{k+1}\), to obtain
Multiplying (21) by \(2\alpha_{k}\beta_{2}\), adding the result to (20), and using the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\), we have
and the theorem is proved. □
Proposition 1
Assume that T is continuously differentiable. Then we have
-
(i)
\(\varPhi ^{\prime}(\alpha)=2 \langle u^{k+1}(\alpha)- \bar {u}^{k},d(\tilde{u}^{k},\bar{u}^{k})\rangle\);
-
(ii)
\(\varPhi ^{\prime}(\alpha)\) is a non-increasing function with respect to \(\alpha\geq0\), and hence, \(\varPhi (\alpha)\) is concave.
Proof
For given \(u^{k}, \tilde{u}^{k}, \bar{u}^{k} \in K\), let
It is easy to see that the solution of the following problem
is \(y^{*} = P_{K}[u^{k}-\alpha d(\tilde{u}^{k},\bar{u}^{k})]\). By substituting \(y^{*}\) into (22) and simplifying it, we obtain
\(\varPhi (\alpha)\) is differentiable and its derivative is given by
and hence (i) is proved. We now establish the proof of the second assertion. Let \(\bar{\alpha} > \alpha\geq0\). We show that
that is,
By setting \(z := u^{k}-\bar{\alpha} d(\tilde{u}^{k},\bar{u}^{k})\), \(v := u^{k+1}(\alpha)\) and \(z := u^{k}-\alpha d(\tilde{u}^{k},\bar{u}^{k})\), \(v :=u^{k+1}(\bar{\alpha})\) in (a), respectively, we get
and
By adding (24) and (25), we obtain
that is,
It follows that
We obtain inequality (23) and complete the proof. □
Now for the same kth approximate solution \(u^{k}\), let
and
Since \(\varPsi (\alpha)\) is a quadratic function of α, it reaches its maximum at
and
In order to obtain \(\alpha^{*}_{k_{2}}\) more easily, we compute it approximately by solving the following simple optimization problem:
where \(m_{1} \geq1\).
Based on Theorem 1 and Proposition 1, the following result can be proved easily.
Proposition 2
Let \(\alpha^{*}_{k_{1}}\) and \(\alpha^{*}_{k_{2}}\) be defined by (26) and (28), respectively, and let T be pseudomonotone and continuously differentiable. Then
-
(i)
\(\|u^{k}-u^{*}\|^{2}-\| u^{k+1}(\alpha^{*}_{k_{2}})-u^{*}\|^{2}\geq \varPhi (\alpha^{*}_{k_{2}})\);
-
(ii)
\(\| u^{k} - u^{*} \|^{2}-\| u^{k+1}(\alpha^{*}_{k_{1}}) - u^{*} \| ^{2}\geq \varPsi (\alpha^{*}_{k_{1}})\);
-
(iii)
\(\varPhi (\alpha^{*}_{k_{2}})\geq \varPsi (\alpha^{*}_{k_{1}})\);
-
(iv)
if \(\varPhi ^{\prime}(\alpha^{*}_{k_{2}})=0\), then \(\|u^{k}-u^{*}\|^{2}-\| u^{k+1}(\alpha^{*}_{k_{2}}) -u^{*}\|^{2}\geq\|u^{k}- u^{k+1}(\alpha^{*}_{k_{2}}) \|^{2}\).
Remark 3
Let
and
represent the new iterates generated by the proposed method with \(\alpha_{k}=\alpha^{*}_{k_{1}}\) and \(\alpha_{k}=\alpha^{*}_{k_{2}}\), respectively. Let
and
measure the progress made by the new iterates, respectively. By using Proposition 2, it is easy to show that
and
The above inequalities show that the proposed method with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is expected to make more progress than the proposed method with \(\alpha_{k}=\alpha^{*}_{k_{1}}\) at each iteration, which explains theoretically why the former outperforms the latter.
In the next theorem, we show that \(\alpha^{*}_{k_{1}}\) and \(\varPsi (\alpha^{*}_{k_{1}})\) are bounded away from zero, which is one of the keys to proving the global convergence results.
Theorem 2
Let \(u^{*}\) be a solution of problem (1). For given \(u^{k} \in K\), let \(\tilde{u}^{k}\), \(\bar{u}^{k}\) be the predictors produced by (6a) and (6b). Then
and
Proof
Note that \(\tilde{u}^{k}= P_{K} [u^{k}-\rho_{k} T(u^{k})]\), \(\bar{u}^{k}= P_{K} [\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})]\). We can apply (c) with \(v=u^{k}-\rho_{k} T(u^{k})\), \(w=\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})\) to obtain
By some manipulations, we obtain
Then we have
By using (31), (7) and the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\), we get
Recalling the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\) (see (12)) and applying criterion (8), it is easy to see that
Moreover, by using (32) together with (33), we get
Substituting (34) into (27), we get the assertion of the theorem, and the proof is complete. □
From the computational point of view, a relaxation factor \(\gamma\in (0,2)\) is preferable in the correction. We are now in a position to prove the contractive property of the iterative sequence.
Lemma 4
Let \(u^{*}\) be a solution of problem (1) and let \(\{u^{k+1}(\gamma\alpha_{k})\}\) be the sequence generated by Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{1}}\) or \(\alpha_{k}=\alpha^{*}_{k_{2}}\). Then \(\{u^{k}\}\) is bounded and
where
Proof
If \(\alpha_{k} = \alpha^{*}_{k_{2}}\), it follows from (14), (29) and (30) that
If \(\alpha_{k}=\alpha^{*}_{k_{1}}\), then we have
Since \(\mu\in(0,\sqrt{2})\), we have
From the above inequality, it is easy to verify that the sequence \(\{u^{k}\}\) is bounded. □
We now present the convergence result of Algorithm 1.
Theorem 3
If \(\inf_{k\geq0} \rho_{k} =: \rho > 0\), then any cluster point of the sequence \(\{\tilde{u}^{k}\}\) generated by Algorithm 1 is a solution of problem (1).
Proof
It follows from (35) that
which means that
Since the sequence \(\{u^{k}\}\) is bounded, \(\{\tilde{u}^{k}\}\) is bounded too, and hence it has at least one cluster point. Let \(u^{\infty}\) be a cluster point of \(\{\tilde{u}^{k}\}\) and let \(\{\tilde{u}^{k_{j}}\}\) be a subsequence converging to \(u^{\infty}\). By the continuity of \(e(u,\rho)\) and inequality (4), it follows that
This means that \(u^{\infty}\) is a solution of problem (1).
We now prove that the sequence \(\{u^{k}\}\) has exactly one cluster point. Assume that \(\tilde{u}\) is another cluster point and satisfies
Since \(u^{\infty} \) is a cluster point of the sequence \(\{u^{k}\}\), there is \(k_{0}>0\) such that
On the other hand, since \(u^{\infty}\in S^{*}\) and from (35), we have
it follows that
This contradicts the assumption that \(\tilde{u}\) is a cluster point of \(\{u^{k}\}\). Thus, the sequence \(\{u^{k}\}\) converges to \(u^{\infty} \in S^{*}\). □
4 Numerical experiments
In order to verify the theoretical assertions, we consider the nonlinear complementarity problem
$$ u \geq 0, \qquad T(u) \geq 0, \qquad \bigl\langle u, T(u) \bigr\rangle = 0, $$ (36)
where \(T(u)=D(u)+Mu+q\), with \(D(u)\) and \(Mu+q\) being the nonlinear and linear parts of \(T(u)\), respectively. Problem (36) is equivalent to problem (1) with \(K = \mathbb{R}^{n}_{+}\).
Example 1
We form the linear part in the test problems as
In \(D(u)\), the nonlinear part of \(T(u)\), the components are chosen to be \(D_{j} (u)= d_{j} *\arctan(u_{j})\), where \(d_{j}\) is a random variable in \((0,1)\).
In all tests we take \(\rho_{0}=1\), \(\zeta=0.7\), \(\eta_{1}=0.3\), \(\eta_{2}=0.95\), \(\mu=0.95\), \(\nu=1.95\), \(\gamma=1.9\), \(\beta_{1}=0.5\), and \(\beta_{2}=1.5\). We employ \(\|e(u^{k},\rho_{k})\|\leq10^{-7}\) as the stopping criterion and choose \(u^{0}=0\) as the initial iterate. All codes were written in Matlab. We compare Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) against Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{1}}\) and the method in [14]. The test results for problem (36) with different dimensions are reported in Tables 1 and 2, where k is the number of iterations and l denotes the number of evaluations of the mapping T.
Example 2
We form the linear part in the test problems in a similar way to Harker and Pang [28]. The matrix \(M = A^{\top}A +B\), where A is an \(n\times n\) matrix whose entries are randomly generated in the interval \((-5,+5)\), and a skew-symmetric matrix B is generated in the same way. The vector q is generated from a uniform distribution in the interval \((-500,500)\). In \(D(u)\), the nonlinear part of \(T(u)\), the components are chosen to be \(D_{j} (u)= d_{j} *\arctan(u_{j})\), where \(d_{j}\) is a random variable in \((0,1)\). A similar type of problem was tested in [29] and [30]. In all tests we take \(\rho_{0}=1\), \(\zeta=0.7\), \(\eta_{1}=0.3\), \(\eta_{2}=0.95\), \(\mu=0.95\), \(\nu=1.95\), \(\gamma=1.9\), \(\beta_{1}=0.5\), \(\beta_{2}=1.5\). We employ \(\|e(u^{k},\rho_{k})\|\leq10^{-7}\) as the stopping criterion and choose \(u^{0}=0\) as the initial iterate. The test results for problem (36) with different dimensions are reported in Tables 3 and 4.
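For concreteness, the test problem of Example 2 could be generated roughly as follows; the random seed, the particular construction of the skew-symmetric matrix B, and the call to the `algorithm1` sketch of Section 3 are assumptions, not a reproduction of the authors' Matlab code.

```python
import numpy as np

rng = np.random.default_rng(1)                   # assumed seed
n = 500
A = rng.uniform(-5.0, 5.0, size=(n, n))
C = rng.uniform(-5.0, 5.0, size=(n, n))
B = C - C.T                                      # a skew-symmetric matrix
M = A.T @ A + B
q = rng.uniform(-500.0, 500.0, size=n)
d = rng.uniform(0.0, 1.0, size=n)

def T(u):
    """T(u) = D(u) + M u + q with D_j(u) = d_j * arctan(u_j)."""
    return d * np.arctan(u) + M @ u + q

u0 = np.zeros(n)
# u_sol = algorithm1(T, u0, choose_alpha=..., eps=1e-7)   # solver sketch from Section 3
```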
Tables 1-4 show that Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is very efficient even for large-scale classical nonlinear complementarity problems. Moreover, they demonstrate computationally that Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is more effective than the method presented in [14] and Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{1}}\), in the sense that it requires fewer iterations and fewer evaluations of T, which clearly illustrates its efficiency.
References
Fichera, G: Problemi elettrostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Mem. Cl. Sci. Fis. Mat. Nat., Sez. I 7, 91-140 (1964)
Stampacchia, G: Formes bilineaires coercivitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 258, 4413-4416 (1964)
Stampacchia, G: Variational inequalities. In: Ghizzetti, A (ed.) Theory and Applications of Monotone Operators. Proc. NATO Adv. Study Institute, Venice, Oderisi, Gubbio (1968)
Ansari, QH, Lalitha, CS, Mehta, M: Generalized Convexity, Nonsmooth Variational Inequalities and Nonsmooth Optimization. CRC Press, Boca Raton (2014)
Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2003)
Glowinski, R, Lions, JL, Trémolieres, R: Numerical Analysis of Variational Inequalities. North-Holland, Amsterdam (1981)
Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
Konnov, IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
Khobotov, EN: Modification of the extragradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 27, 120-127 (1987)
Nagurney, A, Zhang, D: Projected Dynamical Systems and Variational Inequalities with Applications. Kluwer Academic, Dordrecht (1996)
Korpelevich, GM: The extragradient method for finding saddle points and other problems. Matecon 12, 747-756 (1976)
Iusem, AN, Svaiter, BF: A variant of Korpelevich’s method for variational inequalities with a new strategy. Optimization 42, 309-321 (1997)
Auslender, A, Teboule, M: Interior projection-like methods for monotone variational inequalities. Math. Program., Ser. A 104, 39-68 (2005)
Bnouhachem, A, Noor, MA, Rassias, TR: Three-steps iterative algorithms for mixed variational inequalities. Appl. Math. Comput. 183, 436-446 (2006)
Bnouhachem, A, Fu, X, Xu, MH, Zhaohan, S: Modified extragradient methods for solving variational inequalities. Comput. Math. Appl. 57, 230-239 (2009)
Fu, X: A two-stage prediction-correction method for solving variational inequalities. J. Comput. Appl. Math. 214, 345-355 (2008)
He, BS, Liao, LZ: Improvement of some projection methods for monotone variational inequalities. J. Optim. Theory Appl. 112, 111-128 (2002)
He, BS, Yang, H, Meng, Q, Han, DR: Modified Goldstein-Levitin-Polyak projection method for asymmetric strongly monotone variational inequalities. J. Optim. Theory Appl. 112, 129-143 (2002)
Wang, YJ, Xiu, NH, Wang, CY: A new version of extragradient method for variational inequalities problems. Comput. Math. Appl. 42, 969-979 (2001)
Xiu, NH, Zhang, JZ: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 152, 559-585 (2003)
Bnouhachem, A: A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 309, 136-150 (2005)
He, BS, Yang, ZH, Yuan, XM: An approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 300, 362-374 (2004)
Zarantonello, EH: Projections on convex sets in Hilbert space and spectral theory. In: Zarantonello, EH (ed.) Contributions to Nonlinear Functional Analysis. Academic Press, New York (1971)
Calamai, PH, Moré, JJ: Projected gradient methods for linearly constrained problems. Math. Program. 39, 93-116 (1987)
Gafni, EM, Bertsekas, DP: Two-metric projection methods for constrained optimization. SIAM J. Control Optim. 22, 936-964 (1984)
Peng, JM, Fukushima, M: A hybrid Newton method for solving the variational inequality problem via the D-gap function. Math. Program. 86, 367-386 (1999)
He, BS, Yuan, XM, Zhang, JZ: Comparison of two kinds of prediction-correction methods for monotone variational inequalities. Comput. Optim. Appl. 27, 247-267 (2004)
Harker, PT, Pang, JS: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265-284 (1990)
Marcotte, P, Dussault, JP: A note on a globally convergent Newton method for solving variational inequalities. Oper. Res. Lett. 6, 35-42 (1987)
Taji, K, Fukushima, M, Ibaraki, T: A globally convergent Newton method for solving strongly monotone variational inequalities. Math. Program. 58, 369-383 (1993)
Acknowledgements
Part of the research of the second author was done during his visit to King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia. The third author was partially supported by the grant MOST 103-2115-M-037-001.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors have equal contribution. All authors read and approved the final manuscript.
Cite this article
Bnouhachem, A., Ansari, Q.H. & Wen, CF. A projection descent method for solving variational inequalities. J Inequal Appl 2015, 143 (2015). https://doi.org/10.1186/s13660-015-0665-9
MSC
- 49J40
- 65N30
Keywords
- variational inequalities
- self-adaptive rules
- pseudomonotone operators
- projection method