A projection descent method for solving variational inequalities

Abstract

In this paper, we propose a descent direction method for solving variational inequalities. A new iterate is obtained by searching for the optimal step size along a new descent direction, which is obtained as a linear combination of two descent directions. Under suitable conditions, the global convergence of the proposed method is studied. Two numerical experiments are presented to illustrate the efficiency of the proposed method.

1 Introduction

The theory of variational inequalities is a well-established subject in nonlinear analysis and optimization. It was initiated during the early sixties by the pioneering work of Fichera [1] and Stampacchia [2] (see also [3]). Since then it has been extended and studied in several directions. One of the most important aspects of the theory of variational inequalities is the development of solution methods. Several solution methods have been proposed and analyzed in the literature (see, for example, [4-10] and the references therein). Fixed point theory plays an important role in developing various kinds of algorithms for solving variational inequalities. By using the projection operator technique, one can easily establish the equivalence between variational inequalities and fixed point problems. This alternative equivalent formulation provides an elegant solution method, known as the projection gradient method, for solving variational inequalities. The convergence of this method requires the strong monotonicity and Lipschitz continuity of the underlying operator. Many applications fail to satisfy these strong conditions, and hence the projection gradient method is not applicable to such problems. Korpelevich [11] modified the projection gradient method to overcome these difficulties and introduced the so-called extragradient method. It generates the iterates according to the following recursion:

$$ u^{k+1}=P_{K}\bigl[u^{k}-\rho T\bigl( \bar{u}^{k}\bigr)\bigr], $$

where

$$ \bar{u}^{k}=P_{K}\bigl[u^{k}-\rho T \bigl({u}^{k}\bigr)\bigr], $$

and \(\rho>0\) is a fixed parameter. This method overcomes the difficulties of the projection gradient method by performing an additional forward step and an extra projection at each iteration, that is, a double projection. It was proved in [11] that the extragradient method is globally convergent if T is monotone and Lipschitz continuous on K and \(0<\rho<1/L\), where L is the Lipschitz constant of T. When the operator T is not Lipschitz continuous, or it is Lipschitz continuous but the constant L is not known, the fixed parameter ρ must be replaced by a step size computed through an Armijo-like line search procedure, with a new projection needed for each trial point (see, e.g., [11, 12]), which can be computationally very expensive. To overcome these difficulties, several modified projection and extragradient-type methods [12-22] have been suggested and developed for solving variational inequalities. It was shown in [14, 16] that three-step methods perform better than two-step and one-step methods for solving variational inequalities.
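For illustration, a minimal Python sketch of the extragradient recursion might look as follows; the choice \(K=\mathbb{R}^{n}_{+}\) (so that the projection is a componentwise maximum with zero), the sample operator, the fixed parameter ρ and the residual-based stopping rule are assumptions made only for the example, not part of Korpelevich's formulation.

```python
import numpy as np

def extragradient(T, project, u0, rho, tol=1e-7, max_iter=1000):
    """Korpelevich's extragradient method: two projections per iteration."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_iter):
        u_bar = project(u - rho * T(u))        # predictor step
        u_new = project(u - rho * T(u_bar))    # corrector step
        if np.linalg.norm(u - u_bar) <= tol:   # residual-based stopping rule
            return u_new
        u = u_new
    return u

# Illustrative usage with K = R^2_+, where the projection is a componentwise maximum
M = np.array([[1.0, 2.0], [0.0, 1.0]])
q = np.array([-1.0, -1.0])
T = lambda u: M @ u + q                        # affine, monotone operator
u_star = extragradient(T, lambda z: np.maximum(z, 0.0), np.zeros(2), rho=0.1)
```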

The aim of this paper is to develop an algorithm, inspired by Fu [16], for solving variational inequalities. More precisely, in the first method a new iterate is obtained by searching for the optimal step size along a combined descent direction, formed as the linear combination of two descent directions; in the second method another optimal step length is employed to achieve further progress at each iteration. It is proved theoretically that the lower bound on the progress obtained by the second method is larger than that obtained by the first one. Under certain conditions, the global convergence of the proposed methods is proved. Our results can be viewed as significant extensions of previously known results.

2 Preliminaries

Let \(K \subset\mathbb{R}^{n}\) be a nonempty closed convex set and \(T : K \to \mathbb{R}^{n}\) be an operator. A classical variational inequality problem, denoted by \(\operatorname{VI}(T,K)\), is to find a vector \(u^{*}\in K\) such that

$$ \bigl\langle T\bigl(u^{*}\bigr), v-u^{*} \bigr\rangle \geq0, \quad \forall v \in K. $$
(1)

It is worth mentioning that the solution set \(S^{*}\) of \(\operatorname{VI}(T,K)\) is nonempty if T is continuous and K is compact.

It is well known that if K is a closed convex cone, then \(\operatorname {VI}(T,K)\) is equivalent to the nonlinear complementarity problem of finding \(u^{*} \in K\) such that

$$ T\bigl(u^{*}\bigr) \in K^{*}\quad \mbox{and} \quad \bigl\langle T\bigl(u^{*}\bigr), u^{*} \bigr\rangle = 0, $$
(2)

where \(K^{*} := \{ y \in\mathbb{R}^{n}: \langle y,x \rangle\geq0 \mbox{ for all } x \in K \}\). For further details on variational inequalities and complementarity problems, we refer to [4-8] and the references therein.
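For the special case \(K=\mathbb{R}^{n}_{+}\), which is the case used in the numerical section, the conditions in (2) can be checked through the natural residual \(\min(u,T(u))\). A minimal Python helper, whose name and tolerance are illustrative assumptions, might look as follows.

```python
import numpy as np

def ncp_residual(u, T):
    """Natural residual for the complementarity problem with K = R^n_+:
    u solves u >= 0, T(u) >= 0, <u, T(u)> = 0 iff min(u, T(u)) = 0 componentwise."""
    return np.linalg.norm(np.minimum(u, T(u)))

# A candidate u is accepted as an approximate solution when, say, ncp_residual(u, T) <= 1e-7.
```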

In what follows, we always assume that the underlying operator T is continuous and pseudomonotone, that is,

$$ \bigl\langle T(u), v-u \bigr\rangle \ge0 \quad \Rightarrow \quad \bigl\langle T(v), v-u \bigr\rangle \geq0,\quad \forall u, v \in K, $$

and the solution set of problem (1), denoted by \(S^{*}\), is nonempty.
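A simple one-dimensional example shows that pseudomonotonicity is strictly weaker than monotonicity: the operator \(T(u)=\frac{1}{1+u}\) on \(K=[0,\infty)\) is pseudomonotone, since \(\langle T(u),v-u\rangle\geq0\) forces \(v\geq u\) and hence \(\langle T(v),v-u\rangle\geq0\), but it is not monotone because \(\langle T(v)-T(u),v-u\rangle=-\frac{(v-u)^{2}}{(1+u)(1+v)}\leq0\).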

The following results will be used in the sequel.

Lemma 1

[23]

Let \(K \subset\mathbb{R}^{n}\) be a nonempty closed convex set and \(P_{K}(\cdot)\) denote the projection on K under the Euclidean norm, that is, \(P_{K}(z)= \arg\min\{ \|z-x\| : x \in K\}\). Then the following statements hold:

  1. (a)

    \(\langle z-P_{K}(z),P_{K}(z)-v\rangle\geq 0\), \(\forall z \in\mathbb{R}^{n}\), \(v \in K\).

  2. (b)

    \({\|P_{K}(z)-v\|}^{2} \le{\|z-v\|}^{2}-{\| z-P_{K}(z)\|}^{2}\), \(\forall z \in\mathbb{R}^{n}\), \(v \in K\).

  3. (c)

    \(\|P_{K}(w)-P_{K}(v)\|^{2}\leq\langle w-v,P_{K}(w)-P_{K}(v)\rangle\), \(\forall v,w \in\mathbb{R}^{n}\).
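The three statements of Lemma 1 are easy to check numerically. The short Python sketch below does so for the illustrative choice \(K=\mathbb{R}^{n}_{+}\), for which the Euclidean projection is a componentwise maximum with zero; the random test points are an assumption made purely for illustration.

```python
import numpy as np

project = lambda z: np.maximum(z, 0.0)        # P_K for K = R^n_+
rng = np.random.default_rng(0)
z, w = rng.normal(size=5), rng.normal(size=5)
v = np.abs(rng.normal(size=5))                # an arbitrary point of K

# (a) <z - P_K(z), P_K(z) - v> >= 0
assert np.dot(z - project(z), project(z) - v) >= -1e-12
# (b) ||P_K(z) - v||^2 <= ||z - v||^2 - ||z - P_K(z)||^2
assert (np.linalg.norm(project(z) - v) ** 2
        <= np.linalg.norm(z - v) ** 2 - np.linalg.norm(z - project(z)) ** 2 + 1e-12)
# (c) ||P_K(w) - P_K(v)||^2 <= <w - v, P_K(w) - P_K(v)>
assert (np.linalg.norm(project(w) - project(v)) ** 2
        <= np.dot(w - v, project(w) - project(v)) + 1e-12)
```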

Lemma 2

[7]

\(u^{*}\) is a solution of problem (1) if and only if

$$ u^{*}=P_{K}\bigl[u^{*}-\rho T\bigl(u^{*}\bigr)\bigr], \quad \textit{where } \rho> 0. $$
(3)

From Lemma 2, it is clear that u is a solution of (1) if and only if u is a zero point of the function

$$ e(u,\rho):=u-P_{K}\bigl[u-\rho T(u)\bigr]. $$

The next lemma shows that \(\|e(u,\rho)\|\) is a non-decreasing function, while \(\frac{\|e(u,\rho)\|}{\rho}\) is a non-increasing one with respect to ρ.

Lemma 3

[24-26]

For all \(u \in\mathbb{R}^{n}\) and \(\rho' > \rho> 0\),

$$ \bigl\Vert e\bigl(u, \rho'\bigr) \bigr\Vert \ge \bigl\Vert e(u,\rho)\bigr\Vert $$
(4)

and

$$ \frac{\| e(u,\rho') \|}{\rho'} \le\frac{\| e(u,\rho) \|}{\rho}. $$
(5)

3 Algorithm and convergence results

In this section, we suggest and analyze two new methods for solving variational inequality (1). For given \(u^{k}\in K\) and \(\rho_{k}>0\), each iteration of the first method consists of three steps: the first step produces \(\tilde{u}^{k}\), the second step produces \(\bar{u}^{k}\), and the third step produces the new iterate \(u^{k+1}\). A code sketch of the complete procedure is given after the statement of the algorithm.

Algorithm 1

  1. Step 1.

    Given \(u^{0} \in K\), \(\epsilon>0\), \(\rho_{0}=1\), \(\nu>1\), \(\mu\in(0, \sqrt{2})\), \(\sigma\in(0,1)\), \(\zeta\in(0,1)\), \(\eta _{1}\in(0,\zeta)\), \(\eta_{2}\in(\zeta, \nu)\) and let \(k=0\).

  2. Step 2.

    If \(\|e(u^{k},\rho_{k} )\| \leq\epsilon\), then stop. Otherwise, go to Step 3.

  3. Step 3.
    1. (1)

      For a given \(u^{k}\in K\), calculate the two predictors

      $$\begin{aligned}& \tilde{u}^{k} = P_{K} \bigl[u^{k} - \rho_{k} T\bigl(u^{k}\bigr)\bigr], \end{aligned}$$
      (6a)
      $$\begin{aligned}& \bar{u}^{k} = P_{K}\bigl[ \tilde{u}^{k} - \rho_{k} T\bigl( \tilde{u}^{k} \bigr)\bigr]. \end{aligned}$$
      (6b)
    2. (2)

      If \(\|e(\tilde{u}^{k},\rho_{k} )\|\leq\epsilon\), then stop. Otherwise, continue.

    3. (3)

      If \(\rho_{k}\) satisfies both

      $$ r_{1}:=\frac{|\rho_{k}[\langle \tilde{u}^{k}-\bar{u}^{k},T(u^{k})-T(\tilde{u}^{k})\rangle-\langle u^{k}-\bar{u}^{k},T(\tilde{u}^{k})-T(\bar{u}^{k})\rangle]|}{\|\tilde {u}^{k}-\bar{u}^{k}\|^{2}}\leq \mu^{2} $$
      (7)

      and

      $$ r_{2}:=\frac{\|\rho_{k}(T(\tilde{u}^{k})-T(\bar{u}^{k}))\|}{\|\tilde{u}^{k} - \bar{u}^{k}\|} \leq\nu, $$
      (8)

      then go to Step 4; otherwise, continue.

    4. (4)

Perform an Armijo-like line search by reducing \(\rho_{k}\),

      $$ \rho_{k} := \rho_{k} *\frac{\sigma}{\max(r_{1}, 1)} $$
      (9)

      and go to Step 3.

  4. Step 4.

Take the new iterate \(u^{k+1}\) by setting

    $$ u^{k+1}(\alpha_{k})= P_{K} \bigl[u^{k} - \alpha_{k}d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr], $$
    (10)

    where

$$ d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)=\beta_{1}d_{1} \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)+\beta _{2} \rho_{k}T\bigl(\bar{u}^{k}\bigr) $$
    (11)

    and

    $$ d_{1}\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr) : =\bigl(\tilde{u}^{k} - \bar{u}^{k} \bigr) - \rho_{k}\bigl(T\bigl( \tilde{u}^{k} \bigr)-T\bigl(\bar{u}^{k}\bigr)\bigr). $$
    (12)
  5. Step 5.

Adaptively choose a suitable \(\rho_{k+1}\) as the starting prediction step size for the next iteration.

    1. (1)

      Prepare a proper \(\rho_{k+1}\),

$$ \rho_{k+1} :=\left \{ \begin{array}{l@{\quad}l} \rho_{k}*\zeta/r_{2} & \mbox{if } r_{2} \le\eta_{1} \mbox{ or } r_{2} \ge\eta_{2}, \\ \rho_{k} & \mbox{otherwise}. \end{array} \right . $$
    2. (2)

      Return to Step 2, with k replaced by \(k+1\).
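To make the three-step structure above concrete, the following Python sketch implements Algorithm 1 with the explicit step size \(\alpha_{k}=\alpha^{*}_{k_{1}}\) of (26) and the relaxation factor γ discussed before Lemma 4. It is an illustrative reading of the algorithm, not the authors' MATLAB code: the function names, the default parameter values (taken from the numerical section), the iteration cap, and the small guard against \(r_{2}=0\) are all assumptions.

```python
import numpy as np

def algorithm1(T, project, u0, eps=1e-7, rho0=1.0, nu=1.95, mu=0.95,
               sigma=0.8, zeta=0.7, eta1=0.3, eta2=0.95,
               beta1=0.5, beta2=1.5, gamma=1.9, max_iter=10000):
    """Illustrative sketch of Algorithm 1 with alpha_k = alpha*_{k1} from (26).
    Defaults follow the numerical section; they are not prescribed by the method."""
    u, rho = np.asarray(u0, dtype=float).copy(), rho0
    for _ in range(max_iter):
        Tu = T(u)
        u_tld = project(u - rho * Tu)                 # first predictor (6a)
        if np.linalg.norm(u - u_tld) <= eps:          # Step 2: ||e(u^k, rho_k)|| small
            return u
        while True:                                   # Step 3 (terminates for small rho by continuity of T)
            T_tld = T(u_tld)
            u_bar = project(u_tld - rho * T_tld)      # second predictor (6b)
            if np.linalg.norm(u_tld - u_bar) <= eps:  # ||e(u_tld, rho_k)|| small
                return u_tld
            T_bar = T(u_bar)
            diff = u_tld - u_bar
            r1 = abs(rho * (np.dot(diff, Tu - T_tld)
                            - np.dot(u - u_bar, T_tld - T_bar))) / np.dot(diff, diff)   # (7)
            r2 = rho * np.linalg.norm(T_tld - T_bar) / np.linalg.norm(diff)             # (8)
            if r1 <= mu ** 2 and r2 <= nu:
                break
            rho *= sigma / max(r1, 1.0)               # Armijo-like reduction (9)
            u_tld = project(u - rho * Tu)             # back to Step 3 with the reduced rho
        d1 = diff - rho * (T_tld - T_bar)                                   # (12)
        d = beta1 * d1 + beta2 * rho * T_bar                                # (11)
        alpha = np.dot(u - u_bar, d1) / ((beta1 + beta2) * np.dot(d1, d1))  # (26)
        u = project(u - gamma * alpha * d)            # relaxed correction step, cf. (10)
        if r2 > 0.0 and (r2 <= eta1 or r2 >= eta2):   # Step 5 (guard against r2 = 0 added here)
            rho *= zeta / r2
    return u
```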

Remark 1

(a) The proposed method can be viewed as a refinement and improvement of the method of He et al. [27]: an additional projection step is performed at each iteration, and another optimal step length is employed to achieve substantial progress at each iteration.

(b) If \(\beta_{1}=0\) and \(\beta_{2}=1\), we obtain the method proposed in [16].

(c) If \(\beta_{1}=1\) and \(\beta_{2}=0\), we obtain a descent method, the new iterate is obtained along a descent direction \(d_{1}(\tilde {u}^{k},\bar{u}^{k})\).

Remark 2

In (9), if \(\sigma>\max(r_{1}, 1)\), then the updated \(\rho_{k}\) would be too large for the next iteration and would increase the number of Armijo-like line searches. Therefore, the new \(\rho_{k}\) is chosen to be only modestly smaller than the current one, so as to avoid expensive computations in the next iteration.

We now consider the criteria for \(\alpha_{k}\), which ensures that \(u^{k+1}(\alpha_{k})\) is closer to the solution set than \(u^{k}\). For this purpose, we define

$$ \varTheta (\alpha_{k}) := \bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}-\bigl\Vert u^{k+1}( \alpha_{k})-u^{*}\bigr\Vert ^{2}. $$
(13)

Theorem 1

Let \(u^{*}\in S^{*}\). Then we have

$$ \varTheta (\alpha_{k})\geq \varPhi (\alpha_{k})\geq \varPsi (\alpha_{k}), $$
(14)

where

$$ \varPhi (\alpha_{k})=\bigl\Vert u^{k}- u^{k+1}(\alpha_{k})\bigr\Vert ^{2}+2 \alpha_{k}\bigl\langle u^{k+1}(\alpha_{k})- \bar{u}^{k}, d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr) \bigr\rangle , $$
(15)

and

$$ \varPsi (\alpha_{k})=2\alpha_{k}(\beta_{1}+ \beta_{2})\bigl\langle u^{k}-\bar{u}^{k}, d_{1}\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle -\alpha_{k}^{2}(\beta_{1}+ \beta_{2})^{2}\bigl\Vert d_{1}\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2}. $$

Proof

Since \(u^{*}\in K\), setting \(v=u^{*}\) and \(z=u^{k}-\alpha_{k} d(\tilde {u}^{k},\bar{u}^{k})\) in Lemma 1(b), we have

$$ \bigl\Vert u^{k+1}(\alpha_{k})-u^{*}\bigr\Vert ^{2}\leq\bigl\Vert u^{k}-u^{*}-\alpha_{k} d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2}- \bigl\Vert u^{k}-u^{k+1}(\alpha_{k})- \alpha_{k} d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr) \bigr\Vert ^{2}. $$

Using the definition of \(\varTheta (\alpha_{k})\), we get

$$\begin{aligned} \varTheta (\alpha_{k}) \geq& \bigl\Vert u^{k}-u^{k+1}(\alpha_{k})\bigr\Vert ^{2} +2\alpha_{k}\bigl\langle u^{k}-u^{*}, d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ &{} -2\alpha_{k}\bigl\langle u^{k}-u^{k+1}( \alpha_{k}),d\bigl(\tilde{u}^{k},\bar {u}^{k}\bigr) \bigr\rangle \\ =& \bigl\Vert u^{k}-u^{k+1}(\alpha_{k})\bigr\Vert ^{2}+2\alpha_{k}\bigl\langle u^{k+1}(\alpha _{k})-u^{*}, d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr) \bigr\rangle . \end{aligned}$$
(16)

For any solution \(u^{*}\in S^{*}\) of problem (1), we have

$$ \bigl\langle \rho_{k} T\bigl(u^{*}\bigr),\bar{u}^{k}-u^{*} \bigr\rangle \geq0. $$

By the pseudomonotonicity of T, we obtain

$$ \bigl\langle \rho_{k} T\bigl(\bar{u}^{k}\bigr), \bar{u}^{k}-u^{*}\bigr\rangle \geq0. $$
(17)

Substituting \(z=\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})\) and \(v=u^{*}\) into Lemma 1(a), we get

$$ \bigl\langle \tilde{u}^{k}-\rho_{k} T\bigl( \tilde{u}^{k}\bigr)-\bar{u}^{k},\bar{u}^{k}-u^{*}\bigr\rangle \geq0. $$
(18)

Adding (17) and (18), and using the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\), we have

$$ \bigl\langle d_{1}\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr),\bar{u}^{k}-u^{*}\bigr\rangle \geq0. $$
(19)

Multiplying (19) by \(2\alpha_{k}\beta_{1}\) and (17) by \(2\alpha_{k}\beta_{2}\), and adding the resulting inequalities to (16), we obtain

$$\begin{aligned} \varTheta (\alpha_{k}) \geq& \bigl\Vert u^{k}-u^{k+1}(\alpha_{k})\bigr\Vert ^{2}+2\alpha_{k}\bigl\langle u^{k+1}(\alpha _{k})-\bar{u}^{k}, d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\rangle \\ =&\varPhi (\alpha_{k}) \\ =& \bigl\Vert u^{k}- u^{k+1}(\alpha_{k})\bigr\Vert ^{2}+2\alpha_{k}\beta_{1}\bigl\langle u^{k+1}(\alpha_{k})-\bar{u}^{k}, d_{1} \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ &{} +2\alpha_{k}\beta_{2}\bigl\langle u^{k+1}( \alpha_{k})-\bar{u}^{k}, \rho_{k} T\bigl( \bar{u}^{k}\bigr)\bigr\rangle . \end{aligned}$$
(20)

Note that \(\bar{u}^{k}= P_{K} [\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})]\). We can apply Lemma 1(a) with \(z=\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})\) and \(v=u^{k+1}(\alpha_{k})\) to obtain

$$ \bigl\langle u^{k+1}(\alpha_{k})- \bar{u}^{k},\tilde{u}^{k}-\rho_{k} T\bigl(\tilde {u}^{k}\bigr)-\bar{u}^{k}\bigr\rangle \leq0. $$
(21)

Multiplying (21) by \(2\alpha_{k}\beta_{2}\), adding the resultant to (20), and using the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\), we have

$$\begin{aligned} \varPhi (\alpha_{k}) \geq& \bigl\Vert u^{k}- u^{k+1}(\alpha_{k})\bigr\Vert ^{2} +2 \alpha_{k}\beta_{1}\bigl\langle u^{k+1}( \alpha_{k})-\bar{u}^{k}, d_{1}\bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ &{}+2\alpha_{k}\beta_{2}\bigl\langle u^{k+1}( \alpha_{k})-\bar{u}^{k}, d_{1}\bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ =& \bigl\Vert u^{k}- u^{k+1}(\alpha_{k})\bigr\Vert ^{2}+2\alpha_{k}(\beta_{1}+ \beta_{2}) \bigl\langle u^{k+1}(\alpha_{k})- \bar{u}^{k}, d_{1}\bigl(\tilde{u}^{k},\bar {u}^{k}\bigr)\bigr\rangle \\ =& \bigl\Vert u^{k}- u^{k+1}(\alpha_{k})- \alpha_{k}(\beta_{1}+\beta_{2})d_{1} \bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2} \\ &{}+2\alpha_{k}(\beta_{1}+\beta_{2})\bigl\langle u^{k}-\bar{u}^{k}, d_{1}\bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ &{}-\alpha_{k}^{2}(\beta_{1}+ \beta_{2})^{2}\bigl\Vert d_{1}\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2} \\ \geq&2\alpha_{k}(\beta_{1}+\beta_{2})\bigl\langle u^{k}-\bar{u}^{k}, d_{1}\bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\rangle -\alpha_{k}^{2}( \beta_{1}+\beta_{2})^{2}\bigl\Vert d_{1}\bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2} \\ =&\varPsi (\alpha_{k}), \end{aligned}$$

and the theorem is proved. □

Proposition 1

Assume that T is continuously differentiable. Then the following statements hold:

  1. (i)

    \(\varPhi ^{\prime}(\alpha)=2 \langle u^{k+1}(\alpha)- \bar {u}^{k},d(\tilde{u}^{k},\bar{u}^{k})\rangle\);

  2. (ii)

    \(\varPhi ^{\prime}(\alpha)\) is a non-increasing function with respect to \(\alpha\geq0\), and hence, \(\varPhi (\alpha)\) is concave.

Proof

For given \(u^{k}, \tilde{u}^{k}, \bar{u}^{k} \in K\), let

$$ h(\alpha,y)=\bigl\Vert y-\bigl[u^{k}-\alpha d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr]\bigr\Vert ^{2}-\alpha^{2}\bigl\Vert d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\Vert ^{2}-2\alpha\bigl\langle \bar{u}^{k}- {u}^{k},d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\rangle . $$
(22)

It is easy to see that the solution of the following problem

$$ \min_{y}\bigl\{ h(\alpha,y) : y\in K\bigr\} $$

is \(y^{*} = P_{K}[u^{k}-\alpha d(\tilde{u}^{k},\bar{u}^{k})]\). By substituting \(y^{*}\) into (22) and simplifying it, we obtain

$$ \varPhi (\alpha)=h(\alpha,y)|_{y = P_{ K}[u^{k}-\alpha d(\tilde{u}^{k},\bar{u}^{k})]} = \min_{y}\bigl\{ h( \alpha,y) : y \in K \bigr\} . $$

\(\varPhi (\alpha)\) is differentiable and its derivative is given by

$$\begin{aligned} \varPhi ^{\prime}(\alpha) =&{\biggl.\frac{\partial h(\alpha,y)}{\partial \alpha}\biggr|_{ y = P_{ K}[u^{k}-\alpha d(\tilde{u}^{k},\bar{u}^{k})]}} \\ =&2\bigl\langle u^{k+1}(\alpha)-u^{k} +\alpha d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr),d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\rangle -2\alpha\bigl\Vert d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2} \\ &{}-2 \bigl\langle \bar{u}^{k}-u^{k},d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ =&2 \bigl\langle {u}^{k+1}(\alpha)- \bar{u}^{k},d\bigl( \tilde{u}^{k},\bar {u}^{k}\bigr)\bigr\rangle , \end{aligned}$$

and hence (i) is proved. We now prove the second assertion. Let \(\bar{\alpha} > \alpha\geq0\). We show that

$$ \varPhi ^{\prime}(\bar{\alpha})\leq \varPhi ^{\prime}(\alpha), $$

that is,

$$ \bigl\langle u^{k+1}(\bar{\alpha})-u^{k+1}( \alpha),d\bigl(\tilde{u}^{k},\bar {u}^{k}\bigr)\bigr\rangle \leq0. $$
(23)

By setting \(z := u^{k}-\bar{\alpha} d(\tilde{u}^{k},\bar{u}^{k})\), \(v := u^{k+1}(\alpha)\) and \(z := u^{k}-\alpha d(\tilde{u}^{k},\bar{u}^{k})\), \(v :=u^{k+1}(\bar{\alpha})\) in Lemma 1(a), respectively, we get

$$ \bigl\langle u^{k}-\bar{\alpha}d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)-u^{k+1}(\bar { \alpha}), u^{k+1}(\alpha)-u^{k+1}(\bar{\alpha})\bigr\rangle \leq0 $$
(24)

and

$$ \bigl\langle u^{k}-{\alpha}d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)-u^{k+1}({\alpha}), u^{k+1}(\bar{ \alpha})-u^{k+1}(\alpha)\bigr\rangle \leq0. $$
(25)

By adding (24) and (25), we obtain

$$ \bigl\langle u^{k+1}(\bar{\alpha})-u^{k+1}(\alpha),u^{k+1}( \bar{\alpha })-u^{k+1}(\alpha) +(\bar{\alpha}-\alpha)d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \leq0, $$

that is,

$$ \bigl\Vert u^{k+1}(\bar{\alpha})-u^{k+1}(\alpha)\bigr\Vert ^{2}+(\bar{\alpha}-\alpha) \bigl\langle u^{k+1}(\bar{ \alpha})-u^{k+1}(\alpha), d\bigl(\tilde{u}^{k},\bar {u}^{k}\bigr)\bigr\rangle \leq0. $$

It follows that

$$ \bigl\langle u^{k+1}(\bar{\alpha})-u^{k+1}(\alpha), d\bigl( \tilde{u}^{k},\bar {u}^{k}\bigr) \bigr\rangle \leq \frac{-1}{(\bar{\alpha}-\alpha)}\bigl\Vert u^{k+1}(\bar{\alpha}) - u^{k+1}( \alpha)\bigr\Vert ^{2} \leq0. $$

We obtain inequality (23) and complete the proof. □

Now for the same kth approximate solution \(u^{k}\), let

$$ \alpha^{*}_{k_{1}}=\arg \max_{\alpha}\bigl\{ \varPsi (\alpha ) \mid \alpha> 0\bigr\} $$

and

$$ \alpha^{*}_{k_{2}}=\arg \max_{\alpha}\bigl\{ \varPhi (\alpha ) \mid \alpha> 0\bigr\} . $$

Since \(\varPsi (\alpha)\) is a quadratic function of α, it attains its maximum at

$$ \alpha^{*}_{k_{1}}=\frac{\langle u^{k}-\bar{u}^{k}, d_{1}(\tilde{u}^{k},\bar {u}^{k})\rangle}{ (\beta_{1}+\beta_{2})\|d_{1}(\tilde{u}^{k},\bar{u}^{k})\|^{2}} $$
(26)

with the corresponding maximum value

$$ \varPsi \bigl(\alpha^{*}_{k_{1}}\bigr)=\alpha^{*}_{k_{1}}( \beta_{1}+\beta_{2})\bigl\langle u^{k}- \bar{u}^{k}, d_{1}\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\rangle . $$
(27)

In order to obtain \(\alpha^{*}_{k_{2}}\) more easily, we compute it approximately by solving the following simple optimization problem:

$$ \alpha^{*}_{k_{2}} = \arg \max_{\alpha} \bigl\{ \varPhi (\alpha )\mid 0< \alpha\leq m_{1}\alpha^{*}_{k_{1}} \bigr\} , $$
(28)

where \(m_{1} \geq1\).
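Since \(\varPhi\) is concave in α (Proposition 1), problem (28) can be solved approximately by a one-dimensional golden-section search on \((0, m_{1}\alpha^{*}_{k_{1}}]\). The Python sketch below shows one possible way to do this; the helper names, the choice \(m_{1}=1.5\) and the search tolerance are assumptions, not part of the paper.

```python
import numpy as np

def alpha_k1(u, u_bar, d1, beta1, beta2):
    """Exact maximizer of Psi, formula (26)."""
    return np.dot(u - u_bar, d1) / ((beta1 + beta2) * np.dot(d1, d1))

def phi(alpha, u, u_bar, d, project):
    """Phi(alpha) from (15), with u^{k+1}(alpha) = P_K[u - alpha * d]."""
    u_new = project(u - alpha * d)
    return np.dot(u - u_new, u - u_new) + 2.0 * alpha * np.dot(u_new - u_bar, d)

def alpha_k2(u, u_bar, d, d1, project, beta1, beta2, m1=1.5, tol=1e-10):
    """Approximate maximizer of Phi over (0, m1 * alpha_k1], problem (28),
    found by golden-section search (valid because Phi is concave, Proposition 1)."""
    lo, hi = 0.0, m1 * alpha_k1(u, u_bar, d1, beta1, beta2)
    ratio = (np.sqrt(5.0) - 1.0) / 2.0
    while hi - lo > tol:
        a = hi - ratio * (hi - lo)
        b = lo + ratio * (hi - lo)
        if phi(a, u, u_bar, d, project) < phi(b, u, u_bar, d, project):
            lo = a
        else:
            hi = b
    return 0.5 * (lo + hi)
```

Replacing the explicit step-size line in the Algorithm 1 sketch given earlier by a call to `alpha_k2` yields the variant with \(\alpha_{k}=\alpha^{*}_{k_{2}}\).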

Based on Theorem 1 and Proposition 1, the following result can be proved easily.

Proposition 2

Let \(\alpha^{*}_{k_{1}}\) and \(\alpha^{*}_{k_{2}}\) be defined by (26) and (28), respectively, and let T be pseudomonotone and continuously differentiable. Then

  1. (i)

    \(\|u^{k}-u^{*}\|^{2}-\| u^{k+1}(\alpha^{*}_{k_{2}})-u^{*}\|^{2}\geq \varPhi (\alpha^{*}_{k_{2}})\);

  2. (ii)

    \(\| u^{k} - u^{*} \|^{2}-\| u^{k+1}(\alpha^{*}_{k_{1}}) - u^{*} \| ^{2}\geq \varPsi (\alpha^{*}_{k_{1}})\);

  3. (iii)

    \(\varPhi (\alpha^{*}_{k_{2}})\geq \varPsi (\alpha^{*}_{k_{1}})\);

  4. (iv)

    if \(\varPhi ^{\prime}(\alpha^{*}_{k_{2}})=0\), then \(\|u^{k}-u^{*}\|^{2}-\| u^{k+1}(\alpha^{*}_{k_{2}}) -u^{*}\|^{2}\geq\|u^{k}- u^{k+1}(\alpha^{*}_{k_{2}}) \|^{2}\).

Remark 3

Let

$$ u_{\mathrm{I}}^{k+1}\bigl(\alpha^{*}_{k_{1}}\bigr) =P_{K}\bigl[u^{k}-\alpha^{*}_{k_{1}} d \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr] $$

and

$$ u_{\mathrm{II}}^{k+1}\bigl(\alpha^{*}_{k_{2}}\bigr) =P_{K}\bigl[u^{k}-\alpha^{*}_{k_{2}} d \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr] $$

represent the new iterates generated by the proposed method with \(\alpha_{k}=\alpha^{*}_{k_{1}}\) and \(\alpha_{k}=\alpha^{*}_{k_{2}}\), respectively. Let

$$ \varTheta _{\mathrm{I}}\bigl(\alpha^{*}_{k_{1}}\bigr)=\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}-\bigl\Vert u_{\mathrm{I}}^{k+1}\bigl(\alpha^{*}_{k_{1}}\bigr)-u^{*}\bigr\Vert ^{2} $$

and

$$ \varTheta _{\mathrm{II}}\bigl(\alpha^{*}_{k_{2}}\bigr)=\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}-\bigl\Vert u_{\mathrm{II}}^{k+1}\bigl(\alpha^{*}_{k_{2}}\bigr)-u^{*}\bigr\Vert ^{2} $$

measure the progress made by the respective new iterates. By using Proposition 2, it is easy to show that

$$\begin{aligned}& \varTheta _{\mathrm{I}}\bigl(\alpha^{*}_{k_{1}}\bigr)\geq \varPsi \bigl( \alpha^{*}_{k_{1}}\bigr), \\& \varTheta _{\mathrm{II}}\bigl(\alpha^{*}_{k_{2}}\bigr)\geq \varPhi \bigl( \alpha^{*}_{k_{2}}\bigr) \end{aligned}$$

and

$$ \varPhi \bigl(\alpha^{*}_{k_{2}}\bigr)\geq \varPsi \bigl(\alpha^{*}_{k_{1}} \bigr). $$

The above inequalities show that, at each iteration, the proposed method with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is expected to make more progress than the proposed method with \(\alpha_{k}=\alpha^{*}_{k_{1}}\); this explains theoretically why the former outperforms the latter.

In the next theorem, we show that \(\alpha^{*}_{k_{1}}\) and \(\varPsi (\alpha^{*}_{k_{1}})\) are bounded below away from zero, which is one of the keys to proving the global convergence results.

Theorem 2

Let \(u^{*}\) be a solution of problem (1). For given \(u^{k} \in K\), let \(\tilde{u}^{k}\), \(\bar{u}^{k}\) be the predictors produced by (6a) and (6b). Then

$$ \alpha^{*}_{k_{1}}\geq\frac{2-\mu^{2}}{(\beta_{1}+\beta_{2})(1+\nu)^{2}} $$
(29)

and

$$ \varPsi \bigl(\alpha^{*}_{k_{1}}\bigr)\geq\frac{(2-\mu^{2})^{2}}{(1+\nu)^{2}}\bigl\Vert \tilde {u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}. $$
(30)

Proof

Note that \(\tilde{u}^{k}= P_{K} [u^{k}-\rho_{k} T(u^{k})]\), \(\bar{u}^{k}= P_{K} [\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})]\). We can apply Lemma 1(c) with \(v=u^{k}-\rho_{k} T(u^{k})\), \(w=\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})\) to obtain

$$ \bigl\langle u^{k}-\rho_{k} T\bigl(u^{k}\bigr)- \bigl(\tilde{u}^{k}-\rho_{k} T\bigl(\tilde{u}^{k} \bigr)\bigr), \tilde{u}^{k}-\bar{u}^{k}\bigr\rangle \geq\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}. $$

By some manipulations, we obtain

$$ \bigl\langle u^{k}-\tilde{u}^{k},\tilde{u}^{k}- \bar{u}^{k} \bigr\rangle \geq \bigl\Vert \tilde{u}^{k}- \bar{u}^{k}\bigr\Vert ^{2}+\rho_{k} \bigl\langle \tilde{u}^{k}-\bar {u}^{k},T\bigl(u^{k}\bigr)-T \bigl(\tilde{u}^{k}\bigr)\bigr\rangle . $$

Then we have

$$\begin{aligned} \bigl\langle u^{k}-\tilde{u}^{k},d_{1} \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle =& \bigl\langle u^{k}-\tilde{u}^{k}, \tilde{u}^{k}- \bar{u}^{k}\bigr\rangle -\rho_{k}\bigl\langle u^{k}- \tilde {u}^{k},T\bigl(\tilde{u}^{k}\bigr)-T\bigl( \bar{u}^{k}\bigr)\bigr\rangle \\ \geq&\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}+\rho_{k}\bigl\langle \tilde{u}^{k}-\bar {u}^{k},T\bigl(u^{k}\bigr)-T\bigl(\tilde{u}^{k} \bigr)\bigr\rangle \\ &{}-\rho_{k}\bigl\langle u^{k}-\tilde{u}^{k},T \bigl(\tilde{u}^{k}\bigr)-T\bigl(\bar{u}^{k}\bigr)\bigr\rangle . \end{aligned}$$
(31)

By using (31), (7) and the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\), we get

$$\begin{aligned} \bigl\langle u^{k}-\bar{u}^{k},d_{1} \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle =& \bigl\langle u^{k}-\tilde{u}^{k},d_{1}\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle +\bigl\langle \tilde{u}^{k}-\bar{u}^{k},d_{1}\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ \geq&\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}+\rho_{k}\bigl\langle \tilde{u}^{k} -\bar {u}^{k},T\bigl(u^{k}\bigr)-T\bigl(\tilde{u}^{k} \bigr)\bigr\rangle \\ &{}-\rho_{k}\bigl\langle u^{k}-\tilde{u}^{k},T \bigl(\tilde{u}^{k}\bigr)-T\bigl(\bar{u}^{k}\bigr)\bigr\rangle +\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2} \\ &{}-\rho_{k}\bigl\langle \tilde{u}^{k} - \bar{u}^{k},T\bigl(\tilde{u}^{k}\bigr)-T\bigl(\bar {u}^{k}\bigr)\bigr\rangle \\ \geq&\bigl({2-\mu^{2}}\bigr)\bigl\Vert \tilde{u}^{k}- \bar{u}^{k}\bigr\Vert ^{2}. \end{aligned}$$
(32)

Recalling the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\) (see (12)) and applying criterion (8), it is easy to see that

$$ \bigl\Vert d_{1}\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\Vert ^{2}\leq\bigl(\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert +\bigl\Vert \rho_{k}\bigl(T\bigl(\tilde{u}^{k}\bigr)-T\bigl( \bar{u}^{k}\bigr)\bigr)\bigr\Vert \bigr)^{2}\leq(1+ \nu)^{2}\bigl\Vert \tilde {u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}. $$
(33)

Moreover, by using (32) together with (33), we get

$$ \alpha^{*}_{k_{1}}=\frac{ \langle u^{k}-\bar{u}^{k},d_{1}(\tilde{u}^{k},\bar {u}^{k})\rangle}{(\beta_{1}+\beta_{2}) \|d_{1}(\tilde{u}^{k},\bar{u}^{k})\|^{2}} \geq \frac{2-\mu^{2}}{(\beta_{1}+\beta_{2})(1+\nu)^{2}}>0,\quad \mu\in (0,\sqrt{2}). $$
(34)

By substituting (34) in (27), we get the assertion of this theorem and the proof is completed. □

From the computational point of view, a relaxation factor \(\gamma\in(0,2)\) is preferable in the correction step. We are now in a position to prove the contractive property of the iterative sequence.

Lemma 4

Let \(u^{*}\) be a solution of problem (1) and let \(\{u^{k+1}(\gamma\alpha_{k})\}\) be the sequence generated by Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{1}}\) or \(\alpha_{k}=\alpha^{*}_{k_{2}}\). Then the sequence \(\{u^{k}\}\) is bounded and

$$ \bigl\Vert u^{k+1}(\gamma\alpha_{k}) - u^{*} \bigr\Vert ^{2}\leq\bigl\Vert u^{k} -u^{*}\bigr\Vert ^{2}-c\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}, $$
(35)

where

$$ c := \frac{(2-\mu^{2})^{2}}{(1+\nu)^{2}}>0. $$

Proof

If \(\alpha_{k} = \alpha^{*}_{k_{2}}\), then it follows from (14), (29) and (30) that

$$\begin{aligned} \bigl\Vert u^{k+1}\bigl(\gamma\alpha^{*}_{k_{2}} \bigr)-u^{*}\bigr\Vert ^{2} \leq&\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}- \varPhi \bigl(\alpha^{*}_{k_{2}}\bigr) \\ \leq&\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}- \varPsi \bigl( \alpha^{*}_{k_{1}}\bigr) \\ \leq&\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}- \frac{(2-\mu^{2})^{2}}{(1+\nu)^{2}}\bigl\Vert \tilde {u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}. \end{aligned}$$

If \(\alpha_{k}=\alpha^{*}_{k_{1}}\), then we have

$$\begin{aligned} \bigl\Vert u^{k+1}\bigl(\gamma\alpha^{*}_{k_{1}} \bigr)-u^{*}\bigr\Vert ^{2} \leq&\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}- \varPsi \bigl(\alpha^{*}_{k_{1}}\bigr) \\ \leq&\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}- \frac{(2-\mu^{2})^{2}}{(1+\nu)^{2}}\bigl\Vert \tilde {u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}. \end{aligned}$$

Since \(\mu\in(0,\sqrt{2})\), we have

$$ \bigl\Vert u^{k+1}(\gamma\alpha_{k})-u^{*}\bigr\Vert \leq \bigl\Vert u^{k}-u^{*}\bigr\Vert \leq\cdots\leq\bigl\Vert u^{0}-u^{*}\bigr\Vert . $$

From the above inequality, it is easy to verify that the sequence \(\{u^{k}\}\) is bounded. □

We now present the convergence result of Algorithm 1.

Theorem 3

If \(\rho:=\inf_{k\geq 0} \rho_{k} > 0\), then any cluster point of the sequence \(\{\tilde{u}^{k}\}\) generated by Algorithm 1 is a solution of problem (1).

Proof

It follows from (35) that

$$ \sum_{k=0}^{\infty}c\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}< +\infty, $$

which means that

$$ \lim_{k\rightarrow\infty}\bigl\Vert \tilde{u}^{k}- \bar{u}^{k}\bigr\Vert =0. $$

Since the sequence \(\{u^{k}\}\) is bounded, so is \(\{\tilde{u}^{k}\}\), and hence it has at least one cluster point. Let \(u^{\infty}\) be a cluster point of \(\{\tilde{u}^{k}\}\) and let the subsequence \(\{\tilde{u}^{k_{j}}\}\) converge to \(u^{\infty}\). By the continuity of \(e(u,\rho)\) and inequality (4), it follows that

$$ \bigl\Vert e\bigl(u^{\infty},\rho\bigr)\bigr\Vert =\lim _{k_{j}\rightarrow \infty}\bigl\Vert e\bigl(\tilde{u}^{k_{j}}, \rho\bigr) \bigr\Vert \leq\lim_{k_{j}\rightarrow \infty}\bigl\Vert e\bigl( \tilde{u}^{k_{j}}, \rho_{k_{j}}\bigr)\bigr\Vert =\lim _{k_{j}\rightarrow \infty}\bigl\Vert \tilde{u}^{k_{j}}- \bar{u}^{k_{j}}\bigr\Vert =0. $$

This means that \(u^{\infty}\) is a solution of problem (1).

We now prove that the sequence \(\{u^{k}\}\) has exactly one cluster point. Assume that \(\tilde{u}\) is another cluster point and satisfies

$$ \delta:=\bigl\Vert \tilde{u} - u^{\infty} \bigr\Vert >0. $$

Since \(u^{\infty} \) is a cluster point of the sequence \(\{u^{k}\}\), there is \(k_{0}>0\) such that

$$ \bigl\Vert u^{k_{0}} - u^{\infty} \bigr\Vert \le \frac{\delta}{2}. $$

On the other hand, since \(u^{\infty}\in S^{*}\), it follows from (35) that

$$ \bigl\Vert u^{k} -u^{\infty} \bigr\Vert \le\bigl\Vert u^{k_{0}} - u^{\infty}\bigr\Vert \quad \mbox{for all } k\ge k_{0}, $$

and therefore

$$ \bigl\Vert u^{k} - \tilde{u} \bigr\Vert \ge\bigl\Vert \tilde{u} -u^{\infty}\bigr\Vert - \bigl\Vert u^{k} - u^{\infty} \bigr\Vert \ge\frac{\delta}{2} \quad \forall k\ge k_{0}. $$

This contradicts the assumption that \(\tilde{u}\) is a cluster point of \(\{u^{k}\}\). Thus, the sequence \(\{u^{k}\}\) converges to \(u^{\infty} \in S^{*}\). □

4 Numerical experiments

In order to verify the theoretical assertions, we consider the nonlinear complementarity problem

$$ u \ge0,\qquad T(u)\ge0, \qquad u^{\top} T(u) = 0, $$
(36)

where \(T(u)=D(u)+Mu+q\), with \(D(u)\) and \(Mu+q\) being the nonlinear and linear parts of \(T(u)\), respectively. Problem (36) is equivalent to problem (1) with \(K = \mathbb{R}^{n}_{+}\).

Example 1

We form the linear part in the test problems as

$$ M=\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 1 & 2 & 2 & \cdots& 2 \\ 0 & 1 & 2 & \cdots& 2 \\ 0 & 0 & 1 & \cdots& 2 \\ \vdots& \vdots& \vdots& \ddots& \vdots \\ 0 & 0 & 0 & \cdots& 1 \end{array} \right )_{n \times n} \quad \mbox{and}\quad q=\left ( \begin{array}{@{}c@{}} -1 \\ -1 \\ -1 \\ \vdots\\ - 1 \end{array} \right )_{n \times1}. $$

In \(D(u)\), the nonlinear part of \(T(u)\), the components are chosen to be \(D_{j} (u)= d_{j} *\arctan(u_{j})\), where \(d_{j}\) is a random variable in \((0,1)\).
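The test data of Example 1 can be assembled along the following lines. The paper's experiments were run in MATLAB; the NumPy sketch below is only an illustrative reconstruction, and the function name and random seed are assumptions.

```python
import numpy as np

def make_example1(n, seed=0):
    """Example 1 data: upper-triangular M with unit diagonal and 2's above,
    q = -ones, and nonlinear part D_j(u) = d_j * arctan(u_j), d_j uniform in (0, 1)."""
    rng = np.random.default_rng(seed)
    M = np.triu(2.0 * np.ones((n, n)), k=1) + np.eye(n)
    q = -np.ones(n)
    d = rng.uniform(0.0, 1.0, size=n)
    return lambda u: d * np.arctan(u) + M @ u + q

# With K = R^n_+ the projection is simply lambda z: np.maximum(z, 0.0), so the
# resulting operator T can be passed directly to the Algorithm 1 sketch above.
```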

In all tests we take \(\rho_{0}=1\), \(\zeta=0.7\), \(\eta_{1}=0.3\), \(\eta_{2}=0.95\), \(\mu=0.95\), \(\nu=1.95\), \(\gamma=1.9\), \(\beta_{1}=0.5\), and \(\beta_{2}=1.5\). We employ \(\|e(u^{k},\rho_{k})\|\leq10^{-7}\) as the stopping criterion and choose \(u^{0}=0\) as the initial iterate. All codes were written in Matlab. We compare Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) against Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{1}}\) and the method in [14]. The test results for problem (36) with different dimensions are reported in Tables 1 and 2, where k is the number of iterations and l denotes the number of evaluations of the mapping T.

Table 1 Numerical results for problem (36) with \(\sigma=0.8\) (see (9))
Table 2 Numerical results for problem (36) with \(\sigma=0.4\) (see (9))

Example 2

We form the linear part of the test problems similarly to Harker and Pang [28]. The matrix \(M = A^{\top}A +B\), where A is an \(n\times n\) matrix whose entries are randomly generated in the interval \((-5,+5)\), and the skew-symmetric matrix B is generated in the same way. The vector q is generated from a uniform distribution on the interval \((-500,500)\). In \(D(u)\), the nonlinear part of \(T(u)\), the components are chosen to be \(D_{j} (u)= d_{j} *\arctan(u_{j})\), where \(d_{j}\) is a random variable in \((0,1)\). A similar type of problem was tested in [29] and [30]. In all tests we take \(\rho_{0}=1\), \(\zeta=0.7\), \(\eta_{1}=0.3\), \(\eta_{2}=0.95\), \(\mu=0.95\), \(\nu=1.95\), \(\gamma=1.9\), \(\beta_{1}=0.5\), and \(\beta_{2}=1.5\). We employ \(\|e(u^{k},\rho_{k})\|\leq10^{-7}\) as the stopping criterion and choose \(u^{0}=0\) as the initial iterate. The test results for problem (36) with different dimensions are reported in Tables 3 and 4.
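The Harker-Pang type data of Example 2 can be generated similarly. Again this is an illustrative NumPy reconstruction rather than the authors' MATLAB code; in particular, realizing the skew-symmetric matrix B as \(C-C^{\top}\) from a random C is one reading of "generated in the same way".

```python
import numpy as np

def make_example2(n, seed=0):
    """Example 2 data: M = A^T A + B with random A and skew-symmetric B,
    q uniform in (-500, 500), and the same nonlinear part as Example 1."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-5.0, 5.0, size=(n, n))
    C = rng.uniform(-5.0, 5.0, size=(n, n))
    B = C - C.T                                  # skew-symmetric: B^T = -B
    M = A.T @ A + B
    q = rng.uniform(-500.0, 500.0, size=n)
    d = rng.uniform(0.0, 1.0, size=n)
    return lambda u: d * np.arctan(u) + M @ u + q
```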

Table 3 Numerical results for problem (36) with \(\sigma=0.8\) (see (9))
Table 4 Numerical results for problem (36) with \(\sigma=0.4\) (see (9))

Tables 1-4 show that Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is very efficient even for large-scale nonlinear complementarity problems. Moreover, they demonstrate computationally that Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is more effective than the method presented in [14] and than Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{1}}\), in the sense that it requires fewer iterations and fewer evaluations of T, which clearly illustrates its efficiency.

References

  1. Fichera, G: Problemi elettrostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Mem. Cl. Sci. Fis. Mat. Nat., Sez. I 7, 91-140 (1964)

  2. Stampacchia, G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 258, 4413-4416 (1964)

  3. Stampacchia, G: Variational inequalities. In: Ghizzetti, A (ed.) Theory and Applications of Monotone Operators. Proc. NATO Adv. Study Institute, Venice, Oderisi, Gubbio (1968)

  4. Ansari, QH, Lalitha, CS, Mehta, M: Generalized Convexity, Nonsmooth Variational Inequalities and Nonsmooth Optimization. CRC Press, Boca Raton (2014)

  5. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2003)

  6. Glowinski, R, Lions, JL, Trémolieres, R: Numerical Analysis of Variational Inequalities. North-Holland, Amsterdam (1981)

  7. Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)

  8. Konnov, IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)

  9. Khobotov, EN: Modification of the extragradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 27, 120-127 (1987)

  10. Nagurney, A, Zhang, D: Projected Dynamical Systems and Variational Inequalities with Applications. Kluwer Academic, Dordrecht (1996)

  11. Korpelevich, GM: The extragradient method for finding saddle points and other problems. Matecon 12, 747-756 (1976)

  12. Iusem, AN, Svaiter, BF: A variant of Korpelevich’s method for variational inequalities with a new strategy. Optimization 42, 309-321 (1997)

  13. Auslender, A, Teboule, M: Interior projection-like methods for monotone variational inequalities. Math. Program., Ser. A 104, 39-68 (2005)

  14. Bnouhachem, A, Noor, MA, Rassias, TR: Three-steps iterative algorithms for mixed variational inequalities. Appl. Math. Comput. 183, 436-446 (2006)

  15. Bnouhachem, A, Fu, X, Xu, MH, Zhaohan, S: Modified extragradient methods for solving variational inequalities. Comput. Math. Appl. 57, 230-239 (2009)

  16. Fu, X: A two-stage prediction-correction method for solving variational inequalities. J. Comput. Appl. Math. 214, 345-355 (2008)

  17. He, BS, Liao, LZ: Improvement of some projection methods for monotone variational inequalities. J. Optim. Theory Appl. 112, 111-128 (2002)

  18. He, BS, Yang, H, Meng, Q, Han, DR: Modified Goldstein-Levitin-Polyak projection method for asymmetric strongly monotone variational inequalities. J. Optim. Theory Appl. 112, 129-143 (2002)

  19. Wang, YJ, Xiu, NH, Wang, CY: A new version of extragradient method for variational inequalities problems. Comput. Math. Appl. 42, 969-979 (2001)

  20. Xiu, NH, Zhang, JZ: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 152, 559-585 (2003)

  21. Bnouhachem, A: A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 309, 136-150 (2005)

  22. He, BS, Yang, ZH, Yuan, XM: An approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 300, 362-374 (2004)

  23. Zarantonello, EH: Projections on convex sets in Hilbert space and spectral theory. In: Zarantonello, EH (ed.) Contributions to Nonlinear Functional Analysis. Academic Press, New York (1971)

  24. Calamai, PH, Moré, JJ: Projected gradient methods for linearly constrained problems. Math. Program. 39, 93-116 (1987)

  25. Gafni, EM, Bertsekas, DP: Two-metric projection methods for constrained optimization. SIAM J. Control Optim. 22, 936-964 (1984)

  26. Peng, JM, Fukushima, M: A hybrid Newton method for solving the variational inequality problem via the D-gap function. Math. Program. 86, 367-386 (1999)

  27. He, BS, Yuan, XM, Zhang, JZ: Comparison of two kinds of prediction-correction methods for monotone variational inequalities. Comput. Optim. Appl. 27, 247-267 (2004)

  28. Harker, PT, Pang, JS: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265-284 (1990)

  29. Marcotte, P, Dussault, JP: A note on a globally convergent Newton method for solving variational inequalities. Oper. Res. Lett. 6, 35-42 (1987)

  30. Taji, K, Fukushima, M, Ibaraki, T: A globally convergent Newton method for solving strongly monotone variational inequalities. Math. Program. 58, 369-383 (1993)

Acknowledgements

Part of the research of the second author was done during his visit to King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia. The third author was partially supported by the grant MOST 103-2115-M-037-001.

Author information

Corresponding author

Correspondence to Ching-Feng Wen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors have equal contribution. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Bnouhachem, A., Ansari, Q.H. & Wen, CF. A projection descent method for solving variational inequalities. J Inequal Appl 2015, 143 (2015). https://doi.org/10.1186/s13660-015-0665-9
