
A smoothing inexact Newton method for variational inequalities with nonlinear constraints

Abstract

In this paper, we propose a smoothing inexact Newton method for solving variational inequalities with nonlinear constraints. Based on the smoothed Fischer-Burmeister function, the variational inequality problem is reformulated as a system of parameterized smooth equations. The linear system arising at each iteration is solved only approximately. Under some mild conditions, we establish global and local quadratic convergence. Some numerical results show that the method is effective.

1 Introduction

We consider the variational inequality problem (VI for abbreviation), which is to find a vector \(x^{*}\in\Omega\) such that

$$ \text{VI}(\Omega,F)\quad \bigl(x-x^{*} \bigr)^{\top}F \bigl(x^{*} \bigr)\geq0, \quad\forall x\in\Omega, $$
(1)

where Ω is a nonempty, closed and convex subset of \({\mathcal {R}}^{n}\) and F is a continuously differentiable mapping from \({\mathcal {R}}^{n}\) into \({\mathcal {R}}^{n}\). In this paper, without loss of generality, we assume that

$$ \Omega:= \bigl\{ x\in R^{n}|g(x)\geq0 \bigr\} , $$
(2)

where \(g:R^{n} \to R^{m}\) and \(g_{i}:R^{n} \to R\), \((i\in I=\{1,2,\ldots ,m\})\) are twice continuously differentiable concave functions. When \(\Omega =R^{n}_{+}\), VI reduces to the nonlinear complementarity problem (NCP for abbreviation)

$$ x^{*}\in R^{n}_{+},\qquad F \bigl(x^{*} \bigr)\in R^{n}_{+},\qquad {x^{*}}^{\top}F \bigl(x^{*} \bigr)=0. $$
(3)

Variational inequalities have important applications in mathematical programming, economics, signal processing, transportation and structural analysis [1–3]. Accordingly, a variety of numerical methods have been studied by many researchers; see, e.g., [4].

A popular way to solve \(\mbox{VI}(\Omega,F)\) is to reformulate (1) as a nonsmooth equation via the KKT system of the variational inequality and an NCP-function. It is well known that the KKT system of \(\mbox{VI}(\Omega,F)\) can be written as follows:

$$ \textstyle\begin{cases}F(x)-\nabla g(x)^{\top}\lambda=0,\\ g(x)-z=0,\\ \lambda\geq0, z\geq0, \quad \lambda^{\top}z=0, \end{cases} $$
(4)

and an NCP-function \(\phi(a,b)\) is defined by the following condition:

$$ \phi(a,b)=0 \quad\iff \quad a\geq0, b\geq0, ab=0. $$
(5)
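A classical instance is the Fischer-Burmeister function \(\phi_{FB}(a,b)=a+b-\sqrt{a^{2}+b^{2}}\), which satisfies (5) but is not differentiable at \((a,b)=(0,0)\); this nondifferentiability is exactly what the smoothed variant (9) below removes for \(\mu\neq0\).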

Then problem (1)-(2) is equivalent to the following nonsmooth equation:

$$ \begin{pmatrix}F(x)-\nabla g(x)^{\top}\lambda\\ g(x)-z\\ \phi(\lambda,z) \end{pmatrix}=0. $$
(6)

Hence, solving problem (1)-(2) amounts to solving equation (6).

The smoothing method is a fundamental approach for solving the nonsmooth equation (6). Recently, there has been strong interest in smoothing Newton methods for solving NCPs [5–12]. The idea of these methods is to construct a smooth function that approximates \(\phi(\lambda,z)\). Over the past few years, many different smoothing functions have been employed to smooth equation (6). Here, we define

$$ H(\mu,x,\lambda,z):= \begin{pmatrix}\mu\\ F(x)-\nabla g(x)^{\top}\lambda\\ g(x)-z\\ \Phi(\mu,\lambda,z) \end{pmatrix}, $$
(7)

where

$$ \Phi(\mu,\lambda,z):= \begin{pmatrix} \varphi(\mu,\lambda_{1},z_{1})\\ \vdots\\ \varphi(\mu,\lambda_{m},z_{m}) \end{pmatrix} $$
(8)

and

$$ \varphi(\mu,a,b)=a+b-\sqrt{a^{2}+b^{2}+ \mu^{2}}, \quad\forall(\mu,a,b)\in R^{3}. $$
(9)

It follows from equations (4)-(9) that \(H(\mu ,x,\lambda,z)=0\) is equivalent to \(\mu=0\) with \((x,\lambda,z)\) solving (6). Thus, we may solve the smooth system \(H(\mu,x,\lambda,z)=0\) while gradually driving μ to zero during the iteration.
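To make this concrete, here is a minimal Matlab sketch of evaluating H in (7); the handle names Ffun, gfun and gradg (the \(m\times n\) Jacobian of g) are our own, not part of the paper.

    % Evaluate H(mu,x,lambda,z) of (7); the smoothed Fischer-Burmeister
    % function (9) is applied componentwise as in (8).
    % Ffun(x): R^n -> R^n, gfun(x): R^n -> R^m, gradg(x): m-by-n Jacobian.
    function H = Hres(mu, x, lambda, z, Ffun, gfun, gradg)
        Phi = lambda + z - sqrt(lambda.^2 + z.^2 + mu^2);
        H = [mu; Ffun(x) - gradg(x)'*lambda; gfun(x) - z; Phi];
    end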

Based on the above symmetric perturbed Fischer-Burmeister function (9), Chen et al. [13] proposed the first globally and superlinearly convergent smoothing Newton method, which handles general box constrained variational inequalities. Rui et al. [14] proposed an inexact Newton-GMRES method for large-scale variational inequality problems under the assumption of linear inequality constraints.

In practice, variational inequalities with nonlinear constraints are more attractive; they have wide applications in economic networks [15], image restoration [3, 16] and so on. In this paper, within the framework of smoothing Newton methods, we propose a new inexact Newton method for solving \(\mbox{VI}(\Omega,F)\) with nonlinear constraints, which broadens the class of admissible constraints. We also prove global and local quadratic convergence and present some numerical results which show the efficiency of the proposed method.

Throughout this paper, we always assume that the solution set of problem (1)-(2), denoted by \(\Omega^{*}\), is nonempty. \({\mathcal {R}}_{+}\) and \({\mathcal {R}}_{++}\) denote the sets of nonnegative and positive real numbers, respectively. The symbol \(\Vert \cdot \Vert \) stands for the 2-norm.

The rest of this paper is organized as follows. In Section 2, we summarize some useful properties and definitions. In Section 3, we describe the inexact Newton method formally and then prove its local quadratic convergence. We also give global convergence in Section 4. In Section 5, we report our numerical results. Finally, we give some conclusions in Section 6.

2 Preliminaries

In this section, we collect some basic definitions and properties that will be used in the subsequent sections.

Definition 2.1

The operator F is monotone if, for any \(u,v \in{\mathcal {R}}^{n}\),

$$(u-v)^{\top}\bigl(F(u)-F(v) \bigr)\geq0; $$

F is strongly monotone with modulus \(\mu> 0\) if, for any \(u,v \in{\mathcal {R}}^{n}\),

$$(u-v)^{\top}\bigl(F(u)-F(v) \bigr)\geq\mu \Vert u-v \Vert ^{2}; $$

F is Lipschitz continuous with a positive constant \(L>0\) if, for any \(u,v \in{\mathcal {R}}^{n}\),

$$\bigl\Vert F(u)-F(v) \bigr\Vert \leq L \Vert u-v \Vert . $$

The following lemma gives some properties of H and its corresponding Jacobian.

Lemma 2.1

Let \(H(\mu,x,\lambda,z)\) be defined by (7). Assume that F is continuously differentiable and strongly monotone, g is twice continuously differentiable and concave, \((\mu^{*},x^{*},\lambda^{*},z^{*})\in{\mathcal {R}}_{+}\times{\mathcal {R}}^{n}\times{{\mathcal {R}}^{m}}\times{\mathcal {R}}^{m}\) is the solution of \(H(\mu,x,\lambda,z)=0\), the rows of \(\nabla g(x^{*})\) are linearly independent, and \((\lambda^{*},z^{*})\) satisfies the strict complementarity condition. Then

(i)

    \(H(\mu,x,\lambda,z)\) is continuously differentiable on \({\mathcal {R}}_{+}\times{\mathcal {R}}^{n}\times{\mathcal {R}}^{m}\times{\mathcal {R}}^{m}\).

(ii)

    \(\nabla H(\mu^{*},x^{*},\lambda^{*},z^{*})\) is nonsingular, where

    $$\begin{aligned} &\nabla{ H}(\mu,x,\lambda,z)= \begin{pmatrix} 1&0&0&0\\ 0 &\nabla F(x)-\nabla^{2}g(x)^{\top}\lambda& -\nabla g(x)^{\top}& 0\\ 0 &\nabla g(x)& 0 &-I\\ D_{\mu}& 0 &D_{\lambda}& D_{z} \end{pmatrix}, \\ & D_{\mu}=\operatorname{vec} \biggl(-\frac{\mu}{ \sqrt{\lambda_{i}^{2}+z_{i}^{2}+\mu^{2}}} \biggr), \\ & D_{\lambda}=\operatorname{diag} \biggl(1-\frac{\lambda_{i}}{ \sqrt{\lambda_{i}^{2}+z_{i}^{2}+\mu^{2}}} \biggr)\quad (i=1, \ldots, m), \\ & D_{z}=\operatorname{diag} \biggl(1-\frac{z_{i}}{ \sqrt{\lambda_{i}^{2}+z_{i}^{2}+\mu^{2}}} \biggr)\quad (i=1, \ldots, m) \quad \textit{and} \\ & \nabla^{2} g(x)^{\top}\lambda= \sum^{m}_{j=1}\nabla^{2} g_{j}(x)^{\top}\lambda_{j}. \end{aligned}$$
    (10)

Proof

It is not hard to show that H is continuously differentiable on \({\mathcal {R}}_{++}\times{\mathcal {R}}^{n}\times{\mathcal {R}}^{m}\times{\mathcal {R}}^{m}\). From \(H(\mu^{*},x^{*},\lambda^{*},z^{*})=0\) and (7), we immediately get \(\mu^{*}=0\). Since \((\lambda^{*},z^{*})\) satisfies the strict complementarity condition, i.e., \(\lambda_{i}^{*}\) and \(z_{i}^{*}\) are not both equal to 0, H is also continuously differentiable at \((\mu^{*},x^{*},\lambda^{*},z^{*})\). That is, (i) holds.

Now, we prove (ii). Let \(q=(q_{1}, q_{2}, q_{3}, q_{4})^{\top}\in{\mathcal {R}}\times {\mathcal {R}}^{n}\times{\mathcal {R}}^{m}\times{\mathcal {R}}^{m}\) with \(q_{1}=(q_{11})\), \(q_{2}=(q_{21}, q_{22}, \ldots, q_{2n})\), \(q_{3}=(q_{31}, q_{32}, \ldots, q_{3m})\), \(q_{4}=(q_{41}, q_{42}, \ldots, q_{4m})\), and suppose that

$$\begin{aligned} \nabla H \bigl(\mu^{*},x^{*},\lambda^{*},z^{*} \bigr) \begin{pmatrix}q_{1}\\q_{2}\\q_{3}\\q_{4} \end{pmatrix}=0. \end{aligned}$$
(11)

Hence, we have

$$ q_{11}=0, $$
(12)

$$ \bigl(\nabla F \bigl(x^{*} \bigr)-\nabla^{2}g \bigl(x^{*} \bigr)^{\top}\lambda^{*} \bigr)q_{2}-\nabla g \bigl(x^{*} \bigr)^{\top}q_{3}=0, $$
(13)

$$ \nabla g \bigl(x^{*} \bigr)q_{2}-q_{4}=0, $$
(14)

$$ D_{\lambda^{*}}q_{3}+D_{z^{*}}q_{4}=0, $$
(15)

where

$$ D_{\lambda^{*}}=\operatorname{diag} \biggl(1-\frac{\lambda_{i}^{*}}{ \sqrt{\lambda_{i}^{*2}+z_{i}^{*2}}} \biggr),\qquad D_{z^{*}}=\operatorname{diag} \biggl(1-\frac{z_{i}^{*}}{ \sqrt{\lambda_{i}^{*2}+z_{i}^{*2}}} \biggr). $$

We observe \(q_{1}=0\) immediately from (12).

Next, we consider formula (15). Written out componentwise, (15) reads:

$$\begin{aligned} \begin{pmatrix} (1-\frac{\lambda_{1}^{*}}{\sqrt{\lambda_{1}^{*2}+z_{1}^{*2}}} )q_{31}+ (1-\frac{z_{1}^{*}}{\sqrt{\lambda_{1}^{*2}+z_{1}^{*2}}} )q_{41}\\ (1-\frac{\lambda_{2}^{*}}{\sqrt{\lambda_{2}^{*2}+z_{2}^{*2}}} )q_{32}+ (1-\frac{z_{2}^{*}}{\sqrt{\lambda_{2}^{*2}+z_{2}^{*2}}} )q_{42}\\ \vdots\\ (1-\frac{\lambda_{m}^{*}}{\sqrt{\lambda_{m}^{*2}+z_{m}^{*2}}} )q_{3m}+ (1-\frac{z_{m}^{*}}{\sqrt{\lambda_{m}^{*2}+z_{m}^{*2}}} )q_{4m} \end{pmatrix}=0. \end{aligned}$$
(16)

According to the strict complementarity condition on \((\lambda^{*},z^{*})\), we have \(\lambda_{i}^{*}\geq0\), \(z_{i}^{*}\geq0\), \(\lambda_{i}^{*} z_{i}^{*} =0\), and \(\lambda_{i}^{*}\), \(z_{i}^{*}\) are not both equal to 0. If \(z_{i}^{*}= 0\), then \(\lambda_{i}^{*}> 0\), and

$$ \biggl(1-\frac{\lambda_{i}^{*}}{ \sqrt{\lambda_{i}^{*2}+z_{i}^{*2}}} \biggr)=0,\qquad \biggl(1-\frac{z_{i}^{*}}{ \sqrt{\lambda_{i}^{*2}+z_{i}^{*2}}} \biggr)=1. $$

From (16) we get \(q_{4i}=0\) and hence \(q_{3i}q_{4i}=0\). Similarly, if \(\lambda_{i}^{*}= 0\), then \(z_{i}^{*}> 0\), and we get \(q_{3i}=0\) and \(q_{3i}q_{4i}=0\). Hence, \(q_{3}^{\top}q_{4}=0\).

Multiplying equation (14) on the left by \(q_{3}^{\top}\) and using \(q_{3}^{\top}q_{4}=0\), we have

$$ q_{3}^{\top}\nabla g \bigl(x^{*} \bigr) q_{2}=0. $$
(17)

Multiplying equation (13) on the left by \(q_{2}^{\top}\) and using (17), we have

$$ q_{2}^{\top}\bigl(\nabla F \bigl(x^{*} \bigr)- \nabla^{2} g \bigl(x^{*} \bigr)^{\top}\lambda^{*} \bigr)q_{2}=0. $$
(18)

Meanwhile, since F is strongly monotone, \(\nabla F(x^{*})\) is a positive definite matrix. Besides, since g is concave and \(\lambda^{*}\) is nonnegative, \(\nabla^{2} g(x^{*})^{\top}\lambda^{*} = \sum^{m}_{j=1}\nabla^{2} g_{j}(x^{*})^{\top}\lambda^{*}_{j} \) is negative semidefinite, which implies that \(\nabla F(x^{*})- \nabla^{2} g(x^{*})^{\top}\lambda^{*} \) is positive definite. So we get \(q_{2}=0\) by (18).

Substituting \(q_{2}=0\) into (13) and using the fact that the rows of \(\nabla g(x^{*})\) are linearly independent, we get \(q_{3}=0\). Substituting \(q_{2}=0\) into (14), we get \(q_{4}=0\). Hence \(q=0\), which implies that \(\nabla H(\mu^{*},x^{*},\lambda^{*},z^{*})\) is nonsingular. This completes the proof. □
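To illustrate (10), the following Matlab sketch assembles ∇H; the names JF (the Jacobian of F) and hessg (a cell array of the Hessians of the \(g_{j}\)) are our assumptions, and the block layout mirrors (10).

    % Assemble the Jacobian (10) of H at (mu,x,lambda,z).
    function J = HJac(mu, x, lambda, z, JF, gradg, hessg)
        n = numel(x);  m = numel(lambda);
        G = gradg(x);                       % m-by-n Jacobian of g
        S = zeros(n);                       % sum_j lambda_j * Hessian of g_j
        for j = 1:m
            S = S + lambda(j)*hessg{j}(x);
        end
        r   = sqrt(lambda.^2 + z.^2 + mu^2);
        Dmu = -mu./r;                       % column vector D_mu
        Dl  = diag(1 - lambda./r);          % D_lambda
        Dz  = diag(1 - z./r);               % D_z
        J = [1,          zeros(1,n), zeros(1,m), zeros(1,m);
             zeros(n,1), JF(x) - S,  -G',        zeros(n,m);
             zeros(m,1), G,          zeros(m),   -eye(m);
             Dmu,        zeros(m,n), Dl,         Dz];
    end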

3 The inexact algorithm and its convergence

We are now in a position to describe our smoothing inexact Newton method formally, which uses the smoothed Fischer-Burmeister function (9) to solve variational inequalities with nonlinear constraints. We also show that this method is locally quadratically convergent.

Algorithm 3.1

Inexact Newton method

Step 0.:

Let \(w=(x,\lambda,z)\) and let \((\mu^{0},w^{0})\in{\mathcal {R}}_{++}\times{\mathcal {R}}^{n+2m}\) be an arbitrary point. Choose \(\gamma\in(0,1)\) such that \(\gamma\mu^{0}<\frac{1}{2}\).

Step 1.:

If \(\Vert H(\mu^{k},w^{k}) \Vert ^{2}=0\), then stop.

Step 2.:

Compute \((\Delta\mu^{k},\Delta w^{k})\) by

$$ H \bigl(\mu^{k},w^{k} \bigr)+ \nabla{H} \bigl( \mu^{k},w^{k} \bigr) \begin{pmatrix}\Delta\mu^{k}\\ \Delta w^{k} \end{pmatrix} = \begin{pmatrix}\rho_{k}\mu^{0}\\ r^{k} \end{pmatrix}, $$
(19)

where \(\rho_{k}=\rho(\mu^{k},w^{k}):=\gamma\min\{1, \Vert H(\mu^{k},w^{k}) \Vert ^{2}\}\), and \(r^{k} \in{\mathcal {R}}^{n+2m}\) such that \(\Vert r^{k} \Vert \leq\rho_{k}\mu^{0}\).

Step 3.:

Set \(\mu^{k+1}=\mu^{k}+\Delta\mu^{k}\) and \(w^{k+1}=w^{k}+\Delta w^{k}\). Set \(k:=k+1\) and go to Step 1.

Remark 1

(1)

    In theory, we use \(\Vert H(\mu^{k},w^{k}) \Vert ^{2}=0\) as the termination rule of Algorithm 3.1. In practice, we use \(\Vert H(\mu^{k},w^{k}) \Vert ^{2}\leq\varepsilon\) as the termination rule, where ε is a preset tolerance.

(2)

    It is obvious that we have \(\rho_{k}\leq\gamma \Vert H(\mu^{k},w^{k}) \Vert ^{2}\).

(3)

    From (7) and (19), we have \(\mu^{k+1}=\rho_{k}\mu^{0}>0\) for any \(k\geq0\).
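For the reader's convenience, here is a Matlab sketch of the iteration, using Hres and HJac from the previous sections and taking \(r^{k}=0\) (a direct solve); an iterative solver with residual tolerance \(\rho_{k}\mu^{0}\) realizes the general inexact case of (19). The tolerance eps0 and the problem handles are our assumptions.

    % Algorithm 3.1 (local inexact Newton), a sketch; w = (x;lambda;z).
    gamma = 1e-3;  mu = mu0;  x = x0;  lam = lam0;  z = z0;
    for k = 1:maxit
        Hk = Hres(mu, x, lam, z, Ffun, gfun, gradg);
        if norm(Hk)^2 <= eps0, break; end              % Step 1
        rho = gamma*min(1, norm(Hk)^2);
        J   = HJac(mu, x, lam, z, JF, gradg, hessg);
        d   = J \ (-Hk + [rho*mu0; zeros(n+2*m,1)]);   % Step 2, system (19)
        mu  = mu + d(1);                               % mu^{k+1} = rho_k*mu0 > 0
        x   = x + d(2:n+1);
        lam = lam + d(n+2:n+m+1);
        z   = z + d(n+m+2:end);                        % Step 3
    end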

Now, we are ready to analyze the convergence. The quadratic convergence of Algorithm 3.1 is given below.

Theorem 3.1

Assume that \((\mu^{*},w^{*})\) satisfies \(H(\mu^{*},w^{*})=0\). Suppose that \(H(\mu,w)\) satisfies the conditions of Lemma 2.1 and that \(\nabla H(\mu,w)\) is Lipschitz continuous with constant L. Then we have the following conclusions:

(1)

    There exists a set \(D\subset{\mathcal {R}}_{+}\times{\mathcal {R}}^{n+2m}\) containing \((\mu^{*},w^{*})\) such that, for any \((\mu ^{0},w^{0})\in D\), the iterates \((\mu^{k},w^{k})\) generated by Algorithm 3.1 are well defined, remain in D and converge to \((\mu^{*},w^{*})\);

(2)
    $$ \bigl\Vert \bigl(\mu^{k+1}-\mu^{*},w^{k+1}-w^{*} \bigr) \bigr\Vert \leq \beta \bigl\Vert \bigl(\mu^{k}-\mu ^{*},w^{k}-w^{*} \bigr) \bigr\Vert ^{2}, $$
    (20)

    where \(\beta:= (L+16\gamma\mu^{0} \Vert \nabla H(\mu^{*},w^{*}) \Vert ^{2}) \Vert \nabla H(\mu^{*},w^{*})^{-1} \Vert \).

Proof

Following Theorem 5.2.1 in [17] and using Lemma 2.1, we give the proof in detail.

Denote

$$ u= \begin{pmatrix}\mu\\w \end{pmatrix} , \qquad u^{*}=\begin{pmatrix}\mu^{*}\\w^{*} \end{pmatrix},\qquad \Delta u= \begin{pmatrix}\Delta\mu\\ \Delta w \end{pmatrix},\qquad v^{k}=\begin{pmatrix}\rho_{k}\mu^{0}\\r^{k} \end{pmatrix}. $$

By Step 2 of Algorithm 3.1, we have

$$ \bigl\Vert v^{k} \bigr\Vert \leq\rho_{k} \mu^{0} + \bigl\Vert r^{k} \bigr\Vert \leq2 \rho_{k}\mu ^{0}\leq2\gamma \mu^{0} \bigl\Vert H \bigl(u^{k} \bigr) \bigr\Vert ^{2}. $$
(21)

According to Lemma 2.1, we get that \(\nabla H(u^{*})\) is nonsingular. Then there exist a positive constant \(\bar{t}<\frac{1}{\beta}\) and a neighborhood \(N(u^{*},\bar{t})\) of \(u^{*}\) such that \(L\bar{t} \leq \Vert \nabla H(u^{*}) \Vert \), and for any \(u\in N(u^{*},\bar{t})\), we have that \(\nabla H(u)\) is nonsingular and

$$\begin{aligned} \bigl\Vert \nabla H(u) \bigr\Vert - \bigl\Vert \nabla H \bigl(u^{*} \bigr) \bigr\Vert \leq \bigl\Vert \nabla H(u)- \nabla H \bigl(u^{*} \bigr) \bigr\Vert \leq L \bigl\Vert u-u^{*} \bigr\Vert , \end{aligned}$$
(22)

where the first inequality follows from the triangle inequality and the second inequality follows from the Lipschitz continuity. Hence we have

$$ \bigl\Vert \nabla H(u) \bigr\Vert \leq2 \bigl\Vert \nabla H \bigl(u^{*} \bigr) \bigr\Vert $$
(23)

for any \(u\in N(u^{*},\bar{t})\). Similarly, by the perturbation relation (3.1.20) in [17], we know that \(\nabla H(u)\) is nonsingular and

$$ \bigl\Vert \nabla H(u)^{-1} \bigr\Vert \leq \frac{ \Vert \nabla H(u^{*})^{-1} \Vert }{1- \Vert \nabla H(u^{*})^{-1} (\nabla H(u)-\nabla H(u^{*}) ) \Vert }\leq2 \bigl\Vert \nabla H \bigl(u^{*} \bigr)^{-1} \bigr\Vert . $$
(24)

Besides, for any \(t \in[0,1]\), we have

\(u^{*}+t(u-u^{*})\in N(u^{*},\bar{t})\) and \(H(u)-H(u^{*})=\int_{0}^{1}\nabla H[u^{*}+t(u-u^{*})](u-u^{*})\,dt\).

From \(\Vert H(u^{*}) \Vert =0\), we have

$$\begin{aligned} \bigl\Vert H(u) \bigr\Vert \leq \int_{0}^{1} \bigl\Vert \nabla H \bigl(u^{*}+t \bigl(u-u^{*} \bigr) \bigr) \bigr\Vert \bigl\Vert \bigl(u-u^{*} \bigr) \bigr\Vert \,dt \leq2 \bigl\Vert \nabla H \bigl(u^{*} \bigr) \bigr\Vert \bigl\Vert u-u^{*} \bigr\Vert . \end{aligned}$$
(25)

According to Algorithm 3.1, for any \(u^{k} \in N(u^{*},\bar{t}), k\geq0\), we have

$$\begin{aligned} & u^{k+1}-u^{*} \\ &\quad = u^{k}-u^{*}-\nabla H \bigl(u^{k} \bigr)^{-1} \bigl(H \bigl(u^{k} \bigr)-v^{k} \bigr) \\ &\quad = \nabla H \bigl(u^{k} \bigr)^{-1} \bigl(\nabla H \bigl(u^{k} \bigr) \bigl(u^{k}-u^{*} \bigr)- \bigl(H \bigl(u^{k} \bigr)-H \bigl(u^{*} \bigr) \bigr)+v^{k} \bigr) \\ &\quad = \nabla H \bigl(u^{k} \bigr)^{-1} \\ &\qquad{}\times \biggl( \int_{0}^{1} \bigl(\nabla H \bigl(u^{k} \bigr)-\nabla H \bigl(u^{*}+t \bigl(u^{k}-u^{*} \bigr) \bigr) \bigr) \bigl(u^{k}-u^{*} \bigr)\,dt+v^{k} \biggr). \end{aligned}$$
(26)

Taking norms on both sides, we get

$$\begin{aligned} & \bigl\Vert u^{k+1}-u^{*} \bigr\Vert \\ &\quad\leq \bigl\Vert \nabla H \bigl(u^{k} \bigr)^{-1} \bigr\Vert \biggl( \int_{0}^{1}L(1-t) \bigl\Vert u^{k}-u^{*} \bigr\Vert ^{2}\,dt+ \bigl\Vert v^{k} \bigr\Vert \biggr) \\ &\quad\leq 2 \bigl\Vert \nabla H \bigl(u^{*} \bigr)^{-1} \bigr\Vert \biggl(\frac{1}{2}L \bigl\Vert u^{k}-u^{*} \bigr\Vert ^{2} + 2\gamma\mu ^{0} \bigl\Vert H \bigl(u^{k} \bigr) \bigr\Vert ^{2} \biggr) \\ &\quad\leq 2 \bigl\Vert \nabla H \bigl(u^{*} \bigr)^{-1} \bigr\Vert \biggl(\frac{1}{2}L \bigl\Vert u^{k}-u^{*} \bigr\Vert ^{2}+8\gamma\mu ^{0} \bigl\Vert \nabla H \bigl(u^{*} \bigr) \bigr\Vert ^{2} \bigl\Vert u^{k}-u^{*} \bigr\Vert ^{2} \biggr) \\ &\quad = \bigl(L + 16\gamma\mu^{0} \bigl\Vert \nabla H \bigl(u^{*} \bigr) \bigr\Vert ^{2} \bigr) \bigl\Vert \nabla H \bigl(u^{*} \bigr)^{-1} \bigr\Vert \bigl\Vert u^{k}-u^{*} \bigr\Vert ^{2}, \end{aligned}$$
(27)

where the first inequality follows from the Lipschitz continuity, the second inequality follows from (21), and the third inequality follows from (25).

According to the definition of β and the condition \(\bar{t}<\frac{1}{\beta}\), we have \(\beta \Vert u^{k}-u^{*} \Vert \leq\beta\bar{t}<1\), so the iterates remain in \(N(u^{*},\bar{t})\) and \(u^{k}\) converges to \(u^{*}\). Moreover, (20) holds. This completes the proof. □

4 The global inexact algorithm and its convergence

Now we globalize Algorithm 3.1 by means of a line search. We choose the merit function \(h(\mu,w)=\frac{1}{2} \Vert H(\mu,w) \Vert ^{2}\) and modify \((\Delta\mu^{k}, \Delta w^{k})\) such that

$$ -{ \bigl(\Delta\mu^{k}, \Delta w^{k} \bigr)}^{\top}\nabla h \bigl(\mu^{k},w^{k} \bigr)\geq \delta \bigl\Vert \bigl(\Delta\mu^{k}, \Delta w^{k} \bigr) \bigr\Vert \bigl\Vert \nabla h \bigl(\mu^{k},w^{k} \bigr) \bigr\Vert . $$
(28)

We use a line search to find a step length \(t^{k}\in(0,1]\) such that

$$\begin{aligned} & h \bigl(\mu^{k}+t^{k}\Delta \mu^{k},w^{k}+t^{k}\Delta w^{k} \bigr)\leq h \bigl(\mu^{k},w^{k} \bigr)+\bar {\rho}t^{k}\nabla h \bigl(\mu^{k},w^{k} \bigr)^{\top}\bigl(\Delta \mu^{k}, \Delta w^{k} \bigr), \end{aligned}$$
(29)
$$\begin{aligned} &\nabla h \bigl(\mu^{k}+t^{k}\Delta \mu^{k},w^{k}+t^{k}\Delta w^{k} \bigr)^{\top}\bigl(\Delta\mu ^{k}, \Delta w^{k} \bigr) \geq\bar{\sigma}\nabla h \bigl(\mu^{k},w^{k} \bigr)^{\top}\bigl(\Delta\mu ^{k}, \Delta w^{k} \bigr) \end{aligned}$$
(30)

and

$$ \mu^{k}+t^{k}\Delta\mu^{k}\in{ \mathcal {R}}_{++},\qquad w^{k}+t^{k}\Delta w^{k} \in {\mathcal {R}}^{n+2m}, $$
(31)

where \(\bar{\rho}\in(0,0.5), \bar{\sigma} \in(\bar{\rho},1), \delta\in(0,1)\).
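For illustration, a simplified Matlab sketch that enforces the sufficient-decrease condition (29) and the positivity requirement (31) by halving the step; satisfying the curvature condition (30) as well requires a full Wolfe-type search (e.g., [18], Algorithm 3.5), which we omit here. The names hfun and ghfun (the merit function h and its gradient) are ours.

    % Backtracking on t for (29) and (31); d = (dmu; dw), rbar in (0,0.5).
    t = 1;  rbar = 1e-4;
    slope = ghfun(mu, w)'*d;                % directional derivative of h
    while hfun(mu + t*d(1), w + t*d(2:end)) > ...
          hfun(mu, w) + rbar*t*slope || mu + t*d(1) <= 0
        t = t/2;
    end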

Algorithm 4.1

Global inexact Newton method

Step 0.:

Choose \((\mu^{0},w^{0})\in{\mathcal {R}}_{++}\times {\mathcal {R}}^{n+2m}\) to be an arbitrary point. Choose \(\gamma\in(0,1)\) such that \(\gamma \mu^{0}<\frac{1}{2}\). Choose \(\bar{\rho}\in(0,0.5), \bar{\sigma} \in(\bar{\rho},1), \delta\in(0,1)\).

Step 1.:

If \(\Vert H(\mu^{k},w^{k}) \Vert ^{2}=0\), then stop.

Step 2.:

Find \((\Delta\mu^{k},\Delta w^{k})\) by solving (19). If (28) is not satisfied, then choose \(\tau_{k}\) and compute

$$ \bigl(\Delta\mu^{k}, \Delta w^{k} \bigr)=- \bigl(\nabla H \bigl(\mu^{k},w^{k} \bigr)^{\top}{ \nabla H \bigl(\mu^{k},w^{k} \bigr)}+\tau_{k}I \bigr)^{-1}\nabla h \bigl(\mu^{k},w^{k} \bigr), $$
(32)

such that (28) is satisfied.

Step 3.:

Find a step-length \(t^{k}\in(0,1]\) satisfying (29)-(31).

Set \(\mu^{k+1}=\mu^{k}+t^{k}\Delta\mu^{k}\), \(w^{k+1}=w^{k}+t^{k}\Delta w^{k}\). Set \(k:=k+1\) and go to Step 1.

Remark 2

In Step 2, if (28) is not satisfied, then the technique in [18], pp.264-265, is used to choose \(\tau_{k}\). By Lemma 3.1 in [18], it is not difficult to find \(t^{k}\) satisfying (29)-(31).
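A Matlab sketch of this safeguard, under our naming (J the Jacobian matrix \(\nabla H(\mu^{k},w^{k})\), Hk the residual, gh the gradient of h): if the step d from (19) violates the angle condition (28), \(\tau_{k}\) is increased until (28) holds; as \(\tau_{k}\) grows, (32) approaches the steepest-descent direction, so the loop terminates.

    % Angle test (28) and the regularized fallback direction (32).
    gh = J'*Hk;                             % gradient of h = 0.5*||H||^2
    angle_ok = @(d) -(d'*gh) >= delta*norm(d)*norm(gh);
    tau = 1e-4;
    while ~angle_ok(d)
        d   = -(J'*J + tau*eye(length(gh))) \ gh;     % direction (32)
        tau = 10*tau;                       % increase regularization
    end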

In order to obtain the global convergence of Algorithm 4.1, throughout the rest of this paper, we define the level set \(\mathcal{L}(\mu^{0},w^{0})=\{(\mu,w)|h(\mu,w)\leq h(\mu ^{0},w^{0})\}\) for \((\mu^{0},w^{0})\in{\mathcal {R}}_{++}\times{\mathcal {R}}^{n+2m}\).

Theorem 4.1

Suppose that \(\nabla H(\mu,w)\) is Lipschitz continuous in \(\mathcal {L}(\mu^{0},w^{0})\). Then we have

$$ \lim_{k\rightarrow\infty}\nabla h \bigl(\mu^{k}, w^{k} \bigr)=0. $$

Proof

The proof follows from Theorem 3.2 in [18] and condition (28). □

Theorem 4.2

Let \((\mu^{0},w^{0})\in{\mathcal {R}}_{++}\times{\mathcal {R}}^{n+2m}\) and let \(H(\mu,x,\lambda,z)\) be defined by (7). Assume that \(\nabla H(\mu,w)\) is Lipschitz continuous in \(\mathcal {L}(\mu^{0},w^{0})\), that \((\mu^{*},w^{*})\) is a limit point of \(\{(\mu^{k},w^{k})\}\) generated by Algorithm 4.1 with \(\nabla H(\mu^{*},w^{*})\) nonsingular, and that \(t^{k}=1\) is admissible and (28) is satisfied for all \(k\geq k_{0}\), where \(k_{0}\) is sufficiently large. Then the sequence \(\{(\mu^{k},w^{k})\}\) converges to \((\mu^{*},w^{*})\) quadratically.

Proof

From Theorem 4.1, we have

$$ \lim_{k\rightarrow\infty}\nabla h \bigl(\mu^{k}, w^{k} \bigr)=0, $$

where \(\nabla h(\mu^{k}, w^{k})=\nabla H(\mu^{k}, w^{k})H(\mu^{k}, w^{k})\). Since \(\nabla H(\mu^{*},w^{*})\) is nonsingular and \((\mu^{*},w^{*})\) is a limit point of \(\{(\mu^{k},w^{k})\}\) generated by Algorithm 4.1, we have

$$ \lim_{k\rightarrow\infty}H \bigl(\mu^{k}, w^{k} \bigr)=0. $$

Since \(t^{k}=1\) is admissible and (28) is satisfied for all \(k\geq k_{0}\) by assumption, \(\{(\mu^{k},w^{k})\}\) is generated by Algorithm 3.1 for \(k> k_{0}\). The conclusion then follows directly from Theorem 3.1. This completes the proof. □

5 Numerical results

In this section, we present some numerical results for Algorithm 4.1. All codes are written in Matlab and run on an Intel(R) Core(TM) i5-3210M personal computer. In the algorithm, we choose \(\gamma=0.001\). We also use \(\Vert H(\mu^{k},w^{k}) \Vert \leq10^{-5}\) as the stopping rule for all examples.

It is not easy to find suitable test examples for variational inequalities with nonlinear constraints. Hence, we modify some test examples from the literature and solve them by Algorithm 4.1.

Example 5.1 (see [19])

Let

$$ F(x)= \begin{pmatrix}2x_{1}+0.2x_{1}^{3}-0.5x_{2}+0.1x_{3}-4\\ -0.5x_{1}+x_{2}+0.1x_{2}^{3}+0.5\\ 0.5x_{1}-0.2x_{2}+2x_{3}-0.5 \end{pmatrix} $$

and

$$ g(x)=-x_{1}^{2}-0.4x_{2}^{2}-0.6x_{3}^{2}+1. $$

It is easily verified that the problem has the solution \(x^{*}=(1, 0, 0)\). The initial point is \(x^{0}=(0,1,1)\), and \(\mu^{0}=0.2\).
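The data of this example can be coded directly; a Matlab sketch (the handle names are ours):

    % Example 5.1: F, g and the Jacobian of g.
    Ffun  = @(x) [ 2*x(1) + 0.2*x(1)^3 - 0.5*x(2) + 0.1*x(3) - 4;
                  -0.5*x(1) + x(2) + 0.1*x(2)^3 + 0.5;
                   0.5*x(1) - 0.2*x(2) + 2*x(3) - 0.5];
    gfun  = @(x) -x(1)^2 - 0.4*x(2)^2 - 0.6*x(3)^2 + 1;
    gradg = @(x) [-2*x(1), -0.8*x(2), -1.2*x(3)];    % 1-by-3 Jacobian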

Example 5.2

This example is derived from [20]. Because the original problem is an optimization problem, we restate it as a variational inequality via the optimality conditions, i.e.,

$$ F(x)= \begin{pmatrix}2x_{1}-5\\ 2x_{2}-5\\ 4x_{3}-21\\ 2x_{4}+7 \end{pmatrix} ,\qquad g(x)=\begin{pmatrix}-x_{1}^{2}-x_{2}^{2}-x_{3}^{2}-x_{4}^{2}-x_{1}+x_{2}-x_{3}+x_{4}+8\\ -x_{1}^{2}-2x_{2}^{2}-x_{3}^{2}-2x_{4}^{2}+x_{1}+x_{4}+10\\ -2x_{1}^{2}-x_{2}^{2}-x_{3}^{2}-2x_{1}+x_{2}+x_{4}+5 \end{pmatrix}. $$

The solution of Example 5.2 is \(x^{*}=(0,1,2,-1)\). The initial point is \(x^{0}=(0,0,0,0)\) and \(\mu^{0}=0.2\).

In Tables 1-2, ‘k’ denotes the number of iterations, and ‘\(\Vert H(\mu ^{k},w^{k}) \Vert \)’ denotes the 2-norm of \(H(\mu^{k},w^{k})\). From Tables 1-2, we observe that Algorithm 4.1 finds the solution in a small number of iterations for the above two examples. To further show the efficiency of Algorithm 4.1, we give two more examples in which the dimension of the problem ranges from 100 to 1,000.

Table 1 Numerical results for Example 5.1 with \(x^{0}=(0,1,1)\)
Table 2 Numerical results for Example 5.2 with \(x^{0}=(0,0,0,0)\)

In the following tests, we solve the linear systems for Δw by the GMRES(m) method with \(m=10\), allowing a maximum of 100 cycles (2,000 iterations). We choose \(\mu^{0}\) as a random number in \((0,5)\).
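In Matlab, this inner solve can be realized with the built-in gmres routine; a sketch, where rhs denotes the right-hand side of the Newton system and the relative tolerance is our way of matching the bound \(\Vert r^{k} \Vert \leq\rho_{k}\mu^{0}\) in (19):

    % Restarted GMRES(10), at most 100 cycles, for the Newton system.
    restart = 10;  maxcycles = 100;
    reltol  = rho*mu0/norm(rhs);            % absolute residual <= rho*mu0
    [dw, flag] = gmres(J, rhs, restart, reltol, maxcycles);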

Example 5.3

We consider a problem with nonlinear constraints, derived from [21] with different sizes. In addition to the linear constraints of the original problem, we add some nonlinear constraints. In this example,

$$\begin{aligned} F(x)= \begin{pmatrix} 4& -2& & & \\ 1 &4 &\ddots& & \\ & &\ddots&\ddots& \\ & & &4&-2\\ & & &1 & 4 \end{pmatrix} \begin{pmatrix} x_{1}\\x_{2}\\ \vdots\\ x_{n-1}\\ x_{n} \end{pmatrix} + \begin{pmatrix} -1\\-1\\ \vdots\\ -1\\-1 \end{pmatrix},\qquad g(x)=\begin{pmatrix} x_{1}+10\\ \vdots\\ x_{n}+10\\ 100-x_{1}^{2}\\ \vdots\\ 100-x_{n}^{2} \end{pmatrix} . \end{aligned}$$
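In Matlab, this problem can be assembled for any n as follows (a sketch; the constraint Jacobian stacks the bound rows above the quadratic rows):

    % Example 5.3: banded F and the 2n constraints.
    e = ones(n,1);
    M = spdiags([e, 4*e, -2*e], -1:1, n, n);  % 1 below, 4 on, -2 above diagonal
    Ffun  = @(x) M*x - e;
    gfun  = @(x) [x + 10; 100 - x.^2];
    gradg = @(x) [speye(n); spdiags(-2*x, 0, n, n)];  % 2n-by-n Jacobian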

Example 5.4

This example is an NCP with \(F(x)=D(x)+Mx+q\). The components of \(D(x)\) are \(D_{j}(x)=d_{j}\cdot \operatorname{arctan}(x_{j})\), where \(d_{j}\) is a random number in \((0,5)\). The matrix \(M=A^{\top}A+B\), where A is an \(n\times n\) matrix whose entries are randomly generated in the interval \((-5,5)\), and the skew-symmetric matrix B is generated in the same way. The vector q is generated from a uniform distribution in the interval \((-50,50)\).
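A Matlab sketch of the data generation; reading ‘generated in the same way’ as \(B=C-C^{\top}\) for a random C is our assumption.

    % Example 5.4: random NCP data.
    A = -5 + 10*rand(n);                    % entries uniform in (-5,5)
    C = -5 + 10*rand(n);  B = C - C';       % skew-symmetric B (our reading)
    M = A'*A + B;
    d = 5*rand(n,1);                        % d_j uniform in (0,5)
    q = -50 + 100*rand(n,1);                % q uniform in (-50,50)
    Ffun = @(x) d.*atan(x) + M*x + q;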

In Tables 3-4, ‘n’ denotes the dimension of the problem, ‘No.it’ the number of iterations, ‘CPU’ the CPU time in seconds, and ‘\(\Vert H(\mu^{k},w^{k}) \Vert \)’ the 2-norm of \(H(\mu^{k},w^{k})\). From Tables 3-4, we find that Algorithm 4.1 is robust with respect to problem size for these two problems. Moreover, the number of iterations is insensitive to the problem size. In other words, our algorithm is effective for both problems.

Table 3 Numerical results for Example 5.3
Table 4 Numerical results for Example 5.4

6 Conclusions

Within the framework of smoothing Newton methods, we have proposed a new smoothing inexact Newton algorithm for variational inequalities with nonlinear constraints. Under some mild conditions, we established global and local quadratic convergence. Furthermore, we presented some preliminary numerical results which show the efficiency of the algorithm.

References

  1. Ferris, MC, Pang, JS: Engineering and economic applications of complementarity problems. SIAM Rev. 39, 669-713 (1997)

  2. Fischer, A: Solution of monotone complementarity problems with locally Lipschitzian functions. Math. Program. 76, 513-532 (1997)

  3. Li, YY, Santosa, F: A computational algorithm for minimizing total variation in image restoration. IEEE Trans. Image Process. 5, 987-995 (1996)

  4. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Volumes I and II. Springer, Berlin (2003)

  5. Chen, BT, Harker, PT: Smooth approximations to nonlinear complementarity problems. SIAM J. Optim. 7, 403-420 (1997)

  6. Chen, BT, Xiu, NH: A global linear and local quadratic noninterior continuation method for nonlinear complementarity problems based on Mangasarian smoothing functions. SIAM J. Optim. 9, 605-623 (1999)

  7. Qi, LQ, Sun, DF, Zhou, GL: A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities. Math. Program. 87, 1-35 (2000)

  8. Qi, LQ, Sun, DF: Improving the convergence of non-interior point algorithms for nonlinear complementarity problems. Math. Comput. 229, 283-304 (2000)

  9. Tseng, P: Error bounds and superlinear convergence analysis of some Newton-type methods in optimization. In: Nonlinear Optimization and Related Topics. Springer, Berlin (2000)

  10. Ma, CF, Chen, XH: The convergence of a one-step smoothing Newton method for P0-NCP based on a new smoothing NCP-function. J. Comput. Appl. Math. 216, 1-13 (2008)

  11. Zhang, J, Zhang, KC: A variant smoothing Newton method for P0-NCP based on a new smoothing function. J. Comput. Appl. Math. 225, 1-8 (2009)

  12. Rui, SP, Xu, CX: A smoothing inexact Newton method for nonlinear complementarity problems. J. Comput. Appl. Math. 233, 2332-2338 (2010)

  13. Chen, XJ, Qi, LQ, Sun, DF: Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities. Math. Comput. 67, 519-540 (1998)

  14. Rui, SP, Xu, CX: A globally and locally superlinearly convergent inexact Newton-GMRES method for large-scale variational inequality problem. Int. J. Comput. Math. 3, 578-587 (2014)

  15. Toyasaki, F, Daniele, P, Wakolbinger, T: A variational inequality formulation of equilibrium models for end-of-life products with nonlinear constraints. Eur. J. Oper. Res. 236, 340-350 (2014)

  16. Ng, MK, Weiss, P, Yuan, XM: Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods. SIAM J. Sci. Comput. 32, 2710-2736 (2010)

  17. Dennis, JE Jr, Schnabel, RB: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice Hall, Philadelphia (1983)

  18. Nocedal, J, Wright, SJ: Numerical Optimization, 2nd edn. Springer, New York (1999)

  19. Fukushima, M: A relaxed projection method for variational inequalities. Math. Program. 35, 58-70 (1986)

  20. Charalambous, C: Nonlinear least pth optimization and nonlinear programming. Math. Program. 12, 195-225 (1977)

  21. Ahn, BH: Iterative methods for linear complementarity problems with upperbounds on primary variables. Math. Program. 26, 295-315 (1983)


Acknowledgements

The first author is supported by Funding of Jiangsu Innovation Program for Graduate Education (KYZZ15_0087) and the Fundamental Research Funds for the Central Universities. The second author is supported by NSFC (11471159; 11571169; 61661136001) and the Natural Science Foundation of Jiangsu Province (BK20141409).


Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

