A smoothing inexact Newton method for variational inequalities with nonlinear constraints
 Zhili Ge^{1, 2},
 Qin Ni^{1} and
 Xin Zhang^{3}
https://doi.org/10.1186/s13660-017-1433-9
© The Author(s) 2017
Received: 15 March 2017
Accepted: 22 June 2017
Published: 10 July 2017
Abstract
In this paper, we propose a smoothing inexact Newton method for solving variational inequalities with nonlinear constraints. Based on the smoothed Fischer-Burmeister function, the variational inequality problem is reformulated as a system of parameterized smooth equations, and the linear system arising at each iteration is solved only approximately. Under mild conditions, we establish global convergence and local quadratic convergence. Numerical results show that the method is effective.
1 Introduction
Variational inequalities have important applications in mathematical programming, economics, signal processing, transportation and structural analysis [1–3]. Consequently, a variety of numerical methods have been studied by many researchers; see, e.g., [4].
Based on the above symmetric perturbed Fischer-Burmeister function (9), Chen et al. [13] proposed the first globally and superlinearly convergent smoothing Newton method, which handles general box-constrained variational inequalities. Rui et al. [14] proposed an inexact Newton-GMRES method for large-scale variational inequality problems under the assumption of linear inequality constraints.
In practice, variational inequalities with nonlinear constraints are more attractive; they have wide applications in economic networks [15], image restoration [3, 16] and so on. Therefore, within the framework of smoothing Newton methods, we propose a new inexact Newton method for solving \(\mbox{VI}(\Omega,F)\) with nonlinear constraints, which extends the admissible class of constraints. We prove global and local quadratic convergence and present numerical results that show the efficiency of the proposed method.
Throughout this paper, we assume that the solution set of problem (1) and (2), denoted by \(\Omega^{*}\), is nonempty. \({\mathcal {R}}_{+}\) and \({\mathcal {R}}_{++}\) denote the sets of nonnegative and positive real numbers, respectively. The symbol \(\Vert \cdot \Vert \) stands for the 2-norm.
The rest of this paper is organized as follows. In Section 2, we summarize some useful properties and definitions. In Section 3, we describe the inexact Newton method formally and then prove its local quadratic convergence. We also give global convergence in Section 4. In Section 5, we report our numerical results. Finally, we give some conclusions in Section 6.
2 Preliminaries
In this section, we present some basic definitions and properties that will be used in the subsequent sections.
Definition 2.1
The following lemma gives some properties of H and its corresponding Jacobian.
Lemma 2.1
 (i)
\(H(\mu,x,\lambda,z)\) is continuously differentiable on \({\mathcal {R}}_{+}\times{\mathcal {R}}^{n}\times{\mathcal {R}}^{m}\times{\mathcal {R}}^{m}\).
 (ii)
\(\nabla H(\mu^{*},x^{*},\lambda^{*},z^{*})\) is nonsingular, where
$$\begin{aligned} &\nabla{ H}(\mu,x,\lambda,z)= \begin{pmatrix} 1&0&0&0\\ 0 &\nabla F(x)-\nabla^{2}g(x)^{\top}\lambda& -\nabla g(x)^{\top}& 0\\ 0 &\nabla g(x)& 0 &I\\ D_{\mu}& 0 &D_{\lambda}& D_{z} \end{pmatrix}, \\ & D_{\mu}=\operatorname{vec} \biggl(-\frac{\mu}{ \sqrt{\lambda_{i}^{2}+z_{i}^{2}+\mu^{2}}} \biggr), \\ & D_{\lambda}=\operatorname{diag} \biggl(1-\frac{\lambda_{i}}{ \sqrt{\lambda_{i}^{2}+z_{i}^{2}+\mu^{2}}} \biggr)\quad (i=1, \ldots, m), \\ & D_{z}=\operatorname{diag} \biggl(1-\frac{z_{i}}{ \sqrt{\lambda_{i}^{2}+z_{i}^{2}+\mu^{2}}} \biggr)\quad (i=1, \ldots, m) \quad \textit{and} \\ & \nabla^{2} g(x)^{\top}\lambda= \sum^{m}_{j=1}\nabla^{2} g_{j}(x)^{\top}\lambda_{j}. \end{aligned}$$(10)
Proof
It is not hard to show that H is continuously differentiable on \({\mathcal {R}}_{++}\times{\mathcal {R}}^{n}\times{\mathcal {R}}^{m}\times{\mathcal {R}}^{m}\). From \(H(\mu^{*},x^{*},\lambda^{*},z^{*})=0\) and (7), we immediately get \(\mu^{*}=0\). Since \((\lambda^{*},z^{*})\) satisfies the strict complementarity condition, i.e., \(\lambda_{i}^{*}\) and \(z_{i}^{*}\) are not both equal to 0, H is also continuously differentiable at \((\mu^{*},x^{*},\lambda^{*},z^{*})\). That is, (i) holds.
Meanwhile, because F is strongly monotone, \(\nabla F(x^{*})\) is a positive definite matrix. Besides, since g is concave and \(\lambda^{*}\) is nonnegative, \(\nabla^{2} g(x^{*})^{\top}\lambda^{*} = \sum^{m}_{j=1}\nabla^{2} g_{j}(x^{*})^{\top}\lambda^{*}_{j}\) is negative semidefinite, which implies that \(\nabla F(x^{*})-\nabla^{2} g(x^{*})^{\top}\lambda^{*}\) is a positive definite matrix. So we get \(q_{2}=0\) by (18).
Substituting \(q_{2}=0\) into (13) and using the fact that the rows of \(\nabla g(x^{*})\) are linearly independent, we get \(q_{3}=0\). Substituting \(q_{2}=0\) into (14), we get \(q_{4}=0\). Hence \(q=0\), which implies that \(\nabla H(\mu^{*},x^{*},\lambda^{*},z^{*})\) is nonsingular. This completes the proof. □
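Since equation (9) itself is not reproduced in this excerpt, the following Python sketch assumes the standard smoothed Fischer-Burmeister function \(\phi(\mu,a,b)=a+b-\sqrt{a^{2}+b^{2}+\mu^{2}}\), a form consistent with the entries \(D_{\lambda}\) and \(D_{z}\) in (10); the sample points are illustrative only.

```python
import numpy as np

def phi(mu, a, b):
    """Smoothed Fischer-Burmeister function (assumed form of (9)).
    At mu = 0 it reduces to the FB function, whose zeros are exactly
    the complementary pairs a >= 0, b >= 0, a*b = 0."""
    return a + b - np.sqrt(a**2 + b**2 + mu**2)

# Partial derivatives for mu > 0, matching D_lambda and D_z in (10).
def dphi_da(mu, a, b):
    return 1.0 - a / np.sqrt(a**2 + b**2 + mu**2)

def dphi_db(mu, a, b):
    return 1.0 - b / np.sqrt(a**2 + b**2 + mu**2)

# phi vanishes on a complementary pair when mu = 0 ...
val_comp = phi(0.0, 2.0, 0.0)   # a*b = 0 with a, b >= 0  ->  phi = 0
# ... and stays smooth for mu > 0 even at the FB kink (a, b) = (0, 0).
val_kink = phi(0.1, 0.0, 0.0)   # equals -mu

# Finite-difference check of dphi_da at a generic point.
mu0, a0, b0, h = 0.5, 0.3, -0.7, 1e-6
fd = (phi(mu0, a0 + h, b0) - phi(mu0, a0 - h, b0)) / (2 * h)
```

For \(\mu>0\) the square root never vanishes, which is why H in Lemma 2.1 is continuously differentiable on \({\mathcal {R}}_{++}\times{\mathcal {R}}^{n}\times{\mathcal {R}}^{m}\times{\mathcal {R}}^{m}\).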
3 The inexact algorithm and its convergence
We are now in a position to describe our smoothing inexact Newton method formally, using the smoothed Fischer-Burmeister function (9) to solve variational inequalities with nonlinear constraints. We also show that this method is locally quadratically convergent.
Algorithm 3.1
Inexact Newton method
 Step 0.:

Let \(w=(x,\lambda,z)\) and let \((\mu^{0},w^{0})\in{\mathcal {R}}_{++}\times{\mathcal {R}}^{n+2m}\) be an arbitrary point. Choose \(\gamma\in(0,1)\) such that \(\gamma\mu^{0}<\frac{1}{2}\).
 Step 1.:

If \(\Vert H(\mu^{k},w^{k}) \Vert ^{2}=0\), then stop.
 Step 2.:

Compute \((\Delta\mu^{k},\Delta w^{k})\) by
$$ H \bigl(\mu^{k},w^{k} \bigr) + \nabla{H} \bigl( \mu^{k},w^{k} \bigr) \begin{pmatrix}\Delta\mu^{k}\\ \Delta w^{k} \end{pmatrix} = \begin{pmatrix}\rho_{k}\mu^{0}\\ r^{k} \end{pmatrix}, $$(19)
where \(\rho_{k}=\rho(\mu^{k},w^{k}):=\gamma\min\{1, \Vert H(\mu^{k},w^{k}) \Vert ^{2}\}\) and \(r^{k} \in{\mathcal {R}}^{n+2m}\) satisfies \(\Vert r^{k} \Vert \leq\rho_{k}\mu^{0}\).
 Step 3.:

Set \(\mu^{k+1}=\mu^{k}+\Delta\mu^{k}\) and \(w^{k+1}=w^{k}+\Delta w^{k}\). Set \(k:=k+1\) and go to Step 1.
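To make the iteration concrete, here is a minimal Python sketch of Algorithm 3.1 on a one-dimensional toy problem of our own choosing (\(F(x)=x-1\), \(g(x)=x\ge0\), with solution \(x^{*}=1\), \(\lambda^{*}=0\), \(z^{*}=1\)). The stacking of H follows our reading of (7), the inner linear system is solved exactly (\(r^{k}=0\)), and a finite-difference Jacobian stands in for the analytic \(\nabla H\) of (10).

```python
import numpy as np

def H(u):
    """Assumed stacking of (7): (mu; F(x)-grad g(x)*lambda; g(x)-z; phi)."""
    mu, x, lam, z = u
    return np.array([
        mu,
        (x - 1.0) - lam,                           # F(x) - grad g(x)^T lambda
        x - z,                                      # g(x) - z
        lam + z - np.sqrt(lam**2 + z**2 + mu**2),   # smoothed FB pair
    ])

def jac(u, h=1e-7):
    """Forward-difference Jacobian; the paper uses the analytic (10)."""
    J = np.empty((4, 4))
    f0 = H(u)
    for j in range(4):
        e = np.zeros(4); e[j] = h
        J[:, j] = (H(u + e) - f0) / h
    return J

gamma, mu0 = 0.001, 0.2
u = np.array([mu0, 0.0, 0.0, 0.0])
for k in range(50):
    Hk = H(u)
    if np.dot(Hk, Hk) < 1e-16:
        break                                     # Step 1 termination
    rho = gamma * min(1.0, np.dot(Hk, Hk))
    rhs = np.array([rho * mu0, 0.0, 0.0, 0.0])    # exact inner solve: r^k = 0
    du = np.linalg.solve(jac(u), rhs - Hk)        # Newton equation (19)
    u = u + du                                    # Step 3

x_final = u[1]
```

Note that the first row of (19) forces \(\mu^{k+1}=\rho_{k}\mu^{0}>0\), so the smoothing parameter stays positive while shrinking with \(\Vert H\Vert^{2}\).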
Remark 1
 (1)
In theory, we use \(\Vert H(\mu^{k},w^{k}) \Vert ^{2}=0\) as the termination criterion of Algorithm 3.1. In practice, we use \(\Vert H(\mu^{k},w^{k}) \Vert ^{2}\leq\varepsilon\), where ε is a preset tolerance.
 (2)
It is obvious that we have \(\rho_{k}\leq\gamma \Vert H(\mu^{k},w^{k}) \Vert ^{2}\).
Now, we are ready to analyze the convergence. The quadratic convergence of Algorithm 3.1 is given below.
Theorem 3.1
 (1)
There exists a set \(D\subset{\mathcal {R}}_{+}\times{\mathcal {R}}^{n+2m}\) which contains \((\mu^{*},w^{*})\) such that for any \((\mu ^{0},w^{0})\in D\), the iterate points \((\mu^{k},w^{k})\) generated by Algorithm 3.1 are well defined, remain in D and converge to \((\mu^{*},w^{*})\);
 (2)
$$ \bigl\Vert \bigl(\mu^{k+1}-\mu^{*},w^{k+1}-w^{*} \bigr) \bigr\Vert \leq \beta \bigl\Vert \bigl(\mu^{k}-\mu ^{*},w^{k}-w^{*} \bigr) \bigr\Vert ^{2}, $$(20)
where \(\beta:= (L+16\gamma\mu^{0} \Vert \nabla H(\mu^{*},w^{*}) \Vert ^{2}) \Vert \nabla H(\mu^{*},w^{*})^{-1} \Vert \).
Proof
Following Theorem 5.2.1 in [17] and using Lemma 2.1, we give the proof in detail.
Besides, for any \(t \in[0,1]\), we have
\(u^{*}+t(u-u^{*})\in N(u^{*},\bar{t})\) and \(H(u)-H(u^{*})=\int_{0}^{1}\nabla H[u^{*}+t(u-u^{*})](u-u^{*})\,dt\).
According to the definition of β and the condition of \(\bar{t}<\frac{1}{\beta}\), we get that \(u^{k}\) converges to \(u^{*}\). Besides, (20) also holds. This completes the proof. □
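Estimate (20) says the error is eventually squared at each step. A quick numerical illustration of this behavior, on a scalar stand-in problem rather than the variational inequality itself (so the constant differs from the β of Theorem 3.1), shows the ratios \(e_{k+1}/e_{k}^{2}\) staying bounded:

```python
import math

# Newton's method on f(x) = x^2 - 2; the error obeys e_{k+1} <= beta * e_k^2,
# mirroring the quadratic-convergence estimate (20).
root = math.sqrt(2.0)
x = 2.0
errors = []
for _ in range(6):
    errors.append(abs(x - root))
    x = x - (x * x - 2.0) / (2.0 * x)   # Newton step

# Ratios e_{k+1} / e_k^2 stay bounded (they approach 1/(2*sqrt(2)) here).
ratios = [errors[k + 1] / errors[k] ** 2 for k in range(4) if errors[k] > 0]
```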
4 The global inexact algorithm and its convergence
Algorithm 4.1
Global inexact Newton method
 Step 0.:

Choose \((\mu^{0},w^{0})\in{\mathcal {R}}_{++}\times {\mathcal {R}}^{n+2m}\) to be an arbitrary point. Choose \(\gamma\in(0,1)\) such that \(\gamma \mu^{0}<\frac{1}{2}\). Choose \(\bar{\rho}\in(0,0.5), \bar{\sigma} \in(\bar{\rho},1), \delta\in(0,1)\).
 Step 1.:

If \(\Vert H(\mu^{k},w^{k}) \Vert ^{2}=0\), then stop.
 Step 2.:

Find \((\Delta\mu^{k},\Delta w^{k})\) by solving (19). If (28) is not satisfied, then choose \(\tau_{k}\) and compute
$$ \bigl(\Delta\mu^{k}, \Delta w^{k} \bigr)= - \bigl(\nabla H \bigl(\mu^{k},w^{k} \bigr)^{\top}{ \nabla H \bigl(\mu^{k},w^{k} \bigr)}+\tau_{k}I \bigr)^{-1}\nabla h \bigl(\mu^{k},w^{k} \bigr), $$(32)
such that (28) is satisfied.
Remark 2
In Step 2, if (28) is not satisfied, then the technique in [18], pp. 264-265, is used to choose \(\tau_{k}\). From Lemma 3.1 in [18], it is not difficult to find \(t^{k}\) satisfying (29)-(31).
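The fallback step (32) is a Levenberg-Marquardt step on the merit function \(h=\Vert H\Vert^{2}\) (whose precise definition, like test (28), is not shown in this excerpt). The sketch below uses a toy H of our own choosing to verify the key property: since \(\nabla H^{\top}\nabla H+\tau_{k}I\) is positive definite, the direction is a descent direction for h.

```python
import numpy as np

# Toy residual map and its analytic Jacobian (our choice, for illustration).
def H(u):
    return np.array([u[0] ** 2 - 1.0, u[0] * u[1] - 2.0])

def jac(u):
    return np.array([[2.0 * u[0], 0.0], [u[1], u[0]]])

def h(u):
    return float(H(u) @ H(u))       # merit function h = ||H||^2

u = np.array([3.0, -1.0])
J, Hu = jac(u), H(u)
grad_h = 2.0 * J.T @ Hu             # gradient of h
tau = 0.5                           # damping parameter tau_k > 0
# Levenberg-Marquardt direction, as in (32):
d = -np.linalg.solve(J.T @ J + tau * np.eye(2), grad_h)

# (J^T J + tau I) is positive definite, so grad_h @ d < 0: descent for h.
slope = float(grad_h @ d)
```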
In order to obtain the global convergence of Algorithm 4.1, throughout the rest of this paper, we define the level set \(\mathcal{L}(\mu^{0},w^{0})=\{(\mu,w): h(\mu,w)\leq h(\mu ^{0},w^{0})\}\) for \((\mu^{0},w^{0})\in{\mathcal {R}}_{++}\times{\mathcal {R}}^{n+2m}\).
Theorem 4.1
Theorem 4.2
Let \((\mu^{0},w^{0})\in{\mathcal {R}}_{++}\times{\mathcal {R}}^{n+2m}\), and let \(H(\mu,x,\lambda,z)\) be defined by (7). Assume that \(\nabla H(\mu,w)\) is Lipschitz continuous on \(\mathcal {L}(\mu^{0},w^{0})\), that \(t^{k}=1\) is admissible and (28) is satisfied for all \(k\geq k_{0}\), where \(k_{0}\) is sufficiently large, that \(\nabla H(\mu^{*},w^{*})\) is nonsingular, and that \((\mu^{*},w^{*})\) is a limit point of the sequence \(\{(\mu^{k},w^{k})\}\) generated by Algorithm 4.1. Then \(\{(\mu^{k},w^{k})\}\) converges to \((\mu^{*},w^{*})\) quadratically.
Proof
By assumption, there exists \(k_{0}\) such that \(t^{k}=1\) is admissible and (28) is satisfied for all \(k\geq k_{0}\); hence \(\{(\mu^{k},w^{k})\}\) is generated by Algorithm 3.1 for \(k> k_{0}\). The conclusion then follows directly from Theorem 3.1. This completes the proof. □
5 Numerical results
In this section, we present some numerical results for Algorithm 4.1. All codes are written in Matlab and run on an Intel Core i5-3210M personal computer. In the algorithm, we choose \(\gamma=0.001\), and we use \(\Vert H(\mu^{k},w^{k}) \Vert \leq10^{-5}\) as the stopping rule for all examples.
It is not easy to find proper test examples for the variational inequalities with nonlinear constraints. Hence, we modify some test examples in references and solve them by Algorithm 4.1.
Example 5.1
(see [19])
Example 5.2
The solution of Example 5.2 is \(x^{*}=(0,1,2,1)\). The initial point is \(x^{0}=(0,0,0,0)\) and \(\mu^{0}=0.2\).
Numerical results for Example 5.1 with \(\pmb{x^{0}=(0,1,1)}\)
k  \(\boldsymbol {x_{1}} \)  \(\boldsymbol {x_{2}} \)  \(\boldsymbol {x_{3}} \)  \(\boldsymbol {\Vert H(\mu^{k},w^{k}) \Vert } \) 
1  1.6663  0.1527  0.7730  4.0941 
2  1.6460  0.3416  0.0000  2.9087 
3  1.6431  0.3212  0.0000  1.7601 
4  1.2659  0.3957  0.0000  0.8018 
5  1.0386  0.1050  0.0007  0.2139 
6  1.0019  0.0174  0.0008  0.0301 
7  1.0021  0.0005  0.0000  0.0043 
8  1.0003  0.0000  0.0000  7.2097e-04 
9  1.0000  0  0  1.8416e-05 
10  1.0000  0.0000  0.0000  9.6147e-06 
Numerical results for Example 5.2 with \(\pmb{x^{0}=(0,0,0,0)}\)
k  \(\boldsymbol {x_{1}} \)  \(\boldsymbol {x_{2}} \)  \(\boldsymbol {x_{3}} \)  \(\boldsymbol {x_{4}} \)  \(\boldsymbol {\Vert H(\mu^{k},w^{k}) \Vert } \) 
1  0.3330  1.0661  4.0792  −4.3984  0.7137 
2  3.7727  3.2371  5.7749  −2.3704  0.4308 
3  1.8029  2.4677  2.9939  −1.3529  0.1151 
4  0.6067  1.7571  2.2497  −0.9003  0.0399 
5  0.4095  1.1936  2.1080  −0.7000  0.0076 
6  0.0093  1.0500  2.1182  −0.8442  0.0045 
7  0.0009  0.9994  2.0014  −0.9991  7.8889e-04 
8  0.0002  1.0000  2.0005  −1.0000  3.2619e-05 
9  0.0000  1.0000  2.0000  −1.0000  1.3066e-06 
In the following tests, we solve the linear systems for Δw by using the GMRES(m) package with \(m=10\), allowing a maximum of 100 cycles (2,000 iterations), and we choose \(\mu^{0}\) as a random number in \((0,5)\).
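The paper relies on an off-the-shelf GMRES(m) package; for readers without one, here is a bare-bones restarted GMRES(m) in Python (numpy only) with \(m=10\) and a cap of 100 cycles, applied to a random well-conditioned system of our own making.

```python
import numpy as np

def gmres_restarted(A, b, m=10, max_cycles=100, tol=1e-10):
    """Plain restarted GMRES(m) via the Arnoldi process (numpy only)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_cycles):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        Q = np.zeros((n, m + 1)); Hm = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        k = m
        for j in range(m):                      # Arnoldi iteration
            w = A @ Q[:, j]
            for i in range(j + 1):
                Hm[i, j] = Q[:, i] @ w
                w -= Hm[i, j] * Q[:, i]
            Hm[j + 1, j] = np.linalg.norm(w)
            if Hm[j + 1, j] < 1e-14:            # lucky breakdown
                k = j + 1
                break
            Q[:, j + 1] = w / Hm[j + 1, j]
        # Least-squares solve of the small Hessenberg system, then update x.
        e1 = np.zeros(k + 1); e1[0] = beta
        y, *_ = np.linalg.lstsq(Hm[:k + 1, :k], e1, rcond=None)
        x = x + Q[:, :k] @ y
    return x

rng = np.random.default_rng(0)
n = 30
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well conditioned
b = rng.standard_normal(n)
x = gmres_restarted(A, b)
residual = np.linalg.norm(A @ x - b)
```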
Example 5.3
Example 5.4
This example is an NCP with \(F(x)=D(x)+Mx+q\). The components of \(D(x)\) are \(D_{j}(x)=d_{j}\cdot \operatorname{arctan}(x_{j})\), where \(d_{j}\) is a random variable in \((0,5)\). The matrix \(M=A^{\top}A+B\), where A is an \(n\times n\) matrix whose entries are randomly generated in the interval \((-5,5)\), and the skew-symmetric matrix B is generated in the same way. The vector q is generated from a uniform distribution in the interval \((-50,50)\).
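The construction of Example 5.4 can be sketched as follows (the dimension n, the random seed, and the way skew-symmetry is realized are our choices). Writing \(M=A^{\top}A+B\) with B skew-symmetric gives \(x^{\top}Mx=\Vert Ax\Vert^{2}\geq0\), so F is monotone; the check at the end verifies this numerically.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20
A = rng.uniform(-5.0, 5.0, size=(n, n))
S = rng.uniform(-5.0, 5.0, size=(n, n))
B = S - S.T                       # one way to build a skew-symmetric B
M = A.T @ A + B                   # symmetric part A^T A is PSD
q = rng.uniform(-50.0, 50.0, size=n)
d = rng.uniform(0.0, 5.0, size=n)

def F(x):
    return d * np.arctan(x) + M @ x + q

# Monotonicity: (x-y)^T (F(x)-F(y)) >= 0, since arctan is nondecreasing
# and x^T M x = ||A x||^2 >= 0 (the skew part contributes nothing).
x, y = rng.standard_normal(n), rng.standard_normal(n)
mono = float((x - y) @ (F(x) - F(y)))
quad = float(min(v @ M @ v for v in rng.standard_normal((5, n))))
```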
6 Conclusions
Based on the framework of smoothing Newton methods, we have proposed a new smoothing inexact Newton algorithm for variational inequalities with nonlinear constraints. Under mild conditions, we established global and local quadratic convergence. We also presented preliminary numerical results that show the efficiency of the algorithm.
Declarations
Acknowledgements
The first author is supported by Funding of Jiangsu Innovation Program for Graduate Education (KYZZ15_0087) and the Fundamental Research Funds for the Central Universities. The second author is supported by NSFC (11471159; 11571169; 61661136001) and the Natural Science Foundation of Jiangsu Province (BK20141409).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Ferris, MC, Pang, JS: Engineering and economic applications of complementarity problems. SIAM Rev. 39, 669-713 (1997)
 Fischer, A: Solution of monotone complementarity problems with locally Lipschitzian functions. Math. Program. 76, 513-532 (1997)
 Li, YY, Santosa, F: A computational algorithm for minimizing total variation in image restoration. IEEE Trans. Image Process. 5, 987-995 (1996)
 Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Volumes I and II. Springer, Berlin (2003)
 Chen, BT, Harker, PT: Smooth approximations to nonlinear complementarity problems. SIAM J. Optim. 7, 403-420 (1997)
 Chen, BT, Xiu, NH: A global linear and local quadratic non-interior continuation method for nonlinear complementarity problems based on Mangasarian smoothing functions. SIAM J. Optim. 9, 605-623 (1999)
 Qi, LQ, Sun, DF, Zhou, GL: A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities. Math. Program. 87, 1-35 (2000)
 Qi, LQ, Sun, DF: Improving the convergence of non-interior point algorithms for nonlinear complementarity problems. Math. Comput. 229, 283-304 (2000)
 Tseng, P: Error Bounds and Superlinear Convergence Analysis of Some Newton-Type Methods in Optimization. Nonlinear Optimization and Related Topics. Springer, Berlin (2000)
 Ma, CF, Chen, XH: The convergence of a one-step smoothing Newton method for P0-NCP based on a new smoothing NCP-function. Comput. Appl. Math. 216, 1-13 (2008)
 Zhang, J, Zhang, KC: A variant smoothing Newton method for P0-NCP based on a new smoothing function. J. Comput. Appl. Math. 225, 1-8 (2009)
 Rui, SP, Xu, CX: A smoothing inexact Newton method for nonlinear complementarity problems. J. Comput. Appl. Math. 233, 2332-2338 (2010)
 Chen, XJ, Qi, LQ, Sun, DF: Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities. Math. Comput. 67, 519-540 (1998)
 Rui, SP, Xu, CX: A globally and locally superlinearly convergent inexact Newton-GMRES method for large-scale variational inequality problem. Int. J. Comput. Math. 3, 578-587 (2014)
 Toyasaki, F, Daniele, P, Wakolbinger, T: A variational inequality formulation of equilibrium models for end-of-life products with nonlinear constraints. Eur. J. Oper. Res. 236, 340-350 (2014)
 Ng, MK, Weiss, P, Yuan, XM: Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods. SIAM J. Sci. Comput. 32, 2710-2736 (2010)
 Dennis, JE Jr, Schnabel, RB: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice Hall, Philadelphia (1983)
 Nocedal, J, Wright, SJ: Numerical Optimization, 2nd edn. Springer, New York (1999)
 Fukushima, M: A relaxed projection method for variational inequalities. Math. Program. 35, 58-70 (1986)
 Charalambous, C: Nonlinear least pth optimization and nonlinear programming. Math. Program. 12, 195-225 (1977)
 Ahn, BH: Iterative methods for linear complementarity problems with upper bounds on primary variables. Math. Program. 26, 295-315 (1983)