A smoothing inexact Newton method for variational inequalities with nonlinear constraints

In this paper, we propose a smoothing inexact Newton method for solving variational inequalities with nonlinear constraints. Based on the smoothed Fischer-Burmeister function, the variational inequality problem is reformulated as a system of parameterized smooth equations, and the linear system arising at each iteration is solved only approximately. Under some mild conditions, we establish global convergence and local quadratic convergence. Numerical results show that the method is effective.


Introduction
We consider the variational inequality problem (VI for short), which is to find a vector x * ∈ such that where is a nonempty, closed and convex subset of R n and F is a continuously differentiable mapping from R n into R n . In this paper, without loss of generality, we assume that where g : R n → R m and g i : R n → R (i ∈ I = {1, 2, . . . , m}) are twice continuously differentiable concave functions. When = R n + , the VI reduces to the nonlinear complementarity problem (NCP for short). Variational inequalities have important applications in mathematical programming, economics, signal processing, transportation and structural analysis [-]. Consequently, a variety of numerical methods have been studied by many researchers; see, e.g., [].
A popular way to solve VI( , F) is to reformulate () as a nonsmooth equation via the KKT system of the variational inequality and an NCP-function. It is well known that the KKT system of VI( , F) can be written as follows: and the NCP-function φ(a, b) is characterized by the following condition: Then problem () and () is equivalent to the following nonsmooth equation: Hence, problem () and () can be translated into (). The smoothing method is a fundamental approach to solving the nonsmooth equation (). Recently, there has been strong interest in smoothing Newton methods for solving NCPs [-]. The idea of these methods is to construct a smooth function that approximates φ(λ, z). In the past few years, many different smoothing functions have been employed to smooth equation (). Here, we define It follows from equations ()-() that H(μ, x, λ, z) = 0 is equivalent to μ = 0 together with (x, λ, z) being a solution of (). Thus, we may solve the system of smoothed equations H(μ, x, λ, z) = 0 and reduce μ to zero gradually as the iteration proceeds. Variational inequalities with nonlinear constraints are particularly attractive in practice; they have wide applications in economic networks [], image restoration [, ] and so on. Therefore, in this paper, within the framework of smoothing Newton methods, we propose a new inexact Newton method for solving VI( , F) with nonlinear constraints, which broadens the class of admissible constraints. We also prove global and local quadratic convergence and present numerical results that show the efficiency of the proposed method.
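The exact smoothing function used above did not survive typesetting, but a widely used smoothed Fischer-Burmeister function is φ μ (a, b) = a + b − sqrt(a² + b² + 4μ²), which is smooth for every μ > 0 and recovers the nonsmooth FB function as μ → 0. The following minimal sketch illustrates that behavior (the paper's experiments use Matlab; Python and the 4μ² variant are assumptions here):

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister NCP-function: fb(a, b) = 0 iff
    a >= 0, b >= 0 and a*b = 0."""
    return a + b - np.sqrt(a**2 + b**2)

def fb_smooth(mu, a, b):
    """A common smoothed variant (the 4*mu**2 term is an assumption;
    the paper's exact formula was lost). Smooth for mu > 0, and it
    reduces to fb as mu -> 0."""
    return a + b - np.sqrt(a**2 + b**2 + 4.0 * mu**2)

# The kink of fb at (0, 0) is smoothed out; the gap |fb_smooth - fb|
# is at most 2*mu, so it vanishes as the parameter is driven to zero.
for mu in (1.0, 1e-2, 1e-6):
    print(mu, abs(fb_smooth(mu, 0.0, 0.0) - fb(0.0, 0.0)))
```

The uniform bound |φ μ − φ| ≤ 2μ is what lets the method reduce μ gradually while iterating, as described above.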
Throughout this paper, we always assume that the solution set of problem () and (), denoted by * , is nonempty. R + and R ++ denote the nonnegative and positive reals, respectively. The symbol ‖·‖ stands for the 2-norm.
The rest of this paper is organized as follows. In Section 2, we summarize some useful properties and definitions. In Section 3, we describe the inexact Newton method formally and then prove its local quadratic convergence. We establish global convergence in Section 4. In Section 5, we report our numerical results. Finally, we give some conclusions in Section 6.

Preliminaries
In this section, we recall some basic definitions and properties that will be used in the subsequent sections.
F is strongly monotone with modulus μ > 0 if, for any u, v ∈ R n , F is Lipschitz continuous with constant L > 0 if, for any u, v ∈ R n , The following lemma gives some properties of H and its corresponding Jacobian.
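For an affine map F(x) = Mx + q these two constants are explicit: the modulus of strong monotonicity is the smallest eigenvalue of the symmetric part of M, and the Lipschitz constant is the spectral norm of M. A quick numerical check (an illustration; the matrices below are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = A.T @ A + n * np.eye(n)     # symmetric part is positive definite
q = rng.standard_normal(n)
F = lambda x: M @ x + q

# <F(u)-F(v), u-v> = (u-v)' M (u-v) >= mu ||u-v||^2, where mu is the
# smallest eigenvalue of (M + M')/2, and ||F(u)-F(v)|| <= L ||u-v||
# with L = ||M||_2 (the spectral norm).
mu = np.linalg.eigvalsh(0.5 * (M + M.T)).min()
L = np.linalg.norm(M, 2)

u, v = rng.standard_normal(n), rng.standard_normal(n)
diff = u - v
assert (F(u) - F(v)) @ diff >= mu * (diff @ diff) - 1e-8
assert np.linalg.norm(F(u) - F(v)) <= L * np.linalg.norm(diff) + 1e-8
```

For nonlinear F the same inequalities must hold for all pairs u, v, which is what the convergence analysis below assumes.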
Lemma . Let H(μ, x, λ, z) be defined by (). Assume that F is continuously differentiable and strongly monotone, g is twice continuously differentiable concave, (μ * , x * , λ * , z * ) in R + × R n × R m × R m is the solution of H(μ, x, λ, z) = , the rows of ∇g(x * ) are linearly independent and (λ * , z * ) satisfies the strict complementarity condition. Then . . , m) and where We can observe q  =  easily by (). Next, we discuss formula (). The full form of () can be described as follows: According to the strict complementarity condition of (λ * , z * ), we have From () we get that q i =  and q i q i = . Similarly, if λ * i = , then z * i > . We get that q i =  and q i q i = . Hence, q  q  = .
Multiplying the equation of () by q  on the left-hand side and using q  q  = , we have Multiplying the equation of () by q  on the left-hand side and using (), we have Meanwhile, because F is strongly monotone, we have that ∇F(x * ) is a positive definite matrix. Besides, since g is concave and λ * is nonnegative, we have that Substituting q  =  into () and using the rows of ∇g(x * ) are linearly independent, we get q  = . Substituting q  =  into (), we get q  = . Hence we have q = , which implies that ∇H(μ * , x * , λ * , z * ) is nonsingular. This completes the proof.

The inexact algorithm and its convergence
We are now in a position to describe our smoothing inexact Newton method formally, using the smoothed Fischer-Burmeister function () to solve variational inequalities with nonlinear constraints. We also show that this method has local quadratic convergence.
Step . Compute ( μ k , w k ) by Step . Set μ k+ = μ k + μ k and w k+ = w k + w k . Set k := k +  and go to Step .

Remark 
(1) In theory, we use ‖H(μ k , w k )‖ = 0 as the termination criterion of Algorithm . . In practice, we use ‖H(μ k , w k )‖ ≤ ε as the termination rule, where ε is a preset tolerance. (2) It is obvious that ρ k ≤ γ ‖H(μ k , w k )‖ ² . (3) From () and (), we have μ k+1 = ρ k μ 0 > 0 for any k ≥ 0. Now, we are ready to analyze the convergence. The quadratic convergence of Algorithm . is given below.
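To make the scheme concrete, the sketch below applies the smoothing inexact Newton idea to the NCP special case ( = R n + ): the smoothed Fischer-Burmeister residual is driven to zero while μ is reduced geometrically, and each Newton system is solved only approximately, with a residual tolerance proportional to the current residual norm, in the spirit of ρ k ≤ γ ‖H(μ k , w k )‖ ² . The smoothing variant (4μ² term), the μ-update, the inner solver and all tolerances are illustrative assumptions, not the paper's exact Algorithm:

```python
import numpy as np

def inexact_solve(J, rhs, eta, max_iter=5000):
    """Crude residual-controlled solver for J d = rhs: steepest descent
    on 0.5*||J d - rhs||^2, stopped once ||J d - rhs|| <= eta*||rhs||.
    The early stopping rule is the 'inexact' part of the Newton step."""
    d = np.zeros_like(rhs)
    for _ in range(max_iter):
        r = rhs - J @ d
        if np.linalg.norm(r) <= eta * np.linalg.norm(rhs):
            break
        g = J.T @ r                      # descent direction for the residual
        Jg = J @ g
        d = d + (g @ g) / (Jg @ Jg) * g  # exact line search along g
    return d

def smoothing_inexact_newton(M, q, x0, mu0=1.0, iters=60):
    """Smoothing inexact Newton sketch for the NCP x >= 0, Mx+q >= 0,
    x'(Mx+q) = 0, via the smoothed FB residual (4*mu^2 variant assumed)."""
    x, mu = x0.astype(float), mu0
    for _ in range(iters):
        Fx = M @ x + q
        s = np.sqrt(x**2 + Fx**2 + 4.0 * mu**2)
        Phi = x + Fx - s                            # smoothed FB residual
        J = np.diag(1.0 - x / s) + (1.0 - Fx / s)[:, None] * M
        eta = min(0.1, np.linalg.norm(Phi))         # forcing term ~ ||H||
        x = x + inexact_solve(J, -Phi, eta)
        mu *= 0.5                                   # drive smoothing to zero
    return x

M = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
q = np.array([-1.0, 3.0, -2.0])
x = smoothing_inexact_newton(M, q, np.zeros(3))
comp = np.abs(np.minimum(x, M @ x + q)).max()       # complementarity residual
```

For this 3 × 3 instance the complementarity solution can be computed by hand from the active set {2}: x * = (1/4, 0, 1/2), which the sketch reproduces to high accuracy.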
Theorem . Assume that (μ * , w * ) satisfies H(μ * , w * ) = . Suppose that H(μ, w) satisfies the condition of Lemma . and ∇H(μ, w) is Lipschitz continuous with the constant L. Then we have the following conclusions: () There exists a set D ⊂ R + × R n+m which contains (μ * , w * ) such that for any (μ  , w  ) ∈ D, the iterate points (μ k , w k ) generated by Algorithm . are well defined, remain in D and converge to (μ * , w * ); Proof According to Theorem .. in [] and Lemma ., we give the proof in detail. Denote According to Lemma ., we get that ∇H(u * ) is nonsingular. Then there exist a positive constantt <  β and a neighborhood N(u * ,t) of u * such that Lt ≤ ∇H(u * ) , and for any u ∈ N(u * ,t), we have that ∇H(u) is nonsingular and where the first inequality follows from the triangle inequality and the second inequality follows from the Lipschitz continuity. Hence we have for any u ∈ N(u * ,t). Similarly, by the perturbation relation (..) in [], we know that ∇H(u) is nonsingular and Besides, for any t ∈ [, ], we have According to Algorithm ., for any u k ∈ N(u * ,t), k ≥ , we have Taking norm of both sides, we get where the first inequality follows from the Lipschitz continuity, the second inequality follows from (), and the third inequality follows from (). According to the definition of β and the condition oft <  β , we get that u k converges to u * . Besides, () also holds. This completes the proof.

The global inexact algorithm and its convergence
Now, we construct our globally convergent method by incorporating a globalization technique into Algorithm . . We choose the merit function h(μ, w) = (1/2)‖H(μ, w)‖ ² and modify (Δμ k , Δw k ) such that We use a line search to find a step length t k ∈ (0, 1] such that and where ρ̄ ∈ (0, 0.5), σ̄ ∈ (ρ̄, 1), δ ∈ (0, 1).
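The step-length conditions above (whose displayed forms were lost) are of Armijo type on the merit function h. A minimal backtracking sketch, with a hypothetical residual H and with ρ̄ = 0.25, δ = 0.5 standing in for the paper's parameters (the σ̄-based second condition is omitted for brevity):

```python
import numpy as np

def backtracking(h, grad_h, x, d, rho=0.25, delta=0.5, max_halvings=50):
    """Shrink t by the factor delta until the Armijo sufficient-decrease
    condition h(x + t*d) <= h(x) + rho * t * grad_h(x)'d holds; d must be
    a descent direction, i.e. grad_h(x)'d < 0."""
    t, hx, slope = 1.0, h(x), grad_h(x) @ d
    for _ in range(max_halvings):
        if h(x + t * d) <= hx + rho * t * slope:
            break
        t *= delta
    return t

# Toy merit function h = 0.5*||H||^2 with a hypothetical H(x) = x^3 - 1.
H = lambda x: x**3 - 1.0
h = lambda x: 0.5 * np.dot(H(x), H(x))
grad_h = lambda x: 3.0 * x**2 * H(x)    # = J(x)' H(x) with J = diag(3 x^2)
x = np.array([2.0, -1.5])
d = -grad_h(x)                           # steepest-descent direction
t = backtracking(h, grad_h, x, d)        # accepted step length in (0, 1]
```

The full step t = 1 is rejected here (it overshoots badly), and a few halvings produce a step with a guaranteed decrease in the merit function, which is the mechanism behind the global convergence argument that follows.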
Step . Find ( μ k , w k ) by solving (). If () is not satisfied, then choose τ k and compute such that () is satisfied.

Remark  In
Step , if () is not satisfied, then the technique in [], pp.-, is used to choose τ k . From Lemma . in [], it is not difficult to find t k that can satisfy ()-().
In order to obtain the global convergence of Algorithm ., throughout the rest of this paper, we define the level set L(μ 0 , w 0 ).
Proof The proof follows Theorem . in [] and condition (), where ∇h(μ k , w k ) = ∇H(μ k , w k ) ᵀ H(μ k , w k ). That is, the sequence {(μ k , w k )} is convergent. Since ∇H(μ * , w * ) is nonsingular and (μ * , w * ) is a limit point of {(μ k , w k )} generated by Algorithm ., we have By the assumption that there exists k 0 such that t k = 1 is admissible and () is satisfied for all k ≥ k 0 , {(μ k , w k )} is generated by Algorithm . for k > k 0 . The conclusion then follows directly from Theorem . . This completes the proof.

Numerical results
In this section, we present some numerical results for Algorithm . . All codes are written in Matlab and run on a personal computer. In the algorithm, we choose γ = . . We also use ‖H(μ k , w k )‖ ≤ 10 −  as the stopping rule for all examples.
It is not easy to find suitable test examples for variational inequalities with nonlinear constraints. Hence, we modify some test examples from the references and solve them by Algorithm . .
It is easily verified that the problem has the solution x * = (, , ). The initial point is x 0 = (, , ) and μ 0 = . .
Example . This example is derived from []. Because the original problem is an optimization problem, we give its form of variational inequalities by the optimality condition, i.e., The solution of Example . is x * = (, , , -). The initial point is x  = (, , , ) and μ  = ..
In Tables -, 'k' means the number of iterations, ' H(μ k , w k ) ' means the -norm of H(μ k , w k ). From Tables -, we can observe that Algorithm . can find the solution in a smaller number of iterations for the above two examples. In order to further show the efficiency of Algorithm ., we give other two examples where the dimension of the problems is from  to ,.
In the following tests, we solve the linear systems for Δw by using the GMRES(m) package with m = , allowing a maximum of  cycles (, iterations). We choose μ 0 as a random number in (, ).
Example . We consider the problem with nonlinear constraints. The problem is derived from [] with different sizes. Based on the linear constraints of the original problem,  we also add some nonlinear constraints to the problem. In this example, Example . The example is the NCP. F(x) = D(x) + Mx + q. The components of D(x) are D j (x) = d j · arctan(x j ), where d j is a random variable in (, ). The matrix M = A A + B, where A is an n × n matrix whose entries are randomly generated in the interval (-, ), and the skew-symmetric matrix B is generated in the same way. The vector q is generated from a uniform distribution in the interval (-, ).
In Tables -, 'n' means the dimension of problems, 'No.it' means the number of iterations, 'CPU' means the cpu time in seconds. ' H(μ k , w k ) ' means the -norm of H(μ k , w k ). From Tables -, we find that Algorithm . is robust to the different sizes for these two problems. Moreover, the iterative number is insensitive to the size of problems in our algorithm. In other words, our algorithm is more effective for two problems.

Conclusions
Based on the framework of the smoothing Newton method, we propose a new smoothing inexact Newton algorithm for variational inequalities with nonlinear constraints. Under some mild conditions, we establish global and local quadratic convergence. Furthermore, we present some preliminary numerical results which show the efficiency of the algorithm.